Core Architecture Changes

Ivy Bridge is considered a tick from the CPU perspective but a tock from the GPU perspective. On the CPU core side that means you can expect clock-for-clock performance improvements in the 4 - 6% range. Despite the limited improvement in core-level performance, a lot of cleanup went into the design. To maintain a strict design schedule it's not uncommon for features to miss the cutoff for one design, only to be added in the subsequent product. Ticks are great for this.

Five years ago Intel introduced Conroe, which defined the high-level architecture for every generation since. Sandy Bridge was the first significant overhaul since Conroe, and even it didn't look very different from the original Core 2. Ivy Bridge continues the trend.

The front end in Ivy Bridge is still 4-wide with support for fusion of both x86 instructions and decoded uOps. The uOp cache introduced in Sandy Bridge remains in Ivy with no major changes.

Some structures within the chip are now better optimized for single threaded execution. Hyper Threading requires a bunch of partitioning of internal structures (e.g. buffers/queues) to allow instructions from multiple threads to use those structures simultaneously. In Sandy Bridge, many of those structures are statically partitioned. If you have a buffer that can hold 20 entries, each thread gets up to 10 entries in the buffer. In the event of a single threaded workload, half of the buffer goes unused. Ivy Bridge reworks a number of these data structures to dynamically allocate resources to threads. Now if there's only a single thread active, these structures will dedicate all resources to servicing that thread. One such example is the DSB queue that serves the uOp cache mentioned above. There's a lookup mechanism for putting uOps into the cache. Those requests are placed into the DSB queue, which used to be split evenly between threads. In Ivy Bridge the DSB queue is allocated dynamically to one or both threads.

In Sandy Bridge Intel did a ground-up redesign of its branch predictor. Once again it doesn't make sense to redo it for Ivy Bridge, so branch prediction remains the same. In the past, prefetchers have stopped at page boundaries since they operate on physical addresses, and the next physical page isn't necessarily related to the next virtual page. Ivy Bridge lifts this restriction.

The number of execution units hasn't changed in Ivy Bridge, but the units themselves have improved. The FP/integer divider sees another performance gain this round: Ivy Bridge's divider has twice the throughput of the unit in Sandy Bridge. The advantage shows up mostly in FP workloads, as they tend to be more computationally heavy.

MOV operations can now be handled in the register renaming stage instead of occupying an execution port. The x86 MOV instruction simply copies the contents of one register into another. In Ivy Bridge a MOV is executed by pointing the destination register at the physical register that already holds the source's value. This is enabled by the physical register file first introduced in Sandy Bridge, in addition to a whole lot of clever logic within IVB. Although MOVs still consume decode bandwidth, they no longer take up an execution port, allowing other instructions to execute in their place.

ISA Changes

Intel also introduced a number of ISA changes in Ivy Bridge. The ones that stand out the most to me are the inclusion of a very high speed digital random number generator (DRNG) and Supervisor Mode Execution Protection (SMEP).

Ivy Bridge's DRNG can generate high quality, standards-compliant random numbers at 2 - 3Gbps. The DRNG is available to both user and OS level code. This will be very important for security applications and algorithms going forward.

SMEP in Ivy Bridge provides hardware protection against user mode code being executed at higher privilege levels.

Comments

  • Hrel - Thursday, September 22, 2011 - link

    1: I said comparable, not competitive.

    2: I don't care about price. I make enough it doesn't matter. I just care about performance. At the same time, I don't waste money, so I don't buy Extreme Editions either. I buy whatever CPU has the best performance around 200 bucks.

    Point: At this point if AMD is even close (within 15%) I'm switching.
  • mino - Monday, September 26, 2011 - link

    If price doesn't matter, then you shouldn't bother with desktop stuff and should go directly for 2P workstations with ECC.

    Just a thought.
  • JKflipflop98 - Monday, October 24, 2011 - link

    Hindsight is 20/20 now.
  • Zoomer - Saturday, September 17, 2011 - link

    That stuff can, and imo should, be implemented in the filesystem.
  • Cr0nJ0b - Saturday, September 17, 2011 - link

    I'm wondering why they wouldn't just go with all USB 3.0 ports, since they're backward compatible with earlier USB versions. Maybe a licensing cost issue?
  • Zoomer - Saturday, September 17, 2011 - link

    Intel's platform is really a mess and a hodgepodge nowadays. Pity.
  • ggathagan - Saturday, September 17, 2011 - link

    There aren't enough PCIe lanes to allow for that kind of bandwidth.
  • DanNeely - Sunday, September 18, 2011 - link

    Along with the fact that USB3 controllers are larger and need more pins on the chip to connect. Those are the same reasons that AMD only has 4 USB3 ports on its most recent southbridges.
  • marcusj0015 - Saturday, September 17, 2011 - link

    Intel invented USB...

    so no, there are no licensing costs that I can think of.
  • Aone - Sunday, September 18, 2011 - link

    Is Ivy's Quick Sync in the same power-gated domain as the IGP, as on SB, or can Quick Sync and the IGP be switched on/off independently?
