The speed of the processor is not as important as the number of PCIe lanes the CPU supports.
PCIe uses serial pathways called "lanes" between the CPU (or chipset) and devices such as graphics cards and disk controllers.
Think of each lane as having an inbox and an outbox: it can send and receive at the same time (full duplex), and every lane talks to the rest of the system independently of the others.
PCIe slots come in several widths, conventionally written x1 through x16 (the spec defines widths up to x32, though you'll rarely see them). The higher the number, the more lanes that slot has.
For example, a PCIe x1 connector has one lane composed of four wires (one transmit pair and one receive pair); an x32 slot has 32 lanes with a total of 128 wires.
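To make that arithmetic concrete, here's a trivial sketch (Python, purely illustrative) of the lane-to-wire math:

```python
# Lane/wire arithmetic for PCIe slot widths.
# Each lane = 2 differential pairs (1 transmit + 1 receive) = 4 wires.
WIRES_PER_LANE = 4

for width in (1, 4, 8, 16, 32):  # x32 is in the spec but rarely built
    print(f"x{width}: {width} lane(s), {width * WIRES_PER_LANE} data wires")
```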
To use an analogy, imagine a FedEx depot with a single truck docking bay: every truck has to line up and load/unload one at a time, which would take days.
Now imagine a depot with 128 bays. You could load every single truck at the same time and still have plenty of bays to spare.
The more lanes a PCIe slot has, the faster it can communicate. However, assigning more lanes to one slot takes them away from other slots, and there's a limit to how many lanes any CPU can provide, set by its architecture.
Roughly 8 GB/s of bandwidth, either as a PCIe 2.0 x16 slot or a PCIe 3.0 x8 slot, is necessary to operate a Radeon 7970 at full load, so for two of them you need 16 GB/s.
A Sandy Bridge CPU provides 16 PCIe 2.0 lanes (about 8 GB/s total), whereas an Ivy Bridge CPU provides 16 PCIe 3.0 lanes (about 16 GB/s total); both can split those lanes into two x8 slots.
In other words, to run two Radeon 7970s at maximum potential under load, you need either two x16 slots at PCIe 2.0 speeds (which mainstream desktop CPUs don't have the 32 lanes for) or two x8 slots at PCIe 3.0 speeds, which means an Ivy Bridge CPU.
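A quick sanity check of those numbers, using the published per-lane data rates (PCIe 2.0 ≈ 500 MB/s per lane after 8b/10b encoding, PCIe 3.0 ≈ 985 MB/s per lane after 128b/130b encoding); the "8 GB/s per 7970" figure is the requirement asserted above, not something the code derives:

```python
# Usable bandwidth per direction, per lane (MB/s), after encoding overhead.
PER_LANE_MBS = {"PCIe 2.0": 500, "PCIe 3.0": 985}

def slot_gbs(gen: str, lanes: int) -> float:
    """Total slot bandwidth in GB/s for a given generation and lane count."""
    return PER_LANE_MBS[gen] * lanes / 1000

print(slot_gbs("PCIe 2.0", 16))     # ~8.0 GB/s  -- one 7970 per the claim above
print(slot_gbs("PCIe 3.0", 8))      # ~7.9 GB/s  -- roughly the same pipe
print(2 * slot_gbs("PCIe 3.0", 8))  # ~15.8 GB/s -- two cards at x8/x8 on Ivy Bridge
```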
Core i5-3570 CPUs are Ivy Bridge.
Core i5-2500 CPUs are Sandy Bridge.
Core i7-3770K CPUs are Ivy Bridge.
Core i7-2600K CPUs are Sandy Bridge.
Ivy Bridge processors are more expensive than Sandy Bridge CPUs because they are newer and because managing all those lanes of traffic takes a lot of fast L2 and L3 cache (both built directly onto the die in these chips). Memory fabricated directly onto the chip is extremely difficult and expensive to produce, and defects mean lots of bad parts that have to be thrown out, which drives up the cost of the good ones.
It should also be noted that if you are running your GPUs in a Crossfire/SLI configuration, 8 of those lanes get devoted to inter-device communication.
Mining software (e.g. DiabloMiner), however, addresses each GPU separately, dispatching work to the cards in chunks called jobs.
So you should switch Crossfire/SLI off before you begin mining.
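The "addresses each GPU separately" point looks roughly like this; a minimal Python sketch of the general dispatch pattern, not DiabloMiner's actual code (which is Java/OpenCL), and hash_job_on_gpu is a hypothetical stand-in for the real kernel launch:

```python
import queue
import threading

def hash_job_on_gpu(gpu_id: int, job: bytes) -> None:
    # Hypothetical stand-in for an OpenCL kernel launch on one card.
    pass

def gpu_worker(gpu_id: int, jobs: "queue.Queue[bytes | None]") -> None:
    # Each GPU is driven independently; no Crossfire/SLI coordination involved.
    while True:
        job = jobs.get()
        if job is None:  # sentinel: this worker is done
            break
        hash_job_on_gpu(gpu_id, job)

job_queue: "queue.Queue[bytes | None]" = queue.Queue()
workers = [threading.Thread(target=gpu_worker, args=(i, job_queue)) for i in range(2)]
for w in workers:
    w.start()
for job in (b"job-1", b"job-2", b"job-3"):
    job_queue.put(job)
for _ in workers:
    job_queue.put(None)  # one sentinel per worker
for w in workers:
    w.join()
```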
You typed a wall of text that is full of FAIL. How many GPUs do you have mining? I have over 40 7970s, each getting almost 700 MH/s, running off x1 extenders and controlled by a lowly Celeron G530.

I think you're confusing this with the bandwidth needed for applications like GAMING. We happen to be on a forum about BITCOIN MINING, which uses trivial amounts of bandwidth on the bus. Death and Taxes has posted all the math before in his threads, and had one of the nicest racks ever seen (haha, I'm so childish).
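To put rough numbers on "trivial" (back-of-envelope, round figures assumed):

```python
# Why mining barely touches the bus: one work unit is a ~128-byte job,
# and a 7970 chews through the whole 2^32 nonce space before needing more.
hashrate = 700e6      # ~700 MH/s per 7970, as quoted above
nonce_space = 2**32   # nonces per work unit
work_bytes = 128      # rough size of one dispatched job (header + midstate)

seconds_per_job = nonce_space / hashrate          # ~6.1 s per work unit
bus_bytes_per_sec = work_bytes / seconds_per_job  # ~21 bytes/s per GPU
print(f"{seconds_per_job:.1f} s/job, ~{bus_bytes_per_sec:.0f} bytes/s over the bus")
```

Even if the miner splits that into thousands of smaller kernel dispatches per second, you're talking kilobytes per second, not gigabytes.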
GPU mining is nearing its end, but there's no need to spread incorrect information (WRT mining, at least).