God Emperor Akhenaton
You are comparing the worst i7 to the best i5. And even then, they barely do better if that. Point being, he should get an i7. Because in the end, you are proving my point entirely. Better product means less planned obsolescence and I don't need a bought off review company to tell me that.
I was comparing products at similar price points. I can compare the i7-3770 to the i7-860 if you want me to; they have similar MSRPs as well. The story is the same though: the newer part outperforms the old by a decent margin.
I then pointed out that you can get an i5 THAT IS $100 CHEAPER and performs similarly to the latest-and-greatest i7. I also compared a Core i3 that is nearly $200 cheaper than a modern i7 yet manages to hit gaming performance parity with an older i7 - one whose MSRP matches the new i7's (as the OP's budget would presumably have been the same four years ago as it is now).
Why the ******** are you bitter that modern i5s are beating older i7s? Isn't that just expected?
The point I was making is that the i5s and i7s are SO CLOSE IN PERFORMANCE that - on a budget - you're better off spending that $100 on graphics. What the ******** is hard to understand about that?
And Anandtech is a "bought off" review site? They are THE most reputable review site. And I'm not linking to reviews. I'm linking to benchmarks. How is a piece of hardware producing a certain framerate "bought off"? Go ******** yourself.
Seriously. Why do you resort to taking vague and impossibly ridiculous positions just because you're too stubborn to see someone else's point of view? I present NUMBERS - as in ACTUAL FACTS - to back up what I'm saying, and your response is to imply that the numbers are wrong...? That maybe they're the product of some "reviewer conspiracy" involving years of planning and corporate bribery to make it appear as if new processors are better than the old ones? When these processors can be tested and easily compared by anyone? Really? Dude... wtf?
As for why there are more generations of Nvidia graphics cards than Intel processors, it's because it is easier and more advantageous to move GPU silicon to newer manufacturing processes and half-nodes than CPUs.
CPUs are very complex, with long development cycles: a small number of processor cores designed to perform a wide variety of tasks. GPUs are simpler and far more scalable, with large numbers of very limited processor cores all working in parallel.

With GPUs, moving from a standard 45nm manufacturing process to a 40nm half-node is advantageous because more cores can fit in the same die area, or the same number of cores makes for a smaller die. That half-node might only become available 8 months after the standard node did, though, and might only stick around for another 8 months after that before fabs drop to the next full node. It's harder to compensate for the potentially poorer manufacturing yields of half-nodes when it comes to processors, since that kind of binning costs you large chunks of performance (AMD's Athlon and Phenom X3s, for example), let alone having to test and validate your architecture in such a short time period. And what is a half-node going to offer a processor architecture? Tiny gains in power efficiency? KBs of extra cache? They can't possibly fit an extra processor core with such a small change, and they're not going to risk introducing major architectural changes on a new manufacturing process; that's why Intel developed their "Tick-Tock" design philosophy.

But with GPUs it's easy to add redundancy - for example, extra stream processors to help compensate for manufacturing defects. And with a simpler chip in general, with far fewer components, it doesn't take long to validate a design or to retool it for a new process. Those small node improvements can give you enough of a leg-up over your competition - especially if you get onto it first - whether you're looking for small cost reductions from a die shrink or a small performance boost from more transistors. Games drive hardware sales, and their requirements only go up over time. If you aren't producing cutting-edge hardware, you lose out on those sales.
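To put rough numbers on that half-node jump, here's a quick illustrative calculation (idealized area scaling only - real processes never shrink this cleanly):

# Idealized die-shrink math for a 45nm -> 40nm half-node move.
# Assumes perfect linear scaling, which no real process achieves.
old_node = 45.0  # nm
new_node = 40.0  # nm

area_ratio = (new_node / old_node) ** 2  # transistor footprint shrinks by this factor
print(f"Same design needs {area_ratio:.0%} of the old die area")    # ~79%
print(f"Same die area fits {1 / area_ratio:.2f}x the transistors")  # ~1.27x

A roughly 21% smaller die (or ~27% more shaders in the same area) is a big deal for a chip built from thousands of identical cores; for a CPU, that sliver of space buys you almost nothing architecturally, which is exactly the point.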
There are twice as many graphics card generations as processor generations because they are easier to produce. Because they can take real advantage of half-nodes. Because some generations (like the GeForce 200 series, and various mid- and low-end cards from every generation) rely heavily on rebranding cards from the previous generation instead of releasing all-new hardware.
BUT NONE OF THIS MATTERS. If spending $100 on a better graphics card gives you even just a 20% boost in game performance, versus spending $100 to bring your i5 up to an i7 for a 0-5% increase, and the OP's concern is gaming performance, why would you recommend the i7 over better graphics?
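Put differently, here's the budget math in a few lines (the 20% and 0-5% figures are the ballpark gains cited above, not measured benchmark results):

# Where does $100 buy the most gaming performance?
# Gains are the ballpark figures from this thread, not measured data.
baseline_fps = 60.0

gpu_upgrade = baseline_fps * 1.20  # ~20% from putting $100 into the graphics card
cpu_upgrade = baseline_fps * 1.05  # ~5% best case from an i5 -> i7 swap

print(f"$100 on GPU: {gpu_upgrade:.0f} fps")  # 72 fps
print(f"$100 on CPU: {cpu_upgrade:.0f} fps")  # 63 fps

Same money, wildly different returns.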
To be fair, if you're bringing 9000-series GeForce cards into this (even the 200 series was released before Nehalem debuted) then we need to bring the Core 2 architecture into this discussion as well. Core 2 Duos and Quads were the predominant architecture sold and used in 2008. The GeForce 200-series released mid-2008. Nehalem didn't release until the end of the year.
The raw processing performance difference between a Q9650 and a modern i5 or i7 is over 100% in many benchmarks. Why not in gaming? Because THE GRAPHICS CARD IS MORE IMPORTANT. This has nothing to do with how well processors age and everything to do with how powerful your graphics card is. The fact that newer processors can offer upwards of 40-50% higher framerates than 4-5 year old processors with the same graphics hardware is impressive, but the fact remains that a better graphics card is still going to offer substantially better performance. Period. You would need an exceptionally shitty processor to significantly bottleneck most graphics cards.
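If it helps, think of framerate as capped by whichever component runs out of headroom first. A toy model with made-up numbers, purely to illustrate the bottleneck logic:

# Toy bottleneck model: actual fps = min(what the CPU can feed, what the GPU can render).
# All numbers here are invented for illustration.
def effective_fps(cpu_limited_fps, gpu_limited_fps):
    return min(cpu_limited_fps, gpu_limited_fps)

old_cpu, new_cpu = 90.0, 130.0   # fps each CPU could feed to the GPU
mid_gpu, fast_gpu = 60.0, 85.0   # fps each GPU can render

print(effective_fps(old_cpu, mid_gpu))   # 60 - the GPU is the wall either way
print(effective_fps(new_cpu, mid_gpu))   # 60 - a faster CPU changed nothing
print(effective_fps(old_cpu, fast_gpu))  # 85 - the GPU upgrade moved the needle

Only when the CPU-limited figure falls below what the GPU can render does the processor start costing you frames - which is why it takes a truly awful CPU to hold back a typical card.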