Guys, you completely missed the point, or maybe you didn't. Hey, I didn't intend to bruise anyone's post count.

You say: "The difference with the new cards is that their power isn't all about raw pixel pushing ability." Of course, geez, read my post. What did I say? It has always been like that, but there is one major difference nowadays. Quality (new technologies like AA, AF, HDR, etc.) used to scale along with pure speed (raw pixel processing power, ROP count, and/or GPU MHz). That is not the case anymore. For comparison purposes: we went from cards with GPUs that ranged from 80MHz to 125MHz in late 1999 to mid 2000, to cards with GPUs clocked up to 275MHz a year later. In the same year, ROP count doubled. Also, nVidia, ATI, and others introduced new technologies, including new filtering and/or processing techniques.

You say: "The cards are doing A LOT more work than those old GeForce4 cards." Which is true for every new generation of video cards, except when it comes to raw pixel-pushing power (core clocks, ROP counts), which is not scaling nearly as well as it did from 1997 to 2002. Anyone here remember anything from the GeForce 5xxx series? Exactly.

You say: "Basically it comes down to the fact that the card and game manufacturers realized they don't have to push tons of pixels to get the same, and in most cases much greater, image quality." Image quality and max AF/AA filtering mean nada when your "insert game title" is running @ 6fps. Case in point, or several: the entire GeForce 5xxx series, most of the GeForce 6xxx series, and even a chunk of the GeForce 7xxx series (everything below and including the 7600GT). Anyone remember XGI's Volari Duo waste of silicon? All cards that had fancy new technologies but couldn't perform and/or showcase those technologies due to a lack of raw power. This was only corrected with the introduction of 256-bit cards with hefty ROP counts, such as ATI's Radeon x8xx series and nVidia's upper-tier GeForce 7xxx cards.
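To put "raw pixel pushing power" in numbers: theoretical peak fill rate is just core clock times ROP count. A quick sketch of the kind of jump described above (the specific cards and specs here are made-up placeholders in the spirit of the 1999-to-2001 comparison, not real models):

```python
# Theoretical pixel fill rate = core clock (MHz) x ROP count.
# Numbers below are illustrative placeholders, not real card specs.

def fill_rate_mpixels(core_mhz: int, rops: int) -> int:
    """Theoretical peak fill rate in megapixels per second."""
    return core_mhz * rops

old_card = fill_rate_mpixels(125, 2)  # hypothetical 1999/2000-era card
new_card = fill_rate_mpixels(275, 4)  # hypothetical 2001-era card: higher clock, ROPs doubled

print(old_card)             # 250 Mpixels/s
print(new_card)             # 1100 Mpixels/s
print(new_card / old_card)  # 4.4x raw pixel throughput in roughly a year
```

That kind of year-over-year multiple is the "quantity" scaling that, per the argument above, stopped happening once vendors shifted focus to filtering features instead of clocks and ROPs.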
You say: "Yes, for older games like say Quake III they may not be that much faster." Yes, indeed, and it is an embarrassment how that particular segment of development has stalled, even regressed at some points. I mean, XGI's Volari Duo? Multi-GPU? Last time someone tried that they went bankrupt. That was 3dfx. Actually, what is the current XGI GPU lineup.......exactly.

You say: "But when you have a system that can constantly maintain 100+ frames per second how much faster do you need it to be?" If that were the case, yeah, but like I said, all parties in question are actually dropping their ROP counts (and decreasing the raw power of their cards in many cases, with the exception of high-end models), which, coupled with stagnating GPU clocks, means really, REALLY weak performance compared to older cards, even ones dating to the early part of this decade. Sure, some newer cards can run Quake III with AA/AF @ the same levels an older GF3 Ti/GF4 Ti series card could and get a better frame rate. But lower the AA/AF or turn it off completely, and those older cards will not only perform as fast, but in some instances faster. And "a system that can constantly maintain 100+" as you put it is really new CPUs, RAM, and similar technologies coming into their own.

You say: "They're focused on increasing the capabilities the cards can do so that there's higher image quality." I've owned probably 35-40 "2d/3d accelerators" *chuckle* in the last 10 years or so, starting with several Voodoo/Voodoo2 cards back in the 90s. (A few of which are still in my basement somewhere. I even tried to make a list a year or so ago.) 2D image quality has remained roughly the same, really, except on high-end GPUs geared towards professionals. 3D image quality, and by that I mean various filtering and/or processing techniques such as AA/AF, or hardware optimizations that improved the way GPUs communicate with VRAM, or the system bus for that matter, well...
they have all come pretty far. HDR shure is purtty. But raw pixel pushing power has stagnated, especially if you look at the last 2-3 years. I remember many industry professionals predicting GPUs clocked @ 1GHz or more, and cards with astronomical ROP counts, slated for the middle of this decade. It didn't happen. The CPU industry hit the same wall, but they tried to correct it with multi-core chips. Hey, who knows, maybe true multi-GPU cards in the spirit of the Voodoo 6000 are not that far off. I guess the introduction of SLI and CrossFire could be a part of that.

^ In short: quantity != quality <---- read above.