ATI Radeon R600 GPU News | Pics


It better be worth the wait :)

Sounds like the first cards are going to be 80 nm (RV610?) and then 65 nm (RV630?), though.

Screenshots captured from the product page for the HIS Radeon HD 2900 XT:

http://www.techpowerup.com/?30504

The problem is the HD 2600 isn't coming out until June, and the R700 is supposed to arrive in Q1 2008 with multiple GPUs. We'll probably know a lot more in two weeks.

The first reviews of the R600 are, in a word, disappointing. At the prices they charge for this level of graphics card, you have a right to expect excellence. The new nVidia GeForce 8800 Ultra is coming in at around $800+!!! Sheeesh!

I don't think anything ATI comes out with is going to compete very well with the 8800 until they decide to go with a unified shader approach like nVidia did.

Anyone notice how cards nowadays are lacking, for lack of a better word, in the pixel-pushing department (pixel-fill-rate-wise, that is)? In fact, as far as raw pixel fill rate is concerned, newer cards are often slower than cards from the previous generation. (What's the ROP count on R600s?) I mean, I understand cost cutting and the fact that these corporations are in it for the money, but if my shiny brand new $450 card from nVidia and/or ATI scores lower under 3DMark2001SE (or any other DirectX 8.x benchmark, no AF, no AA included) than, say, my 5+ year old GeForce4 Ti, then I would be a bit concerned with the state of the industry.

It's all shaders, shaders, shaders nowadays. Kind of like what auto manufacturers are doing. Why waste cash on developing better engines or similar, when you can just slightly redesign the shell (make it look "wicked", or whatever the PR department decides is "wicked" for that particular year), slap $50 worth of "bling-bling" decals or what-not on it (charge an extra $3,000 for it), and call it a "brand new model"?

The latest cards might be new, but they ain't much faster raw-power-wise. Call me old-fashioned, but I like me some pixel fill rate.
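
If you want to put rough numbers on it, theoretical pixel fill rate is just ROPs (or pixel pipelines) times core clock. Here's a quick Python sketch; the ROP counts and clock speeds are ballpark launch specs from memory, so treat the exact figures as illustrative rather than gospel:

# Theoretical pixel fill rate = ROPs (or pixel pipelines) * core clock in MHz.
# ROP counts and clocks below are ballpark launch specs, not measured numbers.
def pixel_fill_rate_mpixels(rops, core_mhz):
    return rops * core_mhz  # megapixels per second

cards = {
    "GeForce4 Ti 4600 (2002)":  (4, 300),   # 4 pixel pipelines @ ~300 MHz
    "Radeon X850 XT (2004)":    (16, 520),  # 16 ROPs @ ~520 MHz
    "Radeon HD 2900 XT (2007)": (16, 742),  # 16 ROPs @ ~742 MHz
}

for name, (rops, mhz) in cards.items():
    print(f"{name}: ~{pixel_fill_rate_mpixels(rops, mhz):,} Mpixels/s")

The jump from 2002 to 2004 there is several times over; the jump from 2004 to 2007 is maybe 40%, which is exactly the stagnation I'm complaining about.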

That would be true for most graphics cards, though nVidia's 8800 is an exception. With unified shaders, they can be programmed to do anything, including raw pixel filling.

My EVGA 8800 GTS scored only 5% to 8% better, at best, on older DirectX 7/8 and OpenGL benches than my much, MUCH older X850 XT. Yeah, the 8800 GTS has 96 unified shaders that can handle fragment or vertex work vs. 16 pixel/6 vertex pipelines on the X850 XT, but it has only 20 ROPs vs. 16 on the X850 XT.
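
Run the same back-of-the-envelope math on those two (ballpark launch clocks again, so a rough Python sketch rather than exact specs) and the theoretical fill-rate gap is only around 20%, which lines up with the small lead I'm seeing on those older benches:

# Ballpark launch clocks, illustrative only.
gts_8800 = 20 * 500  # ~20 ROPs @ ~500 MHz -> ~10,000 Mpixels/s
x850_xt  = 16 * 520  # ~16 ROPs @ ~520 MHz -> ~8,320 Mpixels/s
print(f"8800 GTS vs. X850 XT raw fill rate: {gts_8800 / x850_xt:.2f}x")  # ~1.20x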

I'd beg to differ. The difference with the new cards is that their power isn't all about raw pixel-pushing ability. The cards are doing A LOT more work than those old GeForce4 cards. When you start to understand the work that's being done on each pixel the card is pushing, you'll see that they do indeed have a lot of raw power; they just aren't pushing as many pixels. Basically it comes down to the fact that the card and game manufacturers realized they don't have to push tons of pixels to get the same, and in most cases much greater, image quality. Try to get a GeForce4 to do all of the lighting, shadows, AA, AF, bump mapping, etc., to each pixel and watch it slow to a crawl.

Yes, for older games like, say, Quake III they may not be that much faster. But when you have a system that can constantly maintain 100+ frames per second, how much faster do you need it to be? The card manufacturers aren't focusing on making the old, already proven tech any faster. They're focused on expanding what the cards can do so that there's higher image quality. You can't just look at the raw pixels-per-second numbers anymore. You have to look at what the card is doing with those pixels before they're output to the screen.
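
To put a toy number on the "work per pixel" point: divide shader throughput by the cost of shading one pixel and you get how many fully shaded pixels per second a card can actually deliver. Everything in this little Python sketch is invented purely for illustration; the GFLOPS figures and per-pixel costs are not real specs:

# Illustrative only: the GFLOPS and per-pixel shader costs are made-up numbers.
def shaded_pixels_per_sec(shader_gflops, flops_per_pixel):
    # Pixels per second the shader core can finish at a given per-pixel cost.
    return shader_gflops * 1e9 / flops_per_pixel

dx8_card  = shaded_pixels_per_sec(shader_gflops=10,  flops_per_pixel=500)    # light, DX8-style shading
dx10_card = shaded_pixels_per_sec(shader_gflops=350, flops_per_pixel=5000)   # heavy lighting/shadows/AA/AF

print(f"DX8-era card, light shading:  ~{dx8_card / 1e6:.0f} Mpixels/s shaded")
print(f"DX10-era card, heavy shading: ~{dx10_card / 1e6:.0f} Mpixels/s shaded")

Even doing ten times the work per pixel, the newer card in that toy example still shades more pixels per second; the raw fill-rate number alone doesn't capture that.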

Guys, you completely missed the point. Or maybe you didn't. Hey, I didn't intend to bruise anyone's ego.

You say: "The difference with the new cards is that their power isn't all about raw pixel pushing ability."

Of course, geez, read my post. What did I say? It has always been like that, but there is one major difference nowadays: quality (new technologies like AA, AF, HDR, etc.) used to scale along with pure speed (raw pixel processing power, ROP count, and/or GPU MHz). That is not the case anymore. For comparison: we went from cards with GPUs that ranged from 80 MHz to 125 MHz in late 1999 to mid 2000, to cards with GPUs clocked up to 275 MHz a year later. Over the same stretch, ROP counts doubled. And nVidia, ATI, and others introduced new technologies, including new filtering and/or processing techniques.

You say: "The cards are doing A LOT more work than those old GeForce4 cards."

Which is true for every new generation of video cards, except when it comes to raw pixel-pushing power (core clocks, ROP counts), which is not scaling nearly as well as it did from 1997 to 2002. Anyone here remember anything from the GeForce FX 5xxx series? Exactly.
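
To back up that scaling point with some rough numbers, here's a quick Python sketch. Pipeline/ROP counts and clocks are ballpark launch specs from memory, so take the exact figures with a grain of salt:

# Theoretical fill rate = pipelines/ROPs * core MHz; ballpark launch specs only.
eras = {
    "1999-2001": [("TNT2 Ultra",        2,  150),
                  ("GeForce3 Ti 500",   4,  240)],
    "2004-2007": [("Radeon X850 XT",    16, 520),
                  ("Radeon HD 2900 XT", 16, 742)],
}

for era, (first, last) in eras.items():
    gain = (last[1] * last[2]) / (first[1] * first[2])
    print(f"{era}: {first[0]} -> {last[0]}: ~{gain:.1f}x theoretical fill rate")

Roughly 3x in two years back then versus well under 1.5x over the last three years. That is the stagnation I'm talking about.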

You say: "Basically it comes down to the fact that the card and game manufacturers realized they don't have to push tons of pixels to get the same, and in most cases much greater, image quality. "

Image quality and max AF/AA filtering mean nada when your "insert game title" is running @ 6 fps. Case in point, or several: the entire GeForce FX 5xxx series, most of the GeForce 6xxx series, and even a chunk of the GeForce 7xxx series (everything below and including the 7600 GT). Anyone remember XGI's Volari Duo waste of silicon? Pretty much all cards that had fancy new technologies but couldn't perform and/or showcase those technologies due to a lack of raw power. This was only corrected with the introduction of 256-bit cards with hefty ROP counts, such as ATI's Radeon X8xx series and nVidia's upper-tier GeForce 7xxx cards.

You say: "Yes, for older games like say Quake III they may not be that much faster. "

Yes, indeed, and it is an embarrassment how that particular segment of development has stalled, even regressed at some points. I mean, XGI's Volari Duo? Multi-GPU? The last time someone tried that, they went bankrupt. That was 3dfx. Actually, what is the current XGI GPU line-up... exactly.

You say: "But when you have a system that can constantly maintain 100+ frames per second how much faster do you need it to be?"

If that were the case, yeah, but like I said, all parties in question are actually dropping their ROP counts (and decreasing the raw power of their cards in many cases, with the exception of high-end models), which, when coupled with stagnating GPU speeds, means really, REALLY weak performance compared to older cards, even ones dating to the early part of this decade. Sure, some newer cards can run Quake III with AA/AF at the same levels an older GF3 Ti/GF4 Ti could and get a better frame rate. But lower the AA/AF or turn it off completely, and those older cards will not only perform as fast, but in some instances faster. And "a system that can constantly maintain 100+", as you put it, is really new CPUs, RAM, and similar technologies coming into their own.

You say: "They're focused on increasing the capabilities the cards can do so that there's higher image quality"

I've owned probably 35-40 "2D/3D accelerators" *chuckle* in the last 10 years or so, starting with several Voodoo/Voodoo2 cards back in the '90s (a few of which are still in my basement somewhere). I even tried to make a list a year or so ago. 2D image quality has remained roughly the same, really, except on high-end cards geared towards professionals. 3D image quality, and by that I mean various filtering and/or processing techniques such as AA/AF, or hardware optimizations that improved the way GPUs talk to VRAM, or to the system bus for that matter, well... they have all come pretty far. HDR shure is purtty. But raw pixel-pushing power has stagnated, especially if you look at the last 2-3 years. I remember many industry professionals predicting GPUs clocked @ 1 GHz or more, and cards with astronomical ROP counts, slated for the middle of this decade. It didn't happen. The CPU industry hit the same wall, but they tried to correct it with multi-core chips. Hey, who knows, maybe true multi-GPU cards in the spirit of the Voodoo 6000 are not that far off. I guess the introduction of SLI and CrossFire could be a part of that.

^ In short: quantity != quality. <---- read above.

Edited by InvaderFromMars

Another review:

http://www.tweaktown.com/articles/1100/1/p...tion/index.html

Now it's obvious that the 8800 GTX is beating the HD 2900 XT, but the 8800 GTX is also $150 more than the HD 2900 XT. I would rather see the 8800 GTS 640 MB vs. the HD 2900 XT; that makes the 8800 GTS the cheaper one, and performance may be about equal.

edit:

Found some 8800 GTS 640 MB vs. HD 2900 XT results here :)

http://guru3d.com/article/Videocards/431/

Edited by ripken204

Pretty interesting stuff. nVidia may win on raw power, but if any game developers incorporate all of ATI's fancy new stuff, I think ATI could look better.

re:InvaderFromMars

Seriously, wtf are you smoking? Puff, puff, pass whatever it is, I want a toke.
