
AMD RDNA 4 GPUs To Incorporate Brand New Ray Tracing Engine, Vastly Different Than RDNA 3

I wonder if they will return with a full stack with the new architecture.
After RDNA 4 they will release UDNA cards.
 

Wolzard

Member
If it isn't cheap then they shouldn't even bother releasing a GPU this time around. It would also be interesting if they had AI/ML-based FSR.

RDNA 4 is kind of a transition to the future UDNA, which will once again unify the GPU architecture for servers and games (it was unified in the GCN era).
It is already confirmed that FSR will be accelerated by AI; I am almost certain it will be similar to PSSR on the PS5 Pro.

 

Sanepar

Member
I really hope UDNA has high-end models in 2026 and FSR 5 delivers at least DLSS 3.5 quality; if so, I'm in. I don't care about RT.
 

Sanepar

Member
Fixes problems with raster graphics = con, lol. It also does things (global illumination) that raster can't do.

Not to mention DLSS is much better than FSR 3.1 (we will see how 4 will compare).

AMD is behind in every metric, in RT they are even behind Intel (so far).

The only thing that AMD does better is (usually) VRAM amount at the same performance tiers as Nvidia.

I HOPE AMD will deliver and fix things, but they usually fuck up on the GPU front.
In raster they are on par.
DLSS is better, but the way you put it makes it sound like a night-and-day difference, and it is not.

They can easily catch DLSS 3.5 quality, IMO.

Maybe they will catch up in RT with UDNA, maybe not, but if they have a better price, and FSR and raster on par with Nvidia, I will go AMD. Either gamers change their posture of blindly buying Nvidia, or in two gens we will be paying $5k for a GPU.
 
They can easily catch DLSS 3.5 quality, IMO.

If PSSR is anything to go by, then likely not. We also don’t know how RDNA 4 will handle ML, and I doubt they’ll come close to Nvidia, especially on the first iteration.

They’re welcome to surprise me though.
 
If AMD Strix Point Halo APUs are equivalent to NVIDIA 4070 performance with RDNA 3.5 CUs, will there be an RDNA 4 based APU? Is there any news, or are there rumors, about RDNA 4 APUs following the discrete desktop release in Q1 2025?
 

SolidQ

Member
9070XT 16GB, 9070 16GB, 9060 12GB. N44 later

N44 seemingly doesn't have encoders, but that's good: it reduces die size and cost, and today almost every CPU has an iGPU.
 

mitchman

Gold Member
If PSSR is anything to go by, then likely not. We also don’t know how RDNA 4 will handle ML, and I doubt they’ll come close to Nvidia, especially on the first iteration.

They’re welcome to surprise me though.
PSSR quality suffers from games using different versions of the SDK, with many using older SDKs that do not have the most recent implementation of PSSR, meaning quality will be much worse.
 

llien

Member
ML upscaling
Which of these two is more of an ML upscaler, if I may ask:
DLSS 1, or DLSS 2 and later?

when it comes to RT
It's worth keeping in mind what a letdown it has been to date: with the 4th (!!!) gen of "hardwahr RT" around the corner, NONE of the promises of "hardwahr RT" have materialized. Let me list them:

1) Next-gen games with yet unseen effects - yeah, sure, John
2) Ease of development - ahaha
3) "With enough hardwahr RT there will be no performance hit" - right
 

Bojji

Member
Which of these two is more of an ML upscaler, if I may ask:
DLSS 1, or DLSS 2 and later?


It's worth keeping in mind what a letdown it has been to date: with the 4th (!!!) gen of "hardwahr RT" around the corner, NONE of the promises of "hardwahr RT" have materialized. Let me list them:

1) Next-gen games with yet unseen effects - yeah, sure, John
2) Ease of development - ahaha
3) "With enough hardwahr RT there will be no performance hit" - right

1. You're saying that real-time global illumination and accurate reflections can be done in raster?
You may mention Lumen, but that's software RT, which is heavy on hardware, and planar reflections are limited and require a lot of manual work.
2. You can make a game without any prebaked lighting and then use RT/Lumen to make it work. Only Indiana Jones and Metro EE were done like that; there is still too much ancient hardware holding things back.
3. This might be true; we still don't have enough hardware RT power, I guess?
 

Kenpachii

Member
After RDNA4 they will release UDNA cards.

So is this upcoming generation another 5000 series for AMD? Will UDNA bring stuff that old cards can't do?
 

FireFly

Member
Which of these two is more of an ML upscaler, if I may ask:
DLSS 1, or DLSS 2 and later?
They do different things. DLSS 1 uses spatial upscaling to try to "guess" what the missing detail in a given frame should be, while DLSS 2 tries to intelligently reuse information from previous frames.

The advantage of DLSS 2.0 is that it avoids the kind of hallucinated detail seen with spatial upscaling. I imagine that FSR 4 will follow the same strategy.
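To make the distinction concrete, here is a toy Python sketch of my own (static scene, fixed blend weights, no motion vectors and no neural network, so nothing like the real implementation): a single-frame spatial upscale can only work with the pixels it has, while accumulating jittered low-resolution frames over time recovers detail that no single frame contained.

```python
import numpy as np

def spatial_upscale(frame_lr, scale=2):
    # Single-frame upscale (nearest neighbour); a DLSS 1-style network would
    # instead try to hallucinate plausible detail from its training data.
    return frame_lr.repeat(scale, axis=0).repeat(scale, axis=1)

def temporal_accumulate(history_hr, frame_lr, jitter, alpha=0.5):
    # Blend the new low-res samples into the high-res history at the pixel
    # positions the jitter offset says they came from (static scene, no motion).
    dy, dx = jitter
    out = history_hr.copy()
    out[dy::2, dx::2] = alpha * frame_lr + (1 - alpha) * history_hr[dy::2, dx::2]
    return out

rng = np.random.default_rng(0)
truth = rng.random((64, 64))                 # stand-in for the "real" high-res image
history = np.zeros_like(truth)
for _ in range(4):                           # a few frames per jitter offset
    for jitter in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        dy, dx = jitter
        low_res = truth[dy::2, dx::2]        # this frame's jittered half-res "render"
        history = temporal_accumulate(history, low_res, jitter)

spatial = spatial_upscale(truth[::2, ::2])
print("single-frame spatial error :", np.abs(spatial - truth).mean())
print("temporal accumulation error:", np.abs(history - truth).mean())
```

The temporal error ends up far lower simply because, over several frames, every output pixel eventually receives a real sample; DLSS 2 additionally uses motion vectors and a network to decide how much of that history is still trustworthy when things move.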
 

JohnnyFootball

GerAlt-Right. Ciriously.
In raster they are on par.
DLSS is better, but the way you put it makes it sound like a night-and-day difference, and it is not.

They can easily catch DLSS 3.5 quality, IMO.

Maybe they will catch up in RT with UDNA, maybe not, but if they have a better price, and FSR and raster on par with Nvidia, I will go AMD. Either gamers change their posture of blindly buying Nvidia, or in two gens we will be paying $5k for a GPU.
We are reaching diminishing returns in the quality improvements of DLSS. The only way DLSS can drastically improve is if they can take like a 720p image and successfully upscale it to 4K with minimal visual impact.

FSR has a lot of room for improvement and it appears that AMD realizes that these features are too significant to not invest time to improve. Good to see them devoting actual silicon to these tasks. Sony lit a fire under their ass with PSSR.

As everyone says with AMD GPUs....I will believe it when I see it.
 

Zathalus

Member
1) Next-gen games with yet unseen effects - yeah, sure, John
2) Ease of development - ahaha
3) "With enough hardwahr RT there will be no performance hit" - right

1) Not unseen, but RT effects are simply superior to most non-RT solutions. Baked lighting (offline ray traced) can look superior to RTGI but requires extensive dev time and can balloon game size. It is also static, so if you want a game with dynamic elements such as time of day, you are just out of luck. PT is the endgame, but the performance tax for it is heavy. Reflections and transparency with RT are simply better; cube maps and screen space simply don't compare, and while planar reflections look good they are a pain to implement, have a rather high rendering cost, and just don't capture dynamic elements like RT does. RT shadows and RTAO are more accurate but are a bit more subtle.

2) It does, but it requires the game to be built with it in mind. A raster game with RT elements on top obviously won't reduce dev time; on the contrary, it will increase it. But as the developers at 4A said, development of Metro Exodus Enhanced Edition was far easier once they ripped out all the manual lights and relied entirely on their RTGI, which made scene iteration far easier. Bloober also mentioned how Lumen was a game changer for them, as it allowed them to have enhanced scene fidelity without extensive dev time.

3) As with every single graphical feature, there will always be a performance cost. But that cost is dropping with every new Nvidia and AMD generation.
 

Kenpachii

Member
We are reaching diminishing returns in the quality improvements of DLSS. The only way DLSS can drastically improve is if they can take like a 720p image and successfully upscale it to 4K with minimal visual impact.

FSR has a lot of room for improvement and it appears that AMD realizes that these features are too significant to not invest time to improve. Good to see them devoting actual silicon to these tasks. Sony lit a fire under their ass with PSSR.

As everyone says with AMD GPUs....I will believe it when I see it.

The major problem with AMD is that they could drop some AI-based DLSS competitor that is full of problems and never update it again until the new series comes out. It's hard to trust AMD on the software side of things.
 

Wolzard

Member
The major problem with AMD is that they could drop some AI-based DLSS competitor that is full of problems and never update it again until the new series comes out. It's hard to trust AMD on the software side of things.

It has the advantage that its software is normally open-source.
Today FSR and that other frame generator support many more games through mods than officially.
 

JohnnyFootball

GerAlt-Right. Ciriously.
The major problem with AMD is that they could drop some AI-based DLSS competitor that is full of problems and never update it again until the new series comes out. It's hard to trust AMD on the software side of things.
When has AMD had a history of doing that? You just perfectly described Nvidia, like how they locked DLSS 3 and FG behind the 4000 series.

AMD FSR is compatible with all their recent video cards.

What Intel seems to be doing with XeSS is offering a generic version for non-Intel GPUs and an enhanced version for their own GPUs. I suspect that is where AMD goes from here.
 

llien

Member
This doesn’t make any sense.

They already have 7900xtx.

Why would their latest flagship GPU be less powerful than that?
Next gen is much cheaper to produce.
The 7900XTX marvel is priced around the 4070 Ti, yet guess which is selling more.

AMD has uphill battles to fight, and product quality or price has little to do with it.
 

llien

Member
Strix Halo 8060




Cool, but let's see if The Filthy Green lets OEMs sneak AMD's covert dGPUs-in-APU-packaging into laptops.
The situation is simply embarrassing: after the amazing 6800M Advantage Edition, there is only the 7600S, with a crappy screen, at MSRP and hidden deep in asus.com.
 

llien

Member
And by the way, DLSS isn't exactly upscaling. Upscaling simply resizes the image to a higher resolution but cannot add new detail/restore missing detail, whereas DLSS does exactly that based on temporal data.
Oh my FG.
Upscaling is getting to higher res from lower res, regardless of what PF shills told you. (how is 8k gaming with 3090 going, by the way?)

Upscaling is one of the steps in, cough, Stable Diffusion pipeline.

TAA is what does "it" based on "temporal data".
Nobody knows exactly what is done by AI afterwards, but most likely just denoising.

The actual AI upscaling, DLSS 1.0, failed miserably.
And no, neural networks do not really need "motion vectors" to function, our brain being a good example.
Yet it still failed. (Things are of course more complex than they sound.)

The thought that the AI approach is better at every task is misinformed.
At things like denoising, though, AI should excel.
 

llien

Member
$1000 GPU can run PT well,
Ryan Reynolds Reaction GIF


Of 37 recent games reviewed by HUB, only 30% bring definite improvements with RT enabled.
The price paid for that is a hefty FPS drop. (Don't read that as the other 70% not dropping FPS; they do, just with arguably no benefit.)

We were in the "better visuals, but lower FPS" business way before the RT gimmick even existed, doh.

In the context of this thread, namely buying decisions: how are people who bought the 2000 series "for RT" doing today? Was it worth it? Perhaps 2025 will bring games that will justify it?
 

llien

Member
The 7900XTX does have a slight edge in rasterization. But then Nvidia has a big advantage in Ray Tracing, AI and DLSS.
With all these deficits, the 7900XTX should be significantly cheaper.
The 7900XTX has a whopping 24GB of VRAM, letting you run models that the 4080 cannot.
It still costs less than the 4080 where I live.
Whether a 16-17% perf difference in the RT gimmick justifies it... I am not sure.

It's not like the market is driven by informed decisions anyhow.
 

winjer

Gold Member
The 7900XTX has a whopping 24GB of VRAM, letting you run models that the 4080 cannot.
It still costs less than the 4080 where I live.
Whether a 16-17% perf difference in the RT gimmick justifies it... I am not sure.

It's not like the market is driven by informed decisions anyhow.

Although that is true, anyone considering running serious AI workloads will go for a 4090.
Not only does it have 24GB of VRAM, but it also has greater memory bandwidth, better Tensor Cores, and much better software and support for AI applications.
 

Hudo

Member
The point of comparison is the price. The 4080 and the 7900XTX have similar prices.
The 7900XTX does have a slight edge in rasterization. But then Nvidia has a big advantage in Ray Tracing, AI and DLSS.
With all these deficits, the 7900XTX should be significantly cheaper.

Edit: the 4080 also uses 50W less. It's not much in this GPU range, but it's something to consider.
The problem I have is that I also do some home office stuff from my PC and require CUDA, so I am fucking vendor locked...
 

llien

Member
1. You're saying that real-time global illumination and accurate reflections can be done in raster?
You may mention Lumen, but that's software RT
"In raster" huh? Nice move.
I'd say "without 'hardwahr RT'".

there is still too much ancient hardware holding things back.
This is the answer to "why did NONE of the promises by The Filthy Green hyped by PF shills materialize", right?
Let me add it to the list you've already voiced:

1) Lazy devs
2) F*cking consoles
3) "Lot's of old GPUs" <= new one

we still don't have enough hardware RT power, I guess?
Oh, so the "just not enough 'hardwahr RT'", at this point an obvious myth, is still alive.


What needs to happen for you to accept that it was a lie upfront? Is there something theoretically possible that would make you doubt the words of the creators of "8K gaming with 3090"?
 

llien

Member
Although that is true, anyone considering running serious AI workloads will go for a 4090.
Not only does it have 24GB of VRAM, but it also has greater memory bandwidth, better Tensor Cores, and much better software and support for AI applications.
For two times the price, which for some might make a difference, I think.
And if money does not matter, why go for a puny 4090 when you can go for an MI325X with 256GB of HBM3?
 

FireFly

Member
Software Lumen doesn't support accurate mirror-like reflections (due to only tracing against SDFs), has an 800 m cutoff for GI (default 200 m), and its reflections don't support skinned objects (characters). It's also really expensive in itself, such that enabling Hardware Lumen only incurs a modest performance hit (~7% in the Matrix demo for Nvidia cards). Epic's newest MegaLights technology is designed to be used with hardware RT, even on consoles, and performance has been significantly improved to allow this.
 

llien

Member
They do different things. DLSS 1 uses spatial upscaling to try to "guess" what the missing detail in a given frame should be, while DLSS 2 tries to intelligently reuse information from previous frames.
Are you telling me that the DLSS 1 idea was stupid?
Why was access to previous frames denied to DLSS 1, the only real ML upscaling we have actually seen in the gaming world?

What makes you think DLSS 2 is anything but glorified TAA, with ML used to denoise (something that NNs excel at)?
 

llien

Member
Software Lumen doesn't support accurate mirror-like reflections
This is relevant to the "can't do global illumination without 'hardwahr RT'" claim? OK.
So one can do "mirror-like reflections" without "hardwahr RT", just not accurate ones. Got it.

Let me cite "accurate reflection" using "hardwahr RT" from the previous page, just for reference:
(screenshot of "accurate reflections" from the previous page)

Absolutely stunning. Nothing of that sort had ever been seen in games before we got the "HRT". Absolutely.
 

winjer

Gold Member
For two times the price, which for some might make a difference, I think.
And if money does not matter, why go for a puny 4090 when you can go for an MI325X with 256GB of HBM3?

Yes, costing double the price is a concern. But it's offset by performance and software support.
Just recently, there was a report that showed that AMD's MI hardware was faster than Nvidia's. And cheaper. But most companies still prefer to buy Nvidia due to the software stack.
Part of it is because CUDA is a swamp that locks in developers. But it's also because Nvidia creates much more software than AMD to support their products.

Something like the MI325X is not even in the same ballpark as the 7900XTX. It's probably 10-20 times more expensive.
That is hardware only for enterprises. Not for enthusiasts or prosumers.
 

llien

Member
1) Not unseen but RT effects are simply superior to most non-RT solutions.
Where can I see those "superior" effects? Given that this needs no "hardwahr RT"


2) It does, but it requires the game to be built with it in mind. A raster game with RT elements on top obviously won't reduce dev time; on the contrary, it will increase it.
Agreed.
This is where I'd expect engines to pitch in. Still nothing at this point. With the 4th gen of hardware RT releasing in days, it's rather peculiar.

A re-do of a game being less of a pain is hardly surprising.

3) As with every single graphical feature, there will always be a performance cost.
There will always be a perf hit, because "ray tracing runs just on the hardwahr RT" is a lie.
But unlike you, Bojji still remembers the original lie.
 
Where can I see those "superior" effects? Given that this needs no "hardwahr RT"

I seriously don't know if you guys are just playing dumb at this point for trolling purposes, or if you've seriously slept through the thousands of pages, press conferences and analyses showing the differences and limitations of all these technologies xyz times.....
 

llien

Member
Just recently, there was a report that showed that AMD's MI hardware was faster than Nvidia's. And cheaper. But most companies still prefer to buy Nvidia due to the software stack.
Part of it is because CUDA is a swamp that locks in developers. But it's also because Nvidia creates much more software than AMD to support their products.
There are not that many companies that are in the AI business but do not buy from The Filthy Green.
And this is just a glimpse of how filthy things get once one considers the alternative vendor:



Companies are afraid to even MEET with an AMD rep, cough. Cool, ain't it?

As far as professional ML/AI development goes, there aren't as many frameworks as one would think. There are native AMD ROCm versions of the major ones: PyTorch and TensorFlow.

99%+ of development in the AI world never goes deeper than those libs.
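A small illustration of what I mean by "never goes deeper than those libs" (assuming a ROCm build of PyTorch is installed; torch.version.hip is set on ROCm builds and None on CUDA builds):

```python
import torch

# ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda API,
# so typical framework-level code runs unchanged.
print("GPU available :", torch.cuda.is_available())
print("HIP/ROCm build:", torch.version.hip)      # None on a CUDA build of PyTorch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")   # "cuda" maps to the HIP backend here
    print("matmul OK     :", (x @ x).shape)
```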

---------------------------------------

People need to remember the pre-Ryzen CPU world, when an 8-core CPU was $1000.
Mr. Huang is filthier than any Intel CEO we have seen.
 
Slept through f*cking what???

I need a f*cking conference and thousands of pages written by amazing analysts to spoon-feed me how I perceive sh*t with my own eyes?
Ah, so you're one of those pretending to have heavy glaucoma...

We've been over this at least a thousand times and we've even got some games running with path tracing by now...
and your kind still thinks this "I don't see it" angle is anything but completely demented.
 

llien

Member
Your majesty.
I still don't get which thousands of pages I've slept through.

Propaganda has interesting side effects, when "conclusions" are put into the brain even without any credible path to them.

We've even got some games running with path tracing by now
That is probably very impressive.
What happened to this marbles demo:



?

So far on AMD, the more complex the BVH is, the worse the game runs, it seems.
You might want to check how many of those games were sponsored by a known vendor, cough.
 

winjer

Gold Member
There are extreme examples of that, where an RTX 2060 outperforms any AMD card if you enable path tracing in Cyberpunk, for example.

So far on AMD, the more complex the BVH is, the worse the game runs, it seems.

RDNA4 will apparently try to solve this.

Actually, it's AMD that uses the most complex BVH for its RT effects.
For RTX, the BVH is usually around 2-6 levels deep, but with AMD it's usually 8-11 levels deep.
This means AMD can use fewer rays per scene.
But the problem with AMD's RT solution is that it has significantly lower ray-triangle intersection throughput.
And to make things worse, it has lower coherency and lower unit occupancy.
This means that when the ray count increases, AMD's RT units get overwhelmed much faster.
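Out of curiosity, here is a toy 1-D sketch (intervals on a line instead of triangles in 3D, my own simplification) of the trade-off that tree depth and leaf size control: deeper trees with small leaves do more box tests but fewer primitive tests per query, shallower trees the opposite. The vendor-specific depths above are winjer's claim; this toy only shows why the shape of the tree matters.

```python
import random

def build(prims, leaf_size):
    # prims: list of (lo, hi) intervals, pre-sorted by their lower bound.
    if len(prims) <= leaf_size:
        return {"bounds": (min(p[0] for p in prims), max(p[1] for p in prims)),
                "prims": prims}
    mid = len(prims) // 2
    left, right = build(prims[:mid], leaf_size), build(prims[mid:], leaf_size)
    return {"bounds": (min(left["bounds"][0], right["bounds"][0]),
                       max(left["bounds"][1], right["bounds"][1])),
            "children": [left, right]}

def query(node, q, stats):
    # Visiting a node costs one "box" test; leaves then test every primitive they hold.
    stats["box_tests"] += 1
    lo, hi = node["bounds"]
    if q < lo or q > hi:
        return
    if "prims" in node:
        stats["prim_tests"] += len(node["prims"])
        return
    for child in node["children"]:
        query(child, q, stats)

random.seed(0)
prims = sorted(((c - 0.01, c + 0.01) for c in (random.random() for _ in range(4096))),
               key=lambda p: p[0])
for leaf_size in (1, 4, 16, 64):            # small leaves -> deep tree, and vice versa
    root, stats = build(prims, leaf_size), {"box_tests": 0, "prim_tests": 0}
    for _ in range(1000):
        query(root, random.random(), stats)
    print(f"leaf_size={leaf_size:>2}: box tests={stats['box_tests']:>6}, "
          f"primitive tests={stats['prim_tests']:>6}")
```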
 

Zathalus

Member
Where can I see those "superior" effects? Given that this needs no "hardwahr RT"



Agreed.
This is where I'd expect engines to pitch in. Still nothing at this point. With the 4th gen of hardware RT releasing in days, it's rather peculiar.

A re-do of a game being less of a pain is hardly surprising.


There will always be a perf hit, because "ray tracing runs just on the hardwahr RT" is a lie.
But unlike you, Bojji still remembers the original lie.

You linked me a video demonstrating Hardware Lumen, which is specifically accelerated by the RT units on both AMD and Nvidia? So what’s the point of that? Hardware Lumen is ray tracing.

As for your obsession with hardware RT: you do know that the RT units on both AMD and Nvidia accelerate things like intersection tests and BVH calculations, right? And that those are not the only things that happen when you trace rays? There is still a computational cost involved, but without these hardware RT cores the corresponding computational cost would be far higher. That cost has been decreasing with each generation while the corresponding RT effects have been increasing. A 2080 Ti struggled with basic reflections in Battlefield, and now we have modern AAA releases with path tracing.
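For anyone wondering what an "intersection test" actually is, here is the classic ray/AABB slab test in plain Python; it is just the math, while dedicated RT units evaluate large numbers of these box tests (plus ray/triangle tests) in fixed-function hardware instead of shader code.

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    # Slab test: intersect the ray with each axis-aligned pair of planes and
    # keep the overlap of the parametric intervals; an empty overlap is a miss.
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False
    return True

# Example: a ray along +x from the origin against a unit box spanning x in [2, 3].
origin    = (0.0, 0.5, 0.5)
direction = (1.0, 0.0, 0.0)
inv_dir   = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
print(ray_aabb_hit(origin, inv_dir, (2.0, 0.0, 0.0), (3.0, 1.0, 1.0)))   # True
print(ray_aabb_hit(origin, inv_dir, (2.0, 2.0, 2.0), (3.0, 3.0, 3.0)))   # False
```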
 

FireFly

Member
Are you telling me that the DLSS 1 idea was stupid?
Why was access to previous frames denied to DLSS 1, the only real ML upscaling we have actually seen in the gaming world?

What makes you think DLSS 2 is anything but glorified TAA, with ML used to denoise (something that NNs excel at)?
1.) The DLSS 1.0 idea wasn't stupid in itself, since we can see Auto SR and Sony's Legacy Improve Image Quality feature delivering decent results. But I think gamers on PC, especially those on high-end hardware, are more sensitive to hallucinated detail, so Nvidia's approach was the right one.
2.) I think the issue is not just with the algorithm lacking access to previous frames, but with what it is trying to do, i.e. "guess" the missing information based on training data, rather than more intelligently applying TAAU.
3.) DLSS is a glorified form of TAA. That's the entire point. It just does a much better job than TAA at reconstructing detail, since it uses machine learning rather than manual heuristics to decide what data from previous frames is still relevant. This allows games to look as good while running at a lower base resolution.
4.) ML was not used for denoising until Ray Reconstruction was introduced.
This is relevant to the "can't do global illumination without 'hardwahr RT'" claim? OK.
So one can do "mirror-like reflections" without "hardwahr RT", just not accurate ones. Got it.

Let me cite "accurate reflection" using "hardwahr RT" from the previous page, just for reference:
(screenshot of "accurate reflections" from the previous page)

Absolutely stunning. Nothing of that sort had ever been seen in games before we got the "HRT". Absolutely.
If you want to compare the accuracy of reflections in software Lumen, you should use an example like the one below.



Edit: UE 5.3 added multi-bounce support for hit reflections, so that would clean up the issue with reflections of reflections not showing.
 

Panajev2001a

GAF's Pleasant Genius
Actually, it's AMD that uses the most complex BVH for its RT effects.
For RTX, the BVH is usually around 2-6 levels deep, but with AMD it's usually 8-11 levels deep.
This means AMD can use fewer rays per scene.
But the problem with AMD's RT solution is that it has significantly lower ray-triangle intersection throughput.
And to make things worse, it has lower coherency and lower unit occupancy.
This means that when the ray count increases, AMD's RT units get overwhelmed much faster.
Aren’t these all points addressed by RDNA4 assuming RDNA4 is PS5 Pro RT at worst and not insignificantly improved beyond it at best?
 

Bojji

Member

Lumen is software RT. The hardware version of it at the same quality should run faster thanks to the RT cores.

But hardware Lumen also has higher quality than the software version, so it's heavier:



(comparison screenshot: hardware vs. software Lumen)


You can also see a big fucking difference between RT acceleration and the raster fallback in Avatar:

(comparison screenshot: Avatar with RT acceleration vs. raster fallback)




So RT hardware is useless? I think even Mark Cerny would completely disagree with you.
 
Your majesty.
I still don't get which thousands of pages I've slept through.
Guess you're not involved in any kind of graphics discussion then, besides your trolling attempts here.
Kinda explains your nonsense Lumen comments, considering Lumen uses RT in both its hardware and software modes...

Propaganda has interesting side effects, when "conclusions" are put into the brain even without any credible path to them
Considering we've got path tracing running in actual games on our home PCs, I'm seriously just wondering at this point how desperate you are, exactly, to literally stand under a blue sky and claim it's red. If there is a bottom of the barrel, you're scratching it really hard right now....

That is probably very impressive.
What happened to this marbles demo:
We've got the tech it used in actual games now. :)
 
Oh my FG.
Upscaling is getting to higher res from lower res, regardless of what PF shills told you. (how is 8k gaming with 3090 going, by the way?)

Upscaling is one of the steps in, cough, Stable Diffusion pipeline.

TAA is what does "it" based on "temporal data".
Nobody knows exactly what is done by AI afterwards, but most likely just denoising.

The actual AI upscaling, DLSS 1.0, failed miserably.
And no, neural networks do not really need "motion vectors" to function, our brain being a good example.
Yet it still failed. (Things are of course more complex than they sound.)

The thought that the AI approach is better at every task is misinformed.
At things like denoising, though, AI should excel.
Some YouTubers may refer to DLSS technology as upscaling because people tend not to care about small details. The small details, however, can make all the difference.

Nvidia engineers refer to DLSS as either image reconstruction or super-resolution, and given what this technology does, they are absolutely right.



"DLSS samples multiple lower resolution images and uses motion data and feedback from prior frames to reconstruct native quality images."

What's interesting is that some DLSS buffers still run at native resolution (HUD elements, for example), but of course most of the work consists of sampling multiple lower-resolution images. To put it simply, DLSS merges previous frames into a single high-quality frame.

A native 4K image has about 8.3M pixels, but by stitching together several 3.7M-pixel frames (DLSS Quality renders at 1440p internally), you can get a more detailed image than native 4K. You can do this process without AI (FSR 2 or TSR, for example) and still get very comparable results to AI-powered DLSS in a static image, but the merged frames can show shimmering and other artefacts during motion. You need AI to analyse the images and decide how to render the merged detail without artefacts.
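For reference, the raw pixel counts behind that claim (my arithmetic, standard resolutions):

```python
# How many 1440p frames it takes, in principle, to supply one sample per 4K pixel.
native_4k  = 3840 * 2160     # 8,294,400 px
internal_q = 2560 * 1440     # 3,686,400 px (DLSS Quality internal res at 4K output)
frames     = -(-native_4k // internal_q)   # ceiling division
print(f"4K pixels   : {native_4k:,}")
print(f"1440p pixels: {internal_q:,}")
print(f"ratio       : {native_4k / internal_q:.2f} -> ~{frames} jittered frames to cover 4K")
```

In practice the accumulation runs over many more frames than that, with per-pixel weights, which is how a still image can end up resolving more detail than a single native frame.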

I tried playing the RE4 remake at 2880p FSR Quality, but even when downsampled to 1440p I saw very distracting shimmering, even without moving the camera, as trees and vegetation still moved in the wind and created shimmering. I wish this game supported DLSS, but it is what it is. AMD-sponsored games will not include the far superior DLSS image reconstruction.

Upscaling, as the name suggests, simply increases the size of the image in the simplest way, but this process cannot add any new detail and will always degrade image quality, because you cannot upscale pixels without losing quality due to uneven pixel scaling (and even integer scaling will hurt the image, because it makes pixels look like big squares and causes pixelation and aliasing). DLSS does the exact opposite, so why would you want to refer to it as upscaling? That makes no sense.

Believe it or not, in some games even DLSS Performance looks better than native TAA. In RDR2, for example, the native TAA looks very soft, and there's even more blur when you move the camera. The DLSS version built into the game is not good either, but with the updated 3.8.1 DLSS DLL file the image looks much sharper (both static and in motion) even when you choose DLSS Performance.


The 7900XTX has a whopping 24GB of VRAM, letting you run models that the 4080 cannot.
It still costs less than the 4080 where I live.
Whether a 16-17% perf difference in the RT gimmick justifies it... I am not sure.

It's not like the market is driven by informed decisions anyhow.
My RTX 4080S has 22% faster RT on average, but in games where RT really counts (heavy RT workloads) you will often see 50% at minimum. In PT games the 7900XTX can be 3.5x slower, and that's a lot. In Black Myth: Wukong, for example, medium PT on my card at 1440p only tanks performance by 3% compared to "Lumen" (I get 123 fps with PT medium and 127 fps with Lumen). Try to run even medium PT on an RX 7900XTX and no amount of tweaking will give you playable results.

(relative ray tracing performance chart at 3840x2160)


The RX 7900XTX has 24GB of VRAM. Of course that's more than the RTX 4080S's 16GB, but it's not something that makes a real difference in current games. Most games use 9-12GB of VRAM even at 4K. There are a few games that can use more than 16GB, but you need to turn on PT + FG at 4K native. My card can't even run such extremely high settings; I need to use DLSS to get smooth fps, and then my card is no longer VRAM limited. The AMD card has 24GB, but with 4x worse performance in PT you aren't going to play the most VRAM-intensive PT games anyway.

VRAM requirements will skyrocket when developers start porting PS6 games to PC, and that won't happen any time soon (2028/2029). I have heard the RDNA4 RX 9070 will "only" have 16GB too. It seems even AMD knows 24GB in a gaming GPU is overkill at the moment. I would be worried if my card only had 12GB of VRAM (it would still be enough with some tweaks in a few games), but 16GB is still plenty.

Both DLSS and FG can be considered gimmicks (very useful, but still). RT, however, is not a gimmick, quite the opposite. Do you even know why RT is demanding? It's because you aren't faking the lighting effects with raster gimmicks (SSR, prebaked lighting, shadow cascades), yet you want to call it a gimmick.

I want to vomit every time I see AMD fans trying to suggest that RT is too demanding to be usable. On my RTX 4080S, RT runs surprisingly well (in some games I only saw a 1 fps difference between RT on and off). I have not played a single game where turning on RT pushed performance into unplayable territory. Quite a few RT games in my library run at well over 60 fps even at 4K native, and some even at well over 120 fps; for example, RE3 Remake runs at around 130-200 fps and RE Village at 120-160 fps. Of course the most demanding RT games (The Witcher 3, Cyberpunk) require the help of DLSS to get into high-refresh-rate territory, but I don't mind using DLSS given how well it works. I would be stupid not to use it. I always use DLSS even if I don't need more fps, because thanks to DLSS I'm able to get a sharper image in every single game (DLDSR + DLSS absolutely destroys native TAA when it comes to image quality).

Before I upgraded my PC I really wasn't expecting to play Cyberpunk with RT at 140-170 fps at 1440p, even with DLSS. Even at 4K, thanks to it, I can get a very smooth 100-130 fps with psycho RT and 80-100 fps with PT.

RT games run great on my PC and this technology absolutely does make a difference. When I first played The Witcher 3 with RT, I was shocked at how much better it looked. Lighting with RT was no longer flat, because RT adds indirect lighting and shadows. RT reflections look much sharper and no longer fade during movement. Also, thanks to RT, shadows no longer draw in a few metres in front of the character. It's impossible to ignore such a huge difference unless you really don't care about graphics at all. RT is also very scalable; in some games SSR costs more than RT reflections. You need to turn on every single RT effect to tank performance on my card, and even then performance is still good :).

It still costs less than the 4080 where I live.
You know what's funny? Many people chose the 7900XTX over the 4080S simply because the AMD card is $100 cheaper, but that's not the case in the long run. I saw bank4buckPCgamer's YT gameplay videos, and his 7900XTX at 99% GPU usage ALWAYS draws 465W. My RTX 4080S draws between 260-315W depending on the game (at 99% GPU usage), so that's about a 150W difference. My entire PC draws 430W at max. 150W in use for 8 hours a day is 1.20 kWh, or 438 kWh per year. I have to pay 452 zł for that (109 USD at today's exchange rate). In my country, the 7900XTX would definitely end up costing more money in the long run; maybe not in the first year, because I'm not playing for 8 hours daily, but after 3 years that would be the case.
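For what it's worth, the electricity math above checks out; here is the same calculation spelled out (the price per kWh is simply what the quoted 452 zł over 438 kWh implies):

```python
extra_watts   = 150                               # extra draw of the 7900XTX vs. the 4080S
hours_per_day = 8
kwh_per_day   = extra_watts * hours_per_day / 1000        # 1.2 kWh/day
kwh_per_year  = kwh_per_day * 365                         # 438 kWh/year
price_per_kwh = 452 / kwh_per_year                        # ~1.03 zł/kWh implied
print(f"{kwh_per_day:.2f} kWh/day, {kwh_per_year:.0f} kWh/year, "
      f"~{price_per_kwh:.2f} zł/kWh, {kwh_per_year * price_per_kwh:.0f} zł/year")
```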
 
Lumen is software RT. The hardware version of it at the same quality should run faster thanks to the RT cores.

But hardware Lumen also has higher quality than the software version, so it's heavier:



(comparison screenshot: hardware vs. software Lumen)


You can also see a big fucking difference between RT acceleration and the raster fallback in Avatar:

(comparison screenshot: Avatar with RT acceleration vs. raster fallback)




So RT hardware is useless? I think even Mark Cerny would completely disagree with you.

That's correct, Bojji 👌


Lumen is a hybrid tracing pipeline that uses Software Ray Tracing. It traces against the depth buffer first, which we call Screen Traces, then it traces against the distance field and applies lighting to ray hits with the Surface Cache. Lumen takes any given scene and renders a very low-resolution model of it.
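A very rough, runnable sketch of that fallback order (my own simplification with toy data, not engine code): screen trace first, then the global distance field, with hits shaded from the low-resolution surface cache.

```python
def trace_lumen_software(ray_id, screen_hits, sdf_hits, surface_cache):
    # 1. Screen trace: check the depth buffer for an on-screen hit.
    hit = screen_hits.get(ray_id)
    if hit is None:
        # 2. Fall back to tracing the global signed distance field.
        hit = sdf_hits.get(ray_id)
    if hit is None:
        return "sky"                                  # miss: environment lighting
    # 3. Shade the hit from the low-resolution surface cache.
    return surface_cache.get(hit, "uncached")

# Toy data: ray 0 hits on-screen geometry, ray 1 only hits the SDF, ray 2 misses.
screen_hits   = {0: "wall"}
sdf_hits      = {1: "offscreen_floor"}
surface_cache = {"wall": "lit_wall", "offscreen_floor": "lit_floor"}
for r in range(3):
    print(r, trace_lumen_software(r, screen_hits, sdf_hits, surface_cache))
```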
 

Wolzard

Member
The RX 7900XTX has 24GB of VRAM. Of course that's more than the RTX 4080S's 16GB, but it's not something that makes a real difference in current games. Most games use 9-12GB of VRAM even at 4K. There are a few games that can use more than 16GB, but you need to turn on PT + FG at 4K native. My card can't even run such extremely high settings; I need to use DLSS to get smooth fps, and then my card is no longer VRAM limited. The AMD card has 24GB, but with 4x worse performance in PT you aren't going to play the most VRAM-intensive PT games anyway.

VRAM requirements will skyrocket when developers start porting PS6 games to PC, and that won't happen any time soon (2028/2029). I have heard the RDNA4 RX 9070 will "only" have 16GB too. It seems even AMD knows 24GB in a gaming GPU is overkill at the moment. I would be worried if my card only had 12GB of VRAM (it would still be enough with some tweaks in a few games), but 16GB is still plenty.

The RX 9070 will be an intermediate card, like the 4070/5070, which has/will have 12 GB.
Regarding VRAM use, at 4K there should be at least 16GB of VRAM. In the PS6 generation it will probably be much worse, as the console is expected to have 32 GB of RAM.

(VRAM usage charts, including Avatar: Frontiers of Pandora)
 

MarV0

Member
You know what's funny? Many people chose the 7900XTX over the 4080S simply because the AMD card is $100 cheaper, but that's not the case in the long run. I saw bank4buckPCgamer's YT gameplay videos, and his 7900XTX at 99% GPU usage ALWAYS draws 465W. My RTX 4080S draws between 260-315W depending on the game (at 99% GPU usage), so that's about a 150W difference. My entire PC draws 430W at max. 150W in use for 8 hours a day is 1.20 kWh, or 438 kWh per year. I have to pay 452 zł for that (109 USD at today's exchange rate). In my country, the 7900XTX would definitely end up costing more money in the long run; maybe not in the first year, because I'm not playing for 8 hours daily, but after 3 years that would be the case.
I bet you weren't making these kinds of calculations when AMD had the power-efficiency crown; you were just blindly buying Nvidia.
 

Crayon

Member
The games that require it on console are the most interesting thing about RT right now. There's so little RT capability in that hardware, and yet these games lean into it and seem to benefit.
 