The 5800X3D has the same core architecture as the 5800X but it runs at 11% lower base and 4% lower boost clocks. The lower clocks are in exchange for an extra 64MB of cache (96MB up from 32MB) and around 40% more money. For most real-world tasks performance is comparable to the 5800X. Cache sensitive scenarios such as low res. canned game benchmarks with a 3090-Ti ($2,000 USD) benefit at the cost of everything else. Be wary of sponsored reviews with cherry picked games that showcase the wins, conveniently ignore frame drops and gloss over the losses. Also watch out for AMD’s army of Neanderthal social media accounts on reddit, forums and youtube, they will be singing their own praises as usual. Instead of focusing on real-world performance, AMD’s marketers aim to dupe consumers with bankrolled headlines. The same tactics were used with the Radeon 5000 series GPUs. Zen 4 needs to bring substantial IPC improvements for all workloads, rather than overpriced “3D” marketing gimmicks. New PC builders have little reason to look further than the $260 12600K which, at a fraction of the price, offers better all round performance in gaming, desktop and workstation applications. Users with an existing AM4 build should wait just a few more months for better performance at lower prices with Raptor Lake or even Zen 4. The marketers selling expensive “3D” upgrades today will quickly move onto Zen 4 (3D) leaving unfortunate buyers stuck on an overpriced, 6 year old, dead-end, platform. [Mar '22 CPUPro]
Hopefully this hurts them to the point where they go out of business. Just look at their review of the 5800X3D; it’s so unreal.
Jesus
What’s scary is that I think the owner of userbenchmark actually believes that statement. Which might explain how he’s so out of touch that he thinks his own crap doesn’t stink and deserves to be locked behind a subscription. I’m just sad that there might be a not insignificant number of people that pay for it.
I’m certain he must’ve lost a lot of money betting against AMD on the stock market right around the time of Zen 1, and he never got over it.
The real Neanderthal social media account is the one writing that review.
Instruction and data caches have a real, tangible benefit. Although there is a point of diminishing returns, more L3 cache is absolutely worth a 10% clock-speed trade-off for consumer systems. Fetching from main memory is an order of magnitude slower than fetching from cache, and the processor has to find other work to do, or stall, while it waits for the data.
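If anyone wants to see that latency cliff first-hand, here’s a rough pointer-chase sketch in C. The 4 MB and 256 MB working-set sizes are arbitrary picks meant to land on either side of a big L3, and the numbers it prints will vary wildly by machine; nothing here comes from the review.

```c
/* Minimal pointer-chase sketch (an illustration, not anyone's benchmark):
 * times a chain of dependent loads over a small working set that should
 * fit in a large L3, then a big one that mostly hits DRAM.
 * Build: cc -O2 chase.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t sink; /* consumes the result so the loop survives -O2 */

static double ns_per_load(size_t n_elems, size_t hops) {
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) { perror("malloc"); exit(1); }
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    /* Sattolo's shuffle: produces one full-length cycle, so the chase
     * visits every slot in a cache-hostile random order. rand() is a
     * crude randomness source but fine for a demo. */
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }
    size_t p = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t h = 0; h < hops; h++) p = next[p]; /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = p;
    free(next);
    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)hops;
}

int main(void) {
    srand(1);
    printf("4 MB set:   %.1f ns/load\n",
           ns_per_load((4u << 20) / sizeof(size_t), 1u << 24));
    printf("256 MB set: %.1f ns/load\n",
           ns_per_load((256u << 20) / sizeof(size_t), 1u << 24));
    return 0;
}
```

On a typical desktop the big set lands somewhere around an order of magnitude slower per load, which is the whole argument for trading a bit of clock for a lot of L3.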
But, knowing the bias of the reviewer, they’re probably running DDR4 at 5200 MT/s (2000 over JEDEC specs) on their Intel systems to make up for the lack of cache while thinking, “just buy a more expensive processor and RAM, you brain-dead cretins.”
I mean it’s kinda amazing that there’s someone looking at a 14th gen Intel CPU sucking back 200+ watts, while it gets spanked by a 7800X3D running at 65 watts, and thinking “AMD is hurting consumers”. That’s some next level shit.
Well said. The only thing hurting consumers is the reviewers omitting information or spreading misinformation.
Ok so I am about to build a new rig, and looking at the specs the X3D does seem less powerful and more expensive than the regular 7950X.
While I completely agree that this guy seems extremely biased and that he comes off like an absolute dickbag, I don’t think the essence of his take is too far off base if you strip off the layers of spite.
Really, it seems like the tangible benefit of the X3D that most people will realize is that it offers similar performance with lower energy consumption, and thus lower cooling requirements. Benchmarks from various sources seem to bear this out as well.
It seems like a chip that in general performs on par with the 7950x but with better efficiency, and if you have a specific workload that can benefit from the extra cache it might show a significant improvement. Higher end processors these days already have a fuckton of cache so it isn’t surprising to me that this doesn’t benchmark much better than the cheaper 7950x.
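To put rough illustrative numbers on that (all assumed for the example, measured from nothing): say an L3 hit costs ~12 ns and a DRAM access ~80 ns. A workload whose hot data already fits in the smaller chip’s cache misses maybe 1% of the time on both parts, so the average access is 0.99 × 12 + 0.01 × 80 ≈ 12.7 ns either way; the extra cache buys nothing and any clock deficit shows. A workload whose working set fits in the X3D’s cache but not the vanilla one’s might instead drop from a 30% miss rate to 2%, taking the average from 0.7 × 12 + 0.3 × 80 = 32.4 ns down to ≈ 13.4 ns, a ~2.4x cut in effective memory latency that dwarfs a single-digit clock difference. Same chip, and whether the cache “wins” depends entirely on the workload.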
Why are you talking about the 7950? The review is about the 5800X3D; when it released, AM5 and Ryzen 7000 chips weren’t out yet.
Seems a bit silly to say the (launch) review is right and then use a piece of hardware that didn’t exist at the time as proof.
How about you compare the 5800X3D to a 5800X and a 5900X instead?
I was comparing the 7950X and the 7950X3D because those are the iterations available right now, and they’re what I’ve personally been comparing, as I mentioned. I apologize if I wasn’t clear on that point.
My point was that the essence of the take, which I read to be, “CPUs with lower clocks but way more cache only offer major advantages in specific situations” is not particularly off base.
I still fail to see how bringing in an AM5 chip is in any way, shape, or form a good addition to a discussion of an objectively terrible review of a late addition to the AM4 product family. What you say might be true… for AM5. Which is not the subject of the review everyone is talking about. Nor is anybody except you talking about what X3D currently offers; we’re all talking about a review that, at the time it was written, was horribly researched, full of bias, and riddled with false claims.
You coming in and suddenly talking about the 7950X/X3D adds nothing of value to the topic at hand, because the topic at hand isn’t “Is X3D worth it?”, it’s specifically “look at how badly Userbenchmark twisted this 5800X3D review”.
So sorry to interrupt your circlejerk about this guy’s opinion on 3D V-Cache technology with a tangentially related discussion about 3D V-Cache technology here on the technology community.
I fully understand the point you’re trying to make here, but just as you think my comments added nothing to the discussion, your replies to them added even less.
The only reason I can think of for a site to do this is that they were about to go under already. This will absolutely tank them, as there are free alternatives.
Wat
Fellow AMD Neanderthal Army soldiers: any idea when I get my cool uniform and …paycheck?
Uhh… Aren’t… Aren’t these two statements kinda contradictory?
Not if you remember that the writers are being paid by Intel. Then, it all comes together.
No no, you see; it performs reasonably consistently under varying real-world conditions, but for a CPU to truly shine it needs to handle all workloads, including unrealistic synthetic ones.
You’re expecting rational thought from someone who just made crazy statements because their feelings are hurt.