Single threaded performance was the only reason to go Intel.
Maybe this will push more game developers to develop games that use multiple cores? I know nothing about game development.
That has been happening for the last decade, but it’s really hard.
Most AAA game studios target consoles first. Their in-house or external porting teams will then adapt the game for Windows, but by then major engine decisions will likely have already been made in service of supporting the Ryzen/RDNA based Xbox Series and PS5 consoles. Smaller studios might try to target all systems at once, but they aim for the least common denominator (Vulkan, low hardware requirements). The Switch is a bit of its own beast when trying to get high performance graphics.
Multi-threading is mostly used for graphics, sound, and animation tasks, while game logic and scripting are almost always single-threaded.
I bought a Ryzen 3950X: 16 cores, 32 threads.
The first thing I noticed is that some AAA games only utilize 8 cores. Once you've gone multi-threaded, it's just a matter of adding more threads, and the thread count can be selected dynamically based on the host hardware. AAA game studios are going the bad-practice route.
I'd understand if they ported an algorithm optimized to run on specific hardware as-is. But a thread count?
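To illustrate the point about sizing the worker count from the host instead of hard-coding it: here's a minimal sketch in Python (the `pool_size` and `work` names are purely illustrative, not from any actual engine).

```python
import os
from concurrent.futures import ThreadPoolExecutor

def pool_size() -> int:
    # Derive the worker count from the host instead of hard-coding 8.
    return os.cpu_count() or 1

def work(item: int) -> int:
    return item * item  # stand-in for a per-item job

with ThreadPoolExecutor(max_workers=pool_size()) as pool:
    results = list(pool.map(work, range(16)))
```

A 16-core box gets 16 (or more, with SMT) workers, an 8-core box gets 8, and no constant needs to change.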
There is only so much that can be multi-threaded; beyond that, the overhead just slows things down (and can cause bugs).
More simulation-type games (Cities: Skylines, etc.) can generally multithread more, while your standard shooter has much less that it can parallelize (unless you have AI bots, etc.).
Plus it only takes one unthreadable task to bottleneck the whole thing anyway.
My point here is the developer managed to split the load evenly between 8 threads. How come they cannot do it for 16?
The key word, evenly, means all 8 threads are at 100% while the other 8 threads are at 1-2%.
You’d need to look at the actual implementation, it’s hard to speculate from a tiny amount of data. What game are you referencing?
And as someone who has done multi-threaded programming, I can tell you that for games it is unlikely that they can just add more cores. You need work that truly can be split up, meaning each core needs work to do that doesn't rely on the results from another core.
Graphics rendering is easy for this, and it's why GPUs have a crazy number of cores. But you aren't going to do graphics compute on the CPU.
That was a long time ago. I believe the game was BF1.
I know it's hard to speculate, but 100% CPU usage for a solid 5~7 seconds on exactly 8 cores can't be separate (single-threaded) workloads. A spike would be understandable, though.
The gameplay wasn't impacted, to be honest.
For that number to be 8 though suggests that there’s just a “number of workers” variable hard-coded somewhere.
it’s a matter of adding more threads
You can’t ask 300 people to build a chair, and expect the chair to be finished 300x faster than if a single person would build it.
Also, to make it more accurate to what multi-threading does, none of those 300 people can see what the others are doing. And the most reliable ways of sending messages to each other involve taking a nap (though it might be brief, you might wake up in an entirely different body and need to fetch your working memory from your old body or worse, from RAM).
Or you can repeatedly write your message until you can be sure that no one else wrote over it since you started writing it. And the more threads you have, the more likely another one wrote over your message to the point where all threads are spending all of their time trying to coordinate and no time working.
Blocking collections?
I’m not familiar with their implementation but they’ll likely have one of those mechanisms under the hood.
You can only avoid them in very simple cases that don’t really scale up to a large number of threads in most cases. The one exception that does scale well is large amounts of data that can be processed independently of the rest of the data and the results are also independent. 3D rendering is one example, though some effects can create dependencies.
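The "large amounts of independent data" exception described above is easy to show: each item can be processed without touching any other item's input or output. A tiny sketch in Python (the `shade` function is a made-up stand-in, not real rendering code):

```python
from concurrent.futures import ThreadPoolExecutor

def shade(x: int) -> int:
    # Stand-in for an expensive, fully independent per-pixel computation.
    return (x * 31) % 255

pixels = range(1024)
with ThreadPoolExecutor() as pool:
    # No task reads another task's result, so the work splits cleanly
    # across however many workers are available.
    framebuffer = list(pool.map(shade, pixels))
```

The moment one "pixel" needs its neighbor's result (like the dependent effects mentioned above), this clean split breaks down and coordination costs appear.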
So 8 cores is doable but 16 isn't?
Ah, if only it were that simple. One can dream. The CPU is just one component in the system.
I wish all the computer-parts companies would only release new products when they are definitively better, rather than making them on a schedule no matter what. I don't want to buy this year's 1080p gaming CPU-and-GPU combo for more than I spent on the last one with the same capabilities; I want the next series of the same part to be capable of more, damn it.
Inflation has entered the chat
Every *flation seems to exist solely to make me sad and miserable…
Lifeboat/Life jacket inflation is pretty much always good. Airbags cause harm going off early.
Then for deflation, a person's ego can be deflated for good reasons, maybe. Unless you inflated it while still onboard the sinking aircraft.
That’s what happens when some in society are able to “print” as much money as they damn well please and the rest of us have to work for it …
Think of the quarterly profits, won’t someone please think of the shareholders?!?
/s
deleted by creator
The article mentions the results are probably because of Intel’s focus on AI, but it’s more likely that this was because of Intel’s focus on making their chips use less power. Laptops with the new generation have a significantly better battery life.
Wasn't Intel the one that raised the bar on TDP for laptop CPUs in the first place, so they could win in CPU benchmarks?
How’s the performance per watt?
Oh wait. Nevermind, Intel sucks anyway. If it’s not performance issues, it’s hardware exploits. Not to mention Intel’s support for genocide in Gaza.
🇮🇱
Removed by mod
🇮🇱
Remember guys: killing jews - normal, killing terrorists - genocide.
Why does it have to be one or the other? Killing Jews = bad. Killing innocent Palestinians = bad.
Why did you only add the innocent qualifier to the latter part?
To differentiate between civilians and Hamas combatants to hopefully avoid “whataboutisms” from people who try to pick apart any innocuous statement on the Internet.
Hamas combatants were, not so long ago, normal people: pushed out of their homes by foreign settlers, who then had their entire families turned to glass for peacefully protesting. This is not a "both sides have valid arguments" matter.
70% of the murdered are women and children, and that's if you assume all Palestinian men are terrorists with no right to defend themselves or exist on their land.
“All Natives Resist Colonialists” – Zeev Jabotinsky
There’re entire farms of hungry animals with that strawman argument. No one is saying that.
On a technical level, it’s hard to say why Meteor Lake has regressed in this test, but the CPU’s performance characteristics elsewhere imply that Intel simply might not have cared as much about IPC. Meteor Lake is primarily designed to excel in AI applications and comes with the company’s most powerful integrated graphics yet. It also features Foveros technology and multiple tiles manufactured on different processes. So while Intel doesn’t beat AMD or Apple with Meteor Lake in IPC measurements, there’s a lot more going on under the hood.
comes with the company’s most powerful integrated graphics yet.
Not a particularly high bar there…
I wonder if these have increased RAM latency due to the chiplet design. These are the first mobile chiplets I've seen, aside from desktop replacements using AM4/AM5 Ryzens.
Hopefully AnandTech will have a more detailed look whenever they get their hands on a sample.
Intel is making the transition to ARM -and eventually RISC-V- inevitable.
Legacy compatibility has always had a cost; I guess it's finally meaningfully showing up.
That’s silly. But I’m pretty sure AMD is pretty happy with the situation.
At how many watts?
Only a TJ’s worth.
https://www.notebookcheck.net/Intel-Core-Ultra-7-155H-Processor-Benchmarks-and-Specs.783323.0.html
Has various tests and results. Looks like the TDP is 23 watts and the range during tests is 30-77 watts, with one reading at 90, but since that one was taken at idle, I don't know what to make of it.
Say it with me: For the shareholders!
ITT: non devs that think multithreading is still difficult.
It's become so trivial in many frameworks/languages nowadays that it's actually starting to shift towards single-threading being something you have to do intentionally.
Everything is async-first by default, and you have to go out of your way to unparallelize it.
It's been a while since I've seen anything mainstream that seriously cared about single-thread performance enough to make it the most important benchmark.
I care about TDP way more. Your single-thread performance doesn't mean shit if your CPU starts to thermal throttle.
Async features in almost all popular languages are a single thread running an event loop (Go being an exception there I believe). Multi threading is still quite difficult to get right if the task isn’t trivially parallelizable.
Exactly.
Also every time I’ve used async stuff, I’ve pined for proper threads. Continuation spaghetti isn’t my bag.
Which language? Usually there's a thread pool where multiple tasks run in parallel. CPython is a special case due to the GIL, but we have PyPy, which has actual parallelism.
I've only ever used it on those Lua microcontrollers and in Rust with the async keyword.
In Lua I doubt they use proper threading, due to the GIL. Rust can probably do async with threads, but it just wasn't fun to work with.
Tokio has support for multi-threaded async in Rust. As for microcontrollers, I don't think you can have multiple threads in flight anyway, so that's the best you'll get.
Wait, wat? Looking at first sentence. Also async != multi threading.
My goto for easy multi threading is lock free queues. Generate work on one thread and queue it up for another thread to process. Easy message passing and stuff like that. It doesn’t solve everything but it can do a lot if you are creative with them. As long as you maintain a single thread ownership of memory and just pass it around the threads via message passing on queues, everything just sorta works out.
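The ownership-passing pattern described above can be sketched in Python. One caveat: the stdlib `queue.Queue` is lock-based rather than lock-free, but the single-ownership hand-off idea is the same — each item belongs to exactly one thread at a time, so no extra locking is needed around the work itself.

```python
import queue
import threading

work = queue.Queue()   # producer -> worker
done = queue.Queue()   # worker -> consumer

def worker() -> None:
    while True:
        item = work.get()
        if item is None:   # sentinel: producer is finished
            break
        # This thread is the sole owner of `item` right now, so it can
        # process it without touching any shared mutable state.
        done.put(item * 2)

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    work.put(i)        # hand ownership of each item to the worker
work.put(None)
t.join()

results = sorted(done.get() for _ in range(5))
```

Swap in an actual lock-free SPSC/MPSC queue and the structure of the program doesn't change, only the queue implementation does.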
Don’t use goto.
A lot of languages have an async/await facade for tasks run on a background thread (C#, Clojure, Python, etc.), but it's certainly not the default anywhere, and in Go, most goroutines/other CSP implementations are probably going to be yielding for some IO most of the time at the bottom anyway.
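Python's version of that facade is `asyncio.to_thread` (3.9+): `await` syntax on top of a call that actually runs on a worker thread from the loop's default executor. A quick check that the awaited call really leaves the loop thread:

```python
import asyncio
import threading

def blocking_io() -> int:
    # Runs on a worker thread from the event loop's default executor.
    return threading.get_ident()

async def main():
    loop_thread = threading.get_ident()
    worker_thread = await asyncio.to_thread(blocking_io)
    return loop_thread, worker_thread

loop_thread, worker_thread = asyncio.run(main())
# The two thread ids differ: async syntax, but real background-thread work.
```

This is exactly the "facade" point: the calling code looks like ordinary async, while the blocking function escapes the event loop.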
Yes I’m mostly familiar with this in Kotlin. Sometimes this is kinda a footgun because you’re writing multi threaded code without explicitly doing so.
Concurrent is not the same as parallel.
But concurrent execution is multithreaded. “unparallelize” is the only misnomer in the comment you replied to. Asynchronous execution is not necessarily concurrent, but often is.
However, a high TDP does not inherently mean that thermal throttling will occur, and there are countless everyday processes that are inherently sequential (“single-threaded”), so I still disagree with the comment on most counts.
I’m a software engineer. And yes multithreading is difficult, just slapping on async isn’t necessarily going to help you run code in parallel
Think about the workload a game is running: you have to do most calculations on a frame-by-frame basis, and you tend to want effects to apply in order. So you have a hard time running in parallel, as the state for frame 1 needs to be calculated before frame 2. And within frame 1, any number of scripts can rely on the results of another, so you can't just throw threads at the problem. You can do some things like the sound system, but beyond that it's not trivial.
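The frame-dependency point can be shown in a few lines. The toy `update` function below is made up, but the shape is the real constraint: frame N's input is frame N-1's output, so the loop itself is inherently serial no matter how many cores you have.

```python
def update(state, dt):
    # Toy per-frame physics: position integrates velocity, velocity decays.
    x, v = state
    return (x + v * dt, v * 0.9)

state = (0.0, 1.0)
for frame in range(3):
    # Each iteration consumes the previous iteration's output, so these
    # steps cannot be reordered or run concurrently; only the work
    # *within* a single frame is a candidate for threading.
    state = update(state, dt=1.0)
```

You can parallelize inside a frame (particles, audio mixing, culling), but the frame-to-frame chain stays a single-threaded critical path, which is why core counts beyond a point stop helping.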
Many professional apps. SolidWorks comes to mind.
as a dev, seeing you conflate “async” with “multithreaded” is painful.
And what you’re saying is just not true anyway.