

Rimworld’s DLCs are kinda assumed purchases for the modding scene, too. I feel like this drives a lot of their sales TBH.
This is why work/life balance is so important. I wouldn’t ever call myself “well-off” but I don’t have kids and my job allows me ample time off to play games and watch movies and shit.
Neither do they! They aren’t workaholics, they’re homebodies who work the least they can!
It’s just that the workplaces are shit. One went back to mandated RTO for no reason, even though much of the work is overseas at odd hours; the company’s literally trying to make employees miserable so they quit without severance. The other is work-from-home, but with enough pointless meetings and complete workplace dysfunction to eat all their energy.
And these seem like well above average jobs.
next Xbox
If it’s really a PC, I bet AMD customized Strix Halo (their 40 CU APU) for Microsoft instead of doing a fully custom chip like before.
It’d save them money (as custom chip tapeouts are 9 figures last I heard). I bet Microsoft couldn’t help themselves, heh.
+1 to literally everything.
Fuck brand recognition or loyalty, fuck development talent, fuck community building, fuck long-term strategy, we can realize a gain right now by sowing half the planet with salt, so that’s what we’re going to do. So what is there for people to buy?
I wish this would fit on a bumper sticker.
That noise you heard last week was Xbox’s death rattle. One out of the three mainstream home console platforms is an outright stupid idea to buy now.
And wasn’t Sony the one at the biggest risk of bowing out before? And then we got the Switch 2… It’s remarkable that Microsoft somehow made Xbox the least likely to survive.
Single data point: the young, working, well-off, gaming part of my family is just out of energy. It’s easier to watch a YouTube video than TV or a game before falling asleep, then wake up for work. Seems like much of their circle is similar.
As for myself, I’m going through a, uh, icky phase of life and am not really motivated to play unless it’s coop.
…Maybe others are struggling similarly?
Also, the games we do look at tend to be from indie to mid-size studios, with BG3 and KCD2 being the only recent exceptions.
That’s fascinating. I vaguely knew of the superstition angle, but not specifics or the extent.
There goes my afternoon, thanks.
But it does remind me of similar issues in other countries. China, for example (not to single them out), has issues with Eastern Medicine culture conflicting with scientific practice, right?
ChatGPT (last time I tried it) is extremely sycophantic, though. Its high default sampling temperature also leads to totally unexpected/random turns.
Google Gemini is now too.
And they log and use your dark thoughts.
I find that less sycophantic LLMs are way more helpful. Hence I bounce between Nemotron 49B and a few 24B-32B finetunes (or task vectors for Gemma).
…I guess what I’m saying is people should turn towards more specialized and “openly thinking” free tools, not something generic, corporate, and purposely overpleasing like ChatGPT or most default instruct tunes.
So, literally exactly what was promised. In excruciating detail.
It’s mind-boggling how relentlessly Trump’s policy gets spun positively. There’s so much “oh, what he really means is…” essay-writing to decipher it. No, his platform means what it says.
Then people are shocked when it happens!
As much as it distorted history, the Nazi regime still fancied itself secular and intellectual, right?
This one views the scientific establishment as a corrupt, distrusted obstacle. There’s not even the pretense. Demolishing “woke” science is the stated point.
TBH this is a huge factor.
I don’t use ChatGPT, much less use it like it’s a person, but I’m socially isolated at the moment. So I bounce dark internal thoughts off of locally run LLMs.
It’s kinda like looking into a mirror. As long as I know I’m talking to a tool, it’s helpful, sometimes insightful. It’s private. And I sure as shit can’t afford to pay a therapist out the wazoo for that.
That was one of my problems with therapy: payment depending on someone else, at preset times (not when I need it). Many sessions feel like they end when I’m barely scratching the surface. Yes, therapy is great in general and for deeper feedback/guidance, but still.
To be clear, I don’t think this is a good solution in general. Tinkering with LLMs is part of my living, I understand the gist of how they work, and I tend to use raw completion syntax or even base pretrains.
But most people anthropomorphize them because that’s how chat apps are presented. That’s problematic.
Oh actually that’s a great card for LLM serving!
Build the llama.cpp server from source; it has better support for Pascal cards than anything else:
https://github.com/ggml-org/llama.cpp/blob/master/docs/multimodal.md
Gemma 3 is a hair too big (like 17-18GB), so I’d start with InternVL 14B at Q5_K_XL: https://huggingface.co/unsloth/InternVL3-14B-Instruct-GGUF
Or Mistral Small 3.2 24B at IQ4_XS for more ‘text’ intelligence than vision: https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF
I’m a bit ‘behind’ on the vision model scene, so I can look around more if they don’t feel sufficient, or walk you through setting up the llama.cpp server. Basically it provides an endpoint which you can hit with the same API as ChatGPT.
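If it helps, here’s roughly what talking to it looks like once the server is up (something like `llama-server -m model.gguf --mmproj mmproj.gguf`, per the multimodal doc above). A minimal sketch, assuming the default port 8080 and a vision build; the filename and prompt are just placeholders:

```python
# Minimal sketch: hit a running llama-server via its OpenAI-compatible
# chat endpoint, with an image attached as a base64 data URL.
# Assumes the default 127.0.0.1:8080 and that an mmproj (vision) is loaded.
import base64
import requests

def describe_image(path: str, prompt: str = "Describe this image.") -> str:
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",
        json={
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
            "max_tokens": 512,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(describe_image("photo.jpg"))  # placeholder filename
```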
Is this an ADHD meme?
I’m afraid it might be, ’cause I have a trail of “one giant playlist”-style playlists and songs on repeat.
I was testing heavily modded Minecraft, specifically Enigmatica, which chugs even on beefy PCs.
Out of curiosity, what mod are you running for shaders, specifically? That may have an effect.
Not in niche games. Rimworld and Stellaris (for instance) are dramatically faster on Windows, hence I keep a partition around. I’m talking 40%ish better simulation speeds vs Linux native (and still a hit with Proton, though much less).
Minecraft and Starsector, on the other hand, freaking love Linux. They’re dramatically faster.
These are kinda extreme scenarios, but the point is AAA benchmarks don’t necessarily apply to the spectrum of games across hardware, especially once you start looking at simulation heavy ones.
1650
You mean GPU? Yeah, it’s good. I was strictly talking about purchasing a laptop for LLM usage, as most are less than ideal for the money: laptop VRAM pools are relatively small, and SO-DIMMs are usually very slow.
Things will get much better once the “Max” AMD SKUs proliferate.
Yeah, just paying for LLM APIs is dirt cheap, and they (supposedly) don’t scrape data. Again, I’d recommend OpenRouter and Cerebras! And you get your pick of models to try from them.
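For reference, both expose OpenAI-compatible endpoints, so switching providers is basically a base_url swap. A minimal sketch with the `openai` Python client; the key and model slug below are placeholders, check each provider’s model list for real names:

```python
# Minimal sketch: the same chat code works for OpenRouter or Cerebras,
# since both expose OpenAI-compatible APIs. Key and model slug are
# placeholders, not real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # or https://api.cerebras.ai/v1
    api_key="sk-or-...",                      # your provider API key
)

resp = client.chat.completions.create(
    model="qwen/qwen3-14b",  # example slug; browse the provider's model list
    messages=[{"role": "user", "content": "What quant should I run on 8GB of VRAM?"}],
)
print(resp.choices[0].message.content)
```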
Even a Framework 16 is not good for LLMs TBH. The Framework Desktop is (as it uses a special AMD chip), but it’s very expensive. Honestly the whole hardware market is so screwed up, hence most ‘local LLM enthusiasts’ buy a used RTX 3090 and stick it in a desktop or server, as no one wants to produce something affordable, apparently :/
I was a bit mistaken; these are the models you should consider:
https://huggingface.co/mlx-community/Qwen3-4B-4bit-DWQ
https://huggingface.co/AnteriorAI/gemma-3-4b-it-qat-q4_0-gguf
https://huggingface.co/unsloth/Jan-nano-GGUF (specifically the UD-Q4 or UD-Q5 file)
They are state-of-the-art at this size, as far as I know.
8GB?
You might be able to run Qwen3 4B: https://huggingface.co/mlx-community/Qwen3-4B-4bit-DWQ/tree/main
But honestly you don’t have enough RAM to spare, and even a small model might bog things down. I’d run Open Web UI or LM Studio with a free LLM API, like Gemini Flash, or pay a few bucks for something off OpenRouter. Or maybe the Cerebras API.
…Unfortunately, LLMs are very RAM intensive, and the ~4GB you could spare (more realistically like 2GB) is not going to be a good experience :(
Actually, to go ahead and answer: the “fastest” path would be LM Studio (which supports MLX quants natively and is not time-intensive to install) and a DWQ quantization (which is a newer, higher-quality variant of MLX models).
Hopefully one of these models, depending on how much RAM you have:
https://huggingface.co/mlx-community/Qwen3-14B-4bit-DWQ-053125
https://huggingface.co/mlx-community/Magistral-Small-2506-4bit-DWQ
https://huggingface.co/mlx-community/Qwen3-30B-A3B-4bit-DWQ-0508
https://huggingface.co/mlx-community/GLM-4-32B-0414-4bit-DWQ
With a bit more time invested, you could try to set up Open Web UI as an alternative interface (which has its own built-in web search, like Gemini): https://openwebui.com/
And then use LM Studio (or some other MLX backend, or even free online API models) as the ‘engine’.
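As a sanity check that the ‘engine’ is reachable: LM Studio’s local server speaks the OpenAI API too (default port 1234, last I checked), so Open Web UI can point at the same base URL. A minimal sketch, with the model name left loose since LM Studio routes to whatever’s loaded:

```python
# Minimal sketch: poke LM Studio's local OpenAI-compatible server.
# Assumes the default port (1234); the api_key is a dummy, since LM Studio
# doesn't check it, but the openai client requires something non-empty.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# List whatever models LM Studio currently exposes.
for model in client.models.list().data:
    print(model.id)

# Quick chat round-trip through the same endpoint Open Web UI would use.
resp = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio routes to the loaded model
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(resp.choices[0].message.content)
```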
Alternatively, especially if you have a small RAM pool, Gemma 12B QAT Q4_0 is quite good, and you can run it with LM Studio or anything else that supports a GGUF. Not sure about 12B-ish thinking models off the top of my head, I’d have to look around.
…They don’t use it over API?