

In six months, the market will be absolutely flooded with cheap barely-used RAM.


We have. Every day. Whether we want to or not.
It feels like that fucking asshole has been raping our minds for years.
A software developer found out that the failing company they’re at, which was winding down for business reasons, decided to try becoming a zombie company by replacing its software stack (and employees) with a vibe-coded SaaS piece of garbage that’s broken in dozens of ways.


There is exactly one situation in which it sort of makes sense: Copilot integrated with VS Code, running repo-specific commands like yarn build, with direct human oversight.
That’s it. That’s the only situation.


You do know you can use AD with Linux, don’t you?


The thing is, it really won’t. The context window isn’t large enough, especially for a decently sized application, and that seems to be a fundamental limitation. Make the context window too large, and the LLM goes badly off track, because there’s too much in it pulling its attention in different directions.
And LLMs don’t remember anything. The next time you interact with one and shove the whole codebase into its context window again, it won’t know what it did before, even if the last session was ten minutes ago. That’s why they so frequently create bloat.
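To put rough numbers on the context-window point: a common rule of thumb is about 4 characters per token, and the 128k-token window below is just an assumed stand-in figure, not any particular model’s real limit. A quick sketch of why even a mid-sized codebase blows past it:

```python
# Back-of-envelope check: does a whole codebase fit in one context window?
# ASSUMPTIONS: ~4 chars/token is a crude heuristic, and 128k tokens is a
# stand-in window size, not any specific model's actual limit.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 128_000

def estimated_tokens(total_chars: int) -> int:
    """Crude token estimate from a raw character count."""
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(total_chars: int) -> bool:
    return estimated_tokens(total_chars) <= CONTEXT_WINDOW

# A modest app: 50 source files averaging ~20 KB each.
print(fits_in_context(50 * 20_000))  # False: ~250k tokens, roughly 2x the window
```

And that’s before the prompt, the conversation history, and the model’s own output start eating into the same window.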


Well, if I’m not, then neither is an LLM.
But for most projects built with modern tooling, the documentation is fine, and they mostly have simple CLIs for scaffolding a new application.


Yeah, I have never spent “days” setting anything up. Anyone who spends “days” struggling with it isn’t reading the documentation.


Sadly, there are some who don’t even know it, because they’re buying services from someone who buys them from someone else who buys them from Amazon. So they’re currently wondering what the fuck is even going on, since they thought they weren’t using AWS.


I’m a software developer and my company is piloting the use of LLMs via Copilot right now. All of them suck to varying degrees, but the consensus is that GPT-5 is the worst of them. (To be fair, no one has tested Grok, but that’s because no one in the company wants to.)


On top of that, there’s so much AI slop all over the internet now that the training for their models is going to get worse, not better.


They’ll ask their parents, or look up cooking instructions on actual websites.


Venture capital drying up.
Here’s the thing… No LLM provider is turning a profit. None of them. Not OpenAI. Not Anthropic. Not even Google (they’re profitable in other areas, obviously). OpenAI optimistically believes it might start being profitable in 2029.
What’s keeping them afloat? Venture capital. And what happens when those investors decide to stop throwing good money after bad?
BOOM.
I… write code. It does stuff. Usually the wrong stuff, until I’ve iterated over it a few times and gotten it to do the right stuff. I don’t “click around in a GUI.” If a tutorial is making you do that, it’s a bad tutorial.


My pleasure! And if you’re the GM, remember to keep track of each character’s trouble. It’s basically a built-in way to make everything personal for the characters, as well as a mechanic for offering them extra fate points in return for invoking the trouble.
My favorite example is this: Imagine you’ve got Indiana Jones as a player character in your game. His trouble would be, “Snakes… Why’d it have to be snakes?” He gets a fate point when you invoke it (if he accepts), but in return, it guarantees he’s going to fall into a pit of snakes. Instant drama!


Yep, it’s a rule in the sidebar. But this one is so goddamn bonkers I’m making an exception.


The Daily Mail is by no means a credible source. Nevertheless, I’m going to let this one stand because it technically is a newspaper, and that’s the actual fucking headline, God help us.


There are tricks to getting better output from it, especially if you’re using Copilot in VS Code and your employer is paying for access to models, but it’s still asking for trouble if you’re not extremely careful, extremely detailed, and extremely precise with your prompts.
And even then it absolutely will fuck up. If it actually succeeds at building something that technically works, you’ll spend considerable time afterwards going through its output and removing unnecessary crap it added, fixing duplications, securing insecure garbage, removing mocks (God… So many fucking mocks), and so on.
I think a lot about what my employer is spending on it. It can’t possibly be worth it.


Yeah, code bloat with LLMs is fucking monstrous. If you use them, get used to immediately scouring your code for duplications.
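A crude sketch of what that scouring can look like, assuming all you want is to flag repeated runs of identical lines (`find_duplicate_blocks` and the 5-line window are names and parameters I made up for illustration; real clone detectors match on tokens, not raw text):

```python
import hashlib
from collections import defaultdict

def find_duplicate_blocks(source: str, window: int = 5) -> list[list[int]]:
    """Flag repeated runs of `window` consecutive non-blank lines.

    Returns groups of starting indices (into the stripped line list)
    that share an identical block of lines. A toy sketch, not a real
    clone detector.
    """
    lines = [line.strip() for line in source.splitlines() if line.strip()]
    seen: defaultdict[str, list[int]] = defaultdict(list)
    for i in range(len(lines) - window + 1):
        block = "\n".join(lines[i:i + window])
        digest = hashlib.sha1(block.encode()).hexdigest()
        seen[digest].append(i)
    # Keep only blocks that occur more than once.
    return [starts for starts in seen.values() if len(starts) > 1]
```

Anything it flags is a candidate for extracting into a shared function. A real tool like jscpd or PMD’s CPD does the same job far more robustly.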
True. So in six months, the market will be flooded with cheap, barely-used “AI” server hardware no one wants, and RAM for PCs will still be stupid expensive, because we live in the stupidest timeline.