We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
Isn’t everyone just sick of his bullshit though?
US taxpayers clearly aren’t, since they’re subsidising his drug habit.
If we had direct control over how our tax dollars were spent, things would be different pretty fast. Might not be better, but different.
At this point a significant part of the country would decide to airstrike US primary schools to stop wasting money and indoctrinating kids.
More guns?
adding missing information and deleting errors
Which is to say, “I’m sick of Grok accurately portraying me as an evil dipshit, so I’m going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings.”
That is definitely how I read it.
History can’t just be ‘rewritten’ by A.I. and taken as truth. That’s fucking stupid.
It’s truth in Whitemanistan though
The plan to “rewrite the entire corpus of human knowledge” with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can’t create genuinely new knowledge or identify “missing information” that wasn’t already in their training data.
What he means is correct the model so all it models is racism and far-right nonsense.
Remember the “white genocide in South Africa” nonsense? That kind of rewriting of history.
It’s not the LLM doing that though. It’s the people feeding it information
Try rereading the whole tweet, it’s not very long. It’s specifically saying that they plan to “correct” the dataset using Grok, then retrain with that dataset.
It would be way too expensive to go through it by hand
Literally what Elon is talking about doing…
Yes.
He wants to prompt grok to rewrite history according to his worldview, then retrain the model on that output.
But Grok 3.5/4 has Advanced Reasoning
To be fair, your brain is a pattern-matching system.
When you catch a ball, you’re not doing the physics calculations in your head; you’re making predictions based on an enormous quantity of input. Unless you’re being very deliberate, you’re not thinking before you speak every word: your brain’s predictive processing takes over and you often literally speak before you think.
Fuck LLMs, but I think it’s a bit wild to dismiss the power of a sufficiently advanced pattern-matching system.
I said literally this in my reply, and the lemmy hivemind downvoted me. Beware of sharing information here I guess.
Generally, yes. However, there have been some incredible (borderline “magic”) emergent generalization capabilities that I don’t think anyone was expecting.
Modern AI is more than just “pattern matching” at this point. Yes, at the lowest levels that’s what it’s doing, but then you could also say human brains are just pattern matching at that same low level.
“If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!”
~Fucking Dumbass
1.68 IQ move
More like 0.7056 IQ move.
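The 0.7056 checks out: chain two independent stages that are each right 84% of the time and the best you can hope for is the product, not the sum. A one-liner to verify:

```python
# Two independent 84%-accurate passes compound multiplicatively.
acc = 0.84
print(round(acc * acc, 4))  # -> 0.7056
```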
[My] translation: “I want to rewrite history to what I want”.
That was my first impression, but then it shifted into “I want my AI to be the shittiest of them all”.
Why not both?
adding missing information
Did you mean: hallucinate on purpose?
Wasn’t he going to lay off the ketamine for a while?
Edit: … i hadn’t seen the More Context and now i need a fucking beer or twenty fffffffffu-
He means rewrite every narrative to his liking, like the benevolent god-sage he thinks he is.
Let’s not beat around the bush here, he wants it to spout fascist propaganda.
Yeah, let’s take a technology already known for filling in gaps with invented nonsense and use that as our new training paradigm.
He’s been frustrated by the fact that he can’t make Wikipedia ‘tell the truth’ for years. This will be his attempt to replace it.
There are thousands of backups of wikipedia, and you can download the entire thing legally, for free.
He’ll never be rid of it.
Wikipedia may even outlive humanity, ever so slightly.
Wikipedia may even outlive humanity, ever so slightly.
Seconds after the last human being dies, the Wikipedia page is updated to read:
Humans (Homo sapiens) or modern humans were the most common and widespread species of primate
And then 30 seconds after that it’ll get reverted because the edit contains primary sources.
Elon Musk, like most pseudo-intellectuals, has a very shallow understanding of things. Human knowledge is full of holes, and they cannot simply be resolved through logic, as Musk the dweeb imagines.
I elaborated below, but basically Musk has no idea WTF he’s talking about.
If I had his “f you” money, I’d at least try a diffusion or bitnet model (and open the weights for others to improve on), and probably 100 other papers I consider low hanging fruit, before this absolutely dumb boomer take.
He’s such an idiot know-it-all. It’s so painful whenever he ventures into a field you sorta know.
But he might just be shouting nonsense on Twitter while X employees actually do something different. Because if they take his orders verbatim they’re going to get crap models, even with all the stupid brute force they have.
“Deleting Errors” should sound alarm bells in your head.
And “adding missing information” doesn’t? Isn’t that just saying they’re going to make shit up?
Whatever. The next generation will have to learn to judge whether material is true by checking sources like Wikipedia or books by well-regarded authors.
The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
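A cartoon version of “pick the next word(s) based on context” fits in a few lines: count which word follows which in a corpus, then sample the next word from those counts. Real LLMs use neural networks over long contexts, and the toy corpus here is made up, but the basic mechanic is the same:

```python
# Minimal next-word model: count word-follows-word statistics in a
# tiny made-up corpus, then sample the next word from those counts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

random.seed(1)
print([next_word("the") for _ in range(5)])  # some mix of cat/mat/fish
```

There are no facts anywhere in `follows`, only co-occurrence counts, which is the commenter’s point in miniature.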
Thinking Wikipedia or other unbiased sources will still be available in a decade or so is wishful thinking. Once the digital stranglehold kicks in, it’ll be mandatory sign-in with a gov-vetted identity provider and your sources will be limited to what that gov allows you to see. MMW.
Wikipedia is quite resilient - you can even put it on a USB drive. As long as you have a free operating system, there will always be ways to access it.
I keep a partial local copy of Wikipedia on my phone and backup device with an app called Kiwix. Great if you need access to certain items in remote areas with no access to the internet.
They may laugh now, but you’re gonna kick ass when you get isekai’d.
Yes. There will be no websites, only AI and apps. You will be automatically logged in to the apps. Linux and Lemmy will be banned. We will be classed as hackers and criminals. We’ll probably have to build our own mesh network for communication or access it from a secret location.
Can’t stop the signal.
The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context.
That’s a massive oversimplification, it’s like saying humans don’t remember things, we just have neurons that fire based on context
LLMs do actually “know” things. They work based on tokens and weights, which are the nodes and edges of a high-dimensional graph. The LLM traverses this graph as it processes inputs and generates new tokens.
You can do brain surgery on an LLM and change what it knows; we have a very good understanding of how this works. You can change a single link and the model will believe the Eiffel Tower is in Rome, and it’ll describe how you have a great view of the Colosseum from the top.
The problem is that it’s very complicated and complex; researchers are currently developing new math to let us do this in a useful way.
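The “brain surgery” idea can be sketched with a toy linear associative memory, the picture some interpretability work uses for factual recall inside transformer MLP layers: one matrix maps a key vector (“Eiffel Tower is located in …”) to a value vector (“Paris”), and a single rank-one update swaps the fact. Everything below is a made-up miniature for illustration, not any real model’s weights:

```python
# Toy linear associative memory: a weight matrix W stores key -> value
# associations; a rank-one edit to W changes one "fact" without
# retraining. Hypothetical one-hot vectors stand in for concepts.

def matvec(W, k):
    return [sum(w * x for w, x in zip(row, k)) for row in W]

def rank_one_edit(W, k, v_new):
    """Return W' such that W' @ k == v_new, changing W as little as possible."""
    v_old = matvec(W, k)
    kk = sum(x * x for x in k)
    return [
        [W[i][j] + (v_new[i] - v_old[i]) * k[j] / kk for j in range(len(k))]
        for i in range(len(W))
    ]

eiffel = [1.0, 0.0]        # key: "Eiffel Tower is located in ..."
paris  = [1.0, 0.0, 0.0]   # value: Paris
rome   = [0.0, 1.0, 0.0]   # value: Rome

# Memory initially stores Eiffel -> Paris (outer product).
W = [[p * e for e in eiffel] for p in paris]
assert matvec(W, eiffel) == paris

# One rank-one update and the "model" now believes the tower is in Rome.
W = rank_one_edit(W, eiffel, rome)
print(matvec(W, eiffel))  # -> [0.0, 1.0, 0.0]
```

The hard research problem is doing this inside a billion-parameter network without breaking everything nearby, which is roughly what the comment’s “new math” refers to.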
Wikipedia gives lists of their sources, judge what you read based off of that. Or just skip to the sources and read them instead.
Just because Wikipedia offers a list of references doesn’t mean that those references reflect what knowledge is actually out there. Wikipedia is trying to be academically rigorous without any of the real work. A big part of doing academic research is reading articles and studies that are wrong or which prove the null hypothesis. That’s why we need experts and not just an AI to regurgitate information. Wikipedia is useful if people understand its limitations; I think a lot of people don’t, though.
For sure, Wikipedia is for the most basic subjects to research, or the first step of doing any research (they could still offer helpful sources) . For basic stuff, or quick glances of something for conversation.
This very much depends on the subject, I suspect. For math or computer science, Wikipedia is an excellent source, and the credentials of the editors maintaining those areas are formidable (to say the least). Their explanations of the underlying mechanisms are in my experience a little variable in quality, but I haven’t found one that’s even close to outright wrong.
Wikipedia is not a trustworthy source of information for anything regarding contemporary politics or economics.
Wikipedia presents the views of reliable sources on notable topics. The trick is what sources are considered “reliable” and what topics are “notable”, which is why it’s such a poor source of information for things like contemporary politics in particular.
Books are not immune to being written by LLMs spewing nonsense, lies, and hallucinations, which will only make the more traditional issues of author/publisher bias worse. The asymmetry between how long it takes to create misinformation and how long it takes to verify it has never been this bad.
Media literacy will be very important going forward for new informational material and there will be increasing demand for pre-LLM materials.
Again, read the rest of the comment. Wikipedia very much repeats the views of reliable sources on notable topics - most of the fuckery is in deciding what counts as “reliable” and “notable”.
You had started to make a point, now you are just being a dick.
So what would you consider to be a trustworthy source?
“and then retrain on that”
That’s called model collapse.
Dude is gonna spend Manhattan Project-level money making another stupid fucking shitbot, trained on regurgitated AI slop.
Glorious.
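Model collapse can be sketched in a few lines: fit a simple “model” to data, generate the next training set from the model, repeat. Generative models tend to under-represent the tails of their training data; here that bias is exaggerated as a flat 0.9 factor on the fitted spread (an assumed number for illustration, not measured from any real system). Retrain on your own output and the distribution narrows generation after generation:

```python
# Toy model collapse: fit a Gaussian to data, sample a synthetic
# "corpus" from the fit, refit on that, and repeat. The 0.9 factor
# models the tendency of generative models to under-sample the tails
# of their training data (an assumed value for illustration).
import random
import statistics

random.seed(0)
TAIL_LOSS = 0.9  # assumed per-generation tail under-representation
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # "human" data

spreads = []
for generation in range(20):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    spreads.append(sigma)
    # Next corpus is sampled purely from the model's own output.
    data = [random.gauss(mu, TAIL_LOSS * sigma) for _ in range(500)]

print(f"spread: gen 0 = {spreads[0]:.2f}, gen 19 = {spreads[-1]:.2f}")
```

The information in the tails is not recoverable once it stops appearing in the training data, which is the whole objection to “retrain on that.”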
So where will Musk find that missing information and how will he detect “errors”?
I expect he’ll ask Grok and believe the answer.
Because neural networks aren’t known to suffer from model collapse when using their output as training data. /s
Most billionaires are mediocre sociopaths, but Elon Musk takes it to “Emperor’s New Clothes” levels of intellectual destitution.