I’m a software developer, and I know that AI is just the shiny new toy whose buzzword everyone uses to generate investment revenue.
99% of the crap people use it for is worthless. It’s just a hammer and everything is a nail.
It’s just like “the cloud” was 10 years ago. Now everyone is back-pedaling from that because it didn’t turn out to be the panacea that was promised.
Optimizing AI performance by “scaling” is lazy and wasteful.
Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.
Thing is, same as with GHz, you have to do it as much as you can until the gains get too small. You do that, then you move on to the next optimization. Which is what AI has done and is doing now: optimizing test-time compute, token quality, and other areas.
To be fair, GHz did go up. Granted, it’s not why modern processors are faster and more efficient.
TIL
don’t worry about performance, GHz will always go up
TF2 devs lol
I miss flash players.
They’re throwing billions upon billions into a technology with extremely limited use cases, a novelty at best. My god, even drones fared better in the long run.
I mean it’s pretty clear they’re desperate to cut human workers out of the picture so they don’t have to pay employees that need things like emotional support, food, and sleep.
They want a workslave that never demands better conditions, that’s it. That’s the play. Period.
If this is their way of making AI, by brute-forcing the technology without innovation, AI will probably cost these companies more in infrastructure than just hiring people. These AI companies are already not making a lot of money for how much they cost to maintain, and unless they charge companies millions of dollars just to be able to use their services they will never make a profit. And since companies are trying to use AI to replace the millions they spend on employees, it seems kinda pointless if they aren’t willing to prioritize efficiency.
It’s basically the same argument they have with people. They don’t wanna treat people like actual humans because it costs too much, yet letting them live happy lives makes them more efficient workers. Whereas now they don’t want to spend money to make AI more efficient, yet increasing efficiency would make it less expensive to run. It’s the never-ending cycle of cutting corners only to eventually make less money than you would have if you did things the right way.
Absolutely. It’s maddening that I’ve had to go from “maybe we should make society better somewhat” in my twenties to “if we’re gonna do capitalism, can we do it how it actually works instead of doing it stupid?” in my forties.
And the tragedy of the whole situation is that they can’t win because if every worker is replaced by an algorithm or a robot then who’s going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories until you can’t expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire and we start from scratch.
producing war machines that fight each other for territory to build more war machine factories until you can’t expand anymore for one reason or another.
As seen in the retro-documentary Z!
Why would you need anyone to buy your products when you can just enjoy them yourself?
Because there’s always a bigger fish out there to get you. Or that’s what trillionaires will tell themselves when they wage a robotic war. This system isn’t made to last the way it’s progressing right now.
I don’t think any designer does work without heavily relying on ai. I bet that’s not the only profession.
It’s ironic how conservative the spending actually is.
Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?
No.
Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.
Deepseek is what happens when a company is smart but resource constrained. An order of magnitude more efficient, and even their architecture was very conservative.
wait so the people doing the work don’t get paid and the people who get paid steal from others?
that is just so uncharacteristic of capitalism, what a surprise
It’s also cultish.
Everyone was trying to ape ChatGPT. Now they’re rushing to ape Deepseek R1, since that’s what is trending on social media.
It’s very late-stage capitalism, yes, but that doesn’t come close to painting the whole picture. There’s a lot of groupthink, an urgency to “catch up and ship” and look good quick rather than focus on experimentation, sane applications, and such. When I think of shitty capitalism, I think of stagnant entities like shitty publishers, dysfunctional departments, consumer abuse, things like that.
This sector is trying to innovate and make something efficient, but it’s like the purse holders and researchers have horse blinders on. Like they are completely captured by social media hype and can’t see much past that.
Good ideas are dime a dozen. Implementation is the game.
Universities may churn out great papers, but what matters is how well they can implement them. Private entities win at implementation.
The corporate implementations are mostly crap though. With a few exceptions.
What’s needed is better “glue” in the middle. Larger entities integrating ideas from a bunch of standalone papers, out in the open, so they actually work together instead of mostly fading out of memory while the big implementations never even know they existed.
The actual survey result:
Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.
So they’re not saying the entire industry is a dead end, or even that the newest phase is. They’re just saying they don’t think this current technology will make AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren’t betting this will turn into AGI; they’re betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.
This would be like asking a researcher in the 90s whether scaling up the bandwidth and computing power of the average internet user would give us a vastly connected media-sharing network; they’d probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.
It’s becoming clear from the data that each additional bit of error correction needs exponentially more data. I suspect that pretty soon we will realize that what’s been built is a glorified homework cheater and a better search engine.
what’s been built is a glorified homework cheater and an ~~better~~ unreliable search engine
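On the “exponentially more data” point above: a rough way to see it, assuming the usual power-law scaling fits (the exponent is approximate, in the ballpark of published values):

$$L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.1$$

so cutting the remaining loss in half costs roughly $2^{1/\alpha_D} \approx 1000\times$ the training data. Diminishing returns are built into the curve.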
I agree that it’s editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.
They have been claiming AGI is right around the corner pretty much since chatGPT first came to market. It’s often implied (e.g. you’ll be able to replace workers with this) or they are more vague on timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).
With that context I think it’s fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won’t be able to deliver AGI on the timeline they are promising.
AI isn’t going to figure out what a customer wants when the customer doesn’t know what they want.
Yeah, it does some tricks, some of them even useful, but the investment is not for the demonstrated capability or any realistic extrapolation of it; it is for the sort of product OpenAI is promising, the equivalent of a full-time research assistant for 20k a month. Which is way more expensive than an actual research assistant, but that’s not stopping them from making the pitch.
There are plenty of back-office ticket-processing jobs that can, and have been, replaced by current-gen AI.
The bigger loss is the ENORMOUS amounts of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.
AI data centers also generate a ton of CO₂. For example, training an AI produces more CO₂ than a 55 year old human has produced since birth.
Complete waste.
I think most people agree, including the investors pouring billions into this.
The same investors that poured (and are still pouring) billions into crypto, and invested in sub-prime loans and valued pets.com at $300M? I don’t see any way the companies will be able to recoup the costs of their investment in “AI” datacenters (i.e. the $500B Stargate or $80B Microsoft; probably upwards of a trillion dollars globally invested in these data-centers).
Right, simply scaling won’t lead to AGI, there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.
No, there are some ideas out there. Concepts like hierarchical reinforcement learning, with its creation of foundational policies, are more likely to lead to AGI; problem is, as it stands it’s a really difficult technique to use, so it isn’t used often. And LLMs have sucked all the research dollars out of any other ideas.
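For anyone unfamiliar with the term, here’s a bare-bones toy sketch of the hierarchical idea: a high-level policy picks among reusable low-level (“foundational”) policies, which then handle the primitive actions. Everything here (the names, the tiny grid world) is made up for illustration and isn’t any particular paper’s method:

```python
# Toy illustration of hierarchical control: a high-level policy picks a sub-goal,
# and a reusable low-level policy handles the primitive actions for it.
# All names and the tiny grid "environment" are invented for this sketch.

def fetch_key(state):                 # low-level "foundational" policy #1
    return "left" if state["x"] > 0 else "grab"

def go_to_door(state):                # low-level "foundational" policy #2
    return "right" if state["x"] < 5 else "open"

OPTIONS = {"fetch_key": fetch_key, "go_to_door": go_to_door}

def manager(state):                   # high-level policy reasons over sub-goals only
    return "fetch_key" if not state["has_key"] else "go_to_door"

def run_episode(max_steps=20):
    state = {"x": 3, "has_key": False}
    for _ in range(max_steps):
        action = OPTIONS[manager(state)](state)   # delegate the primitive action choice
        if action == "left":
            state["x"] -= 1
        elif action == "right":
            state["x"] += 1
        elif action == "grab":
            state["has_key"] = True
        elif action == "open":
            return "door opened"
    return "timed out"

print(run_episode())                  # door opened
```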
Technology in most cases progresses on a logarithmic scale when innovation isn’t prioritized. We’ve basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and not even come close to what they say it is. These days we’re in the “bells and whistles” phase where they add unnecessary bullshit to make it seem new like adding 5 cameras to a phone or adding touchscreens to cars. Things that make something seem fancy by slapping buzzwords and features nobody needs without needing to actually change anything but bump up the price.
I remember listening to a podcast that is about scientific explanations. The guy hosting it is very knowledgeable about this subject, does his research, and talks to experts when the subject involves something he isn’t himself an expert in.
There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you’re doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite the machine being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google’s paper that showed we could parallelize language models, leading to the creation of “larger language models”. That was Google doing science. But you can’t control when some new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.
This also shows why the current neglect of basic/general research without a profit goal is holding back innovation.
Me and my 5.000 closest friends don’t like that the website and their 1.300 partners all need my data.
Why so many sig figs for 5 and 1.3 though?
Some parts of the world (mostly Europe, I think) use dots instead of commas for displaying thousands. For example, 5.000 is 5,000 and 1.300 is 1,300
Yes. It’s the normal Thousands-separator notation in Germany for example.
Yeah, and they’re wrong.
Says the country where every science textbook is half science half conversion tables.
Not even close.
Yes, one half is conversion tables. The other half is scripture disproving Darwinism.
We (in Europe) probably should be thankful that you are not using feet as thousands-separator over there in the USA… Or maybe separate after each 2nd digit, because why not… ;)
It makes sense from a typographical standpoint: the comma is the larger symbol and thus harder to overlook, especially in small fonts or messy handwriting.
But from a grammatical sense it’s the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between integer and fraction.
I have no strong preference either way. I think both are valid and sensible systems, and it’s only confusing because of competing standards. I think over long enough time, due to the internet, the period as the decimal separator will prevail, but it’s gonna happen normally, it’s not something we can force. Many young people I know already use it that way here in Germany
But usually you don’t put three zeros after the comma, because that reads as a hint of thousands.
Like 2.50 is 2€50 but 2.500 is 2500€
Is there an ISO standard for this stuff?
No, 2,50€ is 2€ and 50ct; 2.50€ is wrong in this system. 2,500€ is also wrong (for currency, where you only care about two digits after the comma), and 2.500€ is 2500€.
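For anyone curious how software usually handles this, a minimal sketch using Python’s standard locale module; whether the de_DE / en_US locales are installed depends on the system, so treat it as illustrative rather than guaranteed to run everywhere:

```python
# Illustrative only: locale availability varies by system.
import locale

locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")         # German convention
print(locale.format_string("%.2f", 2500, grouping=True))   # 2.500,00

locale.setlocale(locale.LC_NUMERIC, "en_US.UTF-8")         # US convention
print(locale.format_string("%.2f", 2500, grouping=True))   # 2,500.00
```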
What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one penny/cent (whatever Europe’s hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth but still exists.
Yes, that’s true, but that’s more of an edge case. Something like gasoline is commonly priced in fractional cents, tho.
I knew the context, was just being cheesy. :-D
Too late… You started a war in the comments. I’ll proudly fight for my country’s way to separate numbers!!! :)
oh lol
I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretenses of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete though).
The peak of AI for me was generating images of Muppet versions of the Breaking Bad cast; it’s been downhill since.
Like all the previous scam bubbles that were kinda interesting or fun as a novelty, and once money came pouring in became absolute chaos and maddening.
It peaked when it was good enough to generate short somewhat coherent phrases. We’d make it generate ideas for silly things and laugh at how ridiculous the results were.
AGI models will enter the market in under 5 years according to experts and scientists.
trust me bro, we’re almost there, we just need another data center and a few billions, it’s coming i promise, we are testing incredible things internally, can’t wait to show you!
We are having massive exponential increases in output with all sorts of innovations, every few weeks another big step forward happens
Around a year ago I bet a friend $100 we won’t have AGI by 2029, and I’d do the same today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that’s still dumber than the average human. In comparison humans are “trained” with maybe ten thousand “tokens” and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.
Humans are “trained” with maybe ten thousand “tokens” per day
Uhhh… you may wanna rerun those numbers.
It’s waaaaaaaay more than that lol.
and take only a couple dozen watts for even the most complex thinking
Mate’s literally got smoke coming out of his ears lol.
A single Wh is 860 calories… I think you either have no idea WTF you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.
- Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.
- A human, by my estimate, has burned about 13,000 Wh by the time they reach adulthood. Maybe more depending on activity levels.
- While yes, an AI costs substantially more Wh, it also is done in weeks, so it’s obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months it’d prolly require way WAY more than 13,000 Wh during the process for similar reasons.
- Once trained, a single model can be duplicated infinitely. So it’d be more fair to compare how much millions of people cost to raise to how much a single model costs to train, because once trained, you can now make millions of copies of it…
- Operating costs are continuing to go down and down and down. Diffusion-based text generation just made another huge leap forward, reporting around a twenty times efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.
True, my estimate for tokens may have been a bit low. Assuming a 7-hour school day where someone talks at 5 tokens/sec, you’d encounter about 120k tokens. You’re off by 3 orders of magnitude on your energy consumption though; 1 watt-hour is 0.86 food Calories (kcal).
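Just to make the arithmetic in the two comments above explicit (the inputs are the same rough guesses used in the thread):

```python
# Back-of-the-envelope numbers from the comments above; inputs are rough guesses.

# Tokens heard during a 7-hour school day at ~5 tokens per second of speech
tokens_per_day = 7 * 3600 * 5
print(f"{tokens_per_day:,} tokens")        # 126,000 -> "about 120k tokens"

# Unit check: 1 Wh = 3600 J, 1 food Calorie (kcal) = 4184 J
print(f"{3600 / 4184:.2f} kcal per Wh")    # ~0.86 kcal per Wh, not 860

# A ~2000 kcal/day diet expressed in Wh, for scale
wh_per_day = 2000 * 4184 / 3600
print(f"{wh_per_day:.0f} Wh/day")          # ~2324 Wh/day, i.e. roughly a 100 W average draw
```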
I have been shouting this for years. Turing and Minsky were pretty up front about this when they dropped this line of research in like 1952; even Lovelace predicted this would be bullshit back before the first computer had been built.
The fact that nothing got optimized, and it still didn’t collapse, after DeepSeek kind of gave the whole game away. There’s something else going on here. This isn’t about the technology, because there is no meaningful technology here.
I have been called a killjoy luddite by reddit-brained morons almost every time.
What’re you talking about? What happened in 1952?
I have to disagree, I don’t think it’s meaningless. I think that’s unfair. But it certainly is overhyped. Maybe just a semantic difference?
Companies aren’t investing to achieve AGI as far as I’m aware; that’s not the end game, so I think this title is misinformation. Even if AGI were achieved it’d be a happy accident, not the goal.
The goal of all these investments is to convince businesses to replace their employees with AI to the maximum extent possible. They want that payroll money.
The other goal is to cut out all third party websites from advertising revenue. If people only get information through Meta or Google or whatever, they get to control what’s presented. If people just take their AI results at face value and don’t actually click through to other websites, they stay in the ecosystem these corporations control. They get to sell access to the public, even more so than they do now.
Why didn’t you drop the quotes from Turing, Minsky, and Lovelace?
Because finding the specific stuff they said, which was in Lovelace’s case very broad/vague, and in Turing’s and Minsky’s cases far too technical for anyone with Sam Altman’s dick in their mouth to understand, sounds like actual work. If you’re genuinely curious, you can look up what they had to say. If you’re just here to argue for this shit, you’re not worth the effort.
There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.
The problem is that those companies are monopolies and can raise prices indefinitely to pursue this shitty dream, because they’ve got governments in their pockets. Governments are dependent on cloud / Microsoft software - literally every country on this planet is, except maybe China, North Korea, and Russia. They can raise prices 10 times in the next 10 years and not give a fuck. Spend 1 trillion on AI, say “we’re near” over and over again, and literally nobody can stop them right now.
IBM used to control the hardware as well; what’s the moat?
How many governments were using computers back when IBM was controlling hardware, and how many relied on paper and calculators? The problem is that governments are dependent on companies right now, not companies dependent on governments.
Imagine Apple, Google, Amazon, and Microsoft decide to leave the EU on Monday. They say they’re banning all European citizens from all of their services on Monday, closing all of their offices, and deleting the data from all of their datacenters. Good fucking luck!
What will happen in Europe on Monday? Compare it with what would have happened if IBM said 50 years ago they were leaving Europe.
deleted by creator
Imo our current versions of AI are too generalized; we add so much information into the AI to make it good at everything that it all mixes together into a single grey hallucinating slop, and the AI ends up being good at nothing.
We need to find ways to specialize AI, and give said AI a more consistent and concrete personality, to move forward.
Imo, to make an AI that is truly good at everything, we need multiple AIs, each designed to do something different, all working together (like the human brain works), instead of making every single AI a personality-less sludge of jack of all trades, master of none.
Mixture of experts is the future of AI. Breakthroughs won’t come from bigger models; they’ll come from better-coordinated conversations between models.
They did that a while ago; it was a big feature of GPT-3.
We already did this like a year ago, mate. That was like v3 of GPT.
Yeah, but it’s like… pretty half-baked.
No, it’s just not something exposed to you to see
But under the hood it very much does shift gears depending on what you ask it to do
It’s why GPT can now do stuff like analyze the contents of images and basic OCR, but also generate images too.
Yet it can also do math, talk about biology, give relationship advice…
I believe OpenAI called them “specialists” or something vaguely like that, at the time.
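To make the “shifting gears” idea concrete, here’s a minimal numpy sketch of a mixture-of-experts layer: a gating network scores a few expert sub-networks and only the top-scoring ones run for a given token. The shapes, expert count, and routing rule are made up for illustration; this is not how any specific OpenAI model is implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

W_gate = rng.normal(size=(d_model, n_experts))                # gating network weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """x: (d_model,) activation for one token."""
    logits = x @ W_gate
    chosen = np.argsort(logits)[-top_k:]                      # route to the k best-scoring experts
    w = np.exp(logits[chosen]); w /= w.sum()                  # softmax over the chosen experts
    # Only the selected experts do any work; the rest are skipped entirely,
    # which is where the compute savings come from.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))

print(moe_layer(rng.normal(size=d_model)).shape)              # (16,)
```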
Pump and dump. That’s how the rich get richer.