Questions remain about what spurred the board’s decision to oust Altman, but growing tensions became impossible to ignore as Altman rushed to make OpenAI the next big technology company.
Because 90% of it is “hopes and dreams” and PR bullshit at this point.
Yup, I’ve been saying this wave of “AI” is a fad; the cracks are starting to show…
How do you think that something people are already using to type out dull, routine emails is some kind of fad? It saves time, if you don’t have to spend an extra minute typing out a routine confirmation email, why would you?
Being a fad is relative. Something that’s hugely popular and being brought up in every industry as a world changing technology for a while before normalizing as something useful in a few areas but not a good fit in many others qualifies as a fad in my book. Most tech fads never go totally away because they’re not useless, they were just overblown.
Generative ML models certainly are useful and extend what we traditionally think ML is capable of doing. But the current worldwide AI madness is not due to the merits of generative ML; it’s because it’s marketed as AI (which to 90 percent of the population means what AGI means to nerds) and makes people think “we’re about to be in Terminator any second now,” when in reality it means “hey, my autocomplete isn’t totally crap anymore.”
I love how everyone agrees with your reply that this is another tech fad… just like the .com boom… yet everyone disagrees with my reply calling it a fad lol.
You nailed it though. I don’t know how many people I’ve talked to who think this is legit terminator level AI…or nearly there.
Gotcha. I think you’re right about fads, and I agree.
I feel like there will be a backlash to this. As a recipient, what do I get out of reading your routine confirmation email that I wouldn’t get out of reading whatever (presumably more concise) prompt you used to generate it?
Maybe people will find better ways to use these systems, but so far most of what I’ve seen is text that is bloated without any useful information being added. I think we’ll get to a point pretty quickly where that is considered less polite and less professional.
generating emails is the worst possible use case of ai. just send me whatever you feed the ai instead of the word salad it generates. i don’t need the word salad; there is nothing wrong with being brief and to the point.
Because sending and receiving dull emails isn’t real work. AI has only replaced bullshit so far. It just isn’t worth the billions these companies claim it is until they find a product and sell it.
It’s a shame no one has invented copy and paste yet.
Hello,
Thanks for your message. I’ll check the sales figures out as soon as I can.
Best regards,
Norgur

I couldn’t have opened any AI thingy in the time I typed that.
Okay, now imagine you have to type out a formal email but are typically shit at that kind of thing.
That was a formal email.
But imagine you were shit at writing formal emails. Sure, right now you can knock something like that out quickly, but that’s because you know how to do it.
Because it’s not AI and it never was. Calling T9 artificial intelligence was great marketing with zero substance.
But what if we’re just a very fancy T9 too?
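For anyone too young to have used T9: it was the old phone-keypad autocomplete that guessed the next word from frequency counts. A “fancy T9,” in code terms, is basically a next-word predictor like the toy below (a minimal sketch for illustration only; the corpus is made up, and no real LLM works on single-word counts like this):

from collections import defaultdict

# Toy "T9-style" next-word predictor: count which word follows which in a
# tiny corpus, then always suggest the most frequently seen follower.
corpus = (
    "thanks for your message i will check the sales figures "
    "thanks for your email i will check the numbers "
    "thanks for your time i will get back to you"
).split()

followers = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    options = followers.get(word)
    if not options:
        return "<no suggestion>"
    return max(options, key=options.get)

print(suggest("thanks"))  # -> "for"
print(suggest("check"))   # -> "the"

An LLM is doing the same “predict the next token” trick, just with a transformer trained on vastly more text and with far more context than one preceding word.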
The technology that can automate problem solving to some degree, and other incredibly complicated tasks that previously only humans could do, is toooooooooooooooootally a fad.
Start putting some fucking pressure on your politicians to implement UBI, put legal restrictions on AI, etc. Don’t fucking underestimate this tech. It’s only a matter of time before we’ve got more people than jobs.
You way overestimate this tech. I work in this field. We do not have fast enough anything (storage/ram/CPU) to do what you think it can. We need UBI, sure, but this isn’t the leap everyone thinks it is. It’s still dumb tech and just follows what humans feed it. Do remember that less than 40 years ago everyone was still using paper for everything; now one person can do what 20 did in the past with a computer.
Until you see robots walking around, this tech isn’t the apocalypse you’re expecting it to be.
When quantum computing actually takes off then you can start worrying
Guess what? I work in this field too.
now one person can do what 20 did in the past with a computer
How can you say this but completely miss the fucking point?
We do not have fast enough anything (storage/ram/CPU) to do what you think it can
When quantum computing actually takes off then you can start worrying
Oh, that’s how you can say it. You clearly don’t work in this field if that’s what you think the bottleneck is right now.
I’ve personally used AI to generate more than 200K lines of production code this year alone. One senior engineer today can do the work of dozens with this. We are going to go from labor shortages to labor surpluses.
Guess what? I work in this field too.
Doesn’t sound like it from the shit you posted below that literally just says you used it lol
How can you say this but completely miss the fucking point?
I’m not at all. You’re the dude who started screaming “what about the farriers?” when cars started taking over… that’s literally what you’re doing right now.
Oh, that’s how you can say it. You clearly don’t work in this field if that’s what you think the bottleneck is right now.
Lol ok Mr. 200k lines of code lol
I’ve personally used AI to generate more than 200K lines of production code this year alone. One senior engineer today can do the work of dozens with this. We are going to go from labor shortages to labor surpluses.
Lol, sounds like you’re a pretty shit dev if you’re putting 200k lines of code from AI into prod… that, or you’re full of shit… either way, it’s a really dumb argument to use. I’d fire you if you were doing that shit in my company. No way you’re checking over 200k lines for cohesion or security issues. Sounds like outsourced devs all over again, where the code is a fucking nightmare of shit.
You can scream and shout all you want but it’s not replacing humans any time soon. Sure it’s great for augmenting us but it’s not replacing us.
Doesn’t sound like it from the shit you posted below that literally just says you used it lol
Wait… Do you think that people can’t work in a field if they use their own products to make sure they’re working?
I’m not going to dox myself by linking to what I’ve done. It’s just sad that you’re yet another person (probably a greybeard) who thinks they know everything but they’re completely unprepared for the future.
And if you can’t figure out how to check over 200K lines of properly formatted and documented code, then you’re a shit engineer.
Sure it’s great for augmenting us but it’s not replacing us.
What the fuck do you think “augmenting” means to bloodsucking MBAs? If one mid-level engineer can do the work of five, then they can scale a team of ten down to two. There are many reasons not to do that no matter how good AI is (e.g. bus factor). But since when has that stopped management from making dumbass decisions?
Not going to waste any more time on you. Hope you have a good retirement plan in place. You’re going to need it.
Wait… Do you think that people can’t work in a field if they use their own products to make sure they’re working?
So wait… you’re using AI gen code in prod…to make sure the AI gen code you didn’t write works?
No dev is out there using AI to write prod code; they’re using it to help, but a ton of the code that AI spits out is trash and they know it. If you’re putting in over 200k lines of code yourself, you’re not checking it, and whatever you’re rolling out is probably Swiss fucking cheese in the security department.
I’m not going to dox myself by linking to what I’ve done. It’s just sad that you’re yet another person (probably a greybeard) who thinks they know everything but they’re completely unprepared for the future.
Nah, I’m not a greybeard at all. I’m just not a panicked fad follower who thinks LLMs are going to replace everyone down to a single guy named Roger who has AI write him 200k lines of code for prod.
And if you can’t figure out how to check over 200K lines of properly formatted and documented code, then you’re a shit engineer.
Shit, what AI are you using that’s spitting out documentation as well as properly formatted code? Sounds like you’ve got a gold mine on your hands…
What the fuck do you think “augmenting” means to bloodsucking MBAs? If one mid-level engineer can do the work of five, then they can scale a team of ten down to two. There are many reasons not to do that no matter how good AI is (e.g. bus factor). But since when has that stopped management from making dumbass decisions?
Oh, I’m not saying it won’t, just like they did in ’08 when they outsourced everything to code farms in India and shitcanned a bunch of senior devs, only to find out the code was trash and not documented at all… same thing is going to happen here… it’s a fad… hell, even the cloud was a fad, and a ton of it is starting to come back in-house.
Not going to waste any more time on you. Hope you have a good retirement plan in place. You’re going to need it.
Sounds like you got it all figured out. Have a good day.
People who have put next to 0 time into understanding generative pre-trained transformers: It’s just PR bullshit!
People who have looked into how it works: This has more applications than previously thought.
I haven’t watched a lot of Two Minute Papers, but this video is very misleading. Simulated environments have been used for years to speed up DeepRL. The only ChatGPT/LLM portion was about defining a scoring mechanism, and the video gives no indication of whether it did a better job or not, not to mention the problem the LLM was solving is one that’s been studied for decades, which undercuts the “it generalizes better” claim.
I’m not saying LLMs don’t have a lot of potential, but that video isn’t really supportive of that stance.
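For what it’s worth, the LLM part of that kind of setup boils down to “have the model draft the scoring function, then do the usual RL inside the simulator.” Here’s a deliberately toy sketch of that pipeline; the environment, the two candidate scoring functions (standing in for LLM proposals), and the random-search “training” are all invented for illustration, and there’s no real LLM call anywhere:

import random

TARGET, STEPS = 10.0, 20

def final_position(step_size: float) -> float:
    # The "policy" is just a fixed step size applied for STEPS ticks.
    return step_size * STEPS

def score_sparse(pos: float) -> float:
    # Candidate scoring function 1: only rewards ending almost exactly at the goal.
    return 1.0 if abs(pos - TARGET) < 0.5 else 0.0

def score_shaped(pos: float) -> float:
    # Candidate scoring function 2: denser signal, closer to the goal scores higher.
    return -abs(pos - TARGET)

def train(score_fn, trials: int = 300) -> float:
    # Crude stand-in for deep RL in simulation: random-search the best step size.
    best_step, best_score = 0.0, float("-inf")
    for _ in range(trials):
        step = random.uniform(0.0, 2.0)
        score = score_fn(final_position(step))
        if score > best_score:
            best_step, best_score = step, score
    return best_step

random.seed(0)
for name, fn in [("sparse", score_sparse), ("shaped", score_shaped)]:
    step = train(fn)
    print(f"{name:>6}: learned step {step:.3f}, ends at {final_position(step):.1f} (target {TARGET})")

The comparison the video never shows is exactly the one at the bottom of this sketch: whether a model-drafted scoring function actually trains a better policy than a hand-written one.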
This is the best summary I could come up with:
The power struggle revolved around Altman’s push toward commercializing the company’s rapidly advancing technology versus Sutskever’s concerns about OpenAI’s commitments to safety, according to people familiar with the matter.
Senior OpenAI executives said they were “completely surprised” and had been speaking with the board to try to understand the decision, according to a memo sent to employees on Saturday by chief operating officer Brad Lightcap that was obtained by The Washington Post.
During its first-ever developer conference, Altman announced an app-store-like “GPT store” and a plan to share revenue with users who created the best chatbots using OpenAI’s technology, a business model similar to how YouTube gives a cut of ad and subscription money to video creators.
Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”
Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity: Helen Toner, the director of strategy and foundational research grants for the Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year.
Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans.
The original article contains 1,563 words, the summary contains 268 words. Saved 83%. I’m a bot and I’m open source!
A growing rift between people who rape four-year-old girls and those who don’t?
Who’s the rapist in this scenario?
I’m out of the loop
She definitely seems a little out there, and a lot of those claims seem a bit insane tbh.
Would you be sane living with that from the age of four?
I question the legitimacy or accuracy of a lot of that, given all the things she’s written. She’s clearly not mentally stable. I feel, at the very least, that there’s a lot of context missing. She claims even her therapist is against her. These are still accusations, nothing more.
If half of the claims were even remotely true, that would be one hell of a toll on one’s mental health. I get that false claims can be a thing, but some people really are that sick. That includes talented people as well.
She only claims her brother slept in the same bed as her at 13. She does not have other claims, just insinuations. So if it’s all true and she was abused, she doesn’t seem to remember what exactly happened.
Trauma can do that to someone unfortunately.
I believe so. My point is that, by your comment, half of her claims would be no claims, since she only made one concrete claim.