US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.
In a survey comparing the views of a nationally representative sample of 5,410 members of the general public with those of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while only 15 percent expect to be harmed.
The public does not share this confidence. Only about 11 percent of the public says that “they are more excited than concerned about the increased use of AI in daily life.” They’re much more likely (51 percent) to say they’re more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, and nearly half anticipate being personally harmed by it.
Butlerian Jihad
I mean, it hasn’t thus far.
AI has its place, but they need to stop trying to shoehorn it into anything and everything. It’s the new “internet of things”: cramming internet connectivity into shit that doesn’t need it.
You’re saying the addition of Copilot into MS Paint is anything short of revolutionary? You heretic.
The problem could be that, with all the advancements in technology just since 1970, all the medical advancements, all the added efficiencies at home and in the workplace, the immediate knowledge-availability of the internet, all the modern conveniences, and the ability to maintain distant relationships through social media, most of our lives haven’t really improved.
We are more rushed and harried than ever, life expectancy (in the US) has decreased, we’ve gone from 1 working adult in most families to 2 working adults (often with more than 1 job each), and income has gone down. Recreation has shifted from wholesome outdoor activities to an obese population glued to various screens and gaming systems.
The “promise of the future” through technological advancement has been a pretty big letdown. What’s AI going to bring? More loss of meaningful work? When will technology bring fewer working hours and more income - at the same time? When will technology solve hunger, famine, homelessness, and mental health issues, and when will it start cleaning my freaking house and making me dinner?
When all the jobs are gone, how beneficial will our overlords be when it comes to universal basic income? Most of the time, it seems that more bad comes from our advancements than good. It’s not that the advancements aren’t good, it’s that they’re immediately turned to wartime uses and profiteering for a very few.
I see it lowering people’s ability to focus and for analytical/critical thinking.
I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is drying up because no one wants to enter the career with companies hanging AI over everyone’s heads. Basic supply and demand says my skillset will become more valuable.
Someone will need to clean up the AI slop. I’ve already had similar positions where I was brought in to clean up code bases that failed after being outsourced.
AI is simply the next iteration. The problem is always the same: the business doesn’t know what it really wants and needs, and it has no ability to assess what has been delivered.
A completely random story, but: I’m on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team; they had him doing DevOps work (complete mismanagement of resources). Also, the work I was doing with AI was SO unsatisfying. We weren’t tweaking any models. We were just shoving shit to ChatGPT. Now it would be interesting if you’re doing RAG stuff, maybe, or other things. However, I was “crafting” my prompt, and I could not give a shit less about writing a perfect prompt. I’m typically used to coding what I want, but here I had to figure out how to phrase it properly: “please don’t format it like X”. Like, I wasn’t using AI to write code; it was a service endpoint.
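To give a feel for what that kind of work looks like, here’s a minimal sketch (not my actual code - the endpoint name, model string, and prompt are all made up for illustration) of wrapping a hosted LLM behind a service endpoint, where all the “engineering” is a prompt string begging for the right output format:

```python
# Sketch: an LLM wrapped as a service endpoint. No model tuning anywhere;
# the only lever you have is the prompt text itself.
from fastapi import FastAPI
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Answer in plain prose. Please don't format the reply "
    "as a bulleted list, a table, or markdown."
)

@app.post("/summarize")  # hypothetical endpoint name
def summarize(payload: dict) -> dict:
    # All the "engineering" lives in the prompt string, not in any model work.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": payload["text"]},
        ],
    )
    return {"summary": response.choices[0].message.content}
```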
During lunch with the AI team, they kept saying things like “we only have 10 years left at most”. I was like, “but if you have AI spit out this code, if something goes wrong … don’t you need us to look into it?” They were like, “yeah but what if it can tell you exactly what the code is doing”. I’m like, “but who’s going to understand what it’s saying …?” “No, it can explain the type of problem to anyone.”
I said, I feel like I’m talking to a libertarian right now. Every response seems to be some solution that doesn’t exist.
AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.
AI isn’t good at doing a lot of other things software engineers actually do. It isn’t very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.
I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, without a concerted focus on enhancing logic, the promise of AGI will likely remain elusive. AI cannot really develop without the logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models, at least when it comes to coding.
I would argue that if we had much better logic with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.
If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren’t it.
They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.
Those layers are basically all the previous AI techniques laid over the top of an LLM, but anyone who has a basic understanding of languages can tell you how illogical they are.
New technologies are not the issue. The problem is billionaires will fuck it up because they can’t control their insatiable fucking greed.
For once, most Americans are right.
It’s just going to help industry provide inferior services and make more profit. Like AI doctors.
Just about every major advance in technology like this enhanced the power of the capitalists who owned it and took power away from the workers who were displaced.
So far AI has only aggravated me by interrupting my own online activities.
First thing I do is disable it
I wish it was optional. When I do a search, the AI response is right at the top. If I want AI advice, I’ll go ask AI. I don’t use a search engine to get answers from AI!
I imagine you could filter it with uBlock, right?
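Something along these lines in uBlock Origin’s “My filters” should do it - with the caveat that these selectors are placeholders, not Google’s actual markup (use the element picker to find the real container, since the page structure changes constantly):

```
! Hypothetical filters to hide the AI answer box on search results.
! Both selectors are illustrative; inspect the page for the real ones.
google.com###ai-overview
google.com##div:has-text(AI Overview):upward(2)
```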
They’re right. What happens to the workers when they’re no longer required? The horses faced a similar issue at the advent of the combustion engine. The solution? Considerably fewer horses.
the same could be applied to humans… but then who would buy consumer goods?
In all seriousness though, the only solution is for the cost of living to go down and for a UBI to exist, so that the average person can choose not to work and strikes become a legitimate threat to business, because they can more feasibly last for months.
What’s the point of producing goods for “useless eaters”?
money
They won’t have any money.
But as for the people who worked with horses, I’m pretty sure they found different jobs - it’s not like they were sent to a glue factory.
Maybe that’s because every time a new AI feature rolls out, the product it’s improving gets substantially worse.
Maybe that’s because they’re using AI to replace people, and the AI does a worse job.
Meanwhile, the people are also out of work.
Lose - Lose.
Even if you’re not “out of work”, your work becomes more chaotic and less fulfilling in the name of productivity.
When I started 20 years ago, you could round out a long day with a few hours of mindless data entry or whatever. Not anymore.
A few years ago I could talk to people or maybe even write a nice email communicating a complex topic. Now ChatGPT writes the email and I check it.
It’s just shit honestly. I’d rather weave baskets and die at 40 years old of a tooth infection than spend an additional 30 years wallowing in self-loathing and despair.
30 years ago I did a few months of 70 hour work weeks, 40 doing data entry in the day, then another 30 stocking grocery shelves in the evening - very different kinds of work and each was kind of a “vacation” from the other. Still got old quick, but it paid off the previous couple of months’ travel / touring with no income.
It didn’t even need to take someone’s job. A summary of an article or paper with hallucinated information isn’t replacing anyone, but it’s definitely making search results worse.
If it was marketed and used for what it’s actually good at this wouldn’t be an issue. We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people’s jobs easier and achieve better results. I understand its uses and that it’s not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.
This. It seems like they have tried to shoehorn AI into just about everything but what it is good at.
The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.
This is exactly the result. No matter how advanced AI gets, unless the singularity is realized, we will be no closer to some kind of 8-hour workweek utopia. These AI Silicon Valley fanatics are the same ones saying that basic social welfare programs are naive and un-implementable - so why would they suddenly change their entire perspective on life?
we will be no closer to some kind of 8-hour workweek utopia.
If you haven’t read this, it’s short and worth the time. The short work week utopia is one of two possible outcomes imagined: https://marshallbrain.com/manna1
This vision of the AI making everything easier always leaves out the part where nobody has a job as a result.
Sure you can relax on a beach, you have all the time in the world now that you are unemployed. The disconnect is mind boggling.
Universal Basic Income - it’s either that or just kill all the unnecessary poor people.
Yes, but when the price is low enough (honestly free in a lot of cases) for a single person to use it, it also makes people less reliant on the services of big corporations.
For example, today’s AI can reliably make decent marketing websites, even when run by nontechnical people. Definitely in the “good enough” zone. So now small businesses don’t have to pay Webflow those crazy rates.
And if you run the AI locally, you can also be free of paying a subscription to a big AI company.
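For the curious, here’s a minimal sketch of what “locally” can look like. This assumes an Ollama server running on its default port with a model already pulled; the model name and prompt are just examples:

```python
# Query a locally hosted model instead of a paid cloud API.
# Assumes `ollama serve` is running and the model was pulled first,
# e.g. with `ollama pull llama3`.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# No subscription, no per-token bill; just your own hardware.
print(ask_local_model("Write a short tagline for a neighborhood bakery."))
```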
Except no employer will allow you to use your own AI model. Just like you can’t bring your own work equipment (which in many regards is even a good thing), companies will force you to use their specific type of AI for your work.
No big employer… there are plenty of smaller companies that are open to doing whatever works.
Presumably “small business” means self-employed or an otherwise employee-owned company. Not the bureaucratic nightmare that most companies are.
Maybe pedantic, but:
Everyone seems to think CEOs are the problem. They are not. They report to and get broad instruction from the board. The board can fire the CEO. If you got rid of a CEO, the board will just hire a replacement.
And if you get rid of the board, the shareholders will appoint a new one. If you somehow get rid of all the shareholders, like-minded people will slot themselves into those positions.
The problems are systemic, not individual.
Shareholders only care about the value of their shares increasing. It’s a productive arrangement, up to a point, but we’ve gotten too good at ignoring and externalizing the human, environmental, and long term costs in pursuit of ever increasing shareholder value.
CEOs are the figurehead; they are virtually bound by law to act sociopathically - in the interests of their shareholders over everyone else. Carl Icahn also has an interesting take on a particularly upsetting emergent property of our system of CEO selection: https://dealbreaker.com/2007/10/icahn-explains-why-are-there-so-many-idiots-running-shit
We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.
That’s an opinion - one I share in the vast majority of cases - but there’s a lot of art work that AI really can do “good enough” for the purpose, and we should be freeing up the human artists to do the more creative work. Writers: if AI is turning out acceptable copy (which in my experience it almost never does so far, but hypothetically - eventually), why use human writers to do that? And so on down the line.
The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, not being realistic about how long it’s really going to take to achieve the things they’re hyping.
“Artificial Intelligence” has been 5-10 years off for 40 years. We have seen amazing progress in the past 5 years compared to the previous 35, but it’s likely to be 35 more before half the things being touted as “here today” are actually working at a positive ROI. There are going to be more than a few more examples like the “smart grocery store” where you just put things in your basket and walk out and get charged “appropriately” - supposedly based on AI surveillance, but really mostly powered by low-cost labor somewhere else on the planet.
All it took was for us to destroy our economy using it to figure that out!
Remember when tech companies did fun events with actual interesting things instead of spending three hours on some new stupid AI feature?