US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 adults) with those of 1,013 AI experts, the Pew Research Center found that “experts are far more positive and enthusiastic about AI than the public” and “far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years” (56 percent vs. 17 percent). Perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, while just 15 percent expect to be harmed.

The public does not share this confidence. Only about 11 percent of the public say they are “more excited than concerned about the increased use of AI in daily life.” They are much more likely (51 percent) to say they are more concerned than excited, whereas only 15 percent of experts share that pessimism. And unlike the majority of experts, just 24 percent of the public think AI will be good for them, while nearly half anticipate they will be personally harmed by it.

  • dylanmorgan@slrpnk.net

    It’s not really a matter of opinion at this point. What is available has little, if any, benefit to anyone who isn’t trying to justify rock-bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.

    • doodledup@lemmy.world

      Everyone gains from progress. We’ve had the same discussion over and over again: when the first sewing machines came along, when the steam engine was invented, when the internet became a thing. Some people will lose their jobs every time progress is made. But being against progress for that reason is just stupid.

      • MangoCats@feddit.it

        being against progress for that reason is just stupid.

        Under the current economic model, being against progress is just self-preservation.

        Yes, we could all benefit from AI in some glorious future that doesn’t see the AI displaced workers turned into toys for the rich, or forgotten refuse in slums.

        • doodledup@lemmy.world

          We are ants in an anthill. Gears in a machine. Act like it. Stop thinking in classes (“rich vs. poor”) and conspiracies. When you become obsolete, it’s nobody’s fault. This always comes from people who don’t understand how the world economy works.

          Progress always comes and finds its way. You can never stop it. Like water in a river. Like entropy. Adapt early instead of desperately forcing against it.

          • MangoCats@feddit.it

            We are ants in an anthill. Gears in a machine. Act like it.

            See Woody Allen in Antz (1998).

            Adapt early instead of desperately forcing against it.

            There should be a balance. Today’s world is already desperately thrashing to “stay ahead of the curve,” pouring outrageous investments into blind alleys that group-think believes are the “next big thing.”

            The reality of automation could be an abundance of what we need, easily available to all, with surplus resources available for all to share and contribute to as they wish - within limits, of course.

            It’s going to take some desperate forcing to get the resources distributed more widely than they currently are.

      • msage@programming.dev

        The current drive behind AI is not progress, it’s locking knowledge behind a paywall.

        As soon as one company perfects its AI, it will draw everyone to use it, marketing it as a ‘time saver’ so you don’t have to do anything (including browsing the web, which is in decline even now). Just ask and you shall receive everything.

        Once everyone gets hooked and there isn’t any competition left, they will own the population. News, purchase recommendations, learning: everything we do to work on our cognitive abilities will be sold through a single vendor.

        Suddenly you own the minds of many people, who can’t think for themselves, or search for knowledge on their own… and that’s already happening.

        And it’s not the progress I was hoping to see in my lifetime.

      • function IsOdd():@lemmy.world

        Everyone gains from progress.

        That’s only true in the long term. In the short term, (at least some) people do lose jobs, money, and stability, unfortunately.

        • doodledup@lemmy.world

          That’s true. And that’s why so many people are frustrated: because the majority is incredibly short-sighted, unfortunately. Most people don’t even understand the basics of economics. If everyone were the ant in the anthill they’re supposed to be, we would not have half as many conflicts as we have.

        • doodledup@lemmy.world

          Your comment doesn’t exactly testify to your own intelligence either.

          You might want to elaborate with some arguments that actually relate to the comment you’re responding to.

      • 7toed@midwest.social

        And as someone who has extensively set up such systems on their home server… yeah, it’s a great Google Home replacement, nothing more. It’s beyond useless in Power Automate, which I use (unwillingly) at my job. Copilot can’t even parse and match items from two lists. Despite my company trying its damn best to encourage “our own” AI (ChatGPT Enterprise), nobody I have talked with has found a use for it.
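
        (For scale, the list matching in question is a few lines of ordinary code. A rough Python sketch of the kind of join I was asking Copilot for, with made-up field names:)

        ```python
        # Rough sketch: match items from two lists by a shared key.
        # The lists and field names ("id", "status", "carrier") are made up for illustration.
        orders = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]
        shipments = [{"id": 2, "carrier": "UPS"}, {"id": 3, "carrier": "DHL"}]

        # Index one list by the key, then pair up the matching items from the other.
        shipments_by_id = {s["id"]: s for s in shipments}
        matched = [(o, shipments_by_id[o["id"]]) for o in orders if o["id"] in shipments_by_id]

        print(matched)
        # [({'id': 2, 'status': 'closed'}, {'id': 2, 'carrier': 'UPS'})]
        ```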

        • MangoCats@feddit.it

          AI search is occasionally faster and easier than slogging through the source material the AI was trained on. The source material for programming is pretty weak itself, which is part of the problem.

          I think AI has a lot of untapped potential, but it’s going to be a VERY long time before people who don’t know how to ask for what they want will be able to communicate that to an AI.

          A lot of programming today gets value from the programmers guessing (correctly) what their employers really want, while ignoring the asks that are impractical / counterproductive.

        • doodledup@lemmy.world

          You’re using it wrong then. These tools are so incredibly useful in software development and scientific work. ChatGPT has saved me countless hours. I’m using it every day. And every colleague I talk to agrees 100%.

          • sugar_in_your_tea@sh.itjust.works

            Then you must know something the rest of us don’t. I’ve found it marginally useful, but it leads me down useless rabbit holes more than it helps.

            • MangoCats@feddit.it

              I’m about 50/50 between helpful results and “nope, that’s not it, either” out of the various AI tools I have used.

              I think it very much depends on what you’re trying to do with it. As a student, or a fresh-grad employee in a typical field, it’s probably much more helpful because you are working well-trodden ground.

              As a PhD or other leading-edge researcher, possibly in a field without a lot of publications, you’re screwed as far as the really inventive stuff goes. But if you’ve read “Surely You’re Joking, Mr. Feynman!”, there’s a bit in there where the Manhattan Project researchers (definitely breaking new ground at the time) needed basic stuff, like gears, for what they were doing. The gear catalogs of the day told them a lot of what they needed to know. Per the text: if you’re making something that needs gears, pick your gears from the catalog, but avoid the largest and smallest of each family/table; those sizes exist because the next step up or down runs into some kind of engineering problem, so stay away from the edges and you should get much more reliable results.

              That’s an engineer’s shortcut for how to use thousands, maybe millions, of man-years of prior gear research, development and engineering and get the desired results just by referencing a catalog.

              • sugar_in_your_tea@sh.itjust.works

                My issue is that I’m fairly established in my career, so I mostly need to reference things, which LLMs do a poor job at. As in, I usually need links to official documentation, not examples of how to do a thing.

                That’s an engineer’s shortcut for how to use thousands, maybe millions, of man-years of prior gear research, development and engineering and get the desired results just by referencing a catalog.

                LLMs aren’t catalogs though, and they absolutely return different things for the same query. Search engines are closer to catalogs, and they’re what I reach for most of the time.

                LLMs are good if I want an intro to a subject I don’t know much about, and they help generate keywords to search for more specific information. I just don’t do that all that much anymore.

          • nickwitha_k (he/him)@lemmy.sdf.org

            I’ve found it to range from useless to harmful in my software development; debugging the poorly structured code it produces becomes where most of the time goes. What sort of software and language do you use it for?

          • 7toed@midwest.social

            I’ll admit my local model has given me some insight, but when researching something further I usually find the source it likely spat that out from. Now that’s helpful, but I feel as though, if my normal search experience weren’t so polluted with AI-written regurgitation of the next result down, I would’ve found that nice primary source anyway. One example was a code block that computes the moment of inertia about each rotational axis of a body. You can try searching for sources and compare what it puts out.
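
            (For reference, the kind of code block I mean; a rough Python sketch of the point-mass version, not the model’s actual output:)

            ```python
            # Rough sketch: moments of inertia of a rigid body, approximated as point
            # masses, about the three coordinate axes through the origin.
            # I_x = sum(m * (y^2 + z^2)), and similarly for I_y and I_z.

            def moments_of_inertia(masses, positions):
                """masses: list of floats; positions: list of (x, y, z) tuples."""
                ix = iy = iz = 0.0
                for m, (x, y, z) in zip(masses, positions):
                    ix += m * (y * y + z * z)
                    iy += m * (x * x + z * z)
                    iz += m * (x * x + y * y)
                return ix, iy, iz

            # Two 1 kg masses on the x-axis, 1 m either side of the origin:
            # no inertia about x, 2 kg·m² about y and z.
            print(moments_of_inertia([1.0, 1.0], [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]))
            ```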

            If you have more insight into which tools would improve my impression, especially ones I can run locally, I would love to hear it. However, my opinion remains that AI has been a net negative on the internet as a whole (spam, bots, scams, etc.) so far, and it certainly has not, and probably will not, live up to the hype forecast by the CEOs.

            Also, if you can get access to Power Automate, or at least generally know how it works: Copilot can only add nodes, seemingly in the general order you specify, but it does not connect the dataflow between the nodes (the hardest part) whatsoever. Sometimes it will parse the dataflow connections and return what you were searching for (i.e. a specific formula used in a large dataflow), but not much of that seems to need an AI.

            • MangoCats@feddit.it

              I think a lot depends on where “on the curve” you are working, too. If you’re out past the bleeding edge doing new stuff, ChatGPT is (obviously) going to be pretty useless. But, if you just want a particular method or tool that has been done (and published) many times before, yeah, it can help you find that pretty quickly.

              I remember doing my Master’s thesis in 1989; it took me months of research and journals delivered via inter-library loan before I found mention of other projects doing essentially what I was doing. In today’s research landscape, that multi-month delay should be compressed to a couple of hours, frequently less.

              If you haven’t read Melancholy Elephants, it’s a great reference point for what we’re getting into with modern access to everything:

              https://www.spiderrobinson.com/melancholyelephants.html

          • MangoCats@feddit.it

            If you were too lazy to read three Google search results before, yes… AI is amazing in that it shows you something you ask for without making you dig as deep as you used to have to.

            I rarely get a result from ChatGPT that I couldn’t have skimmed for myself in about twice to five times the time.

            I frequently get results from ChatGPT that are just as useless as what I find reading through my first three Google results.

            • doodledup@lemmy.world

              You’re using it wrong. My experience is different from yours. It produces transfer knowledge in the queries I ask it. Not even hundreds of Google searches can replace transfer knowledge.