• FaceDeer@fedia.io

    Except it is capable of meaningfully doing so, just not in every conceivable situation. And those rare flubs are the ones that get spread around and laughed at, such as this example.

    There’s a nice phrase I commonly use: “don’t let the perfect be the enemy of the good.” These AIs are good enough at this point that I find them very useful. Not perfect, of course, but they don’t have to be, as long as you’re prepared for the occasions, like this one, where they give a wrong result. Like any tool, you have some responsibility to know how to use it and what its capabilities are.

    • conciselyverbose@sh.itjust.works

      No, it isn’t.

      You’re allowing a simple tool with literally zero reading comprehension to do your reading for you. It’s not surprising that you lack an understanding of what the tech actually is.

      • FaceDeer@fedia.io

        Your comment is simply counterfactual. I do indeed find LLMs to be useful. Saying “no you don’t!” is frankly ridiculous.

        I’m a computer programmer. I don’t have direct experience with LLMs themselves, but I understand the technology around them and have written programs that make use of them. I know what their capabilities and limitations are.

        • conciselyverbose@sh.itjust.works

          Your claim that it’s capable of doing what it claims to do isn’t just false.

          It’s an egregious, massively harmful lie, and repeating it is always extremely malicious and inexcusable behavior.

          • FaceDeer@fedia.io

            I have genuinely found LLMs to be useful in many contexts. I use them to brainstorm and flesh out ideas for tabletop roleplaying adventures, to write song lyrics, to write Python scripts to do various random tasks. I’ve talked with them to learn about stuff, and verified that they were correct by checking their references. LLMs are demonstrably capable of these things. I demonstrated it.

            Go ahead and refrain from using them yourself if you really don’t want to, for whatever reason. But exclaiming “no it doesn’t!” in the face of them actually doing the things you say they don’t do is just silly.

            • conciselyverbose@sh.itjust.works

              They absolutely cannot reliably summarize the results of searches, which is what this post is about and what the OP in and of itself conclusively proves.

              Any meaningful rate of failures at all makes them massively, catastrophically damaging to humanity as a whole. “Just don’t use them” absolutely does not prevent their harm. Pushing them as competent is extremely fucking unacceptable behavior.

              And this is all completely ignoring the obscene energy costs associated with making web searches complete and utter dogshit.

              • FaceDeer@fedia.io

                > They absolutely cannot reliably summarize the results of searches, which is what this post is about

                The problem is that it did summarize the results of this search: those results included one of those “if the Earth were the size of a grain of sand, Alpha Centauri would be X kilometers away” analogies. It did exactly the thing you’re saying it can’t do.
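
                To make the arithmetic behind that kind of analogy concrete, here is a minimal Python sketch. Every figure in it (a 1 mm grain of sand, round values for Earth’s diameter and the distance to Alpha Centauri) is my own illustrative assumption, not a number taken from the actual search result:

                ```python
                # Scale analogy: if Earth were shrunk to a grain of sand,
                # how far away would Alpha Centauri be at the same scale?
                # All values are rough, illustrative assumptions.

                EARTH_DIAMETER_KM = 12_742      # Earth's actual mean diameter
                GRAIN_DIAMETER_KM = 1e-6        # assume a 1 mm grain of sand
                ALPHA_CENTAURI_KM = 4.13e13     # ~4.37 light-years, in km

                scale = GRAIN_DIAMETER_KM / EARTH_DIAMETER_KM
                scaled_km = ALPHA_CENTAURI_KM * scale
                print(f"Scaled distance to Alpha Centauri: {scaled_km:,.0f} km")
                # With these assumptions: roughly 3,200 km
                ```

                Different assumed sizes produce wildly different scaled figures, which is exactly how a small-sounding number of kilometers can legitimately appear on a page about Alpha Centauri.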

                > Any meaningful rate of failures at all makes them massively, catastrophically damaging to humanity as a whole.

                Nothing is perfect. Does that make everything a massive, catastrophic threat to humanity? How have we managed to survive this long?

                You’re ridiculously overblowing this. It’s a “ha ha, looks like the AI made a whoopsie because I didn’t understand what I actually asked it to do” situation. It’s not Skynet coming to convince us to eat cyanide.

                > And this is all completely ignoring the obscene energy costs associated with making web searches complete and utter dogshit.

                Of course it’s ignoring that. It’s not real.

                You realize that energy costs money? If each web search cost an “obscene” amount, how is Microsoft managing to pay for it all? Why are they paying for it? Do you think they’ll continue paying for it indefinitely? It’d be a completely self-solving problem.
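
                Even a back-of-envelope calculation shows the tension in that claim. The numbers below are placeholders I made up purely for illustration, not anyone’s real figures:

                ```python
                # Back-of-envelope sketch of the "energy costs money" point.
                # Both inputs are made-up placeholders, not real data.

                cost_per_query_usd = 0.01        # assumed marginal cost of one AI-assisted search
                queries_per_day = 1_000_000_000  # assumed daily search volume

                daily_cost = cost_per_query_usd * queries_per_day
                print(f"Daily cost under these assumptions: ${daily_cost:,.0f}")
                # $10,000,000 per day; nobody pays that indefinitely for nothing
                ```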

                • conciselyverbose@sh.itjust.works

                  Summaries distinguish substance from nonsense. Something cannot be described as a summary of a piece of content if it does not accurately portray the substance of that content.

                  LLMs aren’t merely imperfect. They’re dumpster-fire misinformation machines with no redeeming qualities. Of course it’s not Skynet; Skynet was intelligent. This isn’t within 100 orders of magnitude of intelligence.

                  Companies burn obscene amounts of money on moonshots all the time, even ones that have no possibility of success. A willingness to lose billions burning energy to degrade every single search is not an indication that it isn’t a nightmare for the environment (and again, it’s for literally no purpose, because every search with an LLM is worse than one without it).

                  • FaceDeer@fedia.io

                    No, a summary is just a condensed version of some larger work. If the larger work contains bullshit then so can the summary; that doesn’t stop it from being a summary. As you say, a summary accurately portrays the substance of that content. In this case there was content that said Alpha Centauri was 13 km from Earth, so the summary said that too.

                    This is really not complicated.

                    > Companies burn obscene amounts of money on moonshots all the time, even ones that have no possibility of success.

                    If you think it has no possibility of success, sit back and relax as AI goes away.

    • btaf45@lemmy.worldOP

      AIs are definitely not “good enough” to give correct answers to science questions. I’ve seen lots of other incorrect answers before seeing this one. While it was easy to spot that this answer is incorrect, how many incorrect answers are not obvious?

      • FaceDeer@fedia.io

        Then go ahead and put “science questions” into one of the areas that you don’t use LLMs for. That doesn’t make them useless in general.

        I would say that a more precise and specific restriction would be “they’re not good at questions involving numbers.” That’s narrower than “science questions” in general; they’re still pretty good at dealing with the concepts involved. LLMs aren’t good at math, so don’t use them for math.

        • btaf45@lemmy.worldOP

          AI doesn’t seem to be good at anything in which there is a right answer and a wrong answer. It works best for things where there are no right/wrong answers.