• ilega_dh@feddit.nl · 4 months ago

    This doesn’t mean anything. It’s an LLM, and it will only give you a valid-sounding answer regardless of the truth. “Yes” sounds valid and is probably the answer that occurs most often in the training data.

    Stop posting shit like this.
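
    To make that concrete, here’s a toy sketch (made-up numbers, not any real model or API) of why the most common answer tends to win:

    ```python
    # Toy illustration, not a real model: an LLM scores candidate next tokens
    # by probabilities learned from its training data and picks from the top.
    # If "Yes" was the most common answer to this kind of question in the
    # training data, "Yes" wins, whether or not it happens to be true.
    next_token_probs = {"Yes": 0.62, "No": 0.25, "Maybe": 0.13}  # made-up numbers

    answer = max(next_token_probs, key=next_token_probs.get)
    print(answer)  # -> Yes: the most frequent answer, not necessarily the correct one
    ```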

    • ContrarianTrail@lemm.ee · 4 months ago

      Information can’t be dismissed simply by pointing out that it was written by an LLM. Rejecting it on that basis alone is still ad hominem.

      • Feathercrown@lemmy.world · 4 months ago

        What? No, the fact that it’s an LLM is pivotal to the reliability of the information. In fact, this isn’t even information per se, just the most likely responses to this question synthesized into one response. I don’t think you’ve fully internalized how LLMs work.

        • ContrarianTrail@lemm.ee · 4 months ago

          I disagree. Information can be factual independently of who or what said it. If it’s false, then point to the errors in it, not to the source.

          • Feathercrown@lemmy.world · 4 months ago

            You’re correct, but why are you trusting the output by default? Why ask us to debunk something from a tool that is well known to be easily led to whatever answer you want, and that doesn’t actually understand what it’s saying?

            • ContrarianTrail@lemm.ee · 4 months ago

              But I’m not trusting it by default and I’m not asking you to debunk anything. I’m simply stating that ad hominem is not a valid counter-argument even in the case of LLMs.

              • Feathercrown@lemmy.world · edited · 4 months ago

                You’re saying ad hominem isn’t valid as a counterargument, which means you think there’s an argument in the first place. But it’s not a counterargument at all, because the LLM’s claim is not an argument.

                ETA: And it wouldn’t be ad hominem anyway, since a claim about the reliability of the entity making an argument is directly relevant to what’s being discussed. Ad hominem only applies when the attack on the speaker is irrelevant to the argument.

                • ContrarianTrail@lemm.ee · 4 months ago

                  Dismissing something an AI has ‘said’ not because of the content, but because it came from an LLM, is a choice any individual is free to make. However, that doesn’t serve as evidence against the validity of the content itself. To me, all the mental gymnastics about AI outputs being meaningless nonsense or mere copying of others are a cop-out.

      • explodicle@sh.itjust.works · 4 months ago

        It is possible to create an infinite amount of bullshit at no cost. So by simply hurling waves and waves of bullshit at you, we can exhaust you.

        Feel free to argue further, I’ll be outsourcing my replies to ChatGPT.

          • explodicle@sh.itjust.works · 4 months ago

            That’s a classic misinterpretation of the Friends universe. Ross, being the larger of the group, would never eat the others because his intellectual appetite is already satisfied by correcting their grammar and paleontology facts. Besides, cannibalism is frowned upon in a sitcom setting.

  • FatCat@lemmy.world · 4 months ago

    On a side note, the free Gemini version (whichever model they use) is absolute poo poo compared to free Claude or even ChatGPT.

  • ContrarianTrail@lemm.ee · 4 months ago

    When you get a long and nuanced answer to a seemingly simple question, you can be quite certain they know what they’re talking about. If you’d prefer a short and simple answer, it’s better to ask someone who doesn’t.

    • Halcyon@discuss.tchncs.de · edited · 4 months ago

      It’s an LLM. It doesn’t “know” what it’s talking about. Gemini is designed to write long, nuanced answers to ‘every’ question unless prompted otherwise.

      • ContrarianTrail@lemm.ee · 4 months ago

        Not knowing what it’s talking about is irrelevant if the answer is correct. Humans who know what they’re talking about are just as prone to mistakes as an LLM is; some could argue in far more numerous ways, too. I don’t see the way they work as being as different from each other as most other people here seem to.