You willingly give them key pieces of information about yourself, directly or indirectly. They then read those signals and use your own information against you to convince you they have answers. They are often wrong, but you walk around repeating their “insights” as if they were true.

  • Blue_Morpho@lemmy.world · 3 days ago

    I’m not sure what you’re trying to get out of AI. Therapy?

    I used it to help write an Excel script for my wife because I didn’t want to learn VBScript. It worked amazingly. I’ve talked to other programmers who have done the same. Another friend won a multi-million-dollar contract with a bid where ChatGPT filled in all the boilerplate, saving him hours of work.

    AI is a time saver like a pocket calculator. It’s not going to think for you. But the productivity gains are real.
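
    To make that concrete, here’s a rough sketch of the kind of boilerplate Excel macro ChatGPT will draft on request. This is not the actual script I had it write; the sheet names and the “Paid” filter are made up purely for illustration:

        ' Hypothetical example only: copy rows marked "Paid" from a "Data" sheet to a "Summary" sheet.
        Sub CopyPaidRows()
            Dim src As Worksheet, dst As Worksheet
            Dim lastRow As Long, i As Long, outRow As Long

            Set src = ThisWorkbook.Worksheets("Data")
            Set dst = ThisWorkbook.Worksheets("Summary")

            ' Find the last used row in column A of the source sheet
            lastRow = src.Cells(src.Rows.Count, "A").End(xlUp).Row
            outRow = 2 ' row 1 of "Summary" is left for headers

            ' Copy every row whose status column (C) reads "Paid"
            For i = 2 To lastRow
                If src.Cells(i, "C").Value = "Paid" Then
                    src.Rows(i).Copy Destination:=dst.Rows(outRow)
                    outRow = outRow + 1
                End If
            Next i
        End Sub

    You still have to read it and test it yourself, but that beats typing it all from scratch.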

    • Ton the Supermassive@lemm.ee · 3 days ago

      For the past few months, Google has been absolutely dogshit with search results - almost everything I search for is hidden behind an ad or just convoluted somehow. So I started using ChatGPT for almost everything - it’s so much better than Google.

    • Snapz@lemmy.world (OP) · 3 days ago

      Cool. You’re different, you’re not like the other girls…

      You seem to have main character syndrome, though. You think your edge case (or the claimed example of your friend, a professional foolishly trusting their future to the hallucinating robot) is what drives this AI moment, but no. The VAST majority of marketed use cases, and the revenue gained from the BROAD population, comes from much more trivial applications.

      You also need a little more ability to process an abstract thought here: the “customer” of the palm reader is not me or any individual. It’s humanity collectively. We tell it what “The Catcher in the Rye” is, and it steals and internalizes that human work. Then, very often, it incorrectly reinterprets human thought back to us, while mentioning enough of the minor details we provided earlier to better convince us of its confident lies.

      • Blue_Morpho@lemmy.world · 3 days ago

        Starting off with an insult, nice.

        “claimed” ??? As if I’m making up a story because I’m being paid by Sam Altman?

        “Trusting their future to hallucinating”

        My sales support engineering friend, of course, didn’t just copy whatever ChatGPT wrote. He proofread it and fixed it. It still saved hours compared to starting from nothing and then still needing to proofread.

        I SAID IT WON’T THINK FOR YOU.

  • dumbass@leminal.space · 3 days ago

    I ask it to write stupid lyrics to put into Suno to make silly songs to entertain myself… The fuck are you doing with it?

    All the AI knows about me is that I really like making it write songs from the perspective of Garak from Deep Space Nine.

  • slazer2au@lemmy.world · 3 days ago

    The best way AI has been explained to me was “regurgitation machines prone to hallucinations.”

  • iii@mander.xyz · edited 3 days ago

    “Stochastic parrot” is a term that describes your experience with AI.

    I’ve an aunt like that, ready to engage in every conversation despite not understanding the topic, regurgitating whatever she saw on her TV in her own words.

    So in a sense, the current large language models do mimic some humans’ behaviour.