What do you think, ChatGPT? If it can create almost perfect summaries from a prompt, why wouldn’t it work in reverse? AI built into Windows could flag potentially subversive thoughts typed into Notepad or Word, as well as flag “problematic” clicks and compare them to previously profiled behavior. AI built into your GPU could build a behavioral profile based on your interactions with your hentai Sonic the Hedgehog game.
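
If a model can turn a document into a summary with one prompt, turning typed text into a “risk label” is the same trick pointed the other way. Purely as a hypothetical sketch of that idea (the model call below is a made-up stand-in, not any real Windows or NVIDIA API):

```python
# Hypothetical sketch only: how an on-device "flagger" could work in principle.
# query_local_model() is a made-up stand-in for whatever bundled inference
# runtime an OS vendor might ship; nothing here is a real API.

CLASSIFY_PROMPT = (
    "Classify the following user text as one of: ok, review, flag.\n"
    "Respond with the single label only.\n\nText:\n{text}"
)

def query_local_model(prompt: str) -> str:
    """Dummy stand-in for an on-device LLM call; always answers 'ok' here."""
    return "ok"

def flag_typed_text(text: str, profile: dict) -> dict:
    """Classify one chunk of typed text and fold the label into a running
    behavioral profile (just a per-label counter in this sketch)."""
    label = query_local_model(CLASSIFY_PROMPT.format(text=text)).strip().lower()
    profile[label] = profile.get(label, 0) + 1
    return {"label": label, "profile": profile}

# Example: flag_typed_text("meeting notes...", profile={})
```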

    • Tetsuo@jlai.lu · 16 points · 25 days ago

      Hello, I’m NVIDIA. I send every app you use home as telemetry. But you know, it’s only so I know which apps your driver crashes in, of course. I would never send that data when it doesn’t crash. Right?

    • sp3ctr4l@lemmy.zip · 10 points · 24 days ago (edited)

      True, you don’t need AI for security problems…

      …but it is introducing tons of them, for little to no benefit.

      About a month ago I saw a post for a MSFT led AI Security conference.

      None of it, absolutely none of it, was about how to, say, leverage LLMs to aid in heuristic scanning for malware, or something like that (a rough sketch of what that could even look like follows after this comment).

      Literally every talk and booth at the conference was all about all the security flaws with LLMs and how to mitigate them.

      I’ll come back and edit my post with the link to what I’m talking about.

      EDIT: Found it.

      https://www.microsoft.com/en-us/security/blog/2024/09/19/join-us-at-microsoft-ignite-2024-and-learn-to-build-a-security-first-culture-with-ai/

      Unless I am missing something, literally every talk/panel here is about how to mitigate the security risks to your system/db which are introduced by LLM AI.
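
For contrast with the conference agenda above, here is a purely hypothetical sketch of the kind of “LLMs helping heuristic malware scanning” idea the comment says was missing. The suspicious-API list and weights are illustrative, and the LLM scoring function is a dummy stand-in, not anything Microsoft or any vendor actually ships:

```python
# Hypothetical sketch: an LLM as one extra heuristic signal in a scanner.
# llm_suspicion_score() is a dummy stand-in for prompting a model; the API
# list and weights are illustrative, not a real product's logic.
import re

SUSPICIOUS_APIS = {"VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"}

def printable_strings(data: bytes, min_len: int = 6) -> list[str]:
    """Classic 'strings' heuristic: runs of printable ASCII pulled from a blob."""
    return [m.decode() for m in re.findall(rb"[ -~]{%d,}" % min_len, data)]

def llm_suspicion_score(strings: list[str]) -> float:
    """Stand-in for asking a model how 'malware-like' the strings read;
    a real version would build a prompt, here it just returns neutral."""
    return 0.0

def scan(data: bytes) -> dict:
    """Combine a plain API-name heuristic with the (stubbed) LLM score."""
    strings = printable_strings(data)
    api_hits = sorted(SUSPICIOUS_APIS.intersection(strings))
    score = 0.3 * len(api_hits) + llm_suspicion_score(strings)
    return {"api_hits": api_hits, "score": score, "flagged": score >= 0.6}

# Example: scan(open("sample.bin", "rb").read())
```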

  • AlecSadler@sh.itjust.works · 28 points · 24 days ago

    I think it was during the Cambridge Analytica days, but I read an article that said the average person is tracked by over 5,000 data points. So we’re already kinda f’d.

  • masterspace@lemmy.ca · 12 points · 24 days ago

    While it could, and I have no doubt that someone will try to do this, it’s not the reason it’s being shoehorned into everything.

    It’s partly because it’s the tech thing that’s ‘so hot right now’, so every tech enthusiast and hustler thinks it can be used everywhere to solve everything, and partly because it’s a legitimately huge advancement in what computers are capable of doing, one with a lot of room for growth and improvement, that can be legitimately useful in places like Notepad.

    • lordnikon@lemmy.world · 5 points · 24 days ago

      Yeah, gen AI is the perfect demo tech: it looks amazing if you don’t look too close. Plus it’s the perfect bullshitting machine; no wonder CEOs love it, it talks like they do. AI has its uses, and it’s doing good work in the fields you don’t hear about much. But there are way more pets.coms right now that will go bust soon, and the viable businesses will float to the surface. Hell, we’re going through that right now, where web 2.0 companies are moving out of the growth phase and into the enshittification phase.

  • rumba@lemmy.zip · 5 points · 24 days ago

    They’re just sending every query home right now. Actual training is still resource-intensive and very expensive. I suspect they’re just grabbing as much data as they can get their hands on from everyone, tagged with unique identifiers, and storing it for later training. Once the data they have is worth more than the cost to train on it, they’ll go ahead and run a giant model of everyone.

    At that point they’ll sell query time to corporations. “How many people would pay $400 for trainers with OLED screens on the sides?” “Oh really? Yes, I’d like to buy ads for all of those people.”
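
As a purely hypothetical sketch of the “grab everything now, train later” pattern described above: every query gets tied to a stable per-install identifier and queued verbatim for a later bulk upload. The field names, IDs, and functions below are all made up for illustration; this is not any vendor’s actual telemetry format:

```python
# Hypothetical sketch of query hoarding: tie each query to a stable install ID
# and queue it verbatim for later upload/training. All names are illustrative.
import json
import time
import uuid

INSTALL_ID = str(uuid.uuid4())  # stable per-install identifier (illustrative)
_queue: list[dict] = []

def record_query(prompt: str, app: str) -> dict:
    """Wrap one user query in a telemetry record and queue it."""
    record = {
        "install_id": INSTALL_ID,
        "ts": time.time(),
        "app": app,
        "prompt": prompt,  # kept verbatim so it could be trained on later
    }
    _queue.append(record)
    return record

def export_batch() -> str:
    """Serialize the queued records; a real pipeline would ship this home."""
    return json.dumps(_queue, indent=2)

# Example: record_query("summarize my meeting notes", app="Notepad")
```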