What do you think, ChatGPT? If it can create almost perfect summaries with a prompt, why wouldn’t it work in reverse? AI built into Windows could flag potentially subversive thoughts typed into Notepad or Word, as well as flag “problematic” clicks and compare them to previously profiled behavior. AI built into your GPU could build a behavioral profile based on your interactions with your hentai Sonic the Hedgehog game.
Don’t need AI for any of this. It already happens with OS and application telemetry.
Hello, I’m NVIDIA, and I send every app you use home as telemetry. But you know it’s only so I can tell which apps your driver crashes in, of course. I certainly wouldn’t send that data when nothing crashes. Right?
And it’s been escalated with AI
True, you don’t need AI for security problems…
…but it is introducing tons of them, for little to no benefit.
About a month ago I saw a post for an MSFT-led AI security conference.
None of it, absolutely none of it, was about how to, say, leverage LLMs to aid in heuristic scanning for malware, or anything like that.
Literally every talk and booth at the conference was about the security flaws in LLMs and how to mitigate them.
I’ll come back and edit my post with the link to what I’m talking about.
EDIT: Found it.
Unless I am missing something, literally every talk/panel here is about how to mitigate the security risks to your system/db that are introduced by LLMs.
Sorry, what was that? “BUY BUY BUY”?
I think it was back during the Cambridge Analytica days, but I read an article saying the average person is tracked across over 5,000 data points. So we’re already kinda f’d.
Defeatism plays to their advantage; you can always minimize the tracking. E.g. https://www.goodreads.com/book/show/54033555-extreme-privacy
Ah, yeah, sorry, didn’t mean to come off defeatist. I see that now.
As someone who recently ditched Alexa, blocks his smart TVs, and runs everything through PiHole and a VPN, I’m definitely…sorta trying.
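For anyone who wants to copy the Pi-hole part: it’s just DNS-level blocking, so a minimal sketch is a hosts-format blocklist you add as a source (the domains below are placeholders, not real vendor endpoints; swap in whatever your TV or assistant actually phones home to):

    # hypothetical telemetry endpoints, not real vendor domains
    0.0.0.0 telemetry.smarttv-vendor.example
    0.0.0.0 metrics.voice-assistant.example
    0.0.0.0 stats.gpu-driver.example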
If you don’t start limiting your house’s electrical hours, are you even trying?
We’ve noticed.
Initiating countermeasures
While it could, and I have no doubt that someone will try to do this, it’s not the reason it’s being shoehorned into everything.
It’s partly because it’s the tech thing that’s ‘so hot right now’, so every tech enthusiast and hustler thinks it can be used everywhere to solve everything. And it’s partly because it’s a legitimately huge advancement in what computers are capable of doing, one with a lot of room for growth and improvement, that can be genuinely useful in places like Notepad.
Yeah, gen AI is the perfect demo tech: it looks amazing if you don’t look too close. Plus it’s the perfect bullshitting machine, no wonder CEOs love it, it talks like they do. AI has its uses, and it’s doing good work in the fields you don’t hear much about. But there are way more pets.coms right now that will go bust soon, and the viable businesses will float to the surface. Hell, we’re going through that right now, with Web 2.0 companies moving out of the growth phase and into the enshittification phase.
They’re just sending every query home right now. Actual training is still resource-intensive and very expensive. I suspect they’re just grabbing as much data as they can get their hands on from everyone, tagging it with unique identifiers, and storing it for later training. Once the data they have is worth more than the cost of training on it, they’ll go ahead and train a giant model of everyone.
At that point they’ll sell query time to corporations. “How many people would pay $400 for trainers with OLED screens on the sides?” “Oh really? Yes, I’d like to buy ads for all of those people.”