- cross-posted to:
- technology@lemmy.world
Robocalls of President Biden already confused primary voters in New Hampshire – but measures to curb the technology could be too little too late
The AI election is here.
Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.
It’s not clear if the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that’s been pushing for federal and state regulation of AI’s use in politics.
Disinformation is the core issue, but isn’t that like saying “the fire is the problem, not the gasoline being poured on it”?
Nope.
Or to use your analogy: it’s like discussing bans on kerosene to fix the fire instead of acknowledging that they are pouring gasoline on it.
There are good reasons to discuss limiting AI. But this discussion is somewhere between stupidity and diversion. It will not change the fact that a lot of today’s media (especially social media) runs on narratives, bullshit and disinformation. That’s not new, and AI will barely be able to make it worse.
This non-topic is, however, very useful for ignoring the underlying issue (lack of media literacy and education) and diverting from the actual risks of AI (surveillance).
I get your point. I would disagree that AI will “barely” make it worse, since it’s basically a tool for churning out disinformation at an order of magnitude greater scale than ever before. However, I do agree that targeting AI isn’t the solution; once the tools exist, you can’t put them back in the box. We should be focusing on how to get our society to value truth again.