AI shouldn’t be anywhere near law enforcement, including automated patrol software.
It’s not AI, though. They’re just using buzzwords, because what they described is functionally no different from AFIS. It’s just a poorly written algorithm.
I’m aware, but unfortunately I’m not big enough in the tech industry to coin differentiating terms. AI is an extremely broad term, ranging from literal if-else statements to LLMs and generative AI. Unfortunately, the specifics usually get buried in the term.
Don’t be scared of the inevitable
I wish people would stop using the term AI, because these devices are far from what AI was originally supposed to mean.
I always thought machine learning was descriptive and made sense. I guess it just didn’t get investors erect enough.
Yeah, things that weren’t called AI years back are just getting called AI now.
“AI” was always an imprecise term - even compilers used to be called AI once
AI has been used to refer to all kinds of dynamic programs in the history of computing: algebraic solvers, edge detection, fuzzy decision systems, player programs for video games and tabletop games. So when you say AI is this or that, you’re being rather prescriptivist about it.
The problem with AI and ML is more one of it being presented to the public by grifters as a magical one-stop solution to almost any problem. What term was used hardly matters; it was the propaganda that carried the term. It would be like saying the name Nike is the reason for the shoe brand’s success and not its marketing.
So discredit the grifters, and if you want to destroy the term, then look to dilute it by using it to describe even more things. It was never really a useful term to begin with. I’ll leave you with this quote:
A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.
It’s almost like the incessant marketing of standard optimisation algorithms as artificial intelligence has diluted the tech industry with meaningless buzzwords.
TLDR:
In 2018, a man in a baseball cap stole thousands of dollars worth of watches from a store in central Detroit.
The AI was trained on a database of mostly white people. The photos of people of colour in the dataset were generally of worse quality, as default camera settings are often not optimised to capture darker skin tones.
Mr Williams’ photo didn’t come up first. In fact, it was the ninth-most-probable match.
Regardless…
Officers drove to Mr Williams’ house and handcuffed him.
They arrested him in front of his five- and two-year-old kids…
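The “ninth-most-probable match” detail is the key mechanic here. Systems like this typically do a one-to-many search: compare an embedding of the probe photo against every face in the database and return a ranked candidate list. A minimal sketch of that ranking (hypothetical names and random vectors standing in for real face embeddings; cosine similarity is an assumption about how such systems score matches):

```python
import numpy as np

def rank_candidates(probe, gallery, names):
    """Rank gallery faces by cosine similarity to the probe embedding.

    A one-to-many search always returns *some* ranked list, even if the
    true person isn't in the gallery at all. The scores are similarities,
    not probabilities that a candidate is the culprit.
    """
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe                 # cosine similarity per row
    order = np.argsort(scores)[::-1]         # best match first
    return [(names[i], float(scores[i])) for i in order]

# Stand-ins: 10 database entries with 128-d "embeddings", one blurry probe.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 128))
names = [f"person_{i}" for i in range(10)]
probe = rng.normal(size=128)

for name, score in rank_candidates(probe, gallery, names)[:3]:
    print(name, round(score, 3))
```

The point of the sketch: every probe produces a full ranked list regardless of whether the real person is in the database, which is exactly why treating any entry on that list as an identification (let alone the ninth) is a human failure, not a computed fact.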
AI with bad training data + lazy cops who didn’t learn how to use the tools they were given = this mess
Sounds like the same old law enforcement trend; technology deployed as an excuse generator.
The computer didn’t get it wrong; the computer did exactly what it was programmed to do. Blaming the computer implies this can be solved by fixing the computer, that it “just wasn’t good enough yet”, when it was the humans who actually did it. The humans who were supposed to exercise their judgment are the ones who got it wrong. You can’t fix that from the computer.
PICNIC, or PEBKAC.
Ever since we let law enforcement use facial recognition technology, they’ve been arresting people for false positives, sometimes for long periods of time.
It’s not just camera problems and poor training on non-white faces; people actually do look too much alike, especially when you run the tech on blurry, low-res security footage.
I used to work in security camera monitoring, and I never understood why insurers would touch some of these companies with an electrified cattle prod.
These would be pretty high-value companies with valuable stuff on premises that could be stolen: construction equipment, medical equipment, guns, cars, steel, copper, lead, etc. And their security cameras would max out at 720p, have a giant spider web on them without fail, and would invariably be mounted on some wobbly pole that blew around in the wind, causing 300 false positives a minute. We literally used to switch those cameras off.
Why don’t they insist on equipment that doesn’t cost the company $4.50 from Walmart?
The only cameras we worked with that were actually any good were the number plate recognition cameras, but they were specialist kit and absolutely useless for anything other than number plate recognition. But boy, did they get you that number plate.
This is the best summary I could come up with:
Facial recognition could analyse a blown-up still taken from a security tape, sift through a database of millions of driver licence photos, and identify the person who did the crime.
Months later, the facial recognition system used by Detroit police combed through its database of millions of driver licences to identify the criminal in the grainy security tapes.
By January 2020, as Mr Williams had his mug shot taken in the Detroit detention centre, civil liberties groups knew that black people were being falsely accused due to this technology.
It would give law enforcement and security agencies quick access to up to 100 million facial images from databases around Australia, including driver licences and passport photos.
That didn’t stop the then government from ploughing ahead with its planned national facial recognition system, says Edward Santow, an expert on responsible AI at the University of Technology Sydney, and the Australian Human Rights Commissioner at the time.
Despite this, last month Senate estimates heard the federal police tested a second commercial one-to-many face matching service, PimEyes, earlier this year.
The original article contains 1,870 words, the summary contains 162 words. Saved 91%. I’m a bot and I’m open source!