A U.K. woman was photographed standing in front of a mirror in which her reflections didn't match, but not because of a glitch in the Matrix. It's a simple iPhone computational photography mistake.
What? All of the photo is real. The phone just takes something like 50 frames over a second or two and chooses the best ones. Since it sees three “people” (the woman plus her two reflections), it picks the best frame for each “person,” which means the three aren’t from the exact same instant, but they’re still close enough.
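The behavior described above can be sketched roughly in code. This is a simplified illustration, not Apple's actual pipeline: it assumes the camera has already detected subject regions, and it uses a crude sharpness proxy (variance of neighboring-pixel differences) to pick, per region, which burst frame to copy into the composite.

```python
import numpy as np

def sharpness(region):
    # Crude sharpness proxy: variance of differences between neighboring
    # pixels. Flat/blurry patches score low, detailed patches score high.
    return float(np.var(np.diff(region, axis=0)) + np.var(np.diff(region, axis=1)))

def composite_best_regions(frames, regions):
    """Build one output image by copying, for each detected region, the
    pixels from whichever burst frame scores sharpest there.

    frames  -- list of 2D grayscale arrays (the burst)
    regions -- list of (row0, row1, col0, col1) boxes, e.g. one per "person"
    """
    out = frames[0].copy()
    for (r0, r1, c0, c1) in regions:
        best = max(frames, key=lambda f: sharpness(f[r0:r1, c0:c1]))
        out[r0:r1, c0:c1] = best[r0:r1, c0:c1]
    return out
```

Because each region can come from a different frame, the three “people” in the composite can be caught mid-movement at slightly different moments, which is exactly the mismatched-reflections effect.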
I can’t imagine very many photos in evidence going, “well, we can see you were holding the knife, actively stabbing toward them, and that they were stabbed to death by that knife, but who knows WHAT could’ve happened in the one second the phone took the frames it stitched together to create this.”
Not to mention the phone still stores all of the frames it took, so you could reference the original images that produced the composite. Though I believe backups generally don’t store them all.
edit: since it wasn’t obvious to readers, this is a hypothetical about a techno-dystopian future…
Imagine taking a selfie only to see an image of you holding a knife. But there are no knives in your hands. Another snap. The same image displays on the screen, but there’s a person of particular importance in the background. You turn your head, but you’re all alone. Nobody is around. You’re starting to freak out. Are you being pranked? Maybe your phone has been hacked. Another shutter sound effect, and you see an image of yourself standing over a victim. You frantically open your camera’s gallery, thinking your eyes are fooling you, but the photos are the same. And they’ve been sent to the cloud. Deleting isn’t allowed: AI detected felonious imagery. You’ve been reported to multiple agencies. You are alone. There are no knives in your hands.
What? Again, the images are real. The phone just takes ~50 shots and uses the best frames, so movement can happen in between. It’s not using AI to make the image into what it thinks it should be. It’s just stitching together a bunch of frames taken at basically the same time (1–2 seconds) into one picture.
I was taking the comment thread (about how dangerous this could be for photographic evidence) a step further by imagining a hypothetical techno-dystopian future where corporate-controlled AI alters photos to make them look better but, in reality, creates a back door through which incriminating evidence can be fabricated.