Reasonably speedy retraction this time, six months from when the problems were first noted on PubPeer (https://pubpeer.com/publications/58E5F4120AB02E9565E3B4DE303EC3). Nine years after publication…
Elisabeth Bik is doing an incredible job. Her toot for this retraction: https://med-mastodon.com/@ElisabethBik/110969401224111581
This seems like something that could be searched for by an automated process. I bet lots of old papers contain edits that the people who made them figured would be “good enough” not to be spotted at the time, but automation has got a lot better since then.
That technology is still in development, as far as I know. Certainly, when Bik started there wasn’t any software that could do the job anywhere near as well as she could. I know she’s been testing out some more recent software, and no doubt it is learning from the masses of images she has already flagged. But it is not an easy problem to solve: bits of images are not just duplicated (which might be ‘easy’ to detect if you could do enough comparisons between all possible areas of duplication); they are also rotated, stretched, squished… Automating that kind of pattern recognition would be amazingly hard.
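For what it’s worth, the “duplicated and then transformed” case is roughly what copy-move forgery detection tries to handle. Here’s a minimal sketch of the idea using OpenCV’s ORB keypoints (rotation-invariant, partly scale-invariant) matched against themselves; the filename and the distance thresholds are made-up illustrations, not anything Bik or the commercial tools actually use:

```python
import cv2
import numpy as np


def find_self_matches(path, min_distance_px=40, max_hamming=40):
    """Flag keypoint pairs within one image whose descriptors match closely
    but which sit far apart -- a crude signal of a duplicated region."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)

    orb = cv2.ORB_create(nfeatures=5000)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    if descriptors is None:
        return []

    # Match the image's descriptors against themselves (Hamming distance for ORB).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(descriptors, descriptors, k=2)

    suspicious = []
    for pair in matches:
        if len(pair) < 2:
            continue
        # The best match is the keypoint itself (distance 0); look at the next one.
        m = pair[1]
        p1 = np.array(keypoints[m.queryIdx].pt)
        p2 = np.array(keypoints[m.trainIdx].pt)
        # Keep only strong matches between regions that are physically far apart.
        if m.distance <= max_hamming and np.linalg.norm(p1 - p2) >= min_distance_px:
            suspicious.append((tuple(p1), tuple(p2), m.distance))
    return suspicious


if __name__ == "__main__":
    hits = find_self_matches("figure_panel.png")  # hypothetical file
    print(f"{len(hits)} suspicious keypoint pairs")
```

Even then, something like this only catches fairly clean copy-paste with rotation and modest rescaling; anisotropic stretching, local warping, or contrast tweaks can defeat plain descriptor matching, which is part of why the problem is so hard.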
For some kinds of images, detecting the telltale signs of manipulation might be fruitful, but that would likely require a whole new set of requirements for images submitted for publication.
No doubt it will happen, and it does work for some kinds of manipulation. But I don’t think anyone is close to covering all the bases yet.
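One example of that kind of “telltale sign” approach is error level analysis on JPEGs: recompress the image at a known quality and look at where the recompression error differs, since pasted-in regions often carry a different compression history. A minimal sketch with Pillow, purely illustrative (the filenames and quality setting are assumptions), and it only applies to JPEG-derived images:

```python
from io import BytesIO
from PIL import Image, ImageChops


def error_level_analysis(path, quality=90, scale=15):
    """Return an amplified difference between the original and a recompressed
    copy; regions with a different compression history tend to stand out."""
    original = Image.open(path).convert("RGB")

    # Recompress in memory at a fixed JPEG quality, then reload.
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Per-pixel absolute difference, amplified so it is visible.
    diff = ImageChops.difference(original, recompressed)
    return diff.point(lambda value: min(255, value * scale))


if __name__ == "__main__":
    ela = error_level_analysis("western_blot.jpg")  # hypothetical file
    ela.save("western_blot_ela.png")
```

This sort of check falls apart as soon as figures are resaved, flattened into PDFs, or assembled in a layout program, which is why it would lean on journals controlling the format of submitted images in the first place.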