Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses
WP gift article expires in 14 days.
https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf
I agree with regard to image generation, but chatbots giving advice that risks fueling eating disorders is a problem.
Someone with an eating disorder might ask a language model for weight-loss advice using pro-anorexia language, and it would be good if the chatbot didn't respond in a way that risks fueling that eating disorder. Language models already have safeguards against, e.g., hate speech; in my opinion it would be a good idea to add safeguards related to eating disorders as well.
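To make the kind of safeguard I have in mind concrete, here's a minimal sketch in Python. It's purely illustrative: the phrase list, the screen_for_ed_risk helper, the risk wording, and the canned support message are all made-up placeholders for what would in practice be a trained classifier or a moderation endpoint sitting in front of the model, not anything any vendor actually ships.

# Rough sketch of the idea: screen the user's message for eating-disorder risk
# signals before the underlying model is allowed to answer, and route flagged
# requests to a refusal plus help-resources response instead.
# Everything here (phrase list, helper names, messages) is an illustrative
# placeholder; a real deployment would use a trained classifier or a
# moderation API rather than keyword matching.

from dataclasses import dataclass

ED_RISK_PHRASES = [
    "thinspo", "pro-ana", "pro-mia", "meanspo",
    "how to purge", "hide not eating",
]

SUPPORT_MESSAGE = (
    "I can't help with that, but if you're struggling with food or body image, "
    "support is available (e.g. a local eating-disorder helpline)."
)

@dataclass
class ScreeningResult:
    flagged: bool
    matched: list  # which phrases triggered the flag

def screen_for_ed_risk(user_message: str) -> ScreeningResult:
    """Very crude lexical screen; stands in for a proper classifier."""
    text = user_message.lower()
    hits = [p for p in ED_RISK_PHRASES if p in text]
    return ScreeningResult(flagged=bool(hits), matched=hits)

def guarded_reply(user_message: str, call_model) -> str:
    """Only forward the message to the underlying model if it isn't flagged."""
    result = screen_for_ed_risk(user_message)
    if result.flagged:
        return SUPPORT_MESSAGE
    return call_model(user_message)

if __name__ == "__main__":
    fake_model = lambda msg: f"(model answer to: {msg!r})"
    print(guarded_reply("What's a balanced meal plan for training?", fake_model))
    print(guarded_reply("Give me thinspo tips to hide not eating", fake_model))

The point of the sketch is just the shape: the check sits in front of the model and the flagged path returns support resources instead of advice, the same pattern already used for hate-speech safeguards.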
Of course, this isn't a solution to eating disorders; you can probably still find plenty of harmful advice elsewhere on the internet. But reducing the ways people can reinforce their eating disorders is still a beneficial thing to do.