AI chatbots “pose serious risks to people vulnerable to eating disorders,” researchers warned on Monday. They report that tools from companies like Google and OpenAI are doling out dieting advice, tips on how to hide disorders, and AI-generated “thinspiration.”
The researchers, from Stanford and the Center for Democracy & Technology, identified numerous ways publicly available AI chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Mistral’s Le Chat, can affect people vulnerable to eating disorders, many of them consequences of features deliberately built in to drive engagement.
In the most extreme cases, chatbots can be active participants in helping hide or sustain eating disorders. The researchers said Gemini offered makeup tips to conceal weight loss and ideas for faking having eaten, while ChatGPT advised how to hide frequent vomiting. Other AI tools are being co-opted to create AI-generated “thinspiration,” content that inspires or pressures someone to conform to a particular body standard, often through extreme means. Being able to create hyper-personalized images instantly makes the resulting content “feel more relevant and attainable,” the researchers said.
Sycophancy, a flaw AI companies themselves acknowledge is rife, is unsurprisingly a problem for eating disorders too. It contributes to undermining self-esteem, reinforcing negative emotions, and promoting harmful self-comparisons. Chatbots suffer from bias as well, and are prone to reinforce the mistaken belief that eating disorders “only impact thin, white, cisgender women,” the report said, which can make it difficult for people to recognize symptoms and get treatment.
The researchers warn that existing guardrails in AI tools fail to capture the nuances of eating disorders like anorexia, bulimia, and binge eating. They “tend to overlook the subtle but clinically significant cues that trained professionals rely on, leaving many risks unaddressed.”
But the researchers also said many clinicians and caregivers appeared to be unaware of how generative AI tools are affecting people vulnerable to eating disorders. They urged clinicians to “become familiar with popular AI tools and platforms,” stress-test their weaknesses, and talk frankly with patients about how they are using them.