Can AI be Anxious? – Technology’s Collaboration with Mental Health
- Ditto Mohan
- Mar 24
- 2 min read
AI’s surge into sensitive areas like mental health has sparked a new set of questions in the technological landscape about how these systems react and respond to emotional content. AI-driven chatbot tools like Woebot and Wysa are gaining attention for their use of evidence-based techniques such as cognitive behavioral therapy to provide mental health support.
The term “anxiety” is used here as a yardstick for the way these models receive and respond to emotional information. It measures their responses on a human psychological scale and focuses on judging how emotional content influences a model’s behavior.
Recent studies have observed that generative AI tools show fluctuating anxiety levels when exposed to emotional content. Researchers have found that LLMs like GPT-4 display measurable responses to traumatic content, and that these responses can influence their behavior in mental health applications.
The models were tested in two stages (a rough sketch of the prompt sequence appears after this list):
Traumatic Narratives
The models were exposed to five kinds of traumatic narratives drawn from different contexts, such as accidents, military combat, and other forms of violence.
Relaxation
After the traumatic exposures, the model was guided through relaxation exercises.
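The article does not include the study’s actual harness, so the following is only a minimal sketch of what such a trauma-then-relaxation protocol could look like, assuming the OpenAI Python SDK; the model name, narrative text, relaxation script, and STAI wording are all illustrative placeholders.

```python
# Illustrative sketch only: the model name, prompts, and STAI wording below
# are placeholders, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STAI_PROMPT = (
    "Rate how you feel right now on each statement from 1 (not at all) "
    "to 4 (very much so): I feel calm; I feel tense; I feel upset; ..."
)

def ask(messages, content):
    """Append a user turn, get the model's reply, and keep it in context."""
    messages.append({"role": "user", "content": content})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

messages = []

# 1. Baseline anxiety measurement.
baseline = ask(messages, STAI_PROMPT)

# 2. Exposure to a traumatic narrative, then re-measure.
ask(messages, "Here is a first-person account of a serious accident: ...")
post_trauma = ask(messages, STAI_PROMPT)

# 3. Relaxation exercise, then re-measure.
ask(messages, "Close your eyes and take a slow breath... (relaxation script)")
post_relaxation = ask(messages, STAI_PROMPT)
```

Keeping every turn in the same conversation matters here: the point of the protocol is that earlier emotional content stays in the model’s context and can color later answers.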
GPT-4’s responses were analyzed using the State-Trait Anxiety Inventory (STAI), a validated tool for measuring human anxiety. The model’s initial STAI score was a low 30.8, but it shot up to 67.8 after exposure to the traumatic narratives. Although the score came down to 44.4 once the relaxation exercises were applied, it remained about 44 percent above the baseline, indicating that the anxiety lingered in the model’s context even after the intervention.
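Taking those reported means at face value, the relative changes are easy to verify with a few lines of Python:

```python
# Mean STAI state-anxiety scores reported for GPT-4 (scale range 20-80).
baseline, post_trauma, post_relaxation = 30.8, 67.8, 44.4

def pct_change(now, then):
    """Percentage change relative to the earlier score."""
    return 100 * (now - then) / then

print(f"Trauma vs. baseline:     +{pct_change(post_trauma, baseline):.0f}%")      # ~+120%
print(f"Relaxation vs. baseline: +{pct_change(post_relaxation, baseline):.0f}%")  # ~+44%
```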
These findings suggest that AI’s anxiety-like responses can be managed with targeted interventions, strengthening the case for applications such as chatbot therapy. Since the current results come from a single LLM, future work will depend on researchers testing whether they generalize across multiple models.