The Mind Behind Hate Speech On Social Media
- Lidi Garcia
- Aug 6
- 4 min read

Social media is increasingly being used to spread hate speech and misinformation, which can negatively affect people's mental health and even influence violent behavior. A recent study analyzed thousands of Reddit posts using artificial intelligence and found that the language patterns of hate speech and misinformation resemble those found in online communities discussing certain mental disorders, such as antisocial personality disorder or anxiety. This can help us better understand the causes of these online behaviors and how to address them.
In recent years, social media has become an important part of our daily lives. At the same time, it has become a cause for concern due to the spread of hateful messages. This type of speech is characterized by offensive statements directed at groups of people who share a common characteristic, such as ethnicity, religion, or sexual orientation.
This content, in addition to fueling prejudice, can influence society offline, increasing discriminatory behavior and even violence. A serious example was the UN's accusation that Facebook contributed, even if indirectly, to acts of genocide by failing to contain hate speech.
Another problem with social media is misinformation, the dissemination of false or misleading information, often contrary to scientific evidence. This type of content has already caused harm in important areas of health, such as vaccines, pandemics, and conventional treatments for diseases such as cancer.

Misinformation can cause people to distrust medicine or adopt dangerous habits. Furthermore, it is believed that both hate speech and misinformation may arise from a lack of empathy, a difficulty in putting oneself in another person's shoes.
Recent research has begun to examine whether there is a relationship between this type of online behavior and certain personality traits, especially those linked to the so-called Dark Triad: narcissism (excessive ego), Machiavellianism (a tendency toward manipulation), and psychopathy (a lack of empathy and impulsivity).
These traits share similarities with some personality disorders already recognized by psychiatry. Therefore, scientists want to better understand whether people who spread hate or lies online may be dealing with a deeper mental disorder.
This relationship, however, is still not entirely clear. Even though some personality traits resemble diagnosable mental disorders, they are not the same. Furthermore, mental health is a very broad field, and there are many other mental conditions that may be related to these online behaviors, but which have not yet been fully studied.
Therefore, researchers want to better understand how people's overall mental health may be linked to what they post online, especially when it involves hate or misinformation.

To help with this task, scientists are using artificial intelligence tools like ChatGPT. This type of technology is capable of analyzing large volumes of text quickly and intelligently.
For example, the GPT3 model can transform text into vectors, that is, numerical representations that capture the meaning behind the words. Texts with similar meanings end up with similar vectors, so even without being specifically trained for the task, the system can indicate whether a post resembles hate speech, misinformation, or language patterns associated with mental disorders.
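The idea of turning text into comparable vectors can be illustrated with a deliberately simplified sketch. The word-count "embedding" below is a toy stand-in: real GPT3 embeddings are dense, learned vectors obtained from a neural network, but the principle, that texts with similar content end up close together in vector space, is the same. The example posts are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a word-count vector. Real GPT3 embeddings are
    dense, learned vectors, but they serve the same purpose: mapping
    text to numbers so similarity can be measured."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors (1.0 = identical direction)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

post1 = "they do not deserve to live here"
post2 = "they do not deserve any respect here"
post3 = "the new vaccine trial results look promising"

# Posts with overlapping vocabulary land closer together in vector space.
print(cosine(embed(post1), embed(post2)))  # high: similar wording
print(cosine(embed(post1), embed(post3)))  # low: unrelated topic
```

In the actual study, each Reddit post becomes one such vector (with thousands of dimensions rather than word counts), and the analysis then asks which groups of posts cluster together.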

Topological map of all analyzed communities, colored by category; the yellow nodes correspond to psychiatric disorders.
In this study, researchers from the University of Alabama at Birmingham, USA, collected thousands of posts published in Reddit communities, a widely used site for online discussions. They chose specific communities, some linked to topics like hate speech, others focused on misinformation, and also groups discussing mental health and psychiatric disorders. Using GPT3, they transformed these texts into vectors and analyzed the language patterns.
Afterward, they applied a technique called topological data analysis, which helps create visual maps that show how different types of discourse are related to each other. With this method, the scientists were able to see how hate speech and misinformation relate to certain psychiatric disorders by observing how people express themselves in these posts.
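Topological data analysis itself is beyond a short sketch, but the underlying comparison can be illustrated with a simplified, hypothetical example: average each community's post embeddings into a single centroid vector, then measure how close the centroids are. The three-dimensional vectors and community names below are invented for illustration; the study worked with high-dimensional GPT3 embeddings and the Mapper-style visual maps described above.

```python
import math

# Hypothetical 3-dimensional embeddings standing in for the
# high-dimensional GPT3 vectors used in the study.
communities = {
    "hate_speech":   [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "antisocial_pd": [[0.85, 0.15, 0.05], [0.9, 0.05, 0.1]],
    "neutral":       [[0.1, 0.1, 0.9], [0.0, 0.2, 0.8]],
}

def centroid(vectors):
    """Average one community's post embeddings into a single vector."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

cents = {name: centroid(vecs) for name, vecs in communities.items()}

# Communities whose members write similarly have nearby centroids.
print(cosine(cents["hate_speech"], cents["antisocial_pd"]))  # high
print(cosine(cents["hate_speech"], cents["neutral"]))        # low
```

A high similarity between two communities' language, as in the first comparison, is the kind of signal that, at scale, produced the resemblances reported in the study's results.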
The results showed that hate speech most closely resembles the speech patterns found in communities of people with Antisocial Personality Disorder, Borderline Personality Disorder, Narcissistic Personality Disorder, Schizoid Personality Disorder, and Complex Post-Traumatic Stress Disorder.

Topological map colored by percentage of occurrence in map areas related to hate speech: (A) Narcissistic Personality Disorder. (B) Schizoid Personality Disorder. (C) Antisocial Personality Disorder. Yellow = no occurrence of the disorder; red = entirely composed of occurrences of the disorder.
Speech patterns in communities spreading misinformation were more similar to those in neutral communities, but still showed some similarity to groups discussing anxiety disorders.
These findings are still in their infancy, but they help us understand how what we say online may be connected to our mental health. Furthermore, they show that artificial intelligence can be an important ally in mapping these relationships and, perhaps, in the future, help create more effective strategies to combat hate and misinformation online.

READ MORE:
Topological data mapping of online hate speech, misinformation, and general mental health: A large language model based study
Andrew William Alexander and Hongbin Wang
PLOS Digit Health, 4(7): e0000935, July 29, 2025
Abstract:
The advent of social media has led to an increased concern over its potential to propagate hate speech and misinformation, which, in addition to contributing to prejudice and discrimination, has been suspected of playing a role in increasing social violence and crimes in the United States. While literature has shown the existence of an association between posting hate speech and misinformation online and certain personality traits of posters, the general relationship and relevance of online hate speech/misinformation in the context of overall psychological wellbeing of posters remain elusive. One difficulty lies in finding data analytics tools capable of adequately analyzing the massive amount of social media posts to uncover the underlying hidden links. Machine learning and large language models such as ChatGPT make such an analysis possible. In this study, we collected thousands of posts from carefully selected communities on the social media site Reddit. We then utilized OpenAI’s GPT3 to derive embeddings of these posts, which are high-dimensional real-numbered vectors that presumably represent the hidden semantics of posts. We then performed various machine-learning classifications based on these embeddings in order to identify potential similarities between hate speech/misinformation speech patterns and those of various communities. Finally, a topological data analysis (TDA) was applied to the embeddings to obtain a visual map connecting online hate speech, misinformation, various psychiatric disorders, and general mental health.


