Beauty Filters and Algorithmic Bias on Social Media Are Harming Black Teens’ Mental Health
A study in JAMA Network links social-media beauty filters and algorithmic bias with next-day increases in anxiety, depression, sleep disruption, and concentration problems among Black adolescents. Researchers surveyed 141 Black youths aged 11–19 and found an average of six race-related online encounters per day, including about 3.2 racist experiences. Encounters with algorithmic bias, such as filters that lighten skin or the suppression of racial-justice posts, predicted worse mental-health symptoms the following day. The authors urge stronger platform tools, policy intervention, and digital-literacy programs, and plan longer-term research on protective factors.

Beauty filters and algorithmic bias are taking a toll
Social media beauty filters and other race-related online experiences, long criticized by people of color for promoting Eurocentric beauty standards, may be harming Black adolescents' mental health. Short clips on platforms such as TikTok show Black users reacting as filters change their eye color, lighten their skin, or alter their facial features to appear more 'European.' New research finds that these experiences, together with exposure to online racism, can disrupt sleep and concentration and increase anxiety and depressive symptoms the following day.
What the study examined
The study, published in JAMA Network, analyzed how Black adolescents' exposure to online racism, including traumatic videos of police violence, direct online racial discrimination, and algorithmic bias, correlates with next-day mental-health symptoms. On average, participants reported six race-related online experiences daily: roughly 3.2 racist encounters and about 2.8 positive experiences.
Who conducted the research
The research was led by Brendesha Tynes, professor of education at the University of Southern California, with co-authors Devin English (Rutgers, public health) and Taylor McGee (Christopher Newport University). The analysis focused on survey responses from 141 Black adolescents aged 11–19 drawn from a larger, nationally representative study. The larger study initially recruited 1,138 adolescents and asked 504 to complete a seven-day diary-style survey in December 2020.
Key findings
- Exposure to algorithmic bias, such as filters that lighten skin or straighten hair, or the suppression of racial-justice content, predicted higher levels of anxiety and depressive symptoms the next day.
- Participants who reported more encounters with algorithmic bias had higher anxiety regardless of age or gender.
- Adolescents reported algorithmic-bias encounters about once every three days on average, and averaged nearly 20 positive race-related experiences per week compared with more than 22 racist experiences per week.
"We need studies that are documenting what's happening," Tynes said, adding that platforms should provide tools to help young people manage and critique these experiences.
Why this matters
Beyond immediate mental-health effects, algorithmic bias can shape beliefs and behaviors in harmful ways. The researchers warn that biased search results and recommendation systems can steer young people toward misleading or extremist content, with potentially severe consequences.
Recommendations and next steps
The authors call for stronger platform tools, federal and state-level policy responses, and digital-literacy programs to help youth recognize and resist algorithmic bias. Tynes and colleagues plan to expand the research—collecting data over longer periods and studying how resilience, positive cultural messaging and education about Black history may protect adolescents.
Implications: Policymakers, educators, and platforms should prioritize protections and educational resources that reduce harm from online racism and algorithmic bias and that bolster young people's ability to critique and cope with these messages.
