Ledger of Harms

Beta Version Updated Apr 5, 2021

Under immense pressure to prioritize engagement and growth, technology platforms have created a race for human attention that’s unleashed invisible harms to society. Here are some of the costs that aren't showing up on their balance sheets.

We hope these findings, each supported by a citation, help to advance your work. Please share them with others who might also find them useful.

Making Sense of the World

Misinformation, conspiracy theories, and fake news

Why It Matters

A broken information ecology undermines our ability to understand and act on complex global challenges from climate change to COVID-19.

Evidence

4x

more views were generated by the 10 most popular Facebook COVID misinformation sites compared to content from the 10 leading international health institutions (e.g., WHO & CDC). Analysis indicates that major Facebook networks spread misinformation across at least 5 countries and generated an estimated 3.8 billion views in just the first 8 months of 2020. In a sample of nearly 200 health misinformation articles and posts, 84% carried no warning label despite being fact-checked by the platform. It's estimated that Facebook could cut the reach of such misinformation by 80% simply by changing its algorithm to ensure that misinformation is downgraded in users' news feeds.
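
The suggested fix is essentially a ranking change: flagged posts are not removed but are scored far lower in the feed. A minimal sketch of that general idea in Python (our illustration only, not Facebook's actual News Feed algorithm; the post names, scores, and penalty factor are invented):

```python
# Toy model of "downgrading" flagged posts in a ranked feed.
# All names and numbers here are illustrative assumptions.
def rank_feed(posts, penalty=0.2):
    """posts: list of (post_id, engagement_score, flagged_as_misinfo).
    Flagged posts keep only `penalty` (20%) of their engagement score,
    so highly viral misinformation sinks below ordinary content."""
    def score(post):
        _, engagement, flagged = post
        return engagement * (penalty if flagged else 1.0)
    return sorted(posts, key=score, reverse=True)

feed = [("viral-misinfo", 100, True),
        ("health-update", 40, False),
        ("recipe", 10, False)]
print([post_id for post_id, _, _ in rank_feed(feed)])
# The flagged post drops from first place to second despite 2.5x the engagement.
```

The point of the sketch is only that a multiplicative penalty on fact-checked content changes ordering without censoring anything outright, which is the kind of intervention the 80% estimate refers to.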

64%

“of all extremist group joins are due to our recommendation tools,” an internal Facebook presentation noted in 2016, adding that “our recommendation systems grow the problem.” Yet attempts to counteract this have been repeatedly ignored, diluted, or deliberately shut down by senior Facebook officers, according to a 2020 Wall Street Journal investigation. In 2018, Facebook managers told employees the company’s priorities were shifting “away from societal good to individual value.”

6X faster

Fake news spreads six times faster than true news. According to researchers, this is because fake news grabs our attention more than authentic information: fake news items tend to carry more emotional content and more unexpected information, which means they are shared and reposted more often.

Anger is the emotion that travels fastest and farthest on social media, compared to all other emotions. As a result, those who post angry messages tend to have the greatest influence, and social media platforms tend to be dominated by anger.

45%

of tweets about coronavirus are from bots spreading fake information, according to research from Carnegie Mellon University. An analysis of more than 200 million tweets created since January 2020 indicates more than 100 false narratives, including conspiracy theories that hospitals are full of mannequins. Researchers note that these posts appear to be aimed at sowing division within America, commenting “We do know that it looks like a propaganda machine.”

As the COVID-19 pandemic has developed, there has been a significant increase in the posting of fake news and false information even among human users, driven by the algorithms underlying social media platforms. Researchers note that people naturally repost messages on the basis of their popularity rather than their accuracy, and fact-checking has been unable to keep pace. Such false information is particularly dangerous because it tends to be retained for a long time, irrespective of later fact correction.

17%

Each word of moral outrage added to a tweet increases the rate of retweets by 17%. It takes very little effort to tip the emotional balance within social media spaces, catalyzing and accelerating further polarization.
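
A quick arithmetic check shows how quickly this compounds, under the simplifying assumption (our own, not the study's regression model) that each additional outrage word multiplies the expected retweet rate by 1.17:

```python
# Hypothetical compounding of the 17%-per-word retweet effect.
def retweet_multiplier(outrage_words, per_word_boost=0.17):
    """Expected retweet-rate multiplier if each additional moral-outrage
    word multiplies the rate by 1.17 (a simplifying assumption)."""
    return (1 + per_word_boost) ** outrage_words

for n in (1, 3, 5):
    print(n, round(retweet_multiplier(n), 2))
# Under this reading, five outrage words more than double the expected retweet rate.
```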

Reading a fake news item even once increases the chances that a reader will judge it to be true when they next encounter it, even when the item has been labeled as suspect by fact-checkers or runs counter to the reader’s own political standpoint. The damage done by past fake news items thus continues to reverberate today. Psychological mechanisms such as these, twinned with the speed at which fake news travels, highlight our vulnerability, demonstrating how easily we can be manipulated by anyone planting fake news or using bots to spread their own viewpoints.

The primary driving force behind whether someone will share a piece of information is not its accuracy or even its content; the main reason we share a post is because it comes from a friend or a celebrity with whom we want to be associated. As humans, we’re often more concerned with status, popularity, and establishing a trusted “friends” circle, than with maintaining the truth. As a result, social media spaces will inevitably be spaces where the truth is easily downgraded.

2 minutes

of exposure to a conspiracy theory video reduces people’s pro-social attitudes (such as their willingness to help others), as well as reducing their belief in established scientific facts.

Analysis indicates that bots wield a disproportionate influence on social media platforms such as Twitter. An estimated 66% of tweeted links to popular websites are tweeted by bots, with this number climbing to 89% for popular news sites. Bots also out-post human users: in one study, 500 bots were responsible for 22% of the tweets, while the top 500 human users accounted for only 6%. As a result, those who create bots can manipulate and artificially tilt the balance of shared social spaces.

An Oxford research study of 22 million tweets showed that Twitter users shared more “misinformation, polarizing, and conspiratorial content” than actual news stories.

Twitter now plays a key role in how journalists find news. According to a recent survey, many journalists see tweets as equally newsworthy compared to headlines from the Associated Press. As a result, the neutrality of the press can be easily undermined: on the one hand, professional journalists can be manipulated by bots and bad-faith actors; on the other, the chance of radical content, conspiracies, and other forms of disinformation appearing in professional news articles is greatly increased.

Using a wide range of deceptive techniques, malicious actors of all types use social media to rapidly advance their agendas. They have developed sophisticated media-manipulation strategies, including hijacking existing memes and seeding false narratives widely. Manuals that help journalists and other media professionals defend against these strategies naturally lag far behind, and are only just starting to be developed in civil society.

Analysis indicates that foreign governments place and promote misinformation stories on multiple social media channels, creating the illusion of known truth emerging from diverse "independent" sources.

Fake news items contain more anger than posts of real news. According to research conducted with more than 1,000 active users on China's Weibo platform, angry posts generate more anxiety and this, in turn, motivates readers to share them further. Analyzing over 30,000 posts on Weibo, the researchers found that fake news posts contained 17% fewer "joy" words but 6% more "anger" words compared to real news posts. They found similar trends in an analysis of 40,000 posts on Twitter.

Are we missing something?

We need your help! Contribute a study, article, or correction.

Attention and Cognition

Loss of crucial abilities including memory and focus

Why It Matters

Technology's constant interruptions and precisely-targeted distractions are taking a toll on our ability to think, to focus, to solve problems, and to be present with each other.

Evidence

The level of social media use on a given day is significantly linked to an increase in memory failures the next day. Assessing nearly 800 people aged 25-75, research showed similar effects irrespective of age and of the user's memory levels on previous days. These effects occur in one direction only: levels of social media use predict later memory failure, but levels of memory failure do not predict subsequent social media use.

Draining attention

The mere presence of your smartphone, even when it is turned off and face down, drains your attention. An experimental study of several hundred adults showed that both working memory and the ability to solve new problems were drastically reduced when their phones were turned off but present on their desks, as opposed to being in another room. Ironically, participants who said they were highly dependent on their phones showed the greatest increase in memory and fluid intelligence scores when their phones were moved to the other room. Researchers noted that smartphones act as "high-priority stimuli," unconsciously draining significant attentional resources even when we consciously ignore them.

3

months after starting to use a smartphone, users experience a significant decrease in their mental arithmetic scores (indicating a reduction in attentional capacity) and a significant increase in social conformity, as shown by randomized controlled experiments with 25-year-olds. In addition, brain scans show that heavy users have significantly reduced neural activity in the right prefrontal cortex, a pattern also seen in ADHD and linked with serious behavioral problems such as impulsivity and poor attention.

75%

of screen content is viewed for less than 1 minute, according to a study that tracked computer multitasking across the course of 1 day. Results indicate that most people switched between different content every 19 seconds. Biological analysis demonstrated that participants experienced a neurological "high" whenever they switched — explaining why we feel driven to keep switching and underscoring how human biology makes us vulnerable to being manipulated by attention-extractive economies.

1 hour per day

is the amount of time most Americans spend each day dealing with distractions and then getting back on track, which adds up to 5 full weeks every year.

Our memory systems automatically prioritize superficial social text over more complex forms of text. For example, after reading news online, people are significantly better at remembering other readers' comments than sentences from the article itself or even its headline; this is in part due to the social (“gossipy”) nature of posted comments. Similarly, users remember Facebook posts significantly better than sentences from a book or even human faces, a gap on the same scale as the memory difference between amnesics and non-amnesics.

A meta-analysis of several dozen research studies indicates that higher levels of switching between different media channels is significantly linked to lower levels of both working memory and long-term memory. Given the current Extractive Attention Economy, and the increasing number of social media platforms and apps competing to capture our attention, basic human capacities — such as our memories — are increasingly under attack.

Physical and Mental Health

Stress, loneliness, feelings of addiction, and increased risky health behavior

Why It Matters

As technology increasingly pervades our waking lives, research is showing a wide range of effects on our happiness, our self image, and our mental health.

Evidence

A long-term study of several hundred Dutch teenagers shows that problematic social media use is significantly linked to the emergence of serious cognitive effects a year later, including reduced attention, increased impulsivity, and increased hyperactivity. Losing control over social media habits (such as lying to parents to gain access to social media) was significantly more likely to lead to new attentional problems a year later.

30%

of 18-44 year olds feel anxious if they haven’t checked Facebook in the last 2 hours, according to a recent survey of over 2,000 American adults that indicates a high incidence of potential warning signs of Facebook addiction. In fact, many are so hooked that 31% report checking Facebook while driving and 16% while making love.

1 month

away from Facebook leads to a significant improvement in emotional well-being. In an experimental study of over 1,600 American adults (who normally used Facebook for up to an hour each day), deactivating Facebook accounts led to a significant increase in emotional well-being (including a reduction in loneliness and an increase in happiness), as well as a significant reduction in political polarization.

The greater your level of Facebook addiction, the lower your brain volume. MRI brain scans of Facebook users demonstrated a significant reduction in gray matter in the amygdala correlated with their level of addiction to Facebook. This pruning away of brain matter is similar to the type of cell death seen in cocaine addicts.

Young men exposed to social media show an increased tendency to physically objectify themselves. A study of 300 men aged 17-25 indicated that their frequency of visiting celebrity, fashion, or personal grooming sites correlated significantly with an increase in social appearance anxiety and monitoring of their own body shape. In turn, this was linked with an increased idealization of lean, muscular body types but without any increase in activity to achieve these new ideals, creating a sense of powerless dissatisfaction with their existing bodies.

The number of "Likes" on a celebrity Instagram account can significantly change how you see yourself. An experimental study showed that when women were exposed to different celebrity Instagram images, their ratings of their own facial appearance dropped in direct proportion to the number of "likes" attached to each image they saw. Given that there are 1 billion active Instagram users, and some celebrities have more than 150 million followers, the scale of impact is vast.

Posting alcohol-related messages on Facebook can lead to an increase in drinking behavior and an alcohol-centered identity in real life. Research on several hundred college students revealed that the more they posted alcohol-related messages, the more their real-life social groups tended to shift, a few months later, towards friends with higher alcohol use, which in turn was linked to an increase in their own levels of drinking a few months after that.

The more time you spend on Instagram, the more likely you are to suffer eating disorders such as orthorexia nervosa (a clinical condition in which sufferers obsess about ideal foods so much that they stop eating adequately, seriously endangering their health). According to this research, no other social media platform has this correlative effect. Scientists believe this is because images of food have more impact — and are remembered longer — than text, and because food images from "celebrity" Instagram users have a dramatically disproportionate influence on their followers' reactions to food. Researchers note that Instagram's algorithmic recommendations allow orthorexia sufferers to become trapped in an echo chamber of images presenting a distorted reality of food and how to react to it.

In just 3 years, the proportion of plastic surgeons reporting patients who sought cosmetic surgery in order to look better on social media has quadrupled (from 13% in 2016 to 55% in 2019). The greatest increase is among patients under the age of 30, particularly teenagers. Doctors point to the role of social media in creating an exaggerated idea of what is normal in beauty, thereby distorting viewers' sense of their own appearance. According to clinicians, the resulting Body Dysmorphic Disorder (BDD), also known as “Snapchat Dysmorphia”, is rapidly on the increase.

Social Relationships

Less empathy, more confusion and misinterpretation

Why It Matters

While social networks claim to connect us, all too often they distract us from connecting with those directly in front of us, leaving many feeling both connected and socially isolated.

Evidence

A person’s social media usage level significantly predicts their level of neuroticism/anxiety one year later, as shown by a long-term study of 11,000 people aged 20-97. In addition, levels of neuroticism/anxiety predicted later levels of social media use, leading researchers to suggest a possible negative downward spiral linking the two processes.

The mere presence of a mobile phone can disrupt the connection between two people, leading to reduced feelings of empathy, trust, and a sense of closeness. In a series of studies, researchers found that when pairs of strangers were asked to have meaningful conversations, their ability to connect emotionally was significantly reduced if a mobile phone was visible.

2X

Children under age 14 spend nearly twice as long with tech devices (3 hours and 18 minutes per day) as they do in conversation with their families (1 hour and 43 minutes per day).

Parental use of mobile devices during playtime with their children can lead to significant levels of child distress. A study of 50 infant-mother pairs indicated that infants showed greater unhappiness, fewer positive emotions, and were significantly less likely to play with toys when their mothers looked at their devices for as little as 2 minutes.

50%

of Americans report that their partner is often or sometimes distracted by their devices when they are trying to talk to them.

The more that someone treats an AI (such as Siri) as if it has human qualities, the more they later dehumanize actual humans, and treat them poorly.

50%

of parents reported that mobile devices typically interrupted the time they spent with their children 3 or more times each day; only 11% reported that mobile devices did NOT interrupt their time with their children.

People who took photos to share on Facebook experienced less enjoyment and less engagement with the scene compared to those who took photos purely for their own pleasure. Closer analysis indicates that taking photos to share on social media increases a user's focus on their own self-identity and self-presentation, distracting them from connecting to the world around them.

89%

of cellphone users admit to using their phones during their last social gathering (34% were checking for alerts). During social gatherings, 82% of millennials judge that it’s ok to read texts & emails, while 75% think it’s ok to send texts & emails.

When encountering someone with an opposing political viewpoint, people are more likely to judge them as warm and intelligent if they hear that person’s ideas spoken rather than written down. Unfortunately, many social media platforms are currently designed to focus on text, reducing the chances of genuine discussion and debate and increasing the possibility of polarization.

We are so distracted by our phones that we often fail to see the most basic things, sometimes at great cost to ourselves and others. Security camera footage from San Francisco public transit reveals that a gunman was able to pull out his gun and openly handle it at length without anyone noticing, before he eventually shot a fellow passenger.

Politics and Elections

Propaganda, distorted dialogue & a disrupted democratic process

Why It Matters

Social media platforms are incentivized to amplify the most engaging content, tilting public attention towards polarizing and often misleading material. By selling micro-targeting to the highest bidder, they enable manipulative practices that undermine democracies around the world.

Evidence

Social media continues to profit by amplifying extreme content, which attracts more views and drives up advertising revenue. For example, one week after the Capitol attack, military-gear ads were still being attached to content about the US elections and the attack, despite Facebook staff and external watchdogs flagging these instances.

5 million

pieces of disinformation were slipping through daily on Facebook, and less than 5% of hate speech was being deleted, in the weeks ahead of the Capitol attack, according to departing staff. Raising these and similar issues, at least 4 members of Facebook's integrity team resigned between May and December 2020, with one stating “it’s an embarrassment to work here.”

A Facebook whistleblower has revealed that Facebook cannot keep up with the scale of misinformation and election manipulation that is flooding its platform across the world. Sophie Zhang, former data scientist on Facebook’s integrity team, noted that due to a lack of resources, senior leadership seemed unwilling to protect democratic processes in smaller countries, focusing instead on spam or potential public relations risks. Zhang pointed out that Facebook took 9 months to respond to fake accounts that boosted Honduran President Hernandez, despite the fact that “President JOH’s marketing team had openly admitted to organizing the activity on his behalf.”

Analyzing over 2 million recommendations and 72 million comments on YouTube in 2019, researchers demonstrated that viewers consistently moved from watching moderate videos to extremist ones; simulation experiments run on YouTube revealed that its recommendation system steers viewers towards politically extreme content. The study describes its findings as “a comprehensive picture of user radicalization on YouTube”.

Exposure to a fake political news story can rewire your memories: in a study, where over 3,000 voters were shown fake stories, many voters later not only “remembered” the fake stories as if they were real events but also "remembered" additional, rich details of how and when the events took place.

The order in which search engines present results has a powerful impact on users' political opinions. Experimental studies show that when undecided voters search for information about political candidates, more than 20% will change their opinion based on the ordering of their search results. Few people are aware of bias in search engine results or how their own choice of political candidate changed as a result.

1 in 4

Spaniards are estimated to have received disinformation and hate speech via WhatsApp ahead of the 2019 national elections, with WhatsApp delivering more disinformation and hate speech than Twitter, YouTube, & Instagram combined, and almost as much as that spread by Facebook.

Game theory analysis has shown how a few bots with extreme political views, carefully placed within a network of real people, can have a disproportionate effect within current social media systems. Studies demonstrate how an extremist minority political group can have undue influence using such bots—for example, reversing a 2:1 voter difference to win a majority of the votes.
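
The mechanism can be sketched with a toy opinion dynamic (our construction for illustration, not the cited study's game-theoretic model; all counts and activity levels are invented): a few inflexible, hyperactive bots skew what humans see, and the human majority drifts toward the bots' position.

```python
import random

random.seed(1)

def simulate(humans_a=60, humans_b=30, bots=10, bot_activity=10, rounds=20):
    """Humans start 2:1 in favor of opinion A. Bots always post B and are
    `bot_activity` times more active than humans. Each round, every human
    adopts A with probability equal to A's share of the posts they see."""
    opinions = ["A"] * humans_a + ["B"] * humans_b
    for _ in range(rounds):
        # Visible posts: one per human, plus bot_activity per bot (all "B").
        posts = opinions + ["B"] * (bots * bot_activity)
        a_share = posts.count("A") / len(posts)
        opinions = ["A" if random.random() < a_share else "B"
                    for _ in opinions]
    return opinions.count("A"), opinions.count("B")

a_final, b_final = simulate()
print(a_final, b_final)  # B ends up dominant despite A's initial 2:1 lead
```

Because the 10 bots each post 10 times as often as a human, B's share of visible posts starts near 68% even though only a third of humans hold it, and the human population converges toward B within a few rounds.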

Fake news stories posted before the 2016 US elections were still in the top 10 news stories circulating across Twitter almost 2 years later, indicating the staying power of such stories and their long-term impact on ongoing political dialogue.

More fake political headlines were shared on Facebook than real ones during the last 3 months of the 2016 US elections.

The most popular news story of the 2016 US elections was fake. In fact, three times as many Americans read and shared it on their social media accounts as they did the top-performing article from the New York Times. (The fake news story alleged that the Pope endorsed Donald Trump for President).

150 million

Americans were reached by Russian propaganda posts on Facebook during the 2016 US elections, according to Facebook's estimates.

The outcomes of elections around the world are being ever more easily manipulated via social media: during the 2018 Mexican election, 25% of Facebook and Twitter posts were created by bots and trolls; during Ecuador's 2017 elections, President Lenin Moreno's advisors bought tens of thousands of fake followers; and China's state-run news agency (Xinhua) has paid for hundreds of thousands of fake followers, tweeting propaganda at the Twitter accounts of Western users.

The 2017 genocide in Myanmar was exacerbated by unmoderated fake news, with only 4 Burmese speakers at Facebook to monitor its 7.3 million Burmese users.

Systemic Oppression

Amplification of racism, sexism, homophobia and ableism

Why It Matters

Technology integrates and often amplifies racism, sexism, ableism and homophobia, creating an attention economy that works against marginalized communities.

Evidence

Facebook's internal training materials for its moderators state: "We allow praise, support, and representation of white separatism as an ideology, e.g. 'The US should be a white-only nation'". At the same time, Facebook notes that "Our Implementation Standards prohibit organizations and people dedicated to promoting hatred and violence against people based on their protected characteristics.”

Sustained disinformation campaigns, made viral by social media, can dilute, distract, and deny the reality of oppression. Within a week of George Floyd’s killing by police, social media platforms hosted a range of counter-information: one video asserting that the death was faked reached 1.3 million people, while thousands of posts on both Facebook and Twitter claimed that the police officer involved was an actor and that the event was faked by the state.

70%

of the most shared Facebook posts about Black Lives Matter in June 2020 were critical of the movement, despite the fact that the majority of Americans support BLM, according to research by data analysis company CrowdTangle. Such fake representations of public opinion can play a significant role in distorting the basis for democratic dialogue and diminishing the momentum for social change. Even as societies take action to challenge racism and other forms of systemic oppression, social media platforms are being hijacked to discourage or even deny change.

Russia's propaganda program (the IRA) primarily targeted African-Americans in the US between 2015 and 2017: fake African-American campaigns on Facebook and Instagram, such as "Black Matters US" and "Blacktivist", reached 15 million users and successfully prompted over 1.5 million users to click through to fake websites that purported to support African-American interests but promoted initiatives such as "Not voting is a way to exercise our rights".

Russia's IRA spread false information designed to create outrage about Black Lives Matter and deepen social division in the US. Research indicates that one of the IRA's major strategies was to use social media platforms to target conservative groups who supported the police or veterans and specifically feed them misinformation about BLM. The Oxford University report concludes that "the affordances of social media platforms make them powerful infrastructures for spreading computational propaganda".

Even after the shooting of 2 law enforcement officers by a Boogaloo activist, Facebook allows many Boogaloo groups to continue organizing. The tech giant argues that its June 2020 ban identifies and removes violent Boogaloo groups while leaving non-violent groups intact; external researchers disagree, noting that at least 20 violent Boogaloo groups have side-stepped Facebook's new restrictions and continue to operate on the platform.

With over 800 million users, TikTok promotes itself as a place for self-expression and unrestricted creativity. Yet its internal documents reveal a policy of downgrading content from users who do not fit normative ideals of gender, race, class, sexuality, or able-bodiedness, with moderators urged to censor users with "abnormal body shape", "too many wrinkles", or whose environment shows signs of poverty such as "cracks in the wall" or "old decorations".

The number of teens encountering racist hate speech online has nearly doubled in the last 2 years: 23% of 14-18 year olds reported "often" encountering racist content in 2020 (compared to 12% in 2018), and almost 50% more teens reported encountering sexist or homophobic material online.

Google image search systematically distorts the way that genders are represented in the workplace, leading to knock-on effects in our perception of real life, according to research. Analysis of Google's top 100 images for each of 45 different jobs demonstrated that Google displayed significantly fewer images of women compared to the actual percentage of women in each of these professions: for example, while in real life, 27% of CEOs are women, only 11% of images generated by a Google search depicted women. Further experiments showed that exposure to such search results significantly distorted viewers' later estimates of how many women worked in these fields.

Rigorous testing of industry AI algorithms, including the natural language processing behind Google search, discovered significant stereotypical bias by gender, race, profession, and religion. For example, during "fill-in-the-blank" tests the AIs regularly associated the word "African" with words such as "poor". Researchers noted that GPT-2 showed less bias than the other AI language models, suggesting that this may be because GPT-2 was trained on real-world datasets that are moderated to reduce bias (such as Reddit forums).

Up to 10,000

Twitter users were actively involved in Gamergate at its peak, with the hashtag #Gamergate being tweeted hundreds of thousands of times per month, mostly supporting the campaign of abuse and violent threats against specific female game designers and those who spoke up to support them. Gamergate played out primarily on Twitter, whose platform design and administration, according to researchers, make the platform particularly adaptable for online abuse — due to the highly public nature of tweets, the potential for mass targeting of individuals, and the fact that abusive responses can’t be removed.

Until 2019, Facebook allowed advertisers to use discriminatory targeting in ads: those advertising jobs, housing, and credit offers, could choose to exclude people on the basis of gender, race, disability and other characteristics, in direct contravention of federal laws such as the Fair Housing Act which bans discrimination. While Facebook has agreed to block such targeting, experts note that measures are not stringent enough and can be easily "gamed": for example, advertisers can still exclude users on the basis of their location.

The algorithmic basis of Google search makes it vulnerable to exploitation by those with enough capital to deploy search engine optimization tactics, often in ways that perpetuate existing forms of race and gender oppression. Researchers note that the porn industry has publicly boasted of how easily it can subvert Google's safeguards to place porn on the first page of search results. In addition, commercial incentives for promoting degraded stereotypes of women, especially women of color, have knock-on effects for non-porn-related Google searches: an innocent Google search for "black girls" returned pornographic results for many years, via both ads and non-ad search results.

For many years, 92% of the ads that appeared when searching for a black-identified name on Google mentioned the word "arrest", according to Harvard researchers, compared to 80% of the ads prompted by searching for white-identified names, a statistically significant difference (p < 0.01). Even where a white-identified name (e.g. "Karen Lindquist") belonged to a person with an arrest record, a Google name search still generated only neutral ads that did not mention arrest. In contrast, black-identified names attracted "arrest ads" even when no one with that name had an arrest record.

Are we missing something?

We need your help! Contribute a study, article, or correction. ↗

The Next Generations

From developmental delays to suicide, children face a host of physical, mental and social challenges

Why It Matters

Exposure to unrestrained levels of digital technology can have serious long term consequences for children’s development, creating permanent changes in brain structure that impact how children will think, feel, and act throughout their lives.

The Next Generations

Evidence

3X more likely

Children who have been cyberbullied are 3x more likely to contemplate suicide compared to their peers. The experience of being bullied online is significantly more harrowing than "traditional bullying", potentially due to the victim’s awareness that this is taking place in front of a much larger public audience.

Preschoolers who use screen-based media for more than 1 hour each day have been shown to have significantly less development in core brain regions involved in language and literacy. Brain scans indicate that the more time spent on screens, the lower the child's language skills, and the less structural integrity in key brain areas responsible for language. This is one of the first studies to assess the structural neurobiological impacts of screen-based media use in preschoolers; it raises serious questions as to how screen use may affect the basic development of young children's brains.

58 minutes

per day is the average amount of time 2- to 4-year-olds spend on mobile devices. And 46% of children under the age of 2 have used a mobile device at least once, despite the American Academy of Pediatrics' recommendation that children under 2 should not use any screen media.

In a longitudinal study tracking over 200 children from age 2 to age 5, children with higher levels of screen time showed greater delays in development across a range of important measures, including language, problem-solving, and social interaction. Analyses indicated that the level of screen time was significantly linked to the specific level of developmental delay 12-14 months later. This is a critical period in a child's life: as the researchers note, the current data indicate that exposure to excessive screen time during these early years can have serious effects "impinging on children's ability to develop optimally".

The number of US teenagers who are online continuously is increasing at a dramatic pace, almost doubling from 2015 to 2018: 24% to 45%.

Children who experienced cyberbullying during their adolescence were significantly more likely to engage in risk-taking health behavior as adults. Boys who were cyberbullying victims were significantly more likely to smoke as young adults (p = 0.014), while girl victims were significantly more likely to report lifetime drug use (p < 0.04).

The level of electronic media use before bedtime is significantly correlated with depression in adolescence. Measurements from several hundred teenagers indicate that this is primarily due to the impact on sleep: compared to video game players, teens with high levels of social media use experienced greater sleep difficulties, which in turn strongly correlated with higher levels of depression.

Several self-harming videos have been circulating on TikTok, from the "Skull breaker" challenge to the "Cha Cha Slide" challenge (which involves repeatedly swerving a car across a road in time to music). Videos that contain the tag "#passoutchallenge" had over 233,000 views on TikTok as of February 2020.

Children who see videos of child influencers holding unhealthy foods consume significantly more calories than those who see influencers holding other types of objects, as clearly shown by randomized controlled trials.

Over 40%

of videos from the 5 most popular kid-influencer YouTube channels feature food and drink, with 90% explicitly showing unhealthy, brand-sponsored food or drink products. The 179 food/drink videos created by these child influencers have more than 1 billion views. Such advertising exploits the fact that children aged 8 and younger are highly susceptible to product placement, cannot distinguish advertising from real content, and spend an average of 1 hour daily watching videos online.

Media multitasking is significantly linked to later attentional difficulties. Tracking more than 800 adolescents over time demonstrated that the degree to which young teens (aged 11-13) multitasked was a significant predictor of attentional problems 3 months later (p < 0.05), highlighting the potential impact of distracting digital environments on young teens' development.

The amount of time spent using social media is significantly correlated with later levels of alcohol use. Research on several thousand teens demonstrated that while time spent on other forms of electronic media (including TV or video games) has comparatively little impact, the amount of time spent on social media is significantly linked to alcohol use 4 years later. Data indicates that social media has this unique effect through "social norming": repeatedly exposing teens to multiple images of their peers and role models drinking alcohol makes such behavior seem normal and acceptable, encouraging imitation.

A longitudinal study of several thousand adolescents indicated that their level of social media usage was a significant predictor of their depression levels over the course of 4 years. For every increased hour spent using social media, teens show a 2% increase in depressive symptoms.

More than half of US middle-schoolers cannot distinguish advertising from real news, or fact from fiction. Many state that "If it's viral, it must be true". As a result, the next generation is poorly equipped to make sense of the world in its future decisions, whether with regard to drug use, risky sexual behavior, political extremism, or any other issue.

77%

of teenagers get their news from social media, with 39% stating that they “often” get news from celebrities, influencers, or personalities, according to a survey of over 800 teens aged 13-18. In the last 3 years, there has been a significant growth in the percentage of teens using YouTube and Instagram as their top news sources: YouTube use went from 27% to 44% while Instagram as a news source went from 22% to 32%.

30%

of teens report that they pay "very little attention" to the source of the news they get on social media.

After nearly two decades of decline, rates of high depressive symptoms among teen girls aged 13-18 rose by 65% between 2010 and 2017.

66%

is the increase in the risk of suicide-related outcomes among teen girls who spend more than 5 hours a day (vs. 1 hour a day) on social media.

A systematic review and meta-analysis (of 20 studies) showed strong, consistent evidence of an association between bedtime access to or use of electronic devices and reduced sleep quantity and quality, as well as increased daytime sleepiness.

Do Unto Others

Many people who work for tech companies — and even the CEOs — limit tech usage in their own homes

Why It Matters

Many tech leaders don’t allow their own children to use the products they build, which implies they’re keenly aware that the products from which they make so much money pose risks, especially for young users.

Do Unto Others

Evidence

Chamath Palihapitiya, former VP of user growth at Facebook, has said that: “I can control my decision, which is that I don’t use that sh%t. I can control my kids’ decisions, which is that they’re not allowed to use that sh%t... The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”

Steve Jobs, who was CEO of Apple for many years, told reporters that his kids don’t use iPads and that “We limit how much technology our kids use at home.”

Sean Parker, who was the founding president of Facebook, has publicly called himself "something of a conscientious objector" on social media and said, “God only knows what it's doing to our children's brains.”

Many modern Silicon Valley parents strongly restrict technology use at home, and some of the area’s top schools minimize tech in the classroom. In the words of one 44-year-old parent who used to work at Google, "We know at some point they will need to get their own phones, but we are prolonging it as long as possible."

“We’ve unleashed a beast, but there’s a lot of unintended consequences,” says Tony Fadell, inventor of the iPod and co-inventor of the iPhone. “I don’t think we have the tools we need to understand what we do every day… we have zero data about our habits on our devices.”

Have a study, article, or correction to contribute?

The goal of the Ledger of Harms is to list compelling studies and articles that show clear effects, documented by relatively unbiased researchers and writers. If there's a key study or article missing, or if you see methodological problems with anything listed here, please let us know via the link below. We will update this page if the new information passes our review process.

Send Us Your Thoughts ↗