Keys to understanding the manipulative design of social networks
Imagine a machine that knows your fears, desires, and weaknesses better than you do. That's how platforms like TikTok and Instagram operate, where every like, notification, and recommended video is the result of a design grounded in behavioral psychology and artificial-intelligence algorithms. These networks do not merely entertain; they are engineered to exploit cognitive biases, such as the need for social validation or insatiable curiosity, with one clear objective: to keep us engaged for as long as possible.
Below we highlight some of the hidden strategies behind their design, analyze their impact on the people who consume them, and propose some alternatives for a more humane technology.
Main strategies in design
Variable rewards
One of the best-known tactics is variable rewards, also known as the digital slot-machine approach. Inspired by B.F. Skinner's experiments with rats pressing levers, TikTok and Instagram use unpredictable rewards to create addiction. For example, when scrolling through TikTok, you never know whether the next video will be boring or go viral. This uncertainty triggers the release of dopamine, just as in casinos. Instagram applies the same tactic with likes: the red counter appears at irregular intervals, turning each post into an emotional gamble.
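The mechanics of a variable-ratio schedule can be sketched in a few lines of Python. This is purely illustrative: the `hit_rate` and `seed` values are made-up parameters, not anything a real platform publishes.

```python
import random

def variable_ratio_feed(num_scrolls, hit_rate=0.15, seed=42):
    """Simulate a feed where each scroll pays off unpredictably.

    Returns the gaps (in scrolls) between 'great' videos; the
    irregular spacing is what makes the schedule so compelling.
    """
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(num_scrolls):
        since_last += 1
        if rng.random() < hit_rate:  # a hit: an engaging/viral video
            gaps.append(since_last)
            since_last = 0
    return gaps

gaps = variable_ratio_feed(200)
# The gaps vary widely even though their average is roughly 1/hit_rate,
# which is exactly what makes the next reward impossible to predict.
print(min(gaps), max(gaps), sum(gaps) / len(gaps))
```

This is the same reinforcement structure as a slot machine: a fixed schedule (every tenth video is good) would be easy to walk away from, while an unpredictable one is not.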
Infinite scroll
One of the most powerful tactics is infinite scrolling. Both platforms remove natural stopping points: on Instagram, the feed never ends; on TikTok, videos autoplay. The result is continuous, nonstop stimulation. Even when a video tells you to stop until tomorrow, the format is frictionless enough that you keep scrolling anyway.
Sense of urgency
The sense of urgency, created with notifications and vibrant colors such as Instagram's red badges, is no accident: red triggers alert responses in the brain. TikTok, meanwhile, sends messages like "Your favorite video has new comments!" to exploit FOMO (fear of missing out).
Artificial intelligence algorithms
Beyond visual design, the platforms also engineer their algorithms, that is, how personalization and user interaction are managed. The algorithms analyze every interaction (viewing time, pauses, shares) to predict which content will be most engaging. For example, if you watch a cat video on TikTok to the end, the platform will show you ten similar videos in a row, creating an "interest loop" that is difficult to break.
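As a toy illustration (not TikTok's actual ranking system, whose details are proprietary), an engagement-weighted ranker that produces such a loop might look like this; the topics and completion scores are invented:

```python
from collections import defaultdict

def recommend(history, candidates, k=3):
    """Toy engagement-based ranker.

    history: list of (topic, completion) pairs, completion in [0, 1].
    Topics the user watched to the end get boosted, so one fully
    watched video tilts the whole feed toward that topic.
    """
    interest = defaultdict(float)
    for topic, completion in history:
        interest[topic] += completion  # full views count the most
    ranked = sorted(candidates, key=lambda t: interest[t], reverse=True)
    return ranked[:k]

history = [("cats", 1.0), ("news", 0.2), ("cats", 0.9)]
feed = recommend(history, ["news", "cats", "cooking", "cats", "cats"])
print(feed)  # → ['cats', 'cats', 'cats']
```

Two fully watched cat videos are enough for cat content to crowd out everything else in the top results, which is the interest loop in miniature.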
This means the emotional is prioritized over the rational. Internal studies at Meta (Instagram's parent company) reveal that the algorithms prioritize content that generates intense reactions, such as outrage or morbid curiosity (Horwitz & Seetharaman, 2021; Alfano et al., 2021; Harris, 2016; Meta, 2023). Thus, a controversial video about politics or an extreme challenge is more likely to go viral, even if it harms mental health. Imagine someone who is depressed and starts consuming content related to depression: the platform will feed them more of the same, reinforcing that emotional state, which is why this dynamic is especially harmful for people with mental health problems (Bucher, 2018; Cotter, 2019).
The Wall Street Journal conducted an extensive study in which it programmed bots to pose as users and interact with TikTok, demonstrating that the platform's algorithm can infer a new user's interests in just a few hours. Furthermore, by showing only content aligned with the user's beliefs, the algorithms reinforce existing biases. This phenomenon, known as an echo chamber, often promotes radicalization. In 2021, The Wall Street Journal also showed that Instagram promoted hate speech to teenagers interested in conspiracy theories (The Wall Street Journal, 2021).
Impact on users
The impact on mental health must also be taken into account, as these dynamics can undermine self-esteem. A 2023 study in the Journal of Social Media and Society found that 70% of adolescents exposed to content from fitness or beauty influencers reported feelings of inferiority, and eating disorders were 40% more frequent among active Instagram users (Cohen et al., 2023; Tiggemann & Anderberg, 2022).
Furthermore, some of these influencers are not even people but artificial intelligence (AI) creations, especially on Instagram. In other words, people compare their physical appearance to AI-generated figures that are usually deeply stereotyped.
Some examples of these AI-created influencers are:
Lil Miquela (@lilmiquela) – Instagram
- Followers (Oct 2024): ~2.6 million
- Probably the most famous virtual influencer. She's a nineteen-year-old Brazilian-American "teenager" living in Los Angeles. She's a model, singer (her music is AI-generated), and activist. She posts photos of her (entirely fictional) "life," interacts with "friends" (other virtual influencers), and promotes fashion, music, and tech brands. Her aesthetic is hyperrealistic, though it's still noticeable that she's CGI.
- Creators: Brud (a technology and media company based in Los Angeles).
Shudu (@shudu.gram) - Instagram
- Followers (Oct 2024): ~240k
- She presents herself as "the world's first digital supermodel." Shudu is a strikingly beautiful Black woman with a meticulously crafted and elegant aesthetic. She participates in fashion and beauty campaigns.
- Creator: Cameron-James Wilson (a British photographer).
Imma (@imma.gram) – Instagram
- Followers (Oct 2024): ~400k
- A "Japanese girl" with distinctive pink hair. Imma has a trendy, urban style. She posts photos of her "life" in Tokyo, interacts with fashion and tech brands, and participates in virtual events.
- Creators: Aww Inc. (a Japanese company specializing in virtual characters).
The World Health Organization has warned about the side effects of excessive social media use: problematic use is associated with anxiety, insomnia, and low self-esteem, especially in adolescents. The WHO recommends further research to define specific diagnostic criteria (WHO, 2022).
On the other hand, MIT research shows that the algorithms increase exposure to fake news by 40% compared to verified sources, driving social polarization toward fabrication and away from verified facts (Aral, 2021; Vosoughi et al., 2018; Deb et al., 2020). It is therefore crucial to remain critical of the information we consume and aware of how it shapes our thinking.
Alternatives for a more humane technology
To reverse the negative influence of social media, different actions can be taken, such as:
- Create more ethical designs that prioritize transparency . For example, add an option to see "why this content is recommended to me" and buttons to adjust preferences ("Fewer extreme diet videos").
- Set proactive, not reactive, limits . For example, the feed could be blocked after 45 minutes of use, with compassionate notifications such as "You've been on Instagram for two hours today. Want to save your photos and close?" displayed on a plain screen, without bright colors or flashy animations, and with further scrolling disabled. This prioritizes "quality time" (meaningful interactions) over "screen time."
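A proactive limit like the one suggested above can be sketched in a few lines. The 45-minute cap and the closing prompt are taken from the text; the function name and return shape are hypothetical:

```python
DAILY_LIMIT_MIN = 45  # proactive cap suggested above; the value is illustrative

def feed_gate(minutes_used_today):
    """Decide whether to keep serving the feed, and what to tell the user."""
    if minutes_used_today < DAILY_LIMIT_MIN:
        remaining = DAILY_LIMIT_MIN - minutes_used_today
        return True, f"{remaining} minutes left today"
    # Past the cap: stop the feed and show a calm, plainly styled prompt
    return False, "You've reached today's limit. Want to save your photos and close?"

print(feed_gate(30))  # feed still allowed, with a gentle countdown
print(feed_gate(50))  # feed blocked, compassionate message shown
```

The design point is that the gate runs before the next page of content is fetched, rather than merely reporting usage after the fact.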
The problem isn't social media itself, but its business model based on exploiting attention. Platforms like Pinterest are already taking positive steps. For example, when you search for #depression, they suggest support resources instead of viral content. The challenge is to create a digital ecosystem where psychology is used to empower, not manipulate. As users, demanding transparency and supporting ethical apps can be the first step toward a healthier future.