![RebeccaMcMillanPhotography_TMW_20240630-109918](/media/images/RebeccaMcMillanPhotography_TMW_20240630-10991.width-1440.jpg)
# How Social Media Algorithms Shape What We See (and Why It Matters for Parents and Caregivers)
Caitlin on Feb. 10, 2025
Imagine stepping into a giant, ever-shifting library where the books seem to rearrange themselves based on what you glance at, touch, or linger over. That’s essentially how social media algorithms work—except instead of books, it's content designed to grab your attention. For rangatahi with smartphones, these algorithms quickly become invisible, yet powerful, companions in their online lives. And while they’re great at showing funny memes or dance videos, they can also lead to much darker corners of the internet.
## It All Starts With a Few Clicks
When a young person signs up for a social media account, the algorithm starts working immediately. It takes the basic information they provide—such as age, gender, and location—and begins suggesting content based on patterns it has learned from millions of other users. Add a few interests or likes, and the algorithm gets a clearer picture of what to serve up. While this might seem harmless, it’s important to understand that it’s designed to keep users engaged, no matter what.
For example, a 13-year-old who follows accounts about fitness might start seeing tips for healthy recipes. But linger too long on a post about extreme dieting, and the algorithm might interpret this as interest, serving up more content about weight loss. Before long, they could be exposed to harmful material like pro-anorexia communities or dangerous challenges—a rabbit hole no one chooses to go down. It took me ages to clear my FYP (my “For You” feed) of a relentless (and not-so-healthy) “healthy eating” streak after searching for kids’ lunchbox ideas. Who knew there were so many ways to cut grapes for children?
## The Algorithm’s Invisible Hand
Algorithms rely on more than just what we like or follow. They also track how long we look at a post, whether we share or save it, and even what we’ve paused to read without interacting. These micro-moments send signals that shape the content fed to us. For young users, who are still developing critical thinking skills, this can lead to a feedback loop that’s hard to escape.
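For readers who like to see the mechanics, the feedback loop described above can be sketched as a tiny toy model: posts are ranked by how well they match an inferred interest profile, and implicit signals like dwell time quietly update that profile. This is purely illustrative; the function names, weights, and topics are invented, not any real platform's code.

```python
# Toy sketch of an engagement-driven feed ranker.
# Illustrative only: all names, weights, and signals are invented.

def rank_feed(posts, interests):
    """Order posts by how strongly they match the user's inferred interests."""
    return sorted(posts, key=lambda p: interests.get(p["topic"], 0.0), reverse=True)

def update_interests(interests, post, dwell_seconds, shared=False):
    """Implicit signals (dwell time, shares) nudge the profile toward a topic."""
    boost = 0.01 * dwell_seconds + (0.5 if shared else 0.0)
    interests[post["topic"]] = interests.get(post["topic"], 0.0) + boost
    return interests

# One lingering pause is enough to reorder the whole feed.
posts = [{"topic": "recipes"}, {"topic": "extreme_dieting"}]
interests = {"recipes": 0.2}
update_interests(interests, posts[1], dwell_seconds=60)  # user pauses, never clicks
feed = rank_feed(posts, interests)
```

Note that the user never liked, shared, or searched for anything here; a single 60-second pause was enough for the toy ranker to push that topic to the top.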
For example, a young person curious about current events might click on a dramatic headline. If that headline leads to extremist content, the algorithm could start suggesting more of the same. The result? They may be unintentionally exposed to violent or harmful material, not because they searched for it, but because they hovered over the wrong post at the wrong time.
## Our Research and Expertise
At Te Mana Whakaatu, the Classification Office, we’ve conducted extensive research into how online environments affect young people. Our studies on Body Image and Young People’s Experiences Online reveal how algorithms can amplify harmful content, particularly around appearance ideals. Additionally, our work on Online Misogyny and Violent Extremism has highlighted how these systems perpetuate harmful narratives about gender.
Last year we consulted young people directly, leading to a comprehensive report on their experiences navigating extremely harmful content online, which is set for release early this year. These insights make it clear: the content young people encounter often isn’t a matter of choice, but a result of algorithms designed to keep them engaged.
As content experts, especially in understanding harmful or objectionable material, we aim to empower parents with knowledge and tools to better support their children in this digital landscape. While we focus on the content itself, Netsafe serves as New Zealand’s online safety organisation, providing resources and guidance on how to create safer online environments for families.
## The Challenge of Harmful Content
While algorithms can show uplifting or inspiring content, they can just as easily push harmful material into a user’s feed. This isn’t always a reflection of the young person’s interests but rather the algorithm’s relentless pursuit of engagement. If a post shocks, scares, or provokes strong emotions, it’s likely to perform well in terms of views and interactions—which means the algorithm is more likely to amplify it.
And the more time someone spends on this type of content, the more they’ll be fed. It’s a cycle that’s nearly impossible to break without active effort—and it’s happening to young people every day. This is why even the most careful parents might find their children encountering distressing material. It’s not their fault; it’s the design of the platforms they’re using.
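The amplification cycle described above can also be shown with a short toy simulation: if each round of the feed hands out reach in proportion to past engagement, a post that provokes strong reactions steadily crowds out calmer material. Again, this is a sketch under invented assumptions (the interaction rates and numbers are made up), not a model of any actual platform.

```python
# Toy simulation of engagement-based amplification.
# Assumption (invented for illustration): provocative posts get a higher
# interaction rate; the numbers here are arbitrary.

def simulate_rounds(posts, rounds=5):
    """Each round, impressions are allocated in proportion to past engagement."""
    for _ in range(rounds):
        total = sum(p["engagement"] for p in posts)
        for p in posts:
            share = p["engagement"] / total               # amplification step
            p["impressions"] += int(1000 * share)         # more reach next round
            p["engagement"] += p["rate"] * 1000 * share   # reach begets engagement
    return posts

posts = [
    {"name": "calm_news", "rate": 0.02, "engagement": 10.0, "impressions": 0},
    {"name": "shock_post", "rate": 0.10, "engagement": 10.0, "impressions": 0},
]
result = simulate_rounds(posts)
```

Both posts start equal, but after a few rounds the higher-interaction post has accumulated far more reach, regardless of its merit or its effect on the viewer.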
## What Young People Want
If your child or teen has seen something upsetting online, their first instinct might be to hide it—out of fear they’ll be judged or have their devices taken away. But what young people consistently say they need is support and open, non-judgmental communication. They want to feel safe bringing up what they’ve seen without worrying about getting into trouble. Here are some conversation starters you could try with your rangatahi:
- What kinds of things have you been seeing on your social media lately? Is there anything that stood out to you?
- How do you feel about some of the things you've seen online? Have you come across anything confusing or upsetting?
- How do you think a younger sibling or family friend might feel if they saw this online? (Shifting their perspective fosters empathy and encourages thoughtful decisions.)
- If you ever see something online that makes you feel uncomfortable or unsafe, how do you think we can handle it together?
- What do you think about the way social media shows you content? Have you noticed patterns or things you'd like to change?
Start these conversations early, even before your child has a smartphone or social media account. Let them know it’s okay to talk about anything they see online, whether it’s confusing, upsetting, or just strange. By normalising these discussions, you’re building trust and giving them the confidence to seek your help if they ever need it. Here are some conversation starters you could try with your younger whānau members:
- Have you ever heard your friends talk about funny or strange things they’ve seen online? What do you think about that?
- If you ever come across something online that makes you feel weird or unsure, what do you think you’d do?
- What kind of short clips or video games do you think are fun or interesting? Why do you like them?
- Why do you think some clips or video games might not be okay for kids your age? What would you do if you saw something like that?
## Empowering Conversations, No Matter Their Age
The algorithms running social media are complex, ever-evolving, and largely beyond the control of individual users. For parents, understanding how they work is the first step in supporting young people who navigate these spaces. While you can’t always stop harmful content from appearing, you can create an environment where your child feels safe discussing it. That’s a powerful way to counter the influence of algorithms and ensure your child’s online experiences are as positive and safe as possible.
## Quick Takeaways: How Social Media Algorithms Shape What We See
- Algorithms are invisible influencers: Social media platforms use algorithms to serve content based on user activity, such as likes, follows, and even how long you pause on a post.
- Engagement is the goal: Algorithms are designed to keep users engaged, sometimes pushing harmful content if it triggers interaction.
- A cycle that’s hard to break: Once a young person interacts with certain content, the algorithm feeds them more of the same, creating a feedback loop that can be difficult to escape.
- Start conversations early: Ask your child how they think a younger sibling or family friend might feel about certain content. This perspective fosters empathy and critical thinking.
- Normalise talking about content: Encourage open, non-judgmental discussions about what your child sees online to build trust and help them navigate harmful content.