
When the Code Isn’t Neutral: Understanding Algorithmic Bias in the Digital Age


If you have ever felt like your phone or computer is listening to you, you are not imagining it. Yet in my work with students, parents, and even educators, I often encounter the assumption that technology is impartial. The thinking goes: since computers lack emotions, they cannot hold biases. Unfortunately, that’s not the case.

What Is Algorithmic Bias?

Algorithmic bias occurs when AI systems produce unfair outcomes because of biases in their data or design. These biases enter through three main channels:

  • Historical data that reflects societal inequalities.
  • Incomplete datasets that fail to represent diverse populations.
  • Programmer assumptions that unintentionally favor particular groups.

The algorithms that power search engines, social media feeds, and educational tools embed biased decisions that shape what young people see, learn, and believe.
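To make the first channel concrete, here is a minimal sketch, using entirely hypothetical data, of how a system trained on historical decisions simply inherits whatever gap those decisions contain. The "model" here does nothing more than learn past admit rates per group:

```python
# A toy sketch (hypothetical data) of bias learned from history.
# An "admissions model" trained on past decisions reproduces the
# admit rates it observed, including the gap between groups.

def train(history):
    """Learn the historical admit rate for each group."""
    counts = {}
    for group, admitted in history:
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + admitted, total + 1)
    return {g: a / t for g, (a, t) in counts.items()}

# Hypothetical past decisions: group "A" was admitted 80% of the
# time, group "B" only 40%, for equally qualified applicants.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 4 + [("B", 0)] * 6

rates = train(history)
print(rates)  # {'A': 0.8, 'B': 0.4} -- the model inherits the gap
```

Nothing in the code is malicious; the unfairness comes entirely from the data it was handed.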

Why It Matters for Our Kids

Children and teens are in critical stages of identity formation. An algorithm that consistently surfaces certain narratives, stereotypes, and opportunities while concealing others restricts how young people see the world. A college recommendation tool trained on biased data, for example, might overlook qualified students from underrepresented communities.

A social media algorithm might amplify harmful stereotypes or create filter bubbles. AI used in school discipline systems can flag students unnecessarily based on flawed data.

The Hidden Feedback Loop

Biased algorithms don’t simply mirror the existing world; they reinforce it. The system’s biased outputs influence human decisions, which generate new data that is fed back into the system, making the problem worse over time.
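The feedback loop can be illustrated with a minimal simulation, using made-up numbers: a system that flags one group more often directs more scrutiny at that group, which produces more flag records, which push the rate even higher when the system is "retrained" on its own outputs.

```python
# A minimal simulation (hypothetical numbers) of a feedback loop.
# Each retraining round nudges the flag rate upward in proportion
# to the extra scrutiny the previous rate caused.

def retrain(flag_rate, rounds=5, feedback=0.5):
    """Return the flag rate after each round of retraining
    on the system's own outputs."""
    history = [flag_rate]
    for _ in range(rounds):
        flag_rate = min(1.0, flag_rate * (1 + feedback * flag_rate))
        history.append(round(flag_rate, 3))
    return history

# Two groups with the same true behavior but different starting rates:
print(retrain(0.10))  # the lower rate drifts up slowly
print(retrain(0.30))  # the higher starting rate compounds much faster
```

The exact numbers are invented, but the shape of the curve is the point: small initial disparities grow every time the system learns from its own decisions.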

What We Can Do

At Digital Citizen Academy, we believe awareness is the starting point for progress. Here are steps parents, educators, and communities can take to address this issue:

  1. Ask questions: How was this AI trained? Who benefits from its design?
  2. Diversify inputs: Encourage students to seek multiple sources of information.
  3. Advocate for transparency: Push for clear explanations of how educational and social platforms make decisions.
  4. Teach critical thinking: Equip kids to recognize when something might be biased or incomplete.

Understanding algorithmic bias, and insisting that tech creators be held accountable for it, will help us build a digital world grounded in our best values rather than our worst biases.

December 29, 2025