“You may also like this” – Unraveling digital biases
In our daily online interactions, we encounter algorithms at every turn, whether we are scrolling through social media, browsing the web, or catching up on the news. Algorithms are often referred to as black boxes because they quietly work behind the scenes. This secrecy serves multiple purposes, including protecting intellectual property and ensuring a seamless user experience. While algorithms seem to make our digital lives more convenient, they can also shape our online experiences and influence our way of thinking. This underscores the importance of algorithmic transparency and responsible online behavior.
Drifting into the Filter Bubble
Algorithms are adept at sorting and recommending content based on our preferences and behaviors. Does the phrase "You may also like this" sound familiar? Commercial platforms like Amazon, Netflix, and Spotify are constantly developing personalization technologies by tracking online activities. Data points include click behavior, browsing history, likes, purchases, location data, and more. This information is used to predict our desires and intentions, all of which are matched with models in real time.
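To make the matching step above less abstract, here is a minimal sketch of a "You may also like this" recommender. It is a toy illustration, not any platform's actual system: the item names and feature vectors are invented, and real services use far richer signals. The idea is simply that tracked behavior becomes a profile vector, and unseen items are ranked by similarity to it.

```python
# Toy "You may also like this" sketch: recommend the unseen item whose
# feature vector is most similar to the user's interaction history.
# All item names and feature weights below are made up for illustration.

import math

# Hypothetical catalog: items described by simple feature vectors
# (think of the dimensions as genre weights derived from tracked behavior).
items = {
    "thriller_novel": [0.9, 0.1, 0.0],
    "crime_series":   [0.8, 0.2, 0.1],
    "cooking_show":   [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors (0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(history, catalog):
    """Average the vectors of consumed items into a profile,
    then return the most similar item not yet seen."""
    profile = [sum(col) / len(history)
               for col in zip(*(catalog[i] for i in history))]
    candidates = {name: cosine(profile, vec)
                  for name, vec in catalog.items() if name not in history}
    return max(candidates, key=candidates.get)

print(recommend(["thriller_novel"], items))  # → crime_series
```

A user who engaged with the thriller novel gets the crime series, not the cooking show: similar content begets similar recommendations, which is exactly the mechanism behind the convenience and the bubble.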
Artificial intelligence (AI) excels at sifting through vast amounts of data and curating online content and services to match our habits and interests. While this can save us time, it also creates filter bubbles. These bubbles reinforce our existing opinions by shielding us from different perspectives. Being aware of these filter bubbles can empower us to seek out alternative viewpoints and counteract the resulting confirmation bias.
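The reinforcing feedback loop described above can be sketched in a few lines. This is a deliberately simplified model under assumed numbers (the topic names, starting weights, and engagement boost are all hypothetical): each click rewards a topic, and the feed reallocates its slots accordingly, crowding out other perspectives.

```python
# Minimal filter-bubble sketch: a feed that boosts whatever topic the
# user clicks on. Topic names and weights are illustrative assumptions.

from collections import Counter

# Start with three topics shown in equal proportion.
weights = Counter({"politics_a": 1.0, "politics_b": 1.0, "science": 1.0})

def next_feed(weights, k=10):
    """Fill a feed of k slots proportionally to current topic weights."""
    total = sum(weights.values())
    return {topic: round(k * w / total) for topic, w in weights.items()}

# The user only ever clicks one topic; each click raises its weight.
for _ in range(5):
    weights["politics_a"] += 2.0  # engagement signal

print(next_feed(weights))  # → {'politics_a': 8, 'politics_b': 1, 'science': 1}
```

After just five clicks, one viewpoint fills eight of ten feed slots: the user never asked to be shielded from the other topics, but the optimization quietly did it anyway.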
The Concept of Algorithmic Literacy
Algorithmic literacy is key to understanding and navigating the world of algorithms. It involves knowing how algorithms function, critically evaluating their decisions, and actively influencing algorithmic processes. Algorithmic contexts are inherently tied to data collection, processing, and exchange. Consequently, data protection has become a major concern in the age of algorithms. Questioning these processes empowers us to navigate the digital landscape with responsibility and a critical eye, ensuring that we protect our digital privacy and become aware of potential pitfalls.
How can we unravel our digital biases?
Get informed. Becoming aware of biases is a solid first step toward reducing them. We supported the Humboldt Institute for Internet and Society in developing a digital platform to unravel myths about algorithms and AI. In CC-licensed videos and texts, experts from around the world answer frequently asked questions about algorithms and AI.
Adjust settings. Another step is to adjust the default settings on your phone and limit app permissions. When you visit a website, practice clicking "save and exit" or "reject all" instead of "allow all". And seek out different sources to keep your social media and news feeds diverse.
Speak out. Discuss biases with others, share your story and learn about other perspectives. One project working to do this is UNESCO’s Algorithm & Data Literacy Project, which provides various interactive materials and discussion guides.