The dualism of data and the role of algorithms
We interact with various algorithms in our everyday Internet use, whether on our social media profiles, while surfing the web, reading an online newspaper, or watching a YouTube video. Algorithms prove to be useful helpers: when we shop online, for example, they let us keep our shopping cart even after we leave the website. Yet we are hardly aware of these algorithms in daily use, so what are they all about, and what downsides do we need to consider?
Invisible logic
An algorithm is a set of routines, rules, or commands. Since algorithms are usually invisible on the user interface, they are often referred to as a “black box”. Internet users are not always aware of how they work and therefore lack awareness of potential threats. Algorithms are hidden to protect intellectual property, to spare users the details, and to make interaction with the system effortless. At the same time, this prevents users from understanding how algorithmic systems work, or even from noticing that they exist. Yet even without a full understanding, users’ perceived knowledge of an algorithm can influence their behavior, which is why awareness of these processes matters. This includes making algorithms transparent, at least at the level required for responsible use, and helping people shape their behavior on the Internet responsibly.
The risk of the filter bubble
Algorithms sort data based on what we appear to like and suggest similar content on that basis. Artificial intelligence (AI) can sort such large amounts of data extremely efficiently. It filters the content we see online every day by classifying and prioritizing information, providing us with customized online content and services based on our habits, preferences, and identity characteristics. In this automated way, algorithms determine our sources of information and thus our perspective on the world and on others. The resulting filter bubble reproduces our existing opinions and may even intensify them, because other perspectives are filtered out. Filter bubbles can thus save us time when searching for new content, but they can also strongly influence our view of the world and reinforce cognitive patterns such as impulsivity and distraction. If we become aware of these bubbles, we can look at online content in a more differentiated way and deliberately influence the algorithm, for example by searching for other perspectives on a topic.
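To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of a content-based recommender. It ranks candidate items by how much their tags overlap with tags from content a user already liked; all items, tags, and the scoring function are invented for this example and do not represent any real platform’s system.

```python
# Illustrative sketch only: a toy content-based recommender that ranks
# candidate items by how much their tags overlap with tags from content
# the user already liked. All items and tags are invented for this example.

def jaccard(a: set, b: set) -> float:
    """Similarity of two tag sets: 0 = nothing in common, 1 = identical."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Aggregated tags from content the user has liked (the "taste profile").
liked_tags = {"cooking", "travel", "vegan"}

# Candidate items the platform could show next, each with content tags.
candidates = {
    "Quick vegan weeknight meals": {"vegan", "cooking", "recipes"},
    "Vegan street food guide":     {"vegan", "food", "city"},
    "Stock market basics":         {"finance", "investing"},
}

# Rank candidates by similarity to the taste profile: the more an item
# resembles past likes, the higher it appears in the feed.
for title in sorted(candidates, key=lambda t: jaccard(liked_tags, candidates[t]), reverse=True):
    print(f"{jaccard(liked_tags, candidates[title]):.2f}  {title}")
```

Note how the filter bubble emerges from the scoring itself: the finance item shares no tags with past likes, scores zero, and would never surface, which is exactly how dissimilar perspectives get filtered out.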
The risk of personalization and profiling
As the number of personalization technologies used by commercial platforms such as Amazon, Netflix, and Spotify grows, so does the number of data-tracking practices used to draw inferences about everyday habits and sociocultural and economic behaviors. Current data-tracking strategies include the collection of browsing history, likes, purchase and search history, geolocation, app interactions, uploaded photos, mobile and other audible conversations, written comments, cross-device activity and IP addresses, email content, social contacts, song downloads, credit history, movie/TV viewing behavior, game high scores, and a host of other trackable everyday actions. User information and search queries are aggregated into databases to understand and predict purchase intentions, wants, and needs, which can then be matched with behavioral models in real time. Clicks play a key role in this process because they are “read” as patterns of behavior. Based on this click behavior, algorithms produce knowledge that is condensed into complex profiles. To actively change such a profile, users need an awareness of these processes and access to the profile itself. Usually, however, profiles are neither transparent nor open to influence, since the processes for calculating them are typically trade secrets. Google is an exception: logged-in users can view and adjust the interest profile created about them in the Ad Preferences section of their account settings.
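The following Python sketch illustrates the basic idea of condensing click behavior into a profile. It simply counts clicks per content category and normalizes them into weights; the events, URLs, and categories are invented for this example, and real profiling pipelines are proprietary and far more complex.

```python
# Illustrative sketch only: condensing a click log into an interest profile
# by counting clicks per content category. The events and categories are
# invented; real profiling pipelines are proprietary and far more complex.

from collections import Counter

# Each click event has been tagged with a content category.
click_log = [
    {"url": "shop.example/shoes",   "category": "fashion"},
    {"url": "news.example/lisbon",  "category": "travel"},
    {"url": "shop.example/sandals", "category": "fashion"},
    {"url": "shop.example/jackets", "category": "fashion"},
]

# Count clicks per category and normalize into weights between 0 and 1.
counts = Counter(event["category"] for event in click_log)
total = sum(counts.values())
profile = {category: n / total for category, n in counts.items()}

print(profile)  # {'fashion': 0.75, 'travel': 0.25}
```

A weight vector like this is roughly what gets matched against behavioral models or advertising segments in real time, and roughly the kind of summary that a page like Google’s Ad Preferences surfaces in human-readable form.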
What is “Algorithmic Literacy”?
Algorithmic literacy is understood as having an awareness of the use of algorithms in online applications, platforms and services, knowing how algorithms work, being able to critically evaluate algorithmic decisions, and knowing how to deal with and influence algorithmic processes. Practically, this means that individuals are able to apply strategies to change predefined settings in algorithmically curated environments, such as their social media news feeds or search engines. The goal here is to compare the outcomes of different algorithmic choices in order to maintain perspective diversity and protect one’s privacy. Algorithmic literacy can accordingly be seen as an aspect of media criticism and literacy.
By now it should be clear that algorithmic contexts always involve data collection, data processing, and data exchange. Accordingly, data protection is an important topic when talking about algorithmic literacy. Social media algorithms in particular are designed in ways that can quickly foster dependency. To counteract this, a first step can be to limit app notifications and become aware of one’s usage time. One project working on this is UNESCO’s Algorithm & Data Literacy Project, which provides free materials, also suitable for children and the classroom, to educate people about algorithms and their impact on individuals and society. If you want to learn more about algorithms in the classroom, you can watch our video on YouTube here.
Sources
Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and validation of an algorithm literacy scale for Internet users. Communication Methods and Measures, 16(2), 115–133. https://doi.org/10.1080/19312458.2021.1968361