Politically Biased AI Algorithms


Artificial intelligence is advancing day by day, taking on more responsibility and decision-making than ever. AI algorithms are part of our everyday lives, making them more efficient and convenient, personalizing our online experience, and filtering our social media feeds to show us only the most relevant information. Yet even the most impressive inventions have to be viewed critically. Although artificial intelligence solves many problems and is supposed to eliminate human biases, it still reflects the challenges of our society and then scales them exponentially. After all, algorithms are created by humans, making it practically impossible to ever build completely objective systems.

“Questions about fairness and bias in machine learning are tremendously important for our society,” says Arvind Narayanan, an assistant professor of computer science at Princeton University and the Center for Information Technology Policy (CITP), as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society.

Kate Crawford, co-founder of the AI Now Institute, states her concern about how AI shapes the way people think and ultimately vote: “What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them.”

One of the biggest and most dangerous impacts AI and algorithms have on society is on people’s political opinions. When someone browses the web, personalization algorithms develop feedback loops, repeatedly suggesting the same kind of content, opinions, and political viewpoints and making it almost impossible for consumers to remain objective. When a user is drawn to democratic concepts, for example, search-ranking and recommendation algorithms learn to prioritize similar content, surfacing more information that reinforces that user’s opinion. This process traps each user in a unique virtual filter bubble, which ultimately fractures society into polarized groups. Politically biased algorithms take away users’ ability to shape their individual reality themselves. The question remains whether programs can ever be “neutral” or whether they will always reflect humans to a certain degree, especially when they are built within a discriminatory society.
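The feedback loop described above behaves like a "rich-get-richer" process: each click makes similar content more likely to be recommended, which produces more clicks. A minimal sketch of that dynamic, assuming a toy recommender that suggests topics in proportion to past clicks (the topic labels, counts, and update rule are illustrative assumptions, not any real platform's algorithm):

```python
import random
from collections import Counter

def simulate_feedback_loop(topics, steps=500, seed=0):
    """Toy model of a personalization feedback loop: the recommender
    suggests topics in proportion to past clicks, and the user clicks
    whatever is suggested, so early imbalances compound over time."""
    rng = random.Random(seed)
    clicks = Counter({t: 1 for t in topics})  # uniform prior over topics
    clicks[topics[0]] += 1                    # one extra early click
    for _ in range(steps):
        # Draw a suggestion weighted by the user's click history.
        suggestion = rng.choice(list(clicks.elements()))
        clicks[suggestion] += 1               # engagement feeds back in
    return clicks

result = simulate_feedback_loop(["left", "center", "right"])
total = sum(result.values())
shares = {t: round(c / total, 2) for t, c in result.items()}
print(shares)  # final click share per topic; the feed rarely stays balanced
```

Running this repeatedly with different seeds shows the essential point: the process almost never settles back to an even split, even though the initial imbalance was a single click.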

In addition to biased AI algorithms, users are also influenced by platforms such as Google and Facebook, which may have political preferences of their own and therefore emphasize content that reflects those preferences, shaping how society thinks. “Social media companies are the gatekeepers,” says Franklin Foer, a writer at The Atlantic and former editor of the New Republic. “Whatever choices these companies make to elevate or bury information is very powerful and will have a big impact on what people read.”

In a study of the power of search rankings, Robert Epstein, a psychology researcher and professor, was able to boost support for political candidates by over 60 percent after just one session with a manipulated search engine. This experiment shows how quickly ranking algorithms can steer opinion, instead of educating people on multiple political standpoints and encouraging them to explore solutions that lead to political and social change. It is the responsibility of social media companies to guard against manipulation of what is presented to users, and the responsibility of writers to inform people in a neutral and unbiased manner. “Writers do not merely reflect and interpret life, they inform and shape life,” states E.B. White, author of Charlotte’s Web and writer for The New Yorker.

Since users are influenced by social media platforms, search engines, media outlets, and the opinions of journalists, it is important to give them the opportunity to examine problems more objectively and form their own opinions on political subjects. Developers are called to create responsible and effective software that does not aim to manipulate people’s political preferences, and individuals are encouraged to share content with their friends and followers consciously, aware of the impact certain articles may have. Even if it is hard to unpack an algorithm, a high priority on transparency could have useful effects.

For example, an infographic by ISTE helps consumers determine whether online content is credible and unbiased. It encourages them to check criteria such as the reliability of the news source, website, or author, and whether the headline matches the content of the article. It also recommends investigating the reliability of the URL, confirming a current publication date, and verifying that facts are backed up.

Besides educating people on their responsibility not to spread biased news, tools like Nobias make it even easier to identify credible content and sources and transparently reveal political bias. A Chrome extension, Nobias aims to shine a light on each user’s own unconscious bias and give them control over their interaction with algorithms as they consciously decide whether to read and share certain articles. It assesses the credibility of the source (media outlet) and author and detects the polarity (also called sentiment analysis or opinion mining) of the article itself. Nobias measures polarity based on the paper “What Drives Media Slant? Evidence from U.S. Daily Newspapers” by Matthew Gentzkow and Jesse M. Shapiro, examining an article’s content for right- or left-leaning tone, wording, and message.
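Nobias’s actual model is not public, but the Gentzkow and Shapiro approach it cites works, roughly, by counting phrases used disproportionately by one party or the other. A minimal sketch of that idea, where the two phrase lists below are hypothetical stand-ins (not the paper’s actual data, and not Nobias’s implementation):

```python
# Hypothetical phrase lists for illustration only; the real study derived
# its phrases from congressional speech, and these are invented stand-ins.
LEFT_PHRASES = {"estate tax", "workers rights", "climate crisis"}
RIGHT_PHRASES = {"death tax", "illegal aliens", "tax relief"}

def slant_score(text: str) -> float:
    """Return a score in [-1, 1]: negative means left-leaning phrasing,
    positive means right-leaning, and 0 means balanced or no match."""
    lowered = text.lower()
    left = sum(lowered.count(p) for p in LEFT_PHRASES)
    right = sum(lowered.count(p) for p in RIGHT_PHRASES)
    if left + right == 0:
        return 0.0
    return (right - left) / (right + left)

print(slant_score("Repealing the death tax will bring tax relief."))  # 1.0
print(slant_score("The climate crisis demands action."))              # -1.0
```

A production system would go well beyond raw phrase counts (statistical phrase selection, context, and source-level signals), but the sketch captures the core intuition: slant is measured by how closely an article’s wording tracks one side’s vocabulary.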

Helping people gain perspective and see things for what they are without being influenced by opinions tailored to their past interactions, Nobias lets people control the intellectual input they consume.

With the help of tools like Nobias and initiatives like NewsGuard or the AI Now Institute, people can now choose content with an awareness of political bias. Detecting fake news before it spreads can prevent major misinterpretation and misunderstanding of complex topics such as politics. Raising awareness of bias in algorithms, educating people on their responsibility as digital citizens, and developing products that help them evaluate the accuracy, perspective, reliability, and relevance of content is the first step toward transparency. AI brings amazing new opportunities and, when used responsibly, has the power to change lives and society in a very positive way.