Nobias

Nobias - Bursting the Virtual Filter Bubble

https://www.ie.edu/exponential-learning/blog/data-science/machine-learning-marketing/


Have you ever wondered how reliable and transparent online sources are, and how they may influence your perception of and attitude toward important topics such as politics, business, or race? Since 2009, Google has been customizing users’ search results for marketing purposes, which not only influences consumers’ purchase decisions but also shapes their virtual realities. Using Artificial Intelligence (AI) algorithms, websites are tailored to each individual, giving them a unique online experience: a personalized universe of information. Each consumer’s online experience reinforces itself with every click, article, and video until they’ve created their own personal online reality. Just as in real life, humans form their perceptions and attitudes by focusing on information and news that sound familiar and pleasant, as Eli Pariser, president of MoveOn.org, observes in his book The Filter Bubble: “The more often we click on similar articles, the more articles with the same biases appear in front of us, leaving us in the belief that they report the truth. That’s a problem because it leaves less room for opposition, other perspectives, creativity, innovation, and the democratic exchange of ideas.”

In her book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil, Ph.D., professor, hedge-fund analyst, and data scientist, discusses how algorithms increasingly regulate people and how biased most sources on the internet are. Pedro Domingos, professor, machine-learning researcher, and author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, lays out how online data reflects the social, historical, and political conditions in which it was created, and how Artificial Intelligence systems “learn” based on the data they are given. On a deeper level, the human brain is a pattern-recognizer: it tends to look for information it already knows and that supports its current belief system. Humans are influenced by primary biases by nature; algorithms reinforce this disadvantage and scale it to extremes, creating a worldview that might not be beneficial in the long run. The media denounces algorithms as “biased,” “racist,” and “sexist,” and further highlights how algorithms on Google, Facebook, and Yahoo can be a huge disadvantage in business. It’s essential for business leaders to remain objective and to make important decisions with all the facts at hand, looking at problems from different angles. They need to scan the world for upcoming competitors and understand different points of view in business and leadership. Additionally, algorithms even choose sources that may influence business decisions, such as investing in stocks or companies: users are presented with articles that emphasize one point of view, recommending either buying or selling certain stocks, and never get exposed to sources arguing the opposite.


Another problem with Artificial Intelligence and algorithms is the emphasis on outrageous content, regardless of its value or credibility. Since engagement and sharing rates for disturbing and emotionally charged content are 20% higher than usual, algorithms pick up on it and “shape consumers’ moral emotions in ways that could ultimately make it harder to change society for the better” (M. J. Crockett, assistant professor at Yale). If people are only connected to those they agree with, the potential for political, racial, and gender-related dialogue is lost. Understanding the opposition’s point of view and reasoning, instead of staying trapped in feedback loops, is essential to exploring solutions that lead to social change.

The question becomes: how objective can one remain when browsing the world wide web? Is it still possible to gain access to actual facts without being influenced by biases? In order to stay in control of their decisions and opinions, consumers want to expose themselves to many different points of view and ways of thinking while avoiding “fake” news and other low-reliability sources. Nobias’s mission is to reveal facts and enable people to control the intellectual input they consume. It’s the solution to staying objective and in control when consuming content online, the answer to escaping false virtual realities. Used as a Chrome extension, it helps individuals and organizations burst their “virtual filter bubble” by revealing an article’s biases and examining its credibility before they read it, giving them back control over their online reality. Further, it reflects back the biases users create for themselves. Other initiatives point in the same direction. Civik Owl evaluates the quality of news stories, analyzing them for credibility and political diversity. The AI Now Institute at NYU has the mission of eliminating biases and inaccurate and unfair outcomes: “AI Now researches issues of fairness, looking at how bias is defined and by whom, and the different impacts of AI and related technologies on diverse populations.” News Guard uses a rating system with nine criteria to determine whether sources are opinion-based or credible and transparent, such as not publishing false content, not presenting opinions as facts, and clearly labeling advertising. Deep News has the mission of “restoring the economic value of journalism by automatically submitting stories to the platform and scoring them on a scale from 1 to 5 based on their journalistic quality.”
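A criteria-based rating like News Guard’s can be pictured as a weighted checklist. The sketch below is only an illustration: the three criteria names are paraphrased from the ones mentioned above, the remaining criteria are omitted, and the point weights and the `credibility_score` function are invented for this example, not News Guard’s actual scoring.

```python
# Invented weights for a simplified criteria checklist; a real rating
# system defines its own criteria and point values.
CRITERIA_WEIGHTS = {
    "does_not_repeatedly_publish_false_content": 22,
    "does_not_present_opinion_as_fact": 13,
    "clearly_labels_advertising": 8,
}

def credibility_score(checks):
    """Sum the weights of every criterion the source satisfies."""
    return sum(weight
               for criterion, weight in CRITERIA_WEIGHTS.items()
               if checks.get(criterion, False))

score = credibility_score({
    "does_not_repeatedly_publish_false_content": True,
    "clearly_labels_advertising": True,
})
print(score)  # 30
```

A source passing every criterion here would total 43 points; a real system would compare the total against a threshold to label the source credible or not.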

Not only is remaining objective important when making important decisions in business, but also when it comes to social issues and opinions on race, gender equality, politics, and many other sectors of society. Users should have the right to access the full scope of information on the internet, not just one point of view that infinitely echoes back the same information. Staying trapped in false or limited information leads to inequality, a lack of education, and ultimately many different “virtual filter bubbles” that fail to present the truth. Nobias lets users define their destiny instead of having algorithms control their reality.



Politically Biased AI Algorithms


Artificial intelligence is advancing day by day, taking over more responsibilities and decision-making than ever. AI algorithms are part of our everyday life, making it more efficient and convenient, personalizing our online experience and filtering our social media feeds to provide us with only the most relevant information. Yet even the most impressive inventions have to be looked at from a critical point of view. Although artificial intelligence solves many problems and is supposed to eliminate human biases, it still reflects the challenges of our society and then scales them exponentially. After all, algorithms are created by humans, making it practically impossible to ever build completely objective systems.

“Questions about fairness and bias in machine learning are tremendously important for our society,” says Arvind Narayanan, an assistant professor of computer science at Princeton University and the Center for Information Technology Policy (CITP), as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society.

Kate Crawford, co-founder of the AI Now Institute, states her concern about how AI controls the way people think and ultimately vote: “What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them.”


One of the biggest and most dangerous impacts AI and algorithms have on society is on people’s political opinions. When one browses the web, personalization algorithms develop feedback loops, repeatedly suggesting the same kind of content, opinions, and political viewpoints and making it almost impossible for consumers to remain objective. When a user is drawn to democratic concepts, for example, search-ranking algorithms learn to prioritize similar content, leading them to more information that reinforces their opinion. This process leads users to develop their own virtual filter bubble, which ultimately fractures society into polarized groups. Politically biased algorithms take away users’ choice to shape their individual reality themselves. The question remains whether programs could ever be “neutral” or whether they’ll always reflect humans to a certain degree, especially if they’re built within a discriminatory society.
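The feedback loop described above can be sketched as a toy model: the feed shows one viewpoint with some probability, the user clicks it slightly more often than the alternative, and the feed updates toward the observed clicks. Everything here, including the `next_share` function and the 55/45 click preference, is invented for illustration and is not any real platform’s algorithm.

```python
def next_share(p, user_bias=0.55):
    """One round of a toy personalization loop.

    The feed shows viewpoint A with probability p; the user clicks
    shown A-items with probability user_bias (and B-items with
    probability 1 - user_bias); the feed then resets p to the
    observed share of clicks that went to A.
    """
    clicks_a = p * user_bias
    clicks_b = (1 - p) * (1 - user_bias)
    return clicks_a / (clicks_a + clicks_b)

p = 0.5  # the feed starts perfectly balanced
for _ in range(20):
    p = next_share(p)
print(f"share of viewpoint A after 20 rounds: {p:.3f}")  # ≈ 0.982
```

Even a mild 55/45 click preference, fed back through the ranking round after round, drives the feed to near-total one-sidedness within twenty rounds, which is exactly the polarization dynamic the paragraph describes.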


In addition to biased AI algorithms, users are also influenced by platforms such as Google and Facebook, which already have political preferences and therefore emphasize content that reflects their views, impacting how society’s thinking is shaped. “Social media companies are the gatekeepers,” says Franklin Foer, writer at The Atlantic and former editor of The New Republic. “Whatever choices these companies make to elevate or bury information is very powerful and will have a big impact on what people read.”

In a study of the power of search rankings, Robert Epstein, a psychology researcher and professor, was able to boost support for political candidates by over 60 percent after just one search on a manipulated search engine. This experiment shows how quickly AI processes create feedback loops instead of educating people on multiple political standpoints and encouraging them to explore solutions that lead to political and social change. It’s the responsibility of social media companies to be mindful of possible manipulation of what is presented to users, as well as the responsibility of writers to inform people in a neutral and unbiased manner. “Writers do not merely reflect and interpret life, they inform and shape life,” states E. B. White, author of Charlotte’s Web and writer for The New Yorker.


Since users are being steered by social media platforms, search engines, media outlets, and the opinions of journalists, it’s important to give them the opportunity to look at problems more objectively and help them form their own opinions on political subjects. Developers are called to create responsible and effective software that doesn’t aim to manipulate people’s political preferences, and individuals are encouraged to share content with their friends and followers consciously, aware of the impact certain articles may have. Even if it’s hard to unpack an algorithm, making transparency a high priority could have useful effects.


For example, with the help of an infographic by ISTE, consumers can determine whether online content is credible and unbiased. It encourages them to look for certain criteria, such as the reliability of the news source, website, or author, and whether the headline matches the content of the article. It also recommends investigating the reliability of the URL, checking for a current date, and confirming that facts are backed up.


Besides educating people about their responsibility when spreading news, tools like Nobias make it even easier to identify credible content and sources and to transparently reveal any political biases. Used as a Chrome extension, Nobias aims to shine a light on each user’s own unconscious bias and give them control of their interaction with algorithms as they consciously decide whether they want to read and share certain articles. It determines the credibility of the source (media outlet) and author, and detects the polarity (also called sentiment analysis or opinion mining) of the article itself. Nobias measures polarity based on the paper “What Drives Media Slant? Evidence from U.S. Daily Newspapers” by Matthew Gentzkow and Jesse M. Shapiro, examining an article’s content for right- or left-leaning tone, wording, and message.
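As a rough illustration of phrase-based slant scoring in the spirit of Gentzkow and Shapiro (who derive partisan phrase lists statistically from the Congressional Record rather than by hand), here is a minimal sketch. The mini-lexicons and the `slant_score` function are invented for this example and are not Nobias’s actual implementation.

```python
# Hypothetical mini-lexicons; real slant measurement derives these
# phrase lists statistically rather than curating them by hand.
LEFT_PHRASES = {"estate tax", "workers' rights", "undocumented immigrants"}
RIGHT_PHRASES = {"death tax", "right to work", "illegal aliens"}

def slant_score(text):
    """Return a score in [-1, 1]: negative = left-leaning wording,
    positive = right-leaning wording, 0 = balanced or no signal."""
    t = text.lower()
    left = sum(t.count(phrase) for phrase in LEFT_PHRASES)
    right = sum(t.count(phrase) for phrase in RIGHT_PHRASES)
    total = left + right
    return 0.0 if total == 0 else (right - left) / total

print(slant_score("Repealing the death tax protects right to work states."))  # 1.0
print(slant_score("The estate tax funds protections for workers' rights."))   # -1.0
print(slant_score("The weather was mild today."))                             # 0.0
```

The intuition is that the same policy can be named in politically loaded ways (“estate tax” vs. “death tax”), so the balance of loaded phrases gives a crude signal of an article’s leaning.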


Helping people gain perspective and see things for what they are without being influenced by opinions tailored to their past interactions, Nobias lets people control the intellectual input they consume.

With the help of tools like Nobias and initiatives like News Guard or the AI Now Institute, people are now able to choose content while being aware of its political bias. Detecting fake news before it spreads can prevent major misinterpretation and misunderstanding of complex topics such as politics. Creating awareness of bias in algorithms, educating people on their responsibilities as digital citizens, and developing products that help them evaluate the accuracy, perspective, reliability, and relevance of content is the first step toward transparency. AI brings amazing new opportunities and, when used responsibly, has the power to change lives and society in a very positive way.



Are you a responsible “digital citizen”?

Iste.org


In the age of the internet and social media, when over 80% of the US population has an online presence and uses the internet regularly, it’s essential to understand one’s responsibilities as a “digital citizen”.

Often, we underestimate the impact we might have on others when interacting online, posting content that shapes their opinions. When sharing intellectual property, it’s key to be aware of what we put into the world and to educate with credible facts instead of amplifying disturbing content simply because it fascinates. We consume what our friends post and build our opinions on what we hear and read. Living and working in a digital world therefore means recognizing one’s rights and responsibilities and acting in safe, legal, and ethical ways.


Technology has many advantages, and when used right it can spread important news fast and scale positive movements quickly. Yet it has to be handled with caution: misinformation and fake news spread like wildfire and can have a major impact on topics such as politics and business. Being a responsible digital citizen means using technology appropriately and operating safely and knowledgeably online.

“As many educators know, most students want to do the right thing — and will, if they know what that is,” Mike Ribble, author of Digital Citizenship in Schools and co-founder of the new Digital Citizenship Network states. “Let’s help them do great things with technology while avoiding the pitfalls.”

Here are 5 ways of taking responsibility as a “digital citizen”:

1. Recognize your responsibilities

Know that the rights and freedom you have online come with responsibilities. Recognize the impact you may have on others, especially if you represent an authority. Be conscious of the content you share with friends and followers and of the ways it can influence them. When others act irresponsibly, make them aware of their behavior.


2. Safety

Maintaining digital security is crucial when you have an online presence. Make sure not to publish sensitive content that may cause problems for yourself or others. Don’t share personal information publicly, and always remember: what goes on the internet stays on the internet. Besides protecting yourself from spam and viruses, be aware that details such as your contact information should be published with caution.


3. Ethics

Cyberbullying is, and has long been, a huge issue: close to 34 percent of students acknowledge that they have experienced it, which can have major effects not only on individuals but on society in general. Showing respect toward others, even when you don’t share their opinion, is the foundation of leveraging technology to move social causes forward. Seeking to understand where other people are coming from is one of the fastest ways to develop empathy toward them and to overcome social and political issues.


4. Legality

Just as in “real life”, there are laws online that have to be followed. Don’t engage in unlawful activities such as stealing intellectual property, damaging others’ work or identity, or creating destructive programs or websites. Even though it’s much harder to be held accountable for illegal behavior online, it’s your responsibility as a “digital citizen” to follow the law.


5. Educate yourself

Seek to understand all perspectives on a topic and apply critical thinking to all online content. Avoid sharing unreliable sources and fake news with your followers. The International Society for Technology in Education (ISTE) recommends always reading multiple articles on the same topic and determining the reliability of sources, including the news outlet, the author, and whether the facts are backed up. Before sharing anything online that may affect people’s opinions, make sure the content contributes positively to society and helps others understand issues from an objective point of view. Almost 80% of students mistake paid advertisements for legitimate news, which means it’s very easy to control how (especially young) members of society think. Use tools such as Nobias to determine the bias and reliability of articles and to detect fake news before sharing. It’s your responsibility as a “digital citizen” to educate yourself and to avoid spreading fake news that leads to misunderstanding of important topics such as business and politics.


Undoubtedly, being a “digital citizen” has many advantages and comes with new possibilities: we’re more connected than ever and are building an entire parallel world online, sharing our passions and pains, discussing politics and social issues, and doing business with each other.

Technology brings many new inventions and opportunities. Used responsibly, the internet and social media have the power to move social causes forward and be an incredible aid for change. Educating yourself on how best to behave online, and on how you influence others when sharing content, is therefore one of the marks of a responsible “digital citizen”.

