Search Algorithms Don’t Just Know You, They’re Judging You, Too

For avid technology users, the online experience has become an extremely personalized one. But the same algorithms that are responsible for recommending new products and completing search terms are not as objective as many users assume.

These algorithms can also be used to shape public opinion, reinforce racial bias and even influence voting behavior.

Tech giants like Facebook and Google often conduct experiments on their users to learn more about how they behave and how that behavior can be influenced.

Back in June, Facebook was the target of online backlash after it was revealed that the social media giant conducted a research experiment by manipulating users’ news feeds.

The study was an attempt to see how altering the news feeds would affect users’ emotions.

While Facebook did apologize for “any anxiety” the experiment may have caused, the test did not violate any of the social media site’s terms and conditions that users agree to before setting up their profiles.

Even more upsetting for some users was a study that attempted to see how Facebook could impact users’ willingness to vote.

The experiment proved successful, and the tech giant announced that it saw a drastic increase in civic engagement and voter turnout by incorporating an “I voted” badge on certain users’ profiles.

For some people, it raised the question: if social media sites can influence some users to vote, could they also influence some users not to?

This is the same type of testing Google uses to determine which color combinations and content placements garner the most attention from people online.
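
To make the mechanics of that kind of testing concrete, here is a deliberately simplified sketch in Python. The variant names, click probabilities and user counts are invented for illustration; they are not from any real Google or Facebook experiment. The idea is simply that users are randomly bucketed into page variants and the click-through rates are compared.

```python
import random
from collections import defaultdict

# Hypothetical page variants being tested against each other.
VARIANTS = ["blue_button_top", "green_button_top", "blue_button_sidebar"]

def assign_variant(user_id: int) -> str:
    """Deterministically bucket a user into one of the variants."""
    return VARIANTS[hash(user_id) % len(VARIANTS)]

def run_simulated_experiment(n_users: int = 10_000) -> dict:
    # Invented "true" click probabilities for each variant.
    true_ctr = {
        "blue_button_top": 0.031,
        "green_button_top": 0.035,
        "blue_button_sidebar": 0.022,
    }
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for user_id in range(n_users):
        variant = assign_variant(user_id)
        shown[variant] += 1
        if random.random() < true_ctr[variant]:
            clicked[variant] += 1
    # Observed click-through rate per variant.
    return {v: clicked[v] / shown[v] for v in VARIANTS}

if __name__ == "__main__":
    for variant, ctr in run_simulated_experiment().items():
        print(f"{variant}: observed click-through rate {ctr:.3%}")
```

In a real experiment the winning variant would then be rolled out to everyone, which is exactly how small design decisions end up quietly shaping what billions of people see and click.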

That ability to track behavior has also led to something called the “filter bubble,” which is the idea that the same search will produce very different results based on what type of person the search engine assumes you to be.

For example, a search for “wagner” on Google will likely return sites about the composer Richard Wagner for women, while men will see results about Wagner USA, a paint supply company.
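
A rough sketch can show how such personalization might work in principle. Everything below is hypothetical: the documents, the topic tags and the inferred “interest profiles” are made up, and real search engines use far more elaborate signals. The point is only that the identical query can be re-ranked against whatever profile the engine has inferred about you.

```python
# Hypothetical result set for the query "wagner".
DOCUMENTS = [
    {"title": "Richard Wagner - composer biography", "topics": {"classical_music", "opera"}},
    {"title": "Wagner paint sprayers - official store", "topics": {"home_improvement", "tools"}},
    {"title": "Wagner: Ring cycle recordings ranked", "topics": {"classical_music"}},
    {"title": "How to clean a Wagner paint sprayer", "topics": {"home_improvement"}},
]

def personalized_ranking(profile_interests: set[str]) -> list[str]:
    """Rank results by how much they overlap with the inferred user profile."""
    def score(doc):
        return len(doc["topics"] & profile_interests)
    ranked = sorted(DOCUMENTS, key=score, reverse=True)
    return [doc["title"] for doc in ranked]

# Two users type the identical query but see different orderings,
# based purely on what the engine assumes about them.
print(personalized_ranking({"classical_music", "concerts"}))
print(personalized_ranking({"home_improvement", "diy"}))
```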

Then there was the story of Latanya Sweeney, an African-American Ph.D. and professor at Harvard University.

Sweeney noticed that Google searches for her own name often displayed advertisements asking if she had ever been arrested.

The same advertisements weren’t appearing for her white colleagues.

After she conducted a study of the advertisements served alongside searches for different people’s names, it turned out that the algorithms behind the ad placements were more likely to draw a connection between names commonly given to Black people and ads related to arrest records.
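
The shape of that kind of audit is simple enough to sketch, assuming invented numbers. The counts below are illustrative only; Sweeney’s actual study ran thousands of searches and applied formal statistical tests before drawing conclusions.

```python
# Simplified audit sketch: search a list of names, record whether an
# arrest-related ad appeared, and compare the rates between two groups.
# All counts here are hypothetical, chosen only to illustrate the method.

def arrest_ad_rate(ad_appeared: list[bool]) -> float:
    """Fraction of searches in a group that triggered an arrest-record ad."""
    return sum(ad_appeared) / len(ad_appeared)

# True means an arrest-record ad was shown for that search (invented data).
results_black_names = [True] * 60 + [False] * 40
results_white_names = [True] * 25 + [False] * 75

rate_black = arrest_ad_rate(results_black_names)
rate_white = arrest_ad_rate(results_white_names)

print(f"Arrest-record ads for Black-identifying names: {rate_black:.0%}")
print(f"Arrest-record ads for white-identifying names: {rate_white:.0%}")
print(f"Disparity ratio: {rate_black / rate_white:.1f}x")
```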

Just like that, Sweeney was confronted with the fact that some of these so-called objective algorithms make connections based on stereotypes and racial bias.

The real concern is that social media sites and search engines are not the only ones using such tools.

Earlier this year, a Hong Kong-based venture capital firm tasked an algorithm with making crucial decisions about which companies to invest in.

If such algorithms are routinely used to make investment decisions, is it possible that the same pattern-matching that assumed Black people would want to know about arrest records could recommend that wealthy investors avoid putting money into companies with Black CEOs or a certain percentage of Black employees?

While these algorithms may not cause much harm when they are merely placing advertisements on Facebook pages, what they have the ability to do on a broader scale is reason enough for marginalized groups to keep a closer eye on what decisions these automated systems are allowed to make.