New Study Finds That Computers May Know Us Better Than Our Friends and Family

Computers guess personalities

A new study published earlier this month revealed that even a person’s closest friends and family didn’t know their personality as well as computers did.

A team of researchers from both Stanford and the University of Cambridge discovered that a computer with access to a person’s Facebook “likes” actually knew more about that individual’s personality than that person’s own friends and family.

The study focused on five key personality traits, also known as the “Big Five,” the Stanford Daily reports.

Those traits are “openness, conscientiousness, extraversion, agreeableness and neuroticism.”

It wasn’t long ago that consumers were shocked to learn that online algorithms were doing more than keeping track of their favorite shopping sites and other daily activities. Today’s algorithms also judge consumers, in a sense, using assumptions about their personalities not only to target advertisements but to completely rearrange their search engine results.

Now that computers are also proving to be accurate judges of personality, it seems like our technological companions are getting to know us all too well.

During the study, the researchers collected self-assessments from the participants about their own personalities.

Each of the 82,220 volunteers answered a 100-item questionnaire about their own personality; their friends, family members and spouses were then given a 10-item questionnaire about their loved one’s personality.

Meanwhile, the computer used nothing more than the participants’ Facebook “likes.”

On average, the computer only had to sift through 10 likes before it could guess the person’s personality more accurately than a friend or roommate.

After about 70 likes the computer was able to guess the person’s personality better than their close friends.

It took roughly 150 likes for the computer to perform better than family members.

It took considerably more data, however, for the computer to perform as well as the participants’ spouses.

On average, the computer had to analyze about 300 likes before it could judge the participants’ personality as well as their spouse did with the 10-item questionnaire.

Since participants typically had only about 277 likes on Facebook, the computer often failed to judge their personality as well as their spouse did.
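
The researchers’ exact modeling pipeline isn’t described in this article, but the general idea (fit a regression model that maps a binary user-likes matrix to self-reported Big Five scores, then correlate its predictions with the self-reports) can be sketched in a few lines. The synthetic data, ridge regression and variable names below are illustrative assumptions, not the study’s actual code.

```python
# Illustrative sketch only: synthetic likes and trait scores, ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 2000

# Binary matrix: X[i, j] = 1 if user i liked page j (synthetic, ~5% density).
X = (rng.random((n_users, n_pages)) < 0.05).astype(float)

# Synthetic "self-reported" Big Five scores, loosely driven by the likes.
traits = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]
true_w = rng.normal(size=(n_pages, len(traits)))
y = X @ true_w / np.sqrt(n_pages) + rng.normal(scale=0.5, size=(n_users, len(traits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=10.0).fit(X_train, y_train)
pred = model.predict(X_test)

# The study scored judges by correlating their ratings with self-reports;
# the same check is applied here to the model's predictions.
for i, trait in enumerate(traits):
    r = np.corrcoef(pred[:, i], y_test[:, i])[0, 1]
    print(f"{trait}: r = {r:.2f}")
```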

While spouses still seem to understand their loved ones’ personalities better than the like-reading computers do, the researchers said there are key implications to take away from the study.

In the study’s abstract, the team of researchers wrote that “computers outpacing humans in personality judgment present significant opportunities and challenges in the areas of psychological assessment, marketing and privacy.”

Only time will tell how much privacy consumers are willing to sacrifice in exchange for advancements in psychology and marketing.


Search Algorithms Don’t Just Know You, They’re Judging You, Too

For avid technology users, the online experience has become an extremely personalized one. But the same algorithms that are responsible for recommending new products and completing search terms are not as objective as many users assume.

These algorithms can also be used to shape public opinion, support racial bias and even influence voting behaviors.

Tech giants like Facebook and Google often conduct experiments on their users in order to learn more about their behaviors and how those behaviors can be influenced.

Back in June, Facebook was the target of online backlash after it was revealed that the social media giant conducted a research experiment by manipulating users’ news feeds.

The study was an attempt to see whether altering users’ news feeds could manipulate their emotions.

While Facebook did apologize for “any anxiety” the experiment may have caused, the test did not violate any of the social media site’s terms and conditions that users agree to before setting up their profiles.

Even more upsetting for some users was a study that tested whether Facebook could influence users’ willingness to vote.

The experiment proved successful, and the tech giant announced that it saw a drastic increase in civic engagement and voter turnout after adding an “I voted” badge to certain users’ profiles.

For some people, it raised the question: if social media sites can influence some users to vote, could they also influence some users not to?

Google uses the same type of testing to determine which color combinations and content placements garner the most attention from people online.

That ability to track behavior has also led to something called the “filter bubble,” which is the idea that the same search will produce very different results based on what type of person the search engine assumes you to be.

For example, a Google search for “wagner” will likely return sites about the composer Richard Wagner for women, while men will see results about Wagner USA, a paint supply company.
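
A minimal sketch of how such a “filter bubble” can arise: the engine assigns each user an inferred interest profile and scores every candidate result against it, so the same query comes back in a different order for different people. The profiles, documents and weights below are invented for illustration and are not Google’s actual ranking system.

```python
# Toy re-ranking by inferred interest profile; not Google's actual algorithm.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    topics: dict  # topic -> relevance weight for this document

RESULTS = [
    Result("Richard Wagner, composer biography", {"classical_music": 0.9}),
    Result("Wagner paint sprayers and supplies", {"home_improvement": 0.9}),
    Result("Wagner operas streaming guide", {"classical_music": 0.6}),
]

def personalized_rank(results, interest_profile):
    """Order results by their overlap with the user's inferred interests."""
    def score(result):
        return sum(result.topics.get(topic, 0.0) * weight
                   for topic, weight in interest_profile.items())
    return sorted(results, key=score, reverse=True)

# Two hypothetical inferred profiles for the same query, "wagner".
arts_profile = {"classical_music": 1.0}
diy_profile = {"home_improvement": 1.0}

print([r.title for r in personalized_rank(RESULTS, arts_profile)])
print([r.title for r in personalized_rank(RESULTS, diy_profile)])
```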

Then there was the story of Latanya Sweeney, an African-American Ph.D. at Harvard University.

Sweeney realized that her Google search results were often displaying advertisements asking if she had ever been to jail.

The same advertisements weren’t appearing for her white colleagues.

When Sweeney conducted a study of the advertisements appearing alongside different people’s Google results, it turned out that the algorithms behind the ad placements tended to draw a connection between names commonly given to Black people and ads related to arrest records.
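
Sweeney’s published methodology is more involved than this, but the core of such an audit can be sketched briefly: count how often arrest-themed ads appear for searches on two groups of first names, then test whether the difference is larger than chance would explain. All counts below are invented and are not Sweeney’s data.

```python
# Hypothetical ad-delivery audit in the spirit of Sweeney's study.
# Counts are invented for illustration, not taken from her paper.
from scipy.stats import chi2_contingency

# Rows: name group searched; columns: [arrest-themed ad shown, neutral ad shown]
observed = [
    [60, 40],   # names more commonly given to Black Americans (hypothetical)
    [23, 77],   # names more commonly given to white Americans (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value would indicate the ad mix differs between the two name groups.
```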

Sweeney was confronted with the fact that some of these so-called objective algorithms were making connections based on stereotypes and racial bias.

The real concerns come from the fact that social media sites and search engines are not the only ones using such tools.

Earlier this year, a Hong Kong-based venture capital firm tasked an algorithm with making crucial decisions about which companies to invest in.

If such algorithms are increasingly used to make investment decisions, is it possible that the same kind of inference that suggested Black people would want to know about arrest records could lead wealthy investors to avoid putting money into companies with Black CEOs or a certain percentage of Black employees?

While these algorithms may do relatively little harm when they are merely placing advertisements on Facebook pages, their potential impact on a broader scale is reason enough for marginalized groups to keep a closer eye on which decisions automated systems are allowed to make.