http://www.theglobeandmail.com/technology/google-opts-for-human-touch-in-fight-against-fake-news/article34803437/

Google opts for human touch in fight against fake news

Every year, Google makes hundreds of changes to the computer code of its search engine, but in an attempt to combat the scourge of fake news and offensive content, its engineers are beginning to collect data from a new source: humans.

“It’s become very apparent that a small set of queries in our daily traffic (around 0.25 per cent), have been returning offensive or clearly misleading content,” writes Ben Gomes, Google’s vice-president of engineering, in a blog post outlining policy changes that will solicit more user feedback in an effort to clean up scandals involving automatically generated sections of its search results.

Google’s troubles with offensive content have been popping up more frequently in recent months. In October 2016, users noticed Google would sometimes autocomplete the phrase “are jews …” with the word “evil.” After a public outcry, the company removed the offending suggestions and added more algorithmic scrutiny to so-called “sensitive” topics. But even after the fixes, its search engine still regularly turns up offensive results.

For example, right now, users who type “are black” into a Google search bar might see autocomplete suggestions such as “are black people smart,” which leads to a search page topped by a story about the offensiveness of that autocomplete suggestion, followed by a Fox News article claiming a DNA connection to intelligence and a fourth article with the headline: “Black people aren’t human.” That last article is from an organization called National Vanguard, which is identified as a U.S. neo-Nazi white nationalist splinter group by the Southern Poverty Law Centre.

To combat the problem, Google is giving regular users a new “report” button on its search-bar autocomplete feature so people can more easily alert Google to problematic results. A similar button will be added to the “featured snippets” section of its results pages. Autocomplete and featured snippets – previews of search results – have both been the subject of controversies that involved the promotion of conspiracy theories, fake news and racist slurs on the hugely popular website.

After Tuesday, a user who spots an offensive autocomplete result will be able to flag it for Google’s engineers to review.

But even these high-profile anecdotes don’t capture the scale of the problem Google faces. The company doesn’t say how many searches it processes each day; it says only that it handles “trillions” of search requests a year. So while a bad-content rate of one-quarter of 1 per cent might be a good result for almost any other enterprise, Google could be responding to many billions of user requests a year with these “low quality” results. Small for Google is still a potential avalanche of unwelcome content for users.
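
To put those numbers in perspective, here is a minimal back-of-envelope sketch in Python. The 0.25 per cent figure comes from Google’s blog post; the two-trillion annual search volume is an assumption for illustration only, since Google discloses nothing more precise than “trillions.”

annual_searches = 2 * 10**12   # assumed: roughly 2 trillion searches a year (not a Google figure)
bad_fraction = 0.0025          # “around 0.25 per cent,” per Ben Gomes

# Multiply total volume by the problematic share to estimate annual exposure.
bad_results_per_year = annual_searches * bad_fraction
print(f"{bad_results_per_year:,.0f} problematic results per year")
# At these assumed volumes: 5,000,000,000, or five billion a year.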

Mr. Gomes explained that content promoting hate is also being given the lowest possible search weighting, while increased importance will be given to “high-quality” sources of information, particularly on sensitive topics. Sifting through search results involves a mix of algorithmic and human-curation efforts.

For instance, Google has seen posts containing Holocaust-denying falsehoods ranking high in its searches – an absurd condition when there is excellent scholarship and documentation of the horrors of the Holocaust available online.
