With the threat of regulation looming, Google doubles down on its fight against false news

Amid growing concern that some of the world's largest and most influential tech companies are failing to adequately protect users from the misuse of their platforms, Google is doubling down on its own efforts to curb the spread of misinformation and false news.

The company said on Tuesday that it was introducing a slew of new media-focused products and initiatives — called the Google News Initiative — and committing to a $300-million US investment over the next three years. Those efforts include the creation of a new academic lab to study and counter the spread of disinformation, a new subscription revenue tool for publishers, and funding for new media literacy programs worldwide.

The initiative is Google's latest effort to persuade news publishers to see the tech giant as a partner rather than a competitor, even as the overwhelming majority of digital advertising revenue flows to Google and Facebook each year.

But more urgently, with lawmakers raising the spectre of regulating Big Tech, Google wants to prove it takes the spread of disinformation, the manipulation of its platform, and threats to democracy seriously — and supporting and elevating authoritative, higher-quality content is a crucial part of that effort.

In a presentation for publishers at its New York office, Google's chief business officer, Philipp Schindler, acknowledged that the company hasn't "always gotten it right." But he stressed Google's continuing commitment to helping journalism thrive.

Who's responsible for handling false news?

Both Google and Facebook have faced fierce questioning from lawmakers for their respective platforms' roles in enabling malicious actors to spread false news, disinformation, and divisive political ads in the run-up to the 2016 U.S. presidential election and beyond.

This weekend, Facebook once again drew lawmakers' ire after it was revealed that data analytics firm Cambridge Analytica had obtained detailed profile data harvested from 50 million users — in most cases without their explicit consent.

The episode has led to renewed calls that tech companies be subject to greater political oversight or control — an outcome that Facebook and Google would desperately like to avoid.

In response, Facebook has gone to great lengths over the past year to prove the company can be a force for social good. It has announced partnerships with fact-checking organizations, supported news literacy initiatives, introduced new reporting and publishing tools for journalists, and increased transparency around political ads.

Most recently, Facebook made dramatic changes to its News Feed, prioritizing posts from friends and family over those from news organizations and brands.

Google's initiatives are similar — in part, efforts to mollify critics who argue that tech companies haven't done enough to bring the misuse of their platforms under control.

Yet those efforts have mostly shifted the responsibility for identifying and debunking false news and misinformation away from tech companies and onto third-party fact-checkers, newsrooms and media literacy groups — rather than addressing the underlying factors that incentivize their spread.

Rather than make sweeping changes to the way YouTube surfaces or recommends videos, for example, Google recently announced it would instead display factual information from Wikipedia below contested videos and highlight verified videos in a "Top News" shelf.

To do anything more drastic would require a fundamental reconfiguring of the way their services are designed — optimized first and foremost to serve highly targeted content and advertising, Facebook and Google's primary source of revenue. Neither company is likely to change that anytime soon.

Grants for research and education

The Google News Initiative builds on previous Google efforts, including a tool that uses artificial intelligence to identify hateful or offensive comments, and a technology called AMP that makes stories load faster on mobile devices.

As part of its announcement, Google said that its philanthropic arm, Google.org, would distribute $10 million in grant money to organizations worldwide working on media literacy programs. To start, $3 million will be given to a U.S. media literacy initiative aimed at students called MediaWise, which is modelled after an initiative called NewsWise that was piloted in Canada last autumn.

Content developed by NewsWise is estimated to reach more than one million students, starting in Ontario classrooms this spring and expanding nationwide next year, ahead of the 2019 federal election.

Google is also launching an academic lab with the Harvard University research group First Draft. The Disinfo Lab, as it is called, will use a mix of computational tools and human expertise to identify trending disinformation and misinformation — specifically during elections and moments of breaking news — and share those insights with newsroom partners.

It is modelled after prior work done during the 2017 French presidential election.

The company also said it would put some of its investment toward hiring a new team of machine-learning engineers dedicated to building and scaling tools that meet publishers' needs — from the creation of compelling new story formats to new ways of collecting revenue.

One of those tools would help journalists identify cutting-edge fake audio or video clips — what Google calls synthetic media — that some predict will soon be used in attempts to fool the media. Training data will be made available to researchers and journalists so they can develop their own detection tools, too.