The old idiom that every picture tells a story is giving Google cause for concern. With all the world’s graphics-engineering resources at its disposal, the tech giant is taking a deeper, under-the-covers look at the credibility of the visual media that appears in Google Images to make sure every photo tells a true story.
It took nearly two years for Google to get the message that consumers are skeptical of online photos, but the company is now building fact checks into its image-search results so people are aware of any disputes and can make more informed decisions.
Look for the “Fact Check” label
Going forward, when someone searches Google Images, they may see a “Fact Check” label under some results. Tapping that label gives the user a quick overview of what Google’s fact check found, both for specific images and for articles containing an allegedly fake image.
Google says it also fact-checks regular Search results and Google News, relying on ClaimReview, an open markup standard that web publishers use to flag fact-check content for search engines; Google applies the same system on YouTube.
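ClaimReview is schema.org structured data that publishers embed in the pages hosting their fact checks. A minimal sketch of what such markup might look like follows; the URL, claim text, and rating label here are hypothetical placeholders, not taken from any real fact check:

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.com/fact-checks/shark-photo",
  "datePublished": "2020-06-22",
  "author": {
    "@type": "Organization",
    "name": "Example Fact Checkers"
  },
  "claimReviewed": "Photo shows a shark swimming on a flooded highway",
  "itemReviewed": {
    "@type": "Claim",
    "datePublished": "2020-06-15"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "alternateName": "False"
  }
}
```

Search engines that support ClaimReview can read fields like `claimReviewed` and `reviewRating` from this markup to render a fact-check label next to the relevant result.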
“Photos and videos are an incredible way to help people understand the world. But the power of visual media has its drawbacks — particularly when there are concerns about an image’s origin, credibility or meaning,” commented Harris Cohen, Google’s Search Group Product Manager.
Fact-checking is on the rise
Google isn’t the first — and certainly won’t be the last — to employ fact-checking. Weeding out fake news, videos and photos has become a must for every Big Tech company that wants to stay in good standing with its user base.
No company has been as proactive as Facebook, though. The social media platform ramped up its fact-checking after the 2016 election and went on a tear earlier this year busting coronavirus myths.
YouTube CEO Susan Wojcicki noted in a recent Washington Post Live interview that the company pulls videos that violate its policies, including hate speech, incitement to violence, or any type of manipulated media that could spread disinformation.
Wojcicki said it doesn’t matter whether those doctored videos come from a politician or anyone else, but she noted that YouTube’s new rules allow some of them to remain available if they are presented in context, such as in news coverage or for educational purposes.