Twitter | AI | Social Media

Verifying Struggles and Artificial Advances

Nick Cotton Dec 22, 2017

Here’s the news we’re talking about around the Zbra Studios water cooler. We’ve provided key bullet points from each article for the speed readers out there.

Internal Emails Show Twitter Struggled To Interpret Its Own Verification Rules While Hunting Trolls
By Charlie Warzel from Buzzfeed
  • “The blue checkmark, first introduced in 2009, was supposed to prevent impersonation. But according to the emails, some inside Twitter viewed verification as both an endorsement and a badge of validity — especially among journalists and celebrities. Other emails reveal that verification bestowed upon users perks and status within the Twitter community.”
  • “One employee argued that Twitter’s own internal metrics suggested a different meaning for the blue checkmark. ‘[Verification] makes the account measured for Media OKRs [Objectives and Key Results] and contributes to the VIT [Very Important Tweeter] count we report to shareholders,’ Sharp wrote in an email to fellow executives, suggesting that verified users were valuable to the company.”
  • “The emails also highlight a fundamental tension inside Twitter — the strain between the company’s desire to rid its platform of bad actors and its oft-professed commitment to a maximalist interpretation of free speech.”
As Artificial Intelligence Advances, Here Are Five Tough Projects for 2018
By Tom Simonite from Wired
  • “One strand of that work aims to give machines the kind of grounding in common sense and the physical world that underpins our own thinking. Facebook researchers are trying to teach software to understand reality by watching video, for example.”
  • “Getting a robot to do anything requires specific programming for a particular task. They can learn operations like grasping objects from repeated trials (and errors). But the process is relatively slow. One promising shortcut is to have robots train in virtual, simulated worlds, and then download that hard-won knowledge into physical robot bodies.”
  • “Researchers discussed fiendish tricks like how to generate handwritten digits that look normal to humans, but appear as something different to software. What you see as a 2, for example, a machine vision system would see as a 3. Researchers also discussed possible defenses against such attacks—and worried about AI being used to fool humans.”
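The attack described in that last bullet — an input that looks like one thing to a person but another to a classifier — is usually built by nudging the input in the direction that most increases the model's error. Here is a minimal NumPy sketch of that gradient-sign idea on a toy linear classifier; the weights, input values, and class labels are invented purely for illustration, not taken from any of the research discussed above.

```python
import numpy as np

# Toy linear classifier: positive score -> reads the input as a "3",
# negative score -> reads it as a "2". (Weights are made up for this demo.)
w = np.array([1.0, -2.0, 0.5])

def classify(v):
    return "3" if np.dot(w, v) > 0 else "2"

x = np.array([0.5, 0.8, 0.2])   # original input; the model sees a "2"

# Gradient-sign perturbation: for a linear score w.x, the gradient with
# respect to the input is just w, so we step each feature by a small
# epsilon in the direction that pushes the score toward the other class.
eps = 0.6
x_adv = x + eps * np.sign(w)    # every feature changes by at most eps

print(classify(x))      # "2"
print(classify(x_adv))  # "3" -- a small, bounded change flips the label
```

The point of the sketch is the bound: no single feature moves by more than `eps`, which is why the perturbed input can remain visually indistinguishable to a human while the classifier's answer changes.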
