Microsoft has a new plan to prove what’s real and what’s AI online
This newsletter discusses Microsoft's proposed blueprint for verifying the authenticity of online content amid increasingly sophisticated AI-generated disinformation. The plan calls for technical standards for AI companies and social media platforms, drawing parallels to authenticating artwork through provenance, watermarks, and digital fingerprints. However, the newsletter also raises concerns about the limits of these tools, their potential for misuse, and whether tech companies and governments are willing to implement them effectively.
- AI-Driven Deception: The newsletter highlights the growing problem of AI-enabled deception in online content, citing examples ranging from manipulated images shared by government officials to Russian disinformation campaigns.
- Microsoft's Verification Blueprint: Microsoft proposes using methods like provenance tracking, watermarking, and digital fingerprinting to verify the authenticity of online content, aiming to create a "gold standard" for content verification.
- Implementation Challenges & Limits: While Microsoft's approach could make manipulated content harder to spread, the newsletter acknowledges that sophisticated actors can bypass these tools, and that the technology doesn't address the underlying question of whether content is accurate.
- Tech Company & Government Reluctance: The newsletter questions whether tech companies will fully adopt these measures if they risk reducing user engagement. It also highlights the potential for governments to exploit these technologies for their own disinformation campaigns.
- Sociotechnical Attacks: The newsletter raises concerns that bad actors might manipulate legitimate content to stage false flags, creating the mistaken impression that authentic material is AI-generated.
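The digital-fingerprinting idea mentioned above can be sketched in a few lines. In the simplest form, a cryptographic hash of a file's bytes serves as its fingerprint: identical bytes always produce the same value, while any alteration, however small, produces a completely different one. This is only an illustrative sketch, not Microsoft's actual scheme; production systems typically use perceptual hashes that survive re-encoding and resizing, which a plain byte hash does not.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence.

    Illustrative only: a real fingerprinting pipeline would use a
    perceptual hash so that harmless transforms (re-compression,
    resizing) don't change the fingerprint.
    """
    return hashlib.sha256(content).hexdigest()

# Hypothetical content standing in for image or video bytes.
original = b"official press photo, released 2024-05-01"
republished = b"official press photo, released 2024-05-01"
altered = b"official press photo, released 2024-05-02"

# An exact copy matches; a one-byte edit does not.
print(fingerprint(original) == fingerprint(republished))  # True
print(fingerprint(original) == fingerprint(altered))      # False
```

A verification service could publish fingerprints of authentic releases, letting platforms flag files whose fingerprints don't match any known original, though, as the newsletter notes, this says nothing about whether the original content was itself accurate.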