This is how the promise of digital technology is fulfilled. From Tim Bray:
> Leica, the German maker of elegant but absurdly-expensive cameras, just released the M11-P. The most interesting thing about it is a capability whose marketing name is “Content Credentials”, based on a tech standard called C2PA (Coalition for Content Provenance and Authenticity), a project of the Content Authenticity Initiative. The camera puts a digital watermark on its pictures, which might turn out to be extremely valuable in this era of disinformation and sketchy AI.
Can we ever really ‘prove’ that a photograph is ‘real’, by which I presume we mean that the pixels in front of our eyes are what we would have seen had we actually been present for the moment of the photo’s capture? A wise person would assume no and be forever skeptical. However, C2PA may be one of the better solutions for providing the context that tells us whether a given photograph is trustworthy.
When you visit a website you may see a little lock icon (it might be green) in your web browser’s omni-bar, the place where the www.example.com address sits. This is an indication of two things:
- The website has presented a digital certificate, which asserts identity
- The certificate is trusted by your computer, hence the green color
The identity bit is a claim that the website you just loaded really is www.example.com, and the trust bit is part of the broader internet’s infrastructure: the authority who issued this certificate is known and trusted. (To put it simply, some big companies are trusted certificate issuers, and your computer comes with a list pre-installed.)
Now bring that same infrastructure to photographs. C2PA spells out that the physical camera has a unique certificate, and it uses the private key associated with that certificate to digitally “sign” each photo it takes. Just as we can use the green lock icon in our web browser to believe that we reached the right website, we can use this digital signature in a photograph to know that it was taken by a specific camera.
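The signing idea above can be sketched in a few lines. This is a minimal illustration using the third-party `cryptography` library and Ed25519 keys; the key names and workflow are my own stand-ins, not the actual C2PA format, which uses X.509 certificates and a defined manifest structure.

```python
# Minimal sketch of camera-side signing and reader-side verification.
# Illustrative only: real C2PA embeds signed manifests in the image file.
from cryptography.hazmat.primitives.asymmetric import ed25519

# The camera holds a private key; its certificate would carry the
# matching public key.
camera_key = ed25519.Ed25519PrivateKey.generate()
camera_public = camera_key.public_key()

photo_bytes = b"...raw image data..."

# At capture time, the camera signs the image bytes.
signature = camera_key.sign(photo_bytes)

# Anyone holding the camera's public key can check the signature;
# verify() raises InvalidSignature if the bytes were altered.
camera_public.verify(signature, photo_bytes)
```

If even a single byte of the image changes after capture, verification fails, which is exactly the property that makes the signature useful as evidence of what the camera originally recorded.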
Why is that powerful? Consider that if a bad actor takes over a network, they could intercept traffic to www.example.com and return something else, something malicious. But that bad actor cannot fake the www.example.com certificate¹. The bad actor could supply some other certificate, but since they are not a trusted authority our browser won’t display that green lock icon, and all modern web browsers will present the user with a warning saying “this website may not be who you think it is - proceed?”
With C2PA, the same can be true for a given photograph. A “certificate” can be presented alongside the photo so that you know it was captured by a specific camera. Alone, that’s of little value. But we can build authorities of trusted cameras, such as cameras which belong to specific journalists. Trust in a specific camera’s certificate could be revoked at any time, say if a camera was lost or stolen, and any images generated using that camera and its certificate after such an event could be “dis-trusted”.
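A toy version of such an authority makes the revocation idea concrete. Everything here is hypothetical (the class, the camera ID, the timestamps); a real deployment would lean on X.509 certificates and standard revocation infrastructure rather than an in-memory registry.

```python
# Toy trust registry for camera certificates, with revocation.
from datetime import datetime, timedelta, timezone

class CameraTrustList:
    def __init__(self):
        self._trusted = {}     # camera_id -> public key
        self._revoked_at = {}  # camera_id -> when trust was revoked

    def enroll(self, camera_id, public_key):
        self._trusted[camera_id] = public_key

    def revoke(self, camera_id, when):
        # e.g. the camera was reported lost or stolen at `when`
        self._revoked_at[camera_id] = when

    def is_trusted(self, camera_id, signed_at):
        if camera_id not in self._trusted:
            return False
        revoked = self._revoked_at.get(camera_id)
        # Photos signed before the revocation remain trusted;
        # anything signed afterwards is dis-trusted.
        return revoked is None or signed_at < revoked

trust = CameraTrustList()
trust.enroll("leica-m11p-001", "<public key>")
stolen_at = datetime.now(timezone.utc)
trust.revoke("leica-m11p-001", stolen_at)

trust.is_trusted("leica-m11p-001", stolen_at - timedelta(days=1))     # True
trust.is_trusted("leica-m11p-001", stolen_at + timedelta(minutes=5))  # False
```

Note the design choice: revocation is dated, not total, so photos signed before the camera went missing keep their standing.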
All of this is possible because of the internet, because of public-private key cryptography, and because a photo is more than just a photo. It’s not only a collection of pixels, it can carry all of this metadata (the certificate, the chain of trust info, etc).
Which is very, very cool. And the promise of what technology can and should do for our lives.
There are also ways to show a “chain of provenance” for a given photo. This would tell a viewer the history of a photo, for example:
- The photo was taken by a camera owned by Matt Edwards on a specific date
- The photo was edited in Pixelmator Pro two weeks later
- The photo was resized and uploaded 5 minutes after that
All of this information would be signed by various certificates and embedded in the photo’s metadata. All of this data is optional, and there’s no guarantee it would exist. But for photos which had this info, it could help us as a society trust a photo which shows something extraordinary, and distrust a photo which seems “too good to be true.”
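The chain-of-provenance idea can be sketched with plain hashing: each entry covers a hash of the previous one, so no step in the history can be silently rewritten. This mirrors the spirit of C2PA manifests but not their actual format, and the actor and action names below are illustrative.

```python
# Sketch of a tamper-evident provenance chain for a photo.
import hashlib
import json

def entry_hash(entry):
    # Stable hash of one provenance entry.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_step(chain, actor, action):
    # Each new entry records the hash of the entry before it.
    prev = entry_hash(chain[-1]) if chain else None
    chain.append({"actor": actor, "action": action, "prev": prev})

def chain_valid(chain):
    # Recompute each link; any edit to an earlier entry breaks the chain.
    return all(
        chain[i]["prev"] == entry_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_step(chain, "Leica M11-P #001", "captured")
append_step(chain, "Pixelmator Pro", "edited")
append_step(chain, "uploader", "resized and uploaded")

chain_valid(chain)  # True; altering any earlier entry flips this to False
```

In the real standard each entry would also carry a signature from the tool that performed the step, which is what ties the history back to the certificates discussed above.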
It won’t prove a lie, but it’ll help assure us of what could be true.
¹ We assume the bad actor is not a trusted certificate authority. This is a safe assumption, today, in the year 2023. I hope it never changes.