2025-10-02 03:00:00
2025-10-02 03:00:00
At a recent online conference, I said that we can “change the global Internet conversation for the better, by making it harder for liars to lie and easier for truth-tellers to be believed.” I was talking about media — images, video, audio. We can make it much easier to tell when media is faked and when it’s real. There’s work to do, but it’s straightforward stuff and we could get there soon. Here’s how.
This is a vision of what success looks like.
Nadia lives in LA. She has a popular social-media account with a reputation for stylish pictures of urban life. She’s not terribly political, just a talented street photog. Her handle is “@CoolTonesLA@hotpix.example”.
She’s in Venice Beach the afternoon of Sunday August 9, 2026, when federal agents take down a vendor selling cheap Asian ladies’ wear. She gets a great shot of an enforcer carrying away an armful of pretty dresses while two more bend the merchant over his countertop. None of the agents in the picture are in uniform, all are masked.
She signs into her “CoolTonesLA” account on hotpix.example and drafts a post saying “Feds raid Venice Beach”. When she uploads the picture, there’s a pop-up asking “Sign this image?” Nadia knows what this means, and selects “Yes”. By midnight her post has gone viral.
As a result of Nadia agreeing to “sign” the image, anyone who sees her post, whether in a browser or mobile app, also sees that little “Cr” badge in the photo’s top right corner. When they mouse over it, a little pop-up says something like:
Signature is valid.
Media was posted by @CoolTonesLA
on hotpix.example
at 5:40 PM PDT, August 9th, 2026.
The links point to Nadia’s feed and her instance’s home page. Following them can give the reader a feeling for what kind of person she is, the nature of her server, and the quality of her work. Most people are inclined to believe the photo is real.
Marco is a troublemaker. He grabs Nadia’s photo and posts it to his social-media account with the caption “Criminal illegals terrorize local business. Lock ’em up!” He’s not technical and doesn’t strip the metadata. Since the picture is already signed, he doesn’t get the “Sign this picture?” prompt. Anyone who sees his post will see the “Cr” badge and mousing over it makes it pretty clear that it isn’t what he says it is. Commenters gleefully point this out. By the time Marco takes the post down, his credibility is damaged.
Maggie is a more technical troublemaker. She sees Marco’s post and likes it, strips the picture’s metadata, and reposts it. When she gets the “Sign this picture?” prompt, she says “No”, so it doesn’t get a “Cr” badge. Hostile commenters accuse her of posting a fake, saying “LOL badge-free zone”. It is less likely that her post will go viral.
Miko isn’t political but thinks the photo would be more dramatic if she Photoshopped it to add a harsh dystopian lighting effect. When she reposts her version, the “Cr” badge won’t be there because the pixels have changed.
Morris follows Maggie. He grabs the stripped picture and, when he posts it, says “Yes” to signing. In his post the image will show up with the “Cr” and credit it to him, with a “posted” timestamp later than Nadia’s initial post. Now, the picture’s believability will depend on Morris’s. Does he have a credible track record? Also, there’s a chance that someone will notice what Morris did and point out that he stole Nadia’s picture.
(In fact, I wouldn’t be surprised if people ran programs against the social-network firehose looking for media signed by more than one account, which would be easy to detect.)
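To show how simple that detection could be, here’s a little Go sketch. It assumes you can already pull a content hash and the signing account out of each item’s manifest as it streams past; the function name, the hash value, and Morris’s server are all made up for illustration.
package main

import "fmt"

// firstSigner remembers, for each signed content hash, the first account
// seen signing it.
var firstSigner = map[string]string{}

// checkSigner returns a warning when mediaHash has already been signed by a
// different account. mediaHash would be the hash the manifest binds to the
// pixels; account is the signing identity pulled from the manifest.
func checkSigner(mediaHash, account string) string {
	prev, seen := firstSigner[mediaHash]
	if !seen {
		firstSigner[mediaHash] = account
		return ""
	}
	if prev != account {
		return fmt.Sprintf("%s re-signed media first signed by %s", account, prev)
	}
	return ""
}

func main() {
	// Hypothetical stream: Nadia signs first, Morris signs the same bits later.
	checkSigner("sha256:abc123", "@CoolTonesLA@hotpix.example")
	if w := checkSigner("sha256:abc123", "@Morris@somewhere.example"); w != "" {
		fmt.Println(w)
	}
}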
That’s the Nadia story.
The rest of this piece explains in some detail how the Nadia story can be supported by technology that already exists, with a few adjustments. If jargon like “PKIX” and “TLS” and “Nginx” is foreign to you, you’re unlikely to enjoy the following. Before you go, please consider: Do you think making the Nadia story come true would be a good investment?
I’m not a really deep expert on all the bits and pieces, so it’s possible that I’ve got something wrong. Therefore, this blog piece will be a living document in that I’ll correct any convincingly-reported errors, with the goal that it accurately describes a realistic technical roadmap to the Nadia story.
By this time I’ve posted enough times about C2PA that I’m going to assume people know what it is and how it works. For my long, thorough explainer, see On C2PA. Or, check out the Content Credentials Web site.
Tl;dr: C2PA is a list of assertions about a media object, stored in its metadata, with a digital signature that includes the assertions and the bits of the picture or video.
This discussion assumes the use of C2PA and also an in-progress specification from the Creator Assertions Working Group (CAWG) called Identity Assertion.
Not all the pieces are quite ready to support the Nadia story. But there’s a clear path forward to closing each gap.
C2PA and CAWG specify many assertions that you can make about a piece of media. For now let’s focus just on what we need for provenance. When the media is uploaded to a social-network service, there are two facts that the server knows, absolutely and unambiguously: Who uploaded it (because they’ve had to sign in) and when it happened.
In the current state of the specification drafts, “Who” is the cawg.social_media property from the draft Identity Assertion spec, section 8.1.2.5.1, and “When” is the c2pa.time-stamp property from the C2PA specification, section 18.17.3.
I think these two are all you need for a big improvement in social network media provenance, so let’s stick with them.
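To make the Who/When pair concrete, here’s a loose Go sketch of the two facts the server could record at upload time. The struct and field names are my own illustration; the real cawg.social_media and c2pa.time-stamp assertions are structured and serialized quite differently.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// WhoWhen is an illustrative stand-in for the two provenance facts the
// server knows for sure at upload time. Real manifests express these as a
// cawg.social_media identity assertion and a c2pa.time-stamp.
type WhoWhen struct {
	Account  string    `json:"account"`  // who uploaded it
	Profile  string    `json:"profile"`  // link back to the account's page
	Uploaded time.Time `json:"uploaded"` // when the server received it
}

func main() {
	pdt := time.FixedZone("PDT", -7*60*60)
	ww := WhoWhen{
		Account:  "@CoolTonesLA@hotpix.example",
		Profile:  "https://hotpix.example/@CoolTonesLA",
		Uploaded: time.Date(2026, time.August, 9, 17, 40, 0, 0, pdt),
	}
	b, _ := json.MarshalIndent(ww, "", "  ")
	fmt.Println(string(b))
}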
Let’s go back to the Nadia story.
It needs the Who/When assertions to be digitally signed in a way that will convince a tech-savvy human or a PKIX validation library that the signature could only have been applied by the server at hotpix.example.
The C2PA people have been thinking about this. They are working on a Verified News Publishers List, to be maintained and managed by, uh, that’s not clear to me. The idea is that C2PA software would, when validating a digital signature, require that the PKIX cert is one of those on the Publishers List.
This isn’t going to work for a decentralized social network, which has tens of thousands of independent servers run by co-ops, academic departments, municipal governments, or just a gaggle of friends who kick in on Patreon. And anyhow, Fediverse instances don’t claim to be “News Publishers”, verified or not.
So what key can hotpix.example sign with?
Fortunately, there’s already a keypair and PKIX certificate in place on every social-media server, the one it uses to support TLS connections. The one at tbray.org, that’s being used right now to protect your interaction with this blog, is in /etc/letsencrypt/live/ and the private key is obviously not generally readable. That cert will contain the public key corresponding to the host’s private key, the cert’s ancestry, and the host name. It’s all that any PKIX library needs to verify that yes, this could only have been signed by hotpix.example. However, there will be objections.
Objection: “hotpix.example is not a Verified News Publisher!” True enough, the C2PA validation libraries would have to accept X.509 certs. Maybe they do already? Maybe this requires an extension of the current specs? In any case, the software’s all open-source, could be forked if necessary.
Objection: “That cert was issued for the purpose of encrypting TLS connections, not for some weird photo provenance application. Look at the OID!” OK, but seriously, who cares? The math does what the math does, and it works.
Objection: “I have to be super-careful about protecting my private key and I don’t want to give a copy to the hippies running the social-media server.” I sympathize but, in most cases, social media is all that server’s doing.
Having said that, it would be great if there were extensions to Nginx and Apache httpd where you could request that they sign the assertions for you. Neither would be rocket science.
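Here’s a minimal Go sketch of the signing itself, assuming the standard Let’s Encrypt file layout and my hypothetical hotpix.example host. It signs a blob of assertion bytes with the host’s existing TLS keypair and leaves out the COSE and manifest packaging that real C2PA signing requires.
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/sha256"
	"crypto/tls"
	"fmt"
	"log"
)

// signAssertions signs assertion bytes with the same keypair the host
// already uses for TLS. Real C2PA signing wraps the result in a COSE
// structure inside the manifest; this only shows that the key can do it.
func signAssertions(assertions []byte) ([]byte, error) {
	pair, err := tls.LoadX509KeyPair(
		"/etc/letsencrypt/live/hotpix.example/fullchain.pem", // hypothetical host
		"/etc/letsencrypt/live/hotpix.example/privkey.pem",
	)
	if err != nil {
		return nil, err
	}
	signer, ok := pair.PrivateKey.(crypto.Signer)
	if !ok {
		return nil, fmt.Errorf("private key doesn't implement crypto.Signer")
	}
	digest := sha256.Sum256(assertions)
	return signer.Sign(rand.Reader, digest[:], crypto.SHA256)
}

func main() {
	sig, err := signAssertions([]byte(`{"account":"@CoolTonesLA@hotpix.example","uploaded":"2026-08-09T17:40:00-07:00"}`))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("signed: %d bytes\n", len(sig))
}
An Nginx or httpd extension that did this for you would amount to roughly the same few calls, hung off the upload path.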
OK, so we sign Nadia’s Who/When assertions and her photo’s pixels with our host’s TLS key, and ship it off into the world. What’s next?
Verifying these assertions, in a Web or mobile app, is going to require a C2PA library to pick apart the assertions and a PKIX library for the signature check.
We already have c2pa-rs, Rust code with MIT and Apache licenses. Rust libraries can be called from some other programming languages but in the normal course of affairs I’d expect there soon to be native implementations. Once again, all these technologies are old as dirt, absolutely no rocket science required.
How about validating the signatures? I was initially puzzled about this one because, as a programmer, certs only come into the picture when I do something like http.Get() and the library takes care of all that stuff. So I can’t speak from experience.
But I think the infrastructure is there. Here’s a Curl blogger praising Apple SecTrust. Over on Android, there’s X509ExtendedTrustManager. I assume Windows has something. And if all else fails, you could just download a trusted-roots file from the Curl or Android projects and refresh it every week or two.
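For what it’s worth, here’s roughly what the chain check looks like in Go against the operating system’s trust store; a sketch which assumes the leaf and intermediate certificates have already been extracted from the media’s C2PA manifest.
package main

import (
	"crypto/x509"
	"fmt"
)

// verifySigner checks that leaf chains up to a trusted root and was issued
// for the claimed host name. The certificates are assumed to have been
// extracted from the C2PA manifest by whatever parsed it.
func verifySigner(leaf *x509.Certificate, intermediates []*x509.Certificate, host string) error {
	pool := x509.NewCertPool()
	for _, c := range intermediates {
		pool.AddCert(c)
	}
	_, err := leaf.Verify(x509.VerifyOptions{
		DNSName:       host, // e.g. "hotpix.example"
		Intermediates: pool,
		// Roots left nil means: use the system trust store.
	})
	return err
}

func main() {
	// In real code verifySigner would be handed certificates parsed out of
	// the manifest; there's no manifest in this sketch.
	fmt.Println("see verifySigner above")
}
Conveniently, Verify’s default extended-key-usage check is serverAuth, which is exactly what a TLS cert carries, so the OID objection above doesn’t even get in the way here.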
This feels a little too easy, something that could be done in months not years. Perhaps I’m oversimplifying. Having said that, I think the most important thing to get right is the scenarios, so we know what effect we want to achieve.
What do you think of the Nadia story?
2025-09-27 03:00:00
I’m going to take a big chance here and make predictions about GenAI’s future. Yeah, I know, you’re feeling overloaded on this stuff and me too, but it seems to have sucked the air out of all the other conversations. I would so like to return to arguing about Functional Programming or Free Trade. This is risky and there’s a pretty good chance that I’m completely wrong. But I’ll try to entertain while prognosticating.
That’s the title of a Cory Doctorow essay, which I think is spot on. I’m pretty sure anyone who’s read even this far would enjoy it and it’s not long, and it’d help understand this. Go have a look, I’ll wait.
I have one good and one excellent argument to support this prediction. Good first: While my understanding of LLMs is not that deep, it doesn’t have to be to understand that it’s really difficult (as in, we don’t know how) to connect the model’s machinations to our underlying reality, so as to fact-check.
The above is my non-expert intuition at work. But then there’s Why Language Models Hallucinate, three authors from OpenAI and one from Georgia Tech, which seems to show that hallucinations are an inevitable result of current training practices.
And here’s the excellent argument: If there were a way to eliminate the hallucinations, somebody already would have. An army of smart, experienced people, backed by effectively infinite funds, have been hunting this white whale for years now without much success. My conclusion is, don’t hold your breath waiting.
Maybe there’ll be a surprise breakthrough next Tuesday. Could happen, but I’d be really surprised.
(When it comes to LLMs and code, the picture is different; see below.)
The central goal of GenAI is the elimination of tens of millions of knowledge workers. That’s the only path to the profits that can cover the costs of training and running those models.
To support this scenario the AI has to run in Cory’s “reverse centaur” mode, where the models do the work and the humans tend them. This allows the production of several times more work per human, generally of lower quality, with inevitable hallucinations. There are two problems here: First, that at least some of the output is workslop, whose cleanup costs eat away at the productivity wins. Second, that the lower quality hurts your customers and your business goes downhill.
I just don’t see it. Yeah, I know, every CEO is being told that this will work and they’ll be heroes to their shareholders. But the data we have so far keeps refusing to support those productivity claims.
OK then, remove the “reverse” and run in centaur mode, where smart humans use AI tools judiciously to improve productivity and quality. Which might be a good idea for some people in some jobs. But in that scenario neither the output boost nor the quality gain get you to where you can dismiss enough millions of knowledge workers to afford the AI bills.
Back to Cory, with The real (economic) AI apocalypse is nigh. It’s good, well worth reading, but at this point pretty well conventional wisdom as seen by everyone who isn’t either peddling a GenAI product or (especially) fundraising to build one.
To pile on a bit, I’m seeing things every week like for example this: The AI boom is unsustainable unless tech spending goes ‘parabolic,’ Deutsche Bank warns: ‘This is highly unlikely’.
The aggregate investment is ludicrous. The only people who are actually making money are the ones selling the gold-mining equipment to the peddlers. Like they say, “If something cannot go on forever, it will stop.” Where by “forever”, in the case of GenAI, I mean “sometime in 2026, probably”.
Cory forecasts existential disaster, but I’m less worried. Those most hurt when the bubble collapses will be the investing classes who, generally speaking, can afford it. Yeah, if the S&P 500 drops by a third, the screaming will shake the heavens, but I honestly don’t see it hitting as hard as 2008 and don’t see how the big-picture economy falls apart. That work that the genAI shills say would be automated away is still gonna have to be done, right?
Here’s where I get in trouble, because a big chunk of my professional peers, including people I admire, see GenAI-boosted coding as pure poison: “In a kind of nihilistic symmetry, their dream of the perfect slave machine drains the life of those who use it as well as those who turn the gears.” (The title of that essay is “I Am An AI Hater.”)
I’m not a hater. I argued above that LLMs generating human discourse have no way to check their output for consistency with reality. But if it’s code, “reality” is approximated by what will compile and build and pass the tests. The agent-based systems iteratively generate code, reality-check it, and don’t show it to you until it passes. One consequence is that the quality of help you get from the model should depend on the quality of your test framework. Which warms my testing-fanatic heart.
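Here’s a cartoon of that loop in Go; askModel is a stand-in for whatever LLM call the agent actually makes (a placeholder, not a real API), and the task string is invented.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// askModel stands in for the agent's LLM call: given the task and the last
// round of build/test output, it returns a new candidate source file.
func askModel(task, feedback string) string {
	return "// candidate code would go here\n"
}

// reality runs the project's tests and reports what failed. Compiling and
// passing the tests is the closest thing generated code gets to "reality".
func reality() (bool, string) {
	out, err := exec.Command("go", "test", "./...").CombinedOutput()
	return err == nil, string(out)
}

func main() {
	task := "depreciate the AMOUNT values and write NAME plus result to a CSV"
	feedback := ""
	for attempt := 0; attempt < 5; attempt++ { // bounded retries
		candidate := askModel(task, feedback)
		if err := os.WriteFile("generated.go", []byte(candidate), 0o644); err != nil {
			fmt.Println(err)
			return
		}
		ok, output := reality()
		if ok {
			fmt.Println("tests pass; show the human")
			return
		}
		feedback = output // feed the failures back to the model
	}
	fmt.Println("gave up; a human has to look")
}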
So, my first specific prediction: Generated code will be a routine thing in the toolkit, going forward from here. It’s pretty obvious that LLMs are better at predicting code sequences than human language.
In Revenge of the junior developer, Steve Yegge says, more or less, “Resistance is useless. You will be assimilated.” But he’s wrong; there are going to be places where we put the models to work, and others where we won’t. We don’t know which places those are and aren’t, but I have (weaker) predictions; let’s be honest and just say “guesses”.
Where I suspect generated code will likely appear:
Application logic: “Depreciate the values in the AMOUNT field of the INSTALLED table forward ten years and write the NAME field and the depreciated value into a CSV.” Or “Look at JIRA ticket 248975 and create a fix.”
(By the way, this is a high proportion of what actual real-world programmers do every day.)
Glorified StackOverflow-style lookups like I did in My First GenAI Code.
Drafting code that needs to run against interfaces too big and complex to hold in your head, like for example the Android and AWS APIs (“When I shake the phone, grab the location from GPS and drop it in the INCOMING S3 bucket”). Or CSS (“Render that against a faded indigo background flush right, and hold it steady while scrolling so the text slides around it”).
SQL. This feels like a no-brainer. So much klunky syntax and so many moving pieces.
Where I suspect LLM output won’t help much:
Interaction design. I mean, c’mon, it requires predicting how humans understand and behave.
Low-level infrastructure code, the kind I’ve spent my whole life on, where you care a whole lot about conserving memory and finding sublinear algorithms and shrinking code paths and having good benchmarks.
Here are areas where I don’t have a prediction but would like to know whether and how LLMs fit in.
Help with testing: Writing unit and integration tests, keeping an eye on coverage, creating a bunch of BDD tests from a verbal description of what a function is going to do.
Infrastructure as code: CI/CD, Terraform and peers, all that stuff. There are so many ways to get it wrong.
Bad old-school concurrency that uses explicit mutexes and java.lang.Thread, where you have to understand language memory models and suchlike.
Why all the anger, then? Because it’s being sold by a panoply of grifters and chancers and financial engineers who know that the world where their dreams come true would be generally shitty, and they don’t care.
(Not to mention the environmental costs and the poor folk in the poor countries where the QA and safety work is outsourced.)
Final prediction: After the air goes out of the assholes’ bubble, we won’t have to live in the world they imagine. Thank goodness.
2025-09-19 03:00:00
This is the blog version of my talk at the IPTC’s online Photo Metadata Conference. Its title is the one the conference organizers slapped on my session without asking; I was initially going to object but then I thought of the big guitar riff in Dire Straits’ Private Investigations and snickered. If you want, instead of reading, to watch me present, that’s on YouTube. Here we go.
Hi all, thanks for having me. Today I represent… nobody, officially. I’m not on any of the committees nor am I an employee of any of the providers. But I’m a photographer and software developer and social-media activist and have written a lot about C2PA. So under all those hats this is a subject I care about.
Also, I posted this on Twitter back in 2017.
I’m not claiming that I was the first with this idea, but I’ve been thinking about the issues for quite a while.
Enough self-introduction. Today I’m going to look at C2PA in practice right now in 2025. Then I’m going to talk about what I think it’s for. Let’s start with a picture.
This smaller version doesn’t have C2PA, but if you click on it, the larger version you get does.
Photo credit: Rob Pike
I should start by saying that a few of the things that I’m going to show you are, umm, broken. But I’m still a C2PA fan. Bear in mind that at this point everything is beta or preview or whatever, at best v1.0. I think we’re in glass-half-full mode.
This photo was entirely created and processed with off-the-shelf commercial products and has content credentials, and let me say that I had a freaking hard time finding such a photo. There are very few Content Credentials out there on the Internet. That’s because nearly every online photo is delivered either via social media or by professional publishing software. In both cases, the metadata is routinely stripped; bye-bye C2PA. So one of the big jobs facing us in putting Content Credentials to work is to stop publishers from deleting them.
Of course, that’s complicated. Professional publishers probably want the Content Credentials left in place, but on social media privacy is a key issue and stripping the metadata is arguably a good default choice. So there are a lot of policy discussions to be had ahead of the software work.
Anyhow, let’s look at the C2PA.
I open up that picture in Chrome and there are little “Cr” glyphs at both the top left and top right corners; that’s because I’ve installed multiple C2PA Chrome plug-ins. Turns out these seem to only be available for Chrome, which is irritating. Anyhow, I’ve clicked on the one in the top left.
That’s a little disappointing. It says the credentials were recorded by Lightroom, and gives my name, but I think it’s hiding way more than it’s revealing. Maybe the one on the top right will be more informative?
More or less the same info, in a slightly richer presentation. But both displays have an “inspect” button and both do the same thing. Let’s click it!
This is the Adobe Content Credentials inspector and it’s broken. That’s disappointing. Having said that, I was in a Discord chat with a senior Adobe person this morning and they’re aware of the problem.
But anyhow, I can drag and drop the picture like they say.
Much much better. It turns out that this picture was originally taken with a Leica M11-P. The photographer is a famous software guy named Rob Pike, who follows me on Mastodon and wanted to help out.
So, thanks Rob, and thanks also to the Leica store in Sydney, Australia, who loaned him the M11. He hasn’t told me how he arranged that, but I’m curious.
I edited it in Lightroom, and if you look close, you can see that I cropped it down and brightened it up. Let’s zoom in on the content credentials for the Leica image.
There’s the camera model, the capture date (which is wrong because Rob didn’t get around to setting the camera’s date before he took the picture), the additional hardware (R-Adapter-M), the dimensions, ISO, focal length, and shutter speed.
Speaking as a photographer, this is kind of cool. There’s a problem in that it’s partly wrong. The focal length isn’t zero, and Rob is pretty sure he didn’t have an adapter on. But Leica is trying to do the right thing and they’ll get there.
Now let’s look at the assertions that were added by Lightroom.
There’s a lot of interesting stuff in here, particularly the provenance. Lightroom lets you manage your identities, using what we call “OAuth flows”, so it can ask Instagram (with my permission) what my Instagram ID is. It goes even further with LinkedIn; it turns out that LinkedIn has an integration with the Clear ID people, the ones who fast-track you at the airport. So I set up a Clear ID, which required photos of my passport, and went through the dance with LinkedIn to link it up, and then with Lightroom so it knew my LinkedIn ID. So to expand, what it’s really saying is: “Adobe says that LinkedIn says that Clear says that the government ID of the person who posted this says that he’s named Timothy Bray”.
I don’t know about you, but this feels like pretty strong provenance medicine to me. I understand that the C2PA committee and the CAWG people are re-working the provenance assertions. To them: Please don’t screw this particular style of provenance up.
Now let’s look at what Lightroom says it did. It may be helpful to know what I in fact did.
Cropped the picture down.
Used Lightroom’s “Dehaze” tool because it looked a little cloudy.
Adjusted the exposure and contrast, and boosted the blacks a bit.
Sharpened it up.
Lightroom knows what I did, and you might wonder how it got from those facts to that relatively content-free description that reads like it was written by lawyers. Anyhow, I’d like to know. Since I’m a computer geek, I used the open-source “c2patool” to dump what the assertions actually are. I apologize if this hurts your eyes.
It turns out that there is way more data in those files than the inspector shows. For example, the Leica claims included 29 EXIF values, here are three I selected more or less at random:
"exif:ApertureValue": "2.79917",
"exif:BitsPerSample": "16",
"exif:BodySerialNumber": "6006238",
Some of these are interesting: In the Leica claims, the serial number. I could see that as a useful provenance claim. Or as a potentially lethal privacy risk. Hmmm.
{
  "action": "c2pa.color_adjustments",
  "parameters": {
    "com.adobe.acr.value": "60",
    "com.adobe.acr": "Exposure2012"
  }
},
{
  "action": "c2pa.color_adjustments",
  "parameters": {
    "com.adobe.acr": "Sharpness",
    "com.adobe.acr.value": "52"
  }
},
{
  "action": "c2pa.cropped",
  "parameters": {
    "com.adobe.acr.value": "Rotated Crop",
    "com.adobe.acr": "Crop"
  }
}
And in the Lightroom section, it actually shows exactly what I did, see the sharpness and exposure values.
My feeling is that the inspector is doing either too much or too little. At the minimal end you could just say “hand processed? Yes/No” and “genAI? Yes/No”. For a photo professional, they might like to drill down and see what I actually did. I don’t see who would find the existing presentation useful. There’s clearly work to do in this space.
Oh wait, did I just say “AI”? Yes, yes I did. Let’s look at another picture, in this case a lousy picture.
I was out for a walk and thought the building behind the tree was interesting. I was disappointed when I pulled it up on the screen, but I still liked the shape and decided to try and save it.
So I used Lightroom’s “Select Sky” to recover its color, and “Select Subject” to pull the building details out of the shadows. Both of these Lightroom features, which are pretty magic and I use all the time, are billed as being AI-based. I believe it.
Let’s look at what the C2PA discloses.
Having said all that, if you look at the C2PA (or at the data behind it) Lightroom discloses only “Color or Exposure”, “Cropping”, and “Drawing” edits. Nothing about AI.
Hmm. Is that OK? I personally think it is, and that it highlights the distinction between what I’d call “automation” AI and Generative AI. I mean, selecting the sky and subject is something that a skilled Photoshop user could accomplish with a lot of tinkering; the software is just speeding things up. But I don’t know, others might disagree.
Well, how about that generative AI?
Fails c2patool validation; “DigitalSourceType” is trainedAlgorithmicMedia
“DigitalSourceType” is compositeWithTrainedAlgorithmicMedia
The turtle is 100% synthetic, from ChatGPT, and on the right is a Pixel 10 shot on which I did a few edits including “Magic Eraser”. Both of these came with Content Credentials; ChatGPT’s is actually invalid, but on the glass-half-full front, the Pixel 10’s were also invalid up until a few days ago, then they fixed it. So this stuff does get fixed.
I’m happy about the consistent use of C2PA terminology, they are clearly marking the images as genAI-involved.
I’m about done talking about the state of the Content Credentials art generally but I should probably talk about this device.
Because it marks the arrival of Content Credentials on the mass consumer market. Nobody knows how many Pixels Google actually sells but I guarantee it’s a lot more than Leica sells M11’s. And since Samsung tends to follow Google pretty closely, we’re heading for tens then hundreds of millions of C2PA-generating mobile devices. I wonder when Apple will climb on board?
Let’s have a look at that C2PA.
This view of the C2PA is from the Google Photos app. It’s very limited. In particular, there is nothing in there to support provenance. In fact, it’s the opposite, Google is bending over backward to avoid anything that could be interpreted as breaking the privacy contract by sharing information about the user.
Let’s pull back the covers and dig a little deeper. Here are a few notes:
The device is identified just as “Pixel camera”. There are lots of different kinds of those!
The C2PA inclusion is Not optional!
DigitalSourceType: computationalCapture (if no genAI)
Timestamp is “untrusted”
The C2PA not being optional removes a lot of UI issues but still, well, I’m not smart enough to have fully thought through the implications. That Digital Source Type looks good and appropriate, and the untrusted-ness of the timestamp is interesting.
You notice that my full-workflow examples featured a Leica rather than the Pixel, and that’s because the toolchain is currently broken for me: Neither Lightroom nor Photoshop can handle the P10 C2PA. I’ll skip the details, except to say that Adobe is aware of the bug, a version mismatch, and they say they’re working on it.
Before we leave the Pixel 10, I should say that there are plenty of alternate camera apps in Android and iOS, some quite good, and it’d be perfectly possible for them to ship much richer C2PA, notably including provenance, location, and so on.
I guess that concludes my look at the current state of the Content Credentials art. Now I’d like to talk about what Content Credentials are for. To start with, I think it’d be helpful to sort the assertions into three baskets.
Capture, that’s like that Leica EXIF stuff we showed earlier. What kind of camera and lens, what the shooting parameters were. Processing, that’s like the Lightroom report: How was the image manipulated, and by what software. Provenance: Which person or organization produced this?
But I think this picture has an important oversimplification, let me fix that.
Processing is logically where you’d disclose the presence of GenAI. And in terms of what people practically care about, that’s super important and deserves special consideration.
Now I’m going to leave the realm of facts and give you opinions. As for the Capture data there on the left… who cares? Really, I’m trying to imagine a scenario in which anyone cares about the camera or lens or F stop. I guess there’s an exception if you want to prove that the photo was taken by one of Annie Leibovitz’s cameras, but that’s really provenance.
Let’s think about a professional publication scenario. They get photos from photographers, employees or agencies or whatever. They might want to be really sure that the photo was from the photographer and not an imposter. So having C2PA provenance would be nice. Then when the publisher gets photos, they do a routine check of the provenance and if it doesn’t check out, they don’t run the picture without a close look first.
They also probably want to check for the “is there genAI?” indicator in the C2PA, and, well, I don’t know what they might do, but I’m pretty sure they’d want to know.
That same publisher might want to equip the photos they publish with C2PA, to demonstrate that they are really the ones who chose and provided the media. That assertion should be applied routinely by their content management system. Which should be easy, on the technology side anyhow.
So from the point of view of a professional publisher, provenance matters, and being careful about GenAI matters, and in the C2PA domain, I think that’s all that really matters.
Now let’s turn to Social Media, which is the source of most of the images that most people see most days. Today, all the networks strip all the photo metadata, and that decision involves a lot of complicated privacy and intellectual-property thinking. But there is one important FACT that they know: For any new piece of media, they know which account uploaded the damn thing, because that account owner had to log in to do it. So I think it’s a no-brainer that IF THE USER WISHES, they can have a Content Credentials assertion in the photo saying “Initially uploaded by Tim Bray at LinkedIn” or whoever at wherever.
What we’d like to achieve is that if you see some shocking or controversial media, you’d really want to know who originally posted it before you decided whether you believed it, and if Content Credentials are absent, that’s a big red flag. And if the picture is of the current situation in Gaza, your reaction might be different depending on whether it was originally from an Israeli military social-media account, or the Popular Front for the Liberation of Palestine, or by the BBC, or by [email protected].
Anyhow, here’s how I see it:
So for me, it’s the P and A in C2PA that matter – provenance and authenticity. I think the technology has the potential to change the global Internet conversation for the better, by making it harder for liars to lie and easier for truth-tellers to be believed. I think the first steps that have been taken so far are broadly correct and the path forward is reasonably clear. All the little things that are broken, we can fix ’em.
And there aren’t that many things that matter more than promoting truth and discouraging lies.
And that’s all, folks.
2025-09-14 03:00:00
Only a few more pictures to share from our vacation, which I’ll wrap up in conventional tourism advice.
It’s mostly about the oceanfront, and what you can see from it.
I recommend all of the following.
Schoolhouse Brewery in Windsor, NS; nice space, decent food, the Vice Principal is a good IPA. Maybe the beer that I enjoyed most was “Exile on North Street” from Unfiltered Brewing; you might want to follow that link and also check out the URL.
I didn’t love Halifax that much but it has this charming little neighborhood called Hydrostone, where The Brown Hound offered very solid food and beer. We didn’t spend that much time in New Brunswick, but Moncton’s Pump House was cheery and competent; a cool space; I can’t remember which of their IPAs I had, but it was good. The other peak New Brunswick goodness was Adorable Chocolat in Shediac, where everyone was effortlessly bilingual and the pastries just divine. Don’t miss it if you’re anywhere near.
People live by the sea, and swim in it.
Charlottetown’s not that rich in dining options, but we got a really excellent lunch at The Cork & Cast. Maybe our best meal of the trip was at The Wheelhouse, in Digby. Scallops all around; seared is the best option.
Every good tourist spot in the world seems to suffer from increasingly intense and persistent overcrowding, and the Maritimes are no exception. On top of which, they’re thinly populated: fewer than two million souls in three provinces. The biggest city, Halifax, and the entire province of Prince Edward Island are both smaller than individual Vancouver suburbs. It’s not a place for savouring urban flavors.
In Nova Scotia, Halifax has too many cruise ships; stay away from its so-called “farmers market” unless you love cruise culture. Lunenburg is big enough to soak up its waves of visitors and still offer unique visuals.
Overcrowded but has nice bits.
Peggy’s Cove I just can’t recommend; beautiful but jam-packed with cars looking for parking and people risking their lives on the rocks.
These were once defences but now just a pleasant walk.
I do recommend visiting Annapolis Royal; it’s got that great garden and Fort Anne, despite its lengthy and chequered military history, is lovely and peaceful.
In PEI, Charlottetown makes an effort and it has a beautiful basilica, but just isn’t big enough to reward a whole day’s visit.
In NB, Moncton is OK but its biggest tourist attraction is the tide going in and out.
Hopewell Provincial Park, NB. The clifftop trees are exceptional.
The hotels and Airbnbs and VRBOs were OK, mostly. The Harbourview Inn, near Digby, is a charmingly-traditional guest-house. The rooms are OK, but the downstairs is warmly welcoming, drinks available when the host’s there to man the bar, lots of space to sink into a comfy chair and conversation or your laptop. Also the breakfast was solid.
Excited clouds over Lake Ainslie, NS.
But the trip’s lodging highlight was this VRBO called Forest Lake House on Lake Ainslie, the Maritimes’ biggest. Isolated, comfortable, outstanding grounds, your own private forest walk; everything anyone could want. We stopped traveling and had a chill-out day there, enjoying every minute of it.
Lots of people but plenty of room at Cavendish Beach.
We only swam once, at Cavendish Beach in PEI’s Anne of Green Gables territory, very nicely set up. But what looked most appealing to me was Crescent Beach in Lockeport, Nova Scotia; I wish we’d made time to have a swim there.
Turns out all three vacationers had farming or agriculture-adjacent roots. If you care about that stuff, driving around PEI is a treat; the agriculture is super-intensive and, to my eye, pleasingly well-done.
The farmlands extend to the seaside.
But if you have the time, get away from PEI’s farms and head northwest, drive down the coast from Tignish to West Point; that ride is full of colors and sea-fronts that aren’t like anywhere else I’ve seen.
Since it’s the New World there’s plenty of nasty history around the indigenous folk, the Mi'kmaq nation. But you really have to look to find it. We visited the Millbrook Cultural & Heritage Centre in Truro, which is much better than nothing.
You gotta drive; we put 3,742km on a basic rented Kia. The roads are way better taken care of than here out West.
Bye-bye, Maritimes.
We didn’t run across a single human Maritimer who was anything less than friendly and welcoming.
Nice people living along beautiful oceanfronts, plenty good enough for me.
2025-09-02 03:00:00
When someone (like us) comes back from a trip to the Maritimes, they’re apt to have pictures of brightly-colored houses. This post is here to show those colors off, and not just on houses. Plus a camera color conundrum.
On the northwest coast of PEI, probably near Cape Wolfe.
In that picture above, glance at the bit of beach showing left of the little lighthouse. There’s a color story there too.
As it happens, our very first outing on the vacation was to Lunenburg, which features those cheerful houses.
It wasn’t just tourist magnets like Lunenburg; anywhere in the Maritimes you’re apt to see exuberantly-painted residences, a practice I admire. While the Maritimes are a long way from my home in Vancouver, we share a long, dim, grey winter, and any splash of color can help with that Seasonal Affective Disorder.
Also, we recently bought a house and, while we like it, it’s an undistinguished near-grey, so we’re looking for color schemes to steal. Thus I took lots of pictures of bright houses.
A couple years back we painted our cabin a cheery blue based on sampling photos of the shutters on Mykonos. A few neighbors rolled their eyes but nobody’s actually complained.
That’s the other color you have to talk about down east; I mean the color of the soil and sand and rocks. PEI in particular is famous for its red dirt; when you come in on the ferry from Nova Scotia the first thing you notice is the island’s red fringe. I took a million pictures and maybe this is the closest to capturing it.
Not far from that first picture.
One of Nova Scotia’s attractions is the Cabot Trail, a 300km loop around Cape Breton, stretching northeast out into the Atlantic. This one scenic turn-off has you looking at a big, densely-forested mountainside. It’s more chaotic than our West-Coast temperate rain forests, with many tree species jumbled together. The spectrum of greens under shifting clouds was a real treat for the eyes. Here are two of the pictures I came away with. Have a look at them for a moment.
Above is by my Pixel 7, below a modern Fujifilm camera. When I unloaded them on the big outboard screen, I was disappointed with the Fujifilm take, which seemed a little flat and boring; was thinking the Pixel had done better. But then I started feeling uneasy; my memory kept telling me that that mountainside just didn’t include that yellow flavor in the Pixel’s highlights. I mean, those highlights look great, but I’m pretty sure they’re lies.
After a while, I edited the Fujifilm version just a teeny bit, gently bumping Lightroom’s “exposure” and “Vibrance” sliders, and I thought what I got was very close to what I remembered. The Pixel photo is entirely un-touched.
I’m not sure what to think. Mobile-phone cameras in general and the Pixel in particular proudly boast their “computational photography” and “AI” chops and, yeah, the Pixel produced a photo that it’s hard not to like.
And quite a few of the pictures I publish in this space have been adjusted pretty heavily in Lightroom. I stand by my claim that I’m mostly trying to make something that looks like what I saw. But increasingly, I suspect the Pixel is showing colors people like, as opposed to what’s real.
2025-08-31 03:00:00
Nova Scotia and New Brunswick each have plenty of wilderness; PEI not so much. So pictures of bears and cougars and so on would be plausible, as would marine mammals. But no. Herewith, from our recent vacation, birds and bees, with a little lens-geek side trip.
Having touristed around Charlottetown, we drove down a series of smaller and smaller back roads and ended up at Canceaux Cove near Rocky Point, which I thought might present a nice vista of the city. It did, but the city looks boring. By way of consolation, there were these cute little birds running around on the beach and then flying loops in formation over the water.
Pretty sure these are Semipalmated plovers.
I wanted to get a picture of them in the air so I sauntered down the beach, assuming they’d fly away picturesquely. They studiously ignored me and eventually I had to jump up and down and wave my arms, and even then they took off grudgingly.
They were graceful and did this mysterious thing that birds can do, staying in formation with no obvious leader. I’ve had the pleasure, very occasionally, of being in engineering teams like that.
We went to Annapolis Royal because of its Historic Gardens and wow, what a treat. I think even those who don’t see themselves as garden fans would enjoy an hour or more sauntering around in there. I like taking pictures of flowers and a lot of these flowers had bees in them.
This one was cute enough to reward a close-up.
Aren’t her wings cute?
And I ask, what could be better than a cute bee in a pretty flower? Obviously, two bees.
And again, a closer look.
Bees are admirable creatures and I don’t want to make fun of them, but this surprised-looking little citizen makes me laugh. (She’s just navigating from one blossom to the next.)
All of these are shot with Fujifilm’s 55-200mm lens, which I’ve had for at least eleven years. Up till now, I’ve always pointed it at faraway things, but wow, I think I’ll be taking this to more gardens in future.
I mention the lens partly so I can link to this awesome (and funny) teardown piece from Lensrentals.
And, on the way out, let’s let that lens show off with a couple of roses.
Remember, pink and black are the colors of rock & roll. And if you’re anywhere near Annapolis Royal, stop and visit that garden.