A client asked us if we could provide a verifiable record of origin for anything generated by AI. It wasn't a hypothetical - they wanted to know if we had the capability. We didn't, not in any rigorous way. That sent us looking for what existed, and we landed on C2PA.
In hindsight, we'd seen it coming. When we wrote our initial IRAP proposal for pipeline modernization, we included an audit trail as a requirement. We knew that as AI tools entered VFX workflows, the ability to document what was generated, by what, and when would matter. We just didn't know yet what the mechanism would look like.
C2PA turned out to be that mechanism. We've been prototyping a provenance pipeline and are in the process of applying for membership in the Coalition. Here's what we've learned so far, and why we think the VFX industry has both a problem to solve and an opportunity to help solve it.
C2PA has built something impressive
The Coalition for Content Provenance and Authenticity has been working on content provenance since 2021, and they've made remarkable progress. C2PA is an open standard for embedding cryptographically signed provenance data into media files. A Content Credential records what tools touched an asset, what actions were performed, whether AI was involved, and who's making those claims. It's signed with X.509 certificates - the same tech behind SSL/TLS - so anyone can verify the chain of trust without calling the creator.
Think of it as a chain of receipts for how content was made and modified. Not DRM. Not blockchain. Just signed metadata that travels with the file.
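To make "signed metadata" concrete, here is a minimal sketch of the kind of data a Content Credential carries, modeled as plain JSON. The field names (`claim_generator`, `assertions`, the `c2pa.actions` assertion, and the IPTC `digitalSourceType` URI for AI-generated media) follow the C2PA spec's vocabulary, but this is an illustration of the shape, not actual SDK output, and the tool and file names are made up:

```python
import json

# Illustrative manifest data for an AI-assisted deliverable. The labels follow
# the C2PA spec; the claim generator and title are hypothetical.
manifest = {
    "claim_generator": "ExampleStudioPipeline/1.0",
    "title": "shot_010_comp_v004.mov",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        # Records that the asset was created, and that a
                        # trained generative model was the source.
                        "action": "c2pa.created",
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

In a real pipeline this JSON would be handed to a C2PA SDK, which signs it and binds it to the asset's content hash; the signature is what lets a downstream verifier trust the claims without asking us.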
The adoption is real. Leica and Nikon cameras embed Content Credentials at capture. Sony's PXW-Z300 does it for video. Samsung Galaxy S25 and Google Pixel 10 sign natively. LinkedIn and TikTok preserve credentials on their platforms. Adobe is rolling out enterprise tooling. The spec is at version 2.4. The steering committee includes Adobe, Microsoft, Google, Meta, Amazon, Sony, BBC, and OpenAI. Over 5,000 organizations have joined the broader Content Authenticity Initiative.
Photography, journalism, and consumer media drove the early adoption because those industries had the most urgent need and showed up to do the work. The result is a mature, well-designed standard with broad industry support.
VFX has a gap to close
The SDK currently supports JPEG, PNG, TIFF, DNG, GIF, WebP, AVIF, HEIC, MP4, MOV, AVI, PDF, MP3, WAV, and SVG. OpenEXR, Alembic, FBX, and USD aren't there yet.
That gap is ours to bring forward, not an oversight on C2PA's part. The standard was designed to be extensible, and the community has been welcoming to new industries with new requirements. The guiding principles explicitly state that C2PA should support all common asset and content file formats, and the architecture provides a sidecar mechanism for formats that can't embed manifests natively. The foundation is ready. What's been missing is VFX people in the room articulating what we need.
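The sidecar mechanism is worth a quick sketch, since it's the likely bridge for EXR and other VFX formats until native embedding lands. We're assuming the common convention of a companion file that shares the asset's base name with a `.c2pa` extension; the helper names are ours:

```python
from pathlib import Path

def sidecar_path(asset: Path) -> Path:
    # For formats that can't embed a manifest store natively, C2PA allows
    # the manifest to travel in a sidecar file. We assume the convention of
    # the asset's name with a .c2pa suffix (plate.0101.exr -> plate.0101.c2pa).
    return asset.with_suffix(".c2pa")

def write_sidecar(asset: Path, manifest_bytes: bytes) -> Path:
    # Place the signed manifest bytes next to the asset so they travel
    # together through the pipeline.
    out = sidecar_path(asset)
    out.write_bytes(manifest_bytes)
    return out
```

The obvious cost of a sidecar is that it can be separated from its asset in transit, which is one reason native format support is worth pursuing in the working groups.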
The data model already fits compositing
When we started digging into the spec, we were looking for the place where it would fall apart for VFX. A VFX shot isn't a photograph - it's dozens or hundreds of source assets combined through a chain of tools, with multiple artists touching the same shot across departments. Compositing is fundamentally different from single-asset workflows.
But the spec handles it well. C2PA defines "ingredients" with three relationship types. A parentOf ingredient is a direct predecessor - you opened a file, modified it, saved it. A componentOf ingredient is one of multiple assets combined into something new. An inputTo ingredient is information used to help create the asset, like a prompt fed to a generative AI model.
That componentOf relationship maps naturally to a Nuke comp. Your background plate, CG render, matte painting, and roto are all componentOf ingredients in the final composed asset. Each ingredient's own manifest gets bundled into the output's manifest store, creating a provenance chain that can stretch back to each ingredient's origin. The spec even handles redaction - if a plate arrives with sensitive metadata, those assertions can be permanently stripped while recording that a redaction occurred.
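Here is how that mapping might look in data. The three relationship values (`parentOf`, `componentOf`, `inputTo`) come straight from the spec; the asset names and the small helper are illustrative stand-ins for a comp's inputs:

```python
# Sketch: describing a comp's inputs with C2PA ingredient relationships.
# Relationship values are from the spec; everything else is hypothetical.

VALID_RELATIONSHIPS = {"parentOf", "componentOf", "inputTo"}

def ingredient(title: str, relationship: str) -> dict:
    assert relationship in VALID_RELATIONSHIPS
    return {"title": title, "relationship": relationship}

comp_ingredients = [
    # Pixels combined into the final frame are componentOf ingredients:
    ingredient("bg_plate.0101.exr", "componentOf"),
    ingredient("creature_beauty.0101.exr", "componentOf"),
    ingredient("matte_painting_v012.tif", "componentOf"),
    ingredient("roto_main_v003.nk", "componentOf"),
    # A prompt fed to a generative cleanup tool is information used to make
    # the asset, not pixels in it - so it's inputTo:
    ingredient("cleanup_prompt.txt", "inputTo"),
]
```

Each of those ingredients would carry its own manifest, bundled into the output's manifest store, which is how the chain stretches back to each source's origin.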
The data model is sound. The opportunity now is in the practical layer: building format support for VFX file types, working with DCC vendors on C2PA integration, and solving the scale challenges that are unique to our workflows.
The practical challenges are real
There are genuine problems to work through, and they're the kind of problems that benefit from collaboration between VFX practitioners and the C2PA community.
Volume is one. A single shot can involve thousands of EXR frames across multiple render passes. Signing each one individually is a different scale problem than signing a photograph. Understanding how C2PA's architecture can accommodate that kind of throughput is an active area of exploration for us.
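One direction we've been sketching - our idea, not something the spec mandates - is to hash every frame but sign once per sequence: a single signed record holds the per-frame digests plus a rollup hash, so one signature covers thousands of frames while any individual frame can still be re-hashed and checked against its entry. A stdlib-only sketch:

```python
import hashlib
from pathlib import Path

def frame_digest(path: Path) -> str:
    # SHA-256 of the frame's bytes; a stand-in for C2PA's content hash binding.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sequence_record(frames: list[Path]) -> dict:
    # One record for the whole sequence. Sign this once instead of signing
    # each frame; verifying a single frame only requires re-hashing it and
    # comparing against its entry.
    digests = {f.name: frame_digest(f) for f in sorted(frames)}
    rollup = hashlib.sha256("".join(digests.values()).encode()).hexdigest()
    return {"frames": digests, "rollup": rollup}
```

Whether something like this fits C2PA's hard-binding model for multi-asset sequences is exactly the kind of question we want to bring to the working groups.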
Pipeline gaps are another. Most tools in a VFX pipeline aren't C2PA-aware today. The spec acknowledges that gaps happen and says they should be detected and documented. That's a reasonable approach, but a provenance record with frequent gaps is a different value proposition than one captured end-to-end. Figuring out where provenance adds the most value in a VFX pipeline - and where it's okay to have gaps - is something our industry needs to think through and bring back to the working groups.
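Detecting and documenting those gaps can be mechanical even before every tool is C2PA-aware. A toy sketch, with made-up pipeline step names, of walking a shot's tool chain and reporting which steps produced no manifest:

```python
# Sketch: flag pipeline steps that produced no provenance manifest, so a
# delivery report can document the gaps as the spec suggests. Step names
# are illustrative.

def find_gaps(chain: list[tuple[str, bool]]) -> list[str]:
    """Return the tools in the chain that emitted no C2PA manifest."""
    return [tool for tool, has_manifest in chain if not has_manifest]

pipeline = [
    ("plate_ingest", True),
    ("paint_cleanup", False),   # this tool isn't C2PA-aware yet
    ("comp", True),
    ("delivery_transcode", True),
]

gaps = find_gaps(pipeline)
```

Even a report this simple tells a client exactly where the provenance chain is attested and where it relies on process trust instead.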
Certificates cost money. You need a recognized Certificate Authority on the C2PA Trust List, and the infrastructure for smaller organizations is still maturing. These are ecosystem-level challenges that the broader C2PA community is actively working on, and VFX studios joining that conversation can help ensure the solutions work for organizations of different sizes.
The timing matters
Two things make this worth engaging with now.
First, regulation. The EU AI Act's Article 50 enforcement begins in August 2026, requiring machine-readable disclosure on AI-generated content. California's SB 942 took effect in January 2026. If your studio uses generative AI anywhere in your pipeline - and increasingly, you do - your clients may need provenance metadata on delivered assets to satisfy their own compliance obligations.
Second, the standard is still actively evolving. C2PA is a Linux Foundation project with over 300 member organizations. The Technical Working Group and Content Authenticity Working Group are where format support gets defined, action vocabularies get standardized, and decisions get made about how provenance works for different industries. These groups are open to new members and new perspectives. The people building C2PA can't anticipate VFX requirements on their own - they need practitioners from our industry to describe the problems and collaborate on solutions.
What we'd suggest to other studios
You don't need a production-ready C2PA pipeline tomorrow. But don't wait until a client forces the issue either.
Read the C2PA explainer document. It's well-written and shorter than you'd expect. Pay attention to the ingredients model and the sidecar mechanism - those are the parts most relevant to VFX workflows.
Try signing something. The c2pa-python and c2pa-rs SDKs are open source under MIT license. Pick one deliverable format, one point in your pipeline, and see what happens. You'll learn things from prototyping that reading specs won't teach you.
And consider joining C2PA. Not because membership is a badge, but because the working groups are where the collaboration happens. VFX has specific needs - high-frame-count assets, multi-tool workflows, formats the spec doesn't support yet - and the best way to get those needs met is to show up and contribute. The C2PA community has built something genuinely good. The opportunity for VFX is to help make it work for our corner of the industry too.
The question our client asked is going to become routine. The studios that can answer it with verifiable proof, not just a verbal assurance, are the ones that will be best positioned. And the studios that help shape how provenance works for VFX will be better positioned still.