DePIN x Deepfake Detection
When seeing is no longer believing, who can you really trust?
Welcome back to DePIN Snacks! Each week, we cover a recent web2 startup fundraise and explore why crypto-based networks will win long-term.
Today’s topic: deepfake detection.
Last week, Reality Defender announced a $33m Series B for a deepfake detection API that ingests text/audio/images/video content and returns a score measuring the likelihood that it was generated by AI.
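For readers who want a concrete picture, here is a minimal sketch of what calling a detection API of this kind might look like. The endpoint URL, field names and response shape are illustrative assumptions, not Reality Defender’s documented interface.

```python
# Hypothetical sketch of calling a deepfake-detection scoring API.
# The URL, headers, and response fields below are illustrative assumptions.
import requests

def check_media(file_path: str, api_key: str) -> float:
    """Upload a media file and return the probability it was AI-generated."""
    with open(file_path, "rb") as f:
        resp = requests.post(
            "https://api.example-detector.com/v1/score",   # placeholder endpoint
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": f},
        )
    resp.raise_for_status()
    # Assumed response shape: {"ai_generated_probability": 0.0 .. 1.0}
    return resp.json()["ai_generated_probability"]

# score = check_media("ceo_statement.mp4", api_key="...")
# if score > 0.9: route the content to manual review
```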
Today’s newsletter is sponsored by Polygon: build on the Agglayer today and apply to the Polygon Community Grants Program!
What are the use cases for deepfake detection?
Social media platforms need to verify the authenticity of user-generated content
Enterprises need to verify the authenticity of internal communications
Financial institutions need to verify the authenticity of KYC submissions
Call centers need to verify the authenticity of inbound calls
E-commerce brands need to prove the authenticity of celebrity endorsements
Governments need to prove the authenticity of official statements
The opportunity pattern-matches to other highly-successful venture outcomes: it solves an urgent, expensive and embarrassing problem for a broad set of enterprise customers with huge balance sheets and minimal risk tolerance.
Perhaps it’s no surprise that deepfake detection has become one of the hottest AI investment themes of 2024, with nearly $200m raised across a dozen VC-backed web2 companies.¹ While the markets are huge, these startups aren’t just competing with each other: they’re also up against hundreds of incumbent full-stack vendors in the security / identity / fraud industry that have sticky existing customer relationships and are choosing to build their own (presumably inferior) deepfake detection products.²
“The battle between every startup and incumbent comes down to whether the startup gets distribution before the incumbent gets innovation.” — Alex Rampell, a16z
Like OpenAI, Reality Defender started out as a nonprofit and later transitioned to venture funding, published much of the leading research in their field, and is the undisputed leader in their market from a technology perspective.³ Also like OpenAI, capital markets will eventually value Reality Defender and other deepfake detection companies based on cash flow generation—not research prowess. At OpenAI, this transition saw key executives leave as the company pivoted to focus on profitability. On a long enough time horizon, every AI company faces the same conflict: between a culture centered on technology, where the north star is building the most powerful and/or safest AI, and a culture centered on commercialization, where the north star is building world-class enterprise sales & customer success functions.
Why will crypto-based AI networks eventually win?
Crypto networks don’t need to maintain cultural alignment among a group of employees—instead, they use programmatic onchain incentives to direct resources towards a common “north star” goal. The leading crypto-based AI network, Bittensor, distributes incentives to three types of network participants (a rough split is sketched after this list):
Subnet creators set high-level objectives, analogous to executives at centralized AI companies. (18% of incentives)
Miners run infrastructure to train and inference AI models, analogous to research / tech / engineering teams at centralized AI companies. (41%)
Validators curate and monetize model outputs, analogous to product / sales / marketing / support teams at centralized AI companies. (41%)
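As a back-of-envelope illustration, here is how a day’s emissions might split across those three roles under the percentages above; the 100 TAO daily figure is an arbitrary example, not a protocol constant.

```python
# Split a subnet's daily TAO emissions across participant types,
# using the percentages listed above. The daily emission amount is
# an assumed example for illustration only.
SPLIT = {"subnet_creator": 0.18, "miners": 0.41, "validators": 0.41}

def split_emissions(daily_tao: float) -> dict[str, float]:
    return {role: daily_tao * share for role, share in SPLIT.items()}

print(split_emissions(100.0))
# {'subnet_creator': 18.0, 'miners': 41.0, 'validators': 41.0}
```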
Bitmind is one of the fastest-growing subnets on Bittensor, currently distributing >$50k/day in TAO to miners and validators for its deepfake detection platform. Like Reality Defender, Bitmind uses a multi-faceted mechanism to detect deepfakes: their technology, dubbed Content-Aware Model Orchestration (CAMO), meaningfully outperforms leading open-source detectors on benchmarks like DeepfakeBench. You can test the subnet’s results yourself via browser extension, social media bot or webapp—I tried a dozen different paid AI image generation platforms and couldn’t fool it once…
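To make the “content-aware orchestration” idea concrete, here is a simplified sketch of routing an input to specialist detectors and averaging their scores; the detector functions, routing rule and scores are hypothetical stand-ins, not Bitmind’s actual CAMO code.

```python
# Illustrative content-aware orchestration: route each input to
# modality-specific detectors, then aggregate their scores.
# Everything below is a simplified stand-in, not Bitmind's implementation.
from typing import Callable

Detector = Callable[[bytes], float]  # returns probability the input is AI-generated

def face_detector(image: bytes) -> float:
    return 0.92  # placeholder: a real model would inspect facial artifacts

def texture_detector(image: bytes) -> float:
    return 0.81  # placeholder: a real model would inspect texture/frequency patterns

def route(image: bytes, contains_face: bool) -> list[Detector]:
    """Select which specialist detectors to run based on image content."""
    return [face_detector, texture_detector] if contains_face else [texture_detector]

def orchestrated_score(image: bytes, contains_face: bool) -> float:
    scores = [detector(image) for detector in route(image, contains_face)]
    return sum(scores) / len(scores)  # a real system might weight by detector confidence

print(orchestrated_score(b"...", contains_face=True))  # -> 0.865
```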
How can Bitmind, a three-month-old subnet with zero external funding, compete with Reality Defender, a company that’s raised $40m over the past three years?
Reality Defender has ~40 employees plus another 10 open roles. At an average salary of $200k/yr plus benefits and bonuses, that works out to roughly $10m in annual salaries; assuming a 50/50 split between salaries and infrastructure spend, the company might plan to spend ~$20m/yr on AI infra and talent.
Bitmind is already distributing >$20m/yr in TAO incentives: $10m/yr to miners for developing intelligence, $10m/yr to validators for commercializing intelligence, plus $4m/yr to itself as the subnet creator. These figures have plenty of room to grow: Bitmind is currently the #17-ranked subnet and climbing fast, so its 1.8% share of overall TAO emissions could rise considerably.
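Putting the two back-of-envelope estimates side by side (all figures are the rough numbers from the paragraphs above, not reported financials):

```python
# Back-of-envelope comparison using the article's own rough estimates.
employees = 40 + 10                  # current headcount plus open roles
avg_salary = 200_000                 # assumed average fully-loaded salary
rd_salaries = employees * avg_salary            # ~$10m/yr in salaries
rd_total_spend = rd_salaries * 2                # 50/50 salaries vs infra -> ~$20m/yr

bitmind_incentives = 10_000_000 + 10_000_000 + 4_000_000  # miners + validators + subnet creator

print(f"Reality Defender est. annual spend: ${rd_total_spend:,}")   # $20,000,000
print(f"Bitmind annual TAO incentives:      ${bitmind_incentives:,}")  # $24,000,000
```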
Unlike Reality Defender’s employees, contributors to Bitmind’s subnet face no five-month interview process, no quarterly reviews or office politics, and no one telling them what to do or how to do it: they simply detect deepfakes and/or monetize the outputs, and, if they are world-class at what they do, they’ll earn TAO in accordance with clearly-defined protocol rules.
Which paradigm do you think attracts the smartest, hungriest AI talent, and the most-advanced, performant AI infrastructure over the long-term?
Endnotes
1 Deepfake detection companies that raised capital in 2024: Reality Defender ($33m), Clarity ($16m), Illuma ($9m), Loti ($7m), IdentifAI ($2m), DuckDuckGoose ($1m), GetReal (undisclosed), DeepMedia ($25m contract) and Pindrop ($100m loan).
2 Full-stack identity/fraud/security vendors rolling their own deepfake detection: Sardine, Socure, Sumsub, AuthenticID, Hiya, McAfee and many others.
3 Reality Defender’s research: using text-based Q&A to reason about specific features in images, using novel signal modulation techniques and vision transformers in audio, and detecting anomalies in the audio-visual patterns of videos.
4 In other recent crypto x deepfake news, Worldcoin launched an anti-deepfake solution called Deep Face for WorldID users to prove their communications are legitimate.