Tinder and Zoom Want You to Scan Your Eye to Prove You’re Human
Sam Altman’s eyeball-scanning startup is making big moves.
by Tudor Tarita · ZME Science

At a San Francisco event on April 17, AI-generated deepfakes of Ronald Reagan and legendary broadcasters like Walter Cronkite spoke from massive screens. They were digital illusions, designed to highlight a growing problem: the internet is losing its grip on reality, and it is becoming increasingly difficult to tell who is actually human.
Ironically, Sam Altman (CEO of OpenAI, which brought us ChatGPT) wants to solve this.
Altman co-founded World, a biometric identity company that scans your iris with a special device. Now, World is partnering with Tinder and Zoom to see whether it can fend off bots and deepfakes. But how much of our physical identity are we willing to offer to secure our online life?
The Biology of Proof
The iris is the colored ring around the pupil, and its folds, fibers, and pigment patterns are highly distinctive. It’s like a fingerprint. World uses those patterns to create what it calls a “proof of human” credential.
Your phone wouldn’t be able to do this, so you’d have to look into a special device.
A user looks into an Orb, a spherical imaging device that verifies they are a real person. World says the system turns the scan into a digital credential that can be used online without attaching it to a person’s name, address, or other conventional ID.
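World has not published the details of how the Orb turns a scan into a credential, but the general idea of deriving an opaque, name-free identifier from a biometric template can be sketched in a few lines. Everything below is illustrative: the `issue_credential` helper, the issuer key, and the byte-string "template" are assumptions for the sketch, not World's actual scheme.

```python
import hashlib
import hmac
import os

def issue_credential(iris_template: bytes, issuer_key: bytes) -> str:
    """Derive an opaque 'proof of human' identifier from an iris template.

    A keyed hash is stable for the same eye (so one person cannot enroll
    twice) but reveals nothing about the raw scan, and it carries no name,
    address, or other conventional ID.
    """
    return hmac.new(issuer_key, iris_template, hashlib.sha256).hexdigest()

# Hypothetical enrollment: the issuer holds a secret key, and the raw
# template need not be stored once the credential has been derived.
issuer_key = os.urandom(32)
template = b"example-iris-feature-vector"  # stand-in for a real iris code

cred_a = issue_credential(template, issuer_key)
cred_b = issue_credential(template, issuer_key)  # same eye, same credential
assert cred_a == cred_b

other = issue_credential(b"different-eye", issuer_key)  # different person
assert cred_a != other
```

The point of the design is that a service like Tinder only ever sees the opaque credential, never the iris image itself.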
That credential is now moving into Tinder. World says Tinder is expanding its World ID integration to the United States, where verified users can receive a badge showing they are human. To sweeten the deal, Tinder is offering five free Boosts to verified users, a paid feature that increases profile visibility.
For dating apps, the appeal is obvious. Bots and scammers already pollute the experience, and AI makes them more convincing. A verified-human badge could give users a quick signal that the person on the other end is not an automated account.
But it could also create a new kind of pressure. On a dating app, an unverified profile may start to look suspicious, even if the person behind it simply does not want to scan their eyes. Suddenly, to be viable on an already tough dating scene, you'd need to track down an Orb and have your iris scanned.
Deepfakes in Your Meeting Room
The same logic is coming to Zoom, where generative algorithms are already pulling off high-stakes heists. In 2024, a finance worker in Hong Kong wired $25 million to criminals after attending a call populated entirely by deepfakes of his colleagues.
World says Zoom is integrating Deep Face into meetings to fight AI impersonation. The system checks three things: the image captured when a person verified at an Orb, a real-time Face Auth liveness selfie taken on the user’s device, and the live video frame other meeting participants see on screen. If they match, the participant can appear as verified.
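World has not disclosed its matching logic, but the three-way check described above can be illustrated with a toy comparison of face embeddings. The function name, the cosine-similarity metric, the threshold, and the example vectors are all assumptions made for the sketch, not World's implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verified_participant(orb_emb, liveness_emb, stream_emb, threshold=0.9):
    """Return True only if all three captures agree pairwise:
    the Orb enrollment image, the live selfie, and the meeting video frame."""
    pairs = [
        (orb_emb, liveness_emb),
        (liveness_emb, stream_emb),
        (orb_emb, stream_emb),
    ]
    return all(cosine_similarity(a, b) >= threshold for a, b in pairs)

# Toy embeddings: the genuine user's three captures nearly coincide...
orb = [0.9, 0.1, 0.4]
selfie = [0.88, 0.12, 0.41]
stream = [0.91, 0.09, 0.39]
print(verified_participant(orb, selfie, stream))        # True

# ...while a deepfaked video feed diverges from the enrolled face.
fake_stream = [0.1, 0.9, -0.2]
print(verified_participant(orb, selfie, fake_stream))   # False
```

Requiring all three pairs to match means an attacker cannot pass by spoofing just one stage, such as replaying a stolen selfie while streaming a deepfake.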
World also wants to expand to the music market, partnering with Concert Kit, a tool meant to let artists reserve tickets for verified humans rather than bots. But even that rollout has been messy. World’s current announcement says Thirty Seconds to Mars will use the system for part of its 2027 tour, while Wired reported that the company had earlier promoted a Bruno Mars partnership that did not exist.
The Privacy Problem Is Getting Worse
World’s timing is good. The internet really is filling with synthetic content, fake accounts, and automated agents. But the company’s history makes the pitch harder to swallow, especially at a time when public trust in media is already declining.
The project began as Worldcoin, with a cryptocurrency component: some users who verified with the Orb were offered WLD tokens. The token launched at $7.50 but has since plummeted to around 25 cents. The recruitment drive drew criticism of its own. A 2022 MIT Technology Review investigation described “deception, exploited workers, and cash handouts” in the project’s early push to sign up users, many of them in poorer communities and developing countries.

Regulators have pushed back, too. Kenya suspended Worldcoin’s operations in 2023 over privacy and security concerns, although Reuters later reported that Kenyan authorities had dropped a police investigation and that the company expected to resume operations. In Europe, the scrutiny has been even sharper: a Bavarian data protection decision found GDPR violations and ordered the Worldcoin Foundation to erase certain iris codes collected through its European activities, and to give users a proper way to exercise their right to deletion. Worldcoin has since rebranded to World.

Despite these controversies, World says nearly 18 million people have verified as human across 160 countries. Altman views biometric verification as an inevitable shield. He took the stage in San Francisco to warn that the internet will soon host “more stuff made by AI than is made by humans,” according to the BBC. The company argues that this kind of identity layer is becoming essential as AI makes impersonation cheap and scalable.
Would you sign up?