Alex Blania: ‘In the next few years thousands of humans will fall deeply in love with an AI’
The CEO of Tools for Humanity, the company behind Worldcoin, on technology and ethics, Artificial Intelligence and the use of private data.
On the cusp of the age of Artificial Intelligence, entrepreneur Alex Blania is leading a controversial project aimed at verifying humanness in a world that is increasingly digital.
Blania is the CEO of Tools for Humanity, the company behind Worldcoin, a cryptocurrency project that pays humans for a scan of their iris in order to create a digital passport of sorts. The company, co-founded by ChatGPT-maker OpenAI’s chief Sam Altman and Max Novendstern, promises a digital utopia but has faced accusations that it takes advantage of citizens in low-income countries, who hand over precious biometric data when they allow their eyes to be scanned by Worldcoin’s futuristic orbs. And while the project is under scrutiny in several countries – including Argentina, where at least 500,000 eyeballs have been scanned – Blania defends it as a noble attempt to solve the complex problem of digital identity as the Internet becomes overrun by AI-fuelled programmes and bots.
He hopes the Worldcoin protocol will be used to create a better digital ecosystem where cryptographic verification could be one of the few ways to build trust online. He’s also optimistic that AI will generate a productivity boom that will allow humans to work less, in part justifying the need for a universal basic income. Yet serious questions remain about the handing over of one’s unique personal information to a group of San Francisco-based techies.
Ethical issues have been raised regarding the use of facial recognition, such as when governments started using it, and now you are using similar technology to develop WorldID. Why do you think that is?
Well, I do think everything that has the potential to limit either privacy or freedom should be considered very carefully, and very critically. So what we really focused on when we started working on [Worldcoin] was how we can create something that actually gives all the power to individuals, using a lot of cryptography around it, in a way that you don't need to rely on a third party. You, as an individual, can decide what you want to share, what you don't want to share and how you want to use it. But generally speaking, I think it's something to be [considered] critically – all these technologies have a potential downside.
One of the things that makes this explicit is that your face is being scanned and being used, as opposed to typing in your password or giving your ID, and even your biometric data to these major companies, as we’ve been doing for years. Why do you think you guys get so much pushback?
What do you think?
I think that it's more explicit, when one of your orbs scans someone's eyes, as opposed to just scanning your face on your iPhone. Can you explain why you think this is not a problem?
I believe the Internet itself will actually change quite meaningfully. And this notion of verifying who is a human on the Internet will turn out to be critical infrastructure. I don't think it will just be nice to have it, I think it will be very important to protect our democracy and the Internet.
On social networks we interact with each other, we share opinions, as we do on digital media, and I think they rely on the notion that what we interact with is an actual person, actually human.
And I do believe – and I've thought about this for a long time – [that] the only way to solve it will be some form of biometrics. Simply because everything that is purely digital, like my behaviour on the Internet, or certain actions I take from filling out captchas [codes] to more complicated things, all of those things will be done by AI [i.e. artificial intelligence programmes] very easily.
When we started working on this, we thought about how it should actually be and came to the conclusion that it should be an open protocol that is verifiable and not controlled by any of the big tech companies or governments. Governments sometimes don't have the competence to do it, but it's also something that should be the right of the citizens of any government – it should be in their control. So we tried to design an open protocol that can actually solve the problem on a global scale, giving users their privacy and their control.
I do think we have to explain it much better than we have in the past. The fact that it's a kind of audit, with all the cryptography we use to actually keep you anonymous and private – these things are not easy to understand. And it's not something that you usually do. Also, the fact that [the iris scanner is] a chrome orb certainly did not help to make it a little less creepy – but it looks pretty cool!
It looks very sci-fi…
Which was the idea. But look, one thing I always think back to is one of the earliest meetings we had… the biggest risk was that no-one cared about this project. We don't have that risk anymore – I think people really care. So there's a lot of explaining that we need to do: this whole idea of a protocol is not clear, the cryptography around it is not clear.
Another thing we should have done much better was actually going to the countries where we were launching beforehand, [to] meet with people like yourself, with the media, with the government, with regulators, with citizens, and actually explain why we think this matters, how it will work, and how it will play out. But none of us expected that amount of pick-up so fast – it was just overwhelming. If I could go back in time I would change that.
What generated some of the concern here was the payment. You guys have been talking about a “universal income,” and maybe you can associate those two, because offering US$100 in a developing country with widespread poverty generates suspicion.
We believe that AI [i.e. artificial intelligence] will change many things about both the economy and society.
One of the biggest challenges of our time will be how to make all of this progress available to the whole world. How is this not something that just occurs in San Francisco, where a couple of big tech companies just get even bigger? How does it actually lift up the world? This was one of the big founding reasons for the project. It might be that in the coming decades we need something like a universal basic income. I know it's politically controversial, but I think things will turn out to be quite different than they are today, so we should be very careful making these statements.
In any case, we should try new things, such as giving everyone some universal access to compute [on the internet], which will be very important because it may become as critical as water, given it will become the way you actually generate a living. This concept of economically giving everyone minimum firepower, access and control will turn out to be critical.
Also, conceptually speaking, we are actually not seeking anything in return. There is no business model where we sell data – all of those things are not true. Rather, it's just a new digital currency that is created by giving ownership to every human being. There is no trick. There's nothing hidden. When you sign up, you just get part of that currency.
I think part of the fear that people have is that you have these transnational organisations, particularly the largest six or seven, that control so much personal data from a private-sector standpoint, meaning that there’s little oversight, especially in countries like Argentina. How does your governance structure make you different?
Even medium-term, something like Worldcoin will not work if there's a single company behind it. You’re probably very familiar with Ethereum [the blockchain protocol behind the world’s second largest cryptocurrency, ETH]. Vitalik [Buterin] and a group of people actually started it, but now no-one really controls it. It's a public infrastructure where many people build companies on top of it, people issue stable coins on top of it – it created a whole new wave of innovation.
Worldcoin is the same. At this point, it's a fairly small project, but if we talk again in a year, it will be an open protocol, no matter what I say or do with the project. People will build companies on top of it, people will use it, and hopefully it will turn out to be very useful to the world.
I run a company called Tools for Humanity and we built Worldcoin. Worldcoin itself is both a protocol and a non-profit foundation, so it's fundamentally different from a company. It would not work otherwise, because neither the public nor other developers would trust it. And it would not be able to scale [up] globally.
Who has the control over the biometric data ultimately?
When you sign up, what happens first is that the device checks that you are actually a person, so that you can’t defraud it – there are sensors in front to make sure you are not just a display or a printout or anything like that.
It takes a picture of your face and your eye, which is actually something very common that occurs in many airports around the world. Many governments do it as well – it's actually not a new technology. It then generates an iris code, which is essentially just an embedding of the information in your eyes, similar to what the iPhone does when you use Face ID. The iris code then gets split into multiple pieces and sent to multiple servers that all need to work together to actually compare the uniqueness of the code. That's the first important piece of information: there's no central server that does that anymore.
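The splitting Blania describes can be illustrated with textbook additive secret sharing: a value is divided into random-looking shares so that no single server learns anything, and only all shares together can reconstruct it. This is a minimal sketch of the general idea, not Tools for Humanity’s actual scheme; the field size and share count are illustrative assumptions.

```python
# Additive secret sharing sketch – NOT Worldcoin's real implementation.
# Each share alone is a uniformly random number; only the sum of all
# shares (mod MOD) recovers the secret.
import secrets

MOD = 2**256  # illustrative modulus, stand-in for a real field size


def split(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares, one per server."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    # Last share is chosen so all shares sum back to the secret.
    shares.append((secret - sum(shares)) % MOD)
    return shares


def reconstruct(shares: list[int]) -> int:
    """All servers must cooperate: combining every share recovers the secret."""
    return sum(shares) % MOD


iris_code = 0xDEADBEEF            # stand-in for a real iris-code embedding
shares = split(iris_code, 3)      # e.g. one piece per server
assert reconstruct(shares) == iris_code
```

Any subset of fewer than all the shares is statistically independent of the secret, which is why no central server can recover the iris code on its own.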
And then second – which I think is the even more important process – is, as a person, you can do what is called a zero knowledge proof that confirms that you were the person verified before with the orb. This effectively gives you full anonymity.
Even if what I’m saying is not true, or things go wrong or break, or we make a mistake… the worst outcome is that your WorldID would be disconnected from your account. The only thing we – or anyone breaching the system – could ever learn is that you have verified before [that you are a human].
I think the combination of decentralised compute, zero knowledge proofs, and ultimately a decentralised system is the most private we can get while solving a very important problem.
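The zero-knowledge proof Blania mentions can be illustrated with the textbook Schnorr proof of knowledge: you convince a verifier that you hold a secret without ever revealing it. World ID’s actual system uses zk-SNARKs; this sketch only conveys the core idea, and the group parameters are toy-sized assumptions, not secure values.

```python
# Schnorr proof-of-knowledge sketch (Fiat–Shamir, non-interactive).
# Illustrates "prove you hold a secret without revealing it" –
# NOT World ID's actual zk-SNARK circuit.
import hashlib
import secrets

p = 2**127 - 1   # Mersenne prime; toy parameters, far too small for real use
g = 3            # illustrative generator

# Prover's long-term secret and public key.
x = secrets.randbelow(p - 1)   # the secret, never sent anywhere
y = pow(g, x, p)               # public key

# Prover commits to a random nonce.
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# Fiat–Shamir: the challenge is derived by hashing the transcript,
# so no interactive verifier is needed.
c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % (p - 1)

# Response reveals nothing about x on its own.
s = (r + c * x) % (p - 1)

# Verifier checks g^s == t * y^c (mod p) without ever seeing x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verification works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c, yet the response s is masked by the random nonce r, so the secret x stays hidden.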
Getting a little futuristic, there are two main visions as to where AI is taking us. One option is the dystopian future where robots take over and the other is where the increases in productivity are such that humans won’t have to work as many hours anymore. Where do you think it’s taking us?
I believe the safety issue will be solved. I’m not an expert – I was four years ago, but many things have changed in that time! I think the AI safety issues will be broadly solved, even if now it seems like a very scary question. There is a lot of progress occurring in the field, a lot of good things moving in the right direction.
I think when it is solved – if it is solved – then the other question is whether we will see that progress continue, and on what timeline. Fundamentally, I think in the short term it will probably be less than we expect, while in the long term it will be even more than we think, because it is probably exponential in some form. I do think it will probably be the most transformative technology in human history, in many ways.
When we talk again about this in two years, I think we will talk much more about it as the Internet, as opposed to a technology controlled by four big tech companies. It will be a broad technology [with] many players. Three or four really big companies will probably have the big frontier models, but there will be many other companies that just use these things for scientific progress, medical progress. All collective human output will drastically accelerate, and that's exciting. I think that's very cool.
In 10 years, we’ll look back on today as the Stone Age, where people had to die of cancer and there was poverty in many places around the world. That’s what I hope technology can actually bring about, and what I believe will actually happen. There will be other obstacles along the way, such as energy, which will be a big bottleneck.
Fundamentally it will be a technology of empowerment.
What’s the Internet going to look like? Lately, a lot of people have been questioning whether it’s broken, given the concentration of power in the hands of a few companies, algorithms becoming black boxes and ultimately the generation of negative externalities. Now, with AI, it’s going to change the whole game?
The cost of generating content will basically go to zero. [AI generated] content that is indistinguishable from reality and very sophisticated will [see its cost of production] go to zero. Most of the traffic on the Internet will not be human. Currently, the AIs we have are passive systems, but they will turn into agents. You tell an AI system, “please do this task,” and currently this action translates into just one query – in the future they will be executing multiple steps, going to the Internet, looking for the answer, and maybe even taking action on the Internet, and then going back to the user. It will be much more complicated, a lot of the traffic will be AI.
Currently, by default, we trust things on the Internet. We see things that we think are generated by a person that actually thought about something and decided to communicate a certain piece of information on social media, for example. I think this dynamic will completely flip around. By default, we will actually not trust what we see on the Internet. We will need to prove what is real.
I think it will be a very different place. I do worry about it becoming even more centralised. That was the big change that occurred from the 2000s to today. We had this open place for innovation, with all these different websites and creators, and then it became much more centralised. There is a danger that because of AI it becomes even more centralised.
Is it possible for AI to have an identity? I don’t want to use the term ‘consciousness,’ but will they eventually become a ‘being’?
Yes. However, we should be very careful to keep a distinction, rather than accepting that these things might be considered persons – even just for the simple reason that humans will fall in love with these things, right? There are already these dating AIs and things like that – in the next few years you will have thousands of people fall deeply in love with an AI, and they will just not accept the idea that these systems should not have personhood. But they're still not humans.
I think that distinction is very important, so we need to build a completely new framework for how to think about machines and these issues; it will be somewhat complicated.