Jaron Lanier is a pioneering technologist, writer, and musician, best known for coining the term “Virtual Reality” and founding VPL Research, the first company to sell VR products. He led early breakthroughs in virtual worlds, avatars, and VR applications in fields like surgery and media. Lanier writes on the philosophy and economics of technology in his bestselling books Who Owns the Future? and You Are Not a Gadget. His book Dawn of the New Everything: Encounters with Reality and Virtual Reality is an inventive blend of autobiography, science writing, and philosophy. Lanier has been named one of TIME’s 100 most influential people and serves as Prime Unifying Scientist at Microsoft’s Office of the CTO—aka “Octopus.” As a musician, he’s performed with Sara Bareilles, Philip Glass, T Bone Burnett, Laurie Anderson, Jon Batiste, and others.
THE CREATIVE PROCESS
Your memoir, Dawn of the New Everything, brings together some of those different elements of what you do. It helped me understand VR, virtual reality, a bit more. But for you, I think it seems to be more expansive than a scientific or technological pursuit. It's almost like the core of human intellectual activity—a web of life, a language for connections…?
JARON LANIER
Virtual reality has meant very different things to me over the years. I've been working on it for, God, half a century now, which is sort of hard to believe. I had my first startup selling it 40 years ago—yeah, 40 years ago. So, it hasn't stayed the same.
Originally, when I was young, what I was going for was a kind of new layer of human experience that would be a little bit like shared intentional dreaming, where people would be able to spontaneously improvise what's going on in virtual reality for each other. The image I had of it back then was that you'd play virtual musical instruments—like some crazy instruments I love—and play the world into existence.
The idea was sometimes called post-symbolic communication, which meant that if you could find a way to directly bring the stuff in virtual reality and the principles in virtual reality into existence in an improvised way that was social, it would be a kind of communication that transcends symbols. We use the little things we can do directly in the world, like move our tongues around to symbolize all the things we can't change. For instance, you can't spontaneously create a giant flying, glittering elephant in front of you, but you can speak those words. So, the idea was to find a way to make that thing appear for other people directly. That was back then, and it's gone through many iterations since.
A huge difference now is that there are millions of people who use virtual reality. Back when I started, there was no one, and then there were two, and then there were four. For a long time, there were really only a handful. In fact, in the old days, even if I gave somebody a demo, they still couldn't understand what they had just experienced. It was so exotic and so strange. But now it's become ordinary. It's become commodified, and the version of virtuality as it's appeared is very different from what I imagined. It's much more, how would I put it, driven by the economics of the attention economy, trying to get people sort of hooked on things.
I'm really surprised at how poor social VR is. I mean, the term virtual reality was supposed to mean social. That was the original intent of the term. Anyway, I still like the stuff.
I still enjoy putting on a headset, and I think some of them have gotten good in some ways. The styles of interaction that are available are kind of inferior to what we could do in the '80s. Because in the '80s, the graphics were so bad that we had to really focus on haptics and sound. Those were done pretty well. Now the graphics are both cheap and higher quality, and in a way, people have gotten lazy about interaction. A lot of the interactions are really uninteresting. I think the most interesting thing about virtuality is being in there with your body and really moving. We used to do it with clothing, instrumented gloves, and things like that. Now, of course, it's done with cameras and other kinds of sensors.
THE CREATIVE PROCESS
Yeah, I think it could be liberating. I think it could be utopian. I do want to go into Dawn of the New Everything and your life, because you have a really eccentric life, a varied life. I mean, you talk about how you went to school in Mexico and grew up around New Mexico—there are just so many events in your life where your imagination was liberated in a way. It's a kind of education that a lot of people don't get a chance at. You seem like you're forever learning, living an autodidactic existence, following whatever you want to learn about. Tell us about how that curiosity developed in a very unusual, unique life journey.
LANIER
I'm a little suspicious of the doctrine of the special person, where one person is really creative compared to another. That I'm unusual, I think, is probably true. I'm a little hesitant to buy into the claim that I'm more whatever than somebody else. This is a thing that we tend to do. We tend to say, "Oh, this person's exceptionally brilliant," and "this one's exceptionally creative." I mean, maybe, but on the other hand, I think we sometimes miss those things in people that we treat as being more ordinary.
So, I'm just a little suspicious of it. I want to hem and haw a bit on that level. The other thing is that I don't know how much insight we really have into ourselves. I think in my case, I had parents who were somewhat similar, but then I lost my mom very early on in a way that was the trauma of my life and continues to be.
It does come out of trauma. I think what knocks us off course is often not something really positive and pleasant. It's possible that there's this sort of spirit in which a question like that can be asked, which is like, "Well, why can't you give us insight that will help everybody be like you?" My initial thought is, "God, I wouldn't wish that on people." A lot of eccentricity is probably a response to trauma in many cases. We have enough trauma in the world as it is.
THE CREATIVE PROCESS
So, on the one hand, you have this living ephemeral art, and then you have one foot in technology. For people who don't know, Jaron has this amazing collection of really ancient instruments. I don't know how many instruments you own, but these ancient instruments open conversations with different histories, cultures, and countries. There must be so much mystery in music for you. What do you get from it? You also improvise, which is another skill that a lot of people would not know how to do live on stage.
LANIER
I do have this kind of obsessive thing about finding and learning new obscure musical instruments. In fact, one of my problems in life is I've kind of run out. I have, at this point, possibly a comprehensive tour of the instruments—or close to it. So it's getting harder and harder to find new adventures.
At any rate, it started when my mom taught me music, and when she died, I somehow became obsessed with learning new instruments because I remember that experience of her teaching me piano. Of course, I still play the piano. I also just crave that experience of learning new instruments, which is sort of a connection to her, I suppose. That's probably the best explanation.
But there's a lot more. I always think of instruments as some of the most optimistic signals for human nature that we find in history. There's a cliché that it's the weapons of war that drive technology, and that's true in many cases. But it's also true that musical instruments drive technology. The musical bow logically had to precede the bow and arrow, and the casting of bells preceded the casting of cannons and guns. The first product sold by a Silicon Valley company was Hewlett-Packard's audio oscillator, which Walt Disney bought to make the sound for Fantasia. Those are some examples I like, but the point is that we find in musical instruments this incredible drive for technological innovation.
THE CREATIVE PROCESS
Yeah, that's what I feel like we can learn from it. Because obviously, it's naturally harmonizing. I don't know what came first, but I feel like they say the heartbeat came first. We have this sonic world, a sonic landscape, and then we put sense onto it—but that was later, I think. You've also said things that have been provocative for people, or eye-opening for them, as in Who Owns the Future?, which is an older book. And you wrote a piece not too long ago in The New Yorker, "There Is No A.I."
A lot of people now subscribe to the idea that it has consciousness: Isn't this amazing? All this stuff that's coming out of it. And you said, "No, it's theft"—just to reorient our thinking.
LANIER
AI is obviously the dominant topic in tech lately, and I think occasionally there's AI that's nonsense, and occasionally there's AI that's great. I love finding new proteins for medicine and so on. There’s a lot of really wonderful stuff in AI. What I meant when I said there is no AI is that I don't think we serve ourselves well when we put our own technology up as if it were a new God that we created. I think we're really getting a little too full of ourselves to think that.
When we do that, I think we confuse ourselves too easily. This goes back to Alan Turing, the main founder of computer science, who had this idea of the Turing test. In the test, you can't tell whether the computer has gotten more human-like or the human has gotten more computer-like. People are very prone to becoming more computer-like. When we're on social media, we let ourselves be guided by the algorithms, so we start to become dumb in the way the algorithms want us to. You see that all the time. It's really degraded our psychologies and our society.
THE CREATIVE PROCESS
I just think of it as an advanced search that also makes errors—it's a kind of authorized theft or plagiarism machine. For myself, I have to, like any artist, use it ethically because we know it's drawing from all these other people whose value has been sidestepped. Their knowledge has just fed it. I don't put stuff into it that's my original creativity. I just don't. Well, I'm sure everyone knows that everyone has been scraped, but—
LANIER
Well, whether it's a plagiarism machine or not depends on how it's used. I mean, it's totally plausible to use it in a way that is not a plagiarism machine by rewarding, celebrating, and even paying the people who provided the most indispensable training data to it. Plagiarism is not part of the AI algorithm; it's part of the societal framing of the AI algorithm. It's very important to understand that and to understand how that could change without reducing any benefits that there might be. I think Wikipedia set us on a bad path, and it even goes further back: even search was trying to simulate an AI—Yahoo was yet another hallowed oracle.
The idea was that there'd be this oracle, and by erasing people you create this illusion of this new entity. Wikipedia continued that illusion with the emphasis on pseudonyms. At least pseudonyms are something, so there's a bit of continuity, but I would have preferred to see a tendency towards true names in Wikipedia—not a requirement. There are reasons for some people not to do it, but I think this default to pseudonyms did tend to create the illusion of the one true rendering of truth, which is not necessarily great. With AI, we've pushed that even further by just erasing people, but we don't have to erase people to get whatever benefits we might seek from the types of large models that we're building now.
THE CREATIVE PROCESS
You've written about that in Who Owns the Future? You're an advocate for data unions and other ways to compensate not just artists, but all those people who have been scraped.
LANIER
I should point out for those who are wondering, "What's he doing at Microsoft?" I think this is a win-win idea. When we elevate society, we elevate all the parts of society, including the tech companies. Those who think that there's this inherent opposition between tech companies and people who provide data out there are making a logical error.
THE CREATIVE PROCESS
Thank you for opening my eyes to that. I'm glad they're getting their act together with the EU AI Act. I don't know how effective that is, but I was reading that Mark Zuckerberg is predicting that smartphones' reign is coming to an end, and he's forecasting that smart glasses will take over as our main connection to the digital world. This seems similar to VR, but this is augmented reality. What are your views on that?
LANIER
The terminology gets a bit confusing. Back in the old days, meaning the late '70s, virtual reality was a social version of a virtual world. The term "virtual world" was coined by an art theorist named Susanne Langer in the '50s. Ivan Sutherland, the father of computer graphics, adopted it. Virtual reality was supposed to be the social version of that, and mixed reality is what we kind of call augmented these days. The terms are all over the place, and everybody wants to coin a new term, I've noticed.
As far as wearing goggles all the time, I don't know. I've always thought it was dumb. From when I was young, I thought that the most special moment with virtual reality or anything similar was when you take them off because then you see the world with fresh eyes. That moment of coming out is what's so magical. It's like seeing a really good movie, and there's that moment when you leave the theater and have an altered perception for a moment. That actually is the best part of the movie, isn't it?
I've also thought that this notion of just being in VR all the time would make it boring and ordinary and miss the point. It would ruin it. I think it's just a poorly informed idea. It's not based on a lot of experience with it. Obviously, if you have a company that's selling headsets, you'd want to promote that idea because then you sell more headsets. I mean, sure. Everyone wants the next iPhone; everybody would like to sell the ubiquitous device. Look at what that did for Apple, right? It was huge.
My own sense is that the usual reason people give for why it might not work is that people don't like how they look with glasses on. I don't think that's it exactly because that's very subjective, and people's subjective sense of what's attractive can shift from generation to generation. But I think this idea of making something special into something ordinary is what will turn people off.
I don't know, there's the occasional chocoholic that's just nibbling on chocolate all day long, but it's much more common for them to have punctuated equilibrium in their chocolate so they enjoy it when it comes up or something. I think people will naturally gravitate to want virtuality to be special instead of ordinary.
THE CREATIVE PROCESS
I've been really interested throughout this interview in your talking about compensating the artists and people who provide all the data to tech companies, especially those on YouTube who create better content. But there's also the risk of people creating content just for the sake of content, and the overall quality getting worse.
As AI-generated content becomes more prevalent online, particularly with shock content or poorly executed, lazy content, is there a way that you think the system could work more towards fairly compensating the people who are actually working on the content? Can we find a way to filter out the content that isn't being received well or is created simply for shock value?
LANIER
Right now, most AI services do not trace the human origins of the data they rely on to generate what you see or read. Could it be done technically? I think so. I'm certainly working on that. There is a small world of researchers studying that question and trying to figure it out. We know a fair amount about it.
The one thing I think I should say at this juncture is that there are motivations to do it beyond a sense of societal quality, promoting human values, and fairness. There's also a sort of short-term motivation connected to safety and quality. I want to explain that because I think it's often underemphasized.
Right now, there are techniques like indirect prompt injection, which is a famous one, where you can trick an AI system into giving you content that we prefer you not to access. For instance, can you take a picture of a kitchen and say, "Tell me how I can make a bomb from the available things you see in this?" That's a big no-no, and that's the kind of thing that responsible AI systems flag. I don't know how many of them are responsible in that way, and I think there are more and more that maybe are not.
If somebody is very clever about how they ask for that, it might not be obvious in the prompt or in the output that that's what's going on. For instance, they might ask for a cake recipe from available items but ask it to be done by a character who's a bomb maker in a movie, for instance. They might find a sneaky way to get there, and the output might come out without making it obvious that it's about a bomb.
How do you fix that? In security, we have this idea called multi-factor authentication. When you try to log onto a site with financial information, they'll also send a code to your phone, for instance. The reason is that anything you can do directly on that one side, like a CAPTCHA, a bad actor might intercept. But as soon as there are multiple channels, it becomes harder for them.
In the same way with AI, a lot of the ways that we try to make AI responsible are by adding a twist to the AI itself, where there's some kind of guardrail. If you have a whole other factor, it becomes that much harder for weird, malicious, or accidentally horrible things to happen. Going back to the human sources is a separate factor. If you say, "What were the original sources relied upon to generate this output?" If there's a bomb document in there, you can't hide that with indirect prompt injection.
I think it's important for us to understand that this idea of being human-centered makes the whole thing more reality-centered. It makes it all safer and better—a way of improving quality that's not just about lofty, utopian societal goals. I've been trying to emphasize that side of things more and more because I think it's easier to make progress on that basis before we try to reinvent all of digital society.
THE CREATIVE PROCESS
I know that you've had many great mentors and collaborators over the years. As you think about the future and the importance of the arts and the kind of world we're leaving for the next generation, what are your thoughts about the beauty and wonder of the natural world? What would you like young people to know, preserve, and remember?
LANIER
I think young people are overwhelmed by the attention battles of unimportant and very transient things on social media now. This wasn't entirely untrue for other recent generations, but it's just true beyond a certain threshold now, and that seems important to me. They tend to know all about pretty unimportant, uninteresting social media personalities who don't matter but who get into conflicts that generate a lot of attention. That sort of thing will dominate them to the degree that they tend to be politically engaged in a package-deal, totality kind of way.
If you're left, you buy every left idea at once. If you're right, you buy every right idea at once. I wish I could snap them out of that and get them to be a little more nuanced and say, "Well, maybe I agree with two-thirds of the left ideas, but there's one-third I don't," or vice versa.
In terms of knowledge of history, I'm a little concerned that the traditional teaching of history might have become too sclerotic or static. On the other hand, what a lot of kids know is based on whatever someone was saying at the time. Both the left and the right in the United States have emphases in their own tellings of history that are kind of weirdly askew and extreme, which makes it very hard for a young person to get a balanced sense of how we got here.
We have a lot of work to do. My generation dove into the internet with a sense of urgency and self-certainty, and it will take generations to fix what we've created.