Andrew Huberman: [MUSIC PLAYING] Welcome to the Huberman Lab podcast, where we discuss science and
science-based tools for everyday life. I'm Andrew Huberman, and I'm a professor
of neurobiology and ophthalmology at Stanford School of Medicine. Today, my guest is Marc Andreessen. Marc Andreessen is a software engineer
and an investor in technology companies. He co-founded and developed
Mosaic, which was one of the first widely used web browsers. He also co-founded and developed
Netscape, which was one of the earliest widely used web browsers. And he co-founded and is a general
partner at Andreessen Horowitz, one of the most successful Silicon
Valley venture capital firms. All of that is to say that Marc
Andreessen is one of the most successful innovators and investors ever. I was extremely excited to record this
episode with Marc for several reasons. First of all, he himself
is an incredible innovator. Second of all, he has an uncanny ability
to spot the innovators of the future. And third, Marc has shown over and
over again the ability to understand how technologies not yet even
developed are going to impact the way that humans interact at large. Our conversation starts off by discussing
what makes for an exceptional innovator, as well as what sorts of environmental
conditions make for exceptional innovation and creativity more generally. In that context, we talk about risk
taking, not just in terms of risk taking in one's profession, but about how some
people, not all, but how some people who are risk takers and innovators in the
context of their work also seem to take a lot of risks in their personal life and
some of the consequences that can bring. Then we discuss some of the most
transformative technologies that are now emerging, such as novel approaches
to developing clean energy, as well as AI or artificial intelligence. With respect to AI, Marc shares
his views as to why AI is likely to greatly improve human experience,
and we discuss the multiple roles that AI is very likely to have in
all of our lives in the near future. Marc explains how not too long from now,
all of us are very likely to have AI assistants, for instance, assistants that
give us highly informed health advice, highly informed psychological advice. Indeed, it is very likely that all of us
will soon have AI assistants that govern most, if not all, of our daily decisions. And Marc explains how, if done
correctly, this can be a tremendously positive addition to our lives. In doing so, Marc provides a stark
counterargument to those who argue that AI is going to diminish human experience. So if you're hearing about and/or
concerned about the ways that AI is likely to destroy us today, you are
going to hear about the many different ways that AI technologies now in
development are likely to enhance our human experience at every level. What you'll soon find is that while
today's discussion does center around technology and technology development,
it is really a discussion about human beings and human psychology. So whether you have an interest in
technology development and/or AI, I'm certain that you'll find today's
discussion to be an important and highly lucid view into what will soon
be the future that we all live in. Before we begin, I'd like to emphasize
that this podcast is separate from my teaching and research roles at Stanford. It is, however, part of my desire
and effort to bring zero-cost-to-consumer information about
science and science-related tools to the general public. In keeping with that theme, I'd like to
thank the sponsors of today's podcast. Our first sponsor is LMNT. LMNT is an electrolyte drink that has
everything you need and nothing you don't. That means plenty of the electrolytes,
sodium, magnesium, and potassium in the correct ratios, but no sugar. The electrolytes and hydration are
absolutely key for mental health, physical health, and performance. Even a slight degree of dehydration can
impair our ability to think, our energy levels and our physical performance. LMNT makes it very easy to achieve
proper hydration, and it does so by including the three electrolytes in the
exact ratios they need to be present. I drink LMNT first thing in
the morning when I wake up. I usually mix it with
about 16 to 32oz of water. If I'm exercising, I'll drink one
while I'm exercising, and I tend to drink one after exercising as well. Now, many people are scared off by the
idea of ingesting sodium because obviously we don't want to consume sodium in excess. However, for people that have normal
blood pressure, and especially for people that are consuming very clean
diets, that is consuming not so many processed foods or highly processed
foods, oftentimes we are not getting enough sodium, magnesium and potassium,
and we can suffer as a consequence. And with LMNT, simply by mixing
in water, it tastes delicious. It's very easy to get
that proper hydration. If you'd like to try LMNT, you can
go to drinklmnt, that's L-M-N-T, .com/huberman to claim a free LMNT
sample pack with your purchase. Again, that's drinklmnt.com/huberman. Today's episode is also
brought to us by Eight Sleep. Eight Sleep makes smart mattress
covers with cooling, heating and sleep tracking capacity. I've spoken many times before on this
podcast about the fact that sleep, that is getting a great night's sleep, is
the foundation of all mental health, physical health and performance. When we're sleeping well,
everything goes far better. And when we are not sleeping well
or enough, everything gets far worse at the level of mental health,
physical health and performance. Now, one of the key things to getting a
great night's sleep and waking up feeling refreshed is that you have to control the
temperature of your sleeping environment. And that's because in order to
fall and stay deeply asleep, you need your core body temperature to
drop by about one to three degrees. And in order to wake up feeling
refreshed and energized, you want your core body temperature to increase
by about one to three degrees. With Eight Sleep, it's very easy
to induce that drop in core body temperature by cooling your mattress
early and throughout the night and warming your mattress toward morning. I started sleeping on an Eight Sleep
mattress cover a few years ago, and it has completely transformed the
quality of the sleep that I get. So much so that I actually loathe
traveling because I don't have my Eight Sleep mattress cover when I travel. If you'd like to try Eight Sleep, you can
go to eightsleep.com/huberman and you'll save up to $150 off their Pod 3 Cover. Eight Sleep currently ships
in the USA, Canada, UK, select countries in the EU and Australia. Again, that's eightsleep.com/huberman. And now for my discussion
with Marc Andreessen. Marc, welcome. Marc Andreessen: Hey, thank you. Andrew Huberman: Delighted to
have you here and have so many questions for you about innovation,
AI, your view of the landscape of tech, and humanity in general. I want to start off by talking
about innovation from three different perspectives. There's the inner game, so to speak,
or the psychology of the innovator, or innovators, things like their
propensity for engaging in conflict or not, their propensity for having a
dream or a vision, and in particular, their innovation as it relates to some
psychological trait or expression. So we'll get to that in a moment. The second component that I'm
curious about is the outer landscape around innovators, who they place
themselves with, the sorts of choices that they make and also
the sorts of personal relationships that they might have or not have. And then the last component is this
notion of the larger landscape that they happen to find themselves in. What time in history? What's the geography? Bay Area, New York, Dubai, etc. So to start off, is there a common
trait of innovators that you think is absolutely essential as a seed to
creating things that are really impactful? Marc Andreessen: Yeah. So I'm not a psychologist,
but I've picked up some of the concepts and some of the terms. And so it was a great moment of delight
in my life when I learned about the Big Five personality traits, because I was
like, aha, there's a way to actually describe the answer to this question in
at least reasonably scientific terms. And so I think what you're looking
for, when you're talking about real innovators, like people who actually do
really creative breakthrough work, I think you're talking about a couple of things. So one is very high in what's called
trait openness, which is one of the Big Five, which is basically just
like, flat out open to new ideas. And of course, the nature of trait
openness is trait openness means you're not just open to new ideas
in one category, you're open to many different kinds of new ideas. And so we might talk about the
fact that a lot of innovators also are very creative people in other
aspects of their lives, even outside of their specific creative domain. So that's important. But of course, just being open is not
sufficient, because if you're just open, you could just be curious and
explore and spend your entire life reading and talking to people
and never actually create something. So you also need a couple of other things. You need a high level of
conscientiousness, which is another one of the Big Five. You need somebody who's really willing
to apply themselves, and in our world, typically over a period of many years to
be able to accomplish something great. They typically work very hard. That often gets obscured because
the stories that end up getting told about these people are, it's just like
this kid, and he just had this idea, and it was like a stroke of genius. And it was like a moment in time and
was just like, oh, he was so lucky. And it's like, no, for most of
these people, it's years and years and years of applied effort. And so you need somebody with an
extreme, basically, willingness to defer gratification and really apply themselves
to a specific thing for a long time. And of course, this is why there aren't
very many of these people, there aren't many people who are high in openness and
high in conscientiousness because to a certain extent, they're opposed traits. And so you need somebody
who has both of those. Third is you need somebody
high in disagreeableness, which is the third of the Big Five. So you need somebody who's just basically
ornery, because if they're not ornery, then they'll be talked out of their
ideas by people who will be like, oh, well, because the reaction most people
have to new ideas is, oh, that's dumb. And so somebody who's too agreeable
will be easily dissuaded from pursuing, from pulling the thread anymore. So you need somebody highly disagreeable. Again, the nature of
disagreeableness is they tend to be disagreeable about everything. So they tend to be these very sort of
iconoclastic kind of renegade characters. And then there's just a table
stakes component, which is they just also need to be high IQ. They just need to be really smart
because it's hard to innovate in any category if you can't synthesize
large amounts of information quickly. And so those are four basically
high spikes, very rare traits that basically have to come together. You could probably also say they probably
at some point need to be relatively low on neuroticism, which is another of the
Big Five, because if they're too neurotic, they probably can't handle the stress. Right.
So it's kind of this dial in there. And then, of course, if you're into the
sort of science of the Big Five, basically these are all people who are on the far
outlying kind of point on the normal distribution across all these traits. And then that just gets you to, I
think, the sort of hardest topic of all around this whole concept, which
there are very few of these people. Andrew Huberman: Do you think
they're born with these traits? Marc Andreessen: Yeah,
they're born with the traits. And then, of course, genetics are
not destiny, and so the traits are not deterministic in
the sense that just because they have those personality traits doesn't
mean they're going to deliver great creativity, but they need to have those
properties because otherwise they're just not either going to be able to do the
work or they're not going to enjoy it. Right. I mean, look, a lot of these people
are highly capable, competent people. It's very easy for them to get,
like, high paying jobs in traditional institutions and get lots of traditional
awards and end up with big paychecks. And there's a lot of people at big
institutions that you and I know well, and I deal with many of these where
people get paid a lot of money and they get a lot of respect and they go
for 20 years and it's great and they never create anything new, right? There's a lot of administrators, a
lot of them end up in administrative jobs, and that's fine, that's good. The world needs that also, right? The innovators can't run
everything because the rate of change would be too high. Society, I think, probably
wouldn't be able to handle it. So you need some people who are on the
other side who are going to kind of keep the lights on and keep things running. But there is this decision that people
have to make, which is okay if I have the sort of latent capability
to do this, is this actually what I want to spend my life doing? And do I want to go through the
stress and the pain and the trauma and anxiety and the risk of failure? And so, do I really want to? Once in a while you run into
somebody who's just like, can't do it any other way. They just have to. Andrew Huberman: Who's an example of that? Marc Andreessen: I mean, Elon's the
paramount example of our time, and I bring him up in part because he's
such an obvious example, but in part because he's talked about this in
interviews where he basically says, he's like, I can't turn it off. The ideas come, I have
to pursue them, right? It's why he's like running
five companies at the same time and, like working on a sixth. It's just like he can't turn it off. Look, there's a lot of other people who
probably had the capability to do it, who ended up talking themselves into or
whatever events conspired to put them in a position where they did something else. Obviously, there are people
who try to be creative, who just don't have the capability. And so, there's some Venn diagram
there of determinism through traits, but also choices in life, and then
also, of course, the situation in which they're born, the context within
which they grow up, culture, what their parents expect of them, and so forth. And so to kind of get all the way
through this, you have to thread all these needles kind of at the same time. Andrew Huberman: Do you think there are
folks out there that meet these criteria who are disagreeable, but that can feign
agreeableness, you know, that can...? [BOTH LAUGH] For those just listening,
Marc just raised his right hand. In other words, they can sort of,
phrase that comes to mind maybe because I can relate to it a little bit, they
sneak up through the system, meaning they behave ethically as it relates
to the requirements of the system. They're not breaking laws or breaking
rules, in fact, quite the opposite, they're paying attention to the
rules and following the rules until they get to a place where being
disagreeable feels less threatening to their overall sense of security. Marc Andreessen: Yeah, I mean, look,
the really highly competent people don't have to break laws, right? There was this myth that happened
around the movie The Godfather, and then there was this character, Meyer
Lansky, who's like, ran basically the Mafia 50, 60, 70 years ago. And there was this great line of like,
well, if Meyer Lansky had only applied himself to running General Motors, he
would have been the best CEO of all time. It's like, no, not really, right? The people who are great at
running the big companies, they don't have to be mob bosses. They don't have to break laws. They're smart and sophisticated enough
to be able to work inside the system. They don't need to take the easy out. So, I don't think there's any
implication that they have to break laws. That said, they have
to break norms, right? And specifically, this is probably
the thing that gets missed the most, because the process of innovating,
the process of creating something new, once it works, the stories get
retconned, as they say in comic books. So the stories get adapted to where
it's like it was inevitable all along. Everybody always knew
that this was a good idea. The person has won all these
awards, society embraced them. And invariably, if you were with them when
they were actually doing the work, or if you actually get a couple of drinks into
them and talk about it, it'd be like, no, that's not how it happened at all. They faced a wall of skepticism,
just like a wall of basically social, essentially denial. No, this is not going to work. No, I'm not going to join your lab. No, I'm not going to come
work for your company. No, I'm not going to
buy your product, right? No, I'm not going to meet with you. And so they get just like
tremendous social resistance. They're not getting positive feedback
from their social network the way that more agreeable people need to have, right? And this is why agreeableness
is a problem for innovation. If you're agreeable, you're going
to listen to the people around you. They're going to tell you that new
ideas are stupid, end of story. You're not going to proceed. And so I would put it more on like,
they need to be able to deal with, they need to be able to deal with social
discomfort to the level of ostracism, or at some point they're going to get
shaken out and they're just going to quit. Andrew Huberman: Do you think that
people that meet these criteria do best by banding with others
that meet these criteria early? Or is it important that they form this
deep sense of self, like the ability to cry oneself to sleep at night or
lie in the fetal position, worrying that things aren't going to work
out and then still get up the next morning and get right back out there. Marc Andreessen: Right. So, Sean Parker has the best
line, by the way, on this. He says being an entrepreneur or being
a creator is like getting punched in the face over and over again. He said, eventually you start to
like the taste of your own blood. And I love that line because it makes
everybody massively uncomfortable, but it gives you a sense of how
basically painful the process is. If you talk to any entrepreneur who's
been through it about that, they're like, oh, yeah, that's exactly what it's like. So, there is a big
individual component to it. But look, it can be very lonely, and
especially very hard, I think, to do this if nobody around you is trying
to do anything even remotely similar. And if you're getting just
universally negative responses, like very few people, I think very
few people have the ego strength to be able to survive that for years. So I do think there's a huge advantage,
and this is why you do see clusters. There's a huge advantage to clustering. Throughout history, you've
had this clustering effect. You had clustering of the great
artists and sculptors, you had the clustering of the philosophers of Greece. You had the clustering of
tech people in Silicon Valley. You have the clustering of
you know, arts, movie, TV people in Los Angeles, and so forth. And so, you know, there's
always a scene, right? There's always, like a nexus
and a place where people come together for these kinds of things. So, generally speaking, if somebody
wants to work in tech, innovate in tech, they're going to be much better off being
around a lot of people who are trying to do that kind of thing than they are in
a place where nobody else is doing it. Having said that, the clustering can
have downsides, it can have side effects. And you put any group of people
together, and you do start to get groupthink, even among people who
are individually very disagreeable. And so these same clusters where
you get these very idiosyncratic people, they do have fads and
trends just like every place else. And so they get wrapped up
in their own social dynamics. The good news is the social dynamic in
those places is usually very forward looking, and so it's usually like, I don't
know, it's like a herd of iconoclasts looking for the next big thing. So iconoclasts, looking
for the next big thing. That's good. The herd part. That's what you've got to be careful of. So even when you're in one of
these environments, you have to be careful that you're not getting
sucked into the groupthink too much. Andrew Huberman: When you say groupthink,
do you mean excessive friction? Do you do pressure testing each
other's ideas to the point where things just don't move forward? Or are you talking about groupthink,
where people start to form a consensus? Or the self belief that, gosh, we are
so strong because we are so different? Can we better define groupthink? Marc Andreessen: It's actually less
either one of those; both of those things happen, and those are good. The part of groupthink I'm talking about
is just like, we all basically zero in, we just end up zeroing in on the same ideas. Right. In Hollywood, there's this classic thing. There are years where all of a sudden
there's, like, a lot of volcano movies. It's like, why are there
all these volcano movies? And it's just like, there was just
something in the gestalt, right? There was just something in the air. Look, Silicon Valley has this. There are moments in time
where you'll have these. It's like the old thing. What's the difference
between a fad and a trend? Fad is the trend that doesn't last. Right. And so Silicon Valley is subject to both
fads and trends, just like any place. In other words, you take smart,
disagreeable people, you cluster them together, they will act like a herd. They will end up thinking the same
things unless they try very hard not to. Andrew Huberman: You've talked about these
personality traits of great innovators before, and we're talking about them now. You invest in innovators, you try
and identify them, and you are one. So you can recognize these traits here. I'm making the presumption
that you have these traits. Indeed you do. We'll just get that out of the way. Have you observed people trying to
feign these traits, and are there any specific questions or behaviors that are
a giveaway that they're pretending to be the young Steve Jobs or that they're
pretending to be the young Henry Ford? Pick your list of other names that qualify
as authentic, legitimate innovators. We won't name names of people
who have tried to disguise themselves as true innovators. But what are some of the litmus tests? And I realize here that we don't
want you to give these away to the point where they lose their potency. But if you could share a few of those. Marc Andreessen: Good, we're
actually a pretty open book on this. First of all, yes, so there are people who
definitely try to come in and basically present as being something that they're
not, and they've read all the books. They will have listened to this interview. They study everything and they
construct a facade, and they come in and present as something they're not. I would say the amount of that varies
exactly correlated to the NASDAQ. And so when stock prices are super
low, you actually get the opposite. When stock prices are super
low, people get too demoralized. And people who should be doing
it basically give up because they just think that the industry is
over, the trend is over, whatever. It's all hopeless. And so you get this flushing thing. So nobody ever shows up at a stock
market low and says, like, I'm the new next big thing, without really
wanting to do it, because there are higher-status places to go. The kinds of people who do the
thing that you're talking about, they're fundamentally oriented toward social status. They're trying to get the social
status without actually the substance. And there are always other places
to go to get social status. So after 2000, the joke was,
when I got to Silicon Valley in '93, '94, the Valley was dead. We can talk about that. By '98, it was roaring, and you had
a lot of these people showing up, who were, you basically had a lot of people
showing up with these kind of stories. 2000, the market crashed. By 2001, the joke was that there
were these terms, B to C and B to B. And in 1998, B to C meant
business to consumer and B to B meant business to business, which
is two different kinds of business models for Internet companies. By 2001, B to B meant back to banking
and B to C meant back to consulting, which is, the high-status people, the
people oriented to status, who showed up to be in tech were like, yeah, screw it. This is over. Stick a fork in it. I'm going to go back to Goldman
Sachs or go back to McKinsey, where I can be high status. And so you get this flushing kind of
effect that happens in a downturn. That said, in a big upswing, yeah, you
get a lot of people showing up with a lot of kind of, let's say, public persona
without the substance to back it up. So I can actually
say exactly how we test for this, because the test exactly addresses the issue
in a way that is impossible to fake. And it's actually the same way homicide
detectives try to find out if you've actually, like, if you're innocent
or whether you've killed somebody. It's the same tactic, which is, you ask
increasingly detailed questions, right? And so the way the homicide cop does
this is, what were you doing last night? Oh, I was at a movie. Which movie? Which theater? Okay, which seat did you sit in? Okay, what was the end of the movie? And you ask increasingly detailed
questions. At some point, people have trouble
making things up, and it just fuzzes into kind of obvious bullshit. And basically fake founders
basically have the same problem. They're able to relay a conceptual
theory of what they're doing that they've kind of engineered, but as they get
into the details, it just fuzzes out. Whereas the true people that you want
to back that can do it, basically what you find is they've spent five or ten
or 20 years obsessing on the details of whatever it is they're about to do. And they're so deep in the
details that they know so much more about it than you ever will. And in fact, the best possible
reaction is when they get mad, which is also what the homicide cops say. What you actually want is you want the
emotional response of like, I can't believe that you're asking me questions
this detailed and specific and picky and they kind of figure out what
you're doing and then they get upset. That's good, that's perfect, right? But then they have to have proven
themselves in the sense of, they have to be able to answer
the questions in great detail. Andrew Huberman: Do you think that people
that are able to answer those questions in great detail have actually taken the
time to systematically think through the if-ands of all the possible implications
of what they're going to do and they have a specific vision in mind of how
things need to turn out or will turn out? Or do you think that they have
a vision and it's a no matter what, it will work out because the
world will sort of bend around it? I mean, in other words, do you think
that they place their vision in context or they simply have a vision
and they have that tunnel vision of that thing and that's going to be it? Let's use you for an
example with Netscape. That's how I first came to know your name. When you were conceiving Netscape,
did you think, okay, there's this search engine and this browser and
it's going to be this thing that looks this way and works this way and
feels this way, did you think that? And also think about that there was
going to be a gallery of other search engines and it would fit into that
landscape of other search engines? Or were you just projecting your
vision of this thing as this unique and special brainchild? Marc Andreessen: Let me give the
general answer, and then we can talk about the specific example. So the general answer is this: entrepreneurship, creativity,
innovation is what economists call decision making under uncertainty. Both parts of that are
important. Decision making: you're going to make a ton
of decisions because you have to decide what to do, what not to do. And then uncertainty, which is like,
the world's a complicated place. And in mathematical terms, the
world is a complex adaptive system with feedback loops. And Isaac Asimov wrote in his
novels, he wrote about this field called psychohistory, which is
the idea that there's like a supercomputer that can predict the
future of human affairs, right? And it's like, we don't have that. [LAUGHS] Not yet. Andrew Huberman: [LAUGHS] Not yet. We'll get to that later. Marc Andreessen: We certainly
don't have that yet. And so you're just dealing, you
know, military commanders call this the fog of war, right? You're just dealing with a
situation where the number of variables are just off the charts. It's all these other people who are
inherently unpredictable, making all these decisions in different directions. And then the whole system is
combinatorial, which is these people are colliding with each
other, influencing their decisions. And so, I mean, look, the most
straightforward kind of way to think about this is, it's amazing. Like, anybody who believes in
economic central planning, it always blows my mind because it's just
like, try opening a restaurant. Try just opening a restaurant
on the corner down here. And like 50/50 odds, the
restaurant is going to work. And all you have to do to run a
restaurant is have a thing and serve food. And it's like most
restaurants fail, right? People who run restaurants
are pretty smart. They usually think about these things
very hard, and they all want to succeed, and it's hard to do that. And so to start a tech company or to
start an artistic movement or to fight a war, you're just going into this,
basically conceptual battleground or in military terms, real battleground,
where there's just like incredible levels of complexity, branching future paths,
and so there's nothing predictable. And so what we look for is basically
the really good innovators. They've got a drive to basically be able
to cope with that and deal with that. And they basically do that in two steps. So one is they try to pre-plan as
much as they possibly can, and we call that the process of navigating
the idea maze. And so the idea maze basically is, I've
got this general idea, and it might be the Internet is going to work or search
or whatever, and then it's like, okay, in their head, they have thought through of
like, okay, if I do it this way, that way, this third way, here's what will happen. Then I have to do that, then I
have to do this, then I have to bring in somebody to do that. Here's the technical
challenge I'm going to hit. And they got in their heads as
best anybody could, they've got as complete a sort of a map of possible
futures as they could possibly have. And this is where I say, when you ask them
increasingly detailed questions, that's what you're trying to kind of get them to
kind of chart out, is, okay, how far ahead have you thought, and how much are you
anticipating all of the different twists and turns that this is going to take? Okay, so then they start on day
one, and then, of course, what happens is now they're in it, now
they're in the fog of war, right? They're in future uncertainty. And now that idea maze is maybe not
helpful practically, but now they're going to be basically constructing
it on the fly, day by day, as they learn and discover new things and
as the world changes around them. And of course, it's a feedback loop,
because if their thing starts to work, it's going to change the world. And then the fact the world
is changing is going to cause their plan to change as well. And so, yeah, the great ones,
basically, the great ones course correct every single day. They take stock of what they've learned. They modify the plan. The great ones tend to think
in terms of hypotheses, right? Like a scientific sort of mentality,
which is they tend to think, okay, I'm going to try this. I'm going to go into the world, I'm going
to announce that I'm doing this for sure. I'm going to say, this is my plan. I'm going to tell all my employees
that, and I'm going to tell all my investors that, and I'm going to put
a stake in there, and it's my plan, and then I'm going to try it, and even
though I sound like I have complete certainty, I know that I need to test
to find out whether it's going to work. And if it's not, then I have to go
back to all those same people and have to say, well, actually, we're
not going left, we're going right. And they have to run that loop thousands
of times to get through the other side. And this led to the creation of this great
term pivot, which has been very helpful in our industry because the word, when
I was young, the word we used was fuck up, and pivot sounds like so much better,
sounds like so much more professional. But, yeah, you make mistakes. It's just too complicated to understand. You course correct,
you adjust, you evolve. Often these things, at least in business,
the businesses that end up working really well tend to be different than
the original plan, but that's part of the process of a really smart founder
basically working their way through reality as they're executing their plan. Andrew Huberman: The way you're
describing this has parallels to a lot of models in biology and the
practice of science: random walks that aren't truly random,
pseudo-random walks in biology, etc. But one thing that is becoming
clear from the way you're describing this is that I could imagine
a great risk to early success. So, for instance, somebody develops
a product, people are excited by it, they start to implement that product,
but then the landscape changes, and they don't learn how to pivot, to
use the less profane version of it. They don't learn how to do that. In other words, and I think of everything
these days, or most everything, in terms of reward schedules and dopamine
reward schedules, because that is the universal currency of reward. And so when you talk about the Sean
Parker quote of learning to enjoy the taste of one's own blood, that
is very different than learning to enjoy the taste of success, right? It's about internalizing success
as a process of being self determined and less agreeable, etc. In other words, building up of those five
traits becomes the source of dopamine, perhaps in a way that's highly adaptive. So on the outside, we just see the
product, the end product, the iPhone, the MacBook, the Netscape, etc. But I have to presume, and I'm not
a psychologist, but I have done neurophysiology and I've studied the
dopamine system enough to know that what's being rewarded in the context
of what you're describing sounds to be a reinforcement of those five
traits, rather than, oh, it's going to be this particular product, or the
company is going to look this way, or the logo is going to be this or that. That all seems peripheral
to what's really going on, that great innovators are really in the process
of establishing neural circuitry that is all about reinforcing
the me and the process of being. Marc Andreessen: So this is like
extrinsic versus intrinsic motivation. So, the Steve Jobs kind of
Zen version of this, right? Or the sort of hippie version of
this was the journey is the reward. He always told his employees that. It's like, look, everybody thinks in
terms of these big public markers, like the stock price or the IPO
or the product launch or whatever. He's like, no, it's actually
the process itself is the point. Right, to your point, if you have that
mentality, then that's an intrinsic motivation, not an extrinsic motivation. And so that's the kind of
intrinsic motivation that can keep you going for a long time. Another way to think about it is
competing against yourself, right? It's like, can I get better at doing this? And can I prove to myself
that I can get better? There's also a big social component
to this, and this is one of the reasons why Silicon Valley punches
so far above its weight as a place. There's a psychological component
which also goes to the comparison set. So a phenomenon that we've observed
over time is the leading tech company in any city will aspire to be as large
as the previous leading tech company in that city, but often not larger, right? Because they have a model of success. And as long as they beat that level
of success, they've kind of checked the box like they've made it. But then, in contrast, you're in
Silicon Valley, and you look around and it's just like Facebook and Cisco
and Oracle and Hewlett Packard and-- Andrew Huberman: --Gladiators-- Marc Andreessen: --Yeah. And you're just, like,
looking at these giants. Many of them are still, Mark Zuckerberg,
still going to work every day. And so these people are, like,
the role models are, like, alive. They're, like, right there, and it's so
clear how much better they are and how much bigger their accomplishments are. And so what we find is young
founders in that environment have much greater aspirations. Because, again, at that point, maybe
it's the social status, maybe there's an extrinsic component to that, or
maybe it helps calibrate that internal system to basically say, actually, no,
the opportunity here is not to build what you may call a local maximum
form of success, but let's build to a global maximum form of success, which
is something as big as we possibly can. Ultimately, the great ones are
probably driven more internally than externally when it comes down to it. And that is where you get this phenomenon
where you get people who are extremely successful and extremely wealthy
who very easily could punch out and move to Fiji and just call it, and
they're still working 16 hour days. Obviously something explains that that
has nothing to do with external rewards, and I think it's an internal thing. Andrew Huberman: As many of you
know, I've been taking AG1 daily since 2012, so I'm delighted that
they're sponsoring the podcast. AG1 is a vitamin mineral probiotic
drink that's designed to meet all of your foundational nutrition needs. Now, of course, I try to get enough
servings of vitamins and minerals through whole food sources that include
vegetables and fruits every day. But oftentimes I simply
can't get enough servings. But with AG1, I'm sure to get
enough vitamins and minerals and the probiotics that I need. And it also contains adaptogens
to help buffer stress. Simply put, I always feel
better when I take AG1. I have more focus and
energy, and I sleep better. And it also happens to taste great. For all these reasons, whenever
I'm asked if you could take just one supplement, what would it be? I answer AG1. If you'd like to try AG1,
go to drinkag1.com/huberman to claim a special offer. They'll give you five free travel packs
plus a year's supply of vitamin D3 and K2. Again, that's drinkag1.com/huberman. I've heard you talk a lot about the
inner landscape, the inner psychology of these folks, and I appreciate that. We're going even deeper into that today. And we will talk about the landscape
around whether or not Silicon Valley or New York, whether or not there
are specific cities that are ideal for certain types of pursuits. I think there was an article written by
Paul Graham some years ago, about the conversations that you overhear in a city
will tell you everything you need to know about whether or not you belong there
in terms of your professional pursuits. Some of that's changed over time, and
now we should probably add Austin to the mix because it was written some time ago. In any event, I want to return to
that, but I want to focus on an aspect of this intrinsic versus extrinsic
motivators in terms of something that's a bit more cryptic, which
is one's personal relationships. If I think about the catalog of innovators
in Silicon Valley, some of them, like Steve Jobs, had complicated personal
lives, romantic personal lives early on, and it sounds like he worked it out. I don't know. I wasn't their couple's therapist. But when he died, he was in a
marriage that for all the world seemed like a happy marriage. You also have examples of innovators
who have had many partners, many children with other partners. Elon comes to mind. I don't think I'm disclosing
anything that isn't already obvious. Those could have been happy
relationships and just had many of them. But the reason I'm asking this is you
can imagine that for the innovator, the person with these traits, who's
trying to build up this thing, whatever it is, that having someone, or several
people in some cases, who just truly believe in you when the rest of the
world may not believe in you yet or at all, could be immensely powerful. And we have examples from
cults that embody this. We have examples from politics. We have examples from tech
innovation and science. And I've always been fascinated by
this because I feel like it's the more cryptic and yet very potent form of
allowing someone to build themselves up. It's a combination of inner
psychology and extrinsic motivation. Because obviously, if that person
were to die or leave them or cheat on them or pair up with some other
innovator, which we've seen several times recently and in the past, it
can be devastating to that person. But what are your thoughts on the
role of personal, and in particular, romantic relationship as it relates
to people having an idea and their feeling that they can really bring
that idea to fruition in the world? Marc Andreessen: So it's a real mixed bag. You have lots of examples
in all directions, and I think it's something like the following. So first, we talked about the
personality traits of these people. They tend to be highly disagreeable. Andrew Huberman: Doesn't foster
a good romantic relationship. Marc Andreessen: Highly
disagreeable people can be difficult to be in a relationship with. [LAUGHS] Andrew Huberman: [LAUGHS] I may have
heard of that once or twice before. A friend may have given me that example. Marc Andreessen: Yeah. Right. And maybe you just need to find the
right person who complements that and is willing to, there's a lot of
relationships where it's always this question about relationships, right? Which is, do you want to have the
same personality growth profile, the same behavioral traits, basically,
as your partner, or do you actually want to have, is it an opposite thing? I'm sure you've seen this. There are relationships where you'll
have somebody who's highly disagreeable, who's paired with somebody who's highly
agreeable, and it actually works out great because one person just gets to be on
their soapbox all the time, and the other person is just like, okay, it's fine. Right?
It's fine. It's good. You put two disagreeable people
together, maybe sparks fly and they have great conversations all the time,
and maybe they come to hate each other. Anyway, so these people, if you're
going to be with one of these people, you're fishing out of
the disagreeable end of the pond. And again, when I say disagreeable, I
don't mean these are normal distributions. I don't mean, like 60%
disagreeable or 80% disagreeable. The people we're talking
about are 99.99% disagreeable. So these are ordinary people. So part of it's that. And then, of course, they have
the other personality traits. They're super conscientious. They're super driven. As a consequence, they
tend to work really hard. They tend to not have a lot of time
for family vacations or other things. Then they don't enjoy them if
they're forced to go on them. And so, again, that kind of
thing can fray at a relationship. So there's a fair amount
in there that's loaded. Like, somebody who's going to
partner with one of these people needs to be signed up for the ride. And that's a hard thing. That's a hard thing to do. Or you need a true partnership of two
of these, which is also hard to do. So I think that's part of it. And then, look, I think a big part of
it is people achieve a certain level of success, and either in their own minds
or publicly, and then they start to be able to get away with things, right? And they start to be able to. It's like, well, okay, now we're rich
and successful and famous, and now I deserve, and this is where you get into... I view this now in the
realm of personal choice. You get into this thing where people
start to think that they deserve things, and so they start to behave in very
bad ways, and then they blow up their personal worlds as a consequence. And maybe they regret it
later, and maybe they don't. Right? It's always a question. I think there's that. And then, I don't know, maybe the other
part of it is that some people just need more emotional support than others. And I don't know that that's a big, I
don't know that that tilts either way. I know some of these people who have
great, loving relationships and seem to draw very much on having this
kind of firm foundation to rely upon. And then I know other people who
are just like, their personal lives are just a continuous train wreck. And it doesn't seem to matter,
like, professionally, they just keep doing what they're doing. And maybe we could talk here
about whatever is the personality trait for risk taking. Some people are so incredibly risk
prone that they need to take risk in all aspects of their lives at all times. And if part of their life gets
stable, they find a way to blow it up. And that's some of these people you
could describe in those terms also. Andrew Huberman: Yeah,
let's talk about that. Because I think risk taking and
sensation seeking is something that fascinates me for my own reasons
and in my observations of others. Does it dovetail with these five traits
in a way that can really serve innovation, in ways that can benefit everybody? The reason I say to benefit everybody
is because there is a view of how we're painting this picture of the
innovator as this really cruel person. But oftentimes, what we're talking
about are innovations that make the world far better for billions of people. Marc Andreessen: Yeah, that's right. And by the way, everything we're
talking about also is not just in tech or science or in business. Everything we're also talking
about is true for the arts. The history of artistic expression. You have people with all
these same kinds of traits. Andrew Huberman: Well, I was thinking
about Picasso and his regular turnover of lovers and partners, and he was very
open about the fact that it was one of the sources of his productivity, creativity. He wasn't shy about that. I suppose if he were alive today,
it might be a little bit different. He might be judged a little differently. Marc Andreessen: Or that was his
story for behaving in a pattern that was very awful for the people
around him, and he didn't care. Andrew Huberman: Right,
maybe they left him? Marc Andreessen: Yeah.
Who knows? Right? Puts and takes to all this, but no. Okay, so I have a theory. So here's a theory. This is one of these, I keep a
list of topics that will get me kicked out of a dinner party
at any given point in time. Andrew Huberman: Do you
read it before you go in? Marc Andreessen: Yeah. On auto recall, so that I
can get out of these things. Here's the thing that can
get me kicked out of a dinner party, especially these days. So think of the kind of person where it's
very clear that they're super high, to your point, this is somebody who's super
high output in whatever domain they're in. They've done things that have
fundamentally changed the world. They've brought new, whether it's
businesses or technologies or works of art, entire schools of creative
expression, in some cases to the world. And then at a certain point, they
blow themselves to smithereens, right? And they do that either through
a massive financial scandal. They do that through a
massive personal breakdown. They do that through some sort
of public expression that causes them a huge amount of problems. They say the wrong thing, maybe not
once, but several hundred times, and blow themselves to smithereens. There's this moral arc that people
kind of want to apply, which it's like the Icarus flying too close to
the sun and he had it coming and he needed to keep his ego under control. And you get kind of this
judgment that applies. So I have a different theory on this. So the term I use to describe these
people, and by the way, a lot of other people who don't actually blow themselves
up but get close to it, which is a whole 'nother set of people, I call
them martyrs to civilizational progress. So look, the only way civilization
gets moved forward is when people like this do something new. Because civilization as a
whole does not do new things. Groups of people do not do new things. These things don't happen automatically. By default nothing changes. The only way civilizational change on any
of these axes ever happens is because one of these people stands up and says, no,
I'm going to do something different than what everybody else has ever done before. So, this is progress, like,
this is actually how it happens. Sometimes they get lionized or awarded. Sometimes they get crucified. Sometimes the crucifixion is literal. Sometimes it's just symbolic. But they are those kinds of people,
and then they become martyrs when they go down in flames. And again, this is where it really
screws with people's moral judgments, because everybody wants to have the sort
of super clear story of like, okay, he did a bad thing and he was punished. And I'm like, no, he was the kind of
person who was going to do great things and also was going to take on a level
of risk and take on a level of sort of extreme behavior such that he was going
to expose himself to flying too close to the sun, wings melting, crashing to the ground. But it's a package deal. The reason you have the Picassos
and the Beethovens and all these people is because they're willing to
take these extreme level of risks. They are that creative and original,
not just in their art or their business, but in everything else that they
do that they will set themselves up to be able to fail psychologically. A psychologist would probably, or
psychiatrist would probably say maybe. To what extent do they actually
have a death wish at some point. Do they want to punish themselves? Do they want to fail? That I don't know. But you see this. They deliberately move themselves too
close to the sun, and you can see it when it's happening, because if they
get too far away from the sun, they deliberately move back towards it. Right. They come right back, and
they want the risk anyway. So martyrs to civilizational progress. This is how progress happens. When these people crash and
burn, the natural inclination is to judge them morally. I tend to think we should basically
say, look, and I don't even know if this means, like, giving them a moral pass
or whatever, but it's like, look, this is how civilization progresses, and we
need to at least understand that there's a self sacrificial aspect to this that
may be tragic and often is tragic, but it is quite literally self sacrificial. Andrew Huberman: Are there any examples
of great innovators who were able to compartmentalize their risk taking to
such a degree that they had what seemed to be a morally impeccable life in every
domain except in their business pursuits? Marc Andreessen: Yeah, that's right. So some people are very
highly controlled like that. Some people are able to very narrowly,
and I don't really want to set myself up as an example on a lot of this, but I
will tell you as an example, I will never use debt in business, number one. Number two, I have the most placid
personal life you can imagine. Number three, I'm the last
person in the world who is ever going to do an extreme sport. I mean, I'm not even going to
go in the sauna or the ice bath. I'm not doing any of this. I'm not heli-skiing. Andrew Huberman: No obligation. Marc Andreessen: I'm not on the Titan. I'm not going down to see the Titanic. Goodness, no, I'm not doing any of this. I'm not doing any of this stuff. I have no interest. I don't play golf. I don't ski. I have no interest in
any of this stuff, right? And I know people like this,
right, who are very high achievers. It's just like, yeah,
they're completely segmented. They're extreme risk takers in business. They're completely buttoned
down on the personal side; they're completely buttoned down financially. They're scrupulous about following every
rule and law you can possibly imagine, but they're still fantastic innovators. And then I know many others who are
just like their life is on fire all the time, in every possible way. And whenever it looks like the fire is
turning into embers, they figure out a way to relight the fire, and they
just really want to live on the edge. And so I think that's
an independent variable. And again, I would apply the same thing. I think the same thing
applies to the arts. Classical music as an example. I think Bach was, as an example,
one of the best musicians of all time, and he had just a completely sedate
personal life, never had any aberrant behavior at all in his personal life. Family man, tons of kids,
apparently pillar of the community. Right. And so if Bach could be Bach and yet
not burn his way through 300 mistresses or whatever, maybe you can, too. Andrew Huberman: So in thinking about
these two different categories of innovators, those that take on tremendous
risk in all domains of their life and those that take on tremendous risk in
a very compartmentalized way, I don't know what the percentages are, but I
have to wonder if in this modern age of the public being far less forgiving,
what I'm referring to is cancel culture. Do you think that we are limiting
the number of innovations in total by just simply frightening or
eliminating an enormous category of innovators because they don't have
the confidence or the means or the strategies in place to regulate? So they're just either bowing out
or they're getting crossed off, they're getting canceled one by one. Marc Andreessen: So do you think
the public is less tolerant than they used to be or more tolerant? Andrew Huberman: Well, the systems
that, and I'm going to be careful here. I think the large institution systems
are not tolerant of what the public tells them they shouldn't be tolerant of. And so if there's enough noise,
there's enough noise in the mob. I think institutions bow out. And here I'm referring not just
to, they essentially say, okay, let the cancellation proceed. Maybe they're the gavel that
comes down, but they're not the lever that got the thing going. And so I'm not just
thinking about universities. I'm also thinking about advertisers. I'm thinking about the big movie
houses that cancel a film that a given actor might be in because they
had something in their personal life that's still getting worked out. I'm thinking about people who
are in a legal process that's not yet resolved, but the public has
decided they're a bad person, etc. Marc Andreessen: My question is, are
we really talking about the public? I agree with your question, and I'm
going to come back to it, but I'm going to examine one part of your
question, which is: is this really the public we're talking about? And I would just say Exhibit A is
who is the current frontrunner for the Republican nomination today? The public, at least on one side of the
political aisle, seems very on board. Number two, like, look, there's a
certain musician who flew too close to the sun, blew himself to smithereens. He's still hitting all time highs
on music streams every month. The public seems fine. I would argue the public is actually
more open to these things than it actually maybe ever has been. And we could talk about
why that's the case. I think it's a differentiation,
and this is what your question was aiming at, but it's a differentiation
between the public and the elites. My view is everything that you just
described is an elite phenomenon. And actually, the public is
very much not on board with it. So what's actually happening is
what's happened is the public and the elites have gapped out. The public is more forgiving of what
previously might have been considered kind of aberant and extreme behavior, right? F. Scott Fitzgerald, "there are no
second acts in American lives" turns out was completely wrong. Turns out there are second
acts, third acts, fourth acts. Apparently you can have an
unlimited number of acts. The public is actually up for it. Yeah. Andrew Huberman: I mean, I think
of somebody like Mike Tyson, right? I feel like his life
exemplifies everything. That's amazing and great and
also terrible about America. Marc Andreessen: If we took Mike Tyson to
dinner tonight at any restaurant anywhere in the United States, what would happen? Andrew Huberman: He would be loved. Marc Andreessen: Oh, he would be
like, the outpouring of enthusiasm and passion and love would be incredible. It would be unbelievable. This is a great example. And again, I'm not even
going to draw a moral. I'm not even going to say I agree
with that or disagree with that. I think we all intuitively know that the
public is just like, 100%, absolutely. He's a legend.
He's a living legend. He's like a cultural touchstone. Absolutely. And you see it when he
shows up in movies, right? The big
breakthrough where I figured this out with respect to him, because I don't really
follow sports, was when he showed up in that first Hangover movie,
and I was in a theater and the audience just goes bananas. They're so excited to see him. Andrew Huberman: He evokes delight. I always say that Mike Tyson is the
only person I'm aware of that can wear a shirt with his own name on it,
and it somehow doesn't seem wrong. In fact, it just kind of
makes you like him more. His ego feels very contoured in a way that
he knows who he is and who he was, and yet there's a humbleness woven in, maybe as a
consequence of all that he's been through. I don't know. But, yeah, people love Mike. Marc Andreessen: Public loves him now. Exactly. Now, if he shows up to lecture at
Harvard, right, I think you're probably going to get a different reaction? [LAUGHS]
Andrew Huberman: I don't know. I don't know! You know, the guy who wrote The Wire
gave a talk at Harvard, and it sounded to me, based on his report of that,
which is very interesting, in fact, that people adore people who are
connected to everybody in that way. I feel like everybody loves Mike. From above his status, from the sides,
from below his status, he occupies this halo of love and adoration. Marc Andreessen: Okay. Andrew Huberman: All right. Marc Andreessen: Yeah. Look, the other side of this is
the elites, and you kind of alluded to this, of the institution. So basically, it's like the people who
are at least nominally in charge or feel like that they should be in charge. Andrew Huberman: I want to
make sure we define elite. So you're not necessarily talking
about people who are wealthy. You're talking about people who
have authority within institutions. Marc Andreessen: So the ultimate
definition of an elite is who can get who fired, right. That's the ultimate test. Who can get who fired, boycotted,
blacklisted, ostracized, prosecuted, jailed,
like when push comes to shove. I think that's always the question,
who can destroy whose career? And of course, you'll notice
that that is heavily asymmetric when these fights play out. Like, it's very clear which side can get
the other side fired and which side can't. And so, yeah, so, look, I think
we live in a period of time where the elites have gotten to be
extreme in a number of dimensions. I think it's characterized by, for
sure, extreme groupthink, extreme sanctimony, extreme moral, I would
say dudgeon, this weird sort of modern puritanism, and then an extreme sort
of morality of punishment and terror against their perceived enemies. But I want to go through that
because I actually think that's a very different phenomenon. I think what's happening at the
elites is very different than what's happening in the population at large. And then, of course, I think there's
a feedback loop in there, which is, I think the population at large
is not on board with that program. Right. I think the elites are aware
that the population is not on board with that program. I think they judge the population
negatively as a consequence, that causes the elites to harden their own positions. That causes them to be even more
alienating to the population. And so they're in sort of an
oppositional negative feedback loop. But again, it's a sort of question,
okay, who can get who fired? And so elites are really good
at getting normal people fired. Ostracized, banned, hit pieces
in the press, like, whatever. For normal people to get elites fired,
they have to really band together, right. And really mount a serious challenge,
which mostly doesn't happen, but might be starting to happen in some cases. Andrew Huberman: Do you think this
power of the elites stemmed from social media sort of going
against its original purpose? I mean, when you think social
media, you think you're giving each and every person their own little
reality TV show, their own voice. And yet we've seen a dramatic uptick
in the number of cancellations and firings related to immoral behavior
based on things that were either done or amplified on social media. It's almost as if the public is
holding the wrong end of the knife. Marc Andreessen: Yeah, so the way I
describe it, I use these two terms, and they're somewhat interchangeable,
but elites and institutions. And then they're somewhat interchangeable
because who runs the institutions? The elites, right? And so it's sort of a
self reinforcing thing. And institutions of all kinds. Institutions, everything from the
government, bureaucracies, companies, nonprofits, foundations, NGOs,
tech companies, on and on and on. Like people who are in charge of big
complexes and that carry a lot of, basically, power and influence and
capability and money as a consequence of their positional authority. So the head of a giant foundation
may never have done anything in their life that would cause somebody to have
a high opinion of them as a person. But they're in charge of this
gigantic multi billion dollar complex and have all this power. And so that's just defined
terms, at least in institutions. So, it's actually interesting. Gallup has been doing polls on the
question of trust in institutions, which is sort of
therefore a proxy for trust in elites, basically since the early 1970s. And they do this across all the categories
of big institutions, basically every one I just talked about, plus a bunch of others. Big business, small business,
banks, newspapers, broadcast television, the military, police. So they've got like 30
categories or something. And basically what you see is almost
all the categories basically started in the early 70s at like 60 or 70% trust. And now almost across the board,
they've just had a complete, basically linear slide down for
50 years, basically my whole life. And they're now bottoming out. Congress and journalists
bottom out at like 10%. The two groups everybody hates
are Congress and journalists. And then it's like a lot of
other big institutions are like, in their 20s, 30s, 40s. Actually, big business
actually scores fairly high. Tech actually scores quite high. The military scores quite high. But basically everything
else has really caved in. This is sort of my fundamental challenge
to everybody who basically says, and you didn't do this, but you'll hear the
simple form of this, which is social media caused the current trouble. And let's call this an example, collapse
in faith in institutions and elites. Let's call that part
of the current trouble. Everybody's like, well,
social media caused that. I was like, well, no, social media is new, right? Social media is effectively new, practically speaking, since 2010, and 2012 is when it really took off. And so, if the trend started in the
early 1970s and has been continuous, then we're dealing with something broader. Martin Gurri wrote, I think, the best book
on this, called The Revolt of the Public, where he goes through this in detail. He does say that social media
had a lot to do with what's happened in the last decade. But he says, yeah, if you go
back, you look further, it was basically two things coinciding. One was just a general change
in the media environment. And in particular, the 1970s is when you
started to, and especially in the 1980s, is when you started to get specifically
talk radio, which was a new outlet. And then you also got cable television. And then, by the way, it's actually interesting that you also had paperback books, which was another one of these outlets. So you had like a fracturing in the media landscape that started in the 50s, and then, of course,
the Internet blew it wide open. Having said that, if the elites and
the institutions were fantastic, you would know it more than ever. Information is more accessible. And so the other thing that he says,
and I agree with, is the public is not being tricked into thinking the
elites and institutions are bad. They're learning that they're bad, and
therefore, the mystery of the Gallup poll is why those numbers aren't all
just zero, which is arguably, in a lot of cases, where they should be. Andrew Huberman: I think one reason that-- Marc Andreessen: --By the
way, he thinks this is bad. So he and I have a different view. So here's where he and I disagree. He thinks this is bad. So he basically says, you can't
replace elites with nothing. You can't replace institutions with
nothing, because what you're just left with is just going to be wreckage. You're going to be left with a completely,
basically atomized, out of control society that has no ability to marshal
any sort of activity in any direction. It's just going to be a
dog eat dog awful world. I have a very different view on
that which we can talk about. Andrew Huberman: Yeah, I'd love
to hear your views on that. I'd like to take a quick break and
acknowledge our sponsor, InsideTracker. InsideTracker is a personalized
nutrition platform that analyzes data from your blood and DNA to help
you better understand your body and help you meet your health goals. I'm a big believer in getting regular
blood work done for the simple reason that many of the factors that impact your
immediate and long term health can only be analyzed from a quality blood test. However, with a lot of blood tests
out there, you get information back about blood lipids, about hormones
and so on, but you don't know what to do with that information. With InsideTracker, they have a
personalized platform that makes it very easy to understand your data, that is,
to understand what those lipids, what those hormone levels, etc., mean, and
behavioral, supplement, nutrition, and other protocols to adjust those numbers to
bring them into the ranges that are ideal for your immediate and long term health. InsideTracker's ultimate plan now includes
measures of both ApoB and of insulin, which are key indicators of cardiovascular
health and energy regulation. If you'd like to try InsideTracker, you
can visit insidetracker.com/huberman to get 20% off any of InsideTracker's plans. Again, that's insidetracker.com/huberman
to get 20% off. The quick question I was going to ask
before we go there is, I think that one reason that I and many other people
sort of reflexively assume that social media caused the demise of our faith in
institutions is, well, first of all, I wasn't aware of this lack of correlation
between the decline in faith in institutions and the rise of social media. But secondarily that we've seen
some movements that have essentially rooted themselves in tweets, in
comments, in posts that get amplified, and those tweets and comments and
posts come from everyday people. In fact, I can't name one person who
initiated a given cancellation or movement because it was the sort of
dogpiling or mob adding-on to some person who was essentially anonymous. So I think that for many of us, we
have, to use neuroscience language, sort of a bottom-up perspective,
oh, someone sees something in their daily life or experiences something in
their daily life, and they tweet about it or they comment about it or they
post about it, and then enough people dogpile on the accused that it picks
up force, and then the elites feel compelled, obligated to cancel somebody. That tends to be the narrative. And so I think the logical
conclusion is, oh, social media allows for this to happen. Whereas normally someone would just
be standing on the corner shouting or calling lawyers that don't have
faith in them, and you've got the Erin Brockovich model that turns into a movie. But that's a rare case of this lone woman
who's got this idea in mind about how a big institution is doing wrong or somebody
is doing wrong in the world and then can leverage the big institution, excuse me. But the way that you describe it is
that the elites are leading this shift. So what is the role of the public in it? Just to give it a concrete example,
if, for instance, no one tweeted or commented on MeToo, or no one tweeted
or commented about some ill behavior of some, I don't know, university
faculty member or business person, would the elite have come down on them? Marc Andreessen: Anyway, what's happening? Based on what I've seen over the years,
there is so much astroturfing right now. There are entire categories of
people who are paid to do this. Some of them we call journalists,
some of them we call activists, some of them we call NGO or nonprofit staff. Some of them we call university
professors, some of them we call grad students, whatever,
they're paid to do this. I don't know if you've ever looked into
the misinformation industrial complex? There's this whole universe of
basically these funded groups that basically do misinformation. And they're constantly mounting
these kinds of attacks. They're constantly trying to gin
up this kind of basically panic to cause somebody to get fired. Andrew Huberman: So
it's not a grassroots-- Marc Andreessen: --No.
It's the opposite of grassroots. No. You can almost always trace these things back. It was a journalist, it was an activist,
it was a public figure of some kind. These are entrepreneurs
in a sort of a weird way. Basically their job, mission, calling, it's all wrapped up together; they're true believers, but
they're also getting paid to do it. And there's a giant funding, I
mean, there's a very large funding complex for this coming from
certain high profile people who put huge amounts of money into this. Andrew Huberman: Is this well known? Marc Andreessen: Yes. Well, it is in my world. So this is what the social media
companies have been on the receiving end of for the last decade. It's basically a political media activism
complex with very deep pockets behind it. And you've got people who basically,
literally have people who sit all day and watch the TV network on the other
side or watch the Twitter feeds on the other side, and they basically wait. It's like every politician, this has
been the case for a long time now. Every politician who goes out and gives
stump speeches, you'll see there's always somebody in the crowd with a camcorder
or now with a phone recording them. And that's somebody from the other
campaign who's paid somebody to just be there and record every
single thing the politician says. So that when a Mitt Romney says,
whatever, the 47% thing, they've got it on tape, and then they clip
it, and they try to make it viral. And again, look, these people
believe what they're doing. I'm not saying it's even dishonest. Like, these people believe
what they're doing. They think they're fighting a holy war. They think they're protecting democracy. They think they're
protecting civilization. They think they're protecting
whatever it is they're protecting. And then they know how to use
the tools, and so they know how to try to gin up the outrage. And then, by the way, sometimes
it works in social cascades. Sometimes it works, sometimes it doesn't. Sometimes they cascade,
sometimes they don't. But if you follow these people on
Twitter, this is what they do every day. They're constantly trying
to, like, light this fire. Andrew Huberman: I assumed that it was
really bottom up, but it sounds like it's sort of middle level, and that
it captures the elites, and then the thing takes on a life of its own. Marc Andreessen: By the way, it also
intersects with the trust and safety groups at the social media firms who are
responsible for figuring out who gets promoted and who gets banned across this. And you'll notice one large social
media company has recently changed hands and has implemented a different
kind of set of trust and safety. And all of a sudden, a different kind of boycott movement has started to work
that wasn't working before that. And another kind of boycott movement
is not working as well anymore. And so, for sure, there's
an intermediation happening. Look, the stuff that's happening in
the world today is being intermediated through social media, because social
media is the defining media of our time. But there are people who know how
to do this and do this for a living. No, I view very much the cancellation
wave, like, this whole thing, it's an elite phenomenon, and when it appears
to be a grassroots thing, it's either grassroots among the elites, which
is possible because there's a fairly large number of people who are signed
up for that particular crusade, or there's also a lot of astroturfing
that's taking place inside that. The question is, okay, at what
point does the population at large get pulled into this? And maybe there are movements,
certain points in time where they do get pulled in, and then maybe
later they get disillusioned. And so then there's some question there. And then there's another question
of like, well, if the population at large is going to decide what these
movements are, are they going to be the same movements that the elites want? And how are the elites going
to react when the population actually fully expresses itself? Like I said, there's a feedback loop
between these where the more extreme the elites get, they tend to push
the population to more extreme views on the other side and vice versa. So it ping pongs back and forth. And so, yeah, this is our world. Andrew Huberman: Yeah,
this explains a lot. Marc Andreessen: I want to make sure to mention that Shellenberger, Matt Taibbi, a bunch of these guys have done a lot of work on this. If you just look into what's called
the misinformation industrial complex, you'll find a network of money and
power that is really quite amazing. Andrew Huberman: I've seen more
and more of Shellenberger's work showing up. Marc Andreessen: Right. And he's just, look,
he's just on this stuff. He, and just, they're literally
just like tracking money. It's very clear how the money flows,
including a remarkable amount of money out of the government, which is, of
course, in theory, very concerning. Andrew Huberman: Very interesting. Marc Andreessen: The government should
not be funding programs that take away people's constitutional rights. And yet somehow that is
what's been happening. Andrew Huberman: Very interesting. I want to make sure that I hear
your ideas about why the decline in confidence in institutions
is not necessarily problematic. Is this going to be a total
destruction, burning down of the forest that will lead to new life? Is that your view? Marc Andreessen: Well,
so this is the thing. And look,
there's a couple of questions in here, which is like, how bad is it really? How bad are they? Right.
And I think they're pretty bad. A lot of them are actually pretty bad. So that's one big question. And then, yeah, look, the other question
is like, okay, if the institution has gone bad or a group of elites have gone bad,
it's this wonderful word, reform, right? Can they be reformed? And everybody always wants to reform
everything, and yet somehow nothing ever quite gets reformed. And so people have been trying to reform housing policy in the Bay Area for decades, and we're not building. We're building fewer
houses than ever before. So somehow reform movements seem
to lead to just more bad stuff. But anyway, yeah. So if you have an existing
institution, can it be reformed? Can it be fixed from the inside? What's happened in universities? There are professors at Stanford
as an example, who very much think that they can fix Stanford. Like, I don't know what you think. It doesn't seem like it's going in
productive directions right now. Andrew Huberman: Well, I mean,
there are many things about Stanford that function extremely well. It's a big institution. It's certainly got its
issues like any other place. They're also my employer. Marc's
giving me some interesting looks. He wants me to get a little more vocal. Marc Andreessen: I didn't
mean to put you on the spot. Yeah. Andrew Huberman: I mean, one of
the things about being a researcher at a big institution like Stanford
is, well, first of all, it meets the criteria that you described. Know, you look to the left, you look
to the right or anywhere above or below you, and you have excellence. Right? I mean, I've got a Nobel Prize
winner below me whose daddy also won a Nobel Prize, and his scientific
offspring is likely to win. I mean, it inspires you to
do bigger things than one ordinarily would, no matter what. So there's that, and that's great. And that persists. There's all the bureaucratic red tape
around trying to get things done, and implementing decisions is very hard,
and there are a lot of reasons for that. And then, of course, there are the
things that many people are aware of. There are public accusations about
people in positions of great leadership, and that's getting played out. And the whole thing becomes kind
of overwhelming and a little bit opaque when you're just trying to
run your lab or live your life. And so I think one of the reasons
for this lack of reform that you're referring to is because there's
no position of reformer, right? So deans are dealing with a lot of issues. Provosts are dealing with a lot of issues. Presidents are dealing with a lot of
issues, and then some, in some cases. And so we don't have a dedicated role
of reformer, someone to go in and say, listen, there's just a lot of
fat on this and we need to trim it or we need to create this or do that. There just isn't a system to do that. And that's, I think in part, because
universities are built on old systems, and it's like the New York subway. It's amazing it still works as
well as it does, and yet it's got a ton of problems also. Marc Andreessen: So, we could debate the university specifically, but the point is, look, number one, you have to figure out if you think institutions are going bad. The population largely does think that. At the very least, the people who run institutions ought to really
think hard about what that means. Andrew Huberman: But people still
strive to go to these places. And I still hear from people
who, for instance, did not go to college, talking about how
a university degree is useless. They'll tell you how proud they are
that their son or daughter is going to Stanford or is going to UCLA
or is going to Urbana Champaign. I mean, it's almost like, to me, that's
always the most shocking contradiction, is like, these institutions don't matter. But then when people want to hold
up a card that says why their kid is great, it's not about how
many pushups they can do or that they started their own business. Most of the time it's they're
going to this university. And I think, well, what's going on here? Marc Andreessen: So do you think the
median voter in the United States can have their kid go to Stanford? Andrew Huberman: No. Marc Andreessen: Do you think the
median voter in the United States could have their kid admitted to
Stanford, even with a perfect SAT? Andrew Huberman: No, no. In this day and age, the competition
is so fierce that it requires more. Marc Andreessen: Yeah. So first of all, again, we're dealing here with a small number
of very elite institutions. People may admire them or not. Most people have no
connectivity to them whatsoever. In the statistics, in the polling,
universities are not doing well. The population at large, yeah,
they may have fantasies about their kid going to Stanford, but the
reality of it is they have a very collapsing view of these institutions. So anyway, this actually goes straight to
the question of alternatives then, right? Which is like, okay, if you believe
that there's collapsing faith in the institutions, if you believe that it
is merited, at least in some ways, if you believe that reform is effectively
impossible, then you are faced... We could debate each of those,
but the population at large seems to believe a lot of that. Then there's a question of
like, okay, can it be replaced? And if so, are you better off
replacing these things basically, while the old things still exist? Or do you actually need to
basically clear the field to be able to have the new thing exist? The universities are a great
case study of this because of how student loans work, right? And the way student loans work is to
be an actual competitive university and compete, you need to have
access to federal student lending. Because if you don't, everybody
has to pay out of pocket. And it's completely out of reach for
anybody other than a certain class of either extremely rich or foreign students. So you need access to a
federal student loan facility. To get access to a federal
student loan facility, you need to be an accredited university. Guess who runs the accreditation council? Andrew Huberman: I don't know. Marc Andreessen: The
existing universities, right? So it's a self-laundering machine. Like they decide who the
new universities are. Guess how many new universities get
accredited each year to be able to... Andrew Huberman: Zero. Marc Andreessen: Zero, right? And so as long as that system is in place,
and as long as they have the government wired the way that they do, and as
long as they control who gets access to federal student loan funding, of course
there's not going to be any competition. Of course there can't be a new institution
that's going to be able to get to scale. It's just not possible. And so if you actually wanted to
create a new system that was better in, you know, I would argue dozens or
hundreds of ways, it could obviously be better if you were starting it today. It probably can't be done as long as the
existing institutions are actually intact. And this is my counter to Martin, which
is like, yeah, look, if we're going to tear down the old, there may be a
period of disruption before we get to the new, but we're never going to get to
the new if we don't tear down the old. Andrew Huberman: When you say counter
to Martin, you're talking about the author of The Revolt of the Public?
Marc Andreessen: Yeah, Martin Gurri. What Martin Gurri says is like, look,
he says basically as follows: the elites deserve contempt, but the only thing
worse than these elites that deserve contempt would be no elites at all. And he basically says on the other
side of the destruction of the elites and the institutions is nihilism. You're basically left with nothing. And by the way, there
is a nihilistic streak. I mean, there's a nihilistic streak
in the culture and the politics today. There are people who basically
would just say, yeah, just tear the whole system down without any
particular plan for what follows. And so I think he makes a good point
and that you want to be careful that you actually have a plan on the other side
that you think is actually achievable. But again, the counterargument
to that is if you're not willing to actually tear down the old,
you're not going to get to the new. Now, what's interesting, of
course, is this is what happens every day in business, right? So the entire way, how do you know
that the capitalist system works? The way that you know is that the old
companies, when they're no longer like the best at what they do, they get torn
down and then they ultimately die and they get replaced by better companies. Andrew Huberman: Yeah, I
haven't seen a Sears in a while. Marc Andreessen: Exactly. And what's so interesting
is we know in capitalism, in a market economy, we know that's the
sign of health, that's the sign of how the system is working properly. And in fact, we get actually
judged by antitrust authorities in the government on that basis. It's like the best defense against
antitrust charges is no, people are coming to kill us and they're
doing a really good job of it. That's how we know we're doing our job. And in fact, in business we are
specifically, it is specifically illegal for companies in the same
industry to get together and plot and conspire and plan and have things
like these accreditation bureaus. If I created the equivalent in my
companies of the kind of accreditation bureau that the universities have, I'd
get sent straight to federal prison for an antitrust violation under the Sherman Act. Straight to prison. People have been sent to prison for that. So in the business world, we
know that you want everything subject to market competition. We know that you want
creative destruction. We know that you want replacement
of the old with superior new. It's just once we get outside of business,
we're like, oh, we don't want any of that. We want basically stagnation and logrolling and basically institutional incestuousness, entanglements
and conflicts of interest as far as the eye can see, and then
we're surprised by the results. Andrew Huberman: So let's play it
out as a bit of a thought experiment. So let's say that one small banding
together of people who want to start a new university where there is free exchange
of open ideas, where unless somebody has egregious behavior, violent behavior,
truly sexually inappropriate behavior against somebody that is committing
a crime, they're allowed to be there. They're allowed to be a student or
a faculty member or administrator. And let's just say this accreditation
bureau allowed student loans for this one particular university. Or let's say that there was an independent
source of funding for that university such that students could just apply there. They didn't need to be part of this
elite, accredited group, which sounds very mafia-like, frankly, not necessarily
violent, but certainly coercive in the way that it walls people out. Let's say that then there were
20 or 30 of those or 40 of those. Do you think that over time, that model
would overtake the existing model? Marc Andreessen: Isn't it
interesting that those don't exist? Remember Sherlock Holmes,
the dog that didn't bark?
Andrew Huberman: It is
interesting that they don't exist. Marc Andreessen: Right.
So there's two possibilities. One is like, nobody wants
that, which I don't believe. And then the other is like, the
system is wired in a way that will just simply not allow it. And you did a hypothetical in
which the system would allow it. And my response to that is, no, of
course the system won't allow that. Andrew Huberman: Or the people that band
together have enough money or get enough resources to say, look, we can afford to
give loans to 10,000 students per year. 10,000 isn't a trivial number when
thinking about the size of a university. And most of them hopefully will graduate
in four years and there'll be a turnover. Do you think that the great future
innovators would tend to orient toward that model more than they currently
do toward the traditional model? What I'm trying to get back to here is
how do you think that the current model thwarts innovation, as well as maybe some
ways that it still supports innovation? Certainly cancellation and the risk of
cancellation, the way that we framed it earlier, is going to discourage risk takers, the category of risk takers that take risk in every domain, that
really like to fly close to the sun and sometimes into the sun or are-- Marc Andreessen: --Doing research that
is just not politically palatable. Andrew Huberman: Right, that we can't
even talk about on this podcast, probably without causing a distraction of what
we're actually trying to talk about. Marc Andreessen: That gives
up the whole game right there. Exactly. Andrew Huberman: I keep a file, and
it's a written file, because I'm afraid to put it into electronic form, of all
the things that I'm afraid to talk about publicly because I come from a
lineage of advisors where all three died young, and I figure, if nothing else,
I'll die, and then it'll make it into the world in, let's say, 5, 10 years, 20 years, and if not, I know with certainty I'm going to die at some point, and then
we'll see where all those issues stand. In any event-- Marc Andreessen: --is that list
getting longer over time or shorter? Andrew Huberman: Oh, it's
definitely getting longer. Marc Andreessen: Isn't that interesting? Andrew Huberman: Yeah,
it's getting much longer. I mean, there are just so many issues
that I would love to explore on this podcast with experts and that I can't
explore, just even if I had a panel of them, because of the way that
things get soundbited and segmented out and taken out of context, it's
like the whole conversation is lost. And so, unfortunately, there are an
immense number of equally interesting conversations that I'm excited to
have, but it is a little disturbing. Marc Andreessen: Do you
remember Lysenkoism? Andrew Huberman: No. Marc Andreessen: Famous in the
history of the Soviet Union. This is the famous thing. So there was a geneticist named Lysenko. Andrew Huberman: That's why it sounds
familiar, but I'm not calling to-- Marc Andreessen: --Well, he was the guy
who did communist genetics. The Soviets did not approve
of the field of genetics because, of course, they believed in the creation
of the new man and total equality, and genetics did not support that. And so if you were doing traditional
genetics, you were going to, at the very least, be fired, if not killed.
oh, I've got Marxist genetics, right? I've got, like a whole new
field of genetics that basically is politically compliant. And then they actually implemented
that in the agriculture system of the Soviet Union. And it's the origin of one of the
big reasons that the Soviet Union actually fell, which was they
ultimately couldn't feed themselves. Andrew Huberman: So create a new notion
of biology as it relates to genetics. Marc Andreessen: Politically
correct biology, right? They not only created it, they taught it,
they mandated it, they required it, and then they implemented it in agriculture. Andrew Huberman: Interesting. Marc Andreessen: I never understood. There was a bunch of things in
history I never understood until the last decade, and that's one of them. Andrew Huberman: Well, I censor myself
at the level of deleting certain things, but I don't contort what I do talk about. So I tend to like to play
on lush, open fields. Just makes my life a lot easier. Marc Andreessen: But this goes to the rot. This goes to the rot, and I'll come
back to your question, but this goes to the rot in the existing system,
which is, by the way, I'm no different. I'm just like you. Like, I'm trying not to
light myself on fire either. But the rot in the existing system,
and by system, I mean the institutions and the elites, the rot is the set
of things that are no longer allowed. I mean, that list is obviously expanding
over time, and that's real. Historically speaking, that doesn't end in good places. Andrew Huberman: Is this group
of a particular generation, such that we can look forward to the time when they eventually die off? Marc Andreessen: It's a third of
the Boomers plus the Millennials. Andrew Huberman: So, got a while. Marc Andreessen: Good news, bad news. Gen X is weird, right? I'm Gen X. Gen X is weird because we
kind of slipped in the middle. We were kind of the, I don't
know how to describe it. We were the kind of non-political
generation kind of sandwiched between the Boomers and the Millennials. Gen Z is a very, I think, open
question right now which way they go. I could imagine them being
actually much more intense than the Millennials on all these issues. I could also imagine them
reacting to the Millennials and being far more open minded. Andrew Huberman: We don't know
which way it's going to go. Marc Andreessen: Yeah, it's going to go. It might be different groups of them. Andrew Huberman: I'm Gen
X also, I'm 47, you're...? Marc Andreessen: 52. Andrew Huberman: So I grew up with
some John Hughes films, where the jocks and the hippies and the punks were all divided and all segmented, but then it all sort of
mishmashed together a few years later. And I think that had a lot to do
with, like you said, the sort of apolitical aspect of our generation. Marc Andreessen: Gen X just
knew the Boomers were nuts, right? Like, one of the great sitcoms of
the era was Family Ties, right? With the character Alex P. Keaton. And he was just like, this guy
is just like, yeah, my Boomer hippie parents are crazy. I'm just going to go into business
and actually do something productive. There was something iconic about
that character in our culture. And people like me were like, yeah,
obviously you go into business, you don't go into political activism. And then it's just like, man,
that came whipping back around with the next generation. So just to touch real quick
on the university thing. So, look, there are people trying to
do, and I'm actually going to do a thing this afternoon with the University
of Austin, which is one of these. And so there are people
trying to do new universities. Like, I would say it's certainly possible. I hope they succeed. I'm pulling for them. I think it'd be great. I think it'd be great if there
were a lot more of them. Andrew Huberman: Who
founded this university? Marc Andreessen: This is
a whole group of people. I don't want to freelance on that because
I don't know originally who the idea was-- Andrew Huberman: --University
of Austin, not UT Austin. Marc Andreessen: Yeah.
So this is not UT Austin. It's called the University of Austin, or they call it, I think, UATX? And it's a lot of very sharp
people associated with it. They're going to try, very much
exactly like what you described. They're going to try to do a new one. I would just tell you the wall
of opposition that they're up against is profound. And part of it is economic,
which is can they ever get access to federal student lending? And I hope that they can, but it
seems nearly inconceivable the way the system is rigged today. And then the other is just like they
already have come under, I mean, anybody who publicly associates with
them who is in traditional academia immediately gets lit on fire, and
there's, you know, cancellation campaigns. So they're up against a
wall of social ostracism. Andrew Huberman: Wow. Marc Andreessen: They're up
against a wall of press attacks. They're up against a wall of people
just like doing the thing, pouncing on, anytime anybody says anything, they're
going to try to burn the place down. Andrew Huberman: This reminds me of
Jerry Springer episodes and Geraldo Rivera episodes where it's like if
a teen listened to Danzig or Marilyn Manson type music or Metallica, that
they were considered a devil worshiper. Now we just laugh, right? We're like, that's crazy, right? People listen to music with all
sorts of lyrics and ideas and looks. That's crazy. But there were people
legitimately sent to prison. I think it was the West
Memphis Three, right? These kids out in West Memphis that
looked different, acted different, were accused of murders that it
was eventually made clear they didn't commit, but they were in prison
because of the music they listened to. I mean, this sounds very similar to that. And I remember seeing bumper stickers,
Free the West Memphis Three! And I thought this was some crazy thing. And you look into it and this
isn't, it's a little bit niche, but these are real lives. And there was an active witch
hunt for people that looked different and acted different. And yet now we're sort of in this inverted
world where on the one hand we're all told that we can express ourselves
however we want, but on the other hand, you can't get a bunch of people
together to take classes where they learn biology and sociology and econ in Texas. Wild. Marc Andreessen: Yes. Well, so the simple explanation
is this is Puritanism, right? So this is the original American
Puritanism that just works itself out through the system in
different ways at different times. There's a religious phenomenon in
America called the Great Awakenings. There will be these periods in
American history where there's basically religiosity fades and
then there will be this snapback effect where you'll have basically
this frenzy basically, of religion. In the old days, it would have been
tent revivals and people speaking in tongues and all this stuff. And then in the modern world, it's of the
form that we're living through right now. And so, yeah, it's just basically these
waves of sort of American religious frenzy. And remember, religious
impulses in our time don't get expressed overtly, because we live in more advanced times.
don't show up as overtly religious. They show up in a secularized form,
which, of course, conveniently, is therefore not subject to the First
Amendment separation of church and state. As long as the church is
secular, there's no problem. But we're acting out these kind
of religious scripts over and over again, and we're in the middle
of another religious frenzy. Andrew Huberman: There's a phrase
that I hear a lot, and I don't necessarily believe it, but I want
your thoughts on it, which is, "the pendulum always swings back." Marc Andreessen: Yeah, not quite. [LAUGHS] Andrew Huberman: So that's
how I feel, too, because-- Marc Andreessen: --Boy,
that would be great. Andrew Huberman: Take any number of
things that we've talked about, and, gosh, it's so crazy the way things
have gone with institutions, or it's so crazy the way things have gone with
social media, or it's so crazy, fill in the blank and people will say, well,
the pendulum always swings back like it's the stock market or something. After every crash, there'll be
an eventual boom and vice versa. Marc Andreessen: By the
way, that's not true either. Most stock markets we have
are, of course, survivorship. It's all survivorship. Everything is survivorship. Everything you just said is
obviously survivorship bias. Right. So if you look globally, most
stock markets, over time, crash and burn and never recover. The American stock market
has always recovered. Andrew Huberman: I was referring
to the American stock market. Marc Andreessen: Globally, but
the reason everybody refers to the American stock market is because
it's the one that doesn't do that, the other 200 or whatever,
crash and burn and never recover. Let's go check in on the
Argentina stock market right now. I don't think it's
coming back anytime soon. Andrew Huberman: My father is Argentine
and immigrated to the US in the 1960s, so he would definitely agree with you. Marc Andreessen: Yeah. When their stocks crash,
they don't come back. And then Lysenkoism, like, the
Soviet Union never recovered from Lysenkoism, it never came back. It led to the end of the
country, you know, literally. The things that took down the
Soviet Union were oil and wheat. And the wheat thing, you can trace
the crisis back to Lysenkoism. No, look, pendulum swings back is
true only in the cases where the pendulum swings back, everybody just
conveniently forgets all the other circumstances where that doesn't happen. One of the things people, you see this
in business also, people have a really hard time confronting really bad news. I don't know if you've noticed that. I think every doctor who's listening
right now is like, yeah, no shit. But have you seen in business,
there are situations, that Star Trek, remember Star Trek? The
Kobayashi Maru simulator, right? So the big lesson to become a Star Trek
captain is you had to go through the simulation called the Kobayashi Maru,
and the point was, there's no way to win. It's a no win scenario. And then it turned out like,
Captain Kirk was the only person to ever win the scenario. And the way that he did it was he went in
ahead of time and hacked the simulator. It was the only way to
actually get through. And then there was a debate whether
to fire him or make him a captain. So they made him a captain. You know, the problem is,
in real life, you do get the Kobayashi Maru on a regular basis. Like, there are actual no win situations
that you can't work your way out of. And as a leader, you can't
ever cop to that, right? Because you have to carry things
forward, and you have to look for every possible choice you can. But every once in a while, you
do run into a situation where it's really not recoverable. And at least I've found people
just cannot cope with that. What happens is they basically, then
they basically just exclude it from their memory that it ever happened. Andrew Huberman: I'm glad you brought up
simulators, because I want to make sure that we talk about the new and emerging
landscape of AI, artificial intelligence. And I could try and smooth our
conversation of a moment ago with this one by creating some clever segue, but I'm
not going to, except I'm going to ask, is there a possibility that AI is going to
remedy some of what we're talking about? Let's make sure that we earmark that
for discussion a little bit later. But first off, because some of
the listeners of this podcast might not be as familiar with
AI as perhaps they should be. We've all heard about
artificial intelligence. People hear about machine learning, etc. But it'd be great if you could
define for us what AI is. People almost immediately hear AI
and think, okay, robots taking over. I'm going to wake up, and I'm going to
be strapped to the bed and my organs are going to be pulled out of me. The robots are going to
be in my bank account. They're going to kill all my
children and dystopia for most. Clearly, that's not the way it's going
to go if you believe that machines can augment human intelligence, and
human intelligence is a good thing. So tell us what AI is and where you
think it can take us, both good and bad. Marc Andreessen: So, there was a big
debate when the computer was first invented, which is in the 1930s,
1940s, people like Alan Turing and John von Neumann and these people. And the big debate at the time was because
they knew they wanted to build computers. They had the basic idea, and there had
been, like, calculating machines before that, and there had been these looms that
you basically programmed with punch cards. And so there was a prehistory to computers
that had to do with building sort of increasingly complex calculating machines. So they were kind of on a track,
but they knew they were going to be able to build, they called it a
general purpose computer that could basically, you could program, in the
way that you program computers today. But they had a big debate early on,
which is, should the fundamental architecture of the computer be based
on either A, like calculating machines, like cash registers and looms and
other things like that, or should it be based on a model of the human brain? And they actually had this idea
of computers modeled on the human brain back then, and this is this
concept of so called neural networks. And it's actually fairly astonishing
from a research standpoint. The original paper on neural networks
actually was published in 1943. So they didn't have our level of
neuroscience, but they actually knew about the neuron, and they actually
had a theory of neurons interconnecting and synapses and information
processing in the brain even back then. And a lot of people at the time
basically said, you know what? We should basically have the computer
from the start be modeled after the human brain, because if the computer
could do everything that the human brain can do, that would be the best
possible general purpose computer. And then you could have it do
jobs, and you could have it create art, and you could have it do all
kinds of things like humans can do. It turns out that didn't happen. In our world, what happened instead was
the industry went in the other direction. It went basically in the model of the
calculating machine or the cash register. And I think, practically speaking, that
kind of had to be the case, because that was actually the technology
that was practical at the time. But that's the path and so what we all
have experiences with, up to and including the iPhone in our pocket, is computers
built on that basically calculating machine model, not the human brain model. And so what that means is computers,
as we have come to understand them, they're basically like
mathematical savants at best. So they're really good at doing
lots of mathematical calculations. They're really good at executing these
extremely detailed computer programs. They're hyper literal. One of the things you learn early
when you're a programmer is, as the human programmer, you have to get
every single instruction you give the computer correct because it will
do exactly what you tell it to do. And bugs in computer programs are always
a mistake on the part of the programmer. Interesting. You never blame the computer. You always blame the programmer
because that's the nature of the thing that you're dealing with. Andrew Huberman: One underscore
off and the whole thing-- Marc Andreessen: --Yeah, and
it's the programmer's fault. And if you talk to any programmer,
they'll agree with this. They'll be like, yeah, if
there's a problem, it's my fault. I did it. I can't blame the computer. The computer has no judgment. It has no ability to interpret,
synthesize, develop an independent understanding of anything. It's literally just doing what
I tell it to do step by step. So for 80 years we've had this,
just this very kind of hyper literal model of computers. Technically, these are what are called
von Neumann machines, named after the mathematician John von Neumann. They run in that way, and they've been
very successful and very important, and our world has been shaped by them. But there was always this other idea
out there, which is, okay, how about a completely different approach,
which is based much more on how the human brain operates, or at least
our kind of best understanding of how the human brain operates, right? Because those aren't the same thing. It basically says, okay, what
if you could have a computer instead of being hyper literal? What if you could have it actually
be conceptual and creative and able to synthesize information and
able to draw judgments and able to behave in ways that are not
deterministic but are rather creative? And the applications for
this, of course, are endless. And so, for example, the self-driving
car: you cannot program a computer with rules to
make it a self-driving car. You have to do what Tesla and Waymo and
these other companies have done. Now you have to use, right, you
have to use this other architecture, and you have to basically teach
them how to recognize objects in images at high speeds, basically
the same way the human brain does. And so those are so called
neural networks running inside. Andrew Huberman: So, essentially, let
the machine operate based on priors. We almost clipped a boulder going up
this particular drive, and so therefore, this shape that previously the machine
didn't recognize as a boulder, it now introduces to its catalog of boulders. Is that a good example? Marc Andreessen: Let's even make it
even starker for a self-driving car. There's something in the road. Is it a small child or a plastic
shopping bag being blown by the wind? Very important difference. If it's a shopping bag, you definitely
want to go straight through it, because if you deviate off course, you're
going to have to make a fast move, and it's the same challenge we have when we're driving. You don't want to swerve to avoid a
shopping bag because you might hit something that you didn't see on the side. But if it's a small child for
sure you want to swerve, right? But in that moment, small children come
in different shapes and descriptions and are wearing different kinds of clothes. Andrew Huberman: They might tumble onto
the road the same way a bag would tumble. Marc Andreessen: Yeah, they
might look like they're tumbling. And by the way, they might
be wearing a Halloween mask. Right. They might not have a
recognizable human face. It might be a kid with one leg. You definitely want to not hit those. This is what basically we figured out
is you can't apply the rules-based approach of a von Neumann machine to
basically real life and expect the computer to be in any way understanding
of, or resilient to, things happening in real life. And this is why there's always been
such a stark divide between what the machine can do and what the human can do. And so, basically, what's happened is
in the last decade, that second type of computer, the neural network based
computer, has started to actually work. It started to work, actually, first,
interestingly, in vision, recognizing objects in images, which is why the
self-driving car is starting to work. Andrew Huberman: Face recognition. Marc Andreessen: Face recognition. Andrew Huberman: I mean, when I
started off in visual neuroscience, which is really my original home in
neuroscience, the idea that a computer or a camera could do face recognition
better than a human was like a very low probability event based on the
technology we had at the time, based on the understanding of the face
recognition cells and the fusiform gyrus. Now, you would be smartest to put
all your money on the machine. You want to find faces in airports,
even with masks on and at profile versus straight on, machines can do
it far better than almost all people. I mean, they're the super recognizers. But even they can't
match the best machines. Now, ten years ago, what I just
said was the exact reverse, right? Marc Andreessen: That's right, yeah. So faces, handwriting, and
then voice, being able to understand voice just as a user. If you use Google Docs, it has
a built-in voice transcription. They have sort of the best industry
leading kind of voice transcription. If you use a voice transcription in
Google Docs, it's breathtakingly good. You just speak into it and it
just types what you're saying. Andrew Huberman: Well, that's good,
because in my phone, every once in a while, I'll say I need to go pick
up a few things and it'll say, I need to pick up a few thongs. And so Apple needs to get on board. Whatever the voice recognition
is that Google's using-- Marc Andreessen: --Maybe it
knows you better than you think. Andrew Huberman: [LAUGHS] That was not
the topic I was avoiding discussing. Marc Andreessen: No. So that's on the list, right? That's on your... Actually, there's a reason, actually,
why Google's so good and Apple is not right now at that kind of thing. And it actually goes to actually an
ideological thing, of all things. Apple does not permit pooling of
data for any purpose, including training AI, whereas Google does. And Apple has just, like, staked
their brand on privacy. And among that is sort of a pledge
that they don't pool your data. And so all of Apple's AI is like, AI
that has to happen locally on your phone. Whereas Google's AI can
happen in the cloud. Right?
It can happen across pooled data. Now, by the way, some people
think that that's bad because they think pooling data is bad. But that's an example of the shift that's
happening in the industry right now, which is you have this separation between
the people who are embracing the new way of training AIs and the people who
basically, for whatever reason, are not. Andrew Huberman: Excuse me, you
say that some people think it's bad because of privacy issues or
they think it's bad because of the reduced functionality of that AI. Marc Andreessen: Oh, no.
So you're definitely going to get... there's three reasons
AIs have started to work. One of them is just simply larger
data sets, larger amounts of data. Specifically, the reason machines are now better than humans
at recognizing objects in images, or recognizing faces, is because modern
facial recognition AIs are trained across all photos on the Internet of people. Billions and billions and
billions of photos, right? Unlimited number of photos
of people on the Internet. Attempts to train facial
recognition systems ten or 20 years ago, they'd be trained on
thousands or tens of thousands of photos. Andrew Huberman: So the input
data is simply much more vast.
Marc Andreessen: Much larger. This is the reason to get
to the conclusion on this. This is the reason why
ChatGPT works so well. One of the reasons ChatGPT
works so well is it's trained on the entire Internet of text. And the entire Internet of text was
not something that was available for you to train an AI on until it came
to actually exist itself, which is new in the last, basically decade. Andrew Huberman: So in the case of
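The "training on data" idea Marc describes can be sketched with the simplest possible neural network, a single perceptron that learns a decision boundary from labeled examples rather than being handed a rule. This is a toy illustration with made-up 2-D data, not code from either speaker:

```python
# Toy contrast between the two architectures discussed here:
# a hand-written rule (von Neumann style) vs. a perceptron that
# learns the same boundary from labeled examples (neural-net style).
# The points and labels below are invented for illustration.
data = [((0.2, 0.9), 1), ((0.4, 0.7), 1), ((0.9, 0.1), 0),
        ((0.8, 0.3), 0), ((0.1, 0.5), 1), ((0.7, 0.2), 0)]

def rule(x, y):
    # Rule-based: the programmer states the boundary explicitly.
    return 1 if y > x else 0

# Learned: a perceptron infers weights from the examples.
w, b = [0.0, 0.0], 0.0
for _ in range(20):                          # a few passes over the data
    for (x, y), label in data:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = label - pred                   # nonzero only when wrong
        w[0] += err * x
        w[1] += err * y
        b += err

def learned(x, y):
    return 1 if w[0] * x + w[1] * y + b > 0 else 0

# The learned model now matches every training example.
assert all(learned(x, y) == label for (x, y), label in data)
```

The more (and more varied) examples a model like this sees, the better its learned boundary generalizes, which is the data-scaling effect being described here, writ very small.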
face recognition, I could see how having a much larger input data set
would be beneficial if the goal is to recognize Marc Andreessen's face,
because you are looking for signal to noise against everything else, right? But in the case of ChatGPT, when you're
pooling all text on the internet and you ask ChatGPT to, say, construct a paragraph
about Marc Andreessen's prediction of the future of human beings over the
next ten years and the likely to be most successful industries, give ChatGPT that. If it's pooling across all
text, how does it know what is authentically Marc Andreessen's text? Because in the case of face recognition,
you've got a standard to work from a verified image versus everything else. In the case of text, you have to make
sure that what you're starting with is verified text from your mouth, which
makes sense if it's coming from video. But then if that video is deep
faked, all of a sudden, what's true? Your valid Marc Andreessen is in question. And then everything ChatGPT is
producing, that is then of question. Marc Andreessen: So I would say
there's a before and after thing here. There's like a before ChatGPT and after
ChatGPT question, because the existence of ChatGPT itself changes the answer. So before ChatGPT: the version you're using today is
trained on data up till September 2021. That's the cutoff of the training set. Up till September 2021, almost all text on
the Internet was written by a human being. And then most of that was written
by people under their own names. Some of it wasn't, but a lot of it was. And why do you know it's for me is
because it was published in a magazine under my name, or it's a podcast
transcript and it's under my name. And generally speaking, if you just
did a search on what are things Marc Andreessen has written and said,
90% plus of that would be correct, and somebody might have written a
fake parody article or something. Like that. But not that many people were
spending that much time writing fake articles about things that I said. Andrew Huberman: Right now, so
many people can pretend to be you. Marc Andreessen: Exactly right. And so, generally speaking, you
can kind of get your arms around the idea that there's a corpus
of material associated with me. Or by the way, same thing with you. There's a corpus of YouTube transcripts
and other, your academic papers and talks you've given, and you can
kind of get your hands around that. And that's how these systems are trained. They take all that data
collectively, they put it in there. And that's why this
works as well as it does. And that's why if you ask ChatGPT to
speak or write like me or like you or like somebody else, it will actually generally
do a really good job because it has all of our prior text in its training data. That said, from here on
out, this gets harder. And of course, the reason this gets
harder is because now we have AI that can create text and we have AI that
can create text at industrial scale. Andrew Huberman: Is it
watermarked as AI generated text? Marc Andreessen: No. Andrew Huberman: How hard
would it be to do that? Marc Andreessen: I think it's impossible. I think it's impossible. There are people who
are trying to do that. This is a hot topic in the classroom. I was just talking to a friend who's got
like a 14 year old kid in a class, and there's like these recurring scandals. Every kid in the class is using ChatGPT to
write their essays or to help them write their essays, and then the teacher is
using one of, there's a tool that you can use that purports to be able to tell you
whether something was written by ChatGPT. But it's like, only right
like 60% of the time. And so there was this case where the
student wrote an essay where their parent sat and watched them write the
essay, and then they submitted it, and this tool got the conclusion incorrect. And then the student feels outraged
because he got unfairly accused of cheating. But the teacher is like, well,
you're all using the tool. Then it turns out there's another
tool that basically you feed in text, and they call it a summarizer. But what it really is is it's a
cheating mechanism to basically just shuffle the words around
enough so that it sheds whatever characteristics were associated with AI. So, there's like an arms race going
on in educational settings right now around this exact question. I don't think it's possible to do. There are people working
on the watermarking. I don't think it's possible
to do the watermarking. And I think it's just kind of obvious why
it's not possible to do that, which is you can just read the output for yourself. It's really good. How are you actually going to tell
the difference between that and something that a real person wrote? And then, by the way, you
can also ask ChatGPT to write in different styles, right? So you can tell it, like, write
in the style of a 15 year old. You can tell it to write in the style
of a non native English speaker. Or if you're a non native English
speaker, you can tell it to write in the style of an English
speaker, native English speaker. And so the tool itself
will help you evade. I think there's a lot of
people who are going to want to distinguish, "real" versus fake. I think those days are over. Andrew Huberman: Genie's
out of the bottle. Marc Andreessen: Genie is
completely out of the bottle. And by the way, I actually
think this is good. This doesn't map to my worldview
of how we use this technology anyway, which we can come back to. So there's that, and then there's
the problem, therefore of the so-called deep fake problem. So then there's the problem of, like,
deliberate basically, manipulation. And that's like one of your many
enemies, one of your increasingly long list of enemies like mine,
who basically is like, wow, I know how I'm going to get him, right? I'm going to use it to create
something that looks like a Huberman transcript and I'm going to have
him say all these bad things. Andrew Huberman: Or a video. Marc Andreessen: Or a video, or a video. Andrew Huberman: I mean, Joe Rogan
and I were deep faked in a video. I don't want to flag people to it, so I
won't talk about what it was about, but where it, for all the world looked like
a conversation that we were having and we never had that specific conversation. Marc Andreessen: Yeah, that's right. So that's going to happen for sure. So what there's going to need to
be is there need to be basically registries where basically in your
case, you will submit your legitimate content into a registry under your
unique cryptographic key, right. And then basically there will be a
way to check against that registry to see whether that was the real thing. And I think this needs
to be done for sure. For public figures, it needs
to be done for politicians, it needs to be done for music. Andrew Huberman: What about taking what's
already out there and being able to authenticate it or not in the same way
that many times per week, I get asked, "Is this your account?" about a direct
message that somebody got on Instagram. And I always tell them, look,
I only have the one account, this one verified account. Although now, with the advent of
pay-to-play verification, it's a little less potent as a security
blanket for knowing that if it's not this account, then it's not me. But in any case, these accounts pop
up all the time pretending to be me. And I'm relatively low on the scale. Not low, but relatively low on
the scale to say, like a Beyonce or something like that, who has
hundreds of millions of followers. So is there a system in mind
where people could go in and verify text, click yes or no. This is me. This is not me. And even there, there's the opportunity
for people to fudge, to eliminate things about themselves that they don't want
out there, by saying, no, that's not me. I didn't actually say that. Or create that. Marc Andreessen: Yeah, no, that's right. Technologically, it's actually
pretty straightforward. So the way to implement this
technologically is with a public key. It's called public key cryptography,
which is the basis for how information is secured in the world today. And so basically, the implementation form
of this would be, you would pick whatever is your most trusted channel, and let's
say it's your YouTube channel as an example, where just everybody just knows
that it's you on your YouTube channel because you've been doing it for ten
years or whatever, and it's just obvious. And you would just publish in
the about me page on YouTube, you would just publish your public
cryptographic key that's unique to you. Right. And then anytime anybody wants
to check to see whether any piece of content is actually you, they
go to a registry in the cloud somewhere, and they basically submit. They basically say, okay, is this him? And then they can basically see
whether somebody with your public key, you had actually certified that
this was something that you made. Now, who runs that registry
is an interesting question. If that registry is run by the government,
we will call that the Ministry of Truth. I think that's probably a bad idea. If that registry is run by a company,
we would call that basically the equivalent of, like, a credit
bureau or something like that. Maybe that's how it happens. The problem with that is that company
now becomes hacking target number one, right, of every bad person on Earth. Because if anybody breaks
into that company, they can fake all kinds of things. Andrew Huberman: They own the truth. Marc Andreessen: Right.
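The certification scheme Marc describes, publish a public key once, then sign each legitimate piece of content with the matching private key, can be sketched in a few lines. This uses textbook RSA with tiny toy parameters (wildly insecure, purely illustrative), and the registry layout and names are my assumptions, not any real system:

```python
import hashlib

# Toy RSA keypair (textbook-sized numbers -- insecure, illustration only)
p, q = 61, 53
n = p * q          # public modulus (3233)
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def digest(content: str) -> int:
    # Hash the content, reduced into the tiny modulus
    return int.from_bytes(hashlib.sha256(content.encode()).digest(), "big") % n

def sign(content: str) -> int:
    # The creator certifies content with the private key
    return pow(digest(content), d, n)

def verify(content: str, signature: int) -> bool:
    # Anyone can check a signature using only the public key (n, e)
    return pow(signature, e, n) == digest(content)

# A hypothetical registry: public keys plus certified-content signatures
registry = {"huberman": {"pubkey": (n, e), "signatures": []}}

episode = "Transcript: conversation on AI"
registry["huberman"]["signatures"].append(sign(episode))

# Certified content checks out against the registry; a fabricated
# transcript would not match any stored signature.
assert any(verify(episode, s) for s in registry["huberman"]["signatures"])
```

In practice a scheme like this would use real key sizes and a standard signature algorithm; the open question Marc raises, who operates the registry, is a governance problem, not a cryptographic one.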
They own the truth. And by the way, insider threat, also,
their employees can monkey with it. So you have to really trust that company. The third way to do it
is with a blockchain. And so this, with the crypto
blockchain technology, you could have a distributed system, basically, a
distributed database in the cloud that is run through a blockchain. And then it implements this cryptography
and this certification process. Andrew Huberman: What
about quantum Internet? Is that another way to
encrypt these things? I know most of our listeners are
probably not familiar with quantum Internet, but put simply, it's a way to
secure communications on the Internet. Let's just leave it at that. It's sophisticated, and we'll probably do
a whole episode about this at some point. But maybe you have a succinct way
of describing quantum Internet, but that would be better. And if so, please offer it up. But is quantum Internet going
to be one way to secure these kinds of data and resources? Marc Andreessen: Maybe in the
future, years in the future? We don't yet have working quantum
computers in practice, so it's not currently something you could
do, but maybe in a decade or two? Andrew Huberman: Tell me. I'm going to take a stab at defining
quantum Internet in one sentence. It's a way in which if anyone were to
try and peer in on a conversation on the Internet, it essentially would be futile
because of the way that quantum Internet changes the way that the communication is
happening so fast and so many times in any one conversation, essentially changing the
translation or the language so fast that there's just no way to keep up with it. Is that more or less accurate? Marc Andreessen: Yeah,
conceivably not yet, but someday. Andrew Huberman: So, going
back to AI, most people who hear about AI are afraid of AI. Marc Andreessen: Well? Andrew Huberman: I think most
people who aren't informed-- Marc Andreessen: --This goes back
to our elites versus masses thing. Andrew Huberman: Oh, interesting. Well, I heard you say that, and this is from
a really wonderful tweet thread that we will link in the show note captions that
you put out not long ago and that I've read now several times, and that everyone
really should take the time to read it. Probably takes about 20 minutes to
read it carefully and to think about each piece, and I highly recommend it. But you said, and I'm quoting
here, "Let's address the fifth, the one thing I actually agree with,
which is AI will make it easier for bad people to do bad things." Marc Andreessen: First of all, there is
a general freak out happening around AI. I think it's primarily, it's one of these,
again, it's an elite driven freak out. I don't think the man in the street knows,
cares, or feels one way or the other. It's just not a relevant concept, and it
probably just sounds like science fiction. So I think there's an elite driven
freak out that's happening right now. I think that elite driven freak out
has many aspects to it that I think are incorrect, which is not surprising. I would think that, given that. I think the elites are incorrect
about a lot of things, but I think they're very wrong about a number
of things they're saying about AI. But that said, look, this is a very
powerful new technology, right? This is like a new general
purpose thinking technology. So what if machines could think? And what if you could use machines
that think, and what if you could have them think for you? There's obviously a lot of
good that could come from that. But also, people, look, criminals
could use them to plan better crimes. Terrorists could use them to plan
better terror attacks and so forth. And so these are going to be
tools that bad people can use to do bad things, for sure. Andrew Huberman: I can think
of some ways that AI could be leveraged to do fantastic things. Like in the realm of medicine, an AI
pathologist perhaps, can scan 10,000 slides of histology and find the one
micro tumor, cellular aberration, that would turn into a full blown tumor,
whereas the even mildly fatigued or well rested human pathologists, as
great as they come, might miss that. And perhaps the best solution is
for both of them to do it, and then for the human to verify what the
AI has found and vice versa, right? Marc Andreessen: That's right. Andrew Huberman: And
that's just one example. I mean, I can come up with thousands of
examples where this would be wonderful. Marc Andreessen: I'll give you
another one, by the way, medicine. So you're talking about an analytic
result, which is good and important. The other is like, the machines are going
to be much better at bedside manner. They're going to be much better
at dealing with the patient. And we already know this; there's already been a study on this. A study team scraped thousands of medical questions off of an Internet forum, and then they had real doctors answer the questions, and then they had GPT-4 answer the questions, and then they had another panel of doctors score the responses. So there were no patients
experimented on here. This was a test contained
within the medical world. The judges, the panel of doctors
who are the judges, scored the answers in both factual accuracy
and on bedside manner, on empathy. And the GPT4 was equal or better
on most of the factual questions analytically, already, and it's not even
a specifically trained medical AI, but it was overwhelmingly better on empathy. Andrew Huberman: Amazing, Marc Andreessen: Right? Do you treat patients
directly in your work? You don't? Andrew Huberman: No, I don't. We run clinical trials. Marc Andreessen: Right. Andrew Huberman: But I don't
do any direct clinical work. Marc Andreessen: I've no
direct experience with this. But from the surgeons, if you talk
to surgeons or you talk to people who train surgeons, what they'll tell you
is surgeons need to have an emotional remove from their patients in order
to do a good job with the surgery. The side effect of that, and by the way, look, it's a hell of a job to have to go in and tell somebody that they're going to die, or that they're never going to recover, that they're never going to walk again, or whatever it is. And so there's sort of something
inherent in that job where they need to keep an emotional reserve from
the patient to be able to do the job. And it's expected of
them as professionals. The machine has no such limitation. The machine can be as sympathetic
as you want it to be for as long as you want it to be. It can be infinitely sympathetic. It's happy to talk to you
at four in the morning. It's happy to sympathize with you. And by the way, it's not just
sympathizing with you in the way that, oh, it's just making up words
to lie to you to make you feel good. It can also sympathize with you in
terms of helping you through all the things that you can actually
do to improve your situation. And so, boy, can you keep a
patient actually on track with a physical therapy program. Can you keep a patient on track
with a nutritional program? Can you keep a patient
off of drugs or alcohol? And if they have a machine medical
companion that's with them all the time that they're talking to all
the time, that's infinitely patient, infinitely wise, infinitely loving,
and it's just going to be there all the time and it's going to be encouraging
and it's going to be, you know, you did such a great job yesterday, I
know you can do this again today. Cognitive behavioral therapy
is an obvious fit here. These things are going to be great
at CBT and that's already starting. You can already use ChatGPT as
a CBT therapist if you want. It's actually quite good at it. There's a universe here that's opening up, and it goes to what you said: what I believe in is a partnership between man and machine. It's a symbiotic relationship,
not an adversarial relationship. And so the doctor is going to pair
with the AI to do all the things that you described, but the patient
is also going to pair with the AI. And I think this partnership that's
going to emerge is going to lead, among other things, to actually
much better health outcomes. Andrew Huberman: I've relied for so much
of my life on excellent mentors from a very young age, and still now, in order
to make the best decisions possible with the information I had. And rarely were they available at four in the morning; sometimes, but not on a frequent basis. And they fatigue like anybody else, and
they have their own stuff like anybody else, baggage, events in their life, etc. What you're describing is a sort of
AI coach or therapist of sorts, that hopefully would learn to identify our best
self and encourage us to be our best self. And when I say best self, I don't mean
that in any kind of pop psychology way. I could imagine AI very easily knowing
how well I slept the night before and what types of good or bad decisions I tend
to make at 2:00 in the afternoon when I've only had 5 hours of sleep, or maybe
just less REM sleep the night before. It might encourage me to take a little
more time to think about something. Might give me a little tap on the
wrist through a device that no one else would detect to refrain from something. Marc Andreessen: Never going to judge you. It's never going to be resentful. It's never going to be upset
that you didn't listen to it. It's never going to go on vacation. It's going to be there for you. I think this is the way
people are going to live. It's going to start with kids, and
then over time it's going to be adults. I think the way people are going
to live is they're going to have a friend, therapist, companion,
mentor, coach, teacher, assistant. Or, by the way, maybe multiple of those. It may be that we're actually talking
about six, like, different personas interacting, which is a whole 'nother
possibility, but they're going to have-- Andrew Huberman: --A committee! Marc Andreessen: A
committee, yeah, exactly. Actually different personas. And maybe, by the way, when there are
difficult decisions to be made in your life, maybe what you want to hear is the
argument among the different personas. And so you're just going to grow up,
you're just going to have this in your life and you're going to always
be able to talk to it and always be able to learn from it and always
be able to help it make, it's going to be a symbiotic relationship. I think it's going to be
a much better way to live. I think people are going
to get a lot out of it. Andrew Huberman: What
modalities will it include? So I can imagine my phone has
this engine in it, this AI companion, and I'm listening in
headphones as I walk into work. And it's giving me some, not just
encouragement, some warnings, some thoughts about things that I might ask Marc Andreessen today that I might not have thought of, and so on. I could also imagine it
having a more human form. I could imagine it being tactile,
having some haptics, tapping to remind me, so that it's not going to enter our conversation in a way that interferes with or distracts you. But I would be aware. Oh, right. Things of that sort. I mean, how many different modalities
are we going to allow these AI coaches to approach us with? And is anyone actually thinking
about the hardware piece right now? Because I'm hearing a lot
about the software piece. What does the hardware piece look like? Marc Andreessen: Yeah, so this is where
Silicon Valley is going to kick in. So the entrepreneurial community is
going to try all of those, right? By the way, the big companies and
startups are going to try all those. And so obviously there are big companies that have talked about a variety of these, including heads-up displays, AR, VR kinds of things. There's lots of people doing voice. Voice is a real possibility; it may just be an earpiece. There's a new startup that just unveiled
a new thing where they actually project. So you'll have like a pendant you wear
on like a necklace, and it actually projects, literally, it'll project
images on your hand or on the table or on the wall in front of you. So maybe that's how it shows up. Yeah. There are people working on so-called
haptic or touch based kinds of things. There are people working on
actually picking up nerve signals, like out of your arm. There's some science for being able
to do basically like subvocalization. So maybe you could pick up
that way by bone conduction. These are all going to be tried. So that's one question is the physical
form of it, and then the other question is the software version of
it, which is like, okay, what's the level of abstraction that you want to
deal with these things in? Right now, it's like a question-and-answer paradigm, a so-called chatbot: ask a question, get an answer, ask a question, get an answer. Well, you want that to go for sure
to more of a fluid conversation. You want it to build up more
knowledge of who you are, and you don't want to have to explain
yourself a second time and so forth. And then you want to be able to tell
it things like, well, remind me this, that, or be sure and tell me when X. But then maybe over time, more and
more, you want it actually deciding when it's going to talk to you, right? And when it thinks it has
something to say, it says it, and otherwise it stays silent. Andrew Huberman: Normally, at
least in my head, unless I make a concerted effort to do otherwise, I
don't think in complete sentences. So presumably these machines could learn
my style of fragmented internal dialogue. And maybe I have an earpiece, and
I'm walking in and I start hearing something, but it's some advice,
etc, encouragement, discouragement. But at some point, those sounds
that I hear in an earphone are very different than seeing something
or hearing something in the room. We know this based on the
neuroscience of musical perception and language perception. Hearing something in your
head is very different. And I could imagine at some point that
the AI will cross a precipice where if it has inline wiring to actually control
neural activity in specific brain areas, and I don't mean very precisely, even
just stimulating a little more prefrontal cortical activity, for instance, through
the earpiece, a little ultrasound wave now can stimulate prefrontal cortex
in a non-invasive way that's being used clinically and experimentally,
that the AI could decide that I need to be a little bit more context aware. This is something that is very beneficial
for those listening who are trying to figure out how to navigate through life. It's like: know the context you're in, and know the catalog of behaviors and words that are appropriate for that situation and not. This would go along with agreeableness, perhaps, but strategic agreeableness, right. There's nothing diabolical about that. Context is important, but I could
imagine the AI recognizing we're entering a particular environment. I'm now actually going to ramp up activity
in prefrontal cortex a little bit in a certain way that allows you to be more
situationally aware of yourself and others, which is great, unless I can't
necessarily short circuit that influence, because at some point, the AI is actually
then controlling my brain activity and my decision making and my speech. I think that's what people fear is that
once we cross that precipice that we are giving up control to the artificial
versions of our human intelligence. Marc Andreessen: And look, I think
we have to decide, we collectively, and we as individuals, I think, have
to decide exactly how to do that. And this is the big thing
that I believe about AI. That's just a much more, I would
say, practical view of the world than a lot of the panic that you hear. It's just like, these are machines. They're able to do things that
increasingly are like the things that people can do in some circumstances. But these are machines. We built a machine, means we
decide how to use the machines. When we want the machines turned
on, they're turned on, we want them turned off, they're turned off. I think that's absolutely the kind
of thing that the individual person should always be in charge of. Andrew Huberman: Everyone was, and I have to imagine some people still are, afraid of CRISPR, of gene editing. But gene editing stands to revolutionize our treatment of all sorts of disease. You know, inserting and deleting particular genes in adulthood, not having to recombine a new organism in the womb, is an
immensely powerful tool. And yet the Chinese scientist who
did CRISPR on humans, this has been done, actually did his postdoc at
Stanford with Steve Quake, then went to China, did CRISPR on babies. Mutated something. I believe it was one of the HIV receptors. I'm told it was with the intention
of augmenting human memory. It had very little to do, in fact, with limiting susceptibility to HIV per se, and more to do with the way that receptor is involved in human memory. The world demonized that person. We actually don't know
what happened to them. Whether or not they have a laboratory now
or they're sitting in jail, it's unclear. But in China and elsewhere,
people are doing CRISPR on humans. We know this. It's not legal in the US and other
countries, but it's happening. Do you think it's a mistake for us to fear
these technologies so much that we back away from them and end up 10, 20 years
behind other countries that could use it for both benevolent or malevolent reasons? Marc Andreessen: Yeah, the details matter. So it's technology by technology. But I would say there's two things
you always have to think about in these questions, I think, in terms of
counterfactuals and opportunity cost. CRISPR is an interesting one. CRISPR manipulates the human genome. Nature manipulates the human genome,
like, in all kinds of ways. [LAUGHS]
Andrew Huberman: Yeah. [LAUGHS] Marc Andreessen: When you
pick a spouse and you-- Andrew Huberman: --Have a
child with that spouse-- Marc Andreessen: --Oh, boy-- Andrew Huberman: --You're
doing genetic recombination. Marc Andreessen: Yes, you are. Quite possibly, if you're Genghis
Khan, you're determining the future of humanity by those mutations. This is the old question of,
basically, state of nature versus state of grace. Is nature good, and therefore artificial things are bad? A lot of people have ethical views like that. I'm always of the view that nature
is a bitch and wants us dead. Nature is out to get us, man. Nature wants to kill us, right? Like, nature wants to evolve
all kinds of horrible viruses. Nature wants plagues. Nature wants to do weather. Nature wants to do all kinds of stuff. I mean, look, nature religion
was the original religion, right? Like, that was the original
thing people worshiped. And the reason was because nature was the
thing that was out to get you right before you had scientific and technological
methods to be able to deal with it. So, the idea of not doing these
things, to me is just saying, oh, we're just going to turn over the
future of everything to nature. And I think that there's no reason
to believe that that leads in a particularly good direction or that
that's not a value neutral decision. And then the related thing that comes
from that is always this question around what's called the precautionary principle,
which shows up in all these conversations on things like CRISPR, which basically is
this principle that basically says, the inventors of a new technology should be
required to prove that it will not have negative effects before they roll it out. This, of course, is a very new idea. It was actually invented by the German Greens in the 1970s. Before that, people didn't
think in those terms. People just invented
things and rolled them out. And we got all of modern
civilization by people inventing things and rolling them out. The German Greens came up with
the precautionary principle for one specific purpose. I'll bet you can guess what it is. It was to prevent...? Andrew Huberman: Famine? Marc Andreessen: Nuclear power. It was to shut down attempts
to do civilian nuclear power. And if you fast forward 50 years later,
you're like, wow, that was a big mistake. So what they said at the time was,
you have to prove that nuclear reactors are not going to melt down
and cause all kinds of problems. And, of course, as an engineer, can
you prove that will never happen? You can't. You can't rule out things that
might happen in the future. And so that philosophy was used to
stop nuclear power by the way, not just in Europe, but also in the US and
around much of the rest of the world. If you're somebody who's concerned
about carbon emissions, of course, this is the worst thing that happened in
the last 50 years in terms of energy. We actually have the silver bullet
answer to unlimited energy with zero carbon emissions, nuclear power. We choose not to do it. Not only do we choose not to do it,
we're actually shutting down the plants that we have now in California. We just shut down the big plant. Germany just shut down their plants. Germany is in the middle of an energy
war with Russia that, we are informed, is existential for the future of Europe. Andrew Huberman: But unless the risk
of nuclear power plant meltdown has increased, and I have to imagine
it's gone the other way, what is the rationale behind shutting down
these plants and not expanding? Marc Andreessen: Because nuclear is bad. Right.
Nuclear is icky. Nuclear has been tagged. Andrew Huberman: It just sounds bad. Nuclear. Marc Andreessen: Yeah. Andrew Huberman: Go nuclear. Marc Andreessen: Well, so what happened? Andrew Huberman: We didn't shut down
post offices, and you hear "go postal." Marc Andreessen: So what happened
was, so nuclear technology arrived on planet Earth as a weapon, right? It arrived in the middle of World War II, in the form of the atomic bomb dropped on Japan. And then there were all the
debates that followed around nuclear weapons and disarmament. And there's a whole conversation
to be had, by the way, about that, because there's different
views you could have on that. And then, later, they started to roll out civilian nuclear power. And then there were accidents. Three Mile Island melted down, and then Chernobyl melted down in the Soviet Union, and then
even recently, Fukushima melted down. And so there have been meltdowns. And so I think it was a
combination of things: it's a weapon, and it's sort of icky, it has the ick factor, right. It glows green. And by the way, it becomes like
a mythical fictional thing. And so you have all these movies of
horrible supervillains powered by nuclear energy and all this stuff. Andrew Huberman: Well, the
intro to The Simpsons, right, is the nuclear power plant and the three-eyed fish and all the negative implications of this nuclear power plant run by, at least in The Simpsons, idiots. And that is the dystopia, where people are unaware of just how bad it is. Marc Andreessen: And who owns the nuclear power plant. Right. This evil capitalist. Right. So it's connected to capitalism. Right. Andrew Huberman: We're blaming Matt
Groening for the demise of a particular-- Marc Andreessen: --He
certainly didn't help. But it's literally, this amazing thing
where if you're just thinking rationally, scientifically, you're like, okay, we want to get rid of carbon. This is the obvious way to do it. Okay, fun fact. Richard Nixon did two things
that really mattered on this. So one is he defined in 1971 something
called Project Independence, which was to create 1000 new state of
the art nuclear plants, civilian nuclear plants, in the US by 1980. And to get the US completely off of
oil and cut the entire US energy grid over to nuclear power, electricity,
cut over to electric cars, the whole thing, like, detach from carbon. You'll notice that didn't happen. Why did that not happen? Because he also created the EPA and the
Nuclear Regulatory Commission, which then prevented that from happening. Right. And the Nuclear Regulatory Commission
did not authorize a new nuclear plant in the US for 40 years. Andrew Huberman: Why would he
hamstring himself like that? Marc Andreessen: He got distracted
by Watergate and Vietnam. Andrew Huberman: I think Ellsberg
just died recently, right? The guy who released the Pentagon papers. Marc Andreessen: Yeah.
Andrew Huberman: So complicated. Marc Andreessen: Yeah, exactly. It's this thing. He left office shortly thereafter. He didn't have time to
fully figure this out. I don't know whether he would
have figured it out or not. Look, Ford could have figured it out. Carter could have figured it out. Reagan could have figured it out. Any of these guys could
have figured it out. It's like the most obvious. Knowing what we know today, it's
the most obvious thing in the world. The Russia thing is the amazing thing. It's like Europe is literally
funding Russia's invasion of Ukraine by paying them for oil, right? And they can't shut off the oil because
they won't cut over to nuclear, right? And then, of course, what happens? Okay, so then here's the other
kicker of what happens, right? Which is they won't do nuclear, but
they want to do renewables, right? Sustainable energy. And so what they do is
they do solar and wind. Solar and wind are not reliable
because it sometimes gets dark out and sometimes the wind doesn't blow. And so then what happens is they
fire up the coal plants, right? And so the actual consequence of
the precautionary principle for the purpose it was invented is
a massive spike in use of coal. Andrew Huberman: That's
taking us back over 100 years. Marc Andreessen: Yes. Correct. That is the consequence of
the precautionary principle. That's the consequence of that mentality. And so it's a failure of the principle on its own merits, for the very thing it was designed for. Then, you know, there's a whole
movement of people who want to apply it to every new thing. And this is the hot topic on AI right
now in Washington, which is like, oh my God, these people have to prove that
this can never get used for bad things. Andrew Huberman: Sorry, I'm
hung up on this nuclear thing. And I wonder, can it just be? I mean, there is something
about the naming of things. We know this in, I mean, you know,
Lamarckian evolution and things like that. These are bad words in biology. But we had a guest on this podcast,
Oded Rechavi, who's over in Israel, who's shown inherited traits. But if you talk about it as Lamarckian, then
it has all sorts of negative implications. But his discoveries have important
implications for everything from inherited trauma to treatment of disease. I mean, there's all sorts of positives
that await us if we are able to reframe our thinking around something that,
yes, indeed, could be used for evil, but that has enormous potential and
that is in agreement with nature, right? This fundamental truth that at least
to my knowledge, no one is revising in any significant way anytime soon. So what if it were called something else? It could be nuclear, but it's called "sustainable," right? I mean, it's amazing how marketing
can shift our perspective of robots, for instance. Or anyway, I'm sure you can come
up with better examples than I can, but is there a good, solid PR
firm working from the nuclear side? Marc Andreessen: Thunbergian. Greta Thunberg. Andrew Huberman: Thunbergian. Marc Andreessen: Thunbergian. Like if she was in favor of it,
which by the way, she's not. She's dead set against it. Andrew Huberman: She said that 100%. Marc Andreessen: Yeah. Andrew Huberman: Based on. Marc Andreessen: Based on
Thunbergian principles. The prevailing ethic in environmentalism
for 50 years is that nuclear is evil. Like, they won't consider it. There are, by the way, certain
environmentalists who disagree with this. And so Stewart Brand is the one that's
been the most public, and he has impeccable credentials in the space. Andrew Huberman: And he
wrote Whole Earth Catalog. Marc Andreessen: Whole Earth Catalog guy. Yeah. And he's written a whole bunch of really interesting books since. And he wrote a recent book
that goes through in detail. He's like, yes, obviously
the correct environmental thing to do is nuclear power. And we should be implementing
project independence. We should be building a thousand. Specifically, he didn't say this,
but this is what I would say. We should hire Charles Koch. We should hire Koch Industries and
they should build us a thousand nuclear power plants, and then we should
give them the presidential Medal of Freedom for saving the environment. Andrew Huberman: And that would put
us independent of our reliance on oil. Marc Andreessen: Yeah. Then we're done with. We're just, think about what happens. We're done with oil, zero emissions,
we're done with the Middle East. We're done. We're done. We're not drilling on
American land anymore. We're not drilling on foreign land. Like, we have no military entanglements in
places where we're not despoiling Alaska. We're not, nothing. No offshore rigs, no nothing. We're done. And basically just you build state of
the art plants, engineered properly, you have them just completely contained. When there's nuclear waste, you
just entomb the waste in concrete. So it just sits there forever. It's just a very small
footprint kind of thing. And you're just done. And so to me, it's like scientifically,
technologically, this is just like the most obvious thing in the world. It's a massive tell on the part of the
people who claim to be pro-environment that they're not in favor of this. Andrew Huberman: And if I were to
say, tweet that I'm pro-nuclear power because it's the more sustainable form of power, if I hypothetically did that today, what would happen to me? Marc Andreessen: You'd be a cryptofascist. [LAUGHS] Dirty, evil, capitalist monster. How dare you? Andrew Huberman: I'm unlikely
to run that experiment. I was just curious. That was what we call
a Gedanken experiment. Marc Andreessen: Andrew,
you're a terrible human being. We were looking for evidence that you're a
terrible human being, and now we know it. I gave Andrew a book on the way in here, my favorite new book. The title of it is When Reason Goes on Holiday, and this is a great example of it: people who simultaneously say they're environmentalists and say they're anti-nuclear power. Like, the positions just
simply don't reconcile. But that doesn't bother them at all. So be clear. I predict none of this will happen. Andrew Huberman: Amazing. I need to learn more about nuclear power. Marc Andreessen: Long coal. Andrew Huberman: Long coal. Marc Andreessen: Long coal.
Invest in coal. Andrew Huberman: Because you
think we're just going to revert? Marc Andreessen: It's the
energy source of the future. Well, because it can't be solar and
wind, because they're not reliable. So you need something. If it's not nuclear, it's going to be
either like oil, natural gas, or coal. Andrew Huberman: And you're unwilling
to say bet on nuclear because you don't think that the sociopolitical elitist
trends that are driving against nuclear are likely to dissipate anytime soon. Marc Andreessen: Not a chance. I can't imagine. It would be great if they did. But the powers that be are very
locked in on this as a position. And look, they've been saying this
for 50 years, and so they'd have to reverse themselves off of a bad
position they've had for 50 years. And people really don't like to do that. Andrew Huberman: One thing that's
good about this and other podcasts is that young people listen and
they eventually will take over. Marc Andreessen: And by the way, I will
say also there are nuclear entrepreneurs. So on the point of young kids, there are
a bunch of young entrepreneurs who are basically not taking no for an answer. And they're trying to develop, in
particular, there's people trying to develop new, very small form
factor nuclear power plants with a variety of possible use cases. So, look, maybe they show up with
a better mousetrap and people take a second look, but we'll see. Andrew Huberman: Just rename it. So, my understanding is that
you think we should go all in on AI with the constraints that we
discover we need in order to ensure safety and things of that sort. Not unlike social media,
not unlike the Internet. Marc Andreessen: Not unlike what we
should have done with nuclear power. Andrew Huberman: And in terms of the near
infinite number of ways that AI can be envisioned to harm us, how do you think
we should cope with that psychologically? Because I can imagine a lot of people
listening to this conversation are thinking, okay, that all sounds
great, but there are just too many what ifs that are terrible, right? What if the machines take over? What if the silly example I gave
earlier, but what if one day I could log into my hard earned
bank account and it's all gone? The AI version of myself ran off with
someone else, with all my money. My AI coach abandoned me for somebody else after it learned all the stuff that I taught it. It took off with somebody else and left me stranded. And it has my bank account
numbers, like this kind of thing. Marc Andreessen: You could really
make this scenario horrible, right, if you kept going? Andrew Huberman: Yeah, well, we can
throw in a benevolent example as well to counter it, but it's kind of fun to think
about where the human mind goes, right? Marc Andreessen: Yeah. So first I say we've got to separate the
real problems from the fake problems. And so there's a lot. A lot of the science fiction
scenarios I think are just not real. And the ones that you described as an example, that's not what is going to happen. And I can explain why that's
not what's going to happen. There's a set of fake ones, and the
fake ones are the ones that just aren't, I think, technologically
grounded, that aren't rational. It's the AI is going to wake
up and decide to kill us all. It's going to develop the kind of agency
where it's going to steal our money and our spouse and everything else, our kids. That's not how it works. And then there's also all these concerns,
destruction of society concerns. And this is misinformation, hate speech,
deepfakes, like all that stuff, which I don't think is actually a real problem. And then people have a bunch of economic
concerns around what's going to take all the jobs and all those kinds of things. We could talk about that. I don't think that's actually
the thing that happens. But then there are two actual
real concerns that I actually do very much agree with. And one of them is what you said,
which is bad people doing bad things. And there's a whole set of
things to be done inside there. The big one is we should use
AI to build defenses against all the bad things, right? And so, for example, there's a
concern AI is going to make it easier for bad people to build pathogens,
design pathogens in labs, which bad scientists can do today, but this is
going to make it easier to do. Well, obviously, we should have the
equivalent of an Operation Warp Speed, operating in perpetuity anyway. But then we should use AI to
build much better bio defenses. And we should be using AI today to design,
like, for example, full spectrum vaccines against every possible form of pathogen. So defensive mechanism hacking,
you can use AI to build better defense tools, right? And so you should have a whole new
kind of security suite wrapped around you, wrapped around your data, wrapped
around your money, where you're having AI repel attacks, disinformation, hate
speech, deepfakes, all that stuff. You should have an AI filter when you
use the Internet, where you shouldn't have to figure out whether it's really
me or whether it's a made up thing. You should have an AI assistant
that's doing that for you. Andrew Huberman: Oh, yeah. I mean, these little banners and cloaks
that you see on social media like "this has been deemed misinformation." If you're me, you always click because
you're like, what's behind the scrim? I don't always look at the "this image is gruesome" type thing. Sometimes I just pass on that. But if it's something that seems debatable, of course you look. Marc Andreessen: Well, and you should
have an AI assistant with you when you're on the Internet. And you should be able to tell that
AI assistant what you want, right? So, yes, I want the full experience. Show me everything. I want it from a particular point of view. And I don't want to hear from these other
people who I don't like, by the way. It's going to be, my eight
year old is using this. I don't want anything that's
going to cause a problem. And I want everything filtered and
AI based filters like that that you program and control are going to work
much better and be much more honest and straightforward and clear and
so forth than what we have today. Anyway, basically, what I want people
to do is think, every time you think of a risk of how it can be used,
just think of like, okay, we can use it to build a countermeasure. And the great thing about
the countermeasures is they can not only offset AI risks,
they can offset other risks. Right? Because we already live in a world
where pathogens are a problem, right? We ought to have better vaccines anyway. We already live in a world where there's
cyber hacking and cyber terrorism. We already live in a world where
there's bad content on the Internet. And we have the ability now to
build much better AI powered tools to deal with all those things. Andrew Huberman: I also love
the idea of the AI physicians. Getting decent health care in this
country is so difficult, even for people who have means or insurance. I mean, the number of phone calls and
waits that you have to go through to get a referral to see a specialist, it's absurd. The process is absurd. I mean, it makes one partially or
frankly ill just to go through the process of having to do all that. I don't know how anyone does it. And granted, I don't have the highest
degree of patience, but I'm pretty patient, and it drives me insane
to even just get remedial care. So I can think of a lot
of benevolent uses of AI. And I'm grateful that you're bringing
this up and here and that you've tweeted about it in that thread. Again, we'll refer people to that. And that you're thinking about this. I have to imagine that in your
role as investor nowadays, that you're also thinking about AI quite
often in terms of all these roles. And so does that mean that there are
a lot of young people who are really bullish on AI and are going for it? Marc Andreessen: Yeah.
Okay. Andrew Huberman: This is here to stay. Marc Andreessen: Okay. Andrew Huberman: Unlike CRISPR, which
is sort of in this liminal place where biotech companies aren't sure if they
should invest or not in CRISPR because it's unclear whether or not the governing
bodies are going to allow gene editing, just like it was unclear 15 years ago if
they were going to allow gene therapy. But now we know they do allow
gene therapy and immunotherapy. Marc Andreessen: Okay,
so there is a fight. There's a fight happening in
Washington right now over exactly what should be legal or not legal. And there's quite a bit of risk, I
think, attached to that fight right now because there are some people in
there that are telling a very effective story to try to get people to either
outlaw AI or specifically limit it to a small number of big companies, which
I think is potentially disastrous. By the way, the EU also
is, like, super negative. The EU has turned super negative on
basically all new technology, so they're moving to try to outlaw AI. They flat out don't want it. Andrew Huberman: But that's like saying
you're going to outlaw the Internet. I don't see how you can stop this train. Marc Andreessen: And frankly, they're
not a big fan of the Internet either. So I think they regret the EU has a very,
especially the EU bureaucrats, the people who run the EU in Brussels have a very
negative view on a lot of modernity. Andrew Huberman: But what I'm
hearing calls to mind things that I've heard people like David Goggins
say, which is, you know, there's so many lazy, undisciplined people
out there that nowadays it's easier and easier to become exceptional. I've heard him say
something to that extent. It almost sounds like there's so many
countries that are just backing off of particular technologies because it just sounds bad from the PR perspective, that it's creating great, kind of, low hanging fruit opportunities for people to barge forward and countries to barge forward, if they're willing to embrace this stuff. Marc Andreessen: It is, but
number one, you have to have a country that wants to do that. Those exist, and there
are countries like that. And then the other is, look, they
need to be able to withstand the attack from stronger countries that
don't want them to do it, right? So the EU, the EU has nominal
control over whatever it is, 27 or whatever member countries. So even if you're like, whatever
the Germans get all fired up about, whatever, Brussels can still, in a lot
of cases, just like flat out, basically control them and tell them not to do it. And then the US, you know, we have a
lot of control over a lot of the world. Andrew Huberman: But it sounds like
we sit somewhere sort of in between. Like right now, people are developing
AI technologies in US companies, right? So it is happening. Marc Andreessen: Yeah,
today it's happening. But like I said, there's a set of people
who are very focused in Washington right now about trying to either ban it
outright or trying to, as I said, limit it to a small number of big companies. And then, look, China's got a whole, the
other part of this is China's got a whole different kind of take on this than we do. And so they're, of course, going
to allow it for sure, but they're going to allow it in the ways that
their system wants it to happen. Right. Which is much more for population control
and to implement authoritarianism. And then, of course, they are
going to spread their technology and their vision of how society
should run across the world. So we're back in a Cold War dynamic
like we were with the Soviet Union, where there are two different systems
that have fundamentally different views on issues, concepts like freedom and
individual choice and freedom of speech. And so, you know, we know
where the Chinese stand. We're still figuring out where we stand. I'm having specifically a lot of
schizophrenic conversations with people in DC right now, where if
I talk to them and China doesn't come up, they just hate tech. They hate American tech companies,
they hate AI, they hate social media, they hate this, they hate that, they
hate crypto, they hate everything, and they just want to punish and
ban, and they're just very negative. But then if we have a conversation a half
hour later and we talk about China, then the conversation is totally different. Now we need a partnership between
the US government and American tech companies to defeat China. It's like the exact opposite discussion. Right? Andrew Huberman: Is that fear or
competitiveness on China specifically, in terms of the US response? You know, you bring up these technologies, and I'll lump CRISPR in there: CRISPR, nuclear power, AI. It all sounds very cold, very
dystopian to a lot of people. And yet there are all these benevolent
uses as we've been talking about. And then you say you raise the
issue of China and then it sounds like this big dark cloud emerging. And then all of a sudden, we need
to galvanize and develop these technologies to counter their effort. So is it fear of them or is
it competitiveness or both? Marc Andreessen: Well, so without them
in the picture, you just have this. Basically, there's an old Bedouin saying: me against my brother; me and my brother against my cousin; me and my brother and my cousin against the world. It's evolution in action, I think. The way to think about it is, if there's no external threat, then the conflict turns inward, and then at that point, there's a big fight between, specifically, tech and then, I would just say, generally, politics. And my interpretation of that
fight is it's a fight for status. It's fundamentally a fight for status
and for power, which is like, if you're in politics, you like the status quo of
how power and status work in our society. You don't want these new technologies
to show up and change things, because change is bad, right? Change threatens your position. It threatens the respect that people have
for you and your control over things. And so I think it's primarily a status
fight, which we could talk about. But the China thing is just like a
straight up geopolitical us versus them. Like I said, it's like
a Cold War scenario. And look, 20 years ago, the prevailing
view in Washington was, we need to be friends with China, right? And we're going to be
trading partners with China. And yes, they're a totalitarian
dictatorship, but if we trade with them, over time, they'll become more democratic. In the last five to ten years,
it's become more and more clear that that's just not true. And now there's a lot of people in both
political parties in DC who very much regret that and want to change to much more of a sort of Cold War footing. Andrew Huberman: Are you willing to
comment on TikTok and technologies that emerge from China that are in
widespread use within the US, like how much you trust them or don't trust them? I can go on record myself by saying
that early on, when TikTok was released, we were told, as Stanford faculty,
that we should not and could not have TikTok accounts nor WeChat accounts. Marc Andreessen: So to start with,
there are a lot of really bright Chinese tech entrepreneurs and engineers
who are trying to do good things. I'm totally positive about that. So I think many of the people mean
very well, but the Chinese have a specific system, and the system
is very clear and unambiguous. And the system is, everything
in China is owned by the party. It's not even owned by the state. It's owned by the party.
It's owned by the Chinese Communist Party. So the Chinese Communist Party owns
everything, and they control everything. By the way, it's actually
illegal to this day. It's illegal for an investor to
buy equity in a Chinese company. There's all these basically legal
machinations that people do to try to do something that's like the
economic equivalent to that, but it's actually still illegal to do that. The Chinese Communist Party
has no intention of letting foreigners own any of China. Like, zero intention of that. And they regularly move to make
sure that that doesn't happen. So they own everything. They control everything. Andrew Huberman: Sorry to interrupt
you, but people in China can invest in American companies all the time. Marc Andreessen: Well, they can,
subject to US government constraints. There is a US government system
that attempts to mediate that called CFIUS, and there are more and more
limitations being put on that. But if you can get through that
approval process, then legally you can do that, whereas the same is
not true with respect to China. So they just have a system. And so if you're the CEO of a
Chinese company, it's not optional. If you're the CEO of ByteDance,
CEO of Tencent, your relationship with the Chinese Communist Party
is not optional, it's required. And what's required is you are a
unit of the party and you and your company do what the party says. And when the party says we get full access
to all user data in America, you say yes. When the party says you change the
algorithm to optimize to a certain social result, you say yes. It's whatever Xi Jinping and
his party cadres decide, and that's what gets implemented. If you're the CEO of a Chinese
tech company, there is a political officer assigned to you who
has an office down the hall. And at any given time, he can come
down the hall, he can grab you out of your staff meeting or board meeting,
and he can take you down the hall and he can make you sit for hours and
study Marxism and Xi Jinping thought and quiz you on it and test you on it,
and you'd better pass the test, right? So it's like a straight
political control thing. And then, by the way, if you
get crossways with them, like... Andrew Huberman: So when we see
tech founders getting called up to Congress for what looks like
interrogation, but it's probably pretty light interrogation compared
to what happens in other countries. Marc Andreessen: Yeah, it's state power. They just have this view of top down
state power, and they view it's that their system, and they view that
it's necessary for lots of historical and moral reasons that they've
defined, and that's how they run. And then they've got a view that
says how they want to propagate that vision outside the country. And they have these programs like Belt
and Road that basically are intended to propagate kind of their vision worldwide. And so they are who they are. I will say that they don't lie about it. They're very straightforward. They give speeches, they write books. You can buy Xi Jinping speeches. He goes through the whole thing. They have their tech 2025 plan. This is like ten years ago. Their whole AI agenda, it's all in there. Andrew Huberman: And is their goal that
in 200 years, 300 years, that China is the superpower controlling everything? Marc Andreessen: Yeah. Or 20 years, 30 years, or
two years, three years. Andrew Huberman: Yeah, but
they've got a shorter horizon. Marc Andreessen: I don't know. Everybody's a little bit like this,
I guess, but, yeah, they want to win. Andrew Huberman: Well, the CRISPR in
humans example that I gave earlier was interesting to me because, first
of all, I'm a neuroscientist and they could have edited any genes,
but they chose to edit the genes involved in the attempt to create
super memory babies, which presumably would grow into super memory adults. And whether or not they
succeeded in that isn't clear. Those babies are alive and
presumably by now, walking, talking. As far as I know, whether or not
they have super memories isn't clear. But China is clearly unafraid
to augment biology in that way. And I believe that that's inevitable,
that's going to happen elsewhere, probably first for the treatment of disease. But at some point, I'm assuming people
are going to augment biology to make smarter kids, not always, but often will
select mates based on the traits they would like their children to inherit. So this happens far too frequently for it to be deemed bad. Either that, or people are bad, because people do this all the time: selecting mates that have physical and psychological and cognitive traits that they would like their offspring to have. CRISPR is a more targeted approach. Of course, the reason I'm kind of
giving this example and examples like it is that I feel like so much
of the way that governments and the public react to technologies
is to just take that first glimpse. And it just feels scary. You think about the old
Apple "1984" ad. I mean, there was one very scary
version of the personal computer and computers and robots taking
over and everyone like automatons. And then there was the Apple version
where it's all about creativity, love and peace, and it had the pseudo
psychedelic California thing going for it. Again, great marketing seems to convert
people's thinking about technology such that what was once viewed as
very scary and dangerous and dystopian is like an oasis of opportunity. So why are people so
afraid of new technologies? Marc Andreessen: So this is the
thing I've tried to understand for a long time, because the history is so
clear and the history basically is that every new technology is greeted
by what's called a moral panic. And so it's basically this hysterical
freak out of some kind that causes people to basically predict the end of the world. And you go back in time, and actually,
this is a historical sort of effect, it happens even in things now where
you just look back and it's ludicrous. And so you mentioned earlier
the satanic panic, the concern around, like, heavy metal music. Before that, there was, like, a freak out around comic books in the 50s. There was a freak out around jazz music in the 20s and 30s: it's devil music. The arrival
of bicycles caused a moral panic in the, like, 1860s, 1870s. Bicycles? Bicycles, yeah. So there was this thing at the time. So bicycles were the first. They were the first very easy to use
personal transportation thing that basically let kids travel between
towns quickly without any overhead. You don't have to take care of a horse. You just jump on a bike and go. And so there was a hysterical panic, specifically, at the time, around young women who, for the first time,
were able to venture outside the confines of the town to maybe go
have a boyfriend in another town. And so the magazines at the time ran all these stories on this phenomenon, this medical phenomenon, called bicycle face. And the idea of bicycle face was
the exertion caused by pedaling a bicycle would cause your face to grimace, and then if you were on the bicycle for too long, your face would lock into place. Andrew Huberman: [LAUGHS] Sorry. Marc Andreessen: And then you would
be unattractive, and therefore, of course, unable to then get married. Cars, there was a moral panic around cars: the red flag laws. Automobiles freaked people out. So there were all these laws in the
early days of the automobile, in a lot of places, you would take a ride
in an automobile and automobiles, they broke down all the time. So only rich people had automobiles. It'd be you and your mechanic in the car. Right, for when it broke down. And then you had to hire another guy to
walk 200 yards in front of the car with a red flag, and he had to wave the red flag. And so you could only drive as fast as
he could walk because the red flag was to warn people that the car was coming. I think it was Pennsylvania. They had the most draconian version,
which was they were very worried about the car scaring the horses. And so there was a law that
said if you saw a horse coming, you needed to stop the car. You had to disassemble the car, and
you had to hide the pieces of the car behind the nearest hay bale, wait
for the horse to go by, and then you could put your car back together. Anyways, another example is electric lighting. There was a panic around, like, whether this was going to completely ruin the romance of the dark. And it was going to cause a whole new
kind of terrible civilization where everything is always brightly lit. So there's just all these examples. And so it's like, okay,
what on earth is happening? That this is always what happens? And so I finally found this book
that I think has a good model for it. The book is called Men, Machines, and Modern Times, and it's written by this MIT professor, like, 60 years ago.
uses a lot of historical examples. And what he says, basically, is, he says
there's actually a three stage response. There's a three stage societal
response to new technologies. It's very predictable. He said, stage one is
basically just denial. Just ignore. Like, we just don't pay attention to this. Nobody takes it seriously. There's just a blackout
on the whole topic. He says, that's stage one. Stage two is rational counterargument. So stage two is where you line
up all the different reasons why this can't possibly work. It can't possibly ever get cheap,
or this, that it's not fast enough, or whatever the thing is. And then he says, stage three, he
says, is when the name calling begins. So he says, stage three is like when they've failed to ignore it and they've failed to argue society out of it. Andrew Huberman: I love it. Marc Andreessen: They
move to the name calling. And what's the name calling? The name calling is, this is evil. This is moral panic. This is evil. This is terrible. This is awful. This is going to destroy everything. Don't you understand? All this is horrifying. And you, the person working on it,
are being reckless and evil and all this stuff, and you must be stopped. And he said the reason for
that is because, basically, fundamentally, what these things
are is they're a war over status. It's a war over status, and
therefore a war over power. And then, of course, ultimately money. But human status is the thing,
because what he says is, what is the societal impact of a new technology? The societal impact of a new technology
is it reorders status in the society. So the people who are specialists in that
technology become high status, and the people who are specialists in the previous
way of doing things become low status. And generally, people don't adapt. Generally, if you're the kind of
person who is high status because you're an evolved adaptation to an
existing technology, you're probably not the kind of person that's going
to enthusiastically try to replant yourself onto a new technology. This is like every politician
who's just like in a complete state of panic about social media. Like, why are they so freaked
out about social media? It's because they all know that the whole
nature of modern politics has changed. The entire battery of techniques
that you use to get elected before social media are now obsolete. Obviously, the best new politicians
of the future are going to be 100% creations of social media. Andrew Huberman: And podcasts. Marc Andreessen: And podcasts. Andrew Huberman: And we're seeing
this now as we head towards the next presidential election. That podcasts clearly are going to
be featured very heavily in that next election, because long form content
is a whole different landscape. Marc Andreessen: Rogan's had, like, what? He's had, like Bernie, he's had like
Tulsi, he's had like a whole series. Andrew Huberman: Of RFK most recently. And that's created a lot of controversy. Marc Andreessen: A lot of controversy. But also my understanding, I'm
sure he's invited everybody. I'm sure he'd love to have Biden on. I'm sure he'd love to have Trump on. Andrew Huberman: You'd have to ask him. I mean, I think every podcaster
has their own ethos around who they invite on and why and how. So I certainly can't speak for
him, but I have to imagine that any opportunity to have true, long form
discourse that would allow people to really understand people's positions
on things, I have to imagine that he would be in favor of that sort of thing. Marc Andreessen: Yeah.
Or somebody else would, right? Some other top podcaster would. Exactly. I totally agree with you. But my point is, if you're a
politician, if you're a legacy politician, you have the option
of embracing the new technology. You can do it anytime you want. Right. But you don't. They're not, they won't. They won't do it. And why won't they do it? Well, okay, first of all,
they want to ignore it. They want to pretend that
things aren't changing. Second is they want to have rational
counterarguments for why the existing campaign system works the
way that it does, and this and that and the existing media networks. And here's how you do things, and here's
how you give speeches, and here's the clothes you wear and the tie and the pocket square, and all of that. How you succeeded was coming up through that system. So you've got all your arguments
as to why that won't work anymore. And then we've now proceeded
to the name calling phase, which is now it's evil, right? Now it's evil for somebody to show
up on a stream, God forbid, for three hours and actually say what they think. It's going to destroy society, right? So it's exactly like, it's a
classic example of this pattern. Anyway, so Morison says in the book,
basically, this is the forever pattern. This will never change. This is one of those things where you
can learn about it and still nothing, the entire world could learn about
this, and still nothing changes. Because at the end of the day, it's
not the tech that's the question, it's the reordering of status. Andrew Huberman: I have a lot of
thoughts about the podcast component. I'll just say this because I
want to get back to the topic of innovation of technology. But on a long form podcast,
there's no safe zone. The person can get up and walk out. But the person interviewing them, and certainly Joe is the best of the very best, if not the most skilled podcaster in the entire universe, at continuing to press people on specific topics when they're trying to bob and weave and wriggle out, he'll just keep either
drilling or alter the question somewhat in a way that forces them to finally
come up with an answer of some sort. And I think that probably puts
certain people's cortisol levels through the roof, such that they
just would never go on there. Marc Andreessen: I think there's another
deeper question also, or another question along with that, which is how many
people actually have something to say. Andrew Huberman: Real substance. Marc Andreessen: Right. Like how many people can actually talk
in a way that's actually interesting to anybody else for any length of time. How much substance is there, really? And a lot of historical politics was to be able to manufacture a facade where, honestly, you can't tell how deep the thoughts are; even if they have deep thoughts, it's kept away from you. They would certainly never cop to it.
be an interesting next, what is it, about 20 months or so. Marc Andreessen: So panic and the
name calling have already started? Andrew Huberman: Yeah, I was going to
say this list of three things, denial, the counterargument, and name calling. It seems like with AI, it's already
just jumped to numbers two and three. Marc Andreessen: Yes, correct. Andrew Huberman: We're already at two and
three, and it's kind of leaning three. Marc Andreessen: That's correct. AI is unusual just because new
technologies that take off, they almost always have a prehistory. They almost always have a 30 or 40 year
history where people tried and failed to get them to work before they took off. AI has an 80 year prehistory,
so it has a very long one. And then it all of a sudden
started to work dramatically well, seemingly overnight. And so it went from basically as
far as most people were concerned, it went from it doesn't work at all
to it works incredibly well in one step, and that almost never happens. I actually think that's
exactly what's happening. I think it's actually speed running
this progression just because if you use Midjourney or you use GPT or any of
these things for five minutes, you're just like, wow, obviously this thing is
going to be like, obviously in my life, this is going to be the best thing ever. This is amazing. There's all these ways that I can use it. And then therefore, immediately
you're like, oh my God, this is going to transform everything. Therefore, step three,
straight to the name calling. Andrew Huberman: In the face of all this, there are innovators out there. Maybe they are aware they are innovators. Maybe they are already starting
companies, or maybe they are just some young or older person who has these
five traits in abundance or doesn't, but knows somebody who does and is
partnering with them in some sort of idea. And you have an amazing track
record at identifying these people. I think in part because you
have those same traits yourself. I've heard you say the following:
the world is a very malleable place. If you know what you want and you go
for it with maximum energy and drive and passion, the world will often
reconfigure itself around you much more quickly and easily than you would think. That's a remarkable quote because
it says at least two things to me. One is that you have a very
clear understanding of the inner workings of these great innovators. We talked a little bit about that
earlier, these five traits, etc., but that also you have an intense
understanding of the world landscape. And the way that we've been talking
about it for the last hour or so is that it is a really intense
and kind of oppressive landscape. You've got countries and organizations
and elites and journalists that are trying to, not necessarily trying, but
are suppressing the innovation process. I mean, that's sort of the
picture that I'm getting. So it's like we're trying to
innovate inside of a vise that's getting progressively tighter. And yet this quote argues that it is
the person, the boy or girl, man or woman, who says, well, you know what? That all might be true, but my view
of the world is the way the world's going to bend, or I'm going to create
a dent in that vise that allows me to exist the way that I want. Or you know what, I'm actually going to
uncurl the vise in the other direction. And so I'm at once picking up a sort of
pessimistic, glass half empty view of the world, as well as a glass half full view. So tell me about that. Could you tell us about that from the
perspective of someone listening who is thinking, I've got an idea, and I know
it's a really good one, because I just know. I might not have the confidence of extrinsic reward yet, but I just know there's a seed of something. What does it take to foster that? And how do we foster real innovation in
the landscape that we're talking about? Marc Andreessen: Yeah, so part is, I
think, one of the ways to square it is, I think you as the innovator need to
be signed up to fight the fight, right? And again, this is where the fictional
portrayals of startups, I think, take people off course, or even scientists
or whatever, because when there's great success stories, they get kind
of prettified after the fact and they get made to be cute and fun,
and it's like, yeah, no, if you talk to anybody who actually did any of
these, like, these things are always just like brutal exercises and just
like sheer willpower and fighting forces that are trying to get you. So part of it is you have to
be signed up for the fight. And this kind of goes
to the conscientiousness thing we were talking about also. My partner, Ben, uses the term courage
a lot, which is some combination of just stubbornness, but coupled with a
willingness to take pain and not stop and have people think very bad things of
you for a long time until it turns out you hopefully prove yourself correct. And so you have to be willing to do that. It's a contact sport. These aren't easy roads, right? It's a contact sport, so you have
to be signed up for the fight. The advantage that you have as an
innovator is that at the end of the day, the truth actually matters. And all the arguments in the world, the
classic Victor Hugo quote is, "There's nothing more powerful in the world
than an idea whose time has come." If it's real, right? And this is just pure substance, if
the thing is real, if the idea is real, if it's a legitimately good
scientific discovery about how nature works, if it's a new invention,
if it's a new work of art, and if it's real, then you do, at the end of
the day, you have that on your side. And all of the people who are fighting
you and arguing with you and telling you no, they don't have that on their side. It's not that they're showing up with
some other thing and they're like, my thing is better than your thing. That's not the main problem. The main problem is I have a thing. I'm convinced everybody else is telling
me it's stupid, wrong, it should be illegal, whatever the thing is. But at the end of the day, I
still have the thing, right? So at the end of the day,
the truth really matters. The substance really matters if it's real. I'll give you an example. It's really hard historically to find
an example of a new technology that came into the world that was then pulled back. Nuclear is maybe an example of that. But even still, there are still
nuclear plants, like, running today. That still exists. I would say the same thing is true in science. Actually, let me ask you this. I don't know of any scientific discovery that was made and then taken back. I know there are areas of science that are not politically correct to talk about today, but
every scientist knows the truth. The truth is still the truth. I mean, even the geneticists in the Soviet
Union who were forced to buy in, like, knew the whole time that it was wrong. That I'm completely convinced of. Andrew Huberman: Yeah, they couldn't
delude themselves, especially because the basic training that one gets in any
field establishes some core truths upon which even the crazy ideas have to rest. And if they don't, as you pointed
out, things fall to pieces. I would say that even the technologies
that did not pan out and in some cases were disastrous, but that were great ideas
at the beginning, are starting to pan out. So the example I'll give is that most
people are aware of the Elizabeth Holmes Theranos debacle, to put it lightly,
analyzing what's in a single drop of blood as a way to analyze hormones
and disease and antibodies, etc. I mean, that's a great
idea, it's a terrific idea. As opposed to having a phlebotomist
come to your house, or you have to go in and get tapped and have vials pulled and the whole thing. There's now a company born out of
Stanford that is doing exactly what she sought to do, except that at least the
courts ruled that she fudged the thing, and that's why she's in jail right now. But the idea of getting a wide array
of markers from a single drop of blood is an absolutely spectacular idea. The biggest challenge that company
has is going to confront is the idea that it's just the next Theranos. But if they've got the thing and t
hey're not fudging it, as apparently Theranos was, I think everything
will work out à la Victor Hugo. Marc Andreessen: Yeah, exactly. Because who wants to go back if they get it to work, if it's real? This is the thing. The opponents, they're not
bringing their own ideas. They're not bringing their, oh,
my idea is better than yours. That's not what's happening. They're bringing silence or counterarguments or name-calling. Andrew Huberman: Well, this is why I
think people who need to be loved probably stand a reduced chance of success. And maybe that's also why having
people close to you that do love you and allowing that to be
sufficient can be very beneficial. This gets back to the idea of partnership
and family around innovators, because if you feel filled up by those people
local to you in your home, then you don't need people on the Internet saying nice
things about you or your ideas, because you're good and you can forge forward. Another question about innovation is
the teams that you assemble around you, and you've talked before about the sort
of small squadron model, sort of David and Goliath examples as well, where a
small group of individuals can create a technology that frankly outdoes what a
giant like Facebook might be doing or what any other large company might be doing. There are a lot of theories as to
why that would happen, but I know you have some unique theories. Why do you think small groups
can defeat large organizations? Marc Andreessen: So the conventional
explanation is, I think, correct, and it's just that large organizations have
a lot of advantages, but they just have a very hard time actually executing
anything because of all the overhead. So large organizations have
combinatorial communication overhead. The number of people who have to
be consulted, who have to agree on things, gets to be staggering. The amount of time it takes to schedule
the meeting gets to be staggering. You get these really big companies and
they have some issue they're dealing with, and it takes like a month to
schedule the pre-meeting, to plan for the meeting, which is going to happen two months later, which is then going to result in a post-meeting, which will then
result in a board presentation, which will then result in a planning off site. Andrew Huberman: I
thought academia was bad. But what you're describing
is giving me hives. Marc Andreessen: Kafka was a documentary. Yeah. Look, you have these organizations: at 100,000 people or more, you're more of a nation-state than a company. And you've got all these competing internal factions; it's the Bedouin thing I was saying before. At most big companies, your internal enemies are way more dangerous to you than anybody on the outside. Andrew Huberman: Can
you elaborate on that? Marc Andreessen: Oh, yeah. At a big company, the big competition
is for the next promotion, right? And the enemy for the next promotion is
the next executive over in your company. That's your enemy. The competitor on the outside
is like an abstraction. Like, maybe they'll
matter someday, whatever. I've got to beat that guy
inside my own company. Right? And so the internal warfare is at least
as intense as the external warfare. This is just the iron law of all these
big bureaucracies and how they function. So if a big bureaucracy ever does anything
productive, I think it's like a miracle. It's like a miracle to the point where
there should be like a celebration, there should be parties, there should be
like ticker tape parades for big, large organizations that actually do things. That's great because it's so rare. It doesn't happen very often anyway. So that's the conventional explanation,
whereas, look, small companies, small teams, there's a lot that they can't
do because they're not operating at scale and they don't have global
coverage and all these kind of, they don't have the resources and so forth. But at least they can move quickly, right? They can organize fast. If there's an issue today,
they can have a meeting today, they can solve the issue today. And everybody they need to solve
the issue is in the room today. So they can just move a lot faster. I think that's part of it. But I think there's another deeper
thing underneath that, that people really don't like to talk about. That takes us back full circle to
where we started, which is just the sheer number of people in the world
who are capable of doing new things is just a very small set of people. And so you're not going to have 100 of
them in a company or 1000 or 10,000. You're going to have
three, eight or ten, maybe. Andrew Huberman: And some of them
are flying too close to the sun. Marc Andreessen: Some of them
are blowing themselves up, right? Some of them are. So IBM. I actually first learned this at IBM. My first actual job job was at IBM
when IBM was still on top of the world, right before it caved in in the early '90s. And so when I was there, it had 440,000 employees. And again, if you adjust for inflation and market size for that same size of business, it would be the equivalent today of like a two or three million person organization. It was a nation-state. There were 6000 people in my
division and we were next door to another building that had another
6000 people in another division. So you could work there for
years and never meet anybody who didn't work for IBM. The first half of every meeting
was just IBMers introducing themselves to each other. It was just mind-boggling,
the level of complexity. But they were so powerful that, four years before I got there in 1985, they were 80% of the market capitalization of the entire tech industry. So they were at a level of dominance
that even Google or Apple today is not even close to at the time. So that's how powerful they were. And so they had a system and it
worked really well for like 50 years. They had a system, which was: most of the employees in the company
were expected to basically follow rules. So they dressed the same,
they acted the same, they did everything out of the playbook. They were trained very specifically
but they had this category of people they called Wild Ducks. And this was an idea that
the founder Thomas Watson had come up with, Wild Ducks. And the Wild Ducks were, they often had
the formal title of an IBM fellow and they were the people who could make new
things and there were eight of them. And they got to break all the rules
and they got to invent new products. They got to go off and
work on something new. They didn't have to report back. They got to pull people off of
other projects to work with them. They got budget when they needed it. They reported directly to the CEO,
they got whatever they needed. He supported them in doing it. And they were glass breakers. And the one in Austin at the
time was this guy Andy Heller. And he would show up in jeans and cowboy
boots and amongst an ocean of men in blue suits, white shirts, red ties and
put his cowboy boots up on the table and it was fine for Andy Heller to do that. And it was not fine for
you to do that, right. And so they very specifically
identified, we have almost like an aristocratic class within our company
that gets to play by different rules. Now the expectation is
they deliver, right? Their job is to invent the
next breakthrough product. But we, IBM management, know that
the 6000 person division is not going to invent the next product. We know it's going to be crazy. Andy Heller in his cowboy boots. And so I was always very impressed. Again, ultimately, IBM had its issues,
but that model worked for 50 years. Right?
Like, worked incredibly well. And I think that's basically
the model that works. But it's a paradox, right? Which is like, how do you have a large,
bureaucratic, regimented organization, whether it's academia or government
or business or anything, that has all these rule followers in it and all these
people who are jealous of their status and don't want things to change, but
then still have that spark of creativity? I would say mostly it's impossible. Mostly it just doesn't happen. Those people get driven out. And in tech, what happens is those people
get driven out because we will fund them. These are the people we fund. Andrew Huberman: I was going to say,
rather, that you are in the business of finding and funding the wild ducks. Marc Andreessen: The wild ducks. That's exactly right. And actually, to close the loop, this is, I think, the simplest explanation for why IBM ultimately caved in, and then HP sort of in the '80s also caved in. IBM and HP were these incredible, monolithic companies for 40 or 50 years, and then they both kind of caved in. I actually think it was the emergence of venture capital, the emergence of a parallel funding system where the wild ducks, or in HP's case their superstar technical people, could actually leave and start their own companies. And again, it goes back to the university discussion we were having: this is what doesn't exist at the university level. This certainly does not exist
at the government level. Andrew Huberman: And until recently in
media, it didn't exist until there's this thing that we call podcasts. Marc Andreessen: Exactly right. Andrew Huberman: Which clearly
have picked up some momentum, and I would hope that these other wild
duck models will move quickly. Marc Andreessen: Yeah, but the one
thing you know, and you know this, the one thing you know is the people on the
other side are going to be mad as hell. Andrew Huberman: Yeah, they're going
to, well, I think they're past denial. The counterarguments continue. The name calling is prolific. Marc Andreessen: Name
calling is fully underway. Andrew Huberman: Well, Marc, we've covered
a lot of topics, but as with every time I talk to you, I learn oh, so very much. I'm so grateful for you taking the
time out of your schedule to talk about all of these topics in depth with us. I'd be remiss if I didn't say that. It is clear to me now that you are hyper
realistic about the landscape, but you are also intensely optimistic about
the existence of wild ducks and those around them that support them and that
are necessary for the implementation of their ideas at some point. And that also, you have
a real rebel inside you. So that is oh, so welcome on this podcast. And it's also needed in
these times and every time. So on behalf of myself and the rest of
us here at the podcast, and especially the listeners, thank you so much. Marc Andreessen: Thanks for having me. Andrew Huberman: Thank you
for joining me for today's discussion with Marc Andreessen. If you're learning from and/or
enjoying this podcast, please subscribe to our YouTube channel. That's a terrific, zero
cost way to support us. In addition, please subscribe to the
podcast on both Spotify and Apple. And on both Spotify and Apple, you
can leave us up to a five star review. If you have questions for me or comments
about the podcast or guests that you'd like me to consider hosting on the
Huberman Lab Podcast, please put those in the comments section on YouTube. I do read all the comments. Please also check out the sponsors
mentioned at the beginning and throughout today's episode. That's the best way to support this
podcast. Not on today's podcast, but on many previous episodes of the Huberman
Lab Podcast, we discuss supplements. While supplements aren't necessary
for everybody, many people derive tremendous benefit from
them for things like improving sleep, hormone support and focus. The Huberman Lab Podcast has
partnered with Momentous Supplements. If you'd like to access the supplements
discussed on the Huberman Lab podcast, you can go to livemomentous, spelled O-U-S. So it's livemomentous.com/huberman,
and you can also receive 20% off. Again, that's livemomentous,
spelled O-U-S, .com/huberman. If you haven't already subscribed to our
Neural Network Newsletter, our Neural Network Newsletter is a completely zero
cost monthly newsletter that includes summaries of podcast episodes as well
as protocols, that is, short PDFs describing, for instance, tools to improve
sleep, tools to improve neuroplasticity. We talk about deliberate cold exposure,
fitness, various aspects of mental health, again, all completely zero cost. And to sign up, you simply go to
hubermanlab.com, go over to the menu in the corner, scroll down to
newsletter, and provide your email. We do not share your email with anybody. If you're not already following
me on social media, I am Huberman Lab on all platforms. So that's Instagram, Twitter,
Threads, LinkedIn, and Facebook. And at all of those places I talk
about science and science related tools, some of which overlaps with the
content of the Huberman Lab podcast, but much of which is distinct from the
content of the Huberman Lab podcast. Again, it's Huberman Lab on
all social media platforms. Thank you once again for joining me for
today's discussion with Marc Andreessen. And last but certainly not least,
thank you for your interest in science.