So fundamentally, AI systems are based on mathematics, right? So you need to create a system which must not be made of inanimate matter; it could theoretically also be made of animate matter, by using organisms or so to create an AI system. But you must have a model of how to do this. Okay, this is rapidly going off the rails. Welcome to Doom Debates. I'm Liron Shapira. Today I'm reacting to a recent interview of Jobst Landgrebe on the Geopolitics & Empire podcast. Jobst, and by the way that's spelled J-O-B-S-T, is a German mathematician, computer scientist, and entrepreneur known for his critical views on artificial intelligence. He is the co-author, along with the philosopher Barry Smith, of the 2022 book Why Machines Will Never Rule the World: Artificial Intelligence Without Fear, which argues against the idea that AI will ever achieve humanlike intelligence or consciousness. Landgrebe's work spans the fields of mathematics, computer science, and philosophy, where he explores the theoretical limits of machine learning and artificial intelligence. Landgrebe has a background in both academia and industry, having worked extensively on practical AI applications. In addition to his writing, he has contributed to the debate on AI's potential impact on society, technology, and ethics. When we dive into this interview, you're going to see he pretty quickly gets to his core argument, which has to do with the physical difficulty of building a software system that can replicate the human brain at a low level with fine detail. That's his huge sticking point that he writes about in his book and that he talks about in this podcast interview. I don't think you're going to be very convinced by Jobst, but the particular reasons why I disagree with him may be of interest, so let's dive in. You talk about artificial general intelligence, AGI, and how, for, you know, mathematical reasons, it's impossible, some of these things that the Elon Musks of the world and others are telling us, this idea of machines
ruling the world, becoming sentient. Right, so tell us about, you know, the thesis you've been working on, and why we can relax a bit, why Skynet will not become self-aware, as I like to say. Yes, so basically, when I started working in AI, I just saw it as a form of applied mathematics, which it is. You know, it's basically a form of working with data and finding patterns in data and using these patterns for purposes. One reason is to find new patterns, really, like it's done in astrophysics, for example, with AI-based methods for many decades. And another way to use it is to automate repetitive processes. So when you have a process that has a regular pattern, you can kind of use the AI to identify the pattern and also to automate it. This is a common move people make who are trying to downplay the potential power of AI. They basically say, look, it's just extrapolating a pattern, that's all it's doing. That's kind of similar to people saying, look, it's just talking English, in the case of LLMs, it's just giving you a sentence that sounds good. Or a popular favorite: it's just doing statistics, it's just giving you something that's statistically likely, it's interpolating. What's misleading when they make statements like that is that they're defining a category of things where that category of things actually contains powerful, scary superintelligence. So for instance, if it's just talking English, well, what if it's a super genius at telling you the plan to take over the world in English? That's just talking English. Similarly, in the Martin Casado episode, when he says, yeah, it's just statistical extrapolation, his host Nathan Labenz asked him to comment on the idea of superintelligently engineering cell behavior, which is a ridiculously complex system, engineering cell behavior. But Martin's response was to be like, oh yeah, engineering cell behavior, you know, that's also statistical extrapolation, it's just more statistical extrapolation. So it's this very sneaky move
that a lot of people make to downplay AI. When they say it's just this category, the category potentially gets ridiculously powerful, and they don't seem to explicitly acknowledge that. They think that they've successfully downplayed what AI can do by fitting it into this category. Or, in the case of what Jobst is saying here, it's just extrapolating patterns like they use for astrophysics. Okay, well, what if solving astrophysics is something that it can do? Because that meets your definition. Isn't solving astrophysics kind of a superhuman ability? And yeah, it didn't have that ability, but it's not because your definition of extrapolating patterns in astrophysics is meaningfully constraining what it can do. You're just sneaking in this idea of it being limited. What you mean is, oh, it's really limited, it's not scary, but what you're saying is, it's only extrapolating patterns. You're not acknowledging what a superintelligent pattern extrapolator would be capable of doing. You're just being imprecise in how you're trying to communicate that AI isn't a big deal. And that's what AI is, and that's what I started with in 1998, when I started my first postdoc position, and since then it has continued to evolve. I think that was during the second AI winter, and now we are in the third AI wave, and that started in 2012, when Google published an algorithm of which they said it could by itself recognize the face of a cat, which was wrong, but still, that created quite some hype, and then many other hype things happened to create the AI wave we are currently in, this enthusiasm for AI, the belief that AI systems will develop consciousness and intelligence. I actually don't make any claims about AI developing consciousness. I think the relationship between consciousness and intelligence is kind of fuzzy. I suspect it's orthogonal. I suspect you can get a really high level of intelligence while avoiding consciousness,
whatever that means, but I'm confused about the topic. It's possible that they go hand in hand, but I think it's more precise to say that my claim, and the claim of many people I know, is that intelligence of AI is coming and it's going to be superhuman pretty soon, not necessarily consciousness. And why is this wrong? So, fundamentally, AI systems are based on mathematics, right? So you need to create a system which must not be made of inanimate matter; it could theoretically also be made of animate matter, by using organisms or so to create an AI system. But you must have a model of how to do this. Okay, this is rapidly going off the rails. AI systems are based on mathematics, right? Yeah, what else? I mean, the whole universe has a big mathematical model, right? When we understand the universe, we're understanding the math of the universe. So whenever somebody comes out of the gate saying, but it's made of math, it's like, yes, aren't we just talking about how epistemology works in general? Aren't we just talking about what it means to talk about something, that we're talking about it mathematically? And then he goes on to say it must not be made of inanimate matter. I mean, something that we discovered a couple hundred years ago is that things that are animate, like animals, or like organs, parts of your body, your hand responding to your will: when you go down to a low enough level, you don't just get animate level after animate level. At a low enough level, suddenly you get inanimate matter. So the whole concept of animation, or spirits, or animism, or whatever you want to call it, is a high-level phenomenon. It's not built into the laws of physics. You're not going to see it in the atoms or the quarks at a low level. You're only going to see it as a property of some inanimate low-level configurations. So I don't know where Jobst is going yet. I'm listening to it for the first time; this is my first exposure to Jobst Landgrebe. But it's a very weak start if he's
already talking about inanimate matter and mathematics. So you must basically know what you want to do, which steps you want to take to engineer the AI, and that means that in the end you need equations that describe the AI. And mathematical equations, we can have a lot, and they are very useful, but they have a limited scope. So mathematics is a science that has a limited scope. This is very, very hard to understand, because for most of our contemporaries it seems that mathematics and the sciences based on mathematics have totally revolutionized everything, like modern physics. Since industrialization, we have totally changed the natural biosphere in which we were living before into a technosphere, and this is so impressive: that we can have this conversation on Zoom, that you can take an airplane, that I could visit you, in 10 hours I could be in Mexico to meet you, and so on. It's so impressive, when it took, you know, five or six weeks 200 years ago. And so impressive also is the medical progress, that people believe that there is no limit to this modeling of nature. Hell yeah, that's exactly what I believe. I think we've seen incredibly strong evidence that that's the case, but go on. In reality, that's not the case. In reality, we have already used up, in the applications of physics, most of the great inventions that were made up to 1925, when quantum physics was formalized by Schrödinger, or '26 it was. And we have used up a lot of this and are now stagnating in physics. For 100 years we have made many fewer discoveries, but we still apply in technology the discoveries made before 1926, and so we are now faced with a slowdown of technical progress. But for the contemporaries this is not visible yet; they are euphoric about the development of the last 20 years, I mean social media and so on. Because they don't know what has happened in physics, they cannot anticipate what I can already see: that the progress is now limited. What?
Even if you assume that the understanding of physics that we have is just never going to get any better, and we're just blocked on physics, that still allows for a ton of progress, because most of the technological progress that gets made is abstracted away from whatever the laws of physics are. For instance, if I want to build another software algorithm, I don't have to worry about the details of my computing hardware, right? It's an innovation in software. And if we ever understand physics better, great, maybe we can make faster computer chips, but that doesn't affect my software innovation. I mean, it might add a multiplier on the efficiency of my software innovation, but my software innovation could already be exponentially faster even without that multiplier. So the idea that physics is a bottleneck for all innovation, I think, is massively overstated. I think physics is an area that we could stand to do a little bit better in, and yeah, there has been some stagnation, but it's because of diminishing marginal returns. If we learn the theory of everything and have a full understanding of physics, there's a very good chance that that'll not actually improve the sum of our tech progress by that much. It's very possible that we've already reaped the majority of returns that our species is ever going to reap from physics progress, and all the progress beyond that is going to be done on engineering that mostly uses levels of physics that we already understand today in 2024. So again, another head-scratcher here from Jobst. I think we're diverging very quickly on some very foundational beliefs, but let's follow his train of thought further. Now, why am I saying this? Because I want you and the audience to understand where this euphoria for mathematics comes from. It comes from this very rapid development that we've seen in the last two or three generations, and people now just linearly extrapolate it, as if it would continue to go on like this. But that's not happening. Yes it is. And why
that's not happening, that's very interesting, and this is now an argument mainly from physics. So when we think of physics and technology, we always think of the positive results of physics, and of course a very positive result is nuclear fission, because we can have nuclear plants. Okay, we can also have atomic bombs, which is not so nice, but we can have the plants. And then there are many other things; also quantum mechanics has led to very positive results, like the laser and transistors and many other things. But physics also has negative results, and the main negative results have been achieved in thermodynamics. And thermodynamics is a branch of physics which has major laws, the first and second laws of thermodynamics. Many have heard this at school, but most have not realized that thermodynamics also shows the limits, in statistical mechanics, the boundaries of physics. This is a negative result from physics, and this negative result hasn't been appreciated very much, with all this euphoria about the progress. I'm skeptical about a distinction between positive and negative physics. Like, it's true that some insights in physics and computer science and all kinds of fields are insights about what is hard or impossible to do, for sure, but I wouldn't be quick to label that negative physics. Because if you start getting insight about what you can't do, you might actually be a better engineer, because the process of engineering is the process of searching a space of designs. And if you have a level of insight, thanks to your negative physics, that tells you, hey, don't bother searching this space of designs because it's not going to work, like it violates conservation of energy, for instance, then if you start crossing out a big section of your exponentially sized search space when you're trying to search for designs, you're going to become a better engineer, because you're going to be able to narrow down to the
space of good designs more easily. So I wouldn't be quick to call limitation theorems, you know, the speed of light, conservation of energy, the second law of thermodynamics, I wouldn't be so quick to call these negative physics, right? They're just more physics, and yes, they're boundaries, but they could be positive in the sense that they're helpful to increasing your engineering power. And this negative result says that there are systems in nature that have essential properties, so properties which are really related to their very nature, that prevent mathematical modeling of these systems. And this is the core argument we make: that the human mind is a complex system, and such complex systems, in the sense of thermodynamics, cannot be modeled mathematically, for structural reasons of mathematics. They're incompatible with our approach to modeling reality with mathematics. I don't think any known theorems significantly prevent the ability to model systems like the human brain. The second law of thermodynamics says that it's going to be really hard to model it; it might take a lot of resources and generate a lot of waste heat, or it might say that we can't model it at a very low level practically. Also, if you think the brain is a quantum computer, which it's pretty obviously not, but some people do, then quantum theory tells you that you can't perfectly replicate a quantum state, so that's a limitation theorem. But we have much stronger theorems that tell us that you can have a copy of a computational system like the brain and have it be arbitrarily high fidelity for computational purposes, and not worry about any of these low-level constraints, because fundamentally the brain does its work digitally. When neurons fire, they just have to reach some threshold to make other neurons fire, similar to how digital logic works inside of a computer. If you don't get the threshold perfectly, to perfect accuracy, it's okay. The brain is not encoding information in, like, the hundredth decimal place of these
kind of firing limits. It's a fundamentally digital system in how it does its computation. I can't tell you that with 100% certainty, but I just consider it overwhelmingly likely, just in terms of, like, why wouldn't it be? Why would evolution not take the easy way out and make the brain digital? Which is the same as my skepticism about the brain being a quantum computer. It's like, why would evolution do that? There's nothing that humans do that makes us need to be a quantum computer. Like, you're really using quantum theory just to make humans grope their way through basic epistemology, basic bumbling science? You just don't need this complexity. You don't need this way over-engineered analog thermodynamic computer architecture. You can just do it with a naive digital computer. So that's one piece of evidence why I consider it obvious that the brain's architecture is a classical digital computer. Another piece of evidence is just: look at all the other animals. We're simulating them better and better, and we're not running into any major problems. So it would be weird if suddenly this fundamental architecture that we have no problem simulating takes a huge left turn when it gets to humans, and something evolved in the last million years that's so different from everything else, given, you know, a one-megabyte genetic difference at most. Again, I can't rule it out entirely, but it just seems like wild speculation to me, compared to a pretty confident hypothesis that it's just a classical computer running a certain algorithm. So Jobst's claim that you can't model the human brain with mathematics seems deeply confused. It's not that I'm 100% confident; it's just that all the evidence I've seen has pointed one way. I've seen pretty much no evidence to suspect any of this stuff about the brain. I guess the only evidence I've seen is just the fact that we haven't perfectly proven that we can have a superhuman AI or a perfect copy of the brain yet. So until it's
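To make the threshold point concrete, here's a toy sketch of my own, with made-up weights and inputs, not a model of any real neuron: a unit that fires when its weighted input sum crosses a threshold gives the same all-or-nothing output even when its analog inputs are jittered, as long as the sums stay clear of the threshold.

```python
import random

def threshold_unit(inputs, weights, threshold=1.0):
    """Fire (1) iff the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

random.seed(0)
weights = [0.6, 0.9, -0.4]  # made-up weights, purely for illustration
patterns = [[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]

clean = [threshold_unit(p, weights) for p in patterns]

# Add small analog noise to every input and recompute the outputs.
noisy = [threshold_unit([x + random.uniform(-0.01, 0.01) for x in p], weights)
         for p in patterns]

print(clean == noisy)  # True: small noise leaves the firing pattern unchanged
```

The point isn't that the brain is literally this simple; it's that a thresholded system doesn't encode information in the hundredth decimal place of its inputs.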
conclusively proven that we can engineer a brain replica or a perfect scan of a brain, then he still has an opportunity to say that his thesis is plausible. Right? So that's why I can't be 100% sure, but I think this is a weak line of argument. This is the core argument of the book, and now I give you the opportunity to ask questions, and then I will maybe detail it a bit more. Yeah, I think that last argument you just made, that's kind of what convinced me regarding what you're saying, that they cannot, because they talk about, you know, the Ray Kurzweils of the world basically getting access to our soul, right? They talk about the singularity, and saying that they'll be able to copy your soul and upload your consciousness, which for me is synonymous with your soul, your consciousness, your sense of self, and putting that into a digital computer and living forever. And I don't think that's possible. I feel like only God can access your soul. There's no technology that can ever have that capability, and it kind of refers to what you're talking about: they cannot model. They have to first model, like, the mind and the soul, and they can't do that. All right, we're all the way down the slippery slope of AI-skeptic claims, to the rock bottom where you say only God can access your soul. I'm not going to debate God, but I will observe that humans seem to be gaining access to deeper and deeper levels of your soul, because today we have AI-assisted technology that can scan your brain and detect what mental imagery you're seeing. So you can even try to paint something in your mind, in your mind's eye, and you can get a rough reading on the computer screen of what your mind's eye is painting. So maybe your soul is beneath that, but your soul's mind's eye is now something that's accessible to AI and to reading, right? So it seems like your soul has to beat a line of retreat and be like, well, the emotional feeling that I have is my soul. Okay, but we can start scanning your
emotional feeling. But you could be like, well, the free will that I have that's generating that stuff is my soul. It's like, okay, but we can find, like, strong correlations with the decision that your free will is going to make. So it seems like even that part of your soul is now retreating away from the perceptive ability of our scanning technology. You know, it's very much God of the gaps. But I get why: as long as AI can't perfectly, 100% replicate the input-output behavior of everybody's brain, it's going to keep being an attractive claim to make, to say only God can access your soul, and technology can't model the brain, and the brain is infinitely special, and therefore there will never be superintelligence. Because ultimately, the payoff of all this for a lot of people is they can sit in their room at night, and the anxiety can creep in, and they can bat that anxiety away, because they can say, hey, the fact that we don't have human-level AI yet is a good support pillar for my faith in this God and my belief that everything is going to be okay, because I'm going to die and go to heaven, because I have this tapestry of religious beliefs, and as long as superintelligent AI doesn't get proven out as this thing that can exist, then my religion is more likely to be correct. But that said, I don't come here because I want to make specific psychoanalysis accusations of people in particular. So maybe Jobst is wired totally differently, and he's analyzing it totally rationally, and he's totally willing to abandon the belief that God can't access your soul; he just wants to hear evidence. We can be charitable and give him the benefit of the doubt that that's the kind of person he is. I wanted to read from your book here, quote: AI feeds most conspicuously into what is now called transhumanism, the idea that technologies to enhance human capabilities will lead to the emergence of new posthuman beings. In one scenario, humans themselves will become immortal, because they will be able
to abandon their current biological bodies and live on forever in digital form, end quote. So, you know, what do you think they're up to here? So, because you are referring to the soul, which we don't do in the book. Privately I do, because I'm a Christian, but as a scientist I don't, because it's not part of science; it's part of theology. And today, I mean, for a couple of centuries now, since the 18th century, we have this distinction; we separate theology and science. So I'm not using a theological argument, neither am I using an existential argument in the tradition of Heidegger, which is what some critics of AI did. But my argument, or rather the argument of Barry Smith, my co-author, and me, is really geared towards the properties of these complex systems. Okay, so Jobst is a Christian, but he's going to be careful not to make a theological argument for his beliefs. He's going to make an argument that's supposed to be convincing to atheists like me. Fair enough. So, the properties of complex systems. Complex systems have an evolutionary character. That means that their properties can change unexpectedly, that they can not only acquire new elements but also new element types. Like bacteria: they can obtain a new genetic element that they didn't have before, that enhances their properties in an unexpected way. This is how they acquire antibiotic resistance, for example. And simple systems cannot do that. Complex systems also, and this is very, very important, have a non-ergodic phase space. That means that when you sample events from their behavior, you never get the same pattern, whereas in an ergodic space, when you sample from it, you can make sure over time that every sample has the same likelihood and the same distribution, or is drawn from an overall distribution. Like when you sample, for example, water from this glass, and this glass is not moving at all, then it will
be composed everywhere in the same way, chemically. That's an ergodic system. But when you start to shake it, like I do now, it becomes non-ergodic, because if I now shake heavily and air gets into it, then the distribution of the air molecules and water molecules is different at every point in space, and whenever I sample something, it will always be different. I cannot find a pattern anymore. That's called non-ergodicity. So, for example, every wave of water is non-ergodic; that's why we also have a wave on the book cover, because waves are non-ergodic systems. So if you sample forever the composition of a wave, you can never predict the next wave. So these systems are unpredictable; they are, in a way, patternless, and mathematical models always need repetitive patterns. Wait, wait, wait. Mathematical models always need repetitive patterns? Come on, it's so easy to make a mathematical model of such a system. Like, a very simple cellular automaton like Conway's Game of Life is super chaotic and is arbitrarily complex; it's known to be Turing complete. So you can literally have any pattern of any arbitrary complexity evolving in a simple two-dimensional discrete cellular automaton like Conway's Game of Life. Similarly, you can have a classical computer running a classical computer program, which could potentially, I claim, simulate the human brain. But whether or not it can simulate the human brain, it can simulate systems that we know are arbitrarily chaotic, arbitrarily non-ergodic, arbitrarily complex. I'm sure Jobst knows this, but given that a statement slipped out like the idea that mathematical models always need repetitive patterns, he's at best being super imprecise and misleading. I'm already getting the sense that his MO, which is pretty common, is to basically blur levels of organization, to not realize that higher levels of organization can be qualitatively different from the lower levels that they're built on, because it gets back to something
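The Game of Life claim can be sanity-checked in a few lines. This is my own sketch, and the full Turing-completeness construction is of course far more involved, but it shows the point: the entire rule set is a couple of lines of arithmetic on a grid, and yet structures like the glider emerge and propagate out of it.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations the same 5-cell shape reappears,
# shifted diagonally by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

That `return` statement is the whole physics of the system; everything else, up to and including universal computation, is emergent behavior that the rules never mention.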
you said before, this idea of inanimate matter, right? So to him, it's hard to think about these animated beings whose low-level constituents are inanimate, and it seems like that's what he's saying now. He can't imagine a chaotic system where, if you dig down to the rule, oh, it's just like a simple Game of Life rule, a simple automaton rule. He can't seem to visualize complexity emerging from simple rules and classical computation. So I may be wrong about how I'm interpreting him; he probably has more subtle claims that we'll hear later in the podcast, but so far I'm just confused and frustrated. This is a hard reason why we can't use mathematics to model them properly. Also, by the way, this evolutionary character also means that in mathematics you always have a Cartesian coordinate system, a vector space, in which you project the data points, and if you suddenly add a new coordinate to this coordinate system, the data points will not fit in anymore. That's also, to put it in easy layman's terms, why this doesn't work. His terms there aren't lay enough, because I'm pretty confused about the way he's using Cartesian points to make his plane, but I get the gist: it's complex and hard to model. Yeah, and then also, complex systems are driven, so they have a flow of energy flowing through them. So bacteria create energy from matter; we do this as well. You know, we consume oxygen and exhale carbon dioxide, and this is because we use the oxygen to make energy out of nutrients, in the citrate cycle and in the mitochondria. So there's a constant flow of energy through complex systems, and this flow of energy creates turbulence and many other phenomena that cannot be modeled mathematically. No, no, no. The flow of energy through the cells of the human brain is something that the human brain's operation abstracts away. The way that the neurons fire is only very slightly affected by
the current energy levels of those neurons. Like, sure, those neurons can run out of energy or run out of oxygen and die and shut down, similar to a computer, right? If you cut the power supply, or maybe the power supply is about to cut out and it operates at slightly less than the normal voltage before cutting out entirely, then yeah, the computer is going to have a transition between working and not working. But the thing about a digital system like that is that it tends to either just work the way it's supposed to or shut down, and that's mostly what you're going to get with a human brain. It usually just does its job, or it doesn't do its job, based on whether it has the normal amount of energy and other operational parameters. So this idea of, like, oh, you have to model the energy flowing through if you want to get an accurate read on the human brain: I mean, yeah, if you go down to the lowest levels, it's the same question as before. Do you have to model every quark? Do you have to model every energy flow? The answer is almost certainly no. We have so much experience with different systems that are arbitrarily complex and yet don't need to be modeled at the lowest levels. But Heisenberg said that when he dies and talks to Saint Peter, or God, in heaven, God will certainly explain to him how general relativity works and what it's caused by, but not turbulence; turbulence God will not be able to explain. Turbulence is a basic phenomenon of complex systems, and so, in the end, you know, modern physics knows that we cannot mathematically model complex systems. But our mind is complex, the processes in it are complex, we don't know how it works, and that's why we can't model it. This is very, very important to appreciate; that's why I took a few minutes to explain this once more. A bird's wing flying through the air, with all the turbulence of those air particles: nonetheless, you can have a computer simulator that's pretty good at it. Right? Not 100% perfect, but good enough
for engineering, and it can be made more and more precise at the expense of, let's say, more and more computation time and space, right? Computational resources. But we're pretty familiar with those trade-offs, and we get the accuracy levels that we need to get. You might say, no, no, no, fluid dynamics is such a complex, chaotic system that once you let it progress for a few time steps, it's hopelessly lost to any attempt at simulation; the inaccuracy is going to grow. And you can say the same about the human brain. You could say, how do you know that the simulation quality isn't just going to slip away from your grasp the more time steps you run it? But here's the thing: these systems, these engineered systems or these naturally selected systems, like a bird's wing or an airplane's wing or a human brain or an AI, these kinds of systems are only engineerable, to natural selection or to humans, to the degree that they're predictable. So when we go and model them, like when we go and model a human brain, yeah, we're not going to literally get it perfect at every level, but the degree to which we can model it is at least the degree to which natural selection was able to select on its genes. So we can model the work that the genes are doing that natural selection sees fit to replicate. The fidelity of natural selection deciding, yeah, this is a good brain architecture, let me copy it into the next generation: to the degree that it has enough information to make a survival-probability decision like that, we can also have similar fidelity as modelers to predict whether the brain will do that kind of relevant behavior, the same way that the genes of the next generation effectively make that prediction. To the extent that things are chaotic, that's fine. We can't predict every little atom, but that's not going to be relevant to the part of the brain's architecture that was selected to be there in the first place. A brain gets built out of DNA, and if the brain ever has a new behavior that's
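The accuracy-versus-computation trade-off mentioned above can be shown with about the simplest possible simulation. This is my own toy example, nothing to do with brains or wings specifically: forward-Euler integration of exponential decay, where halving the step size costs twice the work but systematically shrinks the error.

```python
import math

def euler_decay(dt, t_end=1.0):
    """Integrate x' = -x from x(0) = 1 using forward Euler steps of size dt."""
    x = 1.0
    for _ in range(round(t_end / dt)):
        x += dt * (-x)
    return x

exact = math.exp(-1.0)
errors = [abs(euler_decay(dt) - exact) for dt in (0.1, 0.05, 0.025)]
print(errors[0] > errors[1] > errors[2])  # True: more steps, less error
```

The same pattern holds for serious fluid or biophysics solvers: you buy accuracy with compute, and you stop refining once the answer is good enough for the engineering question at hand.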
worth adding a new base pair into the DNA to keep in the next generation, then that behavior has to be sufficiently consistent and predictable across generations. Therefore it's going to be modelable by humans. So whatever the genes can naturally select for almost certainly is a property that humans can notice and model. Natural selection doesn't say these should be the genes for your brain based on an infinitely precise simulation of what the brain is going to do; it's based on noticing, oh, this brain did a good job running roughly these algorithms. Then you might say, okay, but there are going to be other properties that don't get encoded and copied into the genes. They're just, like, byproducts of what happens when the genes build a brain; they're part of the brain that's not directly genetically encoded but is affected by the environment. Yeah, you can always try to make those claims, that things won't be engineerable, that things will just be too complicated, that chaos will win over engineering. And I guess the strongest case where you might be right is, like, weather prediction five years into the future. So even a superintelligent AI might struggle to predict the weather five years into the future; that might be a good scenario where chaos really does triumph over order. Okay, but most of the time, engineering just seems to win. Natural selection just seems to make brains that are very robust and predictable for what they need to do to accomplish survival and replication. Similarly, humans seem to be able to build systems like airplanes that are just extremely reliable, even though fluid dynamics is a nightmare. In point of fact, we have planes that go millions of miles without an accident. That's what happens in practice. So Jobst is basically just denying that. When it comes to the human brain, he's basically saying, no, no, no, this is going to collapse into chaos, and that just doesn't seem to be the case for something like the human brain. Now, what do these
technocratic elites want to do and why well Yuval Harari is a good example right he's a historian he has no clue of mathematics he doesn't understand physics biochemistry chemistry biology he really doesn't understand but he believes that we can now with science totally change mankind so he believes in digital immortality in transhumanism that we can create cyborgs that we can genetically manipulate humans to create higher intelligence and all of this and what these people get wrong so why do they believe in this because they extrapolate like it was done in the 18th century by people like Laplace these French enlightenment philosophers they believed that nature can perfectly be modeled with mathematics I mean if you just want to genetically engineer humans to have higher intelligence a naive program of artificial selection basically breeding the smartest humans you can find that's much more likely than not to work right I mean it's never even been tried seriously across that many generations why shouldn't it work I'm pretty optimistic that it'll lead to smarter humans than the smartest humans who have ever lived something like a 200 IQ human because I think we're going to remove a lot of constraints like if they have a bunch of diseases that come with breeding for high IQ we now have the technology to cure a lot of those diseases or manage them and also if their head is really really big all right that's a constraint that natural selection ran into it couldn't make your head too big even though it sure tried well now we have C-sections so the head can actually get a lot bigger right so even without these kind of bonus things even if you just kept all the traditional evolutionary constraints without using this kind of modern medicine and C-sections or even without any of that just doing a breeding program should work fine so it's interesting to me that Jobst is pushing back on this idea of oh my God genetically
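The "naive breeding program" intuition is essentially truncation selection from quantitative genetics: a trait controlled by many small additive effects still responds steadily when you breed from the top of each generation. A toy Python sketch, with every parameter made up purely for illustration:

```python
import random

# Toy truncation-selection demo: a trait built from many small additive
# loci rises generation after generation when the top 20% are bred.
random.seed(0)
N_LOCI = 500   # many small-effect loci, as in polygenic traits
POP = 200      # population size per generation

def trait(genome):
    # each 0/1 allele contributes one tiny additive unit
    return sum(genome)

pop = [[random.randint(0, 1) for _ in range(N_LOCI)] for _ in range(POP)]
means = []
for gen in range(10):
    pop.sort(key=trait, reverse=True)
    parents = pop[:POP // 5]  # keep the top 20%
    children = []
    for _ in range(POP):
        p1, p2 = random.sample(parents, 2)
        # each child inherits one allele per locus from either parent
        children.append([random.choice(pair) for pair in zip(p1, p2)])
    pop = children
    means.append(sum(trait(g) for g in pop) / POP)

print(means[0], means[-1])  # the mean trait climbs across generations
```

Even with 500 loci the mean climbs every generation; "thousands of factors" flattens the gradient, it doesn't erase it.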
engineering intelligence that's so hard that's so chaotic how will we ever do that it's like come on that's an easy one Laplace believed that if you can measure every motion in the universe or on Earth and put it in a gigantic system of differential equations you can predict the future perfectly he called this Laplace's demon a demon that could predict everything I think that is true in a certain deep sense like I do think the universe probably has a mathematical model that could be written down in a larger universe like there's no particular reason why that can't be the case people will be like oh what about quantum uncertainty but if you believe in quantum many worlds as I do you simply write down all the information about all the worlds and suddenly you're back to having a deterministic system so I do think that's useful to be like hey there's some whiteboard in some larger universe that just writes down everything there is to know about our universe and if you put it in a big enough computer you can just evolve it deterministically and just say everything there is to say about our universe but I'm also more than happy to admit that a lot of things make that totally impractical in our universe and so we do have to spend a lot of time thinking what are the limits of our knowledge within this universe what are we allowed to know and not know about what's written on that big whiteboard in the bigger universe so Laplace's demon is basically good as epistemology but impractical as engineering you know already in the 18th century towards the end Immanuel Kant recognized this is nonsense and that we cannot mathematically model natural systems and he wrote this already in one of his three most important books the Critique of Judgment I think it's called in English where he describes already that for natural systems mathematical models don't work come on we can't mathematically model natural systems that's just how all useful engineering
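The "whiteboard in a larger universe" point is just determinism: if the demon has the complete state and the exact update rule, the future is a pure computation. A minimal sketch, where the "universe" and its update rule are entirely made up for illustration:

```python
# Laplace's-demon sketch: with a deterministic update rule, identical
# complete state descriptions yield identical futures, forever.
def step(state):
    # toy rule: each cell becomes a fixed function of its neighbors
    n = len(state)
    return tuple((state[i - 1] + 3 * state[i] + state[(i + 1) % n]) % 97
                 for i in range(n))

def evolve(state, steps):
    for _ in range(steps):
        state = step(state)
    return state

world_a = (3, 1, 4, 1, 5, 9, 2, 6)
world_b = (3, 1, 4, 1, 5, 9, 2, 6)  # the demon's perfect copy
assert evolve(world_a, 1000) == evolve(world_b, 1000)
print("identical states, identical futures")
```

The practical catch, as the commentary says, is getting and storing the complete state, not any indeterminism in the rule itself.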
gets done by mathematically modeling natural systems it seems like you're being very strict about getting like 100 digits of precision in your model or something but we seem to always have models that do the job don't we right that's why we're enjoying a high standard of living because the people who built all the nice stuff that we get to enjoy had mathematical models that were good enough to do the job that seems to be the state of affairs for intelligent beings is that they have engineering powered by mathematical models that technically they're not perfect but they're good enough to do the job good enough to do the job and then in thermodynamics it was thoroughly proven that this is the case but people like Harari and Elon Musk they don't know this and they don't appreciate it and so they believe that all this can be done and why is it wrong because all of this would require us to be able to model what's going on to have equations that describe it thermodynamics just says that anytime you increase the alignment of your own knowledge representation with the world around you anytime you increase your knowledge so that the universe seems more regular to you you've done that at the expense of taking some nearby region of the universe and making it less known to you more chaotic to you hotter to you because there's apparently a deep relationship between the uncertainty of your knowledge state and the temperature crazy stuff but anyway thermodynamics doesn't say that you can't engineer extremely precise systems that are predictable to you it just says that you can't literally know where every atom in the universe is at all times but nobody's even trying to right as long as we're willing to go pump out a bunch of exhaust of atoms where we don't understand their exact configuration which we are there's no problem doing that there's plenty of atoms that we can shoot off into space it's not a problem as long as we're willing to make that sacrifice then we can
have arbitrarily good understandings of the systems that we're working on engineering which again in practice we're all familiar with going on Amazon and selecting from millions of products that all are engineered well enough to do the job so engineering does actually work for example intelligence intelligence is an omnigenic property so most of the genome 80,000 loci in the genome at least or more are needed to explain only 70% of the variance of intelligence between individuals if you measure this with genetic methods so there are thousands and thousands and hundreds of thousands of factors that make up intelligence and so intelligence is not you know there's not two or three genes that lead to intelligence but it's a super complicated property and because it's so complicated we don't know what in the genome we would have to change to make somebody more intelligent the delta in the naturally selected portion of the genetic code of the human brain between humans and the common ancestor of humans and other apes that delta is maybe a megabyte of information so there's a megabyte of genetic code coding for the different proteins that make the human brain develop differently and have all of that IQ increase that we have compared to the other apes so if the other apes have an IQ of like 50 or whatever you want to call that and we have an IQ of like 100 or even 150 right there's no ape that gets close to having 150 IQ that huge meaningful delta is maybe a megabyte of genetic difference so when Jobst is saying oh my God how are you ever going to unpack 80,000 genes okay yeah it's not easy but one megabyte leading to this much useful functionality I think we can take a crack at it I think we got a shot it's not that much especially since the brain has a lot of regularity it's not like it's one megabyte of like the most compressed code you can imagine a lot of it is like okay make these cortical units and then just like make them
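The "maybe a megabyte" figure can be sanity-checked with back-of-envelope arithmetic. The numbers below are rough ballpark values I'm supplying myself (the episode doesn't give them), so treat the result as order-of-magnitude only:

```python
# Back-of-envelope for the "megabyte of genetic delta" claim.
# All inputs are rough ballpark assumptions, not figures from the episode.
genome_bp = 3.2e9           # base pairs in the human genome
divergence = 0.012          # ~1.2% single-nucleotide divergence from chimps
bits_per_bp = 2             # 4 possible bases = 2 bits each
functional_fraction = 0.05  # rough share of the genome under selection

delta_bits = genome_bp * divergence * bits_per_bp * functional_fraction
delta_megabytes = delta_bits / 8 / 1e6
print(round(delta_megabytes, 2))  # lands on the order of a megabyte
```

Under these assumptions the functional human-chimp delta comes out around half a megabyte, consistent with the "maybe a megabyte" claim in the commentary.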
a bunch of times and then make them learn from the information you're getting like there's a lot of repetition there's a lot of regularity so it is a tractable engineering problem and a lot of us like Geoffrey Hinton like a lot of AI experts think that we're cracking a lot of this intelligence by building the LLMs and the AI systems that we're building a lot of us think that we are hot on the heels of what the human brain is doing a lot of us think that we've figured out high-dimensional embeddings a lot of us think that we've cracked how the human brain solves the symbol grounding problem so that may or may not be correct but at least I can confidently say that there's a lot of regularity that there's not that much high entropy that there's certainly not something that thermodynamics is preventing us from learning that's not the issue here we have plenty of negentropy we're not getting overwhelmed by chaos out here like that's not the issue the same is even true for much simpler properties like body height body height is also a property that is encoded by tens of thousands of genetic loci so we don't know how it comes about so we cannot manipulate human genes nor even if we could technically manipulate it without harming the individuals which we can't even if we could do that then we would not be able to genetically change this because we don't have the mathematical models to do so come on we can totally make people somewhat taller I don't know what the state of the art is but if you told the world's best geneticists that it's really important that they get together on a well-funded project and go create a race of humans that are all 7 ft tall or 8 ft tall or you know some incremental improvement per generation they could probably make it happen like these heights are not that far beyond the range of what we currently understand this seems like a very tractable problem now could I have one generation where suddenly all the humans
are 12 ft tall and still healthy no but that's not because of some fundamental chaos limitation it's just because it's a hard engineering project so there's a spectrum of engineering project difficulty going to Mars is a very very difficult engineering project but it's not because the world is chaos and chaos makes everything impossible no it's just hard some projects are harder than others but we're working on them right it's not a fundamental problem to do these projects and for some diseases we have and these are very simple diseases which are called Mendelian diseases for those like hemophilia you know which is the factor VIII disease where factor VIII is genetically dysfunctional so you could for example you can select embryos you can do in vitro fertilization and select embryos that don't have the genetic defect and then you can implant this embryo into a mother and then the child will be cured this you can do because there's only one genetic locus that is causing the disease and there are around 1 to 2,000 such diseases which are known and they are called Mendelian inheritance diseases and they could in theory be manipulated genetically to be cured but these are monogenic diseases but for most diseases there are thousands and thousands of factors causing them which we don't understand we don't understand how they interact but it's not that we don't understand it yet and will soon understand it it's that we can never understand it because what we can model mathematically is massively limited so again I get that it's hard if something depends on a thousand genes then it's hard to model it but is it impossible no there's a lot of predictability to it we're getting there you would have been out here on this podcast 5 years ago being like oh man protein folding is so hard there's so much chaos and now we have AlphaFold 3 which is predicting a lot of what you need to know about protein folding not 100% there's
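The embryo-selection logic for a single-locus disease is simple enough to write down. For simplicity this sketch assumes an autosomal recessive disease with two carrier parents, so each embryo is affected with probability 1/4 (hemophilia itself is X-linked, which changes the fractions but not the point):

```python
# Mendelian embryo-selection sketch: with two carrier parents of a
# recessive single-locus disease, each embryo is affected with
# probability 1/4, so screening a handful of IVF embryos almost
# always finds an unaffected one to implant.
def p_find_unaffected(n_embryos, p_affected=0.25):
    """Probability at least one of n screened embryos is unaffected."""
    return 1 - p_affected ** n_embryos

for n in (1, 3, 5, 8):
    print(n, round(p_find_unaffected(n), 4))
```

With just five embryos the chance of finding an unaffected one already exceeds 99.9%, which is why single-locus selection works today while polygenic traits are harder.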
still a ways to go but a lot an amount that a lot of people thought might be many decades away when we put our best engineering resources at a problem and we wait 10 years 20 years we make huge progress at these problems I think Jobst is being very selective to point at things that today still seem really hard and he's not zooming out and looking at the perspective like if we were standing here 100 years ago I could have shown you like the idea of a flight simulator and like do you realize how hard it would be to simulate flying a plane how many calculations per second you would have to make in order to realistically give somebody the experience of being in a flight simulator whereas now we just do in fact have computers that use a ton of calculations and also a ton of algorithmic shortcuts and give you a very compelling experience of a flight simulator such that tests show that the flight simulators train pilots as well as the real plane right so things that kind of seem impossible right or the idea of like oh my God transmitting a high quality video around the earth really quick do you know what kind of technology somebody would have to build to transmit signals like that it's like okay but we did that right or like you know making a rocket land itself right this thing that SpaceX accomplished do you realize how hard the guidance would have to be to make a rocket land itself when the air is so chaotic right I mean these things all look like miracles when you're decades away from doing it in the case of SpaceX it looked like a miracle when it was like a year away from doing it right but it's just like give human engineers here some credit you have to distinguish fundamentally whether you're looking at a problem that's intractable for a fundamental reason so that centuries from now they still won't solve it or you're looking at a problem that's okay a couple decades away from our engineers right you have to try to distinguish don't put everything in
the same bucket I don't think Jobst has said literally anything that is more than a decade or two beyond the reach of our engineering that has to do with thermodynamics and the theory of complex systems from this we know that and that's why the dreams of transhumanism of digital immortality of AI don't work and what we can achieve in reality is much more modest and if you understand this and that's what I'm doing for a living then you can really make things work and improve a lot of things and get great inventions and you know it's then really fun to apply the mathematics if you know how they work and what their limitations are then you can apply it but these dreams that basically totally overestimate what can be done with mathematics that doesn't work I like that Jobst is basically making an empirical prediction that genetic engineering is basically hitting a wall because I just don't think we're seeing evidence of that I think genetic engineering is just plowing along and I think we're seeing more and more success on genetic engineering of food for instance I think we're going to see more and more embryo selection have successful results for things that are even more than monogenic so that's my empirical prediction right so I also could be wrong but I think I have a pretty accurate sense of how the trend is moving here and I don't think that thermodynamics or chaos is putting up an impenetrable barrier I think it's putting up a slope and I think that our technology is climbing the slope and I think that that's how human engineering always works we just climb the slope to solve the next harder problem for me it seems like it comes down then to worldview where they're espousing this sort of a materialist worldview and this word materialism has two definitions in philosophy it means nothing exists except matter and its movements and modifications so yes you got me I'm a physical materialist I think everybody should be it also has another
definition that's more of the common parlance which is a tendency to consider material possessions and physical comfort as more important than spiritual values okay I'm not a spiritual guy so I guess technically I'm the other kind of materialist but I'm actually not because I'm not somebody who's like obsessed with physical goods I'm equally obsessed with human relations and like my family being happy so I think I'm actually not a materialist in the ordinary parlance but I think the accusation of materialistic people who only care about goods and not spirituality I feel like that connotation gets sneaked in when people like Jobst and Hrvoje put these labels on their philosophical opponents like yeah we're good Christians but these materialists only care about having the fastest car and the nicest mansion and also modeling everything as atoms bobbing around they'll never have what we have which is a Christmas full of love not toys and a better understanding of ontology I was going to ask you then is the singularity like a pagan pipe dream you know and then I saw you did a talk a year ago where you said it's a neopagan pseudo religion and so before even seeing you say that I from in my mind I felt like this is sort of like a pagan pipe dream this transhumanism because you look at the things that they're talking about they reject the notion of like just using Ray Kurzweil as an example you talk about Yuval Harari they dismiss this notion of God or this biohacker this wealthy Bryan Johnson I think is his name he was recently on Tucker Carlson the things that he says are so in my view blasphemous so against God saying we can become gods and I think this goes back to the core of their worldview which is pagan it's Gnostic it's new age it goes back to the Garden of Eden where the serpent says you can become gods and this is the stuff that they're openly talking about today so any further thought on this idea of this pagan pipe dream or as
you call it neopagan pseudo religion there's a lot of people on the non-doomer side like Marc Andreessen who love busting out this accusation that AI doomers are religious man they're just starting their own religious cult so it's not surprising to see Hrvoje and Jobst come along the same lines being like look at these neopagans trying to make like their own version of God it's just funny that they followed up by being like it's all about the Garden of Eden man it's all about God not being able to access your soul that's the truth okay we know what's up not these other religious people but our kind of religious people so like in their mind religiousness isn't a slur it's just like more of a religious dispute like we're the pagans now okay and of course you know the better approach to all this is to just put the religious labels away and just argue on the object level for God's sake but I think we're past the point where we're expecting strong arguments from Jobst I'm afraid this is starting to turn into more of like a roast but let's see what his other points are so they basically they don't have religion anymore because you know since Voltaire and David Hume said there is no God this became very fashionable in the west and now we Christians are all a minority in the west I think in all western countries including even your home country Croatia we're just a few percent who are still real Christians I mean among the Catholics formally a few more are still part of it but those who really understand for example what the crucifixion and resurrection mean we are just a few percent now right and so we have become a minority like we were at the beginning in the first and second century AD where we were also a minority and so most of the elites are now thoroughly agnostic or atheistic hell yeah since they've already moved past their actual argument which is like hand waving about chaos and thermodynamics and they're already at their
payload where the mask is off and they're just talking about how like their religion is more insightful than science I wonder if there's a version of their argument that's like their true argument because at the beginning Jobst mentioned that he wants to make a non-theological argument an argument that's going to be convincing to the atheists out there I wonder what his theological argument is right what's this true argument that convinces him maybe it's the same maybe it's not I don't know but anyway let's listen to this exercise in projection where he accuses the doomers of secretly having a religion and filling the same hole that people like him need to fill the human need for religion doesn't go away Schopenhauer discovered this he wrote a very important book about the fundamental need of religion in humans and so if you take away traditional religion there needs to be a replacement religion and of course the first country to create such a replacement religion was France they created socialism and materialism right and so Morelly created socialism in 1753 I always confound which of the books was which one is by another author the other one is by him I always confound them but in the 1750s he created the first socialist manifesto and then La Mettrie created this diehard materialism and they did this because they thought there is no God and so they replaced it okay wait a minute there's a third definition of materialism La Mettrie's diehard materialism so I looked this up and apparently La Mettrie was an 18th century French philosopher and he wrote a book called Man a Machine and it really is just the basic physical materialist idea which is like hey man is made out of smaller components that are inanimate they follow the laws of physics and together like a machine they create the full animate behavior that you see from the human being and in this book he says man doesn't have a soul because it's just a machine as a modern materialist I'm not sure that I
would instantly agree oh sure people don't have a soul I mean it's definitely true in an ontologically fundamental sense if you make a Lego kit of the building blocks of the universe you're not going to put a soul in the Lego kit you're not going to have an ontologically fundamental mental entity you're not going to have dualism so I think La Mettrie is right on that count that there's no ontologically fundamental soul but I can imagine having not read the book I can imagine he might have taken the concept too far and been like oh well there's no soul so like it's okay to hurt people or like you don't really feel anything I can't confirm whether La Mettrie was able to layer on this principle of emergence of multi-level emergence on top of the concept of materialism because that's the thing like when you acknowledge that the universe is just made out of like fundamental particles and physical principles and is a big machine once you acknowledge that you also have to make sure to acknowledge that qualitatively different things emerge at higher levels so you don't just get to say like oh there's no feelings there's no emotion there's no qualia there's no soul it's like hold on a minute hold on that can still emerge from the lower levels all you're saying is that there's no ontologically fundamental soul so I just want to clarify that because I don't know if Jobst realizes that a heartless materialist like me or perhaps La Mettrie I don't know if he realizes that we still appreciate the experience of being human the richness of being human we just don't think it's ontologically fundamental we think it's a higher level phenomenon that emerges via reductionism there's another one of those loaded words that goes hand in hand with materialism right we have a reductionist view of the human experience but like reductionism doesn't mean bad reductionism doesn't mean siphoning the good stuff out of life it just means explaining it in an elegant way and then Max Stirner created an
even worse form of egocentric egoistic materialism in his book The Ego and Its Own or The Unique One and His Property as it's also translated where he says that the goal of humans is to realize themselves perfectly and only themselves and to be God for themselves their own God and so this is the materialistic religion that came after the enlightenment got rid of God and they are now dominating basically our culture I don't know if the author and the book he's citing which I've never heard of I don't know how much that accounts for people's attitudes today maybe he's just saying people's attitudes today are in the tradition of this random book as mediated by a bunch of other thinkers I don't know and so now the question is by what do you replace God I don't really like this question because a lot of people are just born and don't think that much about God and don't necessarily feel a desire to replace God so my oldest kid is five now and I haven't seen him yearn for a replacement for God yet so you know maybe he'll have teenage angst and he'll want me to replace God for him but it just hasn't happened yet and maybe it never will right so replacing God I kind of reject that as a question I don't think that humanity necessarily needs to replace God if people don't even know about it in the first place like it's kind of a bad idea it doesn't really make sense and we don't have to recreate the idea just so that we can replace it we can just ignore the idea just like we ignore all kinds of bad ideas like we never replaced human sacrifice we just ignore that as an idea because it's a bad idea so I consider God a bad idea so in esoteric spiritualism it's sometimes replaced by the devil like by Blavatsky and this whole tradition based on you know the Corpus Hermeticum and classical esotericism that's one strand and then there's another strand which uses technology instead and Heidegger saw this clearly he called technology
the metaphysics of the 20th century right I don't like Heidegger that much but he was right in this regard and he rejected metaphysics and said now technology has become our new metaphysics and going even further a new religion which he didn't say but which it is now so for them they now believe in technology in an irrational almost transcendent way and the transcendent needs of these people are now projected onto technology so they use technology to basically satisfy their need for the transcendent that's how it works okay so what do I really use technology for I mean I am a transhumanist and it's not like I have a religion hole that I need to fill it's more like I have preferences right I live in the universe and I would rather the universe be more accommodating to what I like I mean I don't need it if he told me hey the universe is what it is you don't get to transform it I'd be like fine that's fine I'm still happy to be alive I'll take what I can get but if you tell me hey what do you want to make out of the universe I'll be like uh let's see can we make intellectual challenges can we make really comfortable zones where you can just have like the ultimate spa experience can you make like even cooler extreme sports I'll just have ideas for stuff I want to try as opposed to not getting to try anything right now Jobst can come in here and be like oh you're filling your religious hole or you're just treating this ability to create experiences as your new God and I'm like okay all right man like I don't see it that way right I would rather not be accused of doing that kind of stuff I just don't have a better idea of what I can do if I'm allowed to do stuff right I just want to do stuff and then if you say hey how about this how about if the universe is just paperclips I'm like um I would actually rate that a lot lower than not transforming the universe at all right I would actually pay a lot of money I would spend a
lot of effort to avoid all of our generations being turned into paperclips and having no human future right that seems like a bad outcome and then of course Jobst would come in and be like oh so that's your God not being turned into paperclips what it's just like stop with the God stuff I'm just trying to determine what state the universe is going to get into okay no what they get wrong is that they don't understand the technology because it's based on mathematics and mathematics is limited can only do so much they overestimate what technology can do and they are irrational about it and that's a pity because if you start using technology in an irrational way you're creating damage right and so we see now in many areas where technology is not used optimally anymore because they don't have a realistic picture of what science can achieve and what it cannot achieve I'm pretty sure that Jobst's position here translates perfectly to 500 years ago so you can go back to the year 1524 and you can play Jobst's same argument to people then and they'd be like yeah Jobst is right we shouldn't just lean on technology because mathematics is limited it can only do so much so we shouldn't try to harness the energy of fossil fuels right we shouldn't try to build ourselves air conditioning we shouldn't try to build skyscrapers we shouldn't try to go to the moon because mathematics is limited why do we hope to do all these things right so Jobst really needs to be a lot more precise about what we can realistically hope to accomplish with engineering versus not or sadly what the AI can hope to accomplish with engineering right I mean the reason I make this podcast is not just to argue that we as a species can engineer a lot of things it's to argue that AI can engineer things so much better than we can that we are about to be rendered powerless right that's usually the argument that I spend my time making here so it's funny because I'm arguing with Jobst from
a position of techno pessimism toward techno optimism but normally I have to flip back from techno optimism to techno pessimism when I say oh we're like Icarus we're flying too close to the sun we're about to get disempowered by our own creation because of how powerful engineering can be pretty funny what can then machine learning AI AGI I don't know what can it realistically do yeah so it is part of the industrial revolution right it's a great mathematics-based technology that can achieve a lot so wherever you have repetitive patterns you can identify them and use them to create some automation so let's go through a couple of examples so in warfare for example a very important task for warfare is to detect the movements and everything the enemy does and there in the Ukraine war both sides are completely using AI and without it the side that cannot use AI cannot see anymore what the enemy does because nowadays on the battlefield many many sensors are deployed optical sensors vibration sensors acoustic sensors pressure sensors a lot of sensors are deployed and there are many of them thousands and thousands and they also you know have satellites that get signals so there's a lot of sensory material that is created from the sensors that needs to be processed all of this is done by AI the AI aggregates all these signals and then creates a view of the battlefield for the officers who are in charge of planning the attacks and the defenses and without this modern warfare is unthinkable so this is for example a way where AI is used a lot for offensive weapons it's much harder to use because basically for an offensive weapon you need active perception now Barry Smith and I believe that active perception cannot be modeled mathematically because active perception is a highly complex property of complex systems like intelligence and so active perception means that you have a constant interaction
between movement and perception so like when you hear something you turn your head towards it and then you refine your movements to appreciate it you see this in hunting animals how they behave you know this is called active perception and it's a very complicated phenomenon we cannot model well in robots or which is very hard to model we can approximate it to a certain extent but very poorly that's why attack weapons use less AI they use some AI but it's much more primitive so that's an example for warfare whoa okay I love this because he made such a clear testable prediction saying that AI can't and won't be used as a war attack weapon are you kidding me we're seeing drones from Russia and from Ukraine that are getting better and better at flying over their target chasing their target and then killing their target and that's not active perception you can only have AI do defense and you can't have it do offense I mean again I love that he's being concrete enough that we can prove him wrong in a matter of like days I would argue he's already being proven wrong but like two years from now you don't think there's going to be slaughterbots you don't think there's going to be a robot that can chase you down on the battlefield and kill you the way a human soldier can or even a dog can I mean he may be right right there's like a few percent chance that he's right but it just seems like he's so close to being so thoroughly objectively disproven so I just hope that once he is he'll open his eyes and be like uh-oh maybe AI can just do a lot of the stuff that we take for granted that animate matter like the human brain can do maybe the separating bar is nowhere near where I've imagined that it is maybe I need to do a serious update and like flip my position 180 and become a doomer I just you know I don't know the size of the update that he's going to make when he gets objectively proven wrong by the facts or of course it could go
the other way, right? If we're talking two years from now, five years from now, ten years from now, and for some reason nobody's ever using AI for offense on the battlefield, like they always need a human on the radio using a video signal to control these devices, if for some reason that's the case, okay, then I'm wrong, right? Then I have to figure out, wow, how did Jobst have this great insight? But I'm just highly, highly skeptical that things are going to work out that way. I feel like he's obviously wrong, but I'm trying to be fair. Wherever there are regular patterns, AI can be leveraged to do the work, but it cannot be creative, it cannot become conscious, it cannot become intelligent. It can just automate repetitiveness, and this we will see more and more. So I estimate the potential of automation in blue-collar work is still quite high. There's still a lot of blue-collar work where there's manual labor involved, so we can use AI and robots to automate more. But in white-collar work, the type of work we are doing, the potential is below 10%. So I think people like all these doom prophets who say, oh, AI will take over so much of humans' work, that's wrong, because they believe that AGI, which is general intelligence, can be done. But in the end, all you have is very narrow AI that can solve some limited tasks efficiently. This will continue to evolve, but it will not revolutionize the workspace for white-collar jobs. This is normally where I would reply: okay, you seem to be overconfident, so let me challenge you. Give me a prediction for two years from now, the least impressive thing that AI still won't be able to do, with 90% confidence. If you're so sure that AI is so limited, then fast forward two years and just give me the least impressive thing that AI still won't be able to do. In Jobst's case, he already said he doesn't think AI can be used for attacking, so that's such a good answer, that's such a bold sticking-your-neck-out prediction, that I'm happy to say, okay, he said it, he met the challenge. That's actually worthy of respect on his side. So let's just wait a year or two, or let's just check the news today, and let's see if AI is suddenly being used for attack, and let's let Jobst update his mental model based off that. Okay, now I'm going to skip ahead to a random part where he starts saying that AIs can't understand text and video, which just seems super outdated to me in the wake of the multimodal models we're seeing today. Let's listen. Digitization allows acquiring data about individuals and the masses in a much more comprehensive and thorough way, and especially storing it so that you can go back and look it up. However, this is the critical point now: because AI cannot think, you can't use AI to digest the material. So an AI that you would put onto this conversation that we are having now could only use some keywords to gather some very, very superficial patterns or phrases. But it's very superficial; it doesn't really understand what we're doing, and it never will. So machines cannot understand text. Neither does ChatGPT, nor will machines ever, at any time, understand text. And videos are even harder to understand, because videos contain not only text pragmatics but also visual pragmatics, which means the meaning of events that are visual and how they interact. It's super complicated. I mean, if you watch a scene in a movie and you think about what lets you understand the scene, it's not only what's being said but also what the people do while they interact with the environment. So you cannot automate this, and therefore machine interpretation of videos and text is impossible. So that means that those organizations that are the equivalent of the Stasi, you know, like secret services, let's take China, which of course has such services, but in the West you also have such services,
though they are a bit different, but still, they can store a lot more about us. But mining it and interpreting it is still human work, and so the idea that AI, like in Minority Report, can perfectly predict behavior and then target the individual, this is just an illusion, because it supposes that AI understands what humans say and how they act, but it really doesn't, and it never will. And so therefore the dangers of the abuse of technology come more from digitization than from AI. If Jobst had said this like five years ago, I'd be like, yeah, makes sense: humans can talk in a way that's lightly encoded, in a way that's going to fool the AI, where the AI is not going to notice that they're actually plotting to sabotage the government. That would have made sense back when AI wasn't very good at talking and understanding language. But have you tried it today? Today, this idea of taking meaning out of sentences, or of noticing what frames in a video might mean, like explaining a scene, explaining a joke, explaining all the different elements, these are some of the strongest points of the modern AI that we have. If you just go play with these chatbots, drag files into them and ask them to explain them and analyze them and find different properties of them, even a cheap model, even Llama or GPT-3.5, some of the smallest, cheapest models, still does a really excellent job at that. And they can index a search based on very deep embedding vectors. So this idea where Jobst is saying, oh no, no, no, you're going to need a human to review that kind of content, if you ever want to control society, if you ever want to do effective surveillance on all this data, you're going to need a human watching all that data, what are you talking about? That is just so not true about modern AI, never mind superintelligence ten years from now. I'm just talking about today's AI. It seems like Jobst's clip is from five years ago, but it's not; it's from 2024. In this next part, Jobst makes a pretty long speech about how science is so hard, it's like pushing a boulder up a hill, you have to have so much modesty. So how could anybody claim to be a scientist and hype up such a crazy claim, like superintelligent AI is coming, and we might all be doomed, and it's going to drive humanity extinct? How does anybody have the confidence to say something so wild when science generally proceeds with small insights, and you have to be modest? Let's listen. For AI, we shouldn't be afraid of it, but we should use the real science, the sober science, the modest science. Real science is very modest, you know, because if you do science for a living every day, you know how hard it is to get a replicable result. You're not wanting to hype anything; you just want to do this very hard work that requires a lot of patience and endurance and resilience to pursue over so many years, and then you are just modest, because you know it's so hard to get so little done. And sometimes geniuses come around, like Einstein, but even he had to suffer enormously; it was a very long and hard way for him, especially to get from the special to the general theory of relativity. He needed to work with a lot of mathematicians; it was super hard. And so this is how science works. And so when politicians, economists, or people from the humanities talk about science, you can immediately tell that they haven't been through this experience of creating science and being in it. That's why they can talk in this insane way about it that is detached from it, like Harari: he has never ever done an experiment in his whole life, you know. But if you have done an experiment, or if you have tried to prove a mathematical theorem, it can take such a long time, and it's not as great an experience as it looks afterwards; it's just a hard process. And so when you have gone through this all
your life, or even just ten years, you start as a PhD student and then you do a few postdocs, then you know how it really is, and that the hypes don't make sense. And I think the worst are scientists who know this and who still, nevertheless, participate in the hype, because they are basically undermining what science really is and what it is about. It's true that science on average is a slow and incremental process, and most discoveries have a bunch of little discoveries leading up to them, and usually you don't make a single discovery that makes you think, uh-oh, extinction is happening in our lifetime. So I agree that this is rare, but it's not unprecedented. I mean, there are so many eureka moments in science, right? He mentioned Einstein; he mentioned the breakthrough that was special relativity. Like, hey, what if light really is just the same speed in every reference frame? How about that? Okay, that's kind of a eureka moment. There's the original eureka moment, right? Archimedes in the bathtub, when he realizes that displacement of water is one way that you can measure the volume of something. That's pretty good. Another eureka moment happened in 1980, when some geologists were digging in Italy and they were like, wait a minute, this part of the soil represents 66 million years ago, what is iridium metal doing here? Why was that in the atmosphere? Normally that only comes when there's an asteroid impact. What's going on? And then slowly one thing led to another, and they're like, wait a minute, was there a giant meteor impact 66 million years ago? Is that maybe what killed the dinosaurs? And they're like, wait a minute, where's the impact crater? Oh, it's kind of under our nose here in the Yucatán Peninsula. It's just kind of buried, but now we can detect it; there's actually this giant crater. Oh wow. So that was a major event, equivalent to 10 billion Hiroshima bombs. Oh, okay, I guess the dinosaurs got wiped out by an asteroid, and it wasn't just a slow dying off. Oh, interesting, right? So these eureka moments do happen, and they rewrite your understanding of the world pretty significantly. So it's not that surprising that somebody like Eliezer Yudkowsky, who opened up a book when he was a teenager and read about AI, was like, oh hey, what do you know, this is going to drastically change the future of humanity, AI is about to get smarter than us. Or Geoffrey Hinton's eureka moment, which apparently happened just a couple of years ago, and he's like, wow, I'm so surprised by how rapidly GPT-3 and GPT-4 are progressing that I am now kind of turning white, because this isn't the future I was hoping for or expecting; this is something much scarier. So having your prediction of the future dawn on you to be something different than you thought before, it's just not uncommon, right? It's not the most common case, but it happens all the time. And just to throw in another one, I guess I really like talking about this: Robin Hanson's 2021 paper about grabby aliens. I mean, just a year earlier I'd heard Robin Hanson talking on a podcast, speculating about the Fermi paradox, not really having a good answer. Next thing you know, in 2021 he has a very compelling model of, like, oh, it's just a big land grab. We're early; aliens are on their way at like 10% of the speed of light or something like that, and we're going to see them in a billion years, and that's how the universe works, that's how cosmology works: it's a land grab among aliens. That is a huge, huge insight. I always say on this show that it's underappreciated, but it's an absolute eureka moment. So I don't know why Jobst is basically saying science has to proceed incrementally, so don't believe the hype when people tell you, hey, we might all be about to go extinct. Sorry, that is actually the beauty and the power of science, that it can have these moments. I would frame it that way, except in this case it's like the worst news ever, but it's exciting. I think science is about finding out
knowledge for the benefit of mankind, and when you basically use science to tell people how dangerous everything is and how catastrophic everything is, that is politicized science, and that's very dangerous; it's basically destroying science. Imagine that the non-doomer side is using good scientific methodology, and they arrive at their conclusion that we're not doomed. That's great, that's good for them, that's good for everybody. But then to turn around and say, don't use science to reach the conclusion that we're doomed, that makes a mockery of their own methodology that they used to conclude that we're not doomed. Because if they used a good methodology to conclude that we're not doomed, then it had to have been a methodology that was open to concluding that we're doomed. You can't come out and say, you know, we're not doomed, because you never considered the possibility that we're doomed. You see what I'm saying? So it's such a huge red flag if a non-doomer comes out here and says it's bad to say we're doomed, science doesn't mean saying that we're doomed, because that just shows that they've been closed-minded the whole time about the idea that we're doomed. You have to come out here and say, hey, it's absolutely true that we might have been doomed, but I'm happy to tell you that the evidence says that we're not doomed, and not because I had a restriction on ever believing that we're doomed. I have to be able to believe that we're doomed, you know what I'm saying? That's science: your ability to believe something rationally is dependent on your ability to also believe the opposite, and for the evidence to be what guides you one way rather than the other way, you have to be able to conceive of the possibility that you actually live in the other universe. So in other words, Jobst, what I'm trying to say is, when you just come out here and rant that you're right and we're not doomed, unfortunately I think that assumption is incorrect. I think that we are doomed, and so, unfortunately, given that we are doomed, the best thing you can do is fearmonger about how doomed we are until people get the right level of fear. Because right now, the average person actually does have a significant amount of fear about AI, but unfortunately it's too low. There is a major miscalibration in the amount of fear that the average person feels about AI, and it's just sad. It's sad to be sleepwalking, our entire species, our loved ones, into the whirling razor blades in the year 2024, when we ought to know better. The fear of AI comes from the erroneous view that AI can become conscious and have a will. That's totally impossible mathematically, and also from the perspective of physics and biology, and when you remove this irrational fear of AI having its own will and consciousness, then it's just a normal tool. That's such an inaccurate characterization of what doomers claim. It's not load-bearing to the doomer claim to say AI can become conscious. If you look at the attack drones that are being used in the Ukraine war right now, they're just good at tracking somebody so that they can't escape, and then firing. They're good at targeting, and from the perspective of the person being targeted, there's no off button; it's not controllable to that person. So imagine that for some reason they become uncontrollable to their creator, or they become self-sustaining, kind of like a virus on the internet, where it starts manipulating hundreds or thousands or millions of people at once, and it becomes really hard to turn off. Maybe it'll never turn off, right? It'll just live in our software ecosystem forever, like some viruses from the early 2000s that are still alive and still causing a billion dollars a year of damage. We know that software systems can be self-sustaining. It's all just a matter of degree. We're not positing some kind of fundamentally new thing here; we're just saying you take a virus, you add
the ability to outsmart humanity in more and more ways, to greater and greater degrees, and where do you end up? Where is this going, without some sort of breakthrough in control, or research that we don't have right now? So it doesn't really need a will per se; it needs the level of optimization power, the level of goal-orientedness, that we're already seeing in other systems. Like, it's not that hard to just architect a software system saying, hey, try to work backwards from this goal, try to take actions that are helping you get to this goal. If that's the sense in which he means having a will, then okay, yeah, it needs that kind of trivially definable will, but it doesn't need to be conscious. And then Jobst additionally says, hey, that's mathematically impossible, that's impossible from the perspective of physics and biology, which is totally false. We're not saying anything fundamentally impossible here. So he's not even giving us basic credit for what our actual argument is. Now, you might wonder why I'm even doing an episode on Jobst when his argument is significantly weaker than the average argument that I'm here pushing against. Well, I see my contribution to the discourse as maximizing the convex hull of people with the best arguments and people with the biggest audience, and he has a reasonable-size audience, from what I gather. So I think I'm adding value by just adding this to my comprehensive list of episodes where I show you the non-doomer position, the different types of non-doomer position that people take seriously, and I do think there's value in just showing you the spectrum of ways in which they're wrong. There are so many different ways to be wrong as a non-doomer. It's pretty tough, right? It's why the discussion is so hard to have, it's why we need to bring together top people to debate. As a society we need this kind of conflict-resolution technology; we need the social infrastructure of debate. That is the viewpoint of Doom Debates, and as I mentioned before, when you're supporting Doom Debates, you're not just supporting a guy like me who's yelling that we're doomed; you're also supporting the social infrastructure of being able to debate a topic like whether we're doomed by superintelligent AI, which is a complex topic. Okay, that's all we got from this particular appearance of Jobst Landgrebe. Essentially his whole argument for why we're not doomed comes down to the crux that it's physically impossible that we're ever going to get a superintelligence beyond human level, because it would require building a system that could be like a low-level physical simulation of a human brain, but we can't do that, because that would require such fine-grained information modeling, and that's just impossible because of thermodynamics. So we can only have these cruder systems that will never match what the human brain can do, because chaos prevents us from seeing what the human brain can do. It's a common type of argument, but it's just very clearly, obviously wrong. Unless you're hell-bent on making a quark-for-quark copy, then maybe you can't, but the idea that that's the bottleneck to making a superintelligent system, we just have so much evidence that that's not the case. I'm willing to die on this hill: we're not going to stop superintelligence because of the chaos of doing a low-level model of the human brain. I'm sorry, that's just not going to stop it, which is why I think you're not going to see Jobst have predictive accuracy in anything he says. I mean, he did actually stick his neck out and make a prediction. He said you're not going to see AI being useful for attacks. That's a very bold prediction that I think is already falsified, or if not, it'll be falsified next month or next year. Jobst seems to be kind of stuck at an intellectual dead end, where he's not really participating in the current discourse of serious doomers versus more serious non-doomers. I guess the most serious group of non-doomers are the people at the AI companies, or the people
with their own AI research organizations, who are saying, okay, yes, it's a problem, but here are all the ways we're fighting it, and the ways we're fighting it are going to be good enough matches to the difficulty of the problem, and we're going to pull through that way. I disagree with them. From my perspective, I think it's very likely that the problem is much harder than the quality of the solution that they're bringing to bear on it, and I think it's very wishful thinking to think that their caliber of solution is up to the caliber of the problem. But of course I hold out hope that they're right, because I don't have anything better to hope for, besides pausing AI. Besides that, I thought it was interesting that Jobst is very open about being a devout Christian, and he clearly connects his view on AI and technology futurism with the many different views that he has about being a Christian, which I don't really know, but I could just tell that he thinks about it a lot, right? He brought it up a bunch of times in the interview, including sometimes that I didn't clip in my edit. If you want to see the rest of the interview, I really don't think you're missing much. I think I clipped everything that was relevant to the doom argument, but there are a few more things he says, so feel free to check it out on the Geopolitics and Empire podcast. Jobst, you're more than welcome to come on and debate me. I think it could be a great conversation to compare our views in more detail, because I do admit that when I do these reaction episodes, it's one-sided. That's totally fair, and I would love for you to come on; just email me, Wiseguy gmail.com. Besides that, we've got a lot of good stuff coming up on Doom Debates. There's a handful of other things that I want to react to, especially doing a review of some prominent people's positions that I think are wrong in interesting ways, so that'll be fun to go through. I've got more debates coming up; I've got some episodes coming up where I just try to explain a concept, and we'll see if you guys like me doing things that way. I guess if I do it that way, it becomes more similar to Rob Miles's YouTube channel, so go to YouTube and search Rob Miles. He's got a lot of really great stuff where he just explains concepts in AI safety really, really well. All right, if you want to support the channel, it's always welcome. We are marching our way to 1,000 subscribers, so please do your part. We've got to get to four-digit subscribers here. It's a little ridiculous that our subscriber count is three digits when objectively this is clearly a four-, five-, or even, dare I say, six-digit podcast, in terms of the number of subscribers that it should have before the world ends. Thanks for watching, thanks for all your feedback and contributions. Some of you have also written in with fact checks and pushbacks, which I think is great, so keep that coming, and let's keep triangulating what kind of content is the most valuable for me to bring to the mainstream. Because I see myself as kind of a messenger, somebody who's been marinating in the AI doom scene for like 15 years and is now trying to communicate to the mainstream. I feel like I have a lot of different stuff that I can bring forth, and I want to see what works. So hit me up in the YouTube comments or shoot me an email, wisy gmail.com. Let me know. That's it for today. I'll see you next time on Doom Debates. If you turn to God every day, then you get the confidence to lead a meaningful life and to find the solution.