Transcript for:
The Urgent Need for AI Safety

"Mark my words: AI is far more dangerous than nukes. I think the danger of AI is much greater than the danger of nuclear warheads, by a lot. The problem is, it's going to happen so fast. So when you ask a person to imagine life a decade from now, when we have strong general AI that's exponentially increasing and figuring out..."

The AI race is accelerating at a pace that even the experts who built it are now terrified of, and the latest person to sound the alarm is Steven Adler, a former OpenAI safety researcher who just left the company, publicly warning that AGI is a ticking time bomb. But he's not alone. Some of the world's top AI minds, including people who were once at the core of OpenAI's safety research, have walked away, claiming that the risks are being ignored in favor of speed and profit. So what's really happening inside OpenAI? Why are so many safety experts leaving? And is AGI actually as dangerous as these scientists claim? Let's break it all down.

OpenAI's Internal Chaos

Steven Adler's resignation from OpenAI is part of a growing trend, one that paints a troubling picture of the future of AI safety. Adler, who spent four years working as an AI safety lead, took to X to say that the race toward AGI is "a very risky gamble, with huge downside." That alone is a strong statement, but what makes it even more concerning is the pattern of departures OpenAI has seen in the last year. Adler is not the first, and he likely won't be the last. In 2024, Ilya Sutskever and Jan Leike, two of OpenAI's top AI safety researchers, left the company. Sutskever was a co-founder of OpenAI and played a major role in the development of its most advanced AI models. Leike, who co-led OpenAI's Superalignment team, publicly criticized OpenAI after quitting, saying that safety had taken a backseat to "shiny products." And it doesn't stop there: Daniel Kokotajlo, another AI safety researcher, revealed that nearly half of OpenAI's AI risk team has now left. This raises serious concerns. If the very people tasked with ensuring AI remains safe don't believe in OpenAI's commitment to safety, what does that say about the risks we're facing?

Steven Adler's Terrifying Warning About AGI

Adler's concerns go beyond just OpenAI. He's worried about the entire AI industry moving too fast without solving the critical issue of AI alignment. In his post he stated, "No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time." For context, AI alignment is one of the most pressing challenges in AI safety: it refers to the ability to ensure that AI systems operate in ways that are beneficial to humanity, without unintended consequences (the toy sketch at the end of this section makes that failure mode concrete). As of now, there is no guaranteed way to keep an AGI, an artificial general intelligence, under human control. That's what makes Adler's warning so alarming. He even admitted that he's "pretty terrified" by the current trajectory of AI development. And he's not just talking hypothetically; he says the issue is affecting his own personal decisions about the future. He wrote, "When I think about where I'll raise a future family, or how much to save for retirement, I can't help but wonder: will humanity even make it to that point?" That's a serious statement from someone who spent years working at one of the world's leading AI companies, and it raises a crucial question: if the experts who built these systems are this concerned, shouldn't we all be paying closer attention?
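To make that alignment failure less abstract, here is a minimal, hypothetical Python sketch (ours, not anything from Adler or OpenAI): an optimizer is handed a proxy metric that only loosely tracks what people actually want, and the harder it optimizes the proxy, the worse the true outcome gets. Every name and number below is invented purely for illustration.

# Toy reward-misspecification sketch: one narrow slice of the alignment
# problem. We optimize a proxy ("clicks") that we hoped would track the
# true goal ("satisfaction"); the two objectives peak in different places.

def proxy_reward(clickbait_level: float) -> float:
    """What the system is actually optimized for: raw engagement."""
    return 10 * clickbait_level  # clicks rise the more sensational the content

def true_objective(clickbait_level: float) -> float:
    """What we wanted: satisfaction peaks at modest levels, then collapses."""
    return 10 * clickbait_level - 12 * clickbait_level ** 2

# Search the policy space [0, 1] for the best setting under each objective.
candidates = [i / 100 for i in range(101)]
best_for_proxy = max(candidates, key=proxy_reward)
best_for_truth = max(candidates, key=true_objective)

print(f"optimizer picks {best_for_proxy:.2f}; true value there: {true_objective(best_for_proxy):+.2f}")
print(f"we wanted       {best_for_truth:.2f}; true value there: {true_objective(best_for_truth):+.2f}")
# Maximizing the proxy (1.00) drives the true objective negative (-2.00),
# while the setting we actually wanted (~0.42) scores about +2.08 -- a
# miniature version of the "unintended consequences" alignment is about.

Scaling that gap from a toy metric to a system smarter than its designers is, roughly, why "no lab has a solution to AI alignment today" is such an alarming sentence.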
The AGI Race Is Spiraling Out of Control

One of the biggest reasons for this rapid acceleration is the global AI arms race, particularly between the US and China. Recently, reports emerged that a Chinese company called DeepSeek may have built an AI model on par with, or even surpassing, OpenAI's models at a fraction of the cost. This development sent shockwaves through the industry, causing US investors to panic and forcing companies like OpenAI, Google DeepMind, and Anthropic to respond. And how did Sam Altman, OpenAI's CEO, react? Instead of calling for caution, he immediately announced that OpenAI would move up the timeline of its next major AI releases. He even called DeepSeek's breakthrough "invigorating." But this is exactly what Adler and other AI safety researchers are warning about: when one company pushes forward aggressively, it forces others to do the same. If OpenAI slows down, its competitors could overtake it. That's why we're now seeing AI labs accelerating development at the fastest pace in history without having solved any of the critical safety challenges. This is what researchers like Stuart Russell, a professor at UC Berkeley, have called "a race to the edge of a cliff." Russell has repeatedly warned that if AI labs don't slow down, we could reach a point where AGI surpasses human intelligence before we know how to control it. And here's where things get even more concerning: even some of the CEOs driving this race have admitted that whoever wins the AGI race could also trigger human extinction. If the very people leading these AI labs believe there's a real chance AGI could go horribly wrong, then why isn't there more public discussion about slowing things down?

OpenAI's Safety Team Is Falling Apart

The departures of Adler, Leike, Sutskever, and others have left OpenAI in a difficult position when it comes to AI safety. OpenAI's Superalignment team, the group specifically focused on making sure AGI remains under control, has been severely weakened. And here's the most concerning part: OpenAI had already dedicated only 20% of its compute resources to safety research, and with so many safety experts leaving, that share is even smaller now. Meanwhile, OpenAI continues to push forward with bigger and more powerful models, without addressing the concerns raised by its former safety researchers. This all comes after Sam Altman's temporary removal as OpenAI CEO in 2023, a move that many believe was at least partially linked to disagreements over AI safety. While Altman was reinstated just five days later, after pressure from employees and investors, the questions surrounding why he was removed in the first place have never been fully answered. And since his return, OpenAI has been moving even faster toward AGI, despite growing concerns from its own former employees. The big question now is: what happens next? Suppose OpenAI and other leading AI labs continue to race ahead without a working solution for AI alignment. Are we heading toward a future where AGI is no longer under human control? That's exactly what Steven Adler, Jan Leike, and so many other former AI safety researchers are warning about. And based on their concerns, this isn't just a hypothetical risk; it's a problem playing out in real time. The AI race isn't slowing down, and the stakes couldn't be higher.

Sam Altman's Power Struggles and AGI Ambitions

To fully understand what's happening at OpenAI today, we need to go back to November 2023, when OpenAI's leadership was thrown into complete chaos. In a move that shocked the entire tech world, Sam Altman was abruptly fired as OpenAI's CEO. The decision was made by OpenAI's board, which at the time included key figures like Ilya Sutskever, the company's co-founder and chief scientist. No clear explanation was given at first, but as details emerged, reports suggested that concerns over AI safety and Altman's aggressive push toward AGI played a major role. For five days, OpenAI was in crisis mode: hundreds of employees threatened to quit, investors like Microsoft put pressure on the board, and OpenAI's future seemed uncertain. Then, just as suddenly as he was removed, Altman was reinstated. The board was reshuffled, and within days it became clear that Altman had emerged even more powerful than before. But instead of taking a more cautious approach, Altman doubled down on his AGI ambitions. In interviews he has repeatedly said that his goal is not just AGI but AGI and beyond, and he has made it clear that OpenAI will continue pushing forward regardless of the risks, because he believes AGI is inevitable. The question is: at what cost? Since his return, OpenAI has moved faster than ever. In early 2024 the company rolled out its GPT-4 Turbo model, promising even greater advancements, and with competition from DeepSeek in China, OpenAI has accelerated its timelines even further. But what happens if this race goes too far? If AI safety is already being deprioritized, and the people raising red flags are leaving the company, who is actually making sure these AI systems remain under human control? And more importantly, what happens if no one can?

Is AGI a Ticking Time Bomb?

At this point, even top AI researchers are sounding the alarm louder than ever. Steven Adler's warning wasn't just about OpenAI; it was about the entire AI industry moving too fast without solving alignment. And that's where things get dangerous. Right now, no AI lab has solved the alignment problem. No one has figured out how to ensure that AGI will always act in alignment with human values and goals, and yet the AGI race is forcing every company to move faster, not slower. Adler put it bluntly: "Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes all to speed up." In other words, this isn't just an OpenAI problem; it's an industry-wide arms race, and as long as one company is pushing forward at breakneck speed, the rest will be forced to keep up. But what if keeping up means rushing past critical safety measures? That's exactly what Stuart Russell warned about when he called the AGI race "a race towards the edge of a cliff": once AI surpasses human intelligence, we may no longer be in control. And the scariest part? Even the CEOs leading this race acknowledge the risks. Tech leaders have openly stated that the company that wins the AGI race has a significant probability of triggering human extinction. Let that sink in: the very people building these systems know the risks, and they're still moving forward. So the real question is, why? If the scientists who worked at OpenAI are this concerned, if some of the world's top AI researchers believe AGI could spiral out of control, and if even the leaders of AI companies admit they don't know how to keep AGI in check, then why are we still racing forward at full speed? Because at this point, the warnings aren't coming from conspiracy theorists or sci-fi writers; they're coming from the people who built these AI systems in the first place. And if they don't feel safe, should we?

If you've made it this far, let us know what you think in the comment section below. For more interesting topics, make sure you watch the recommended video you see on the screen right now. Thanks for watching.