Transcript for:
AI's Impact: A Biblical Perspective

Artificial intelligence is taking over our world, and it is something you need to know about. Christian, it is reaching into every aspect of our lives, from banking to groceries to religion. It is becoming pervasive in our culture.

So you need to understand this from a biblical worldview, and that's what this show is about. Before we start, make sure that you hit the subscribe button. If you're one of the 75% of people who watch our stuff and are not subscribed, please do that. That allows us to get content straight to you so you know exactly what's going on here at Redeeming Truth, and it also helps us get this information out to as many people as possible.

You're going to want to do that as a result of this podcast. Stay tuned. Welcome to Redeeming Truth. We have a very important subject to talk about today, because I really believe the moment we are in right now is very similar to 2007. If you remember, back then we all had flip phones, what would now be called dumb phones.

In the fall of 2007, a shift took place, and that shift was to smartphones. It has changed our world forever; everything is different now because of smartphones. I believe we are in that same moment right now when it comes to AI. So this is a topic you are going to want to listen to all the way to the end, because we're going to cover a wide range of subjects to make sure you are helped and equipped when it comes to this subject.

So, I've got with me Pastor Dale, Pastor Kyle, and then we've asked Brian, a good friend, one of the members of Redeemer and soon to be on our staff, and also somebody who is very knowledgeable about this subject. So, very grateful that you'd come on our show, Brian.

Can you introduce yourself to our listeners? Sure. So it's a pleasure to be here, by the way.

I love you guys and love the podcast. It's such a blessing, I know, to our members and the people who watch. So my background has been in technology for 30 years. I've spent the majority of my career helping to build innovative solutions in the financial technology space, in the payments industry, and then in software at large. A lot of the last 10 years of my time has been spent in emerging technology.

I was recently with PayPal, where I was an executive, and I built and led the cybersecurity, cyber fraud, and emerging technology organizations, which dealt with a lot of these topics like AI: writing policy, figuring out how to apply these technologies to life, and then how to secure them well. So the majority of my professional career has been in technology leadership. And a lot of the trends we've seen since I started in technology in 1993 have turned out dramatically different from what we'd imagined.

All of the roadmapped ideas that innovators and the big thinkers thought were in play turned out, of course, different in hindsight than we expected. So I've also had a view of how leaders innovate new ideas, how those ideas get applied, and then how we look at those things in hindsight. And from a biblical worldview, I'd consider myself...

you know, from a biblical perspective, a tech optimist, because it is all in God's sovereign control, but also a tech pessimist at times, because of the reality of what man does when left to his own devices. So, artificial intelligence, that's what we're going to talk about. It's going to be our subject today.

Is it good? Is it bad? What do we need to know about it, those kinds of issues?

And so, I guess to get us started, what is artificial intelligence? Yeah, so the term artificial intelligence was coined a long time ago, actually, but it has most recently been adapted to mean helping computers think like people.

In a sense, it means taking trained language models and software applications, building those models and their decisions on top of data sets, and running them on a big, powerful computer that allows the system to make decisions on data and inputs. What we see today in most consumer applications would be, for example, a voice command. If I said the wake word right now, I'd turn on people's watches; that's a command that invokes artificial intelligence to be ready to listen to you. The AI is then saying, I'm going to take language in a natural model, convert it into something the computer can process, give you a result, a decision, and then take action on it. And AI has now become much more prolific, in things like bots and driving and so many different use cases, that the ultimate goal of the AI innovators is to replace human intelligence with computer intelligence, so that artificial intelligence would be a means of both accelerating human innovation and invention and empowering efficiency in our economy, in the way people eat and get medical care, and where they live.
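To make that "natural language in, decision out" loop concrete, here is a minimal sketch of how a developer might wire a spoken or typed request into a language model today. It assumes the OpenAI Python SDK (v1.x) with an API key in the environment; the model name and the reminder example are illustrative placeholders, not anything the speakers describe using.

```python
# A minimal sketch of "natural language in, decision out," assuming the
# OpenAI Python SDK (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The spoken or typed request, as a watch or phone assistant might capture it.
user_request = "Remind me to pick up groceries at 5 pm."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat model would work
    messages=[
        {"role": "system",
         "content": "Turn the user's request into a single short action line "
                    "that a scheduling program could execute."},
        {"role": "user", "content": user_request},
    ],
)

# The model's reply is the "decision" that another program then acts on.
print(response.choices[0].message.content)
```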

All of those are areas that AI has tried to insert itself into. But at its essence, AI is really trying to replace human intelligence with computer intelligence. So is it being used right now? Absolutely.

Yeah, I mean, I mentioned the basic consumer use. If you have one, there you go. You have basic devices that will respond because they're listening.

What they're doing is connecting to a computer that is AI-powered, and those are consumer uses. AI has also been used for self-driving cars. I have one, and quite honestly it's scary. Like I said, it's at an early stage of development, but the car learns driving patterns and behaviors. The AI powering that experience is learning road conditions, the traffic around it, and weather patterns, and it's building a database, making decisions, and then learning how driving can be optimized and made safer with computer-based decisions.

So it's used in automotive, and it's used in medical care. There are some really great AI-powered use cases, like taking lung scans and checking them against what a good scan versus a bad scan looks like, so radiologists can use AI to quickly find problems in people's scans. There are all kinds of applications of AI that have already contributed some significant benefits to humanity. And then there are a lot of negative uses of it, of course, in warfare, in surveillance, in privacy invasion. Different applications of AI have been used in government surveillance programs at massive scale.

So there are a lot of deployments of it in different industries; at this point it has essentially entered every space of industry and of commerce. Yeah, I read an article about a guy who lost his girlfriend. I think she had cancer and she died, and he uploaded all of her social media into an AI program and began a conversation with this AI persona that was, as much as it could be, an exact replication of his girlfriend. He said that it allowed him to mourn, it allowed him to grieve, because it would talk back as if she weren't dead. Wow. And so this AI would encourage him to move on without her, even though it was "her."

It sounded like her because of the videos. It had everything that was just like her. And it was so real to him that he forgot at times that he was talking to AI. And that's the pivot in AI that we're seeing as a really interesting and concerning development.

Technology has gone through what I call tech ages, periods of time that are marked by innovation. Maybe it's a ten-year window or a seven-year window; usually it happens within three to five years of these innovations and deployments of a new technology.

And we've seen that over the social media period, when we saw the attention economy: there was so much investment in attention-grabbing on social media that your attention span was the asset. That was what they were trying to grab. Well, they've won. I mean, essentially, they've won. They've gotten our attention.

Shorts, reels, those things grab our attention so they can market to us. That's been the attention economy for the last 20 years, really, since social media was introduced, and as it has evolved into faster and more engaging content, it has captured the audience's attention through social media experiences. They used a lot of psychological research on the human brain and how it reacts to certain stimuli. They're doing the same with AI, but AI is now going to be in the relational economy.

So while we've been in an attention-economy-focused social media phase, what AI brings is a relational economy, where intimacy and relationship are the actual commodity they're trying to buy. They're doing that through lifelike experiences, conversations, lifelike videos and pictures, things you can interact with, and then eventually AI bots, which are in development. They're trying to develop and simulate that in the simulation world; Elon Musk says we're all kind of in a simulation.

They're developing that, using AI to try to replicate human interaction, relationship, and intimacy. So what are some of the AI things you guys have seen? Well, I have a question.

The first question I would have is, what's the point? What's the point of this? For the last 10 or 15 years that I've been aware of AI and its advancements, the thing I've always thought is, okay, well, humans are always going to be the ones who are in control.

You can pull the plug, you can turn it off, whatever. But it seems like we've actually passed that point, where these AI computers, for lack of a better word, are now self-actualizing: they understand, and they're learning not only from humans, they're actually learning from themselves. So what's the end game? Yeah, it's a great question, and the answer is the motive. In the world of technologists, we view things as being a tech optimist or a tech pessimist, and the difference between those really is the motivation. The motive behind AI depends on the inventor of the AI. We noticed just in the last couple of weeks that Sam Altman, who is the inventor of ChatGPT and OpenAI, a nonprofit organization pushing AI into the world, really believes that we need universal basic income, because we're reaching this point, this singularity point. In technology terms, singularity means when machines are smarter than people.

Have we reached the point of intelligence singularity? Many computer scientists have said we've reached it. ChatGPT version 4 was recently released.

Version 5 is under development. And AI is now, by IQ standards, again, just by intelligence-quotient standards, as smart as humans. So if we've reached singularity, what's the point? The point, in their worldview, would then be to use computers to help mankind be a better version of mankind.

If they don't have a view of eternity as we do, if they don't have a biblical worldview or a designed-by-God basis or origin, then their design for AI is for the betterment of man. And eventually you see some wild ideas from some of the inventors of these tools about trying to live forever. They're trying to use AI to predict how to...

how to make their bodies last, or how to come up with an AI bot version of themselves and move consciousness from their own physical, biological body into a bot. So in those instances the motive is to use AI to help us live longer and better and to help humanity. And then on the other end, there are a lot of Christians in the field who are trying to use AI as a tool, just as we've seen in the Bible, where tools and innovation are

under God's sovereignty and under His command. There are a lot of Christians who acknowledge that, and well-meaning people who are using AI for good as well. So it's both; it depends on their motive, really.

Well, you get into a good point in speaking about what the ultimate end is. Dale asks a good question. I've got the definition of transhumanism up in Encyclopedia Britannica, and this is fascinating, because to me what I see happening in the culture is sort of an argument from a motte-and-bailey fallacy, where they're introducing it as, well, don't you want better relational and social interactions, smarter computers, technology that helps human beings? But based on this definition, there seems to be a deeper motive behind that, sucking people in.

And while we're being entertained by AI, the drive behind AI seems to me, biblically, to be reversing the curse. Because if we look at this definition, it says: transhumanism is a philosophical and scientific movement that advocates the use of current and emerging technologies to augment human capabilities and improve the human condition. Transhumanists envision a future in which the responsible application of such technologies enables humans to slow, reverse, or eliminate the aging process, to achieve corresponding increases in human lifespans, and to enhance human cognitive and sensory capacities. The movement proposes that humans with augmented capabilities will evolve into an enhanced species that transcends humanity, called the posthuman.

So if this is the bailey, what they've been giving us in the consumerist world is the motte, right? That's the part where you can have fun with it; it can write papers and show you movies and do cool stuff. But in reality it seems like the worldview is built on "we need to reverse the curse." Absolutely. Even if they don't acknowledge it's a curse, they acknowledge there's a problem, and that AI and technology can solve that problem for us; we don't need God. Yeah, and that's absolutely true. The view from a humanist perspective, and from evolutionary scientists and computer scientists, is that mankind is a finite being with infinite capabilities that we can develop on our own, and this is a solution to that. And the social experiment that we're going through, the desensitization to computer power versus human power, and God's design for mankind versus man's own relative definition of mankind, is all being played out in front of us, enabled by AI.

Sounds like a 21st-century version of the Tower of Babel, in a sense. I think so, when you consider how AI, with new language models, is essentially localizing language and culture and reuniting humanity under a one-world system. That's absolutely another outcome we've seen with Sam Altman and some of these founders: one-world currency, one-world religion, one-world identity. They actually just rolled out a system using OpenAI to try to identify all humans and then to give universal basic income to all humans through cryptocurrency.

And to do that in a way that says: if you'll identify yourself in our system, our AI will give you a unique identity, provide you with this world identity, and, because of that, give you money. It's essentially incentivizing people to sign up for a system in which AI is driven to incentivize the reactive behavior of people.

A digital slavery. Exactly. There's one interesting point that both of you alluded to, which is this reversing of Babel. I've always found this fascinating, because when you look at Genesis 11, it says the whole earth had one language and the same words, and the people migrated from the east, came to the land of Shinar, and settled, and said, we're going to build a tower. This interesting interaction happens in verse 5. It says, and the Lord came down to see the city and the tower, which the children of man had built. And the Lord said, behold, they are one people, and they all have one language, and this is only the beginning of what they will do.

And nothing that they propose to do will now be impossible for them. So what then is God's solution? This is what people don't understand about the division of the ethnicities and the nations: God confused the languages. God created the nations to divide us from being a unified body of human beings trying to build a kingdom for our own glory, and to cause us to look to Him.

He says in verse 7, come, let us go down there and confuse their language, so that they might not understand one another's speech. So the Lord dispersed them all over the face of the earth. So this disunity in humanity came from the Lord, because man was trying to build a kingdom to his own glory.

It seems kind of like that's exactly what's happening. Though we can't maybe physically speak the same language, mathematically, scientifically, technologically, we can speak the same language. And so this reunification of Babylon seems to be manifesting before our eyes.

Yeah, it's interesting that they want to build a tower because it seems to me that they want to look God in the eye and say, we are equals here. And that is God going, no, you're not building a kingdom here in my world. Yeah.

Now, it is easy for us to go negative, because there is a lot of danger in all of this that we rightfully should worry about. Right. But before we continue down that path,

I want to pull us back from that a little bit and talk about the good uses. I read another article about AI where a guy took his dog to three different vets, and all three vets could not figure out what the problem was. The guy took all of the research from all of the vets, put it into AI, AI diagnosed the problem, and his dog is alive today. And so, in all technology, just like in all tools, there's a question. A knife can be used to cut meat, it can be used to do surgery, it can be used to kill people.

So is AI that kind of tool, or is there something else there? Is it more than that? I mean, it is that kind of tool in the sense that there's definitely good utility. Genesis 4 outlines mankind's invention of tools. AI gives people the tools to do some phenomenal things.

I just met with a founder of a company; I do coaching for some startup companies. I just met with a founder who's invented a way to use a VR headset to detect and diagnose early-onset Parkinson's using AI, because it uses pattern matching of how neurological disorders present in the eyes. And he's coming up with this idea because AI allows us to solve complex problems with a really pointed solution set.

Like, say, how do you solve for this problem? How can you genetically identify a marker in someone's disease chain that says, I need to treat this? Or nutritional deficiencies?

There are a lot of medical benefits that we've seen from it, for sure. I mean, students writing their papers with ChatGPT is not one of the primary use cases, but it's certainly a way that people ask, how can I perform research better?

How do I get access to share information and ideas with each other, and do it at scale? Yeah, I was talking with a guy this morning who wanted one of our documents because he wants to translate it into another language, and so we were trying to get him to download it onto his computer. And, you know, I'm preparing for this interview, so I said to him, well, actually, you shouldn't translate this yourself; you should just find an AI program, put it in, and say, translate it into Russian, and in five seconds it'll be done. He goes, what, are you kidding me? And I said, yeah, then all you have to do is proofread it to make sure that it reads correctly, but then you're done. Exactly.

He goes, I need to find that. So Google just released their AI; Bard is the AI that Google uses. And they just released Lens, the solution you can use with Google to scan an item and see what the item is, and it searches all of Google's repository. They're doing that with AI now as well.

Both for language localization and for conversions, like: search for an item and then tell me how often this appears in South America; what is the frequency of this item? And translation of items, translation of video, is being developed now, so that a live stream of a podcast could be translated and live-streamed in a different language in a different region.

Yeah, live video translation. Yeah, I've seen it. Courtesy of AI.
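For anyone curious how the translation workflow described above might actually look, here is a minimal sketch under the same assumptions as before: the OpenAI Python SDK (v1.x), an API key in the environment, and hypothetical file names and model choice. The output is only a draft; as noted in the conversation, a human proofreader should still review it.

```python
# A minimal sketch of document translation with a language model, assuming
# the OpenAI Python SDK (v1.x); file names and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Read the English document that needs to be translated.
with open("document_english.txt", encoding="utf-8") as f:
    document_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; any capable chat model would work
    messages=[
        {"role": "system",
         "content": "Translate the user's text into Russian, preserving "
                    "headings and paragraph breaks."},
        {"role": "user", "content": document_text},
    ],
)

# Save the draft translation; a human proofreader should still review it.
with open("document_russian.txt", "w", encoding="utf-8") as f:
    f.write(response.choices[0].message.content)
```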

So, yes, there are some phenomenal possibilities, both in medical and research and in live streaming and content development. I mean, I think there are a lot of pastor-help websites now that write sermons for pastors. Oh, yeah, I've talked to pastors that use it. Yeah, absolutely. We don't.

I don't. If we found a guy doing that and his preaching didn't match him,

You're out. Right. And so then you go, but is there a helpful mechanism in that for an area where maybe they don't have a pastor and are looking for localization of some content? Or as a starter pack: help me find someone who... and then it's just search 3.0.

It's a method of that in that instance. What we've been talking about there is AI in the narrow sense. And the narrow sense is a specific use case.

There are essentially two categories of AI. Narrow AI is solving a specific problem. General AI is the more generalized view of thinking like a human.

It's a different scientific field and a different treatment of what AI is. But the narrow-sense use cases that we've been talking about have a lot of great utility and definitely a lot of value. On the other question you asked, about how this is impacting us and how we should think of it: all of this is under God's sovereignty. None of it is a surprise, and we shouldn't fear

any of this. So I would just say to anyone watching this and thinking about AI and how to process it: don't fear, first off, because God is not surprised by this. I don't think anything in the last 20 years, 40 years, 2,000 years, 6,000 years has come as a surprise to God, such that we should ever go, oh, I bet God's wondering how to handle that.

In Babel, he knew how to handle that. Sure. With AI, he knows how to handle that.

We don't fear the innovation of man, because we're all God's creation and He still has sovereign authority over us. That's a good word. So it's not something that I'm ever fearful about when we talk about some of the bad uses; just like guns don't kill people and knives don't kill people, AI won't kill people.

There's no scenario of AI acting autonomously where we would ever say, oh, but it's outside of God's control. That's just not where we live. Is this story real, the apocryphal story? I don't know if it's apocryphal or not, actually. Microsoft's...

early version of AI kind of went crazy, and they had to pull the plug on it because it started communicating with other computers in a language it created that the creators couldn't understand. And they just... That did happen at Microsoft and Google, actually. And both of them have had to pull it back.

And some folks that I know who have worked on those projects have said... some of them have resigned and just said, we're done. We're done. Too much.

Yeah. We need to pull back. The other part of this has been interesting: there's been a very vocal presence about this from some of the tech leaders, the billionaires, saying, we need ethics written around this.

The innovation has gone faster than the policy, ethics guidelines, and restrictions have been developed. When that happens, and it happens a lot in different industries, but when it happens with something like AI, we're reminded that if you don't first define what you're trying to solve and you just come up with cool solutions, you end up having to try to rein things back in after the fact. Microsoft learned that, Google learned that the hard way, and they've had to recalibrate: okay, how do we get this back under control, put some framework around what is a good use, what's not a good use, what the computer can and cannot do, and then put guardrails up around protecting it.

And that's where the last decade of my career has been: protecting people from systems, and people from people, in the digital world. In AI, we actually started writing an AI ethics charter at PayPal, and across other tech companies, for use cases like hiring. Imagine when AI is making hiring decisions at companies and a bias influences how AI makes those decisions without human review or human oversight.

Those are things that we need to figure out, and financial decisions too. Do you get a loan? Are you creditworthy? Determining that with AI could mean it includes your social media posts. It could include your geographic location.

It could include data that, from a privacy and individual-creditworthiness perspective, you wouldn't say should be considered, but AI may consider it if we don't put guardrails and boundaries around it. So you made a scary and good point right there.

The bias is built in, right? I don't know if you have that as a question, but can you elaborate on that a little bit? Is that something we should be worried about in this sphere of the functional use of AI, the searching for information, the formulating of thoughts and all of that? Is the bias built into the programming itself to try to train or teach the population in a specific way? I mean, it's undeniable that every piece of technology is biased in some way, because it's developed by biased humans in a sinful, fallen world. Where we've got people with motivations either to gain in profiteering or to gain influence and power, there's always a motivation behind it.

So the bias in AI depends on its deployment. There are certainly some use cases that we've seen documented already, where AI has been used to bias media, to bias social media content, certainly political leaning. And with the elections coming up again, there's a tremendous concern about AI's influence in politics. But I think in general, when we think, well, what is the bias that AI represents?

There are two factors, because otherwise it's just a computing engine. When you mentioned the AI computer earlier: AI is really a program running on a computer with a set of data to decide against. So AI as the decision-maker is sitting on a piece of hardware doing the processing, but against a data set that's already biased.

So the bias is based on what information it's reading or gathering or scanning. I mean, it's literally crawling the web, gathering data, and building its own data set. And it's doing that on biased data when they can select which sites to pull from and what level of authenticity or verifiable-website scoring to assign to them. So absolutely, there are inherent biases in the development of any technology, because of the biases of the people developing it and based on the data that is prioritized as valuable.

And how it ranks that data is a big portion of that too. So an AI Bill of Rights sounds great, but it is only effective on the people that sign it. And it's only effective on the people that enforce the signing. And the people that don't sign it will automatically gain an advantage over everybody that does sign.

Exactly right. And that's what actually Sundar Pichai, the CEO of Google, came out and said when his head of AI research said, hey, we need to stop this. He said, it's too late. We have to compete. Right.

That's the point. So there's no stop. There are no seatbelts on this. Exactly.

So then I think that's a good moment to bring in the threats of AI. You did cybersecurity; that was your primary focus for years, and you had all of this stuff in mind.

So talk to us about the threats that AI poses for security. Yeah, so existential threats, of course. People's concern with AI in the general sense is that it will take over for people. And there's even, it's kind of funny, a new AI-generated picture tool that people are using to post on social media.

And it's their last-day-on-earth picture. So there's the Armageddon threat that people think AI poses to us, the existential threat. I'm not as concerned about that in this conversation.

We know where we'll be. If folks haven't seen it, go check out the Revelation series. But beyond the end-times view of things, we say, all right: AI poses, beyond an existential threat, a near-term threat to society, to our financial systems, to world unification, as Kyle talked about with the Tower of Babel idea.

There are other threats that we look at, touching on what we as humans view in our society as our identity, our persona. I think, especially because of the transition from the social media generation, we'd be concerned about obscurity. What happens when AI says, well, I've got a more important voice than you, because you're just a person?

And a post on a social media site is authenticated or validated by what, you know, Bard AI says versus what John Binzinger says; which is more valuable, and which one gets more views or hits or is more credible? So there's a concern of obscurity for humanity: AI will gain credibility and popularity to the point that, while Sam Altman and folks are saying, verify your humanity, that could be reversed and used against us. If you identify yourself as human rather than AI, you may actually be treated as a being with lower IQ capability than an AI being.

So AI will set up classes. Exactly. ESG.

Which is a significant concern because of the development of ESG and its inputs into corporate ethics and individual ethics. Those things are influencing, of course, how AI treats people. So Jonathan, you said this earlier, it's a digital slave.

It's a form of digital slavery. Right. The digital caste system. Only approved views will be allowed. Exactly.

Another, more immediate one would be economic disaster, financial collapse. AI is already being used there. I helped develop and build a system at Charles Schwab when I was there. It was Schwab's intelligent platform, which makes intelligent decisions like, hey, Kyle's been trading in gold and silver.

He's performing 20% better than you, Dale. You should be trading like Kyle does. That's a great idea to help people trade better and to be smarter. And those systems have been replicated in the financial industry. AI is now stepping in and saying, well, what if we do that analysis across markets and place trades on your behalf?

And they're doing that. So what happens with rogue AI? What if an AI system manipulates the market? There have been concerns that that's already happened in some situations: AI places trades or manipulates a market to downgrade a certain company, to impose a short position that hurts a company's reputation, fake news, of course, impacting it and then buying the stock. AI could work both ends, right?

So there's the manipulation threat and the financial security threat. And then, in ethics development, there's also the concern that AI may impose its own ethics in developing and writing material and content. So when I think of threats, I think of the existential threats, and then I think of the near-term, kind of intrinsic, threats of the technology itself. AI brings some really unique patterns we haven't modeled yet. In the modeling scenarios, you usually look at what's happened in the past, how you can prevent it, and what the risk of not preventing it is.

You know, what's the cost of not addressing the threat? And each of these scenarios in the development of AI will bring new possibilities and innovations we haven't thought of yet. What, if any, lag time is there on the unveiling of AI? Meaning?

I know, especially in the defense industry, the Defense Department area, that the technology can be 10, 15, 20 years, sometimes even 50 years, ahead of what hits the market. So is there a lag time to this? Are they further down the road than we already know? In some industries, they're way further than we know. In other areas, like ChatGPT and Bard and those things, they're releasing it as soon as they can because it's a competitive advantage.

But in markets like government intelligence, they're absolutely way ahead of us. And in public surveillance, way ahead of us. In the area of medical research and development, there's a lot that's not published yet that they are using it for, good things they just haven't talked about because of ethical concerns; until they address the ethics, they can't talk about what they're doing. So there's a lot of, well, we'll tell you about it once you tell us it's okay to talk about it. And a big part of the lag time is when it's deemed appropriate to talk about it; then we'll start to hear more about what I know. I don't say this to be a fear-monger, but in a tyranny, which to me is what this has the potential of leading to, ethics don't matter. Right.

Might makes right. Well, back to your point about enforcement of it: who gets to enforce an ethical violation against AI? You can put a stamp on your website that says ethics validated or verified by the AI committee. I mean, yes, an ethics bill of rights would be great, but it's not pragmatic. There's no practical mechanism to enforce an ethics law,

committee, or review. At the very beginning of the interview we said this is about the heart, right? What are people's motives in using this? And if it's accessible to everybody, which it seemingly is at this point, at least in some measure, if you have a bad heart, you're going to use it for bad reasons. That's right. Right. And how do you police that? My biggest concern around threats, then, in the use of it specifically, is around the relational and intimacy threat. I believe that humans... we've been designed by God to be in relationship. We've been designed by God to be in community and to worship. So, left to our own devices, without a knowledge of God, without acceptance of the gospel, what do people do? They become worshipers of whatever else.

They'll idolize information, they'll idolize material possessions, they'll idolize relationships. I believe there's a tremendous draw in society since we just exited a pandemic that created fear in society. Fear is the biggest motivator of mankind, right?

The biggest motivation to change and to do something you normally wouldn't do happens as a result of fear. So what happens if AI becomes this fear-mongering tactic of imposing, again, in this threat scenario, relationships virtually; of saying, yeah, that's okay, you can interact with a virtual version, an augmented-reality version of yourself, and it's AI, it's better than human interaction? Intimacy and relationships all become synthesized with hybrid humans, or this augmented view. That is something the industries at large are investing billions into. They want to create virtual relationships, and AI is the way to make them look and appear real.

The digital heaven, the digital afterlife. That's right. When you're here.

Okay. Interesting. I think, too, if we had a chair right there with an AI version of, I don't know, John Calvin, with all of his works in there combined into an AI persona, and he's sitting here, you know, with all of his background and speaking into this, some of that would be like, man, that would be incredible.

Right. You know, but the flip side of it is really what we're digging into. We're seeing a rebellious kingdom setting itself up to live forever.

with the promise of a gospel through AI, with digital saviors that are the tech companies. This is beginning to be a competing religion to Christianity. Yeah, I don't reject that at all.

I think there's a very valid point that AI could develop its own religious belief system and moral code, and there would become adherents to that. There have even been some suggestions that the Antichrist or the beast could be an AI hybrid human; you know, however biblically appropriate that is. But if there is a following under a world system and a world ethical view and a new religion that is AI-generated, it would be trusted by most people, because it's all-knowing, it's all-present, it's all-capable. And, wow, that is a version of an anti-god. Yeah, it is. But this, taken to its logical conclusions... you can see how AI would replace not just human relationships, not just friendships in general; it could replace marriage. Right. Because why am I going to waste my time with an actual human being who's going to be different from me? I could just create the one that I want and then interact with that one.

That's right. It can replace church. Right.

What do I want to interact with messy people for? Right. They say there is no perfect church. Well, no, I can create a perfect church, where the preacher will say exactly everything I agree with and none of the people there are going to bother me.

So you can begin to see how all of this replaces human interaction completely. Now, why would human interaction be something people would want to replace?

Yeah, so this comes into the broader view of the threat to humanity, which is the relational aspect and the intimacy aspect. The economy and the money and power behind a new innovation are always about how much people are going to invest in it and how many adherents or followers or buyers you will get into it. As soon as you flip the switch on, hey, your relationship's difficult? Well, we can fix that. We've got a mail-order, built-to-order bride for you, or a husband for you.

You tell us what characteristics you want. Maybe we'll buy Neil's package online and figure out your relationship matching thing and build an AI bot specifically for you. There's huge money in that because people are lonely. People want companionship. In the fallen world's view of things, relationship is based on what I want.

And if I want to design someone around my needs, well, AI could answer that. I could have something built to answer how I want and do what I want, as opposed to a biblical worldview, which says I'm actually in marriage and in relationship for God's glory and for your good. How do I do that in a relationship? Well, the anti-God pattern of that would be a self-absorbed, self-focused, self-designed version of relationship. And that's what AI offers in this AI-bot sense of relationship and intimacy.

It will sell. It just really, it opens the opportunity for the church to be a counterculture to everything and to show that actually God's way, the Bible's way is better than the world that AI is offering. Absolutely. In many ways, it's healthier.

It's more in line with the way that we were created than the augmented reality that is being pushed very, very heavily and is going to be pushed on us even more. Well, we're just scratching the surface on this, but I hope that was helpful. I hope you got some ideas that will be a blessing to you as you figure out how you're going to interact with the technology explosion that's taking place, the convergence of all these technologies, specifically AI, and the combining of technology with human beings. And so I hope this was helpful. We'll talk more about this, because this is such an important topic. But thanks for joining us at Redeeming Truth. If you would like to give to this ministry, that is the only way this ministry can happen.

Make sure to click the link down below to the give button and you can give to the ministry here at Redeemer. And then finally, if you want to know more about what's going on, check out this podcast right here.