Transcript for:
AI Strategies Against Fraud and Scams

Good morning, everyone. Right, we're all here together to discuss a very important topic: how we can use new technology, generally referred to as artificial intelligence, to combat fraud and scams, two different methods by which people steal other people's money. Many of us here have probably been the victims of a successful, or hopefully unsuccessful, fraud or scam. And if we haven't experienced it directly, we've seen it happen to a family member, a friend, a loved one, a neighbor, somebody in our community. I'm Aaron Klein. I'm the Miriam K. Carliner Chair and senior fellow in Economic Studies. I think the first time I fell victim to a scam was when I was getting tickets for a Grateful Dead show at RFK: I got sold some fake paper tickets, handed over cash, and got turned away at the door when the ticket didn't swipe. Today, not many people have a ticket stub in their hand. Tickets are electronic, money is electronic, and fraud and scams are increasingly electronic. The question of how we fight them has also gone electronic, and in that context we have to think, act, and behave differently. Success on this problem will protect millions of Americans, particularly more vulnerable people, including, let's be honest, our parents, who may be a little more likely to fall prey. Failure will make our entire system and economy less efficient. It will reward thieves and crooks, and it will harm our ability to adopt innovation and move forward. So with that as a backdrop, I can think of nobody better than the group of panelists and experts we've assembled here today, who are active in the field and in research dealing with this. At Brookings, and at the Center on Regulation and Markets, we try to have all sides represented in a debate. It's very boring to have a conversation where people just agree on policy.
But on this one, there isn't a pro-fraud side and an anti-fraud side. There's nobody who says, "Well, you know, scams are kind of cool and fun, and we should have more of them." This should be a unifying concept where we're all in this together, but there may be differences in how we approach it: where we put the burden and where we put the responsibility. And so in that way we're going to have a diverse set of viewpoints and interests. When panelists come up, I'm going to introduce you from here and then walk over and join. First, we're going to start with Brian Boates. Brian is Block's risk lead. Isn't that a cool title? He handles risk management and consumer protection, and baked into his job description is a focus on machine learning, one of the core elements of AI. Then we'll have Kip Wainscott, who's executive director of global AI policy at JPMorgan Chase, one of the largest financial institutions in the world, where he's thinking about how they're integrating AI across that world, particularly on fraud and scams. And last, and certainly not least, is Kelly Thompson Cochran, the deputy director and chief program officer of FinRegLab, one of this area's leading thought producers and researchers in the field, and one of the lead authors of a seminal report on what's going on with fraud, scams, and AI. So join me in thanking them for joining us, and for the really fun panel we're going to have, which I will kick off right now.

Let's talk for a second about this, because we're focused on AI, right? Fraud and scams have been around for a long time. AI is kind of new; we can debate when it started. So is AI here in fraud and scams? Are we talking about what's going on today, or what's going on in the future?

I mean, you're right. Fraud and scams have always been around.
I think what we're seeing is that the methods change. The methods are getting more sophisticated. It's not just us in the financial services sector trying to use technology to detect and prevent scams; the scammers themselves are becoming quite sophisticated. Obviously, the report that my peer here put out recently highlights a lot of the synthetic identities and the creativity around using generative AI to create fake documents that can trick identity verification systems, allowing fraudsters more access to financial services that they then use to perpetrate these scams and victimize many, many people, not just across the country but across the world. So it's a constant cat-and-mouse game. That's why at Block what we try to do is stay ahead of the curve with the technology and figure out how we can use it to stay a few steps ahead of the bad guys. As they get more sophisticated, we have to keep outrunning them, using the same technology to fight what they're trying to do. So yes, AI is definitely here.

And artificial intelligence includes a lot of different technologies. Financial institutions have been using machine learning for decades to monitor transactions and look for fraud. So it's not new on the defense side. It's not new on the offense side either, but we're seeing it really spread, and I think it's having two effects. One is automation and scale: the scale is quite a bit larger than what we've seen over time. And the sophistication is growing, as people use technologies like generative AI for more sophisticated frauds and scams. But we're also seeing huge increases in old-school things like check fraud.
So really, I think there are a lot of things going on at once. We're seeing more attention, and more bad actors getting sophisticated in this space, and technology is just one of the ways in which they're acting in a more sophisticated manner. So technology is part of the solution, but it's not the only thing we have to grapple with here.

Yeah. Aaron, first of all, thanks so much for convening this conversation, and to Brookings for hosting us, and for your great remarks. I think it frames things really well, and I like this question as a starting place: whether AI is here or not. To Kelly's point, some iteration of AI has been in use for at least the better part of a decade in the financial services context. But I still think there's quite a lot of crosstalk around what we mean when we say AI, and I think that's understandable. The step change that came recently with advances in generative AI really had the effect of jumbling the vocabulary, the vernacular. There had been incremental progress over time where policymakers and even consumers were starting to come to familiar terms with the role of AI in, say, narrow AI systems, maybe recommendations for their Netflix queue, or interactions with Siri, or something along those lines. And then generative AI comes along and suddenly we're having this really urgent conversation around AI. But in some contexts we're talking about maybe AGI, artificial general intelligence, or some agentic AI use cases that aren't in widespread use or deployment at this point.
And so it's really important, as a starting place, that we're intentional about what we're describing and what we're talking about, because we're trying to set the table for collaboration across sectors and stakeholders and consumers and law enforcement, and for the complexity of conversation that I think we need to have and to facilitate. We need to be clear about the delineation between risks and concerns that fall into the bucket of clear and present, or in some cases longstanding, versus, on the other hand, more speculative or notional risks down the line. A lot of fraud and scams fall into that former category: they're present, they've been happening, they are an acute priority. But we also need to be thoughtful about what's around the corner and innovate in the direction of future-state risks as well.

Yeah. Look, I agree with you. One of the reasons I brought up the ticket scam is that I get really frustrated by all these folks saying, "Oh my god, there's this whole new ticket scam out there where somebody's selling fake tickets on Craigslist or StubHub," and then I hand them the money or Cash App them the money. Granted, the rails matter: if I hand them cash, maybe the money's just gone; if I Cash App them the money, maybe you guys can catch it first. But this isn't a new scam. Selling fake tickets is as old as selling events. I bet if we went back to Rome when the gladiators were there, there was some guy outside the Colosseum pawning off something fake. And then there's this whole push: oh, it's got to be the money transmitter's responsibility. Is it? So how would you use AI? What would you do to stop that scam that's different, that's unique?
Yeah. Well, I think what's really interesting is that we know who our customers are. We have a lot of signals and data around their transaction history, who they know on the platform, their device information. So we can train bespoke internal machine learning and AI models that detect in real time, at the time of the transaction, the likelihood that this transaction is a scam. And then we have a bunch of different things we can do. If we're very confident, we can just block that transaction outright. You might be a little frustrated because you wanted to buy those tickets, and it seemed like a really good deal; I've never seen a scammer offer a really bad deal. And so this is where our warnings feature comes in, which we try to use very surgically, instead of issuing broad-based scam warnings on every transaction. We issue scam warnings on only about 1 to 1.5% of our transactions, where we try to give the customer in the scam just a moment to pause and reflect: hey, this might be a scam. And we try to use pretty direct language to shock them into thinking about what they're about to do. They're quite effective, actually. You would think a lot of people would just click through scam warnings: yeah, yeah, whatever, I want to do this. But somewhere between 35 and 45% of people actually abandon the transaction when we issue our real-time scam warnings. It's proven to be very effective at protecting consumers.

So let me drill down on that. Is that real-time scam warning being generated by something people would call AI, the way my suggested queue on Netflix is generated by something people would call AI? I just thought it was some giant Excel thing that popped out a flag.

I would say the screen itself? Not yet, though our design team is actually really cool.
They're playing around a lot with generative AI design tools. But today it's triggered by AI; it's triggered by machine learning. In the moment, an algorithm behind the scenes, well, many algorithms behind the scenes, are deciding: do we show this person a warning right now or not? The warning experience itself is designed to be consistent for all customers. But you can imagine a world where you start to tailor and generate that warning to be more personalized to the transaction or the individual, in some way that might resonate with them even more deeply and be even more effective. We haven't gotten there yet on the design of the warnings, just on how the system triggers them.

So, Kip, what about you guys?

Listen, yes. I really like this example, because the illustration of the ticket scams helpfully demonstrates the evolution of the risks we're seeing right now. There's a saying in the trust and safety community in the tech industry that people will do everything where they can do anything, or at least they'll try. So it's not surprising that a lot of old methods of fraud and scam have migrated to the digital environments people are living their lives in at this point. But there are two really important differences. One is the reach: so many frauds and scams these days originate, for example, on social media, and the scale and speed with which somebody can reach potential victims in those environments is a level of efficiency that was never available to the scamster hawking Grateful Dead tickets in the physical world.
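As an editorial aside, the triggering flow Brian describes (score the transaction in real time, then block, warn, or allow) can be sketched roughly as follows. The function names and thresholds here are purely hypothetical illustrations, not Block's actual system:

```python
# Illustrative sketch only: a real-time decision layer that maps a model's
# scam-likelihood score to one of three interventions. All names and
# threshold values are hypothetical.

def decide_action(scam_score: float,
                  block_threshold: float = 0.95,
                  warn_threshold: float = 0.80) -> str:
    """Map a scam-likelihood score (0.0 to 1.0) to an intervention."""
    if scam_score >= block_threshold:
        return "block"   # high confidence: stop the transaction outright
    if scam_score >= warn_threshold:
        return "warn"    # mid confidence: show a real-time scam warning
    return "allow"       # low risk: let the payment proceed

print(decide_action(0.97))  # block
print(decide_action(0.85))  # warn
print(decide_action(0.10))  # allow
```

The key design point Brian raises is the surgical use of the middle band: warnings fire on only a small slice of transactions, so they stay salient rather than becoming noise users click through.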
And then, from a defense perspective, there's the dynamic perimeter, the surface area, around which we have to think about this, because to your point it's not just happening on one platform. It's a daisy chain. By way of illustration, it might originate, let's say, on a dating platform; then that connection might migrate to a Facebook connection; then a bad actor might pitch a fraudulent investment opportunity using a fake website and a company page on LinkedIn; and then they might transfer the funds using Cash App or Zelle or Venmo. It's become more of an ecosystem challenge than it was in the analog space. And so it really speaks to the need to drive energy into these multistakeholder contexts. At JPMorgan Chase, we're founding members of the National Fraud and Scams Prevention Task Force launched by the Aspen Institute, and we're partnering with researchers and thought leaders to support the kind of cross-pollination that builds that ecosystem coordination in the defenses. That, I do think, represents a novel evolution of where we are and where these risks have been.

Let me follow up on that, because a question came in from the audience, from folks who submitted online; this is an online conversation to some degree. They ask: can AI help us predict the newer, emerging types of fraud? Because you're drawing a distinction between the historical fraud of the ticket sales and this new type. Is AI far enough along to say: you know what, we think this could be a new method they're moving to?
There are a lot of stories out there about romance frauds, right, with synthetic identities and long phone calls. It used to take a long time to have a phone call about something. Now your AI chatbot can have a phone call with 10 people simultaneously, and even if eight of those hang up, that's much easier than having 10 people pretending to be somebody they aren't. Can AI get that far ahead, or are we a little too sci-fi here? Are we in, what was it, Minority Report meets Her?

Well, I think AI is really helpful in anomaly detection and in potentially detecting new patterns as they're emerging. But I do think it's important to distinguish a little bit between frauds and scams. We've been using both terms interchangeably, when in the financial institutions context there's often a difference. Fraud is really on the institution. When it's third-party fraud, it's somebody trying to convince a financial institution that they're somebody else when they come into the system the first time. A scam is when they convince one of your legitimate customers to put money into a fraudulent activity. Scams are much harder to detect and much harder to guard against, because the consumer really thinks they're doing the right thing, and they really do have a right to their account and their money. And so we need lots of really good detection. Especially on the scam side, you're right that it's not just financial institutions acting by themselves. It's potentially social media and telecom companies too, because the scammers are getting to the consumer through some other channel. They switch to the financial channel at some point to make the payment, but that's way downstream of where it started. And it's harder to convince consumers at that point: wait, you might be getting fooled here.
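Kelly's fraud-versus-scam distinction can be made concrete with a small sketch. The field names and logic below are illustrative only, not any institution's actual dispute schema:

```python
# Hypothetical sketch of the fraud-vs-scam distinction as a dispute
# classifier. Field names are illustrative, not any real schema.

from dataclasses import dataclass

@dataclass
class DisputedTransaction:
    authorized_by_customer: bool  # did the legitimate customer approve it?
    induced_by_deception: bool    # were they tricked into approving it?

def classify(txn: DisputedTransaction) -> str:
    if not txn.authorized_by_customer:
        # Third-party fraud: an impostor convinced the institution they
        # were someone else (stolen card, account takeover).
        return "fraud"
    if txn.induced_by_deception:
        # Scam: the real customer authorized the payment but was conned
        # into it, which is much harder to detect upstream.
        return "scam"
    return "legitimate"

print(classify(DisputedTransaction(False, False)))  # fraud
print(classify(DisputedTransaction(True, True)))    # scam
```

The point of the distinction, as the panel notes, is that the two categories call for different defenses and carry different legal protections.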
So there are ways AI can help with both of those functions, but it's important to think about them separately, because the scams are much, much harder to fight.

And you worked in government, Kelly, for a long time. Are a lot of our policies written to protect folks against fraud, which would be what you call third-party: somebody stole my credit card, somebody stole my debit card, and I have a set of rights? Versus: actually, no, I did authorize that transaction, but I got duped; I authorized something that wasn't real?

Yeah. There are differences in protections, and even on the fraud side there are differences between channels. For instance, the Electronic Fund Transfer Act and Regulation E provide protections in some channels, when somebody steals the PIN and uses the card fraudulently, but they don't apply to everything. Wire transfers, for instance, aren't covered. And that's one of the challenges we're seeing: the technology is expanding, the sophistication of the fraudsters and scammers is expanding, but our frameworks, both technological and regulatory and business practice, aren't always keeping up, and they're not even across every single channel. That is a real challenge.

This is a really important point, and it's something we took under consideration when we were building out our scam reimbursement program: there's no obligation to reimburse the customer for an authorized transaction that turns out to be a scam. But we built out an entire scam reporting flow to go above and beyond current regulatory obligations, because we're making it really easy to facilitate payments between individuals.
And so we bear some of that responsibility in trying to detect when things are going wrong, when customers might be getting duped, issuing those warnings, and blocking fraudulent and scam transactions. But our systems aren't always going to be perfect. So when we get it wrong and miss it, and the customer actually goes through with the transaction and gets scammed, they have a means to report directly to us: this transaction was a scam. They can give us a bunch of information, and we manually review every single one of those scam reports, which at our scale is quite massive.

That's a lot of money for you guys to spend.

Yes, it's a big program, but we believe it's important. We want to go above and beyond, and we feel we've built the systems and tools to offer this kind of program in a scalable, sustainable way. So we actually issue reimbursements to customers who fall victim to scams using Cash App. To date, something like more than 50,000 customers have received reimbursement, and we've reimbursed over $5 million. It's been a really great program. We rolled it out earlier this year, in January, and we think it's our job to go above and beyond the bare minimum obligations in this space.

It's an interesting point you raise about the method, because when the Electronic Fund Transfer Act was written, it was in '74, something like that. I wrote a whole paper on this, because it reminded me of two things.
One, from economics class, was the Coase theorem, which raises the question of who bears the loss and the government's obligation to assign the property right, which in this case was the right to be reimbursed for fraud. The other was the politics of it. I tried to imagine the massive fight that went on about these debit cards: if somebody stole or lost the debit card, who was liable? And the political solution, with a few caveats, because nothing's written so cleanly, was basically that as long as you reported it within a certain amount of time, $500 was your maximum liability, which in 1974 was real money to people. It's still real money to people; it was a lot more then. But the infinite liability of your whole account was placed on the financial institution. And I tried to imagine the intense lobbying fights between consumer advocates and banks: should it be $100, $500, $1,000, should it be capped? And then the most fascinating thing happened once the property right was assigned and debit card technology took off. Because the institutions bore that unlimited liability, they invested so heavily in technology that the vast majority of accounts issued today actually get first-dollar loss coverage on debit fraud reported in time, and it became a competitive method by which institutions said: come to us, and if you report it, we'll take care of you.

I want to ask a follow-up on that. Do you see this, on the financial institution side, as a comparative advantage? Because nobody wants to be defrauded or scammed, and people may say: you know what, I'll use this channel to send money. As you point out, back in 1974, to get a wire you had to go to the bank. Now, to do what's called a wire, you can do it from your phone or your computer. The impediments to those transfers aren't that different. If I asked people here: are you using an ACH?
Are you using a reverse direct debit? Are you using Venmo, PayPal, Square Cash? Maybe within your app you're offering them four or five different options for how to send the money, and people don't even know the legal differences, nor do I think they can or should ever be expected to know them. Do you see that as a comparative advantage potentially down the road, or just as corporate responsibility, doing the right thing?

I see it as both. I guess that's a cop-out answer, but I see it as both. I think we possess the technology to build these systems in a way that actually creates a higher-trust, safer environment for consumers to use when they're transacting with one another. And I do think that over time, the companies that do that really well are going to be preferred by those consumers, and it will grow into a comparative advantage. But I also think it's the right thing to do in many cases. If we can make that investment in technology to solve that problem and keep customers out of harm's way, we should be trying to do that. And it's the bad actors, the scammers, the fraudsters, who are also innovating, keeping us on our toes and pushing us to innovate even beyond what they're capable of. So yeah, I think it's definitely both.

Yeah, I agree. These are mutually reinforcing objectives. Trust is really the packed sand upon which our financial institutions are constructed. It's critically important; it's table stakes. But it's also about the customer experience in a lot of ways, and we're leveraging and investing in these technologies in part so that we can improve the customer experience.
And part of the balance we want to find, not just within our product offerings but within the policy solution set, is about mitigating risks, being clear-eyed about what the spectrum of risks looks like, and introducing technologies to deliver top-quality services, all while reinforcing trust that these are trustworthy tools that make your experience better. And simultaneously, being able to build a community of practice around diverse stakeholders who are spreading awareness of the capabilities, the solutions, and the mitigations. This is all, ideally, going to be a very virtuous circle, and I think we're optimistic about that.

Can I add one more point? By giving consumers an opportunity to report when these things happen, even beyond the detection and the AI sophistication, just building a way for customers to let us know, hey, this happened, and for us to investigate and confirm or deny whether we believe the scam actually took place, that feeds back directly into the training of our models and systems, which constantly get better at detecting the next scam that's coming online. So I think the key is having those feedback loops, where we're constantly learning about new trends and putting that directly, automatically, back into our detection algorithms, which then continue to improve over time. And since we built the system, and we put out a white paper on fighting scams with technology at Block a few months ago, just in the past few months we've brought down our confirmed scam rate, which is the metric we track: the percentage of peer-to-peer transactions that are confirmed as a scam.
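The feedback loop and the confirmed-scam-rate metric Brian describes can be sketched as follows. Everything here is a hypothetical illustration: the function names are invented, and the figures serve only to show the basis-point arithmetic (1 basis point = 0.01% = 1/10,000):

```python
# Hypothetical sketch of a report-driven feedback loop: every manually
# reviewed scam report becomes a labeled training example, and the
# headline metric is tracked in basis points.

def confirmed_scam_rate_bps(confirmed_scams: int, total_p2p_txns: int) -> float:
    """Share of peer-to-peer transactions confirmed as scams, in basis
    points. 1 basis point = 0.01% = 1/10,000 of transactions."""
    return confirmed_scams / total_p2p_txns * 10_000

training_set = []  # (features, label) pairs for the next retraining run

def ingest_report(features: dict, confirmed: bool) -> None:
    # Label 1 if the review confirmed a scam, 0 if it was ruled out;
    # both outcomes are useful signal for the next model iteration.
    training_set.append((features, 1 if confirmed else 0))

# Illustrative numbers: 40 confirmed scams across 1,000,000 transactions
# works out to roughly 0.4 bps, i.e. "a fraction of a basis point."
print(confirmed_scam_rate_bps(40, 1_000_000))
```

The design choice worth noting is that ruled-out reports are kept too: false alarms are exactly the labels a model needs to bring down false positives.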
We've brought that down by over 70% in the last few months alone by investing in this technology and building these systems, and now it's a fraction of a basis point. For those of you not familiar with basis points, 0.01% is one basis point. We are below that, well below that, and stable. So, incredibly low confirmed scam rates, which we're very, very proud of. But we're still going to keep investing in this program and trying to get even better.

I mean, look, that's really impressive, because there are tremendous amounts of scams out there. You're investing in stopping it; they're investing in figuring out how to get around it. It's a cat-and-mouse game. On that point, I recommend the white paper. Let me recommend another white paper, which is Kelly's, and give you a second to talk about your findings from your paper in this space, which is another foundational report here.

Oh, thanks. So we set out to do a couple of different things in the paper, so it's a little bit on the long side, I'll admit, but that's because it has multiple parts. We were trying to look, one, at data and technology trends, the kinds of things we're talking about right now in terms of how people are fighting frauds and scams. But we also looked at two other things. What are the underlying requirements for identity proofing and for transaction monitoring that really affect access to financial services? We looked at account opening as well as payment transactions, either domestic or international, providing an overview of both the legal requirements and how the business practices work on top of them. And the third thing is really to look forward: what kinds of things can we do in the short term to improve the ecosystem?
Some things about this system are very hard to fix without a comprehensive approach to things like identity and digital privacy. But there are a lot of things we can do right now. And we know, because there's so much more activity, it has increased so much, and we've seen so many data breaches, that some of the old systems for identity proofing don't work very well anymore. So we're really seeing a lot of investment by financial institutions in new data, new analytics, and new data-sharing mechanisms, but it isn't necessarily getting huge amounts of attention from the rest of the ecosystem. It's really important that we put attention there, both because there's tremendous upside to building a better system, to fixing the long-term problems that have been excluding people for years, and to building the kind of system we want for online and in-person transactions. But the other thing is, there are risks. We know current systems in some cases produce huge numbers of false positives. They're flagging transactions that aren't actually fraudulent, that aren't actually a problem, that aren't an AML risk...

Anti-money laundering?

Yes, anti-money laundering, sorry. And it takes up a lot of time and energy in the system, and it potentially ends up with consumers being excluded. Nobody wants those things to happen. But it really requires the whole ecosystem working together. And as I think we've said a couple of times, it's not just the financial institutions. It really does have to include, especially on the scam side, the other digital systems that are flipping into payments and then flipping back out, because financial institutions by themselves only see that middle part. They don't see all the stuff leading into it, and that's a really important part of it.

And this point... oh, sorry, I didn't mean to interrupt.
No, this point about false positives is a very real one that we have to deal with. That's why using AI and machine learning technology is so powerful: it's so much more sophisticated and can be so much more precise than the simple human-crafted rules of 10 or 20 years ago that a lot of risk systems relied upon, especially now that we're in a real-time environment where we're trying to do these things synchronously, at the time of the transaction. And so there's a balance: how do you design the system so that when you get it wrong, the legitimate customer has some recourse? They understand what happened; they know what they need to do to get unblocked and proceed with what they're trying to do. But bad actors are not able to do that; they get stuck, and their account gets taken down.

And that does get more complicated with AI, because many of the systems are really complex. You're potentially talking about a lot more data and a lot more sophisticated analytics, but that can also make it a little bit harder to understand how the model is getting to its answer, and if it is off, how do we go ahead and fix that? So it's a complex game, where we're seeing this back-and-forth between the fraudsters and the financial institutions, but also really trying to think about how we promote good long-term practice that benefits everyone and manages the risk of exclusion as well as the risk of not catching the fraud and scam.

So, what's your reaction?

Yeah, no, it was a really important contribution to the issue set.
Really grateful for the paper, super thoughtful. Plus one to Brian's point on the feedback mechanisms, and just the need overall for improvements in the ecosystem for information sharing and for reducing the rates of false positives and suspicious activity reports. Also, there's a great call-out in the paper with respect to the need for sustained policy attention to these issues. I do think the conversation becomes a little bit fragmented sometimes because of all the mixed equities and stakeholders it implicates, and I think the paper really helpfully underscores that. And then also, this conversation around privacy-enhancing technologies is just going to be super important. A former colleague of mine from Stanford, Renée DiResta, is now here in town in Washington working on digital personhood credentials. This is essentially a zero-knowledge-proof system for demonstrating, via credentials, that you are a human, not an AI, without revealing personal information about yourself. And I do think that catalyzing innovation in this direction is going to be a really important contribution to this whole ecosystem, and I was really grateful for the way that your paper framed that as a priority. Yeah. So there are a lot of directions to go off of this, but let me start with one, which is this tension around the consumer experience, which, Ryan, you point out: hey, you're about to send money to this person you don't usually communicate with; we're letting you know that other people have flagged this as a potential fraudster. That little screen pops up, and I'm like, wow, even if I'm totally certain about this, thanks. Right?
And then there's... I'm old enough to remember when I had to call my bank or my credit card issuer to tell them I was traveling. And I often didn't do that, and when you're on the conference circuit you end up in some pretty bizarre places, conference towns, places I would never go to personally, like Phoenix. I was just there. I'm sorry, I assume it was for a conference. And I would constantly get these cards declined: call us to confirm that it's really you in Phoenix. And first of all, I'm embarrassed to admit that I'm in Phoenix, and second of all, I don't want to make that call. The consumer experience is push-pull: I want you to protect me, but I don't want you to make me do an extra step, right? And then with the institution that requires that, it's like, ah, this card's such a hassle, they're always false-flagging me, so I'll use this other card, right? Or this other channel, this other mechanism, this other electronic transfer. Meanwhile, I'm like, there are a million ways to know where I am. I just tweeted that I'm at this conference, right? You have my phone, which is probably pinging something or another, right? Can I toggle something? Why do I have to call and confirm to a person, or an automated thing where you press one or whatever? Where do you see that push and pull? How does the technology handle that? How is it a privacy issue? How do you see that situation evolving? And where do you see consumers? Because one thing I've noticed: who here values their privacy? All right, almost everybody. Who here would want to tell your real-time location to one of the largest advertising firms in the world? Okay, who here uses Google Maps? All right, you've all just contradicted yourselves. If you think Google Maps isn't using your location for advertising purposes, you don't understand how the system works.
I used it this morning to figure out whether I should be on 16th or 14th Street. Right? We economists struggle with this privacy question because... I just did what's called stated versus revealed preference. You all stated that you cared about something, and then you all gave it away for the only value on offer, which was saving two minutes on your drive. So how do you handle this preference, this push-pull, with your consumer base in this space? I think this speaks to the spectrum of risks and the responsibility, really, of the ecosystem and of policymakers. We need to provide meaningful choice and information so that consumers can make informed decisions about the level of risk they're willing to accept and what the trade-offs might look like if they want to carry a more conservative risk profile. For me, as for a lot of people, the most consequential financial transaction of my life was buying a house, and in that transaction I was very happy for all the friction. It was a level of financial commitment where I did not want to just push a button and make that money go. I wanted the... Right. But you got ripped off. How did I get ripped off? Did you buy title insurance? I honestly don't know. Then you did. I declined title insurance, and the settlement agent will stop you in that transaction and berate you, because they're getting a kickback on that title insurance, which pays out something like 5%. It's one of the great... and it's disclosed up the yin-yang, although most people think they have to buy it, because what's required is lender's title insurance. You have to buy title insurance for the bank that holds the mortgage. You don't have to for your own home equity.
And I told the guy, explain to me how the fourth story of a condo in an abandoned bottling factory will have a title problem when I'm the original owner. I think I was the only one out of the 210 units to decline this, which is about a thousand bucks. Part of it is, the bigger the stakes, the bigger the scam. $1,000 in the course of normal life, I pay attention to. In the course of buying your home, it's... the bigger the cost. I mean, also the greater the layers of friction. This was constant consultation. I wanted that phone call from my bank to make sure this was what I intended to do. I wanted the confirmation of the wiring instructions. There was a lot of consultation; there was a lot of paper to dig through, and you sign left, sign right, it doesn't matter. But that level of friction, I do think, is appropriate for the scale of that financial cost. If I'm buying a cup of coffee, I want none of that. I'm a monster without coffee, and I just want it to happen; I want it to be fast and simple. And I think we need to do a better job of establishing the spectrum that exists of risks, and of risk appetite along that spectrum, and the willingness to accept more friction and more confirmation along that sliding scale. I don't think eliminating high-risk transactions is realistic. I don't think it's even ideal. But I do think consumers need to be equipped, and there are a lot of stakeholders that have a shared responsibility in providing that information, so that we can make those informed choices and really understand what the friction is meant to accomplish and why it's there. It's a really interesting insight.
I mean, the mortgage example is maybe the extreme example, but to your point about how 20 years ago I would travel and my card would just get declined anywhere and everywhere: well, that's a relic of the simple rule-based system, where they're getting it wrong 99 times out of 100, and so you just become frustrated with that experience even though you know why it's happening. You understand what they're trying to do; they're trying to protect you. But nowadays you get an SMS, you press one, you rerun the transaction. Okay, that's not such a bad experience. I understand why it happened there. Obviously, the technology has gotten a lot more sophisticated, so that's happening less often to you. And what we see is, when you intervene with a scam warning, or maybe a transaction confirmation, and you're doing it at a high enough degree of precision, where the customer thinks, "Okay, this maybe is a weird transaction; I don't normally do this," it actually reinforces and builds trust with the consumer. If you do nothing, it's like, "Okay, this is really easy to use, but is this thing really safe?" Whereas if you're seeing these warnings, and they're surgically presented in the right setting, the right time, the right transaction, it actually gives consumers peace of mind and increases trust: okay, this is a safe platform, they're doing things to watch out for anomalous activity, and it's not happening so often that it's annoying. And I understand why it's happening, so it makes sense. I think that's the key. So I think sometimes people do welcome that friction, in the case of a mortgage or a strange jewelry purchase or whatever it might be. But that's the balance there. So, go ahead.
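[Editor's illustration] The "wrong 99 times out of 100" figure is exactly what you get from a base-rate calculation: when fraud is rare, even a reasonable-sounding rule flags mostly legitimate activity. The rates below are made-up assumptions chosen to show the effect, not measured figures from any institution.

```python
# Back-of-the-envelope sketch of why a blunt rule can be wrong almost every
# time it fires. Illustrative rates only (Bayes' rule on invented numbers).

def flag_precision(fraud_rate, catch_rate, false_flag_rate):
    """Share of flagged transactions that are actually fraudulent."""
    flagged_fraud = fraud_rate * catch_rate
    flagged_legit = (1 - fraud_rate) * false_flag_rate
    return flagged_fraud / (flagged_fraud + flagged_legit)

# Assume 1 in 1,000 transactions is fraudulent. A rule that catches 90% of
# fraud but also flags 5% of legitimate activity ("card used out of state"):
blunt = flag_precision(0.001, 0.90, 0.05)    # roughly 2% of flags are real fraud

# A sharper model with the same catch rate but a 0.1% false-flag rate:
sharp = flag_precision(0.001, 0.90, 0.001)   # nearly half of flags are real fraud
```

The point the panelists make follows directly: cutting the false-flag rate, not just raising the catch rate, is what turns warnings from an annoyance into a trust signal.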
I was just going to say, I do think that financial institutions' attitudes about this are changing somewhat, and I think maybe consumers' are too, because they are seeing the headlines, right? But there is still an awful lot of incentive that works the other way, and it does make it complicated for people. It's actually much easier to find studies about how many people will drop out of a digital purchase process if they hit friction than it is to find out what's going on from the consumer side, because the businesses are really worried about people abandoning and walking away, or switching payment methods, or doing other things. So I think businesses and financial institutions do weigh both sides of this. It is a complicated process, and there is concern about the tremendous number of false positives. That creates friction, it does exclude some consumers, and it really frustrates people. So we have to be really intentional about it going forward, and there's a general trend where we're taking friction out of the system all the time to make new payments, real-time payments, faster and faster. But there are reasons why that friction actually does some protecting of both the consumer and the institution. And so we have to start thinking about it from a protection viewpoint as well as a convenience viewpoint. And one other thing, real quick: I'm just fascinated that you would seemingly prefer your bank to go search your social media over asking them to call you. A hundred percent. I'm not sure everybody would make that choice. If you're putting something on social media, you're making it public. I chose to tweet that I was at this event. Yeah, fair enough. But I'm not saying I want them to read my email. But who here uses Yahoo? Who here uses Gmail?
Okay, you've all made the choice that you'd rather have a company read your email and market to you because it has a better interface. Now, that's my opinion on whether Yahoo is better than Gmail, but the show of hands shows the market is making that statement, right? Ask people, would you prefer your email provider for free? Free email, right? Right. The market speaks. So again, this is stated versus revealed preference. If I did a poll, I'd find nobody wants anybody reading their private stuff, right? My private email is private. I don't want... but if I email my wife, "Should we take the kids to Disneyland this summer?", boom, the banner ads start popping. Right. And you said something that I vehemently disagree with, which is this idea that speed of transaction and fraud are at all correlated, that real-time payments are somehow scarier than slower payments, which I think is just factually, deeply inaccurate. Hold on, before we go to that. Kabir laid out the alternative framework, which I think is spot-on, which is that the dollar amount of the transaction should be correlated with the attention to fraud and scams. If I get ripped off on a $2 cup of coffee, that may suck, but if I get wired out of 50,000 bucks, that's life-altering. And we have a system in place legally where it's very strange, because I get these calls from reporters all the time about this scam or that scam. Some scammers can make a lot of money stealing two bucks from a lot of people. Others spend a lot of time making 50 grand off one person. A person walks into a bank, right? Or they start a wire transaction, and the bank teller says, "I really think this person in Russia is not in love with you. I really think this person in Nigeria is not a prince." Right? And the person says, "It's my money. I'm convinced I'm going to get a million dollars. This person really is." And they try to stop them, right?
And then it turns out they're the victim of a scam. It seems to me that the policy, and the product, should be more focused on the dollar amount of the transaction than on the channel or method: if it's a $50,000 transaction, you really do want three or four or five checks. You're willing to take more of a risk if it's $10. I don't think anybody disagrees that the dollar amount is an important risk factor. But it's nowhere in the law, right? There's nothing in the law that says I'm going to treat a $20 transaction differently from a $30,000 one. It's done by method, right? Credit card versus debit card versus a wire. Well, and not all the methods are covered the same way, which is the point we were talking about. My point about real time is just that if you can't claw that money back after it's gone, it's gone. And that's the kind of friction point we're talking about. And not all methods are like that. But that's one of the things we're grappling with as we have this diversity of channels developing, and they all have slightly different systems and slightly different rules, and some of the things you were saying. So I think we have to be really thoughtful about that, because if you're sending by a channel where you can't get it back, that changes the risk calculus fundamentally. Yes, clearly when the dollar value is bigger, even more so. But yeah. So, Aaron, one thing: the dollar amount matters; it matters a lot to the victim. But to your point, at scale, a $2 high-efficiency scam network can yield real funding for really heinous activity. It could fund multinational terrorism. It can fund human trafficking. The wider risks to society and to national security, I think, warrant the same level of policy attention.
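[Editor's illustration] The "friction should scale with stakes" framework from this exchange can be sketched as a simple policy function keyed to the dollar amount and to whether the payment rail is reversible. The tiers, thresholds, and step names here are all invented for illustration; real institutions set these very differently.

```python
# Hypothetical sketch: verification steps scaled to transaction size and to
# whether the rail lets you claw the money back. All tiers are assumptions.

def required_checks(amount, reversible):
    """Return the list of verification steps for a transfer of this risk."""
    if amount < 100:
        checks = []                              # a cup of coffee: no friction
    elif amount < 5_000:
        checks = ["sms_confirm"]                 # one-tap confirmation
    else:
        checks = ["sms_confirm", "payee_verification", "cooling_off_period"]
    if not reversible and amount >= 1_000:
        # irrevocable rails (e.g., wires, real-time payments) get an extra
        # scam-warning interstitial, since there's no clawback after the fact
        checks.append("scam_warning_interstitial")
    return checks
```

Under this sketch a $6 coffee sails through with no steps, while a $50,000 irrevocable wire accumulates four, which is the "three or four or five checks" intuition expressed above.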
And I do think that's why efforts like the national task force are really important, and there are some parallels, like the stop scams coalition. This effort to widen the aperture a little bit on who the stakeholders are that are at the table draws stronger connectivity: yes, this is a victim-level issue that is critically important, but there are also these broader societal interests in addressing it. And I think that can help with both the solution set and how we think about responsible policies in this space, and it also really elevates this issue in the prioritization of the policy agenda. I think Washington can do a much better job of connecting... again, a lot of it happens in isolation. I think FinCEN and Treasury are actually bringing some energy to these more structural, downstream consequences, but I don't think that's connected back right now to the policy conversation. We really need a more coordinated and focused effort, and I think drawing attention to both ends of the ecosystem is going to be an important part of that. So let me follow up on that, and then we're going to open it up to the audience, in terms of what Washington could do to better unleash these solutions. I'm glad you're optimistic. I'm pessimistic. I think FinCEN is out there focusing on the wrong thing, lowering their dollar threshold to $200 for non-bank transmissions in 34 counties in Southern California and the El Paso area, which I think is insane and violates the Trump administration's pretext of caring about cost-benefit analysis. I've publicly challenged them before and I'll challenge them again: show me the cost-benefit analysis to justify that FinCEN rule. I'm waiting.
Still waiting. There isn't any. They just want to gather data, I suspect for immigration purposes. But they said they were doing cost-benefit analysis. I think they're just trying to pile up SARs. Somebody mentioned anti-money laundering. Somebody mentioned suspicious activity reports. These are the documents banks, and not just banks, I should say, other money transmitters too, have to file for a whole range of activity when they think somebody is doing something suspicious. Money transmitters are actually among the top five filers of SARs. The number of SARs has grown exponentially. These things are costly, and yet I've found very little evidence in my world that they lead to anything. There's a SAR being filed on every state-licensed cannabis company that's engaging in the banking system somewhere. You know, good old Google Maps can show you where to find the local pot shop. I don't think you need FinCEN for that. They're just jamming paper, adding cost to the system. Where do you think Washington can unleash this new technology, as opposed to these old, outdated regulations? What policy changes would you like to see here to let you do a better job of unleashing the technology you're sitting on, to make the system safer and faster for everybody? Yeah, sure. There are a few things. One is that I do think there are opportunities here to present a little more certainty in the technology space, on the AI agenda in particular: a unified framework that takes a risk-based, sector-specific approach, so that companies like ours have assurance as to what the risk framework we should be working under looks like, and also so that, in the regulatory context, we're working with regulators who understand and know how to identify risks in the financial services industry and in that context.
Another is that I think policymakers can be really helpful right now in thinking a little more holistically about the allocation of responsibility in this ecosystem. When we talk about the ecosystem challenge and that daisy-chain illustration we were discussing earlier: if, for example, a scam is initiated on an online marketplace, where the platform operator has access to signals of inauthentic behavior and other risk signals, and they take no action, and then ultimately the financial dimension of this is accomplished off-platform via Cash App or Zelle or Venmo, right now the preponderance of responsibility falls on that off-platform payment provider or money transfer mechanism. And I think there's a more sensible way to look at where some of these risk signals sit and where some of the responsibilities should be shared in trying to mitigate these risks. So those are definitely a couple of areas that I think are going to continue to be really important. And this is where information sharing can be super valuable, because, and they've both mentioned this many times, by the time the transaction is happening, we're at the end of the life cycle of the entire scam, which might have been playing out for weeks or months on a social media platform or in private direct messages. And then the rubber hits the road on the financial service provider, who facilitates the payment but only sees a few percent of the activity that led up to the whole thing. And so I think information sharing would be really critical, just to help understand the history behind a transaction, where it makes sense, if they're seeing signs of inauthentic behavior.
And then, what's the incentive for those other platforms to be policing this stuff before it gets to the financial service? What's the shared responsibility, the accountability, the liability that's going to incentivize them to take a more thoughtful approach to this kind of activity before it gets to that stage? So I think a lot of this goes back to the broader ecosystem: who are all the players here? It's not just the financial service providers; it's social media platforms, online marketplaces, direct messaging apps, and law enforcement too. To your point about SAR filing: look, we want to provide reports on suspicious activity so that it can be investigated, and those bad actors can be tracked down, and action can be taken, and so on and so forth. And so making sure we've got that tight partnership, that we understand what's valuable for them, and that they're asking for, and requiring us to provide, the things that are actually going to lead to outcomes. I totally agree. You know, I've found it's like a caricature of a bad lawyer at FinCEN: they just want the information, and then they never tell you what they do with it. And it's infuriating, and it's costly, and for what societal benefit? I mean, Denny Hastert and Mike "The Situation" from Jersey Shore are in jail because of SARs. Most people don't realize that. But have we really provided much societal value from all this? Or are we just collecting paper at a large expense? I think, yeah, these are really good examples.
I think fragmentation is the biggest challenge, and all of these are examples of that fragmentation. Even within institutions, banks typically have different channels, and fraud and anti-money laundering are often separated, even though they're very closely related functions. The systems, the protections, the monitoring for one payment channel will be different than for another, even within the same institution, in many cases. And then you start thinking about smaller institutions, who don't necessarily have the resources to be developing AI on their own without some guidance and some support. And then we have law enforcement fragmentation as well, and law enforcement working on really antiquated systems that also are not keeping up with the technology. So I think there are ways that everyone in the system has to step up, but a federal presence is critically important, because that gives you the whole system, potentially, across all the different levels. We are seeing states and all sorts of actors try to address this, but everybody is only able to do the piece they can see right in front of them. And that's where federal leadership across institutions, and it's not just one agency in Washington, either, could really make a difference. But it's very challenging to do, because you have to overcome those boundaries and that inertia and those old systems, and invest, and that's complicated; it takes time and energy. So let's... I see, Brad, you have a question, and then I see a couple over there. And again, the rules: identify yourself and ask a question. Oh, thanks, great discussion. Brad Blau, Inclusive Partners. I run a consulting firm in civil rights and financial inclusion. Two related questions.
Prior to the widespread use of AI, with the use of rules, there was often concern from the civil rights community that the rules were really exclusionary, a kind of blunt-fisted way of doing compliance for AML and fraud. For instance, you could have companies along the border with Mexico, in heavily Hispanic areas, that were being widely excluded. How are you making sure that those kinds of rules, or proxies for race, ethnicity, or gender, aren't creeping into your systems? The related question: given the Trump administration's announcement that it's not going to enforce disparate impact at all, how are your practices changing, if at all? I think this is a really important point. As the technology gets more sophisticated, and harder to understand in some ways, you really have to be buttoned up and constantly re-evaluating: how are you impacting different segments of your customer base in unanticipated ways? And so I think things like disparate impact analyses, external, third-party-vetted disparate impact analyses, are key to that, just to help inform or educate where maybe you're getting something skewed in some direction or otherwise, and to provide that kind of objective lens to understand: where's the inclusivity element of your product? Who are you potentially excluding unintentionally? I think that's key. And then internally, of course, just having a really disciplined process around what data we do use to make certain decisions and how that data is being used. It's about staying vigilant, because as these systems get more complex, this might crop up in unanticipated or hard-to-identify ways. So I think that's a key check. Yeah. You want to answer it? Sure. Yeah, just very quickly, but plus one to all of that.
I do think that we're leveraging AI internally, even just through our back office, applying LLMs to promote efficiencies. We have 30,000 software engineers at JPMorgan Chase, which is really remarkable. Since we've rolled out an internal, privacy-protective LLM suite... a large language model, yes, LLM... we've seen it improve coding productivity by 10 to 20%, depending on the department. This is really creating more opportunities for us to devote energy to retraining models, to taking learnings and applying them to red-teaming types of activities, and really channeling those efficiencies back into the work, so that we have a better signal-to-noise ratio in how we're approaching the problem. But the final thing I want to say is that I really think this underscores the importance of cross-sectoral standard setting. We have been active participants in some of the NIST risk management frameworks. We are working across the multistakeholder community to help continue a lot of that coordination on what these standards should be. Not just for the certainty it provides customers, and for our commitments to our regulatory and corporate responsibilities, but also for innovation, for entrepreneurs. We want them to understand that if they want to sell to a company like JPMorgan Chase, if they want to be one of the vendors we're working with, we have standards, they have to meet them, and they're going to want to know what those standards are.
And a startup doesn't want to get $10 million into their $12 million runway and then realize, oh, this is too risky, or this doesn't approach safety and responsibility in the way that big deployers are going to want to see. And so we're actively engaging in that community of practice. Sure. Right there. Great panel. Tom Oshawitz, general counsel for Informed. I have a big-picture question for you about the evolving role of the consumer in detecting fraud in the ecosystem. And let me just give you a little background. In the late n... I guess, the question? Yeah, the question is coming, really quick. The credit reporting laws were premised on the idea of the consumer's role, via the credit report, in detecting fraud: free access to credit reports, fraud alerts. Today we have AI where people can't tell a real image from a fake image, where you have ChatGPT and can't tell a real essay from a fake essay. How do we determine what the role of the consumer should be as this changes? Great question. I can start off. So I think the consumer is incredibly important, and when we were talking about scams and frauds before, that's a prime illustration. The consumer does have the right to access their account. They are the legitimate owner and the legitimate transactor, right? So if we can educate them and help them detect the scam, that is the best place to do this, and the strongest defense. But I don't think educating consumers alone is going to be enough in this environment. If big institutions that are deploying AI are having a hard time detecting fraud, deepfakes, and other things, we can't expect consumers to figure this all out for themselves. So it is absolutely a critical part of it, but we can't rely on that alone. And I think we really have to think more comprehensively about the entire ecosystem and how to get the incentives right,
so that absolutely everybody is doing what they can to get this problem under control, because we can't just rely on consumers to do this alone, especially in this high-tech environment. I totally agree. I think awareness and education are key. Scams are constantly evolving; consumers are constantly confronted with something new. Okay, it's not a Nigerian prince scam via email, which we all know and laugh about today, but once upon a time I'm sure that was a very effective scam. It's evolved into more sophisticated things, which is why we actually put ad spend behind content that we create. We have this "if it's weird, it's weird for real" campaign, just to show people: it's a seemingly normal interaction, but if it actually seems weird, it probably is. So there's this whole awareness and education component. But I totally agree it's not enough, which is why we invest in the detection, the warning, the transaction blocking, and then, even in the case that it still happens, we have the customer reporting, which feeds back into our system, which then tells us, hey, what about this could we have picked up on to know it was a scam? Retraining our models, getting better at detecting these more and more compelling, hard-to-detect forms of scams and fraud. So you kind of have to do the whole thing in order to build a resilient system that's going to be able to educate consumers while protecting them, and then learn from them as these things happen. So I totally agree, and the only thing I'd layer on is that I think we have a responsibility, both at the sector level and in working with governments. We really need to close that feedback loop, and we need to encourage governments to coordinate internationally to bring these criminals to justice.
We need to demonstrate, and we owe it to the consumer to understand, that these hoops they're jumping through are in service of fighting these crimes, and that it's part of a system that actually works to bring these bad actors to justice. So we really need to see more energy around that kind of downstream activity and enforcement. Totally. Yeah. So, I have no idea, but I'll just tell you one thing that doesn't work is signing your name. I mean, why are we still signing names? I sign the word "fraud" for about half the transactions I'm asked to sign for, and I've never had one turned down. Kabir? Thank you all for your remarks. My name is Anton Shank. I'm an economist at RAND, which is a public policy research organization here in DC. I know you have all talked about how the way we buy things has been changing over time. To ask about an emerging way this could change: a lot of frontier AI research organizations are working on developing increasingly autonomous agents that will take actions on our behalf. The example you started with was purchasing tickets. Today I can go to ChatGPT, ask it to buy me concert tickets, and it will go through that entire process for me. I'm curious if you can say a little more about what mitigations, if any, would be necessary to mitigate the risks from that kind of way of buying things, as well as what kinds of emerging risks might be on the frontier from things like that. Thank you. Do you guys... Yeah. So, I mean, it's such an important question right now. It's galvanizing a lot of policy interest and technology interest.
You know, I think one thing, and I'll come back to Kelly's report here, is that privacy-enhancing technologies are going to be really essential going forward. Right now there's been a lot of investment in identifying the authenticity of specific pieces of content, even of biometric content; there are models that can generally detect voice cloning and the use of synthetic media in authentication practices. But eventually we're going to have to get cleaner about this, because every piece of content is going to touch AI. I think finding ways, with great certainty and in a manner that protects privacy, to really validate that a person is who they say they are, so that the platform on the other side of the transaction can determine whether it is interfacing with an agent or with a human, and, depending on the risk profile of that transaction, can layer on whatever defenses might be appropriate for that context: all of those things are going to be really important. But this also comes back to the scale of the ecosystem challenge we're confronting right now. No single side or actor in that equation is going to be well positioned, with all of the information, to mitigate the risks. We need to be thinking not just about how the car is replacing the horse in this scenario, but about how the car transformed the way people get around, the way cities, urbanization, and travel work, the whole ecosystem and infrastructure that came with that transition. That's really what we're talking about right now, and it's a really big question. All right, Kabir had a question, I think.
Hi, Kabir with Flourish Ventures. Thank you for an engaging conversation. I had two questions. One, all of you mentioned information sharing earlier as a potential response to this complicated problem. So how do we actually get there? Is it in private environments like the Aspen task force that you mentioned, Kip, that you can actually enable that information sharing? Is it in some other scenario? I'll mention 314(b) and let Aaron and Kelly explain what that is, but is it that way that we make it happen? And a second question for Brian: it's fascinating, all the things you're doing seem really at the cutting edge. It led me to wonder what you would like to do that you're unable to do, either because of technology or because of some internal constraint from a risk officer. You want to do 314? Then sure, I can start off on the information sharing. So, different institutions sharing information with each other in this space is complicated, and historically a lot of financial institutions have felt real constraints on doing that, for privacy and law enforcement reasons and a bunch of other things. And we talked a little bit about SARs and anti-money laundering; there are specific laws prohibiting financial institutions from telling people when they file those. So there are layers of constraint here. 314(b) is a provision in the Patriot Act that allows financial institutions, under certain circumstances, to share for particular purposes. It has been interpreted to include fraud situations, although I think banks and other financial institutions have taken a while to get really comfortable with trusting that, and there's not a lot of regulatory infrastructure for how those kinds of sharing initiatives should happen. So we are seeing people getting more comfortable with it.
We are seeing both private and public initiatives underway. The American Bankers Association is doing things to help small banks, and there are all sorts of different initiatives in different situations. I think it's probably a mix, and people really look to the government to be part of that. We talked about the SAR side of it; there have also been some changes in laws directing FinCEN to start sharing out more insights from the SAR information process. I don't think that's really kicked in yet, but "all of the above" is, I think, the short answer. Would more regulation clarifying the 314(b) exception for fraud be useful? My sense, in talking to some institutions, is that they would like some sort of regulatory architecture for just how to do it, you know? So I think for some folks it would help. All right, second part of the question. He kind of broke your rules with the second question there, but no, it's a really great point. I'm fortunate in that I come from a machine learning and data science background, and at Block we've always invested very heavily in those functions for all things risk management. I still have the benefit of leading a lot of those groups that I've built out over the years. So when it comes to what else I would like to do: we're doing a lot of really cool things, and there's always more to do. New fraud trends emerge, new issues crop up, the product team wants to run in ten different directions, and we have to keep up and make sure we're building the right solutions to enable them to do that.
What I think is really interesting, obviously with the rapid development of AI and agents, is thinking through the workflows, the automation, and the tooling on the human side of the equation. That's where there's a lot of uncertainty and excitement around the kinds of tools we'll be able to build for the risk experts who look at these things every day, to improve decision quality and make sure they're not getting it wrong. They're incredibly talented and our tools are quite good, but I think this is going to really revolutionize that behind-the-scenes side of it. A lot of people don't think about it, but it's still a critical part of the product development, the experience the customer has, our machine learning and detection risk systems, and that team of operational experts who are investigating, confirming, and reviewing these transactions and these accounts. So I think that's the area where the most recent wave is going to have a lot of benefits, and it's unclear exactly what that's going to look like at this time. Great. Well, join me in thanking the panelists for such a fantastic and lively conversation, and everybody, be safe and secure knowing that there's a lot of work going on behind the scenes, but that doesn't mean you don't need to be vigilant in protecting your own nest egg. So with that, thank you all very much.