Transcript for:
Understanding Deception at DEF CON

Good morning, DEF CON. Thanks for coming out. I appreciate it. We have two speakers this morning, Tom Cross and Greg Conte, and their talk is going to be on deception and counter-deception. Please give them a good DEF CON welcome.

So good morning, everyone. Thanks for making it out this morning. So we're excited about this talk.

I've been coming to DEF CON since it got started. Early on, at that time, the internet was just beginning to become publicly available to consumers, right? And we were thinking about how the internet was going to affect society. What was going to happen? We were sort of imagining a world that we now live in today.

And I think it's interesting to look back at what we thought would happen and question sort of where we've ended up, right? I think people in this community could see both the promise and peril of the Internet. I remember going to lunch at DEF CON, going to a food court, and everyone's paying with cash. And we're thinking, you know, in the future we're probably going to pay electronically for everything, and a little record will be kept every time we buy something.

That record will build up a model of our behavior, and how will that get used against us, right? But at the same time, I think people were genuinely excited about the idea that everyone will have the world's knowledge at their fingertips, and maybe that will make people a little smarter. It might up-level humanity.

So as time has gone by and we've actually got all these capabilities and everyone's using them, certain facets of human nature have come roaring forth and have had a huge impact on how this stuff actually affects us. I think part of the problem is that people don't come to the internet because they want to get smarter; they come to the internet because they want to be told that they're already smart. They seek validation, right? And there are a lot of people who recognize this, and they're in what I call the funhouse mirror business.

So they present you a world in which you're a good person, in which you should get what you want, and in which the people that you dislike are shown to you in the most negative light possible, over and over again. And this worldview that is centered around you and why you're special is highly compelling to people's egos.

This is, on some level, what we mean by deception. And we find deception not just in the narratives that appear on social media, but at every level: in the phishing emails that people are getting, in the malware that's running on their computers. Ultimately the internet has become this massive deception engine in which these false narratives are appearing at every level of abstraction, and you can't trust anything that you see on the screen. I think people have recognized this. They've recognized that the internet is not giving us what we wanted.

It's not making us smarter. And in fact, the theme of this year's DEF CON is how do we engage with the internet that isn't giving us what we want. How do we make things better?

So before I get into the overview of the talk, let me introduce us. So my name is Tom. I've been coming to DEF CON for a long time.

I've done a few social media projects over the years. My current project is called FeedSeer. It's a newsreader app for Mastodon: it shows you the top links that have been posted on your feed in the past 24 hours and what people are saying about each link. And there are lots of InfoSec folks on Mastodon.

I've also generally had a career in InfoSec and I've spoken at a lot of cons on information security topics, often with this gentleman, Greg. Hi, I'm Greg Conte. Thank you for joining me here on Arrakis for DEF CON 32. My background: I was long-term faculty at West Point, where I ran their cybersecurity research and education programs, and I also worked at NSA twice and US Cyber Command twice. I developed and taught the information operations course at Black Hat Trainings seven years ago and have been running it since, there and in private sessions, and I've also run the military strategy and tactics for cybersecurity course for 10 years. Same deal.

Thanks, Tom. Yeah. Thank you. Oh, and Greg and I are teaching a class on adversarial thinking at DEF CON training after DEF CON is over, if you guys want more Vegas.

So what are we going to cover in this talk? The first thing we're going to do is tap into some of Greg's deep expertise with military doctrine and thinking. Militaries have been thinking about conflict for hundreds of years, and deception in particular is an area where they have identified maxims that teach you how to craft effective deceptions. So we're going to talk about how to do deception effectively, and then we're going to flip the coin over and ask: if we understand offense really well, what does that teach us about defense? Are there counter-deception principles that we can derive that tell us how to fight deception?

And the deception and counter-deception principles we're going to cover are useful in any context where deception could be occurring. They could be useful for malware attribution or in a security operations context. But, you know, this year's DEF CON is about engaging with the internet. And so we're going to spend the third part of the talk trying to apply some of these counter-deception maxims and ask what sort of capabilities the internet is lacking that might help us defend ourselves against the deception that we're facing.

So why are we talking to you guys about this? Why is this relevant? I think hackers have the ability to identify fuckery better than a lot of general people. So you guys have a unique talent at that. I also listened to Cory Doctorow's talk last year, which inspired this year's theme, and his talk yesterday. He's engaged in a very important discussion about policy issues.

And what I do is I write code and I break code. And I want to talk about how I can apply those skills to engage with this problem as well. So hopefully we add a new dimension to the discussion.

So, Greg, let's talk about deception. So deception has been around for millennia. And the key idea is that it's the act of hiding the truth to get yourself an advantage. And what you're trying to do is influence your target to make an incorrect decision, right?

Or to take an action that you want, or fail to take an action, all to your advantage. And we've seen examples throughout history: from our five-legged Trojan horse here (generative AI delivers when you need a five-legged Trojan horse), to the Civil War, where people painted logs black to fake cannons, to the Cuban Missile Crisis, hiding medium-range ballistic missiles and concealing them in ships, to the Persian Gulf War, where the invading Iraqis were concerned about an attack from the sea by the Marines and instead were surprised by an attack by land from the opposite direction. We see it in the Russia-Ukraine conflict with the Ghost of Kiev, a mythical ace fighter pilot fighting the Russian aggressors over the city, which was later shown to be a deception operation. So we've seen this for millennia.

And when we think about the targets of deception, humans come first to mind. In information security, that's users: think phishing, typosquatting, domain mimicry, spoofed login pages. But also experts: think malware analysts facing false flags, fileless malware, deceptive metadata, code injection, rotating command and control infrastructure.

So specialists can be targets for deception as well. But more than that, it's not just the humans, it's our code. Think malware detection systems: people try to deceive those through fileless malware, polymorphic malware, rotating command and control infrastructure. And as we move into the era of AI, AI is also a deception target: think poisoning the training data of the AI, or jailbreaking techniques to overcome the safeguards that people put in place. And deception can occur at all levels.

It can occur at all levels of the network stack. It can also occur, thinking more broadly, at the tactical, operational, and strategic level. At the tactical level, you're actively deceiving someone you're engaged in conflict with, all the way up to the strategic level, where you try to hide your basic national objectives, your intentions, your strategies, and your capabilities, or maybe put forth an alternate reality that's more beneficial.

So humans process data through this idea of the DIKW hierarchy. We begin with data; by adding context we create information, by adding meaning we create knowledge, and by adding insight we create wisdom. Well, deception can poison that chain: by poisoning the data, we end up with incorrect information, knowledge, and wisdom that we think is entirely accurate. So deception poisons our ability to think.

And deception is a professional discipline. What we have here, the orange document, is a declassified CIA document that includes the deception maxims that we're going to be discussing. And there are also manuals, like this one, the Department of Defense joint publication on military deception.

It comes out every few years, updated. So the key takeaway here is it's a professional discipline. And looking at some of the maxims, we're just going to run through these pretty quick, and then we're going to flip them and show how you might counter. But one of the most important is it's easier to maintain a pre-existing belief in your target than to force a change, right?

So if someone believes the world is flat, you can help encourage them to continue thinking the world is flat. But if you want to have them think the world is round, well, that's much harder. So, yeah, I really think Magruder's principle is sort of the golden rule of deception. It speaks to this, like, fundamental aspect of human nature.

You know, say there's a politician you hate. There might be different answers in the room for who that person is, but everyone hates some politician, right? And if I walk up to you and I say, well, you know that politician you hate?

Did you hear about the stupid thing they did this morning? Whatever I say next, whatever it is, you're going to believe it. Because it's aligned with what you already think, and it feels good to be right.

And if any of you have read Asimov's Foundation series, there is a really good model of deception that he presents in those books. There are these robots that can manipulate people's emotions. And they're constantly explaining that they can't radically change people's worldview. The way that they function is by trying to understand what a person thinks and identify some specific belief that they have, which the robots can emphasize or reinforce just enough to get that person to take an action that they might otherwise not have taken.

I think that's a really good representation of effective deception in a nutshell. So one of my favorites is the idea of exploiting the limits of human and machine sensing and information processing. We as humans have senses, right, that are limited, with known parameters of what we can sense. And we have limitations in our ability to process information.

Machines do the same. They have sensors and they have information processing. So if you can exploit those limits, for example, think self-driving cars.

There have been some really interesting talks showing that you can exploit the sensing and processing of a self-driving car and make it see things that aren't there. Or in a heist movie, there's usually a motion sensor, and someone moves really, really slowly to slip past it, that kind of thing. A variation of that: here's a picture of an inflatable tank from the Ghost Army in World War II. It was a group of creatives, and they were a formal deception unit. Their ability to create inflatable tanks and fake troop emplacements worked because, given the capabilities of the time to sense the environment, these tanks looked real from a distance. Now they obviously would not. Then the third maxim is Jones' dilemma. The idea is that in this competitive information environment of competing narratives, if you're going to try and deceive someone, you ideally have to have more false sources than real ones.

So it's this battle of quantity and quality of competing narratives. Another strategy is the idea that you want to carefully create a story: you don't just put out single data points, you're putting puzzle pieces out there for the deception target to connect together, pieces that tell the story you're trying to portray. Another is the idea of carefully designed placement of deceptive material.

The idea is you want to make your target work for it. Think about it: if you wanted to plant a fake diary in your room, leaving it out on the desk is an obvious place, and that could give it away. What you probably want to do is make it hard to get: put a lock on it, hide it in a ventilation duct, write it in code. By the time they find it and have unpacked all of that, they think this has absolutely got to be the real diary. And we see that in malware as well, right?

That's certainly a strategy. If you make them work for it, people will be more likely to believe it. The flip side of this is an orgy of evidence where many incriminating things are obviously found.

Well, that should raise suspicions. So yeah, Olympic Destroyer is a really good example of the use of some of these techniques in order to fool malware analysts as to attribution. Doing attribution on IOCs is fraught with risk with respect to, you know, the manufacturing of those IOCs.

In this case, there was a rich header in a binary that captures information about the developer's environment. And they literally copied a rich header from a Lazarus sample and put it in their sample. And so someone who knows rich headers and has a collection of them and knows that they can be used to attribute malware might find this and be excited that they connected the dots, right? And so now they're emotionally invested in the idea that this is true.
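To make that concrete, here's a minimal sketch of the kind of dot-connecting an analyst might do: pull the rich header out of a sample and compare it against headers from previously attributed samples. It assumes the pefile library, and the known-hash set is a placeholder for illustration, not a real indicator.

```python
import hashlib
from typing import Optional

import pefile  # third-party: pip install pefile


def rich_header_hash(path: str) -> Optional[str]:
    """MD5 over the decoded rich header values, or None if the header is absent."""
    pe = pefile.PE(path)
    rich = pe.parse_rich_header()
    if not rich:
        return None
    # rich["values"] is the decoded list of (comp.id, count) pairs describing
    # the toolchain that built the binary.
    blob = ",".join(str(v) for v in rich["values"]).encode()
    return hashlib.md5(blob).hexdigest()


# Placeholder hash standing in for a previously attributed sample (illustrative only).
known_lazarus_rich_hashes = {"0123456789abcdef0123456789abcdef"}

h = rich_header_hash("suspicious.exe")
if h and h in known_lazarus_rich_hashes:
    # A match is a lead, not proof: as Olympic Destroyer showed, a rich header
    # can simply be copied wholesale from another group's sample.
    print("Rich header matches a known sample -- verify it against the rest of the binary")
```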

And they go out with this narrative, and it turns out that in this case the rich header didn't match the rest of the binary, and the attribution was completely deceptive. So again, if you're trying to protect yourself against deception, avoid situations where you're emotionally invested in the consequences of your work. So I particularly like this one, Maxims 6A and 6B. They're two sides of the same coin. The first is ambiguity.

So the idea is you want to increase doubt in the person's mind, in your target's mind, by providing many... possible truths. Okay? Makes sense.

They're uncertain. There's this cloud of uncertainty. The flip side of that is you want to decrease doubt and focus the target on a particular falsehood. So you want them, instead of being in a sea of doubt, you want them absolutely, positively sure that they're right or that they're sure about this falsehood.

They think that it's true. Another idea is that in... professionalized deception that there's the idea of husbanding deception assets that there are always limited resources you need to save them for the time and at the best time in the best place to be effective you can see that in cyber in information security capabilities oh day right but imagine if you had the ability to place deceptive material on a web server you can't if you use it you're using that capability and he might not be able to use it again so you save the these for the time and place you think will be most effective.

Maxim 8 is the idea of feedback. In a professional deception organization, the attackers are monitoring their target audience, looking for feedback that the deception has been believed; they want that feedback loop. And they're also monitoring their own friendly organization to see if it is being deceived. All right, so now that we've talked a little bit about the principles of effective deception, let's flip the coin over and talk about how you practice counter-deception. At a high level, I think there are four general ways to counter deception.

One is intelligence collection. If you're actually monitoring your adversary and just watching them create their deception, then you know exactly what they're doing, why they're doing it, and what the truth is, right? U.S. Cyber Command has this defend forward concept, which has to do with going out and directly spying on the people that are committing the acts so that you know exactly what they're up to. Another thing you can do, since we don't always have intelligence capabilities, is disruption. We talked about husbanding assets: if you know someone has to go to a certain effort to inject deceptive narratives in places, and you can take action to interfere with their capabilities, then that prevents them from being successful.

A simple example is like if they have to build a botnet in order to spread something online and you're able to take the botnet down, then you're interfering with their deceptive capability. Another thing you can do is analytic. Sometimes that's your only option. You're looking at the information you're collecting and you're critically analyzing it in order to figure out whether or not it's deceptive.

And a lot of our talk will focus on that, because that's often the only option that you've got. Another thing you can do is deterrence: if you can demonstrate that the deception will not be effective, perhaps your adversary will not bother to do it. So let's talk a little bit about analytic processes. In infosec, we're really good at devil's advocacy, right? And this is the same thing: it's about playing devil's advocate with respect to the things that you are deciding or that you believe, and you have to have the discipline to do that, which is one of the hardest things about it. But if you have a belief or an intelligence-related conclusion, there's a set of facts that underpin that belief. You can look at each of those facts and ask yourself, how hard would it be for my adversary to simulate that fact?

How many ways have I measured that? Is it possible to fake it? Even in malware attribution, a rich header is easy to fake. Could they get access to a particular source IP address, or how hard would that be for them to simulate in order to fool me, right?

So let's talk about some of the deception maxims, and I'm going to sort of flip them around and create counter-deception maxims based on them, and I've organized them into categories. The first category is maxims that suggest when deception may be present. Your human intuition is both an asset and a liability when you're dealing with deception. This is the context where it's an asset.

You want to develop a kind of spidey sense that tells you things aren't quite right here. This looks fishy.

And that might prompt you to dig deeper and explore the hypothesis that what you're seeing might be a deception. So, for example, we've talked about carefully sequencing events to tell a story so it builds up in the target's mind. So flip that over. What happens if there's all kinds of evidence that gets disclosed all at once? If you find the diary sitting out on a table where it shouldn't be, you know, that might be a sign to you that, like, hey, someone's trying to, you know, manipulate me.

This is possibly deceptive, right? So the person operating the deception must either do ambiguity or misdirection. And in each case, we can try to counter that, right?

So if they're engaged in the ambiguity deception, there are going to be a lot of narratives available, right? And if you see multiple narratives, that might be a sign that deception is taking place, that some of these narratives are simulated. That was certainly the case with Olympic Destroyer: there were a bunch of organizations publishing different attributions, and that's because the malware was intentionally misleading analysts into thinking that different narratives were correct. In a misdirection deception, the attacker is trying to get you to believe a particular thing which is false, and so if you analyze it carefully, perhaps you can discover that it's not true. The plus-minus rule is the idea that nothing that is an imitation or a simulation can be exactly the same as the real thing; otherwise it is real.

And so identifying those characteristics which are added or removed from the thing, it helps you identify that it is deceptive. And again, you have to have the discipline to say, okay, this isn't right, it's incongruent, and therefore I must accept that it is not real. The place where your human intuition operates against you has to do with mental discipline. Humans are really good at jumping to conclusions based on not necessarily enough evidence.

We kind of have to do that in order to function in the complicated civilization that we have. But that's how deceptive operations get you. And so you kind of have to have the discipline to slow down and not listen sometimes to your intuition.

Magruder's principle is the biggest one here. It's important to apply the same critical analysis to facts that support your assumptions as you do to facts that challenge them. And that's really hard for people to do. Also, with respect to carefully designed placement of deceptive material: don't assume a fact is true just because you had to work hard to get it.

Ask yourself, could the adversary have simulated that? I love this quote from a book on professional counter-deception operations: "The vulnerable mind fits ambiguous information to its own preconceptions and expectations." I think that's a really clear statement of vulnerability to deception. So humans are not that complicated, which is part of the problem, right?

Psychologists have been studying biases in humans for hundreds of years. And I have a list of examples here. If you're wondering, the prompt was "humans swimming in a sea of piranha." That was the prompt that gave us that. But I wanted to highlight two. One is the idea of selection bias.

And just one instance of selection bias is where you can choose from a variety of facts, truths, to paint a picture. But you don't share all the truths. You're just selecting certain truths.

And I think we see that in many news agencies today. A lot of what they say is true, but it's which truths they choose to share that paints the picture. So that's selection bias. Confirmation bias sounds a lot like Magruder's principle: the idea that we are biased toward things that reinforce our preconceived beliefs.

So the third set is maxims that suggest methods of preventing deception. So the two maxims that have to do with the limits of human and machine sensing and information processing, like, speak to the need to be able to measure reality from multiple perspectives. Think about this in a technical sense.

If you're in a security operations function and you have sensors out there looking at your network, and there's only one kind of a particular sensor, then that sensor's data source could be manipulated; it could be presented with false facts. If you've got multiple ways of looking at the situation, then it gets harder and harder for the person operating the deception to fool all of the sensors that you have.
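As a toy illustration of that principle, here's a sketch of an incongruence check across independent sensors; the sensor names and data model are made up for the example, not any particular product's API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    sensor: str          # e.g. "netflow", "edr", "dns-log" (illustrative names)
    host: str
    saw_connection: bool


def cross_check(observations: List[Observation]) -> str:
    """Flag the case where independent vantage points disagree about the same event."""
    views = {o.sensor: o.saw_connection for o in observations}
    if len(set(views.values())) > 1:
        # Either a sensor failed, or one data source is being fed a false picture.
        return f"INCONGRUENT {views} -- investigate for possible deception"
    return "consistent across sensors"


obs = [
    Observation("netflow", "db01", True),
    Observation("edr", "db01", False),   # the host-level view disagrees
]
print(cross_check(obs))
```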

With respect to husbanding deception assets: again, professional deception operations involve effort and resources, and if you can disrupt and target those capabilities, then you can undermine their ability to be successful. Feedback is also interesting. A professional deception operation is going to be watching you to determine how you behave. You may choose to behave as if the deception was effective, because you don't want the deceiver to know that you were not deceived. You may also choose to show the adversary that you are aware of their deception, which is a deterrent method. One of the things that folks in DHS and CISA are talking about is pre-bunking: we think a deceptive narrative is going to be placed out there.

So in advance, we're going to go ahead and explain to everyone that it's false so that people just don't bother trying to push that narrative in the first place. So in developing this set of trustworthy sources of data, like when we're looking at things on the internet, it's really important for us to, you know, collect different places where we can get different points of view that are credible and objective. And so like curating collections of experts, I think is really important.

But a weakness that we have is a tendency to believe experts who are telling us what we already think. And so we have to find objective criteria for evaluating who we want to listen to. And I think this is a real challenge.

Even experts sometimes deceive us, right? So let's say there's a set of facts, 1, 2, 3, 4, and 5, and they add up to a conclusion, which is 15. What I can do is present you some of those facts, 1, 2, and 3, and a conclusion, which is 6. Each one of the facts I've presented to you is absolutely true, and the conclusion I'm presenting is a natural consequence of the facts that I've presented, right? So when you're listening to an expert, the challenge is that you depend upon them not just to tell you what the conclusions are, but also what the underlying facts are, to educate you about the issue. And so you're vulnerable: without knowing more than they do about the subject, you can't see the other facts that they're omitting that might lead you to a different conclusion. And so one of the things that people wring their hands about a little bit is journalistic objectivity.

There's this idea that, in American society in particular, where we have two political parties, what should happen is that journalists should present both sides. And we know that getting both of those perspectives does not get us closer to the truth, right? So perhaps there's a role for journalism as a professional resource that goes out and finds the other facts that are missing and adds them into the equation to get the complete truth.

This is something that people have been talking about for a very long time. This is not an Internet problem. Walter Lippmann was writing about this stuff in the 1920s.

And he had these interesting ideas about what journalism could be, but he was also concerned that it wouldn't turn out that way, because the business model of newspapers doesn't support this kind of work. And if the news media isn't going to go out and find the additional facts for me, my question is, can my computers do that?

Can the Internet give me the facts that I need in order to objectively understand reality? And so that relates to the third part of our talk and our call to arms. Hackers have an independent view of reality that's somewhat outside of mainstream thinking, and that enables you to see things with different eyes. There are interesting capabilities, and I'm going to talk about some examples of them, that might be useful to people to help combat deception and disinformation.

I think also we're tool makers and breakers, so we're not entirely cynical about the value that new tools can bring. So I'm going to focus on four areas. One area, and it's just something that I need to mention, is to model the adversary's capabilities and neutralize them. I've mentioned that several times.

There's been tremendous work over the years here at DEF CON in the Misinformation Village. There's also this thing called the DISARM framework, which is essentially a kill chain for Internet botnets and deception operations. This is fantastic work. We love it.

I want to endorse it here. There's lots of things that people can do in that area. Because there's so much going on in that space, we're not going to delve deeper into it.

We're going to focus on some other stuff, particularly building tools for information triangulation: knowing when questions have been raised about what we're reading, and identifying and collecting that network of experts you can use to determine whether or not something is real. A really simple example I want to include, because it was the first one, is WikiScanner. It was a project done by somebody in this community; it was a small, focused project, and it had interesting consequences, so it shows that you can do some hacking over a few weekends and make something useful. WikiScanner looked for anonymous edits to Wikipedia, checked the whois information associated with the editing IP address, and identified situations where people were editing Wikipedia anonymously from the IP blocks assigned to large organizations. And so you can kind of see what narratives were being pushed in Wikipedia by certain large orgs. There were certainly interesting results that came out of that analysis.
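For a sense of how small a WikiScanner-style project can be, here's a rough sketch that pulls recent anonymous Wikipedia edits from the public MediaWiki API and asks RDAP which organization owns each editing IP. The endpoints are real public services, but the error handling and rate limiting you'd want in practice are omitted.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"


def anonymous_edits(limit: int = 25) -> list:
    """Recent edits made by unregistered users; their 'user' field is an IP address."""
    params = {
        "action": "query", "list": "recentchanges", "format": "json",
        "rcshow": "anon",                  # only anonymous (IP) edits
        "rcprop": "title|user|timestamp",
        "rclimit": limit,
    }
    return requests.get(API, params=params, timeout=30).json()["query"]["recentchanges"]


def owning_org(ip: str) -> str:
    """Look up the registered network name for an IP via the RDAP bootstrap service."""
    data = requests.get(f"https://rdap.org/ip/{ip}", timeout=30).json()
    return data.get("name", "unknown")


for edit in anonymous_edits():
    print(edit["timestamp"], edit["title"], edit["user"], "->", owning_org(edit["user"]))
```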

By the way, it's mothballed. I'm sorry. Somebody else made a thing called Wiki Watchdog. The source code is available for that, but it's also mothballed.

So this is a thing anyone in this room could pick up and do something with immediately, and there are interesting results to be found. Sticking to the topic of Wikipedia briefly, the idea behind Wikipedia is that with enough eyes, all bugs are shallow, right? So everyone's reading this article, and they're removing fake things from it, and so we get this objective truth.

The problem with it is that when you visit a page, it might have been edited two seconds ago by someone pushing an agenda, and you can't really tell, right? So I wrote a paper in 2006 in which I suggested that we highlight passages that are new so that when you're reading the article, you can kind of tell which passages are less trustworthy because they haven't gone through that editing process. There were some academics that built a thing called Wikitrust.

which implemented a very similar set of ideas, and with some added analytics underneath, and it was usable for a while. So this is kind of an interesting thing, and this is also mothballed. I believe in this set of ideas, and there's work to be done in this area.
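In that spirit, here's a small sketch of a weaker version of the idea: rather than scoring every passage, just ask the MediaWiki API when an article was last edited and warn the reader if it's very fresh. The 24-hour threshold is an arbitrary illustration, and it assumes the article exists.

```python
from datetime import datetime, timezone

import requests

API = "https://en.wikipedia.org/w/api.php"


def hours_since_last_edit(title: str) -> float:
    """Hours since the most recent revision of a Wikipedia article."""
    params = {
        "action": "query", "prop": "revisions", "titles": title,
        "rvprop": "timestamp", "rvlimit": 1, "format": "json",
    }
    pages = requests.get(API, params=params, timeout=30).json()["query"]["pages"]
    rev = next(iter(pages.values()))["revisions"][0]
    edited = datetime.fromisoformat(rev["timestamp"].replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - edited).total_seconds() / 3600


title = "Deception"
age = hours_since_last_edit(title)
if age < 24:  # arbitrary freshness threshold
    print(f"'{title}' was edited {age:.1f} hours ago; recent changes may not be vetted yet")
```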

But I also like the high-level concept that maybe my computer should tell me when I should be careful about what I'm reading, because some objective criteria suggest that the information might not be reliable. Right, so I want to return to some of the core information resources that inspired the creation of the web, because there's a whole lot of ideas in here for things that we don't have today, and it's a way of seeing the negative space in the internet. The first resource here is a paper called "As We May Think" by Vannevar Bush. Vannevar Bush was the guy that ran all defense-related research during World War II.

So he was an incredibly important guy. He wrote a paper in the 40s, which might be the most important academic paper of the 20th century, and it's about using microfiche to link documents together and create knowledge management systems. This guy's got a camera on his head because he wanted to take screenshots of the books and documents he was reading, so he imagined having a camera on his head to do it. It's weird tech, but the ideas are super interesting. And he had a lot of influence over two other people who are incredibly important. One is Douglas Engelbart.

Douglas Engelbart gave the Mother of All Demos, in which he demonstrated things like mice and modems at Stanford University in the 60s. He wrote a book about augmenting human intelligence.

And one of his sort of precepts is that computers will always augment human intelligence better than they can replace it. And that is an idea that I think makes sense to remember these days, given everything that's going on with AI. And also, I think social media companies sometimes forget that the purpose of computers is to make us smarter. If you forget that, you will make the wrong things. So Ted Nelson is the guy who coined the term hypertext.

The term first appears in print in this book, which is called Computer Lib. It's an extra-large book that's handwritten and illustrated like a zine. It's a really interesting document, and you can buy a copy of it.

They've got it printed again. A scholarly read through these three resources will reveal a whole lot of ideas that these people had for things that the internet should do that the internet does not yet do. Late in Engelbart's life, he was sort of frustrated with the web and felt like it didn't achieve its potential, and Ted Nelson is still ranting about that on YouTube, and you can find him doing so.

So if we look at some of the stuff that these guys did: Xanadu is Nelson's project, and Hyperscope is something that the Douglas Engelbart Institute has been working on to try to implement some of his ideas. There's a bunch of things in there that we don't have today, or maybe we do, but only in narrow, limited contexts, right? So it's interesting to think about what it would take to make this a part of the Internet.

You know, why don't we have it? In the context of deception and counter-deception, I'm particularly interested in backlinks. All these guys wanted backlinks. A web document can link out to another document from a phrase. In their systems, when you're reading a document, you can also see which people link in to it and from what phrases.

So you can see if someone built upon the ideas that you're reading, and you can also see if someone is criticizing the thing that you're reading. Why don't we have that on the Internet? Well, the answer is that it's a content moderation problem: if anybody could annotate a web page in any way that they wanted, we'd have a bunch of spam and abuse, right? So we need to think about the tools that we've built for managing content moderation and see if we can couple them to backlinking in order to create something that's useful. There have been some interesting projects over the years that played around with backlinking, either in narrow contexts or to varying degrees of success.
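To show how simple the underlying data structure is, here's a toy backlink index of the kind Bush, Engelbart, and Nelson were pointing at: for any document, record who links in and from what phrase. The URLs and data model are purely illustrative; the hard part, as noted above, is moderation, not the index.

```python
from collections import defaultdict

# target URL -> list of (source URL, anchor phrase) pairs pointing at it
backlinks = defaultdict(list)


def add_link(source: str, phrase: str, target: str) -> None:
    backlinks[target].append((source, phrase))


add_link("https://blog.example/rebuttal", "this claim is wrong", "https://news.example/story")
add_link("https://paper.example/followup", "building on the result in", "https://news.example/story")

# When rendering https://news.example/story, a reader could also see what points back at it:
for source, phrase in backlinks["https://news.example/story"]:
    print(f"linked from {source} via the phrase '{phrase}'")
```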

I think a key thing here is that all opinions are not created equal. There are certain opinions that matter to me; again, I've curated my experts. Those are the ones that I want to see. So when I choose to follow somebody on social media, I'm indicating that what they say is relevant to me, even if I don't agree with it. So I've selected that group.

It would be useful to me if I could see any time I'm viewing a document if those people have ever commented on it. I'd love to build that. A challenge that we face is that in order to build and innovate on top of our social media systems, we need systems that are technically, financially, and culturally open to innovation. And unfortunately, many of the systems that we collectively use fail at one or more of those criteria, and therefore it makes it difficult for people to create new things that sit on top of them.

And we need to create new things in order to make the internet better. So I've talked about curating collections of experts. There are some interesting protocols that exist that people have built over time for endorsing people.

Like, I really like LinkedIn endorsements, not as a tool, but as a concept. You know, you can say, I follow this person and this is why, right? But unfortunately, that's a closed system.

Some of these things out there aren't machine readable. I think it would be cool if, when I'm following somebody on social media, I could say why, and that endorsement sticks to them if they accept it, for as long as I continue to follow them. And if we did that, then I could do this: I could ask, who do the people I follow follow? Who is endorsed for this particular thing?

And I could get back a list of a bunch of people who are endorsed for that particular topic, and I can ask, what are these people saying about this thing? It gives me the ability to identify the reputation of people on the internet and utilize that in constructive ways to help me figure out what's real. So I think there's lots of work to be done in that space. There's lots of open potential there.
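Here's a rough sketch of what that query could look like if follow and endorsement data were machine readable; the data model is invented for illustration, since no platform exposes endorsements this way today.

```python
from collections import Counter

# Hypothetical, machine-readable follow and endorsement graphs.
follows = {
    "me": ["alice", "bob"],
    "alice": ["carol", "dave"],
    "bob": ["carol"],
}
endorsements = {                     # endorser -> {endorsee: topic}
    "alice": {"carol": "economics"},
    "bob": {"carol": "economics", "dave": "malware-analysis"},
}


def endorsed_for(topic: str, viewer: str = "me") -> Counter:
    """People endorsed for `topic` by the accounts the viewer follows, ranked by count."""
    ranked = Counter()
    for followee in follows.get(viewer, []):
        for endorsee, t in endorsements.get(followee, {}).items():
            if t == topic:
                ranked[endorsee] += 1
    return ranked


print(endorsed_for("economics"))     # Counter({'carol': 2})
```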

Another area is LLMs. LLMs have some built-in biases, there are challenges dealing with them, and they hallucinate. So there are lots of caveats to what I'm saying here, and I know what they are. But at the same time, an LLM is a dispassionate computer, right?

So can it go out and find those missing facts for me that aren't included in a narrative by reading all the things, right? And dispassionately present them to me. That's a potentially interesting thought and I think it's harder than it sounds. But you know there are other things we could do with LLMs.

One of the things I mentioned on the last page is that, if you're a professor at an accredited university in a particular subject area, your university website indicates that this person is the professor of this subject, right? But your social media profile is not machine readable, and your university website is not machine readable. I can't tell automatically that you have this expertise.

But an LLM could go read every university website, find all the professors, find all their social media profiles, and create a machine readable system that allows me to, again, ask this question, who knows lots about economics? Here they are, right? And so I think that those are interesting possibilities.
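As one hedged sketch of that idea, here's what the extraction step might look like, using an LLM API to turn a faculty page into structured records. The model name, prompt, schema, and URL are all illustrative assumptions, and the output would still need verification before you trusted it.

```python
import json

import requests
from openai import OpenAI  # used here purely as an example client

client = OpenAI()  # assumes an API key is configured in the environment


def extract_faculty(faculty_page_url: str) -> dict:
    """Ask an LLM to turn an unstructured faculty page into machine-readable records."""
    html = requests.get(faculty_page_url, timeout=30).text
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                       # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": ("Extract the professors listed on this faculty page. Return JSON "
                         "like {\"faculty\": [{\"name\": ..., \"title\": ..., \"field\": ..., "
                         "\"profile_links\": [...]}]}.")},
            {"role": "user", "content": html[:50000]},   # crude truncation of huge pages
        ],
    )
    return json.loads(resp.choices[0].message.content)


# records = extract_faculty("https://economics.example.edu/faculty")  # hypothetical URL
```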

LLMs are really good at turning unstructured human information into very structured, machine-readable information, and we can do things with that. Greg? So beyond these technical solutions, we have to think about how we can scale this for humans, right? One idea is teaching media literacy in schools, to try to have better-informed consumers of information.

Media Literacy Now, who I have highlighted here, has a nice depth of resources from K through 12 for classroom instruction. And the other idea is to emphasize critical thinking, both in school and in our day-to-day lives. And this is from, I can't quite see it, Justin, right? The idea is that there are certain questions we can ask that help us probe and see through deception. One is, who benefits? Where did this originate? There are 48 questions here, and they're very thought-provoking and useful. Yeah, so we've put a bunch of references in the slides which look at this from a bunch of different perspectives. I'm sure that there are ideas that you have that relate to the principles we've articulated that are different from the ones that I've come up with. And that's what I'm excited about and why I wanted to talk to all of you about this.

So hopefully somebody out there has been inspired to work on something. And there's lots more to read about all of this stuff.

These are our email addresses. We'd love to hear from you. We're going to be hanging out outside after the talk is over, so if you want to talk to us or poke at some of the things we're saying, we'd love to talk to you more. And again, I really appreciate your time and attention this morning and the fact that you made it here on the last day of DEF CON for a morning session.

Thank you.