Transcript for:
Mastering Blender's Eevee Rendering Techniques

In this video, you'll learn how to get beautiful results from Blender's other render engine, Eevee, which, contrary to what you might have heard, is not just for stylized rendering. I spent the last few months studying the documentation to learn how to get results similar to Cycles, but in a fraction of the time, which is why Eevee is so worth learning. You can iterate faster and produce 4K animations in minutes instead of days. In this video, you'll learn the theory of how Eevee actually works while building a practical interior room. This is the best time to learn it: a film rendered with it just won an Oscar, and the engine was recently rewritten to remove several of its previous limitations. And as the demand for video content keeps growing, artists who can deliver animations in minutes rather than days will become more and more valuable. So, if you're ready to learn everything there is to know about Eevee, let's begin. With a fresh scene open, we're going to create something a little more interesting for Eevee to work with: an empty room. And we're going to make it very quickly, in about 30 seconds. I'll grab my cube, move it up, and scale it out like so, then scale it along one axis to make it nice and long. In Edit Mode, I'll create six loop cuts along the long axis, then two loop cuts vertically; these will become our windows. I scale them out like so, then in Face Select mode I select these three window faces, hit X, and delete faces. And that's our room. Then I move my camera inside by hitting Ctrl Alt Numpad 0, and I'll make my camera a 32mm just so it's a little wider. Next I select my default lamp all the way up here, move it down, make it a Sun lamp, and set the strength to 15. Back in my room, I go into Rendered view mode and we should see light coming into our room. Lovely.
Now, this is everybody's first experience with Eevee: light bleed. And this is why a lot of people feel like giving up, because it feels like, "Hey, Eevee, if you can't even get this right, why should I even play with you? It's a solid box. Why is light clipping into an object?" There are a couple of reasons for this. The first is that you shouldn't have paper-thin geometry, ever, even in Cycles, if you've got something inside it. We can fix that by adding a Solidify modifier, which adds a little thickness to these walls, and you can see it's better. I could improve it further by increasing the thickness, but I've still got some light coming in, and I'll never fully fix it this way. Even if I made this super thick, a castle-thick wall or something, you're still going to have some light coming in. So why is this happening? The thing to remember about Eevee is that it's doing a lot of clever hacks to keep its render times very low. Unlike Cycles, where things just work because it's physically accurate, a real simulation of a real room with light coming into it, Eevee is a bunch of technologies stacked on each other's shoulders wearing a trench coat, pretending to be realistic. It's all fakery. But that's good, because that's what enables the ultra-low render times that matter so much. You just have to understand what's going on. So, under the Eevee Shadows settings there's a setting called Steps, and in my opinion they should just call it "accuracy," because that's exactly what it is. This is an oversimplification, but essentially Steps is as if each lamp had a number of subdivisions, checkpoints between it and the surface. So we've got our Sun lamp here, and then one, two, three, four, five, six subdivisions: that's six steps.
And at each of these steps, or subdivisions, it's asking: is there an obstruction? Do I need to calculate any shadows? So: is there any obstruction at this checkpoint? No, there is not. No shadow needed. What about this point? No. What about this point? Yes. Now we need shadow, and shadow is calculated from here on for the rest of the ray. But from here to here, it never recognized that there was any obstruction. That's why there's bleed specifically in those areas along the top: it's letting light slip through between checkpoints. So, as you can guess, if you increase the number of subdivisions from six to, say, 10, you can see the bleed is drastically reduced. There's still a little up there, so let's try 12: it's almost gone. And you can go all the way up to 16 if you want. This will increase render times ever so slightly; with Eevee it's fractions of a second, but that adds up the more you change from the defaults. You can see we've now erased it entirely. Now, importantly, this is related to the size of your lamp, or I should say the softness of it, which comes from the size. For a typical lamp, that's the Radius value; for the Sun lamp it's the Angle, but it's essentially the same thing. The larger the lamp, the softer your light, and the more of that bleed you're going to get. And you can see that even with the maxed-out value of 16 steps, there's a point where bleed will come into the scene. That's just a limitation of how Eevee works. But you would never really have incredibly soft shadows like this in Eevee anyway; that really is a weakness of real-time renderers. They can't do ultra-soft lighting everywhere. So, that's it. Fixed the bleed. Well done.
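That checkpoint idea can be sketched in a few lines. This is a toy model of the explanation above, not Eevee's actual shadow-map code:

```python
# Toy model of shadow "Steps": march N evenly spaced checkpoints between the
# lamp and the shading point, and ask "is there an obstruction here?" at each
# one. A wall thinner than the gap between checkpoints is never detected,
# which is exactly the light bleed we just fixed.

def ray_is_shadowed(wall_start, wall_end, ray_length, steps):
    """True if any checkpoint lands inside the occluding wall."""
    for i in range(1, steps + 1):
        t = ray_length * i / steps        # distance of this checkpoint along the ray
        if wall_start <= t <= wall_end:   # checkpoint is inside the wall
            return True                   # obstruction found -> cast a shadow
    return False                          # wall slipped between checkpoints -> bleed

# A 0.6-unit-thick wall, 2 units along a 10-unit ray:
print(ray_is_shadowed(2.0, 2.6, 10.0, 6))    # False -> light bleeds through
print(ray_is_shadowed(2.0, 2.6, 10.0, 16))   # True  -> the shadow is caught
```

This is also why the Solidify fix helps: a thicker occluder is more likely to contain a checkpoint at any step count.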
The other thing some of you might have noticed is: hey, is that what I think it is? Is that noise in my Eevee render? Yes, that's noise. Just like Cycles, Eevee also has samples. You might be thinking, well, why? I thought the whole point of moving from Cycles to Eevee was to get away from noise. Well, there is noise, but the good news is it's only applied to specific areas. Unlike Cycles, where the entire image is grainy and it works pixel by pixel to find the correct value, in Eevee it's only in select areas. In this case it's only happening from here to here, where it's trying to calculate the softened edge of our light source. The rest of it, here and here, is completely, 100% noise-free. Imagine it like a feathered edge, which again is related to the size of the lamp: the smaller the band of softened shadow, the less noise; the larger you go, the more and more noise you're going to get. But let's say we need a soft Sun lamp and we don't like how noisy it is. Just like with Cycles, there are the viewport samples, which are what we see in the viewport, and the samples we see when we actually hit render, and that's 64. So, if you set them both to the same value, you can actually see how it's going to look at render time. Now, that's obviously a lot better, and honestly pretty clean. But let's say you had a client or a boss who's like, "I can see the grain in the shadows. I don't like it." Obviously, you could increase the samples, say double them to 128, and that'll be a lot smoother. The only problem is that's going to double them for everything across the scene: the other areas in Eevee that also use samples, like reflections and light bounces, all the stuff we're going to get to later.
And across the board, it's going to increase those samples, so your render times go up. What you can do instead is increase the samples for just this specific area, this specific problem of calculating soft shadows. That's the Rays setting. You can think of Rays as a multiplier on the samples for just this one thing. So if I set it to two, it's now effectively 64 × 2 = 128 samples, but only for this specific problem. It's not going to apply to reflections and everything else when we get to them later. Very handy setting. Think of Rays as a targeted way to reduce noise in specific areas. All right, that's the boring stuff out of the way. Let's get into reflections. First, let's add an object so we can actually see something reflect. Our good old monkey will work just fine, and let's add a Subdivision Surface modifier to make it nice and pretty. Then let's give it a color so we can see it once it's reflected: nice and red. Then, with my room's material selected, I set the roughness all the way to zero, so I should see a mirror-like reflection everywhere. And I can't. So what's going on? In order to see reflections or light bounces, you need to go to your Eevee settings and find Raytracing down there. It's turned off by default. For some unknown reason, it needs to be turned on. And look at that: night-and-day difference, the entire scene is different. I don't know why it's off by default; I can only guess they wanted to keep it really lightweight and didn't want to crash people's computers or something. But with it off, it basically looks terrible, and most people's first impression of Eevee is, "Let's see how this looks in Eevee. Wow, Eevee looks terrible. I prefer Cycles."
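Going back to the Rays setting for a moment, the multiplier behavior, as I understand it (the naming here is mine, not Blender's API), is just arithmetic:

```python
# Rays acts as a per-feature multiplier on the sample count: raising it buys
# extra shadow samples without making reflections, fast GI, etc. pay as well.

def effective_shadow_samples(render_samples, shadow_rays):
    return render_samples * shadow_rays

# Doubling global samples vs. doubling shadow rays, same shadow quality:
print(effective_shadow_samples(128, 1))  # 128, but every other effect pays too
print(effective_shadow_samples(64, 2))   # 128, and only the shadows pay
```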
If Raytracing were turned on by default, I feel like people would have a much better first impression: "Okay, Eevee is pretty good." Anyway, you need Raytracing turned on. Now, we will get to these raytracing settings, because they're very important. But first, let's address the elephant in the room: what is that black ghost appearing behind our monkey? It's so annoying when you switch on Eevee, look at the scene, and there are weird reflection artifacts everywhere. It's not just here; there's also a weird one on that wall over there. And if I move my camera up, look what happens to the reflection in the floor: it's supposed to be reflecting the ceiling, and instead it's slowly vaporized as I pan down. So, what's going on? The reason this is happening is that Eevee only sees in screen space. Screen space is a fundamental concept for understanding Eevee: it means the engine can only see what is directly in front of the camera. As the camera moves, objects in front of the camera erase everything behind them, and the effects of this show up in your reflections and your bounce lighting. For proof, look at the bounce lighting coming off this wall here and reflecting over here. Great, bounce lighting everywhere. But watch what happens as I move my camera to the right: the room goes dark. See? Back and forth: lights on, lights off. Super annoying. Compare that, of course, to Cycles: it doesn't matter where the camera is. I can move around the entire scene, look behind the camera, and still see reflections; I can see my monkey head reflected in the wall over here. In Eevee, it's dark. It has absolutely no idea what's there. And this took me a really long time to understand, because I kept thinking: it's a 3D scene.
The mesh is there. Why can't it understand what's behind the camera? But this concept is exactly why Eevee can be so lightning fast: it essentially flattens everything down onto a 2D plane and only looks at that. The moment I pan this way, it has amnesia like you wouldn't believe. It has completely forgotten that there was a patch of light there bouncing light into the rest of the scene. That's the same limitation shared by all game engines, and it's fundamental to understanding why there's a black ghost behind the monkey head. Because when you think about it, the monkey head is blocking the wall and the floor behind it. Imagine an outline drawn all the way around it: everything behind that outline is a black void. It has been erased. So if my camera is here, my viewpoint is here, and the monkey head is traced onto the background, everything inside that silhouette is a black void looking out into absolutely nothing. So the reason this is happening is that this section of the floor is reflecting what's behind the monkey head, which, as you remember, is nothing. It doesn't know there's a wall behind it, which is crazy to think about, because it's right there. It's there, and then it's not; it's forgotten immediately. That's why there's this ghosting effect, and that's the same reason we've got the monkey head outlined over there: it's reflecting what's behind the monkey head. It's also why, as our camera pans down, it forgets there's a ceiling up here. Right here you can see the edge: it's reflecting up to the point of
the edge of my screen, and beyond that it's erased. It's also why, just to stay on this point a little longer, you can see this section where the monkey has broken apart: we've got a piece here and a piece here, with a weird split between them. The reason is that, look, there's a gap: it can see this part of the monkey and it can see the ear, but it has no idea they're actually connected, so it puts a gap in their place. Not desirable, but thankfully there's a way to fix it. There's a way to fix all of this, and it's called probes. Or, as I like to call it, probing the scene. Hit Shift A, and down here you've got one called Light Probe. There are three options here: Sphere, Plane, and Volume. Sphere and Plane are used for reflections and refractions, so for glass, chrome, or anything reflective you'd use these. Volume is for light bounces. We're going to use all of them, but first let's add a Plane. What it has added, if we have a look at it, is kind of like a little stack of pancakes. You scale it to fit the object, and it adds a more accurate reflection just within that stack: any object inside gets a special reflection. Now, annoyingly, by default it places the probe, even though I put the 3D cursor on the floor, just below the surface of where it should be. Super annoying, and honestly probably the cause of a lot of people's scenes breaking without them realizing it. You can check whether it's working correctly by selecting the probe, going to your probe settings here, and under Viewport Display clicking Capture.
Now, if you don't see anything, that means the probe is underneath an object and is reflecting the bottom of the plane upwards. Move it up slightly and you'll reach a point where the scene comes into the reflection; that means it's positioned correctly. Turn Capture off, and we now have a fixed floor: you can see we no longer have that problem. What a probe is doing is like having another camera at this point, looking up and capturing everything from that direction. So as I pan around, it's still screen space, it still doesn't know what's behind things, but because we've got this extra little camera positioned down here, it can at least capture that data and apply it to objects within the little pancake stack. Probes do add to your render times. I don't know the specifics; I haven't really done any testing on it, to be honest. But you don't want to just throw them around your scene everywhere; it would be time-consuming to set them up anyway. Just look across your scene, and if you see issues, any artifacting or anything like that, that's where you'd use one. Like you can see on the wall there, it hasn't fixed this wall: the monkey head outline is still over there, because we don't have a probe on that wall, so it's still reflecting what's behind the monkey head, which is absolutely nothing. If that were a big issue, maybe we had an actual mirror in the room that needed to show exactly what was behind it, and it was annoying us in an animation, we could add another probe over there: Light Probe... whoops, wrong one, Light Probe Plane, and rotate it to fit that wall. But it doesn't annoy me, so that's fine. But let's say it wasn't a plane. Let's say it was something spherical, like a glass of water, or a monkey head, right?
That was reflective. Let's set it up like this, and make it metallic and reflective. Okay, you can see this looks okay, pretty plausible. But as I move my camera in, look what's happening: it's slowly turning gray. Now, can you guess why? Because it doesn't know what's outside the camera view. So, literally, as you get closer, things are removed: you can see there's light here, and light being reflected in the monkey head up here, but the moment that little patch of light moves out of view, the reflection gets darker, because it's been erased. As I move further and further in, objects keep vanishing from the reflection until there's nothing left to reflect at all. It's literally reflecting a blank void right now. So, just like before, you'd add a probe, but not a Plane probe: a Sphere probe. Position it roughly over the object you want, about that size, and now, as I move in, I don't have that problem. Now, something to note with probes: a probe cannot reflect another probe. You can see it here in the plane. If I just move this back a little so we can see it... okay, there we go. This looks like chrome, and then this guy is its pale-faced alter ego: completely white and diffuse, because it cannot capture reflections. It doesn't matter that there's a probe here; the issue is that there's a probe on this one. Actually, if you remove that probe, the result is more accurate, because it switches to a different reflection method, but then you're back to the disconnected monkey heads and all the other problems with the scene. So it's just something to keep in mind.
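All of these artifacts, the black ghost, the split monkey, the graying sphere, come back to the same screen-space lookup that probes are patching over. Here's a tiny sketch of that limitation; this is the general shape of any screen-space effect, not Eevee's actual code:

```python
# A screen-space effect can only read pixels that currently exist in the
# frame. If a reflected ray lands outside the frame (or behind a foreground
# object), there is simply no data, and you get a void unless a probe
# supplies the missing capture.

def ssr_lookup(uv, frame_buffer):
    """Return the color stored at a 0..1 screen coordinate,
    or None if the reflected ray points off-screen."""
    u, v = uv
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None                  # off-screen: the data doesn't exist, "amnesia"
    h = len(frame_buffer)
    w = len(frame_buffer[0])
    x = min(int(u * w), w - 1)       # clamp u == 1.0 to the last column
    y = min(int(v * h), h - 1)
    return frame_buffer[y][x]

# A 2x2 "frame": these four pixels are all the effect knows about the world.
frame = [["ceiling", "wall"],
         ["floor", "monkey"]]
print(ssr_lookup((0.25, 0.25), frame))   # on-screen, so it can be reflected
print(ssr_lookup((1.30, 0.50), frame))   # None: the ray left the frame
```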
One more probe limitation: if you had a really reflective surface down here and then a chrome object, it just can't do it. But it's generally not an issue if you know how to tweak the scene. Actually, this is an interesting thing: video games are built around these limitations. You usually don't have a character standing in front of a mirror, for example, because the engine can't see the face of a character if the camera is looking at their back. So games just didn't include that, until very recently, thanks to real-time ray tracing, which is actual ray tracing, not screen space. Okay. So that's reflections, and next we're going to talk about Volume light probes, because those handle bounce lighting. But before we can talk about that, we need to talk about light bounces in general. Remember when we turned on Raytracing over here, I said we were going to discuss some of these settings, because they're important for understanding how light bounces work. When we turned on Raytracing, it wasn't at all obvious, but Blender actually used two separate technologies to make this work. The first is ray tracing, which is what it says there, and it's used primarily for reflections: this part right here. The light bounces, however, are handled primarily by a separate technology called Fast GI Approximation, GI standing for global illumination. So the light bounces are this section right here. This is the reflection, the bit that's ray traced, and this is the bit that's fast GI. Now, interestingly, you don't actually need fast GI. You can turn it off if you want, and force Eevee to use ray tracing for everything, the reflections and the light bounces. And when I learned that, I was like: yes, that's just what I want. I want ray tracing for everything.
Ray tracing is the gold standard. It's accurate. It's RTX On. It's in all the adverts. This is what you need to get realism in Eevee; don't bother with fast GI, right? But then I noticed, looking at it: I don't know, it's kind of dark, and also kind of muddy. Look at the bounce lighting going on in the back here; it's this muddy, weird thing. It took me a while to figure this out, and actually I was working with Blender Brit, who was doing some research for me on this; it took him a while to figure out as well. But fast GI is actually required to get anything close to realism in Eevee. And the reason is that ray tracing is good for sharp reflections and terrible at rough bounces. This surface here, our ground, has a roughness of zero, so it's like a mirror: completely clean and perfect. At a microscopic level, this surface is straight, and that's pretty easy for any engine to calculate: shoot a ray from the point of view of the camera (or the other way, it doesn't really matter), hit a little piece of surface, boing, and ask what it should look like. It should look like that over there. Pretty easy. But what happens when you increase the roughness? Now this surface that used to be flat looks, at a microscopic level, like this. So now if a ray comes in, it could bounce in any number of directions. And the rougher the surface gets, the more those microscopic facets scatter rays every which way. What that means is that to get an accurate picture, when the light hits the surface of this monkey's head, it could go up here, it could go over here, any which way.
So, to get those little spots of light appearing all over the room to correctly resemble how the light should look, we'd need thousands of samples, or at least hundreds, which is how Cycles is able to work. That is what Cycles is doing. When you turn on Cycles, every single pixel of the frame spends 4,096 samples, or at least 1,000 in the viewport, to reach a sensible-looking result. And even with a thousand, you can see it's spotty everywhere, but it works because it's throwing so many samples at the problem. Eevee is trying to be efficient and working in screen space, and it just can't get there. That's why these walls here are so dark: there aren't enough rays. In fact, if you turn off denoising, you can actually see the rays as they hit the wall, and there just aren't enough of them. There are some, but with denoising turned on to average it all out, it becomes almost nothing, nowhere close to how it should look. That is why Eevee's primary developer, Clément, included Fast GI Approximation, which, although an old-school rendering technique, does a really good job. Without getting too technical, it is a form of ray tracing; it gets really technical, and I don't fully understand it myself, but it's a different methodology. It uses a hemisphere at each hit point, working in slices, determining which direction the light comes from and whether there's an object in the way. Very technical, but essentially it's a lot more optimized, and it's designed for rough surfaces. So this Threshold value here is the key decider that tells Eevee when to use ray tracing and when to use fast GI approximation.
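That routing decision is simple enough to sketch. A minimal sketch, assuming the threshold works as a plain cutoff on material roughness (whether the boundary value itself counts either way is my assumption):

```python
# Route each surface to one of Eevee's two GI techniques based on roughness.
# 0.5 is the default threshold we'll leave in place.

def gi_method(roughness, threshold=0.5):
    # Sharp-ish surfaces: screen-space ray tracing (good at crisp reflections).
    # Rough surfaces: fast GI approximation (built for diffuse bounces).
    return "ray_tracing" if roughness <= threshold else "fast_gi"

print(gi_method(0.0))   # mirror floor -> ray_tracing
print(gi_method(0.8))   # matte wall   -> fast_gi
```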
It's pretty nerdy to understand, but this threshold of 0.5 means: if a material has a roughness value over 0.5, don't use ray tracing, use fast GI approximation. So as we drag this threshold up, you can see the reflection start to come in on rougher surfaces, because they've now switched over to ray tracing. Now, honestly, this default value of 0.5 works fine. I don't think you need to tweak it. If you had a whole scene going on and you really wanted to play with things, you could try setting it to 0.1 or 0.2, which means anything over 0.1 or 0.2 uses fast GI. But look, I think 0.5 just works for most things; no need to overcomplicate it. Okay. But there are four values that really do matter: the resolution for ray tracing, the resolution for fast GI approximation, the steps, and the distance. The resolution can best be illustrated by first turning off fast GI and denoising, so we can actually see the pixels of ray tracing at work, and then changing the resolution to, say, 1/16. This is what's actually being calculated for the ray tracing. At 1/16 it's really optimizing: it chunks every 16 pixels on screen into one, and then denoising blurs it out to create this result. And that's a lot more optimized than what it was, but obviously it limits how sharp the reflection can be. This sharp floor reflection, if we leave the resolution at 1/16, that's as sharp as it's ever going to get. So if you need it sharper, you have to increase the resolution, and you get something a little nicer; at 1:1 it's obviously completely sharp, like a mirror. Okay, so that's easy to understand. So I might go 1/4 for that.
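Those resolution fractions save more than they might appear to. Since each 1/N setting makes an N × N block of pixels share one traced ray, the ray count falls with the square of the divisor. Illustrative arithmetic, not Eevee's actual dispatch code:

```python
# One traced ray per NxN pixel block: ray count drops by N squared.

def traced_rays(width, height, divisor):
    return (width // divisor) * (height // divisor)

full    = traced_rays(1024, 1024, 1)    # one ray per pixel
reduced = traced_rays(1024, 1024, 16)   # one ray per 16x16 block
print(full, reduced, full // reduced)   # 256x fewer rays at 1/16
```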
And then the fast GI approximation resolution is similarly grouping pixels. Actually, if you want to see how it looks, set your viewport samples to one, so it stops at one sample, and you can see the splatter on the wall there. Then change the resolution, 1/8, 1/16, and you can see the size of those fast GI spots, this spray appearing over everything. That's how it's working. Okay, but I'll probably leave it at 1/4; that looks like it might get me close. Let's set the samples back to 64 so we can see how that wall will actually look. That's good. Now, the other values that matter are the steps and the distance. So, the steps can best be thought of not as accuracy, but as energy. That's not the correct technical term, but watch what happens if I double this from eight steps to 16: there's a lot more light hitting our walls. This is what it was before, and this is what it is after. You could call it accuracy, similar to the shadow Steps setting from earlier; it probably is accuracy, but I think of it as how much light, how much energy, actually gets through, because as you increase it, you get much brighter-looking bounce lighting coming off that object. But this will add to your render times. I think the default of eight isn't good enough; 16 is probably the lowest I'd go, but increasing it will increase render times. And the final value I mentioned is important: distance. You'll notice, predictably, that it just limits how far the light bounces can actually go. You can see the light radiating out here as I increase or decrease the distance. But you'd be wondering: why would we do that? This looks terrible.
Why would we not have it maxed out, set to zero for infinite, like we had before? Light just keeps going. What are we doing here? Why are we using distance? Well, believe it or not, this fast GI light bounce should be used sparingly. Most of your light bounces should come from a Volume probe, so let's add one right now: Shift A, down to Light Probe, and then the final one, Volume. This adds a box with a bunch of points in it, and we won't see any effect until we bake it. To do that, I go to the probe's data settings, and here you'll see Bake Light Cache. Let's do that. You can see at the bottom of the screen it says it's baking the lighting, and when it finishes, not much seems to have changed. But if you go to the probe's data settings, under Viewport Display, and click that, you should see little ping-pong balls. These little ping-pong balls show us what data is actually being recorded at each of these points. You can think of each point almost like an HDRI capture, a 360° capture of that point in space: what is the light, and which direction is it coming from? And you can see this point here, being close to the monkey head as it radiates, is capturing that. But it's quite hard to tell what difference it's actually making to the scene, and the reason is that ray tracing right here is so strong and so powerful that it's adding most of the light to the scene, drowning this out. So what we need to do is turn off ray tracing, because ray tracing has no impact whatsoever on the light probe: if we re-bake with ray tracing turned off, it produces the exact same result. But now that it's off, you can clearly see what's actually happening to the scene. And if I turn off my main light source as well, we're looking at just the raw data from our light probe.
Because if I hide that light, you can see there's absolutely no light coming into the scene whatsoever. So now it's a little easier to see what's going on: it's a capture of light frozen in time. And interestingly, if I add another object right now, like another monkey head, and move it around, you'll see it actually takes on light. So, although the bake is static, new objects added to the scene pick up that baked light, so you can move things around. And you can see that as it gets closer to our monkey head here, it's almost like there's a red light casting a glow onto it. It's actually kind of clever how this works. It's super optimal: there are no lights in here, it's just taking on the baked data from the points in the scene. So, as you can imagine, you want these points to cover the objects in your scene. What I typically do is go into wireframe shading mode at the top, and then, whether I'm working on a room or anything else, I scale the probe to generally fit the scene, so I've got points near my objects and wherever I want things to go. Now, generally speaking, you want squarish cells, because if you look, this is a rectangle from this perspective, which means I'd have more information vertically than horizontally. So, essentially, I just change my resolution here in my probe settings to get some squarish-looking cells; I might go five that way. And I've actually seen different scenes online where people use heaps of points, just filling the volume, and that's not recommended. For one, it takes an extreme amount of time to bake, and it doesn't really make much of a difference.
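That trick where a brand-new object picks up light from a static bake can be pictured as interpolation between the stored points. Here's a one-dimensional sketch; real probes store directional captures on a 3D grid, so this is heavily simplified:

```python
# The bake stores irradiance at fixed grid points; anything placed between
# two points just blends their values. No lamps are involved at shading time.

def sample_baked_light(grid, position):
    """Linearly interpolate the baked values stored at integer grid positions."""
    i = max(0, min(int(position), len(grid) - 2))
    t = position - i
    return grid[i] * (1 - t) + grid[i + 1] * t

baked = [0.0, 0.2, 1.0, 0.2, 0.0]       # a bright patch near the red monkey
print(sample_baked_light(baked, 2.0))   # right inside the glow
print(sample_baked_light(baked, 2.5))   # halfway out, still picks up some red
```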
You can think of the volume as kind of generalized light. It doesn't have to be too detailed, because most of the detail is going to come from ray tracing. Anyway, I've got a very long scene going this way, so I'm going to create more points along the Y-axis by increasing that value. Annoyingly, and this is the annoying part, every time you change the boundary and then change the resolution, the points keep moving around, so you kind of have to keep scaling and adjusting things. All right. And by the way, this isn't precise yet; I'm going to show you, when we get to world lighting with HDRIs, how quickly this breaks. But as a general scene with no world lighting outside, this is roughly what it would look like. Now, this isn't correct yet because we haven't baked it, so these points don't line up. Let's hit bake again. And you should see... oh, okay, we don't actually have any light in the scene, so that's important. Bake Lighting. And there we go. That was really quick, actually, suspiciously quick. But you can see, if we go down and turn off the data overlay, that we've got bounce lighting. And importantly, this is not screen-space dependent. As I move around, we don't get that lights-on, lights-off effect. Behind the camera we've got bounce lighting going up here, and it's no longer screen-space dependent, which is exactly what we want. So now, if we turn ray tracing back on, you can see it adds that extra layer of ray tracing on top. But now that we've got actual baked lighting, we don't need as much of it. The ray tracing is much more accurate than our light probes, but we only need it sparingly, as I said.
So, down in the distance field, what I would typically do is set this to around one or two. It obviously helps if you've got a whole scene going on with lots of objects, but generally I'd use something between one and two; it depends on your scene. You're just looking at the objects and going, yeah, I want more bounce in that area, so I'll increase it, and maybe I want that much on the wall. You're just playing with it a little to find the right value, because of course as the camera moves you will still have that lights-on, lights-off effect. As I go in and out, you can see right here, as I go back and forth, there's a little bit of lights-on, lights-off, but it's much more minimal than before, when this was set to zero and the whole room changed drastically. All right, so that's the basics of probes and how to set them up. Now let's talk about world lighting, and then I'll show you why this setup isn't precise enough, because this is exactly what happened when I was working on my scene. I had it set up, and then I thought, yeah, now let's make this look realistic; let's have some lighting coming in through the windows. You could use an HDRI, or we could just use a simple color at this point. So in my world lighting settings, I'm going to change this to a bluish color, and then give it a strength of about four to get some brightness coming in. And now we see something weird. This is not a great result: we've got bleed coming in through the top here, and the back wall is blue. It doesn't look very good. And if we did another bake, it wouldn't change anything; it's exactly the same. And I was like, this is ugly, this is terrible. Believe it or not, this actually took me two weeks to resolve.
It actually almost killed this tutorial, because I thought, look, if I can't figure this out, if I can't make a scene that looks good with world lighting, which is so important, there's no point using Eevee. It took me forever, and thanks to X and the people helping me out, I finally figured it out. First, we have to understand what world lighting is actually doing in Eevee. So let's turn off our volume, just hide it, and have a look. Okay. If you've got no volume in your scene and you've just got some world lighting like this, it's actually blasting everything with the same amount of light. Although it looks like how it should look, with blue lighting coming in through the windows, it's not actually coming in through the windows. You can show this by filling in the windows; I'll just do it to demonstrate. So this is our room with windows; if I create faces there, it's now a completely closed space, and we've still got blue light inside, which obviously shouldn't happen. You can prove that by switching to Cycles, where you'll see it go completely black. But Eevee is not taking any of the geometry into account when you're using world lighting. And this is a huge mistake I see artists make. I've downloaded scenes that people have set up, and a lot of people have this issue: they don't realize that world lighting or an HDRI isn't creating any shadow whatsoever; it's just blasting everything equally. So a volume probe actually helps us here, because a volume probe isn't just adding light. It's not a purely additive process; it's also blocking light from coming in. You can see it's now a lot darker, and it's how it should look. But we've got so much bleed going on with the world light; it's just creeping in through the walls. So we need a volume probe, but it's got this horrible bleed effect.
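The two behaviours described here can be written down directly: without a probe, Eevee applies world light at full strength everywhere; with a baked probe, each point effectively carries a sky-visibility term that blocks light the geometry should have stopped. A toy illustration (the numbers are made up):

```python
# Toy: ambient world lighting with and without a baked "sky visibility" term.
WORLD_LIGHT = 4.0  # strength of the blue world colour

def shade(sky_visibility, use_probe):
    """Without a probe, every point gets full world light, even indoors."""
    return WORLD_LIGHT * (sky_visibility if use_probe else 1.0)

# A point inside a fully sealed room sees no sky at all (visibility 0),
# yet without the probe it is still fully lit -- the bug shown in the video.
print(shade(sky_visibility=0.0, use_probe=False))  # wrongly lit
print(shade(sky_visibility=0.0, use_probe=True))   # correctly dark
print(shade(sky_visibility=0.4, use_probe=True))   # near a window: partial light
```

The bleed problem in the following paragraphs is then just this visibility term being sampled from probe points that sit in the wrong place relative to the walls.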
And this took me forever to figure out, because it's not documented anywhere. The Blender documentation doesn't explain it. Even on the Blender website, among the demo files there's an Eevee scene you can download, and even they get it wrong. This probe has to be specifically set up, and the points have to be meticulously placed to ensure this doesn't happen. The reason we have world lighting creeping in is the last point in your volume's array. The last point here, and the last point here, is what provides that shadowing effect, stopping the world lighting from coming in. So the last row of points needs to sit exactly at the wall. It's so precise, so finicky; you have to set it up meticulously. Essentially, you want to scale the probe so the points sit just inside the wall. Okay, so this is our wall here, right? This is the inner face, and this is the outer face, because the wall has some thickness, which is important: you need thickness in the wall. Then you scale it so these points are just inside of it. Okay, let's go all the way around and check. That's not quite right; let's scale that out. It's finicky, but it has to be set up correctly. Then you go to your world settings, give it another bake, and see how it looks. Okay, it's better, but you can see I haven't got it exactly lined up on the X-axis, so I'll move that over so the points are just inside that wall. Let's do another bake. Okay, that's pretty good. You can see on the roof here I've got a little bit of artifacting, this kind of grid effect, which took me a while to figure out. When you see a grid effect, it means the probes aren't quite properly placed.
Generally it means you want to shrink the probe ever so slightly, to bring those points closer to the surface of the face. Let's shrink that a little. As I said, it's finicky; it's my least favorite part of using Eevee, but you should be able to reduce it a little more there. Let's do another bake, and slowly, you can see, it's just about gone. Now, the other thing to understand, and the reason this actually works, is that these probes aren't actually calculated from the point inside the wall. Because, and this is the bit that took me forever to figure out, there's a surface offset and a search distance. Blender uses these to figure out that these probes shouldn't be inside the wall: it uses the search distance to find the nearest point outside the geometry and repositions the probe there. So because there's a search distance and a surface offset, it's positioning those points just inside the room. Now, interestingly, if the search distance was too high, like so, and we baked it... okay, it hasn't actually done that, but let's put the points more towards the middle of the wall, and I have to show this because people will run into this problem without a doubt. If a point is too far towards the other side of the wall, closer to that outer face, then when I do a bake, you can see it now looks horrible, because with the high search distance that point has jumped to the other side of the wall and is now sampling the far side. So we're getting a lot of world lighting. What you would do, again, is make sure the points are close to the correct side of the wall, just inside it, with a minimal search distance so they don't jump to the other side. Okay, it's still jumping to the other side. Let me check; this one is a little close over here.
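The search distance and surface offset combine like this: when a probe point lands inside geometry, the bake searches up to the search distance for empty space, then pushes the point the surface offset away from the face it found. A one-dimensional sketch of that behaviour, not Blender's code, just the logic described above:

```python
def reposition(point, inner_face, outer_face, search_distance, surface_offset):
    """Move a probe point that fell inside a wall back into empty space.

    The wall occupies [inner_face, outer_face]; the room is on the inner side.
    """
    if not (inner_face <= point <= outer_face):
        return point                        # already in empty space
    to_inner = point - inner_face
    to_outer = outer_face - point
    # Jump to whichever face is reachable within the search distance,
    # preferring the nearer one -- exactly why a point placed too close to
    # the outer face can "escape" to the wrong side of the wall.
    if to_inner <= to_outer and to_inner <= search_distance:
        return inner_face - surface_offset  # back inside the room
    if to_outer <= search_distance:
        return outer_face + surface_offset  # wrong side: now samples outdoors
    return point                            # stuck inside the wall

# Wall from x=10 (inner face) to x=10.5 (outer face); the room is at x < 10.
print(reposition(10.1, 10, 10.5, search_distance=0.3, surface_offset=0.05))
print(reposition(10.4, 10, 10.5, search_distance=0.3, surface_offset=0.05))
```

The first call lands back in the room; the second, a point too deep in the wall, escapes outdoors, which is the "jumped to the other side" artifact shown in the video. The nonzero surface offset is what keeps the repositioned point a little away from the face, avoiding the banding discussed next.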
As I said, it's finicky. It's not fun; I wish it was easier. I really wish it wasn't like this, but it is, so alas, we're dealing with it. But that is pretty good; that's pretty clean. Quick interjection: something I forgot to talk about when I was recording this part of the tutorial. I talked about search distance, right? That's how far the point is able to jump. But surface offset is also very important. When a point has been repositioned, say, into this room here, the surface offset controls how far away from the face it will sit. So this is the face of the wall. If the offset was set to zero, the point would be right on the face, and that can result in a banding appearance; you can see this kind of grid look here. So generally you want to increase the surface offset so the point sits a little further from the wall. Instead of it being right there, as you increase that amount it moves further and further away, and that can result in smoother-looking bakes. So if I increase it further, you can see... there we go, we've got rid of a lot of the banding. It also feels brighter; the scene looks like how it should look, basically. You can increase it even further, though I think you can go too far, where it might clip through to the other side via the search distance. But that's something to keep in mind. I'll set this to, let's go, 32, something like that. The other thing I discovered during this learning process is that the surfel resolution is way too low by default. You can think of surfel resolution like the resolution of our HDRI-style probe captures: how accurate the light capture is going to be.
While I was playing around with settings, I experimented with different values, and I noticed that at 20 it looks way worse than at, say, 40, 60, 80, or 100. You hit diminishing returns at around 100, but you can see the higher you go, the more accurate it becomes. Again, don't go much beyond 100 for a scene like this, but at least change the default. I don't know why it's 20; I'm actually going to submit a bug report to try to have the default changed. But yeah, I would go to about 100, and that'll give us slightly better results and more accurate light bounces. Now, later in the video we'll get to the final boss, which is glass and volumetrics. But first we need to talk about textures, because there's a setting in Eevee that's turned on by default which will smooth out and ruin any textures unless you turn it off. Before we can do that, though, we need to add some textures. You can get your textures from anywhere; I'm going to get mine from my sidebar using the Poliigon add-on, which, if you don't know, lets you live-browse a library of textures from inside Blender. If you find one you like, say this one at the bottom, I just hit download, it downloads the full texture set, and then I just hit apply. And that's it. If you go to your shading tab, you'll see a full PBR stack that's been loaded in and correctly configured for optimal realism inside Eevee and Blender. Poliigon just turned 10 years old, if you can believe it. I started the site back in 2015 because I wanted a place where artists could go to get great, reliable textures. We hire experts to go out and scan the world's surfaces, then process and publish them in a consistent, reliable format so they always work together. And we're now the fastest-growing PBR texture library, growing at about 40 textures per week.
All of the major categories you could want textures in are there. You can download the textures from the website, but if you use Blender, 3ds Max, or Cinema 4D, you don't need to use the browser at all; you can do it all from the add-on. To get the add-on, click the link in the description or go to poliigon.com/blender, then click the big blue download button. Save the zip; don't unzip it. Now you can click and drag it directly into Blender, then say okay to install from disk. And that's it: if you hit N, that brings up your sidebar, and you should see Poliigon there. You just have to log into your account first, and then you're in. Now, we do have a free category here with some limited assets, but for this tutorial I'm going to be using premium assets. So I'm going to go to textures, wood category, subcategory flooring, and I like the look of this one here, so I'm going to hit apply. Okay, it's imported. Obviously the scale is way off, and we can see why if we go to the UV image editor and select our mesh. Well, first, we haven't UV unwrapped our mesh at all yet, so it's using the old coordinates. I'm going to hit U, go with Smart UV Project, and say unwrap. Now I'll select everything, and actually, let's look from the camera's point of view to see how it looks. I'm going to scale the UVs up by about seven. Let's see how that looks. That's pretty good. If you wanted a rustic log cabin, this is most of the way there. But I want it to look a little more friendly, more designer than that, so I want some nice concrete on the walls. What you can do is, in edit mode, select the part of your mesh that you want to carry a new material; I'm going to select all of my walls here. Then bring up the sidebar and go to the concrete category.
I'm just going to type in the name of a concrete I know we have, this nice mottled concrete texture that looks really good on walls, and hit apply. You can see that's automatically created a new material and assigned it to just the selected part of my mesh. Something I like to do when I bring in my materials is increase their glossiness a little, to get a bit more reflectiveness out of some of them. This is a nice wooden floor, but it's a little older, so it's a bit dull, and I want to brighten it up. So I'm going to select my wood, go to my shading tab, and then in my node setup, between the roughness texture and the Principled shader, I'm going to add a Math node and plug it in right there. Then, instead of Add, I'm going to set it to Multiply. A value of one is the default, so if I muted the node on and off, there would be no difference. Then, as I turn the value down, that decreases the roughness, which therefore increases the inverse, which is glossiness. And there you go: I've now got some nice reflection coming off this wooden floor. Now, that setting I mentioned that's on by default, which makes this texture look smudgy, soft, and undetailed, is in your render settings. Down in ray tracing, for our reflections, there's a setting for denoising. We want denoising on, because if it's off it looks very noisy, but inside it there's something called temporal accumulation. I'm going to uncheck that, and you can see it becomes a lot sharper and clearer. Now, because we've changed the materials in our scene, the bounce lighting is different from when it was a completely white, sterile-looking hospital room. That means we need to select our volume probe again and hit Bake Light Cache.
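The Multiply-node trick from above is simple arithmetic applied to the roughness map before it reaches the Principled shader: a factor of 1.0 leaves the map untouched, and anything below 1.0 scales every texel down, making the whole surface glossier while keeping the map's variation. A sketch with made-up texel values:

```python
def adjust_roughness(roughness_map, factor):
    """Scale a roughness texture, clamped to the valid 0..1 range."""
    return [min(1.0, max(0.0, r * factor)) for r in roughness_map]

worn_floor = [0.8, 0.6, 0.9, 0.7]   # dull, older wood: high roughness everywhere

print(adjust_roughness(worn_floor, 1.0))  # factor 1 = identical to the input
print(adjust_roughness(worn_floor, 0.5))  # half the roughness = much glossier
```

Multiplying (rather than subtracting a constant) is the nicer choice here because it preserves the relative contrast of the map: worn patches stay rougher than polished ones, just shifted toward gloss overall.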
When we do this, you can see it now looks considerably darker, because of course the light is no longer bouncing off a white wall, but off dark wood and so on. It looks a lot darker, but that's okay, because we're going to use an HDRI. So let's talk about HDRIs next, because again, there's another value you need to know about. You can get your HDRIs from anywhere, but the Poliigon add-on also has them. The one I'm going to use is on the second page, this one right here, this snowy woodlands type scene. So I'll hit import, and there we go: we can see the scene out there. Great. I want to increase the strength of this, and we're actually changing this in the add-on. Right now we split out the background exposure from the lighting exposure, in case you want them different, but we're going to change that soon so they're linked. Anyway, I'm going to set it to six for the lighting and six for the background, which will eventually just be one value. And there you go: there's now a lot more light coming in. And you'll notice, look what's happened, we've got something weird going on. This is what happens when you use HDRIs with Eevee. For clarity, the reason there are two sun sources is that we still have our sun lamp. If we didn't have the sun lamp and just had the HDRI, we would have this. So why is the sun on the roof? Well, as part of Eevee Next, in your world settings there is a Sun Threshold right here, and as you change it, it changes which part of the HDRI image it actually treats as the sun. When you increase the threshold, it's essentially zeroing in on where the cutoff should be, but you can see that this will change the sun position.
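The key to the behaviour described here is that the threshold is an absolute brightness cutoff applied after the world strength, so scaling the whole image changes which pixels qualify as "sun". A toy sketch with made-up pixel values (not Blender's extraction code):

```python
# Toy: which HDRI pixels count as "the sun" for a given absolute threshold.
# The cutoff is tested AFTER the strength multiplier, as described in the video.
def sun_pixels(pixels, strength, threshold):
    """Return indices of pixels whose scaled brightness exceeds the threshold."""
    return [i for i, p in enumerate(pixels) if p * strength > threshold]

hdri = [0.5, 2.0, 50.0, 3.0]   # index 2 is the actual sun disc

print(sun_pixels(hdri, strength=1.0, threshold=10.0))  # only the true sun disc
# Crank the world strength and extra bright-sky pixels cross the cutoff,
# shifting the extracted sun -- which is why the threshold must be re-tuned
# after changing the strength. (In Blender, a threshold of zero disables the
# extracted sun entirely, so the HDRI contributes only diffuse sky light.)
print(sun_pixels(hdri, strength=6.0, threshold=10.0))
```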
I think what it is, is that the threshold relative to the maximum value means it's going to look for the brightest point in the entire image, which should be that point right there, and make that the sun. However, what's annoying is that when you change the strength of the HDRI as a whole, it also changes the sun position, because when you change the strength, you're shifting which points exceed the threshold. At first I thought this was a bug; I even reported it to Clément, and then I discovered that no, it's actually applying this threshold after the Background node, after this node here. Therefore the threshold needs to be tweaked after you set your strength. Personally, I'm not a fan of using HDRIs as the sun source; I don't like this value here. I also just don't generally like using HDRIs as sunlight anyway, because you can't change the height of the sun very easily. The sun is wherever it is in the image, and if I want the sunlight to be up on the wall, I can't do that without choosing a different HDRI. So instead, what I like to do is set the threshold to zero, so the HDRI only contributes world lighting, treating it like skylight coming into the scene. Then I use my sun lamp and position it wherever I want, because I have ultimate control over it; I don't have to be bound by whatever was captured in the HDRI. And as long as it's plausible, as long as the light isn't coming from a completely different direction, like a midday HDRI paired with a sunset sun, it should still look generally fine, which it does. So yeah, I always use a threshold of zero whenever I'm using an HDRI with Eevee's sun setting. All right, now it's time for our final boss: glass and volumetrics. So I'm going to create a glass object.
We could use the Suzanne monkey head, but I actually want to demonstrate something that's better shown with a sphere. So I'm going to add a UV sphere, plop it in here, shade smooth, move it up a little, move it across, and now go to my material properties and add a new material. We could change the shader to Glass if we wanted, but with the Principled shader, turning Transmission all the way up to one is exactly the same thing as using a Glass shader. Now, it is glass, but it's looking very frosted, and that's because roughness is at 0.5. Let's dial that back to zero, so it's a clear glass orb. And that looks pretty terrible, right? The reason it looks terrible is that, just like our chrome monkey over here, this ball is working in screen space. Not only can it only reflect what it can see, it can also only refract what it can see: a double whammy. It doesn't know what's behind the camera, so it can't reflect anything behind us, and it doesn't know what's behind the ball, so it can't refract anything. So basically, if you've got any glass object, you need a sphere light probe (or a box-shaped one, by the way) right at its position; a sphere in this case. Let's dial this back like that, changing the size to something like that. You generally want the probe to be roughly the size of your sphere here. Now, we didn't talk about this before, but there's a falloff value here, and that controls where the influence fades out within the sphere. Essentially, if you have an object like a monkey head, and maybe the probe had to be positioned so it was a little bit smaller, you might want it to fade out at the edges of the ears rather than be a hard cut. Whereas in our case, this is a ball.
So we could actually turn the falloff all the way to zero and get this exactly where it needs to go. All right. Now, I wanted to show this: you can see how we're seemingly refracting something really bright. This is something that confused me for a while. The reason it's happening is that every probe has a clipping start and end. This matters because there's an object inside the probe, and the capture has to clip out what's inside it; otherwise the object would reflect itself. You can see it pretty clearly over here with the monkey head. If the clipping start began all the way in the middle there, you'd get this gray value appearing all over it, because the top of the monkey head is now reflecting itself. You'd actually see it if you turned on the data overlay and turned this all the way up: you get these gray patches going around it. That's parts of the monkey reflecting onto itself. So this clipping value is really important. It can't be too low, and it also can't be too high, because the moment you go too high, a bright portal opens up beneath the monkey head. That happens because the clip start is now cutting through the floor, so we're seeing what's on the other side of the ground. Go even higher and you clip through the wall; higher again and you clip through everything. So with these values, it's really the start that matters most. The end can be as big as you want, just not too small; the default of 20 is fine. In this case, what's happened is the start value is too high, so I'm just going to dial it back down a little, like that.
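The clip start/end behaviour is the same idea as a camera's clipping range: the probe's capture only "sees" surfaces whose distance falls between the two values. A minimal sketch of why both extremes fail (a toy 1D scene, not Blender internals):

```python
# Sketch: how a reflection probe's clip start/end decide what the capture sees.
# "surfaces" are hypothetical distances from the probe centre along one ray.
def first_visible(surfaces, clip_start, clip_end):
    """Nearest surface within [clip_start, clip_end], or None if all clipped."""
    hits = [d for d in sorted(surfaces) if clip_start <= d <= clip_end]
    return hits[0] if hits else None

surfaces = [0.3, 2.0, 5.0]  # the object's own shell, the floor, a far wall

print(first_visible(surfaces, 0.0, 20.0))  # start too low: captures its own shell
print(first_visible(surfaces, 0.5, 20.0))  # sensible: skips the shell, sees floor
print(first_visible(surfaces, 3.0, 20.0))  # start too high: clips past the floor
```

The first case is the self-reflection (gray patches), and the last is the "bright portal" where the capture sees straight through the floor to whatever lies beyond.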
And that should now be pretty good; we just have to make sure we don't go too low either. You can see that's too low, because it's now reflecting and refracting itself. So there we go, that's our lovely glass ball. Here's how it compares to Cycles. And yes, it's noticeably worse, but this is also the worst-case comparison for Eevee, because no real-time engine does glass as well as a path tracer; that's just a limitation. However, how many objects in your scene are made of glass, and how often are they this close to the camera? Almost never. But yes, if you had a close-up of a glass paperweight on a desk, Cycles would be the better option. Okay, but what about something more practical? Not a glass paperweight, which almost never happens, but something like a glass of water or a window pane. Let's do that. I'm just going to quickly model a glass cup from a cylinder: put it here, delete the top face, shade smooth, give it some thickness with a Solidify modifier, and what the hell, add a Bevel as well. Now I'll apply the same material: Shift+Ctrl+L and click Link Materials. And now we have this ugly result, which you might remember from before: we need to add a probe. So I'll just duplicate this probe and place it over the top, and you'll see it's better, but still not great. Let's scale that up. Okay, it looks kind of like a cylindrical fish tank, right? There's some weird refraction going on here, and if you look inside, it looks terrible. And more to the point, what if we had some liquid in here? If I duplicate this object, delete the Solidify modifier, scale it in, shrink it a bit, move it down, and fill in a face along the top so it sits inside the glass walls, then have a look: you can see it looks even worse. We cannot see that there is water inside here.
In fact, we can't really see anything. Even if I make this its own new material, we can't see it. This is actually a limitation of real-time rendering engines, not just Eevee: whenever we use refraction, we can't see through to other objects on the other side. We can only see whatever the probe captured at that point. So I cannot see the liquid inside it; there's no way to render liquid inside this cup if I want refraction. What games do, and I actually noticed this while playing Call of Duty, I took some screenshots, is that they don't use refraction on glasses when you need to see the liquid or anything inside. Instead, you use fake glass, sometimes called arch-viz glass: a special shader that isn't just using transmission. So this is how we do it. Go to the shading tab; here's my object. I'm going to delete this Principled shader and build a primarily transparent setup. So, a Transparent shader like this, plug it in, and you can see it's now as if the object doesn't exist. And really, with thin glass there's only a microscopic amount of refraction; it's mostly just gloss. So we need transparency with gloss. I can do that very quickly by adding another shader, a Glossy shader, then a Mix Shader to combine the two, like this. Then we need to mix them so the glossiness only appears where the surface turns away from us, like a real Fresnel effect. So I'll go to Input, look for a Layer Weight node, and take its Facing output, like so. And that is pretty good. If I turn my roughness all the way down to zero, you can see it now works. And this is how you render liquid inside of glass. Whenever the glass is really thin like this, this is how you do it; you don't need refraction.
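The fake-glass node group boils down to: transparent where the surface faces the camera, glossy at glancing angles, blended by the Layer Weight node's Facing output (at its default blend, Facing behaves roughly like 1 − |cos θ| between the normal and the view direction). A simplified sketch of that mix, not the exact node math:

```python
def fake_glass(cos_theta, behind_color, reflection_color):
    """Blend transparency and gloss by a Layer Weight-style facing term."""
    facing = 1.0 - abs(cos_theta)   # 0 when viewed head-on, 1 at a grazing angle
    return tuple((1.0 - facing) * b + facing * r
                 for b, r in zip(behind_color, reflection_color))

water = (0.2, 0.4, 0.8)   # what's behind / inside the glass (the transparent path)
sky = (1.0, 1.0, 1.0)     # what the glossy term reflects

print(fake_glass(1.0, water, sky))  # head-on: you see straight through to the liquid
print(fake_glass(0.0, water, sky))  # edge-on: pure reflection, like real glass rims
```

This is why the trick reads as glass: real thin glass is also nearly invisible head-on and strongly reflective at its silhouette, and skipping refraction entirely is what lets the liquid behind it stay visible.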
There is technically a small amount of refraction in real glass, but it's more important that you can actually see the liquid inside. And hey, look, now we could even give our liquid subsurface scattering and make it look like milk. I just noticed my orb over here has taken on the same material, but there you go; you can see how it behaves. This is also how you create arch-viz glass, because if I add a plane right here, like this, and give it my standard glass shader with roughness all the way down, you can see we've got the same problem: it's forced to do refraction, which makes it look like you're peering through a fish tank. And if you're used to Cycles, you might be thinking, oh, you fix that with a Solidify modifier. No, that does nothing here. The solution is to use the exact same fake-glass shader. So, Ctrl+L, copy that, and there you go: now I've got my reflections, but I can actually see through the window as well. Now, as an aside, you might notice this looks kind of fuzzy, kind of noisy. That's because in your material settings, if you go down to Render Method, it's set to Dithered, which means it uses some form of screen space as well as your light probe. If you change it to Blended instead, it's forced to use just the probe by itself, and it's a lot cleaner, a lot nicer; there's no screen-space noise, and we avoid the fuzz. However, it only works with sphere light probes, and this is something that bugs me. I actually logged it as a bug report and was told it's just a limitation currently. And you'll notice that my window here no longer has any reflection.
That is because plane light probes, this one right here, don't cooperate with the Blended method, which means you basically can't have a window without it looking fuzzy and blurry; it has to be in Dithered mode, which gives us this noisy, horrible look. It's really disappointing, and I'm hoping it gets improved. If you want to encourage the developers to rethink this, I've put a link in the description to the discussion on projects.blender.org, where you can voice your agreement. But basically, windows can't really be done properly until we can set this to Blended and actually see the reflection. Anyway, the final type of material to add, and really the thing Eevee does tremendously well and Cycles is terrible at, is volumetrics. So we're going to create a cube and have it fill our scene, roughly the size of the room; it doesn't really matter, it's just a fog effect over everything. Let's go into rendered view and scale it out a little. There we go. Now add a new material, delete the Surface part of the material, just remove it, then go down to Volume and select Volume Scatter. And now we have this. The default density is always way too high, so let's turn it down to 0.05. And look at that. Look at that! That is why you use Eevee. Isn't that amazing? It's beautiful. In Cycles, this is a nightmare; Cycles really struggles with volumetrics, and Eevee just does it tremendously well. It's something they recently improved a lot in Eevee, particularly how it handles shadows within volumes. So it's a really good competitor to Cycles, because you can render this in real time. This is incredible as real time. So that's basically it. All right.
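The density on the Volume Scatter node controls how quickly light fades with distance through the fog via Beer–Lambert attenuation, which is why 0.05 reads as a gentle haze across a room-sized space while the default-scale values look like soup. A sketch of just that relationship (the full volume shading is more involved, but the exponential falloff is the core of it):

```python
import math

def transmittance(density, distance):
    """Fraction of light surviving a path through uniform fog (Beer-Lambert)."""
    return math.exp(-density * distance)

# At density 0.05, light crossing a 10-unit room keeps about 61% of its energy:
print(round(transmittance(0.05, 10.0), 2))
# At a density of 1.0, the same path keeps essentially nothing -- a whiteout:
print(transmittance(1.0, 10.0))
```

So when tuning fog, think in terms of density × room size: doubling the room depth has the same visual effect as doubling the density.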
So our scene looks lovely, but if you followed along and then tried to build and render stuff, you will inevitably run into a problem that I need to help you with. At some point you will go, "Oh, I should re-bake my volume probe." Remember this guy, the one with the little ping-pong balls everywhere? That's what's capturing our light bounces. And you go, "Yeah, I've added more objects to the scene, maybe I've got some stuff bouncing and reflecting light in a different way, I'm going to re-bake it." You'll hit re-bake, and everything will go dark. And it's not documented anywhere why this happens. I was even asking ChatGPT, fooling around with settings, trying to find what was blocking it. Maybe it wasn't in the right collection, maybe there's a setting here. No. It's because, and I wish this was documented, the fog object blocks the entire volume probe bake. Okay, so let's call this object "fog". What you can do is disable it for rendering and then bake, but I usually forget to uncheck it and then recheck it afterwards. So what I normally do is move it to a new collection, and I call it "hide before baking". Now, if I did that and then went to bake the volume probe, unfortunately it would not be fixed. Why? Because there's another object you have to disable, and some of you can probably guess what it is: it's our window here. Exact same problem. This also has to be hidden before you bake, so into "hide before baking" it goes. Oh, and I didn't even hide it. Okay, so you've got to hide it. You disable it from the render, then select your volume probe and bake the light cache. Now it's baking properly. Then, once it's finished baking, I turn everything back on. And that's my solution. That's what's worked.
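If you'd rather not rely on remembering to toggle those checkboxes, the hide-before-baking dance can be automated. Here's a minimal sketch of the idea as a context manager: the `hide_render` attribute matches the render-visibility flag in Blender's Python API, but the bake call itself is left as a comment, since the exact operator name varies between Blender versions and isn't shown in the video.

```python
from contextlib import contextmanager

@contextmanager
def hidden_for_bake(objects):
    """Temporarily disable objects in renders (the fog volume, alpha-mapped
    windows, etc.), then restore their original visibility even if the bake fails."""
    previous = [(obj, obj.hide_render) for obj in objects]
    try:
        for obj in objects:
            obj.hide_render = True  # same flag as the camera icon in the outliner
        yield
    finally:
        for obj, state in previous:
            obj.hide_render = state

# In Blender you would wrap the actual bake, e.g. (operator name may vary by version):
# with hidden_for_bake([bpy.data.objects["fog"], bpy.data.objects["window_glass"]]):
#     bpy.ops.object.lightprobe_cache_bake()
```

The `finally` block is the point: visibility gets restored whether or not the bake succeeds, so there's nothing to forget to recheck.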
Every time I need to re-bake, I go here and uncheck it, bake, and then recheck it. It's annoying, and it's not a solution I really like, but it's the only one I know of. I'm hoping they improve that, because it really sucks to have to keep doing it. Now, I'm speeding this next part up, but as you can see, I'm just adding in some models from Poliigon to fill in the scene so that we can optimize it. I don't want to just give you a tutorial where you build an empty scene with Suzanne heads, because that's not very practical; it's when you add in real objects with textures and shaders that things can break. So we're doing this so that we can optimize it. For example, with a real scene, we can see that we get this weird light coming from underneath our objects. The reason for that is our volume probe here, the one with the little ping-pong balls everywhere: some light bleed is coming through, and with the probes alone there's not really an easy way to stop that for things this close to the surface. Instead, you go to these little settings up here, which I believe I mentioned before; they exist to fix artifacts like this. In this case, and I'll be truthful here, I don't really know exactly how they work, but I know that if you turn down Normal, the artifact disappears. The other one that can help is turning up View. Essentially, whenever I see issues like light bleed, I come in here and play with these settings a little, and it usually fixes it. So that's the first problem. Another problem is that we have a tree here; this is the mesh, but then there's a texture driving its opacity.
So this part of it basically isn't rendered, and I don't really know why there's this speckled effect. I don't know why it happens, but it's obviously not great, and it's going to flicker: you see the little fireflies? You're going to see that flickering in the animation. Whenever I see something like this, I go in here and change the render method. I've been going all over the place with this tutorial, it's a long one, but I believe I talked about this already: essentially, the render method will switch to using probes only instead of screen space reflections, and that can fix things like this. So we just change that, wait for the shaders to compile, and there we go, it's fixed. I'm sure the reflections coming off those leaves are technically less accurate now, but it has fixed that problem for us. All right, so that's cool. Now, if we do a render, we're coming in at 11 seconds, and I suspect part of the cause is that we have a lot of textures being loaded into memory, and just a lot going on with each of our shaders. So let's go to the shading tab and I'll show you what I mean. Okay, here we go. Each of Poliigon's objects has a standardized list of textures, which makes it cross-compatible; it can be used in Unreal Engine, etc. But it means there are a lot of textures, and some of them aren't necessary for the result. For example, this is a flat piece of artwork, but we have to include a metalness map to show that it's not metal, so that it works with metalness shaders. The actual texture is just black, so it's not a necessary texture. What we need to do is come in here and delete anything that isn't necessary. So I'm going to delete the metalness, the roughness, and the alpha.
I'm just going to leave the base color and the normal; that's all I need for that one. For a chair like this, I'd do a similar thing: delete the metalness, the roughness, and the alpha, and again leave just the base color and the normal. Now, we are working on a method at Poliigon using MaterialX to substitute numerical values for maps that are just flat. We will get there, but in the meantime we have to distribute the maps, and that can have this effect where a material uses a lot of memory it doesn't necessarily need to. It's annoying, and I don't like that it takes as long as it does, but this is essentially what you do: come in here and cut the links by holding down Ctrl and dragging with the right mouse button to cross those connections out. So I'm going to do this now for the rest of the scene, going object by object and deleting any unnecessary maps. They will be different for each object. This one, for example, has metalness and a diffuse, so I need the metallic map. I probably also need the roughness, because the top is more diffuse than the legs, and I need the normal. Really, I can only get rid of the transmission, so I'll delete that. Anyway, I'm going to go through the rest of the objects. And we're back. You can see that we are now at 4 seconds, and we were at 11 seconds previously with all the textures loaded. So that's something to keep in mind: Eevee's render times increase the more textures you add, whereas Cycles is a little more resilient to it. So if you don't need certain textures loaded into memory, it's good to remove them, because you can more than halve your render time. And I know that doesn't seem like much, hey, six seconds; if you're doing a still, it really doesn't matter.
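To put rough numbers on why pruning flat maps helps, here's a back-of-the-envelope sketch. It assumes uncompressed 8-bit RGBA textures fully resident in memory, which is a simplification of what Blender actually allocates, but the proportions are the point: every 4K map you cut is tens of megabytes the renderer no longer touches.

```python
def texture_memory_mib(width, height, channels=4, bytes_per_channel=1):
    """Rough in-memory size of one uncompressed texture, in MiB."""
    return width * height * channels * bytes_per_channel / (1024 * 1024)

def prune_maps(maps, keep=("base_color", "normal")):
    """Mimic the manual cleanup: keep only the maps that affect the result."""
    return {name: size for name, size in maps.items() if name in keep}

# A typical five-map 4K material, sizes in MiB:
size_4k = texture_memory_mib(4096, 4096)  # 64.0 MiB per map
material = {name: size_4k
            for name in ("base_color", "normal", "metalness", "roughness", "alpha")}

before = sum(material.values())             # 320.0 MiB with all five maps
after = sum(prune_maps(material).values())  # 128.0 MiB with just base color + normal
```

Note the `keep` tuple is per-object, just like in the video: the table material would keep `metalness` and `roughness` too, while the flat artwork keeps only the two defaults.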
However, if you're doing an animation, you've halved your full animation sequence render time, which is pretty cool. The other thing that plays a big role in render times is these guys, the reflection probes. To recap, a reflection probe lets us see beyond screen space reflections; with it turned off, we're only reflecting what the camera can literally see on the screen. That was very obvious when we had the empty room with sheer reflections and got all sorts of artifacting. But now that we've got textures, and we don't have a big reflective wall in the background, it's a lot more subtle. And if I disable that probe, look at the render times: it's gone to 3.50 seconds, whereas it was 4.27 seconds, so it's taken off almost a full second, and there's not much visual difference. You can see it's able to capture more of what's outside the window when you've got the plane probe there, but it's a negligible difference, so it wouldn't be worth keeping. I also put one on the window here. But unless you had a close-up of the window, you probably don't need to see the rest of the room reflected in it, so I would disable that one as well. Any others? I don't think there are. Okay, so now if I did another render, it's even less; we're now sub 3 seconds. So probes make a big difference to the scene, and as far as I can tell, the bigger the probe, the more impact it has. The one area where you do want a probe is on an object like this: a very unique wall decoration that we actually did as a collaboration with a designer from the High Key. Very cool looking thing.
But yeah, it's got this big dome mirror on it, and because we're relying on screen space, as the camera moves closer it has no idea there's a room behind us right now, so it reflects the HDRI instead, which is not good. This is where it makes sense to add a spherical probe. So I'm going to dial this in, position it about there, and turn off the falloff, because it doesn't matter; I want it nice and sharp like so. The one thing that does matter is the clipping distance, because it's currently clipping out the ceiling. And there we go, we've put that back. Cool. Now, if I do this render, it adds a small amount: it was 2.66 seconds before, and if I do another render, it's about 2.80, so it's adding a very, very tiny amount. By the way, you might be wondering whether it's possible to make this reflection clearer, and it actually is. There's a setting way out of the way where you wouldn't expect to find it: if you go to your scene properties, there's a light probe sphere resolution, which by default I think is 2K. I changed it because I was making the tutorial. You can go up to 4K, but unless the camera is this close to it, say a film shot looking into one of those dome security mirrors, you don't need it. I don't even think you need 1K for most things; it's just not that important. It's also probably worth testing the resolution for ray tracing and the resolution for your fast GI approximation, because honestly it looks like you could get by with something as low as 1:16, maybe even for both of them, and that can definitely reduce your render times. Not actually by as much as I thought, though: 2.46 seconds.
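For a rough feel of why bumping the sphere probe resolution is rarely worth it, here's the cubemap arithmetic. This assumes an uncompressed six-face RGBA cubemap with half-float channels, which is only an illustrative guess at the storage; Blender's actual internal format may differ, but the quadratic growth is the takeaway.

```python
def sphere_probe_mib(face_resolution, channels=4, bytes_per_channel=2):
    """Approximate memory for a cubemap light probe (6 square faces), in MiB.
    ASSUMPTION: uncompressed RGBA16F storage, purely for scale comparison."""
    face = face_resolution * face_resolution * channels * bytes_per_channel
    return 6 * face / (1024 * 1024)

low = sphere_probe_mib(1024)   # 48.0 MiB at 1K
high = sphere_probe_mib(4096)  # 768.0 MiB at 4K: 16x the memory for 4x the edge resolution
```

Doubling the resolution quadruples the memory and the shading work per probe update, which is why the default only hurts you in extreme close-ups.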
So not a drastic amount, but with these cheaper settings you will notice flickering in your animation. That's something I ran into: I went really low originally, then realized that for the light bounces, which is this setting here, you're going to see some flickering, and the same with the reflections as well. So I think I went 1:1 for one and 1:2 for the other in my final animation. The final thing to call out that has an impact on your render times is the fog, this little guy here, the volume. It's super cheap in Eevee, but it does add something: with it turned off we're getting 3 seconds exactly, and with it turned on it's about half a second more per frame. Still super lightweight. And Eevee can definitely be used for realism if you know what you are doing, which I hope you do now that you've finished watching this video. If you liked it, please give it a big thumbs up, and if you want to see more tutorials like this one, hit subscribe and I will see you next time. Bye.