Transcript for:
Diving Deep into C# Async/Await

Hey friends, I'm Scott Hanselman and I am here with partner software engineer Stephen Toub. How are you, sir? I am doing very well.

It's great to see you. Great to see you too. I am confused. Many years later, how many years since Await and Async came out?

12, 13, something along those lines. It was around 2010. It's so powerful. It's changed the industry.

People use it outside C#. It's a pattern now. And I feel that I understand it just enough to be dangerous. I feel like it's one of the world's greatest foot guns that I've now got directly pointed at my toes. And I do not have steel-toed boots.

So maybe you can help us understand what's really happening so that we might become better programmers. I would love to. One of the things that I really value is understanding how things work, having a really good mental model for how things work.

And I feel like you don't have to understand every line of code that went into something that you're using. But the more of it you understand, the better you're able to use it and take advantage of it. And you're right.

Async/await has been around for so long. And yet still, or maybe because of that, there are a lot of misconceptions about how it's actually implemented and built. So last year, about a year ago, I wrote a blog post called "How Async/Await Really Works in C#", but it's long and it's detailed and it's walking through individual lines of code.

So what I thought you and I could do would be to sort of pretend it's 10 years ago and implement a super simple version of async and await from the ground up. And along the way, kind of really see the bits and pieces that make it tick. I won't be particularly worried about performance, which pains me.

And I'll be doing things that I wouldn't do in any other context. But our goal here is just to sort of understand how things fit together and get a really good mental model. Well, David Fowler and I recently came off of doing a 20- or 30-part series on beginner C#. And we got right up to, but did not get to, async and await. And the number one most requested thing on our channel is more complicated, more technical content. Would you put this at 200 level or 300 level?

Like if you're beginner, intermediate, advanced, what should someone know when they join us here? 300 level. Okay. So we're not going to be doing anything that is fundamentally difficult.

But some of the concepts can be a little mind bending how things relate to one another. All right. Well, I am going to put my empathy hat on and try to be like it's 10 years ago.

And I've been doing this. So I'm going to be the audience. And I'm going to ask questions if that's okay. Yeah, please.

Let's take a look at your machine. You're on Visual Studio here and we've got Console.WriteLine("Hello, Scott").

Yep. Awesome. So one of the first things you realize when you start talking about async, await, and asynchrony is asynchrony sort of breeds concurrency, right?

You start something and then it might complete immediately, it might not, but you've just launched it and you can go off and do something else at the same time. And so then when that thing completes and maybe it's going to do something else after, now you have multiple things possibly happening at the same time. And you need some way to enable multiple pieces of work to all run at the same time.

So at the very bottom of the stack, you have a thread pool. And since we're going to try and build up this whole thing sort of from scratch, we should write a thread pool, a very simple thread pool. Yeah.

So to level set though on definitions: if I'm going to say I'm going to do things concurrently, I'm going to do two or more things at once. If I'm going to do things asynchronously, that's not quite the same as saying I'm going to do multiple things at once. Yeah.

So let's say I had a for loop here. Let's say it's i equals zero, i is less than a thousand, i plus plus. And then I'm going to use a thread pool. We're now pretending that this does exist again, even though we were about to pretend that it doesn't. And I can come in here and put some work inside here.

This line here is just queuing a work item. And immediately after I queue it, I can do something else. So I have queued this work asynchronously.

I'm launching whatever the body of this is, I'm running it asynchronously from where I currently am. Fire and forget. Fire and forget.

It's working. It's working without the join. Yeah.

And that doesn't necessarily mean it's going to run concurrently. If this was the UI thread of a Windows Forms application, and I wasn't using ThreadPool.QueueUserWorkItem to queue the work... I was instead using Control.BeginInvoke.

That's really just asynchronously running that work, but queuing it to run back on the very place that I am. So it's asynchronous invocation, but it's not necessarily concurrency. Where concurrency comes in is the fact that this piece of work here actually could end up running at the same time as this piece of work here.

And so you can't have concurrency without asynchrony, but you can have asynchrony without concurrency. And then it's arguably not deterministic whether or not those two dot, dot, dots, those two ellipses, might run at the same time. Depends on your processor, depends on a million different things, depends on if you get preempted, who knows. Absolutely. In fact, this leads very nicely into what I wanted to show next, which is: let's say I have this for loop here, and I'm going to say Console.WriteLine(i) and then Thread.Sleep(1000). Now, you might think that when I run this, it's going to print out the numbers 0 through 999. And ideally that's what I wanted, right?

I'm queuing a bunch of work items. This machine that I'm running on has 12 logical cores. The thread pool is going to have about 12 threads-ish.

And so you would kind of expect to see 12 numbers printed, 0 through 11, and then 12 through 23, and so on and so on. But I actually have a bug here, and if I run this, it gets to exactly what you were just talking about. Oh, this came up on the wrong monitor. You can see it's printing out all 1000s, and it's exactly because of what you just said. If I sort of, you know, just minimize this and forget what this was doing, all I'm really doing is queuing 1000 work items, and then I'm going off to do something else.

By the time I do that, i is now 1000. And then when these work items eventually end up running, they're just referring to that same i value that was captured by this work. They're all just referring to the same variable. And so they all see it as 1000 rather than printing out what I wanted to print out. (This keeps popping up on the wrong screen.) In fact, what's really cool is I can actually just select this code and I can ask GitHub Copilot, for example, why is this printing out 1000s?

I'm sorry, you wanted to say something? Well, while it's thinking about explaining that to us, is it going to print out 1,000 1,000s and then print out 1,000 999s? This is just going to print out 1,000 1,000s. And that's the end of it.

That's the end of it, yeah. Why is this printing out thousands when I expected it to print incrementing numbers? I love that you're using it as a rubber duck, right? Rubber ducking is this idea that I'm just going to have something on my desk that I'm going to talk to and it's going to help me understand it.

I actually have a rubber duck that I have on my monitor that's just there to ask these questions to, but now I can do it to copilot. Well, the really neat thing about this is it's more than a rubber duck. It's explaining the problem to me.

Oh, wow. But then it's actually recommending a solution, which I can go and preview and then just accept the change. So what it's done is it explained the problem, and then it's basically saying: rather than using this variable, which is in this outer scope and is going to be reused across all of the closures, it's instead putting the thing that's being captured into the local scope, so every work item I queue will have its own copy. And if I run this now, we'll see indeed that we end up getting that behavior we were expecting, where we're printing out these incrementing numbers as we go. Even though we're seeing it all in one row here, we're seeing them fight a little bit.
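The bug and the fix just described can be reproduced deterministically without any threads at all. This is an illustrative sketch, not the code on screen: storing the closures in a list and invoking them after the loop makes the shared-variable capture visible.

```csharp
using System;
using System.Collections.Generic;

// All three closures capture the same loop variable 'i' (in C#, a for-loop
// variable is one variable shared across all iterations).
var buggy = new List<Func<int>>();
for (int i = 0; i < 3; i++)
{
    buggy.Add(() => i);
}
// By the time we invoke them, the loop has finished and i == 3.
string buggyOutput = string.Join(",", buggy.ConvertAll(f => f()));
Console.WriteLine(buggyOutput); // 3,3,3

// The fix: copy the loop variable into a local declared inside the loop body,
// so each closure captures its own variable.
var corrected = new List<Func<int>>();
for (int i = 0; i < 3; i++)
{
    int copy = i;
    corrected.Add(() => copy);
}
string fixedOutput = string.Join(",", corrected.ConvertAll(f => f()));
Console.WriteLine(fixedOutput); // 0,1,2
```

This is the same transformation Copilot suggested: move the captured value into a per-iteration local so each work item sees its own copy.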

Exactly. What we want to do is here I'm using the real ThreadPool, but I need to do this with my own ThreadPool. Got you. We're going to go in here, we're going to write a little class called MyThreadPool for lack of a better name, since I'm not very creative when it comes to names.

We need that same QueueUserWorkItem that we just saw. QueueUserWorkItem will take an action, and we need to do something with this.

And one of the things I love about doing examples like this is they kind of write themselves, in that I'm saying queue here, so I need a queue. Like, I need somewhere to store this data. So I'm going to come in here and say static readonly.

And there are lots of different data structures that I could use, but I'm going to use one called BlockingCollection. And the beauty of a BlockingCollection here is that you can store things into it. It's basically a concurrent queue. But when I want to take something out, I will block waiting to take out the thing if it's empty. And that's what I want my threads to be doing.

All of my threads in my thread pool are going to be trying to take things from this queue to process. And if there's nothing there, I want them to just wait for something to be available. So my QueueUserWorkItem is just going to say workItems.Add and put that into the queue. Then I need a bunch of threads to do the processing. In a static constructor here, I'll just kick off a bunch of threads.

i is less than Environment.ProcessorCount. I mentioned I'm on a 12 logical core machine here, so I'll just have 12 of these. Each one of them is just going to kick off a thread and start it. Now, interestingly, sorry, go ahead Scott. If I may, I just wanted to call out, if you scroll up just a smidge, I want to make sure for folks that are following along and learning: on line 6 there you said delegate.

And now you're using uppercase A action. Action is a delegate, right? It's public delegate void action.

So maybe explain the juxtaposition there, as you immediately and intuitively picked Action? Yep. So, in .NET, you can have delegates of all shapes and sizes.

They're managed function pointers, basically. .NET has built-in definitions of some of those for very common shapes. One of those super common shapes is just a parameterless, void-returning method. That's all Action is.

So this can bind to anything that is parameterless. And because we're not doing anything fancy, we're not accepting any state, we're not returning any additional arguments, I have just used Action here. And if I write delegate and I change this to my thread pool, we can see that it binds successfully because the compiler is able to convert this anonymous method into an Action.

It's also able to do that with a lambda, which is just another way of writing the same thing. Just a good reminder to folks: an Action is a delegate that has already been defined. All right, cool. Please, thank you.

Yeah, no, thank you. I'm creating a thread here. This isn't really here or there, but interestingly, .NET distinguishes two kinds of threads.

It has what are called foreground threads and background threads. The only distinction between those is: when your main method exits, do you want your process to wait around for all of the threads that you created to exit as well? Foreground threads, it will wait for them.

Background threads, it won't wait for them. Because I don't want these threads that are sitting here in an infinite while loop to keep my process alive forever. I'm just going to say is background equals true. And that way, these threads don't keep my process from exiting.
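As a tiny sketch of that distinction (names hypothetical), a thread marked IsBackground = true will not keep the process alive once the main thread finishes:

```csharp
using System;
using System.Threading;

// A thread parked forever. Because IsBackground is true, the process still
// exits normally when the main thread finishes, rather than waiting on it.
var parked = new Thread(() => Thread.Sleep(Timeout.Infinite))
{
    IsBackground = true
};
parked.Start();

Console.WriteLine(parked.IsBackground); // True
// Main returns here; the process exits without waiting for 'parked'.
```

Flip IsBackground to false (the default) and this program would hang forever at exit.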

Now, is that something that might not necessarily be intuitive to someone who came from a Unix world, who is not going to think about that kind of foreground thread, background thread? And there's also the concept of green threads and native threads in some circles. Yeah, and frankly, it's not something that you frequently run into or that matters. But since we're sort of looking at implementing the lower-level stuff here, I'll call some of these things out along the way.

So these threads just sit in an infinite loop doing something. And what are they doing? Well, they're taking the next work item from workItems and running it, and that's it. Now I've got my thread pool.
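Putting those pieces together, here is a minimal sketch of the MyThreadPool being described. The names and shape follow the transcript; this is illustrative, not the real ThreadPool source.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Queue a few work items and wait for them all to run.
var done = new CountdownEvent(3);
for (int i = 0; i < 3; i++)
{
    MyThreadPool.QueueUserWorkItem(() => done.Signal());
}
done.Wait();
Console.WriteLine("all work items ran");

static class MyThreadPool
{
    // BlockingCollection wraps a concurrent queue; Take blocks when empty.
    private static readonly BlockingCollection<Action> s_workItems = new();

    public static void QueueUserWorkItem(Action workItem) => s_workItems.Add(workItem);

    static MyThreadPool()
    {
        // One worker per logical core, each looping forever taking work.
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            new Thread(() =>
            {
                while (true)
                {
                    s_workItems.Take().Invoke();
                }
            })
            { IsBackground = true }.Start(); // don't keep the process alive
        }
    }
}
```

The static constructor runs on first use, so the worker threads spin up the first time anything is queued.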

If I kick this off, we can see that we get the same behavior that we saw before, even though we're using my thread pool. It's behaving in a very similar fashion. We just implemented our own thread pool. Now, if you were to look at the real .NET ThreadPool, it's a whole lot more code than the 15 or 20 lines here. Almost all of the real code goes into two things: one, making it super efficient, and two, not having a fixed number of threads. A lot of the logic is about thread management, increasing and decreasing the number of threads the ThreadPool has over time in order to try and maintain good throughput for your application. But as I said, I'm not worrying about perf. Sorry. Right. But one of the interesting things here, because we're implementing this at this lower level, is there are other things that we need to think about that most developers implementing most libraries, implementing most code, don't need to think about. But because we're implementing the details here, we do.

And in particular, if you're familiar with, say, ASP.NET, right? ASP.NET has this thing called HTTP context. And you're able to use this HTTP context accessor to basically say, give me the current HTTP context for where I am. Or if you're using principals with threads and you say, who is the current principal associated with this thread? That information somehow flows when you queue work items or you do other things.

There's all this sort of ambient state that somehow seems to be able to magically flow from one thread where you're doing something to the continuation or to the other work that you've queued. Okay, that has to happen somehow, and it happens via something called ExecutionContext.

In lines 4-18 here, we have that captured value i. Does that then join thread local storage and go along for the ride as you queue that work item? It's not exactly what we would call thread local storage. This value here is really just being stored onto an object that's being passed into QueueUserWorkItem.

Thread local storage would be if I actually had a static field and I tagged it as [ThreadStatic]. What that then ends up doing is saying that each thread has its own copy of that static field. But with something like async and await or QueueUserWorkItem, I'm going to be hopping between threads. If I put something in thread local storage on one thread, it may or may not end up being available in the work that I queued, because it might run on a different thread.
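A small sketch of the [ThreadStatic] behavior being described, with hypothetical names: each thread sees its own copy of the field, so a value set on the main thread isn't visible on a new thread.

```csharp
using System;
using System.Threading;

Holder.Value = 42; // set on the main thread's copy

int seenOnOtherThread = -1;
var t = new Thread(() => seenOnOtherThread = Holder.Value);
t.Start();
t.Join();

Console.WriteLine(Holder.Value);      // 42: the main thread's copy
Console.WriteLine(seenOnOtherThread); // 0: the new thread gets its own default copy

static class Holder
{
    // Each thread gets its own independent copy of this field.
    [ThreadStatic]
    public static int Value;
}
```

This is exactly why [ThreadStatic] alone can't flow ambient state across thread hops.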

What we need is a mechanism to say: that ambient state that we kind of have hanging out there, like in thread local storage, how do I enable that to automatically flow with my work? Because in the case of something like async/await, I do A and then I await something, and I do B and I await something. I kind of like that ambient information to be present, even across all those possible hops.

Think about it like if you were building a large distributed system, maybe like a correlation ID that's allowing you to go and track the logical transaction over the course of a large distributed system. In the early days of ASP.NET, a lot of people got nailed by marking things as thread static, and then a thread gets reused in the pool, and then someone else's account number is there. And like, wait a second, that variable is not my data.

Exactly. And you mentioned distributed tracing or whatnot; that's another good example where we take advantage of this. The Activity stuff that's used for doing distributed tracing: you can await any number of times, yet somehow your correlation ID or your...

your IDs for your spans end up being the same. And that is via this mechanism. So there's a type in .NET called AsyncLocal.

And I'll just call this i. Actually, I don't want i; let's call this myValue.

And then here I can say myValue.Value = i. And I can use myValue.Value here. This looks like I have a single shared thing, right?

It's sort of the same as the initial problem I had. It's outside of my loop, and I'm just storing a value into it, and I'm using that value here. But if I run this, we'll see that it correctly does what I wanted it to do, right?

I've still got my incrementing numbers, and I'm not somehow sharing the same value across all of these. That magic is happening via something called ExecutionContext in .NET, which is this thing that takes all of that thread local state that has been specifically put there, and flows it with all of these asynchronous operations like QueueUserWorkItem, or new Thread, or await, or Task.Run, or any of these things. Now, normally, that all just happens for you, right?

That magic is happening for you by QueueUserWorkItem or by Task.Run. But if I switch this over to my thread pool, and now I run this, it's all zeros. Because we're re-implementing that lower level of the stack.
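For contrast, here is a sketch of what "happens for you" with the real ThreadPool: AsyncLocal values flow with the captured ExecutionContext, so each queued work item sees the value that was current at queue time.

```csharp
using System;
using System.Threading;

var myValue = new AsyncLocal<int>();
var results = new int[3];
var done = new CountdownEvent(3);

for (int i = 0; i < 3; i++)
{
    myValue.Value = i; // stored in the ambient ExecutionContext
    ThreadPool.QueueUserWorkItem(_ =>
    {
        // The real ThreadPool captured and restored the context, so this
        // reads the value that was current when the item was queued.
        int value = myValue.Value;
        results[value] = value;
        done.Signal();
    });
}

done.Wait();
Console.WriteLine(string.Join(",", results)); // 0,1,2
```

Swap ThreadPool for the MyThreadPool sketch without context flow and every work item would see 0 instead.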

We need to handle that flow. So AsyncLocal is this kind of multiverse-friendly interdimensional traveler. Okay, we're going to start hopping around from thread to thread, and it's going to be passed in as a parameter to this function, and then any changes to that object are going to be seen by the caller. But then if you assign a new value, it's not going to be seen by the caller. Yeah, and this is where...

Seeing exactly how it works under the covers helps clear up exactly how that flow happens. It's one of the reasons I love learning about this stuff at the lower level, because your mental model for this locks in place. What is this actually doing? Well, rather than just storing an Action, we need to also store that ExecutionContext, that thing that's getting passed around. Then here, rather than just adding the action, I'm going to add ExecutionContext.Capture().

I'm going to grab the current context and store it along with this action in this collection. And then when I take this out, I'll just take out that execution context as well. So now I've not only dequeued the action, but the execution context associated with it.

Did you have a question? So action is a delegate. So I get that.

Uppercase-A Action is effectively a delegate. ExecutionContext seems to be like a very friendly and convenient thing that you just happen to have available to you in the base class library. What is its underlying data structure?

If you were to use something a little bit lower level than even the fact that you could just say ExecutionContext. ExecutionContext is basically just a dictionary of key-value pairs that is stored in thread local storage. It's...

a little bit more fancy than that, but that's really just what it is, and everything else is an optimization. Then it provides these APIs, like Capture, which just means grab the current one. Then what we'll see is we need to be able to actually use it. We captured the context that was present when we queued the work item.

Now we need to actually use that same context and restore it temporarily while we invoke this delegate. Now, it is possible that it's null, because it's possible to suppress execution context flow. That's not really relevant for our discussion, but if it is null, I'm just going to invoke the delegate and not worry about it.

Oh look, GitHub Copilot wrote it for me. Otherwise, I'm going to take this context and run the delegate that I'm passing in using that context. Now, I previously got all zeros.

Now when I run this, we'll see that again we get that behavior that we wanted, because now that ambient state is flowing from where the work item was queued to the invocation of that work item. Let's scroll down and spend a moment looking at line 36 a little more deeply, for those that may have seen that fly by, because you're casting that state to Action. Let's just make sure we understand what's happening there on that line. This is the line that GitHub Copilot wrote for me.

And if I was writing this on my own, that's what I would have written as well. It might be a little bit clearer for our purposes here if I write it a little bit less efficiently. And that is, if I run it instead like this, these are functionally the same thing. I'm just saying: invoke this delegate, which is just going to invoke this work item with this context set, basically, as current. Have it restored, and then it's going to undo it afterwards.

The difference between these lines is that ExecutionContext.Run actually takes a state argument, and then that state argument is passed into that ContextCallback delegate. So that delegate is just an Action<object>, basically, just with a different name. So you can pass state into it. In fact, I should be able to browse to the definition here.

And if I look at ContextCallback, you can see it's just a delegate that takes a state object. This was introduced before Action and Action<object> were added.

So it's a dedicated delegate type. If we were doing it again today, this type wouldn't exist. It would just be Action<object>. Right.

And then just go back to program.cs. Sometimes, for those who may not be familiar, when you see something like state show up, you see context, which you might think, oh, that's a variable. It is in this case.

But then state, it's a named parameter. Exactly. This is a lambda. It can be easy for a 200-level person to kind of go, state? Well, what state?

Where'd that get declared? Yeah. So I can expand that a little bit. This is just the argument to this function. And so basically this state, this object here, is then being passed to ExecutionContext.Run, which will invoke this delegate, passing that object in as the state.

And the reason I said this is for efficiency is because this version has what's called a closure. It needs to reference this work item that's defined out here. So there are actually multiple objects being allocated here, to be able to capture that work item into some object and create a delegate that's been passed in. And here I can avoid that.

In fact, I can see that it's being avoided and that there's no closure by using the static keyword in C#. If I were to do anything in this delegate that tried to use state out here, like if I were to try and do this, the compiler is going to warn me or give me an error and say, you can't do that. You're capturing state.

And you told me via this static keyword to not let that happen. So I'm not letting that happen. Yeah.
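Pulling the whole flow together, here is a sketch of the ExecutionContext-flowing version of MyThreadPool just described. Names follow the transcript; the static lambda plus the state argument avoid the closure allocation discussed above.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

var myValue = new AsyncLocal<int>();
var results = new int[3];
var done = new CountdownEvent(3);

for (int i = 0; i < 3; i++)
{
    myValue.Value = i;
    MyThreadPool.QueueUserWorkItem(() =>
    {
        int value = myValue.Value; // restored from the captured context
        results[value] = value;
        done.Signal();
    });
}

done.Wait();
Console.WriteLine(string.Join(",", results)); // 0,1,2

static class MyThreadPool
{
    // Store the ExecutionContext captured at queue time alongside the action.
    private static readonly BlockingCollection<(Action Work, ExecutionContext? Context)> s_workItems = new();

    public static void QueueUserWorkItem(Action workItem) =>
        s_workItems.Add((workItem, ExecutionContext.Capture()));

    static MyThreadPool()
    {
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            new Thread(() =>
            {
                while (true)
                {
                    (Action work, ExecutionContext? context) = s_workItems.Take();
                    if (context is null)
                    {
                        work(); // flow was suppressed; just invoke
                    }
                    else
                    {
                        // Restore the captured ambient state around the callback.
                        // 'static' forbids capturing; '!' asserts state is non-null.
                        ExecutionContext.Run(context, static state => ((Action)state!)(), work);
                    }
                }
            })
            { IsBackground = true }.Start();
        }
    }
}
```

Without the Capture/Run pair, every work item here would read 0 from myValue, exactly as happened on screen.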

So Ctrl-Z us back to glory just a moment ago. And I also want to call out, there are two fun things going on here. One from C# 8, which is the, they call it the dammit operator. That null-forgiving on state. Can you talk about that just very briefly? And then of course you've got object with a question mark, because it's nullable here.

Yeah, so the nullable reference type support in C# is quite nice. It's not perfect. There are some APIs where you just kind of can't fully express from an API definition perspective what you want to express.

And in this case, what I really want to be able to say is: I want to be able to pass in something here that is null, or something that's not null, and I would like that to then impact whether this is nullable or non-nullable. If I pass in something here that's non-null, I'd like it to be this.

If I pass in something here that is null, I'd like it to be that, because this question mark means: can this be null or not? You can do exactly that with generics, but this API was introduced before there were generics. There's only one type that this can possibly be, and it is a nullable object: it has to be able to work with nullable or non-nullable things that are passed in here. And as a result, the only thing it can be is the thing that can possibly be null, because something that is non-null or maybe null can both be treated as maybe null. In my case, I know it's non-null, because I'm only in this code if the work item is non-null. And therefore, I suppress that warning that you would otherwise give me by trying to use this thing that might be null. I know it's fine.

I know better. I know better than the compiler. One of the few times we know better than the compiler.

Hence the dammit operator, actually the null suppression operator. Well, null forgiving, I think is what we call it. Yeah, null forgiving.
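A tiny sketch of that null-forgiving pattern in isolation (names hypothetical): ExecutionContext.Run types its state parameter as a nullable object, but we know the Action we passed in is non-null, so the ! forgives the warning on the cast.

```csharp
using System;
using System.Threading;

bool ran = false;
Action workItem = () => ran = true; // this lambda may capture locals

ExecutionContext? context = ExecutionContext.Capture();
if (context is null)
{
    workItem(); // flow suppressed; invoke directly
}
else
{
    // state is object?; the compiler can't know we passed a non-null Action,
    // so the null-forgiving '!' suppresses the nullable warning on the cast.
    // The 'static' modifier guarantees this lambda allocates no closure.
    ExecutionContext.Run(context, static state => ((Action)state!)(), workItem);
}

Console.WriteLine(ran); // True
```

Run executes the callback synchronously, so by the last line the work item has already run.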

That's, yeah, exactly. All right. So this is starting to take shape here. Yeah. So we've got our thread pool, but QueueUserWorkItem is pretty low level.

We queue work, we can fork, but we're not really joining with it. Because of that, I've got this Console.ReadLine here to prevent my program from exiting. I'd really like to be able to both queue the work and then have some object that represents that work, that I can later say: wait for this thing, join with it. For that in .NET, we have a class called Task. I'm going to implement my own task, and we're going to implement, again, a very simple version of Task that can then layer on top of my thread pool.

So there are a few things that we would want to do with this task. Task is just, at its core, it's just a data structure that sits in memory that you can do a few operations on. One of the things that you can do is check whether it's completed.

So I'm going to have a little bool IsCompleted property here. And I'm just going to kind of scaffold this out and we'll fill it in in a moment. You also need to be able to walk up to that task and say, well, you know, I can check whether you're completed, but I want to be able to mark you as completed.

Basically say that you're done. For that, I'm going to add two methods. I'm going to add a SetResult, SetCompleted, whatever you want to call it.

Also, it might be representing an operation that has failed, so I want to have SetException. You saw again GitHub Copilot there automatically completing the line for me, which is quite nice. Now, in the real .NET, these are separated out onto a separate type called TaskCompletionSource. That's not a functional thing; that's purely so that I can give you a task and not be worried that you're going to complete it out from under me.

So I'm kind of reserving the capability to mark this task as having been completed. For our purposes, I'm just putting it right on to task. And then I also want to be able to wait for one of these things. So we were just talking about being able to join with it. So I want to be able to say, you know, wait for this task to complete.

Or if I don't want to synchronously block, maybe I want a callback. Maybe I want a notification that the task is completed. I want to be able to walk up to it at any point, whether it's completed or not, and give it a delegate that it will invoke when it completes. And for that, we're going to write a method called ContinueWith. Again, it will just take an action, which it will call when the task completes.

This is the surface area of our task we're going to implement. And yeah, this is why I just love this. One of the best and most fun parts, and the hardest parts, of computer science, of course, is naming stuff. And I'm sure you've probably been in meetings with partners and friends saying, let's get a thesaurus and find the word.

And when you find the word, it's got the right mouth feel. You're like, okay, that's what it is. So like a task is an action, but it has other actions.

So a task can have actions like has a, is a, you know, all of those kinds of things. But when you started writing task, I'm thinking to myself, well, gosh, an action is kind of a task, but no tasks, they have more things. They have more context. They need to be a little different. It's not only sort of, you know, representing some operation, but also then interacting with that operation in some way.

Yeah. Um, so we can start filling this in, and again, it kind of writes itself. So, IsCompleted.

Alright, well, I need to track whether my task has completed or not, so I need this completed field. And we see that I can set an exception, so I probably need to be able to store an exception on here. Again, we can see the question mark, because I may not have an exception.

I may have an exception. So this is nullable. We can see here you can walk up to this task at any time and give it an action that it's going to invoke when it completes. I need to be able to store that somewhere. So we're going to have an Action continuation.

And then, as we just saw with the thread pool, not only do I want to store that action, but I also want to be able to take that execution context that was sort of floating out there and capture it, and restore it when I invoke this thing. So I'm also going to have an ExecutionContext. And now we can start filling in our methods. So let's do IsCompleted first. This is the easiest one.

I'm going to say get, and then I want to return completed. I do need to do a little bit of synchronization here, because this task object sort of needs to be implicitly thread safe: something over here is going to be completing it, something over there is going to be joining with it. The real Task in .NET has a whole lot of code to try and make this synchronization as cheap as possible, with lock-free operations and whatnot. I'm going to do a really simple thing that I don't recommend anyone else do in the general case, and I'm just going to say lock(this) and just protect all of my operations.

Everybody get in line behind this guy. Yeah, exactly. But with that, this method has been implemented.

And again, it's what's kind of neat is you kind of look at your state. And I love the syntax highlighting in VS because it kind of shows me what I've used and what I still haven't used yet. The things that are grayed out are the things that I haven't kind of used yet.

And in that way, it's kind of guiding what I do. Both SetResult and SetException actually need to do the exact same thing, so I'm just going to implement them in terms of a single helper that optionally takes an exception.

Then I can go up here and make this just call Complete with null. Again, GitHub Copilot knows what I want to do and writes it for me. Now I just need to implement this Complete method. The operation has completed.

This task is being used to represent that operation. So the code needs to come along and mark the task as having been completed. So again, big honkin' lock.

And it doesn't make sense to complete one of these twice. So we'll just say, if it's already completed, throw an exception, stop messing up my code. And then we can now proceed to actually implement this. So I need to mark it as completed.

That's pretty obvious. And I need to store the exception that I was given. And now I'm almost done.

But we can again see, if we look at our state, right, I've set IsCompleted, I've set exception, but we said that this continuation was meant to be invoked when the operation completed. So now I can say, if continuation is not null, and again, it tried to write it for me, which is pretty cool, then I want to queue a work item that invokes the continuation. Now, this isn't 100% correct. And it's a good reminder that while something like GitHub Copilot can help you write most of the code, you still want to check to make sure that it wrote what you wanted it to. This is functionally correct, except it's missing using this context. So I'm just going to go up here and again do exactly what we saw before, which is: if context is null, then just invoke the continuation.

Otherwise do that whole ExecutionContext dance. It feels like there should be a way to say all of that in one line, though. Yeah.

Maybe we should add an overload of Run that, you know, does the right thing. Yeah. Yeah.

And so Complete is now complete. So we've implemented IsCompleted. We've implemented SetResult. We've implemented SetException.
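For readers following along, the state and completion logic being described might look roughly like this. This is a simplified sketch of the on-screen MyTask, not the real Task internals; all member names (_completed, _continuation, and so on) are assumptions for the sketch:

```csharp
using System;
using System.Threading;

// Simplified sketch -- not the real System.Threading.Tasks.Task.
public class MyTask
{
    private bool _completed;
    private Exception? _exception;
    private Action? _continuation;       // single continuation, for simplicity
    private ExecutionContext? _context;  // captured when a continuation is registered

    public bool IsCompleted
    {
        // lock (this) is acceptable here only because no outside code
        // will ever take a lock on these instances.
        get { lock (this) { return _completed; } }
    }

    public void SetResult() => Complete(null);
    public void SetException(Exception exception) => Complete(exception);

    private void Complete(Exception? exception)
    {
        lock (this)
        {
            if (_completed)
                throw new InvalidOperationException("Already completed");

            _completed = true;
            _exception = exception;

            if (_continuation is not null)
            {
                // Queue the stored continuation, flowing the captured context.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    if (_context is null)
                    {
                        _continuation();
                    }
                    else
                    {
                        ExecutionContext.Run(_context, state => ((Action)state!)(), _continuation);
                    }
                });
            }
        }
    }
}
```

Note that _continuation is only assigned by the ContinueWith method, which the conversation gets to next.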

Let's do ContinueWith next. This one was also pretty simple. So we'll just lock around ourselves.

And now we can say: if we're already completed, well, we can just queue the work item that the user asked us to invoke. We don't have to do anything special. I'll just do this and this.

Otherwise, we're just going to store that for later. And then we also need to capture the context for later use. And that's ContinueWith. So now we've hooked up this delegate, and all we're doing is saying: if the task is already done, run this now by queuing it.

If it's not done, store it such that when it is completed, this code over here can then launch it. In this, I don't know if naive is the word, in this simplified implementation, how bad is it? I just want to make sure: how bad is it that you're locking on this?

Is that a reasonable thing to do because we are creating a low-level component? As a general rule, application developers should not be locking on this, because they don't know who else is locking on the thing. But is it less of a sin that you're doing it?

So there's two aspects to your question here. One is using locks in general, and the other is locking on this. I'm speaking specifically about lock this, which I was taught and ingrained to never do. Don't do that.

Right. Yeah. So the concern is, if this was actually Task, you would definitely not want to do this.

And the reason you don't want to do it is that the lock you're taking is basically an implementation detail. It's private state.

And yet the reference to the object is public. So it would be exactly the same as having, you know, a myLockObject field, but then choosing to make it public. Right. Because anyone else could lock on it. And now you're having this weird interaction with code that you didn't expect to be touching your private state.

And so that's really what it's about. If you know that no one else, that no one except you, will have a reference to your object, you could lock on this. I get it. So MyTask is like a public class, but, pardon me, no one will ever have a handle to us.

So there's no way for anyone to ever lock on us, is what you're saying. Ah, that's good. That's good information. Yep. So I didn't leave any of those lying around today.
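With the lock-on-this question settled, the ContinueWith just described can be sketched like this. Again this is an illustration of the simplified MyTask, with assumed member names, paired with a minimal SetResult so the block stands on its own:

```csharp
using System;
using System.Threading;

// Sketch of ContinueWith as described: if the task is already done, queue the
// callback immediately; otherwise store it (plus the current ExecutionContext)
// for the completion path to run later. Names are illustrative.
public class MyTask
{
    private bool _completed;
    private Action? _continuation;
    private ExecutionContext? _context;

    public void SetResult()
    {
        Action? toRun;
        lock (this)
        {
            _completed = true;
            toRun = _continuation;
        }
        // If someone registered a continuation before we completed, run it now.
        if (toRun is not null) ThreadPool.QueueUserWorkItem(_ => toRun());
    }

    public void ContinueWith(Action action)
    {
        lock (this)
        {
            if (_completed)
            {
                // Already done: just queue the callback.
                ThreadPool.QueueUserWorkItem(_ => action());
            }
            else
            {
                // Not done yet: remember the callback and the current context.
                _continuation = action;
                _context = ExecutionContext.Capture();
            }
        }
    }
}
```

The captured _context would be flowed via ExecutionContext.Run in a fuller version, as discussed earlier; it is stored but left unused here to keep the sketch small.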

I don't think so. Okay. So our last method, the only thing we have left to implement, is this Wait, because I want to be able to, at least especially for demo purposes, walk up to the task and... let's make the fonts just a smidge bigger.

Sorry. I know that you're good. I want to point out how talented you are in your zooming. Your control scrolling is very good.

But I like watching your brain as you're like, I'm getting increased scope, but I'm reducing scope. You're using the zoom yourself, not just as a presenter, but also as a way of scoping the space that you know that this work will take up. I want to be thoughtful for our friends on their phones and on their iPads, too.

Apologies to them. Yeah, sorry. So now we want to be able to wait for this.

We just walk up to the task and synchronously block. And what's fun about this one is we can actually implement this in terms of ContinueWith. And this isn't just some novelty that I'm going to do here.

This is actually how Task.Wait is implemented. It's also implemented in terms of continuations. So I need to be able to block.

And anytime you want to kind of synchronously block waiting for something, you need some sort of synchronization primitive. In this case, I'm going to use a manual reset event. And I'm going to again, lock. And I'm going to say if we're not completed, I need to do something.

And I'm purposefully ignoring what GitHub Copilot is telling me to do here because it's right, but it's also making it hard for me to be sort of pedagogical and teach because it's jumping way ahead. So I'm going to wait for this manual reset event, but only if I create one. And so I'm only going to create one if this task hasn't yet completed. If it's already completed, there's nothing for me to wait for.

So if it hasn't completed, then I actually instantiate this. And now I need this manual reset event to get into a signaled state, so that anyone waiting on it will wake up when this task completes. How do I do that? I can say ContinueWith(manualResetEvent.Set). So now I'm implementing Wait in terms of ContinueWith, by saying: when this task completes, hook up a delegate that will invoke ManualResetEventSlim.Set, which will then cause this to wake up.

And ManualResetEventSlim is literally the slim, lighter-weight version of ManualResetEvent. And because you're not going to be waiting long, it would be appropriate to use the Diet Coke version of ManualResetEvent. It's actually appropriate to use the Diet Coke version in 99% of situations.

And better to use the Diet Coke version, even though I know my wife tells me that I shouldn't be drinking Diet Coke. I know, my wife says the same thing.

But in this case, the ManualResetEvent is just a very thin wrapper around the OS's, the kernel's, equivalent primitive. And that means that every time I do any operation on it, I'm paying a fair amount of overhead to dive down into the kernel. ManualResetEventSlim is a much lighter-weight version of it.

That's all implemented up in user code in the .NET world, basically just in terms of monitors, which is what lock is also built on top of. The only time it's less appropriate to use the slim version is if you actually need one of those kernel-level things, which you typically only need if you're doing something more esoteric with wait handles. Yeah, so anyhow, totally good here. The last thing I need to do, though, is, we can see, I mentioned using the grayness of my fields to know whether I was done or not. Obviously, this one is still grayed out. I'm missing something.

This grayness is saying that I set it, but I've never actually read it. And that's because when I wait on this thing, I actually want whatever exception was there to propagate out, and I haven't read it yet. So now that I know this is done, I'll just say: if exception is not null. And again, I'm going to ignore GitHub Copilot, even though it's right.

I basically want to throw this exception so it propagates out. Now, this isn't ideal either. If you have an existing exception object that has previously been thrown, that exception contains a stack trace.

It contains what's referred to as the Watson bucket, which contains sort of aggregatable information about where that exception came from, for use in post-mortem debugging and diagnostics. When I throw the exception like I'm doing right there, that's going to overwrite all of that information. I don't want to do that.

One common way around that, and it was the only way around it when Task initially hit the scene in .NET Framework 4.0, was to wrap it in another exception. You might wrap this in... Have like an inner exception.

Exactly. You can see Exception has an inner exception, and now throwing this will populate this new exception's stack trace. The original exception will still be available as the inner exception, and it won't be touched. So all of the stack trace will stay in place.

And then we're not doing just a throw, because we're not in the middle of an actual active exception that we have previously caught and are rethrowing. Exactly. Yeah.

Now, so Task basically had to do this. While it was doing that, it also factored in the fact that, well, a task could represent multiple operations that were sort of all part of the same overall operation. Like if you have Task.WhenAll, you can wait for multiple tasks, and that produces a single task result which needs to be able to contain multiple exceptions. So Task, instead of throwing a regular exception, throws an AggregateException. And you can see from the constructors that are available that you can give this any number of exceptions, and it can wrap any number of inner exceptions.

That's what the params there means. But here I'm only wrapping one. Now, since Task was introduced, and something that was very useful for await, there's another pretty low-level type called ExceptionDispatchInfo. The name doesn't really matter. But what this does is it takes that exception, and it throws it.

But rather than overwriting the current stack trace, it appends the current stack trace. And so for anyone who's looked at an exception that's propagated through multiple awaits, you might be used to seeing a bit of a stack trace, and then a little dotted line that says, you know, end of stack trace from previous location, and then more stack trace. Every time this exception is getting rethrown up the call stack, up the asynchronous call stack, more state is being appended to that stack trace.
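Putting the Wait discussion together, a sketch might look like the following. This is an approximation of the on-screen code with assumed names; the simplified single-continuation design from earlier is kept, and ExceptionDispatchInfo.Throw is used so the original stack trace is preserved:

```csharp
using System;
using System.Runtime.ExceptionServices;
using System.Threading;

// Sketch of Wait built on ContinueWith, as described: block on a
// ManualResetEventSlim that a continuation sets, then rethrow any stored
// exception via ExceptionDispatchInfo to keep the original stack trace.
public class MyTask
{
    private bool _completed;
    private Exception? _exception;
    private Action? _continuation;

    public void SetResult() => Complete(null);
    public void SetException(Exception e) => Complete(e);

    private void Complete(Exception? e)
    {
        Action? c;
        lock (this) { _completed = true; _exception = e; c = _continuation; }
        if (c is not null) ThreadPool.QueueUserWorkItem(_ => c());
    }

    public void ContinueWith(Action action)
    {
        bool done;
        lock (this)
        {
            done = _completed;
            if (!done) _continuation = action;
        }
        if (done) ThreadPool.QueueUserWorkItem(_ => action());
    }

    public void Wait()
    {
        ManualResetEventSlim? mres = null;
        lock (this)
        {
            if (!_completed)
            {
                // Only create the event if there's actually something to wait for.
                mres = new ManualResetEventSlim();
                ContinueWith(mres.Set); // wake the waiter when we complete
            }
        }
        mres?.Wait();

        if (_exception is not null)
        {
            // Rethrow, appending (not overwriting) the original stack trace.
            ExceptionDispatchInfo.Throw(_exception);
        }
    }
}
```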

And that's all handled via this. So we've now implemented Task, and that's basically what it is. So I can go up here and I'll just say I have a list of MyTask. And then actually, one more thing I want to do first. What I was going to say was, I was going to have tasks.

And then here I was going to say add MyTask.Run. And then I realized we haven't actually implemented MyTask.Run yet. So let's do that.

So is this an opportunity for you to go over that .Run and hit, you know, Ctrl+dot, and see if Visual Studio will generate that Run for you? I could try. What do you want me to do?

Is it like a quick action? If you hit the little generate-method Run, will it do the right thing? So it generated the method, but without an implementation.

Now, Copilot can actually start filling this in for me. But again, I kind of want the fun of doing it. Oh, yeah, I agree.

I'll let it do the little things. And it made assumptions as well, of course. In that case there, Visual Studio made some assumptions about scoping and things like that.

So public static MyTask Run(Action action). Yeah, which looks a lot like Task.Run. Now, in all of these little helpers we'll see implemented, they all have a similar form.

We're going to create a task, and we're going to return it. Then in the middle here, we're going to do something that does the operation and completes that task. Now, in the case of Run, all that's doing is saying MyThreadPool.QueueUserWorkItem, and we're going to have a try/catch block that invokes this action.

When it has successfully completed, we'll say t.SetResult. And if it failed with an exception, then we'll say t.SetException, and we'll bail. And now we've fully implemented MyTask.Run.

And again, other than some minor perf differences, this is exactly what Task.Run actually does: queues a work item that completes the task when the delegate has been invoked. I think that chunk right there, that's where it really crystallized for me, from 105 to 118 right there. You've abstracted away that previous use of QueueUserWorkItem and added a lot of value around the things you might want to do with the task: check on its completion and things like that. Set continuations. A huge amount of value in a small amount of code.
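That Run helper can be sketched as follows. The transcript's MyThreadPool is approximated here with the real ThreadPool, and Wait is simplified to an event rather than the continuation-based Wait discussed earlier, so the block stands alone; the names are illustrative:

```csharp
using System;
using System.Threading;

// Sketch of MyTask.Run as described: queue a work item that invokes the
// delegate and then completes the task, success or failure.
public class MyTask
{
    private readonly ManualResetEventSlim _mres = new();
    private Exception? _exception;

    public void SetResult() => _mres.Set();
    public void SetException(Exception e) { _exception = e; _mres.Set(); }

    public void Wait()
    {
        _mres.Wait();
        if (_exception is not null) throw _exception; // simplified rethrow
    }

    public static MyTask Run(Action action)
    {
        var t = new MyTask();
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                action();      // run the user's work
                t.SetResult(); // success: complete the task
            }
            catch (Exception e)
            {
                t.SetException(e); // failure: fault the task
            }
        });
        return t;
    }
}
```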

Absolutely. And this also speaks to the kind of the ubiquitous nature of task. One of the most important things that task does isn't even the operations on it.

It's conceptually the fact that it unifies, into a single type, the ability to join with any arbitrary asynchronous operation in .NET. And that was a critical step for async and await, because you want to be able to use await with any asynchronous operation. And by having a single type that can represent any of them, it makes that a whole lot easier.

And more convenient. So this is the building block. This is the beginning of it. We've got about 20 minutes to bring it home, then.

So let's understand how task then becomes such a powerful pattern. So that's great. So there are two aspects of that.

First, we can see now that my squigglies are gone. And here I could just say foreach (var t in tasks) t.Wait(). And I'm going to lower this number, because I don't want to wait for thousands of these to complete.

Now when I run this, where's my, oh it's still building. Build. Let's try to zoom in on both our terminal and our code. As soon as it finishes here. Oh, is it thinking?

There you go. You're over here. So when this gets to 100, we can see it hasn't exited. But the moment it gets to 100, then my application exits because it was waiting for all of those tasks to complete.

Now, you mentioned, Scott, that we start to see this being a building block, and we can build other things on top of it. It's kind of unfortunate that I'm having to wait for each of these tasks individually. You want to wait for all of them. Wouldn't it be nice if I could say MyTask.WhenAll and just pass in all of these? And then, for the purposes of my demo, I'm just going to block waiting for that thing.

I'll use your little trick here. You can hit Ctrl+dot, I think, as well.

I was using the wrong one, Alt. Ctrl+dot, generate method WhenAll. We're going to have this return a MyTask. I also want this to be public. This is taking a list of tasks.

So we're going to do the exact same thing we saw before. I'll say new MyTask, and return t. And now we again just need to fill in this intermediate part.

I'm going to handle one base case, which is if the number of tasks is zero, then I'm just going to say, all right, I'm done. Nothing else for me to do, right? Otherwise, I need to loop through all of these tasks and hook up a continuation to each of these.

that will basically count down how many are left, and when all of them have completed, it will set that task. So out here, I'm just going to create a little continuation that I'm going to reuse for all of these tasks. I'm going to have a little counter of how many are left, tasks.Count, and here I'll say: if, after decrementing remaining, I end up with zero, then I'm going to complete the task. Now, I should also be doing some stuff with exceptions here. I'm not going to bother with that right now.

It's kind of not the point. But now I can take this continuation. I can put it here.

Oh, I have something else named t. Sorry. Task in tasks. And now I've implemented WhenAll.

So if I go up here, I've got my little, my squiggly is gone. I can run this again, bring this window over. And again, now when I get to 100, we should see this.

All right. And then jump back to the implementation of that very briefly for me, sir. I want to call out: you're using Interlocked.Decrement instead of just saying remaining minus minus, because? Because I have no idea what these tasks are doing. They might all be completing at the same time, or not.

And if they were to both complete at approximately the same time, this continuation, two different threads might be trying to decrement this value. And if they each tried to do it without any synchronization, their operations might sort of stomp on each other. And we might lose some of the decrements, which would be a big problem because we wouldn't know when we actually hit zero.

So I am using this lightweight synchronization mechanism to ensure that all of the decrements are tracked, and that only the one that is actually the last one to complete performs this work. Because as we saw, if I dive into this, if multiple of them think that they're the last one and they both try to complete it, it's going to fail. Right. And you said lightweight synchronization mechanism as opposed to trying to do some locking around that, which I suppose you could have done.

Totally could have. I could have taken a lock here. But this is one place where it's really simple and straightforward to use basically the lowest-level synchronization primitive that I have available to me, which is a lock-free interlocked operation.
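The WhenAll countdown being discussed can be sketched like this. For brevity this uses the real Task and TaskCompletionSource rather than MyTask, and, as in the demo, exception handling is skipped; the shape of the countdown is the same:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class MyCombinators
{
    // Sketch of a WhenAll built on a shared continuation that counts down
    // how many tasks are left, completing the result when the count hits zero.
    public static Task WhenAllSketch(List<Task> tasks)
    {
        var tcs = new TaskCompletionSource();

        if (tasks.Count == 0)
        {
            tcs.SetResult(); // base case: nothing to wait for
            return tcs.Task;
        }

        int remaining = tasks.Count;
        foreach (Task task in tasks)
        {
            task.ContinueWith(_ =>
            {
                // Interlocked.Decrement so concurrent completions can't
                // stomp on each other's decrements.
                if (Interlocked.Decrement(ref remaining) == 0)
                {
                    tcs.SetResult(); // the last one to finish completes the result
                }
            });
        }

        return tcs.Task;
    }
}
```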

Very cool. As long as I'm implementing other helpers, I can implement some more and we'll see they all follow the same pattern. One of the most useful that people find with tasks is delay. So let's also implement that.

So I can say Delay, and we'll have some timeout here. This is again going to follow the exact same pattern we've seen before. So we'll say new task, and we'll return that task. Then here, I just need to do something that, after this timeout has elapsed, will complete the task. I can use a timer for that.

So I'll say new Timer; when this timer fires, it's just going to call SetResult. And then I'm going to schedule the timer to fire in this number of milliseconds. Why is that more appropriate than what someone who may be trying to do this exercise themselves might naively say: oh, Thread.Sleep?

That's a great question. Thread.Sleep takes the thread and puts it to sleep for the specified amount of time. So if I had 12 threads in my thread pool, and someone wrote Thread.Sleep(1000) as part of their work item, now all of the threads in my thread pool are unable to do anything else. And that means if someone comes in and queues something that's actually important, they're going to have to wait for all those threads to become available.

Wouldn't it be nice if we could instead still have my logical flow of control pause for this period of time, but allow that thread to do something else while that's happening? And that's the beauty of await Task.Delay. So we can see that sort of in practice, now that we have our Delay. I'm just going to go and delete.

I'll comment out all this up here. And I'll just do something simple like Console.Write hello. And then I'll say MyTask.Delay, let's say 2000. And then after that delay, I'm going to use our new ContinueWith method to print out world with Console.Write. And again, I'll have our Console.ReadLine here to make sure our program doesn't exit, because we're spawning this asynchronous work but we're not currently joining with it. And so now when I run this, my window pops up, we get hello, and then two seconds later, we get world.
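The timer-based Delay described above might look like this. Again the real Task and TaskCompletionSource stand in for MyTask so the sketch is self-contained; the key point is that no thread sits blocked while the delay elapses:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class MyDelays
{
    // Sketch of Delay: a timer callback completes the task when it fires,
    // so no pool thread is tied up during the wait (unlike Thread.Sleep).
    public static Task DelaySketch(int millisecondsDelay)
    {
        var tcs = new TaskCompletionSource();

        // Create the timer unstarted, then schedule it to fire exactly once.
        var timer = new Timer(_ => tcs.SetResult());
        timer.Change(millisecondsDelay, Timeout.Infinite);

        // Dispose the timer once it has fired (and keep it rooted until then).
        tcs.Task.ContinueWith(_ => timer.Dispose());

        return tcs.Task;
    }
}
```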

But I would kind of like to be able to just say wait here, rather than having that Console.ReadLine. But we can see we're getting a squiggle here saying ContinueWith returns void. We can fix that exactly the same way that we've seen in our other methods.

I'm just going to say new MyTask, and down here I'll return it. And I just need to do something slightly different than queuing this action. Rather than queuing that action, I want to have a different delegate; let's just call this callback.

And what this callback is going to do is invoke action and then call t.SetResult. Now we're going to do the same dance here, just to be good citizens, that we saw before.

We'll catch the exception. We'll set the exception onto this task. And so if this action were to fail, we will still end up completing this task. And now I can just take this callback and use it instead of the original action that was passed in. And now when I go up here, we'll see that I no longer have a squiggly. And I'm going to make this delay a little bit longer, just because I keep having to move my window; I can't figure out how to get my window to start over here.

So we get our hello, and then once the world appears, that's when the program exits. Nice. Hello, pause for effect, world. Exactly. Kind of like an LLM, right? You're spitting out these little tokens as you're waiting. Yeah. Now, it would be nice if I could not just do one thing after the delay; it'd be kind of cool if I could just take this and say, after another two seconds, I want to print out more.

Like in like the LLMs, chain them and just have them go, hello, hello, hello. Exactly. Chain these things together.

And then maybe I want to do that again in here. Say, how are you? Right. But we can see I'm getting a squiggly because.

I'm trying to return a task out of something that was just accepting an action. We were talking about the delegate earlier, that the action delegate is just void returning. Moreover, even if that worked, I want this wait to not only wait for this work that has completed, but also for any task that it's sort of returning out of its body.

So I need a slightly different version of ContinueWith that's able to sort of unwrap that inner task. ContinueWith here, I'm going to just copy and paste this whole thing and create a slightly different version of it. We already talked about Action.

I'm just going to make another version of it that not only invokes this delegate, but where this delegate is then going to return another task. And we don't want the task we return from here to complete until that inner task has completed. So I'm just going to store that.

Next, take this SetResult out of here, because we don't want to complete when the outer one has completed, only when the inner one has. And then I'm just going to hook up a continuation to this. So I'll say: when this task completes...

Kind of a linked list of actions here. Sorry, say that again? Just kind of a linked list of actions, chained one to the next. Exactly.

So here I'll just say SetException with that exception; otherwise we'll say SetResult. And I don't need to change anything else. Now my squiggles have gone away.

And with any luck, when I run this, we'll see hello world and Scott, how are you? And we don't exit until that whole chain has completed. Nice.
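The task-returning ContinueWith overload just described can be sketched as follows. Built on the real Task here for brevity (the on-screen version does the same with MyTask); the method name ContinueWithUnwrap is an assumption for the sketch:

```csharp
using System;
using System.Threading.Tasks;

public static class MyTaskExtensions
{
    // Sketch of the overload that unwraps a task-returning continuation: the
    // task we hand back completes only when the *inner* task from func does.
    public static Task ContinueWithUnwrap(this Task task, Func<Task> func)
    {
        var tcs = new TaskCompletionSource();

        task.ContinueWith(_ =>
        {
            try
            {
                Task inner = func(); // the continuation itself returns a task
                inner.ContinueWith(t =>
                {
                    if (t.Exception is not null)
                        tcs.SetException(t.Exception.InnerException!); // inner task faulted
                    else
                        tcs.SetResult(); // inner task finished: now we're done
                });
            }
            catch (Exception e)
            {
                tcs.SetException(e); // func itself threw synchronously
            }
        });

        return tcs.Task;
    }
}
```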

This is a pretty unfortunate way to have to write this. Yeah, zoom in a little bit there; it looks a little weird. I mean, it's kind of like, you know, what they called arrow code in the old days. Exactly. And we can fix that to some extent. We could go in here and delete this, and then here I could say ContinueWith, because I've already implemented that. Yeah, but that's aesthetic at this point. Right, so I could run this and it would do the right thing, but I have this very linear ContinueWith, ContinueWith, ContinueWith. If I wanted to instead do something like for i equals zero, i is less than, or forgive me, just this.

And I wanted to print out for the current i, but I still wanted to have that my task.delay in here. A nice delay. I don't, what do I do, right? This won't work because I'm not going to be waiting at all.

I don't want to use Thread.Sleep. This is where, if I had something called await, I'd want it to kick in here, but I don't. Interestingly, there is something that almost serves that exact purpose that we've had since C# 2.0, and that is iterators.

If I were to instead have code that did this: if I had IEnumerable<int>, call it Count, it's just got for i equals zero, i is less than count, i plus plus, and here I can yield return out of here, and somehow I'm able to magically come back in. So out here I would say foreach int i in Count, Count of ten, right. And that yield is just kind of like, hey, here's the next one; we're going to keep returning and keep returning. Yield is just a not-used-enough and not well-understood keyword. And one of the great ways to understand it: if I just debug into this and I start stepping through it, I call Count. I didn't actually step into Count yet, until I MoveNext.
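The iterator being stepped through on screen looks roughly like this. Each yield return suspends the method, and each MoveNext resumes it right where it left off, with its local state (i) restored:

```csharp
using System.Collections.Generic;

public static class IteratorDemo
{
    // The method body does not run when Count is called; the compiler turns
    // it into a state machine that advances one step per MoveNext.
    public static IEnumerable<int> Count(int count)
    {
        for (int i = 0; i < count; i++)
        {
            yield return i; // suspend here; resume on the next MoveNext
        }
    }
}
```

foreach drives this by calling MoveNext under the covers, which is exactly the hand-off that the upcoming Iterate helper automates.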

And then when I step, we can see I end up back in this method, and it's restoring the state each time I step. It's remembering that state from the previous operation. Wouldn't it be nice if I could do that exact same thing, except instead of yield returning out to a loop that's pulling on it?

Okay, so we are in this case rehydrating the state. In this case, the state is simply I. You're going to rehydrate the entire execution context. So what I'd kind of like to do is to have this code, but have this in a method.

Let's call it, and forget what this is actually returning for a second, let's just call it PrintAsync. And I want to sort of yield return this task out of this.

And rather than kind of manually pushing it forward, calling MoveNext on that IEnumerator, what I want to have happen is: when this task is yielded and this task completes, I want its completion to call MoveNext. I want it to sort of drive itself. So we can implement that.

And actually, we can implement it pretty easily. Let's go down to where I was writing all these helpers, and we're going to write one very last helper. I'm going to call this Iterate.

This is going to take an enumerable of tasks. We're going to do the exact same thing we saw before: create the task and return it out.

And if you're familiar with enumerators, the main thing on an enumerator that moves it forward is a MoveNext method. So we want a MoveNext method here. I also want to invoke it to sort of kick things off. I need to get the enumerator of my tasks out from here, so we'll say tasks.GetEnumerator().

And now we just need to implement this little bit of code that says: move the state machine forward with MoveNext, get the task that was returned, and when it completes, move it forward again. So I'll say if e.MoveNext(), if we were able to get another one, and we'll fill that in in just a moment. If I wasn't able to get another one, well, I'm done. There's nothing more for me to do. And again, for good measure, we can wrap this with a catch block that will set the exception.

And now all I have to do is this little bit of code here. What is this going to do? Well, we're going to say: what is the next task?

It's whatever was yielded. And we're going to take that and say ContinueWith(MoveNext). And now, when I call Iterate with this lazily produced iterator of tasks, we're going to start it off. We're going to enter the method, calling MoveNext, which will push the iterator forward, which will start running the code in my iterator.

Eventually, we'll yield return a task. We'll get that out, we'll hook up a continuation, and we'll exit. When that continuation runs, it will call MoveNext, it'll push it forward, and so on.

Eventually, there won't be anything else to yield. My iterator will have reached its end, and we'll call SetResult. If I go up here, now I can just say MyTask.Iterate(PrintAsync()), and if I run this, we'll see.

We're getting that delay, and we've been able to do it with just this little helper. Believe it or not, that little helper is basically what the compiler generates for async/await. We've effectively implemented async/await here. In fact, in the C# compiler, the logic to support implementing iterators and the logic to support implementing async methods is like 90 percent the same. There are a few differences here and there, but for the most part, it's implementing a state machine that allows a method to be exited and re-entered and rehydrated and come back to where it was.
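The Iterate helper just walked through can be sketched like this, again on the real Task for brevity (the on-screen version uses MyTask); the helper and class names are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class MyIterators
{
    // Sketch of Iterate: drive an IEnumerator<Task> forward, and each time a
    // task is yielded, hook up a continuation that calls MoveNext again.
    public static Task Iterate(IEnumerable<Task> tasks)
    {
        var tcs = new TaskCompletionSource();
        IEnumerator<Task> e = tasks.GetEnumerator();

        void MoveNext()
        {
            try
            {
                if (e.MoveNext())
                {
                    // A task was yielded: when it completes, advance again.
                    e.Current.ContinueWith(_ => MoveNext());
                    return;
                }
            }
            catch (Exception ex)
            {
                tcs.SetException(ex); // the iterator body threw
                return;
            }

            tcs.SetResult(); // the iterator ran to completion
        }

        MoveNext(); // kick things off
        return tcs.Task;
    }
}
```

A PrintAsync-style iterator that yield returns delays can then be handed straight to Iterate, and the completions drive the state machine forward, exactly the hand-off the compiler generates for an async method.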

And the real thing that differs is who is calling MoveNext. Is it the developer's code, with foreach calling enumerator.MoveNext? Or is it the completion of the awaited task, or in this case the yield-returned task, calling ContinueWith with MoveNext, which feeds back into it? We could take this further: I could show, full circle, how we can actually replace this with await by implementing a little awaiter on this, and how we can replace this with async MyTask by implementing an async method builder.

But at the end of the day, it's just some syntactic sugar that's allowing the C# compiler to use our custom task. We really have implemented async/await from scratch. I love that term syntactic sugar, because I think people don't realize that each little additional layer of abstraction is indistinguishable from magic.

And if we accept those little abstractions as being black boxes, then we are going to struggle. But if you realize that like when you went and made that iterate function, go back down there, you just buried it, you hid it, but it's so clean and small. and now you have a nice helper function. But you can go and look at that.

You can go and see that. You can see what the compiler generates. You're not helpless. Exactly.

And I know you have to run, but just to exemplify that: for now we're going to pretend that Task exists again, and I'm going to make this await. Await is just the compiler saying, hey, I want to hook up a continuation. Tell me how to hook up a continuation to your thing.

It knows how to do it for Task. It doesn't know how to do it for MyTask. But in just a few lines of code, we can make that keyword work for our task. So I can just write a little struct here called Awaiter that's going to accept a task. Here I'm using primary constructors.

I just have to implement a little bit of code. I say INotifyCompletion and implement that interface. I'll just let GitHub Copilot write it all, since I know you're stressed for time. So we're just going to fix up a few things here, and we can now, with one more line, get an awaiter, do this, and, oh, this needs to be public.

You notice this squiggly has gone away; we're now able to await our custom task as part of this loop. Again, we see the exact same thing, but using the actual async/await. Yeah. Just like you swapped from Task to MyTask, then you can swap from Awaiter to await, and you're really showcasing that the core functionality is the same.
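The awaiter pattern just demonstrated can be sketched as follows. The backing state here is a TaskCompletionSource for brevity rather than the on-screen MyTask, and an explicit constructor is used instead of the primary constructor mentioned; the shape the compiler needs is the same:

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

// Sketch of what makes a type awaitable: the compiler only needs a
// GetAwaiter() returning something with IsCompleted, OnCompleted (via
// INotifyCompletion), and GetResult.
public class MySimpleTask
{
    private readonly TaskCompletionSource _tcs = new();

    public void SetResult() => _tcs.SetResult();

    public Awaiter GetAwaiter() => new Awaiter(_tcs.Task);

    public readonly struct Awaiter : INotifyCompletion
    {
        private readonly Task _task;
        public Awaiter(Task task) => _task = task;

        public bool IsCompleted => _task.IsCompleted;

        // Called by compiler-generated code to register the continuation.
        public void OnCompleted(Action continuation) =>
            _task.GetAwaiter().OnCompleted(continuation);

        // Called when the await resumes; propagates any exception.
        public void GetResult() => _task.GetAwaiter().GetResult();
    }
}
```

With that in place, `await mySimpleTask;` compiles, and the compiler wires the rest of the method up as the continuation, just as the episode describes.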

Exactly. And if we had a few more minutes, we could do the exact same thing, and I could make this say async MyTask. But I won't keep you from your son. I appreciate that. This has been incredibly helpful. I hope that the other folks who are watching have enjoyed this as much as I have.

That's basically 70 minutes, just a little bit over an hour to understand that fundamental concept behind Await and Async, how it works, why it works. And then a good reminder to us all that you can see that you can dig in if you choose to. I want to encourage folks, though, who may be application developers who might think, like, why do I need to know this? A reminder that I tell myself is I pick the layer that I understand truly, and I go one layer below to get a little bit uncomfortable.

I don't think, Stephen, you're telling us that we all need to drive stick shift. We all need to have a kit car in the garage that we built from scratch. If you want to build a toaster, you don't have to smelt your own iron. But it's fun to just look underneath the hood and go, huh, I use it every day.

Now I know. And by doing so, you build a better sense for how it works, and then you can use it better, even if you never have to write that code yourself. Exactly. Fantastic.

All right. Well, I think this has been super fun. I'd love to have you and some of your engineering friends on to chat with us again sometime. So we'll do that soon. Always happy to chat.