Hello, people from the future, welcome to Normalized Nerd! In my last video, I explained the fundamentals of Markov Chains, and since then, many of you have asked me to make more videos on Markov Chains. So here I am with another one. Today, I'll explain some very important properties of Markov Chains. By the way, if you are new here, please subscribe to my channel; I make videos about Machine Learning and Data Science regularly. So, let's get started.
Well, if you are new to the concept of Markov Chains, I highly recommend watching my previous video first and then continuing with this one. The link will be available in the description and will appear in the cards right now. So, I hope you now have the basic knowledge.
First, I'm gonna take a very simple Markov Chain. For the sake of simplicity, I haven't written the transition probabilities on the arrows. Just remember: if there's an arrow from state A to state B, it means there's a non-zero transition probability from state A to state B. And obviously, the sum of the outgoing probabilities from any given state is 1.
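If you want to play with this chain in code, here's roughly how it could look. This is just a quick sketch in Python with NumPy; the actual probabilities are made up for illustration, since I haven't written them on the arrows, and the zero/non-zero pattern is only my reading of the diagram.

```python
import numpy as np

# Transition matrix for the 3-state chain (states 0, 1, 2).
# The numbers are made up; the pattern is: state 0 can loop or
# move to state 1, states 1 and 2 move between each other (and
# can loop), and there is no arrow back into state 0.
P = np.array([
    [0.5, 0.5, 0.0],   # from state 0: loop, or go to state 1
    [0.0, 0.3, 0.7],   # from state 1: loop, or go to state 2
    [0.0, 0.8, 0.2],   # from state 2: go to state 1, or loop
])

# Every row must sum to 1: the outgoing probabilities from any
# given state form a full probability distribution.
assert np.allclose(P.sum(axis=1), 1.0)
```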
Now, focus on state 0. We're gonna perform a random walk starting from state 0 and try to find out if something interesting is going on. For the first couple of steps, we keep looping in state 0 itself. But as soon as we leave state 0 and go to state 1, then state 2, and so on, there's absolutely no way to go back to state 0. In some random walks we might never leave state 0, but it might also happen that we leave state 0 and never come back again. So, the probability of revisiting state 0 in a random walk that started from state 0 is less than 1. We don't know for certain whether we're gonna come back.
This type of state, for which the probability of coming back to itself is less than 1, is called a transient state. Now let's focus on state 1. Here's a random walk starting from this state. We can clearly see that after some time, we are bound to revisit state 1. So, in this case, the probability of coming back to state 1 is 1. This type of state is called a recurrent state. The same is true for state 2: look at a random walk starting from state 2 and convince yourself. So, both state 1 and state 2 are recurrent states.
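To make this transient-versus-recurrent idea concrete, here's a rough simulation sketch, reusing the made-up matrix from before. The helper function is just for illustration, and note that a walk capped at a finite number of steps can only estimate the return probability, not prove recurrence.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.3, 0.7],
    [0.0, 0.8, 0.2],
])

def estimate_return_probability(P, start, n_walks=2_000, max_steps=200):
    """Fraction of random walks from `start` that revisit `start`."""
    returns = 0
    for _ in range(n_walks):
        state = start
        for _ in range(max_steps):
            state = rng.choice(len(P), p=P[state])
            if state == start:
                returns += 1
                break
    return returns / n_walks

# State 0 is transient: once we leave, we never come back, so the
# estimate settles well below 1 (here, around 0.5).
print("return prob. for state 0:", estimate_return_probability(P, 0))

# States 1 and 2 are recurrent: the estimate is (almost) exactly 1.
print("return prob. for state 1:", estimate_return_probability(P, 1))
```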
Okay, now let's try to understand why state 0 is not recurrent. Well, it's because there's no way to go from state 1 or state 2 back to state 0. In this type of situation, where some states are unreachable from others, we say the Markov chain is reducible.
Now, I'm gonna add a single edge connecting state 2 and state 0. This time we can easily see that every state is reachable from every other state. Consequently, state 0 has become recurrent. This type of chain, where we can go to any state from any given state, is called an irreducible chain.
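If you want to check irreducibility in code, one way is to test whether every state can reach every other state. Here's a small sketch in Python with NumPy (my own helper, made up for illustration): taking a power of I + P tells us which states are reachable within n - 1 steps, which is enough in an n-state chain.

```python
import numpy as np

def is_irreducible(P):
    """True if every state can reach every other state."""
    n = len(P)
    # (I + P)^(n-1) has a positive (i, j) entry exactly when j is
    # reachable from i in at most n-1 steps.
    reach = np.linalg.matrix_power(np.eye(n) + P, n - 1)
    return np.all(reach > 0)

# Original chain: no way back into state 0, so it's reducible.
P_reducible = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.3, 0.7],
    [0.0, 0.8, 0.2],
])

# Same chain with an added edge from state 2 back to state 0.
P_irreducible = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.3, 0.7],
    [0.3, 0.5, 0.2],
])

print(is_irreducible(P_reducible))    # False
print(is_irreducible(P_irreducible))  # True
```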
Let's go back to our original Markov chain. You might wonder why it's called reducible. Well, it's probably because we can divide, or reduce, this chain into smaller chains which are irreducible. Let's delete the transition from state 0 to state 1. We can imagine 2 Markov chains here. The upper one contains only one state; it's recurrent and hence irreducible. The bottom one contains 2 states, and every state is reachable from every other state, so it's also irreducible.
Okay, now I'm gonna take a slightly bigger Markov Chain. Some of you might be familiar with this one; it's famously known as the Gambler's Ruin. But let's not get overwhelmed by the name; just consider it a regular Markov Chain. What we want to find here is which states can communicate with each other, and by communication, I mean two-way communication.
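Here's roughly how this chain could look as a transition matrix. This is just a sketch: the win probability p = 0.5 is an assumption for illustration, since I haven't put numbers on the arrows. The key structure is that states 0 and 3 are absorbing; once you land there, you stay there.

```python
import numpy as np

p = 0.5  # probability of winning a round (assumed for illustration)

# Gambler's Ruin with states 0, 1, 2, 3 (the gambler's fortune).
# States 0 (broke) and 3 (target reached) are absorbing.
P = np.array([
    [1.0,   0.0,   0.0, 0.0],  # state 0: absorbing
    [1 - p, 0.0,   p,   0.0],  # state 1: lose -> 0, win -> 2
    [0.0,   1 - p, 0.0, p  ],  # state 2: lose -> 1, win -> 3
    [0.0,   0.0,   0.0, 1.0],  # state 3: absorbing
])

assert np.allclose(P.sum(axis=1), 1.0)
```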
Let's focus on state 0. If we start from state 0, we can't go to any other state. The same is true for state 3. But from state 1, we can go to state 2 and then back again to state 1. If we go from state 1 to state 0, though, there's no way back. A similar situation holds for state 2. So, we can see that state 1 and state 2 can communicate with each other forever, while state 0 and state 3 are self-contained.
On the basis of this communication, we divide the states into classes. Here, we have 3 classes: the first contains state 0, the second contains states 1 and 2, and the last one contains state 3. Within any of these classes, we can always go from any state to any other state. And yes, these classes are known as communicating classes, if you haven't guessed that already.
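In fact, communicating classes are exactly the strongly connected components of the transition graph, so you can compute them in code. Here's a compact sketch (again my own illustrative helper, not something from the video) that finds them by intersecting forward and backward reachability.

```python
import numpy as np

def communicating_classes(P):
    """Group states that can reach each other (both ways)."""
    n = len(P)
    # reach[i, j] is True iff j is reachable from i.
    reach = np.linalg.matrix_power(np.eye(n) + (P > 0), n - 1) > 0
    # i and j communicate iff each is reachable from the other.
    communicate = reach & reach.T
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = set(np.flatnonzero(communicate[i]))
            classes.append(cls)
            seen |= cls
    return classes

# Gambler's Ruin matrix from before (p = 0.5).
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])

print(communicating_classes(P))  # [{0}, {1, 2}, {3}]
```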
So that was all for this video, guys. I hope you enjoyed it. And if you want more videos on Markov chains, please comment below. And don't forget to subscribe. Stay safe, and thanks for watching.