Muhammed H. Alkan

What is a Markov Chain?

So, first, let's learn what a Markov Chain is.

A Markov Chain is a stochastic (random) process over a set of possible events, where the probability of each next event depends only on the current state.
So if you give it some data, you can model how the sequence of events evolves.


Now, let's see how it works.

For example, say we have three inputs:

  • I am Sam.
  • I am Kevin.
  • I do

Let's go step by step to see what the algorithm does.

Building the chain

  • First, we should tokenize our sentences. The output should look like this:
["I", "am", "Sam."]
["I", "am", "Kevin."]
["I", "do"]
  • Secondly, we will start building our chain. We need to iterate over each tokenized sentence, so let's put them all into one list:
[["I", "am", "Sam."], ["I", "am", "Kevin."], ["I", "do"]]
  • While iterating over this list, we build up our chain.

    • First sentence, first word: "I"
{
  "I": {

  }
}

As you can see, there is only "I" so far, so generating a sentence
would only give you output like "I I I I I I"!

  • First sentence, second word: "am"
{
  "I": {
    "am": 1
  },
  "am": {

  }
}

Because "am" words becomes after "I", we have added it
in "I" node. And 1 becomes for 1 / 1. Because there is
only 1 possibility that can come after "I" word. We have
created another "am" outside "I" for the possible words
that can come after "am".

  • First sentence, third word: "Sam."
{
  "I": {
    "am": 1
  },
  "am": {
    "Sam.": 1
  },
  "Sam.": {

  }
}

We have added "Sam." to "am" because it comes after "am".
And we created another node outside "am" for possible words
that can come after "Sam.".

Simply, we did same thing we did for "am" before.
And our sentence has ended, continue with other sentence.

  • Second sentence, first word: "I"
    We don't have to do anything, because the "I" node already exists
    and "I" never comes after another word, so it isn't inside any node.

  • Second sentence, second word: "am"
    Same as before: every occurrence of "I" so far has been followed
    by "am" (2 / 2 = 1), so the chain doesn't change.

  • Second sentence, third word: "Kevin."

{
  "I": {
    "am": 1
  },
  "am": {
    "Sam.": 0.5,
    "Kevin.": 0.5
  },
  "Sam.": {

  },
  "Kevin.": {

  }
}

We have added it to "am" node, but the value is 0.5. Why? Because
there is two possible words that can come, "Sam" and "Kevin.".
So the value should be 1 / 2 = 0.5. And we have created the node
outside "am" as we do for each new node.

  • Third sentence, first word: "I"
    Nothing to do! The node already exists, and "I" never comes after another word.

  • Third sentence, second word: "do"

{
  "I": {
    "am": 0.666666667,
    "do": 0.333333333
  },
  "am": {
    "Sam.": 0.5,
    "Kevin.": 0.5
  },
  "Sam.": {

  },
  "Kevin.": {

  },
  "do": {

  }
}

Now, why "am" became 0.666666667 and "do" 0.333333333?
Let's make simple calculations. Now, there is two possible words
after "I", "am" and "do". So the values should be 0.5 to 0.5.
But "am" node has words that can come after it. The count is two so
the value of "am" should be 1 - (1 / (1 + 2)). But "do" does
not have so it will doesn't change the amount. So if "am"'s amount is
1 - (1 / (1 + 2)) = 0.666666667, and as you can guess "do" is
1 - 0.666666667 = 0.333333333.
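
To put the whole building step together, here's a small Python sketch of one possible implementation (my own rough take, not the post's original code): it counts how often each word follows another, then turns those counts into the probabilities above.

from collections import defaultdict

def build_chain(tokenized_sentences):
    # word -> {following word: how many times it followed}
    counts = defaultdict(lambda: defaultdict(int))
    for tokens in tokenized_sentences:
        for word in tokens:
            counts[word]  # touching the key creates a node for every word, even with no followers
        for current, following in zip(tokens, tokens[1:]):
            counts[current][following] += 1

    # Turn each node's follower counts into probabilities.
    chain = {}
    for word, followers in counts.items():
        total = sum(followers.values())
        chain[word] = {nxt: n / total for nxt, n in followers.items()}
    return chain

tokenized = [["I", "am", "Sam."], ["I", "am", "Kevin."], ["I", "do"]]
print(build_chain(tokenized))
# {'I': {'am': 0.666..., 'do': 0.333...}, 'am': {'Sam.': 0.5, 'Kevin.': 0.5},
#  'Sam.': {}, 'Kevin.': {}, 'do': {}}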

Markov chains example

Creating the sentence

Let's say we want to create a 3-word sentence now.
So we will pick 3 words at random, but the picks should
respect which words can come next! Let's start!

Our created data looks like this:

{
  "I": {
    "am": 0.666666667,
    "do": 0.333333333
  },
  "am": {
    "Sam.": 0.5,
    "Kevin.": 0.5
  },
  "Sam.": {

  },
  "Kevin.": {

  },
  "do": {

  }
}

The steps are these:

  • Select a random node to start. Uhmm, our lucky node is... I!

Sentence: I

  • Get into I node. The possible words are:
    • am
    • do

Now, let's build a list where "am" appears in proportion to 0.666666667
and "do" in proportion to 0.333333333. Now let's select our lucky node!
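
One simple way to do that weighted pick in Python (just a sketch of the "repeat each word in proportion to its probability" idea, with the weights rounded to parts per thousand):

import random

# Repeat each candidate roughly in proportion to its probability,
# then pick one uniformly from the repeated list.
weighted = ["am"] * 667 + ["do"] * 333
print(random.choice(weighted))  # prints "am" about two thirds of the time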

Our lucky node is... do! That one had the lower chance.

Sentence: I do

  • Get into the do node. The possible words are: nothing! What should we do now? Is it the end of our sentence? No! Let's just select a node at random again.

Our lucky node is... Kevin.!

Sentence: I do Kevin.

Here's our three-word sentence: I do Kevin..
And our algorithm continues like this.
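
Here's a rough Python sketch of that whole generation loop (again my own guess at an implementation, not the post's code): start at a random node, make a weighted pick among the followers, and jump to a random node whenever we hit a dead end.

import random

def generate(chain, length=3):
    word = random.choice(list(chain))  # pick a random starting node
    words = [word]
    while len(words) < length:
        followers = chain.get(word, {})
        if followers:
            # Weighted pick among the words that can come next.
            word = random.choices(list(followers), weights=list(followers.values()))[0]
        else:
            # Dead end (e.g. "Sam.", "Kevin.", "do"): pick a random node again.
            word = random.choice(list(chain))
        words.append(word)
    return " ".join(words)

chain = {
    "I": {"am": 0.666666667, "do": 0.333333333},
    "am": {"Sam.": 0.5, "Kevin.": 0.5},
    "Sam.": {}, "Kevin.": {}, "do": {},
}
print(generate(chain))  # for example: "I do Kevin."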

If we want to count all the possible sentences:

2^len(data)

So, if our data has 4 unique words, the count will be
2^4 = 16 sentences. That is it!

Thanks for reading!

Top comments (3)

Ben Lovy

Excellent, clear write-up!

Rafal Pienkowski

Awesome post. You reminded me of my studies where I learned about Markov Chains, Petri Nets, Turing Machines, etc. That was a great time. Thanks for that.

Amelia Gapin

Fantastic explanation! I've been running a Twitter bot that generates markov chains based on my own tweets for a while. It's been a lot of fun!