Rama Reksotinoyo

Predict & Generate Output From Fizz Buzz Problem with Machine Learning Approach; Deep FizzBuzzNet.

Wait, the title sounds really fancy in my head; I will consider using it for my thesis if I ever take up master's studies.

Maybe you are familiar with the FizzBuzz problem. Like Al-Fatihah in Islam, the FizzBuzz problem is also mandatory in the world of job applications if you are a software engineer. Usually we will be asked to write a loop over i in range(n), where:

  • if i is divisible by 3, it prints fizz

  • if i is divisible by 5, it prints buzz

  • if i is divisible by both 3 and 5 (that is, by 15), it prints fizzbuzz

Implemented in Python:

def fizzbuzz(number: int) -> str:
    if number % 15 == 0:
        return "fizzbuzz"
    elif number % 3 == 0:
        return "fizz"
    elif number % 5 == 0:
        return "buzz"
    return str(number)
assert fizzbuzz(1) == "1"
assert fizzbuzz(3) == "fizz"
assert fizzbuzz(5) == "buzz"
assert fizzbuzz(15) == "fizzbuzz"


There are many approaches to solving this problem, from printing the output manually (definitely forbidden for you if you are a software engineer) to using a loop plus conditionals. But there is an approach that I think is really cool, namely the machine learning approach, as I saw in a PyData YouTube video by Joel Grus.
So I decided to build it using another deep learning library, namely PyTorch.

Generating Datasets

The first step is to generate the dataset by building a function that accepts two parameters, i and the number of digits, and produces the number encoded as an array of binary digits.

import numpy as np

def binary_encode(i: int, num_digits: int) -> np.ndarray:
    # encode i as its num_digits least significant bits, LSB first
    return np.array([i >> d & 1 for d in range(num_digits)])

Let's test this function on the first 10 numbers.

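For example, encoding the first few integers with 10 digits gives bit vectors like these (binary_encode is repeated here so the snippet runs standalone):

```python
import numpy as np

def binary_encode(i: int, num_digits: int) -> np.ndarray:
    # encode i as its num_digits least significant bits, LSB first
    return np.array([i >> d & 1 for d in range(num_digits)])

for i in range(1, 11):
    print(i, binary_encode(i, 10))
# e.g. 3 -> [1 1 0 0 0 0 0 0 0 0] and 10 -> [0 1 0 1 0 0 0 0 0 0]
```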

Next, I will encode the FizzBuzz output as a class label, depending on which output each number is supposed to produce.

def fizz_buzz_encode(i: int) -> int:
    # class labels: 0 = the number itself, 1 = fizz, 2 = buzz, 3 = fizzbuzz
    if   i % 15 == 0: return 3
    elif i % 5  == 0: return 2
    elif i % 3  == 0: return 1
    else: return 0

def fizz_buzz(i: int, prediction: int) -> str:
    # decode a predicted class label back into the printed string
    return [str(i), "fizz", "buzz", "fizzbuzz"][prediction]
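As a quick sanity check (my own addition, not in the original post), encoding and then decoding should reproduce the plain rule-based fizzbuzz from the beginning:

```python
def fizz_buzz_encode(i: int) -> int:
    # class labels: 0 = the number itself, 1 = fizz, 2 = buzz, 3 = fizzbuzz
    if i % 15 == 0: return 3
    elif i % 5 == 0: return 2
    elif i % 3 == 0: return 1
    else: return 0

def fizz_buzz(i: int, prediction: int) -> str:
    return [str(i), "fizz", "buzz", "fizzbuzz"][prediction]

assert fizz_buzz(3, fizz_buzz_encode(3)) == "fizz"
assert fizz_buzz(5, fizz_buzz_encode(5)) == "buzz"
assert fizz_buzz(15, fizz_buzz_encode(15)) == "fizzbuzz"
assert fizz_buzz(7, fizz_buzz_encode(7)) == "7"
```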

Splitting the dataset into training and testing

NUM_DIGITS = 10  # matches the 10-unit input layer of the model below

X = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 2 ** NUM_DIGITS)])
y = np.array([fizz_buzz_encode(i) for i in range(101, 2 ** NUM_DIGITS)])

# hold out the first 100 numbers (101-200) for testing
X_train = X[100:]
y_train = y[100:]
X_test = X[:100]
y_test = y[:100]

Here are the sizes of the split datasets:

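With NUM_DIGITS = 10 the full dataset covers the numbers 101 through 1023, so the shapes work out as follows (a standalone sketch that repeats the generation code):

```python
import numpy as np

NUM_DIGITS = 10

def binary_encode(i, num_digits):
    return np.array([i >> d & 1 for d in range(num_digits)])

def fizz_buzz_encode(i):
    if i % 15 == 0: return 3
    elif i % 5 == 0: return 2
    elif i % 3 == 0: return 1
    else: return 0

X = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 2 ** NUM_DIGITS)])
y = np.array([fizz_buzz_encode(i) for i in range(101, 2 ** NUM_DIGITS)])
X_train, y_train = X[100:], y[100:]
X_test, y_test = X[:100], y[:100]

print(X_train.shape, y_train.shape)  # (823, 10) (823,)
print(X_test.shape, y_test.shape)    # (100, 10) (100,)
```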

Modelling

This was the most interesting part of the process for me, because I got to play with hyperparameter tuning. Here's the code:

import torch
import torch.nn as nn

class FizzBuzz(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.first = nn.Linear(10, 100)
        self.relu = nn.ReLU()
        self.batchnorm1d = nn.BatchNorm1d(100)
        self.output = nn.Linear(100, 4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.first(x)
        relu = self.relu(a)
        batchnorm = self.batchnorm1d(relu)
        return self.output(batchnorm)

I picked cross-entropy loss as my criterion and Adam as my optimizer, with a learning rate of 1e-3.
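The post doesn't show the training loop itself, so here is a minimal full-batch sketch of what it might look like with those choices (cross-entropy, Adam, lr 1e-3, 1000 epochs). The fixed seed and the nn.Sequential stand-in for the FizzBuzz module are my own assumptions, added so the snippet runs on its own:

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)  # assumption: fixed seed for reproducibility
NUM_DIGITS = 10

def binary_encode(i, num_digits):
    return np.array([i >> d & 1 for d in range(num_digits)])

def fizz_buzz_encode(i):
    if i % 15 == 0: return 3
    elif i % 5 == 0: return 2
    elif i % 3 == 0: return 1
    else: return 0

X = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 2 ** NUM_DIGITS)])
y = np.array([fizz_buzz_encode(i) for i in range(101, 2 ** NUM_DIGITS)])
X_train = torch.tensor(X[100:], dtype=torch.float32)
y_train = torch.tensor(y[100:], dtype=torch.long)

# same layers as the FizzBuzz module above, written as a Sequential for brevity
model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.BatchNorm1d(100), nn.Linear(100, 4))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(1000):  # the post trains for 1000 epochs
    optimizer.zero_grad()
    loss = criterion(model(X_train), y_train)
    loss.backward()
    optimizer.step()

print(loss.item())  # should end up well below the initial ~1.386 (ln 4)
```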

At the end of the training process, this was the loss at my last epoch (I set 1000 epochs, by the way):
Validation loss: 0.5942, training loss: 0.3454

Testing

This was pretty good; the machine learning model worked correctly, at least on the 100 test samples.
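A standalone sketch of this testing step (the definitions are repeated so it runs on its own; the exact accuracy depends on the random seed, which is my assumption, but it should be well above chance):

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_DIGITS = 10

def binary_encode(i, num_digits):
    return np.array([i >> d & 1 for d in range(num_digits)])

def fizz_buzz_encode(i):
    if i % 15 == 0: return 3
    elif i % 5 == 0: return 2
    elif i % 3 == 0: return 1
    else: return 0

def fizz_buzz(i, prediction):
    return [str(i), "fizz", "buzz", "fizzbuzz"][prediction]

X = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 2 ** NUM_DIGITS)])
y = np.array([fizz_buzz_encode(i) for i in range(101, 2 ** NUM_DIGITS)])
X_train = torch.tensor(X[100:], dtype=torch.float32)
y_train = torch.tensor(y[100:], dtype=torch.long)
X_test = torch.tensor(X[:100], dtype=torch.float32)
y_test = torch.tensor(y[:100], dtype=torch.long)

model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.BatchNorm1d(100), nn.Linear(100, 4))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    optimizer.zero_grad()
    loss = criterion(model(X_train), y_train)
    loss.backward()
    optimizer.step()

model.eval()  # switch BatchNorm to its running statistics at test time
with torch.no_grad():
    preds = model(X_test).argmax(dim=1)

accuracy = (preds == y_test).float().mean().item()
print("test accuracy:", accuracy)
# decode a few predictions back into fizzbuzz strings (test numbers start at 101)
print([fizz_buzz(i, p.item()) for i, p in zip(range(101, 106), preds[:5])])
```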

The full code is on my GitHub repo.

Reference:

  • Joel Grus's blog

  • YouTube

  • Blog
