## Disclaimer

Today's topic will still be split between today and tomorrow. As you might tell from the meme, I don't know if I'm a genius, an idiot, or if that's just how math maths. Usually it's the second one.

Today we'll discuss:

- Elementary transformations
- An example of using *row-echelon form (REF)* to help us find the particular and general solutions
- A missing connection to the previous discussion

Tomorrow, I'll write about what *row-echelon form (REF)* even is and what the difference is between it and using Gaussian elimination to create an upper triangular matrix. Honestly, I'm just curious, because that's what REF looks like to me...

## Elementary Transformation

Basically, elementary transformations turn the matrix into a simpler form without changing the solution. We can transform it in a few different ways:

- Exchange of two equations (rows in the matrix representing the system of equations)
- Multiplication or division of an equation (row) by a nonzero constant
- Addition or subtraction of two equations (rows)
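The three operations above can be sketched in a few lines of NumPy. The matrix here is made up purely for illustration (it's not one of the book's examples):

```python
import numpy as np

# A made-up coefficient matrix, purely for illustration
A = np.array([[2.0, 4.0, -2.0],
              [1.0, -1.0, 1.0],
              [3.0, 1.0, 0.0]])

# 1. Exchange of two rows: R1 <-> R2 (0-indexed rows 0 and 1)
A[[0, 1]] = A[[1, 0]]

# 2. Multiplication of a row by a nonzero constant: R1 -> 2 * R1
A[0] = 2 * A[0]

# 3. Subtraction of a multiple of one row from another: R3 -> R3 - 3 * R1
A[2] = A[2] - 3 * A[0]

print(A)
```

None of these operations change the solution of the system the matrix represents, which is exactly why we're allowed to use them.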

### Example

#### System of equations

#### Matrix-vector multiplication form

Let's separate the variables from the constants!

## Augmented matrix form

Finally, a new UI design for the matrix; it looks more elegant this way.

### Now what?

Now we'll transform the augmented matrix using elementary transformations until it becomes a *REF*!

Before we continue, remember: R stands for row, and the notation will be Rx -> Ry, which means the value of Rx will be put into row Ry. Now let's continue.

#### Swap R1 and R3

#### Multiple Row Multiplication

#### Subtract R2 and R3 from R4

#### Scaling

### Okay Terra, you made me read all of this complex mumbo jumbo, what do I do now?

First off, you need to relax. Secondly, given that a + 1 = 0, we can obtain a solution for `a = -1`.


But... I need to confess: the authors didn't provide a step-by-step solution to their answers :( But I did create one for my solution :D So you be the judge of whether I'm correct by proving it yourselves as well!

#### Particular solution

Remember how in the previous chapter we used a type of identity matrix? Well, there isn't one here, and since the pages I'm reading from only had the solution and not the steps, I'll show you how I did it :D

##### Find Ax = B

and assume that:

We can split the result to form an equation like this, where each x is associated with a specific column.
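This "each x pairs with a column" view is just reading Ax as a linear combination of A's columns. A minimal sketch with a made-up matrix (not the book's):

```python
import numpy as np

# Made-up matrix and vector, purely for illustration
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([2.0, -1.0, 3.0])

# Ax is the same as x[0]*col0 + x[1]*col1 + x[2]*col2
combo = sum(x[i] * A[:, i] for i in range(len(x)))

print(A @ x)   # the matrix-vector product
print(combo)   # the same vector, built column by column
```

That equivalence is what lets us split the system into one term per column when hunting for a particular solution.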

##### Let's assume that d and e are equal to 1 and add both of these to see what we get

Notice that adding these results in the negative of the value we're searching for?

##### Finding d and e

Assume:

and

Given:

Since we know that x and y are not zero, the only way to satisfy this equation is to have (d+1) and (e+1) equal 0.

So, both `d = -1` and `e = -1`.

#### Particular Solution Result

Great! Now we can conclude that the particular solution is:

Though... there's a small caveat: this isn't the author's answer, it's mine. I tried finding the solution in the text, or maybe I skipped a page or two, but I think it comes after going in-depth into *row-echelon form* first, so I'm not sure about the method used to solve it.

#### General Solutions

I won't go into depth on this one, considering I also got a different answer for Ax = 0. For more context, I'm using the same method as yesterday, where the lambdas come from columns 4 and 5 (since only those two are the non-pivot columns with non-zero entries) and the entry at that column's own position becomes -1.

I'm really not sure if this is just math being math or I'm doing something wrong!
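One way to sanity-check an answer like this is to compute the null space directly. I can't reproduce the book's matrix here, so this is a sketch on a made-up matrix that is already in REF (using sympy): each free (non-pivot) column contributes one lambda, i.e. one basis vector of the solutions of Ax = 0.

```python
from sympy import Matrix

# A made-up matrix already in row-echelon form (not the book's example).
# Pivot columns: 1 and 3; free columns: 2 and 4.
R = Matrix([[1, 2, 0, 3],
            [0, 0, 1, -1]])

# One basis vector of the null space per free column,
# i.e. one lambda in the general solution of Rx = 0
for v in R.nullspace():
    print(v.T)
```

If my hand-derived general solution ever disagrees with this, at least one of us is wrong, which narrows down whether it's "math being math" or me.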

##### My answer

##### Author's answer

### Ending note:

That's the end. I spent too much time racking my brain over the answer, and I might as well post it, since I might get an answer about what kind of behavior this is.

## Acknowledgement

I can't overstate this: I'm truly grateful for this book being open-sourced for everyone. Many people will be able to learn and understand machine learning on a fundamental level. Whether changing careers, demystifying AI, or just learning in general, this book offers immense value even for *fledgling composer* such as myself. So, Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong, thank you for this book.

Source:

Deisenroth, M. P., Faisal, A. A., & Ong, C. S. (2020). Mathematics for Machine Learning. Cambridge: Cambridge University Press.

https://mml-book.com
