This post should have been published two weeks ago, but I didn't write it then, so let's imagine that I wrote it two weeks ago :)
And yes, this post is a follow-up to the one I posted before...
So, release 3.0.0-alpha has happened, and with it, we are nearing our final release: release 3.0.0.
I can't help but feel a little bit nostalgic when looking back at the first release I worked on, release 2.5. Even though it has been only four months, it feels like so much more has happened.
Either way, what did I manage to contribute for this release? Even though I was not as active as in previous releases, I managed to finish the tasks that I described in my previous blog post.
Doing backups: not as difficult as it sounds
I'm glad we chose Postgres as a database. Not only because it's open source and free, or because it is well integrated with Supabase, but also because it comes with easy-to-use client programs for creating and restoring backups.
I am not an expert on backups, so I don't know much about backing up databases. But in my experience backing up my personal data, I always found it somewhat unreliable to have to use a third-party program to create backups, since the Windows built-in tools are not good enough. I am glad I was proven wrong about how difficult this would be.
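To give an idea of what I mean, the heart of the backup script is roughly an invocation like the one below. This is a minimal sketch using `pg_dump`, one of the standard Postgres client tools; the host, credentials, and file names are placeholders, not our actual values.

```bash
# Minimal backup sketch (placeholder credentials and names).
# --format=custom produces a compressed archive that pg_restore understands.
PGPASSWORD="example-password" pg_dump \
  --host=postgres \
  --port=5432 \
  --username=postgres \
  --dbname=app_db \
  --format=custom \
  --file="/backups/app_db-$(date +%Y-%m-%d).dump"
```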
However, there was an important thing I had to take into account. All of our important services are deployed as Docker containers, so using `localhost:5432` to refer to the database was not going to work. The original idea was to create a script and run it on the host machine running the containers. However, @humphd pointed out that this wouldn't work, and that we had to move the script into its own container that accesses the database container through the Docker network.
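The idea looks roughly like this (container, network, and image names here are made up for illustration): the backup container and the database container share a Docker network, and the script refers to the database by the container's name instead of `localhost`.

```bash
# Sketch of the idea, not our actual setup; all names are hypothetical.
docker network create app_net

# The database container, attached to the shared network.
docker run -d --name postgres --network app_net \
  -e POSTGRES_PASSWORD=example-password postgres:15

# The backup container: from inside it, the database is reachable as "postgres:5432",
# not "localhost:5432", because both containers sit on the same network.
docker run -d --name db-backup --network app_net \
  -e PGHOST=postgres -e PGPORT=5432 -e PGUSER=postgres -e PGPASSWORD=example-password \
  backup-image:latest
```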
So, after reviewing how to write Dockerfiles, the next step was to figure out how to run the script. The main idea is that the script runs as a cron job inside the container at a specific time. I was lucky enough to find a blog post that explained just what I needed: I had to place the script in a folder of the container's file system so that it would be run at 2 o'clock in the morning.
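The scheduling part then boils down to a single crontab entry. Here's a sketch of what such an entry can look like (the paths and script name are hypothetical), assuming a cron daemon is running inside the container:

```bash
# Sketch of an entry in /etc/cron.d/ (hypothetical path and script name).
# Runs the backup script every day at 02:00; cron.d entries include the user field.
0 2 * * * root /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
```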
That covered creating backups, however. I also had to write a utility script that would restore the database using the backup generated by my script. Again, thanks to the wonderful client programs offered by the Postgres team, this was a cakewalk.
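Restoring is roughly the mirror image of creating the archive. Here is a sketch with placeholder names, using `pg_restore`, the counterpart to `pg_dump`:

```bash
# Minimal restore sketch (placeholder credentials and names).
# --clean --if-exists drops existing objects before recreating them from the archive.
PGPASSWORD="example-password" pg_restore \
  --host=postgres \
  --port=5432 \
  --username=postgres \
  --dbname=app_db \
  --clean \
  --if-exists \
  "/backups/app_db-latest.dump"
```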
The major difference between the restoration script and the script that creates the backups is that the restoration script does not have to run periodically, so I just included it inside the container. That way, a system administrator can connect to the container and run the script from there. With Portainer available, this task becomes fairly straightforward. If you want to check the PR, here it is to your delight.
`dependency-discovery`, are you tested?
So, after having a crash course on unit tests for the nth time, and some reading of the `jest` documentation, I wrote the tests for the `/projects` and `/github` routes.
The main problem I have when writing unit tests is that I don't know how big a "unit" is. Some websites say that a unit can be a function, others say that a unit is a class, while others say that an entire module is a unit! With so many different definitions, it is hard to choose a source of truth.
Instead of worrying about the exact definition of a unit, I had to understand the reasoning behind it. What makes a unit test different from an integration test or an end-to-end test? Unit tests tend to be small, so they are fast to run all at once. They tend to have few points of failure. They also tend not to depend directly on anything that could influence the result of the test, among other things.
So, in this case, I had to understand something about these tests. Here we are testing just the routes and their responses, which means we don't care about how the modules that the routes depend on do their work; we only care about what they give us in return. We assume that they work (although in some cases they might not), so that we can focus on the defects in our specific code instead of the whole system at once. This brings up the important concept of mocks.
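To make that concrete, here is a rough sketch of what a test for the `/projects` route can look like. The module paths, the dependency name, and the use of `supertest` for the HTTP layer are my assumptions for illustration, not necessarily how the actual code is laid out:

```javascript
// Sketch only: app and module paths are hypothetical.
const request = require('supertest');

// Mock the module the route depends on before loading the app,
// so the route handler receives canned data instead of doing real work.
jest.mock('../src/projects', () => ({
  getProjects: jest.fn().mockResolvedValue([{ name: 'example-project', dependencies: [] }]),
}));

const app = require('../src/app');

describe('GET /projects', () => {
  it('responds with 200 and a JSON list of projects', async () => {
    const res = await request(app).get('/projects');

    expect(res.status).toBe(200);
    expect(res.body).toEqual([{ name: 'example-project', dependencies: [] }]);
  });
});
```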
When I started reading the `jest` documentation, it mentioned how to use mocks and the like, but I failed to understand why you would want to mock your own code. Well, the lesson was: it does not matter if your code's dependencies are also part of the project; they should be treated as a third-party library that will always work when the tests run. This helped me figure out how to write the mocks I needed for the unit tests, and thus helped me write the tests themselves, too.
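In practice, that lesson boils down to something like this: the test file mocks the internal module exactly as it would mock a library, and simply dictates what it returns. The module path and function name below are hypothetical:

```javascript
// Treat our own module like a third-party dependency that "always works".
// '../src/github' is a hypothetical path to the module behind the /github route.
jest.mock('../src/github');

const github = require('../src/github');

beforeEach(() => {
  // With an automatic mock, every exported function becomes a jest.fn(),
  // so each test can decide what the dependency returns.
  github.getDependents.mockResolvedValue(['owner/repo-a', 'owner/repo-b']);
});
```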
And with that, the work for release 3.0.0-alpha has been finalized. Now, onto the final step, release 3.0.0!