
Gerardo Enrique Arriaga Rendon

Telescope 2.9

Welp, Telescope 2.9 was released a few days ago.

Amasia and I were in charge of making the release a reality, since we were the sheriffs for that week (a sheriff is like a supervisor of sorts). I think we both did a pretty good job this week, although we also agree that we could have done better (several tasks were pushed to the next release...).

A lot of things were merged for release 2.9!

  • The parser service is almost done! Well, it is technically done, but we need to remove the legacy back-end that Telescope currently uses. For now, we had to turn it off, and we are expecting to file a PR that removes this back-end once and for all.
  • Setting up our linter with the current monorepo structure is done! Now, we have to figure out how to migrate things like tests and build steps (if it is even possible...).
  • Although not yet in active use in production, the client authentication with Supabase has been merged, and hopefully, after release 3.0, we can use Supabase for real.
  • The search service got a nice upgrade in the back-end: it now supports autocompletion, so if you give it a half-written name, it will return several suggestions, just like Google or YouTube do.
  • The dependency-discovery currently features searching for issues of repositories linked to the dependencies of Telescope, with some minor implementation caveats.
  • There are other features that got merged, such as banner information support for YouTube posts, and a cool star field that shows GitHub contributor profiles.

Experiences as a sheriff

It was the first time I did something like this, and as always, I found a lot of things I could improve on. Some funny anecdotes:

Telescope 2.9 released early?

We tried to do a patch release (2.8.3), and it went somewhat wrong. We asked for Duc's guidance on it, so that we could prepare for release 2.9, when he accidentally released Telescope 2.9! The command we use to do the release is:

pnpm version version-tag -m "Message..."

However, pnpm (and npm) have a shorthand if you don't want to specify the version-tag value by hand. So, if you are doing a regular patch update (like 2.8.73!), you can write

pnpm version patch -m "Message..."

However, Duc accidentally used the shorthand with minor, which bumps the second number in the version tag, so pnpm bumped Telescope straight to version 2.9!

We did run some commands to fix this. We had to reset the master branch from the upstream repository, as well as delete the tags that were associated with the commits. We also deleted the release changelogs that get generated every release. After learning from our mistake, Duc will probably never use the shorthand again...

Huge backlog for release 2.9

During our first meeting of the week, we went through the open PRs to figure out their current progress and see what would need to be sorted out to merge them successfully. After going through the PRs, we decided to go through the issues.

We discovered there were 3 full pages of issues scheduled for 2.9. Amasia was very surprised, and she was also impressed that I was able to hold back my reaction when I pointed out the number of pages.

What happened with the dependency-discovery service?

So, the dependency-discovery service is finished. We can now build a front-end that does what we were planning all along, but there are some huge caveats:

  • It is not efficient,
  • It does not have tests.

The second caveat can be taken care of, but the first one is a little more cumbersome. I'm not implying that it is impossible to fix, but the way I currently wrote the service does not allow for a lot of optimization.

The first major problem is that I am using a poor implementation of what a cache is supposed to be. Since we are making calls to a GitHub API service, every call has a cost; one of those costs is the rate limit. Since we do not want to hit the GitHub API every time we receive a request, we need to cache these responses, serve them when another user accesses the same resource, and only go back to the GitHub API once a cached entry has reached a certain lifetime.
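
To illustrate the idea, here is a minimal sketch of that kind of time-based cache. The names (getWithCache, fetchFromGitHub) and the one-minute lifetime are made up for this example, not what the service actually uses.

const CACHE_LIFETIME_MS = 60 * 1000; // hypothetical one-minute lifetime
const cache = new Map(); // resource -> { data, storedAt }

async function getWithCache(resource, fetchFromGitHub) {
  const entry = cache.get(resource);
  if (entry && Date.now() - entry.storedAt < CACHE_LIFETIME_MS) {
    // Still fresh: serve the cached response and save a GitHub request.
    return entry.data;
  }
  // Expired or missing: hit the GitHub API again and refresh the cache.
  const data = await fetchFromGitHub(resource);
  cache.set(resource, { data, storedAt: Date.now() });
  return data;
}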

Ideally, I should have used redis in the first place (since we already have it). Why did I not use it? Well, the way I handle the requests creates some annoying edge cases due to the way Nodejs works.

Nodejs and the event loop

Quick crash course on Nodejs: Nodejs runs JavaScript in a single thread. This is a major oversimplification, but it describes reality closely enough.

How can Nodejs be fast, then? Well, although it does things single-threaded, it allows for a lot of concurrency thanks to the asynchronous nature of certain tasks in JavaScript.

Imagine you need to request another service, which could be a service across a network, or even your own filesystem. Since the request might take a long time to answer, you wouldn't want to waste CPU cycles just waiting for the result, so instead you hold this task, put it at the bottom of a TODO list, and jump on the next task that you need to do. So, if someone made a request to your server, you can handle their request while your server waits for the response of the other service.

When you finish handling this request (or you put it into the TODO list because you need to ask another service), you go and check on the task you had put on the list before. You notice that you received a response, so you continue processing the first request and hopefully bring it to completion. After you are done with that, you check on the second task you put into the TODO list. You still haven't received a response for it, so you put it back at the end of the TODO list and instead grab the first item of the list and handle that.

What I briefly described is how Nodejs handles asynchronous tasks, and it is akin to how an event loop works: the program receives an event to handle through a queue, handles it, possibly receives or issues more events during that handling (which get appended to the event queue), and when it finishes handling the event, it goes back to the event queue to pick up the next one.
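
As a small illustration of that interleaving (the function names and the 100 ms delay are made up for the example), two handlers can both start before either one finishes:

// fetchSlowResource stands in for a network or filesystem call.
const fetchSlowResource = (name, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(`${name} data`), ms));

async function handleRequest(name) {
  console.log(`${name}: start`);
  // Awaiting hands control back to the event loop so other tasks can run.
  const data = await fetchSlowResource(name, 100);
  console.log(`${name}: got ${data}`);
}

handleRequest('A');
handleRequest('B');
console.log('main: both requests are started and now waiting');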

Of course, this is all managed by Nodejs, and the programmer has to write their programs in a way that fits this model. While some applications can be easily written like this, when you have something like a central resource that can be updated by everybody, it starts to become a little bit of a timing issue.

You see, let me explain a little bit about how the dependency-discovery service handles a request to get GitHub issues (a rough sketch in code follows the list):

  1. The service receives the request, which contains the name of the project,
  2. the service finds the npm package information that has that project name, and gets the GitHub repo link from it,
  3. using this information, it accesses the cache to check whether the issues already exist there; if they do, it copies them and sends them as the response,
  4. if they don't exist, it requests the issues from the GitHub API,
  5. when it receives the issues, it writes them to the cache and then returns them as the response.
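
In code, the flow looks roughly like this (findGitHubRepo, cache, and fetchIssuesFromGitHub are hypothetical names for this sketch, not the actual implementation):

// findGitHubRepo, cache, and fetchIssuesFromGitHub are assumed helpers.
async function getIssues(projectName) {
  // Steps 1-2: resolve the npm package to its GitHub repository.
  const repo = await findGitHubRepo(projectName);

  // Step 3: serve from the cache if we already have the issues.
  const cached = await cache.get(repo);
  if (cached) {
    return cached;
  }

  // Step 4: otherwise, ask the GitHub API.
  const issues = await fetchIssuesFromGitHub(repo);

  // Step 5: store them for later and respond.
  await cache.set(repo, issues);
  return issues;
}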

Sounds straightforward, right? However, if you remember how Nodejs handles the scheduling, you might notice something, something really bad.

Let's say that two requests reach the server for the same resource (that is, they ask for the same project name). Since one has to have arrived first, Nodejs will organize them in the queue like this:

Request A Start -> Request B Start

So, we start with A first. We go through steps 1 and 2, assuming that we do not need to await anything there. When we reach step 3, we have to await since we are accessing the cache (that being redis), so now Nodejs makes the queue look like this:

Request B Start -> Request A step 3

Since we assume our access to the database will take a while, it is more efficient throughput-wise that we leave it and jump on working with B.

So, after working with B, we will reach step 3, which gets awaited:

Request A step 3 -> Request B step 3

We continue with request A, and we assume that we received something back. Since the cache is empty, we continue to step 4. Oops, step 4 calls an API over the network, so this has to be awaited for sure:

Request B step 3 -> Request A step 4

Okay, time to handle Request B! Well, since we haven't written anything to the cache yet, request B also receives nothing, so it has to ask GitHub for the information... Oh no.

Request A is already doing exactly that, so if we ask GitHub through B for the same information, we would be wasting a request, right? Indeed, we would. So, what can we do?
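
To make the race concrete with the earlier (hypothetical) sketch: both calls check the cache before either has written to it, so both end up hitting GitHub for the same repo.

getIssues('some-project'); // Request A: cache miss, calls GitHub
getIssues('some-project'); // Request B: still a cache miss, calls GitHub again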

Well, instead of every request making its own call to GitHub, you have the first request create a Promise that every other request for the same resource can await, including the first request that created the Promise.

This is the way the service currently does it: it has a cache that stores these Promises, and every request for the same resource awaits the same Promise.
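
A minimal sketch of that idea might look like this (the names inFlight and fetchIssuesFromGitHub are still made up; the real service keeps its Promises in a plain global object):

// fetchIssuesFromGitHub is an assumed helper that calls the GitHub API.
const inFlight = new Map();

function getIssuesShared(repo) {
  if (!inFlight.has(repo)) {
    // First request for this repo: start the GitHub call and keep its Promise.
    inFlight.set(repo, fetchIssuesFromGitHub(repo));
  }
  // Everyone, including the first caller, awaits the same Promise,
  // so only one GitHub request is made per resource.
  return inFlight.get(repo);
}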

Now, what is the biggest problem with this? Well, the cache is not optimized for this at all. Actually, calling it a "cache" is giving it too much credit; it is just a global JavaScript object that holds those Promises.

How can we fix this, then? We cannot simply drop in redis and expect it to store Promises for us.

One way is to use redis to store the GitHub data after the Promise has resolved, which is fine, but now you have two sources of information whose expiration times have to be taken care of. It is also somewhat redundant, since you could have future requests refer to the already resolved Promise.
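
A sketch of that first option, assuming ioredis as the client (the key format and the ten-minute lifetime are also assumptions for the example):

const Redis = require('ioredis');
const redis = new Redis();

const ISSUES_TTL_SECONDS = 10 * 60; // hypothetical ten-minute lifetime

async function getIssuesWithRedis(repo) {
  // Serve the persisted copy if it hasn't expired yet.
  const stored = await redis.get(`issues:${repo}`);
  if (stored) {
    return JSON.parse(stored);
  }

  // Otherwise fall back to the shared in-flight Promise from before,
  // and persist the result (with its own expiration) once it resolves.
  const issues = await getIssuesShared(repo);
  await redis.set(`issues:${repo}`, JSON.stringify(issues), 'EX', ISSUES_TTL_SECONDS);
  return issues;
}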

Another way is to slightly change how we currently handle the requests. Since this is an implementation detail, it should be fine to change it as long as we keep the same behaviour. I have yet to think of a different way to handle the requests so that we do not run into this problem.

At the end of the day, while the dependency-discovery service delivers the feature and it works, there are a lot of wrinkles that we need to iron out.
