In the previous four articles (Migration from Classic Hosting to Serverless, Migration of a Multiplayer Game from Hosted to Serverless, Migration of a Dynamic Website to a Static Website, and Migration of Mario 5 to Serverless) I've introduced you to my plan of migrating away from my dedicated server to a fully serverless infrastructure.
A little bit over 8 months ago I finished the migration (it's been a while!) of my website florian-rappl.de. So now it's time to reveal how it went and what improved (or even got worse).
Before I go into details - let's recap a bit.
Why Migrate?!
Quick recap: What did I expect from this migration:
- A cleaner code base (finally I can clean up my stuff, maybe remove some parts and modernize others)
- No more FTP or messy / unclear deployments - everything should be handled by CI/CD pipelines
- Cost reduction; sounds weird, but the last thing I want is a cost increase (today it's about 30 € per month for the hosting and my goal is to bring this below or close to 10 € - note: I pay much more for the domains, but these costs are not included here as they will remain the same).
There is a bit of background to this: Having my own dedicated server is something I was initially happy about. However, over the years the burden of properly maintaining this machine became a bit too high. I have quite a few things on my plate, and dealing with the (software side of the) dedicated server was always at the bottom of my to-do list.
So, here we are!
The Long Way
Surely, this is "just" a little homepage, but as you can see from the previous articles there are quite a few things I'd like to keep working. Overall, the whole migration was prepared along the way - starting with a quick brainstorming session. This identified what needs to be migrated and how. Having arrived at the end, it's time to look at my homepage and what kind of migration it needs.
The Final Piece - My Homepage
In a way this website belongs in a museum. It's been out there since 2008 and the inner core never really changed. Surely, there have been some migrations (e.g., from ASP.NET MVC to MVC 3 to MVC 4), but all in all the code never really changed.
At the core of the website is a custom CMS that allows me to write, edit, and publish articles on the website. An article may look like this:
While the content is dynamically rendered, other relevant parts of the page (such as the suggestions or tags) are retrieved in relation to the shown article.
The CMS is fully embedded in the website, i.e., the website also renders its own admin area that allows me to perform all the administrative tasks.
All the data is kept in a MySQL database. There are 28 tables for the whole website - with some tables (such as membership etc.) no longer being necessary. Most of the data is actually stored in the analytics tables, which are then used in the administrative area to display the usage charts.
I used red lines to indicate what was already successfully migrated.
So let's see how the migration process was done in detail.
Migration Process
First, I identified the general architecture. For me it was clear that in the long run I want to migrate the page to something more static, e.g., based on Astro. But right now I certainly don't have the time to look into this and actually do it.
As the website is an ASP.NET MVC 4 project that is certainly incompatible with ASP.NET Core, I was in a bit of a pickle. It was obvious that I could not use something really cheap (such as a Linux App Service or an Azure Container Instance). I still needed to run on Windows. On the other hand, I did not want a full virtual machine (too expensive, and too much maintenance required).
In the end, I went for an Azure App Service with a rather cheap plan (that is still fully covered by my monthly Azure allowance). I chose the B1 plan, which is around 30 € per month. Still, this plan already comes with support for custom domains and full-time operation.
Once I deployed the page I saw that the performance was just superb. Page generation took 0ms! Well, if things seem too good to be true - they usually are. So let's investigate...
The whole logic is triggered from a certain IHttpModule called TimingModule. The code looks like this:
namespace FlorianRappl
{
    using System;
    using System.Diagnostics;
    using System.Web;

    public class TimingModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.BeginRequest += OnBeginRequest;
            context.EndRequest += OnEndRequest;
        }

        public void Dispose()
        {
        }

        void OnBeginRequest(Object sender, EventArgs e)
        {
            // Start timing and stash the stopwatch in the per-request items
            var stopwatch = new Stopwatch();
            stopwatch.Start();
            HttpContext.Current.Items["Stopwatch"] = stopwatch;
        }

        void OnEndRequest(Object sender, EventArgs e)
        {
            // Stop the stopwatch once the request has been handled
            var stopwatch = HttpContext.Current.Items["Stopwatch"] as Stopwatch;
            if (stopwatch != null) stopwatch.Stop();
        }
    }
}
In short, when a request begins we start a stopwatch and place it in the current HttpContext. Once the request ends, we stop the stopwatch. This way, we can access it within our view and render the page with the stopwatch timing.
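To see the mechanism end to end, here is a self-contained sketch of the measure-then-render flow, with a plain Hashtable standing in for HttpContext.Current.Items (the real code runs inside a web host; TimingDemo and MeasureRequest are illustrative names, not part of the actual site):

```csharp
using System;
using System.Collections;
using System.Diagnostics;
using System.Threading;

public class TimingDemo
{
    // Stand-in for HttpContext.Current.Items (per-request storage)
    static readonly IDictionary Items = new Hashtable();

    public static long MeasureRequest(Action handleRequest)
    {
        // OnBeginRequest: start a stopwatch and stash it per request
        var stopwatch = Stopwatch.StartNew();
        Items["Stopwatch"] = stopwatch;

        handleRequest();

        // OnEndRequest / view: fetch the stopwatch back and read the timing
        var sw = (Stopwatch)Items["Stopwatch"];
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        var ms = MeasureRequest(() => Thread.Sleep(5));
        Console.WriteLine($"Page generated in {ms}ms");
    }
}
```

The important detail is that Items is per-request in ASP.NET, so concurrent requests never see each other's stopwatch.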
The problem is that the system.web configuration of the Web.config file is not fully considered when running in an Azure App Service. This is a consequence of the App Service running its own runtime - we don't bring the full runtime ourselves. Therefore, only configuration steps that happen after the runtime has initialized are considered.

Consequently, the following sections can be removed from the Web.config:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.web>
    <httpModules>
      <add name="TimingModule" type="FlorianRappl.TimingModule, FlorianRappl" />
    </httpModules>
    <!-- rest as-is -->
  </system.web>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true">
      <add name="TimingModule" type="FlorianRappl.TimingModule, FlorianRappl" />
    </modules>
    <!-- rest as-is -->
  </system.webServer>
  <!-- rest as-is -->
</configuration>
But what do we need to add to bring back the timing module? Introducing... the PreApplicationStartCode. Just introduce a class like this:
// Required so that ASP.NET invokes Start before the application starts
// (e.g., place this attribute in AssemblyInfo.cs):
[assembly: System.Web.PreApplicationStartMethod(typeof(FlorianRappl.PreApplicationStartCode), "Start")]

namespace FlorianRappl
{
    using System.Web;

    public class PreApplicationStartCode
    {
        public static void Start()
        {
            // Register the timing module programmatically
            HttpApplication.RegisterModule(typeof(TimingModule));
        }
    }
}
Now we have access to the startup code of the runtime. We use it to register the TimingModule in the application.
With this, everything is shipshape:
Now that this is working it's time to look at some performance improvements.
Performance Improvements
You may wonder: why is it slower than before? Well, there are many reasons. For one, we are now running in a shared environment. Before, we had a dedicated server.
However, the larger impact certainly comes from the database. Before, we were using a MySQL system running on the same machine (i.e., a short travel distance); now we run on very cheap Azure Table Storage (i.e., a longer travel distance and less query throughput).
In particular, there are many unnecessary queries running. One thing we can do to improve this is to introduce some caching. Specifically, we can cache the article listings (not the articles themselves) and the tags (they are often correlated and are needed for coming up with suggestions).
The effect of caching can be seen immediately.
But one thing was still missing... I realized that there is a certain spike in response times when the long polling endpoint is called. It turned out that ASP.NET only allows a single request per session to access the session state at a time. If more requests come in from the same session, these requests need to wait.
There is a simple way out of this trouble: declare the SessionState as ReadOnly. This way, we cannot modify the session associated with the request, but we also don't need to wait to avoid any race condition.
This can look like this (here, all endpoints of the controller are moved to this state):
using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)]
public class ApiController : BaseController
{
    // ...
}
For an ApiController that does not (or should not) work on the session, this should be the default behavior.
As far as the cache goes, the standard ASP.NET cache behaves rather poorly on the App Service. What we can do is introduce a ConcurrentDictionary to handle this:
private static readonly ConcurrentDictionary<string, object> cache = new();
This, of course, only works because we are restricted to a single instance. If the website ran in multi-instance mode we'd need a different solution such as a Redis cache (alternatively, we could then use the ASP.NET cache, as it is automatically synced within an App Service).
We introduce two convenience methods to control the cache.
private T GetOrAddCache<T>(String key, Func<T> callback) =>
    (T)cache.GetOrAdd(key, (_) => callback());

private void ResetCache(String key) =>
    cache.TryRemove(key, out _);
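One caveat with GetOrAdd: on a cold cache, two concurrent requests can both run the value factory before one result wins. If that ever matters (e.g., for expensive queries), wrapping the values in Lazy<T> guarantees a single execution per key. A small sketch under that assumption (LazyCache is an illustrative name, not part of the actual code):

```csharp
using System;
using System.Collections.Concurrent;

public class LazyCache
{
    // Values are Lazy<object> so the factory runs at most once per key,
    // even when two requests race on a cold cache.
    private static readonly ConcurrentDictionary<string, Lazy<object>> cache = new();

    public static T GetOrAddCache<T>(string key, Func<T> callback) =>
        (T)cache.GetOrAdd(key, _ => new Lazy<object>(() => callback())).Value;

    public static void ResetCache(string key) =>
        cache.TryRemove(key, out _);

    static void Main()
    {
        var calls = 0;
        Func<int> factory = () => { calls++; return 42; };

        var a = GetOrAddCache("answer", factory);
        var b = GetOrAddCache("answer", factory); // served from cache

        Console.WriteLine($"{a} {b} {calls}"); // prints "42 42 1"
    }
}
```

For this single-instance, low-traffic site the plain dictionary is perfectly fine; this is merely an optional hardening step.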
As an example, the FindAllTags method to retrieve all tags now looks like this:
public IEnumerable<Tag> FindAllTags() =>
GetOrAddCache(nameof(FindAllTags), () => Tags.Query<Tag>().ToList());
Likewise, when we mutate the tags we also need to reset the cache. Here, the Delete(Tag) method had to be adjusted:
public void Delete(Tag tag)
{
    Tags.DeleteEntity(tag.PartitionKey, tag.RowKey);
    ResetCache(nameof(FindAllTags));
}
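Putting the pieces together, the read/invalidate cycle behaves like this. The sketch below replaces the Azure Table Storage client with an in-memory list (TagRepository and the sample tags are made up for illustration, and the cache is instance-level here so the sketch is self-contained), but the caching logic mirrors the methods above:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public class Tag
{
    public string PartitionKey { get; set; } = "tags";
    public string RowKey { get; set; } = "";
}

public class TagRepository
{
    private readonly ConcurrentDictionary<string, object> cache = new();

    // In-memory stand-in for the Azure Table Storage table
    private readonly List<Tag> store = new()
    {
        new Tag { RowKey = "serverless" },
        new Tag { RowKey = "azure" },
    };

    private T GetOrAddCache<T>(string key, Func<T> callback) =>
        (T)cache.GetOrAdd(key, _ => callback());

    private void ResetCache(string key) =>
        cache.TryRemove(key, out _);

    public IEnumerable<Tag> FindAllTags() =>
        GetOrAddCache(nameof(FindAllTags), () => store.ToList());

    public void Delete(Tag tag)
    {
        store.RemoveAll(t => t.PartitionKey == tag.PartitionKey && t.RowKey == tag.RowKey);
        ResetCache(nameof(FindAllTags)); // next read repopulates the cache
    }

    static void Main()
    {
        var repo = new TagRepository();
        Console.WriteLine(repo.FindAllTags().Count()); // prints 2 (cached afterwards)
        repo.Delete(new Tag { RowKey = "azure" });
        Console.WriteLine(repo.FindAllTags().Count()); // prints 1 (freshly queried)
    }
}
```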
With everything in place we can now compare the behavior of the website with these changes (running in the cloud) to the previous solution.
Comparison
We started with the news overview page taking between 16ms and 25ms. This was on the dedicated server, with the database being integrated (i.e., running on the same machine) as a MySQL database.
Now we are running in Azure using an Azure App Service. We had to use a Windows App Service plan compatible with ASP.NET MVC 4, which is a bit pricey (the total cost would be a bit higher than what we previously had) - but due to my monthly Azure allowance it is still much cheaper than before.
With everything in place we reach times between 4ms and 50ms, with an average between 8ms and 10ms. That the variance is higher is not actually a problem, as long as the worst result stays within the 100ms time frame.
As the average time is now much better than before, I'm (on average) happy. I could potentially have achieved the same times on the dedicated machine; I just never looked into it. Since I thought the performance was good enough, I did not spend more time on it.
Next Steps
1️⃣ Migrate away from the custom analytics solution to Azure App Insights (or some other service - ideally one with a possible data import)
2️⃣ Put everything into a SSG such as Astro, i.e., generate the page fully at build (hopefully incrementally to avoid building 1000s of pages every time something small changes)
3️⃣ Use micro frontends to bring in little islands of interactivity that can be updated / deployed independently of the SSG part
The latter would only be interesting for the parts that definitely require JavaScript or data at runtime, such as the current list of articles on other websites (the "stream" on the homepage) or the little games that are available on the page.
Conclusion
It runs - faster, more modern, and far more cost efficient (no additional costs occur for the given subdomains). The crucial part was to identify a way of providing the content in a mode that best fits its purpose.
With memory utilization never exceeding 70% and a peak CPU utilization of 30%, I think the chosen plan works out nicely. While the plan would usually cost around 30 € monthly (thus certainly not being cheaper than my dedicated hosting before), the monthly Azure allowance reduces that to 0 €.
So overall the cost of the infrastructure was reduced significantly. Previously I had to pay about 30 € per month for the hosting - now it's just $5. In total I am now at about 15% of the original cost. Surely, without the monthly Azure allowance that ratio would not be the same - but let's not forget that the page does not need to run on an Azure App Service plan.
In the long run I will migrate it to a fully static page - thus not requiring much infrastructure and being far more efficient. But this is the content for another story...
🙏 Thanks for following my migration and happy coding everyone!