Bahman Nikkhahan

Moving a legacy application to containers

In this post, I am going to talk about the areas you need to focus on when containerizing a legacy ASP.NET application. What are the gotchas? This post is based on my recent experience containerizing a legacy app. Hopefully, it helps you avoid the pain I went through :)

By legacy app I mean an app that you cannot easily convert to .NET Core or .NET 5, so you need Windows Containers to run it.

Configurations

I talked about configuration builders in part 2 of this series. It is important to know how you can inject configuration values into containers.
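As a rough sketch (assuming the Microsoft.Configuration.ConfigurationBuilders.Environment NuGet package), a web.config can let environment variables set on the container override matching appSettings keys:

```xml
<configuration>
  <configSections>
    <!-- Registers the configBuilders section (available since .NET Framework 4.7.1) -->
    <section name="configBuilders"
             type="System.Configuration.ConfigurationBuildersSection, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             restartOnExternalChanges="false" requirePermission="false" />
  </configSections>

  <configBuilders>
    <builders>
      <!-- Reads environment variables, e.g. ones passed with `docker run -e` -->
      <add name="Environment"
           type="Microsoft.Configuration.ConfigurationBuilders.EnvironmentConfigBuilder, Microsoft.Configuration.ConfigurationBuilders.Environment" />
    </builders>
  </configBuilders>

  <!-- Any key here with a matching environment variable gets overridden at startup -->
  <appSettings configBuilders="Environment">
    <add key="ConnectionString" value="value-overridden-by-container-env" />
  </appSettings>
</configuration>
```

With this in place, a `docker run -e ConnectionString=...` flag or a Kubernetes `env` entry replaces the value without rebuilding the image.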

Local logs

Application logs

If your application logs to the local file system, you probably need to change this. The reason is that anything saved inside a container is gone once the container is re-created. You can use a cloud service like Azure Application Insights for this purpose. If cloud services are not your preferred option, you will need to find another way to persist your logs.

IIS logs

As an ASP.NET developer, it is very common to check IIS logs every now and then for debugging purposes. You may still be able to do that in Windows containers, but as I mentioned above, there is no persistent storage in containers, so you need a way to save or stream the logs somewhere else. Even if you don't want persistent storage for IIS logs, you still may want to access them somehow.

You can exec into each container to view the logs. A better way, however, is Microsoft's Log Monitor, a tool that writes IIS logs to the container's standard output. If you are using a container orchestration framework like Kubernetes, or a managed offering like Azure Kubernetes Service, you can then view the logs with the usual tooling (e.g. kubectl logs).
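A minimal Dockerfile sketch of this pattern, based on the examples in the microsoft/windows-container-tools repository (the base image tag and file locations are assumptions; adjust them for your setup):

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019

WORKDIR /LogMonitor
# LogMonitor.exe and a LogMonitorConfig.json that points at the IIS log
# directory (C:\inetpub\logs) come from the windows-container-tools releases.
COPY LogMonitor.exe LogMonitorConfig.json ./

# Wrap the IIS service monitor with LogMonitor so IIS logs are relayed to
# STDOUT, where `docker logs` / `kubectl logs` can pick them up.
ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "C:\\ServiceMonitor.exe", "w3svc"]
```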

Local files

If your application saves any file locally, even a temporary one, you may need to change this. Again, the main reason is the lack of persistence. There can be other reasons too, such as the application not being ready to work behind a load balancer.

Suppose you are using Kubernetes to manage your containers. Most probably, you will end up with multiple containers running your application, and your application may not be prepared for that. For example, it saves a temporary file on a container's file system and stores the file name in the database. When the time comes to process the file, it may not be found, because the request could have been routed to a different container which doesn't have the file locally.

Caching

Depending on the type of caching mechanism you use, you may need to revisit it. In our case, we had in-memory caching which we had to change. The reason, however, wasn't just containerizing the app: it wasn't easy to get in-memory caching working in a load-balanced environment. Instead, we used Azure Cache for Redis, which was easy to work with (unless you find yourself stuck in assembly redirects!).
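A small sketch with the StackExchange.Redis client (the connection string, environment variable, and keys are illustrative; Azure Cache for Redis gives you a host and access key to plug in):

```csharp
using System;
using StackExchange.Redis;   // NuGet: StackExchange.Redis

class SharedCacheDemo
{
    static void Main()
    {
        // e.g. "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=True"
        string conn = Environment.GetEnvironmentVariable("REDIS_CONNECTION");
        using (var redis = ConnectionMultiplexer.Connect(conn))
        {
            IDatabase cache = redis.GetDatabase();

            // Unlike in-memory caching, every container instance behind the
            // load balancer reads and writes the same store.
            cache.StringSet("product:42:name", "Widget", expiry: TimeSpan.FromMinutes(10));
            string name = cache.StringGet("product:42:name");
            Console.WriteLine(name);
        }
    }
}
```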

ASP.Net Sessions (Web Forms)

Similar to caching, you may need to check ASP.NET sessions if you are using ASP.NET Web Forms. We have some legacy Web Forms applications in our containers and had to make sure ASP.NET sessions work correctly when more than one instance of the application (container) exists. Again, we used Azure Cache for Redis for this purpose.
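For Web Forms, the usual approach is the Microsoft.Web.RedisSessionStateProvider NuGet package, configured in web.config roughly like this (host name and key are placeholders):

```xml
<system.web>
  <!-- Swap the default in-process session store for a shared Redis store -->
  <sessionState mode="Custom" customProvider="RedisSessionStateProvider">
    <providers>
      <add name="RedisSessionStateProvider"
           type="Microsoft.Web.Redis.RedisSessionStateProvider"
           host="mycache.redis.cache.windows.net"
           port="6380"
           accessKey="your-access-key"
           ssl="true" />
    </providers>
  </sessionState>
</system.web>
```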

Third-party integrations

Most of the time, legacy applications have legacy integrations too. If your application only talks to other applications through APIs, you are in luck. Otherwise, you need to review each integration.

In our case, the application integrates with multiple systems using different mechanisms. For instance, we were using a third-party C# library that required a license file to exist on the server, generated when the server was set up. That works well in a non-containerized environment, but it is not container friendly: containers have a short life span, and we didn't want to generate a license for every container we spin up. Luckily, we were able to remove this dependency.

Another example was a certificate used to communicate with a third-party application. Our support team had to install this certificate on our web servers. Again, this doesn't work well with containers; instead, we put the certificate in Azure Key Vault and used it from there.

Using Windows components other than IIS

If you are only using IIS on your servers, you should be able to get the app working in a Windows container fairly easily. You can inject the application's configuration values using configuration builders, as I mentioned above. However, if you are using other Windows components, you need to think about alternatives.

We were using two particular Windows components which caused some issues for us.

MSMQ: MSMQ is a Windows queuing service that can be used to create and manage tasks in queues. It exists in the base Windows container image we were using. The problem, however, was that it didn't make sense to have a queue inside each container when several containers could be running the same version of the app. After some investigation, we realized it was no longer needed and we could achieve the same thing by maintaining a single table in the database.

Windows Task Scheduler: We were using Windows Task Scheduler to run an application periodically. If you want to do the same in a container, there is an issue with injecting your configuration values into the app using configuration builders (check this). Instead, we created a separate container for this particular application and used Kubernetes CronJobs to run it periodically.
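A sketch of such a CronJob manifest (names, image, schedule, and secret are hypothetical; current Kubernetes uses the batch/v1 API group, older clusters batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-task            # hypothetical name
spec:
  schedule: "0 2 * * *"           # run daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            kubernetes.io/os: windows   # keep the Windows container on Windows nodes
          containers:
          - name: scheduled-task
            image: myregistry.azurecr.io/scheduled-task:latest   # hypothetical image
            env:
            - name: ConnectionString    # configuration injected as environment variables
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: connectionString
          restartPolicy: Never
```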

Moving to cloud?

If you are containerizing your application, it is likely that you are going to host your application in the cloud. We used Azure Kubernetes Service (AKS) to host our cluster.

When you move from on-premises systems to the cloud, you can face some extra challenges. For example, we had issues around time zones. Our application used to run on servers set to the local time zone; in the cloud, everything was on UTC. So we ended up with some broken areas that we had to fix. One example in our C# code was the DateTime.Now property, which returns the date and time in the server's time zone. Where we used to get local time, we were now getting UTC, so we had to make sure this didn't break anything, and fix it where it did.
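A minimal illustration of the pitfall: the same line of code returns different values depending on the server's time zone, while DateTime.UtcNow is consistent everywhere (the time zone id below is just an example):

```csharp
using System;

class TimeZoneDemo
{
    static void Main()
    {
        // DateTime.Now uses the machine's time zone: local on-premises,
        // usually UTC inside cloud containers, so behaviour silently changes.
        Console.WriteLine($"Now:   {DateTime.Now}");

        // Safer: store and compare in UTC, convert only for display.
        DateTime utc = DateTime.UtcNow;
        var zone = TimeZoneInfo.FindSystemTimeZoneById("New Zealand Standard Time");
        Console.WriteLine($"UTC:   {utc}");
        Console.WriteLine($"Local: {TimeZoneInfo.ConvertTimeFromUtc(utc, zone)}");
    }
}
```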

Summary

In summary, containerizing a legacy application can be challenging if your tech stack is old and there are many unknowns. I suggest doing a proper analysis of the items I mentioned here before going with containers. For example, ask yourself about your app's logging requirements, the way it integrates with other systems, the caching mechanism it requires, and so on. These questions can help you identify and manage the risks on your way.
