
The story of how I created a way to port Windows Apps to Linux

Rui Figueiredo · Originally published at blinkingcaret.com ・ 9 min read

One weekend, sometime around the summer of 2018, I was doing house chores while listening to a podcast.

The podcast I was listening to is called Coder Radio, and I was specifically listening to episode #322, "Not so QT".

That episode is about using Qt to develop a cross-platform GUI for a .NET application. In the end, they decided to give up on the idea, mainly because it was very complicated to set up, required development to happen on Windows (Qt does not support cross-compilation), and the license was prohibitively expensive.

When I heard this I thought: hmm, I think I know a way to solve this problem. I think I can come up with a solution that would work well in this context, specifically for business applications where memory usage is not too constrained.

A bit presumptuous and naive of me to think like this? Perhaps, but let me take you through that journey. I promise it won't disappoint.

windows logo and tux with an arrow going from windows to tux

The idea

.NET does not have a first-party solution for developing cross-platform GUIs. There are a few options, but they are not easy to set up and develop for.

On the other hand there's a technology that has been super popular for developing cross-platform apps which is Electron.

Electron has been heavily criticized for its heavy memory use (mostly because of Slack), but there are great applications written in it that feel super smooth (VS Code, for example), and they are probably responsible for enabling people to choose an operating system other than the one they normally use.

The problem is, you can't develop with .NET in Electron; it's all JavaScript and Node.js (I know, I know, there's Electron.NET, but trust me, what I'm talking about here is completely different).

So the idea was: if Electron is basically Node.js, and we can start a .NET process from Node, why can't we use Electron to build the UI and have all the behavior written in .NET? We just need a (non-convoluted) way of sending commands/requests between Node and .NET, and it should all work, right?

Turns out that yes, it works and you probably already use this approach all the time.

Any time you pipe the output of one command to another in the shell, you are basically using the same idea I'm about to describe.

And if you are skeptical about how robust this is, let me tell you that people do database restores/backups using this technique (e.g.: cat backup.archive | mongorestore --archive).

Ok, no more beating around the bush: the idea is to use the stdin and stdout streams to create a two way communication channel between two processes, in this case between Node.js and .NET.

In case these streams are news to you, the stdin (standard input stream) is normally used to read data from the terminal (like when a program asks you for input) and the stdout (standard output stream) is where you write to in your program to get data to show up in the terminal. These can be redirected (piped) so that the output of one becomes the input of the other.

Node.js has a module named child_process that contains a function, spawn, that we can use to spawn new processes and grab hold of their stdin, stdout and stderr streams.

When using spawn to create a .NET process we have the ability to send data to it through its stdin and receive data from it from its stdout.

Here's how that looks:

const { spawn } = require('child_process');

const spawnedProcess = spawn('pathToExecutable', ['arg1', 'arg2']);
spawnedProcess.stdin.write('hello .NET from Node.js');
spawnedProcess.stdout.on('data', data => {
    //data sent by the .NET process
});

Very simple idea, very few moving parts and very simple to set up.

Obviously, the code above in that form is not very usable. Here's an example of what I ended up creating:

const { ConnectionBuilder } = require('electron-cgi');

const connection = new ConnectionBuilder()
        .connectTo('DotNetExecutable')
        .build();
connection.send('greeting', 'John', (err, theGreeting) => {
    console.log(theGreeting);
});

The code above sends a request to .NET of type "greeting" with argument "John" and expects a response from .NET with a proper greeting to John.

I'm omitting a lot of details here, namely what actually gets sent over the stdin/stdout streams but that's not terribly important here.
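For the curious, here is one way such a channel could be framed. This is a hedged sketch, not necessarily ElectronCGI's actual wire format: each message is a JSON object followed by a newline, so the receiver can split the continuous byte stream back into individual messages, even when they arrive in partial chunks.

```javascript
// Illustrative only: newline-delimited JSON as a message framing scheme.
// A message can arrive split across several stdin 'data' chunks, so the
// parser buffers bytes until it sees a complete line.
const frame = message => JSON.stringify(message) + '\n';

const makeParser = onMessage => {
  let buffer = '';
  return chunk => {
    buffer += chunk;
    let newlineIndex;
    while ((newlineIndex = buffer.indexOf('\n')) !== -1) {
      onMessage(JSON.parse(buffer.slice(0, newlineIndex)));
      buffer = buffer.slice(newlineIndex + 1);
    }
  };
};

// The sender writes frame(...) to its stdout; the receiver feeds every
// chunk it reads from stdin into the parser.
const received = [];
const parse = makeParser(msg => received.push(msg));

const wire = frame({ type: 'greeting', args: 'John' });
parse(wire.slice(0, 10)); // partial chunk arrives: nothing emitted yet
parse(wire.slice(10));    // the rest arrives: the full message is parsed
```

The important property is that framing is independent of chunking: however the operating system slices the stream, messages come out whole on the other side.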

What I left out and is important is how this works in .NET.

In a .NET application it's possible to get access to the process' stdin and stdout streams. They are available through the Console class's In and Out properties.

The only care required here is to keep reading from the streams without closing them. Thankfully, StreamReader supports this through an overload of its Read method.

Here's how all that ended up looking in the first implementation of this idea in .NET:

var connection = new ConnectionBuilder()
                    .WithLogging()
                    .Build();

// expects a request named "greeting" with a string argument and returns a string
connection.On<string, string>("greeting", name =>
{
    return $"Hello {name}!";
});

// wait for incoming requests
connection.Listen();

First experiments

I called the implementation of this idea ElectronCGI (which is probably not the best of names, given that what this idea really enables is executing .NET code from Node.js).

It allowed me to create these demo applications, where the UI was built using Electron + Angular and/or plain JavaScript, with all non-UI code running in .NET.

Calculator Demo:

Animation of a simple calculator running

PostgreSQL database records browser:

An application to filter records from a database

In that last one, a query is performed on every keystroke and the results are returned and rendered. The perceived performance is so good that it totally feels like a native application, and all the non-UI code is .NET in both examples.

One thing that might not be obvious by looking at the examples is that you can maintain the state of your application in .NET.

One approach that is common with Electron apps is to use Electron to display a web page, where the actions you perform end up being HTTP requests to the server that hosts that page. That means you have to deal with everything HTTP-related (you need to pick a port, send HTTP requests, deal with routing, cookies, etc.).

With this approach, however, because there's no server and the .NET process sticks around, you can keep all your state there. And setup is super simple: literally two lines in Node.js and .NET and you can have the processes "talking" to each other.

All in all, this gave me confidence that this idea was good and worth exploring further.

Pushing on, adding concurrency and two-way communication between the processes

At the time of these demos it was possible to send messages from Node.js to .NET, but not the other way around.

Also, everything was synchronous, meaning that if you sent two requests from Node.js and the first took one minute to finish, you'd have to wait that full minute before you got a response for the second request.
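The difference is easy to simulate. In this hypothetical sketch each fake request just waits on a timer, but the effect is the same as in the real library: awaiting requests one by one takes the sum of their durations, while letting them run concurrently takes roughly the duration of the slowest one.

```javascript
// Hypothetical simulation of sequential vs. concurrent request handling.
const fakeRequest = (name, ms) =>
  new Promise(resolve => setTimeout(() => resolve(name), ms));

async function sequential() {
  // the second request doesn't even start until the first resolves
  const a = await fakeRequest('a', 50);
  const b = await fakeRequest('b', 50);
  return [a, b]; // total time is roughly 50ms + 50ms
}

async function concurrent() {
  // both requests are in flight at the same time,
  // so the total time is roughly that of the slowest one
  return Promise.all([fakeRequest('a', 50), fakeRequest('b', 50)]);
}
```

With 200 requests averaging 200ms each, the sequential version is the difference between waiting around 40 seconds and waiting a fraction of that.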

Because an image is worth more than a thousand words, here's how that would look visually if you sent 200 requests from Node.js to .NET, where every request took an average of 200ms to complete:

Grid where the numbered cells change color whenever the corresponding request complete

Enabling requests to run concurrently meant dealing with concurrency. Concurrency is hard.

This took me a while to get right, but in the end I used the .NET Task Parallel Library's Dataflow library (TPL Dataflow).

It is a complicated subject, and in the process of figuring it out I wrote two blog posts. In case you are curious about Dataflow, here they are: TPL Dataflow in .Net Core, in Depth – Part 1 and Part 2.

This is how much better the example above is when requests can be served concurrently:

Grid where the numbered cells change color whenever the corresponding request complete resolving much much faster

The other big feature that was missing was the ability to send requests from .NET to Node.js; previously, it was only possible to send a request from Node.js with an argument and get a response from .NET with some result.

For example:

connection.send('event.get', 'enceladus', events => {
    //events is a list of filtered events using the filter 'enceladus'
});

This was enough for simple applications, but for more complex ones the ability to have .NET send requests was super important.

To do this I had to change the format of the messages that were exchanged using the stdin and stdout streams.

Previously .NET's stdin stream would receive requests from Node, and responses to those requests were sent using its stdout stream.

To support duplex communication, the messages now include a type, which could be REQUEST or RESPONSE (later on I added ERROR as well). I also changed the API. In Node.js:

connection.send('requestType', 'optionalArgument', (err, optionalResponse) => {
    //err is the exception object if there's an exception in the .NET handler
});

//also added the ability to use promises:

try {
    const response = await connection.send('requestType', 'optionalArg');
} catch (err) {
    //handle err
}

//to handle requests coming from .NET:

connection.on('requestType', optionalArgument => {
    //optionally return a response
});

And in .NET:

connection.On<T>("requestType", (T argument) => {
    //return optional response
});

//and to send:

connection.Send<T>("requestType", optionalArgument, (T optionalResponse) => {
    //use response
});

// there's also an async version:

var response = await connection.SendAsync("requestType", optionalArgument);
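To make the duplex flow concrete, here is a hedged sketch of how such typed messages could be matched back to their callers. The field names are illustrative, not ElectronCGI's actual schema: each request carries an id, each side keeps a map of pending requests, and an incoming RESPONSE or ERROR with the same id settles the corresponding promise.

```javascript
// Illustrative message envelopes (not the library's real schema).
const makeRequest = (id, type, args) => ({ messageType: 'REQUEST', id, type, args });
const makeResponse = (id, result) => ({ messageType: 'RESPONSE', id, result });
const makeError = (id, error) => ({ messageType: 'ERROR', id, error: String(error) });

// Requests that have been sent but not yet answered, keyed by id.
const pending = new Map();

// writeToStream stands in for writing a framed message to stdout.
function send(writeToStream, id, type, args) {
  return new Promise((resolve, reject) => {
    pending.set(id, { resolve, reject });
    writeToStream(makeRequest(id, type, args));
  });
}

// Called for every message read from stdin.
function handleIncoming(message) {
  const entry = pending.get(message.id);
  if (!entry) return; // not a reply to anything we sent
  pending.delete(message.id);
  if (message.messageType === 'ERROR') entry.reject(new Error(message.error));
  else entry.resolve(message.result);
}

// Example round trip, with the "other process" simulated inline:
const written = [];
const reply = send(msg => written.push(msg), 1, 'greeting', 'John');
handleIncoming(makeResponse(1, 'Hello John!'));
reply.then(greeting => console.log(greeting)); // prints: Hello John!
```

Because both sides can play both roles, sender and responder, the same machinery gives you requests flowing in either direction over the same pair of streams.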




Proof: porting a Windows Store application to Linux

When I first started with this idea, I imagined that a good proof of its viability would be to pick an application built using MVVM and take its ViewModels, which are (or should be) UI-agnostic, and use them, unaltered, in an application built using this approach.

Thankfully, I had a game I built for the Windows Store around 2014 for which I still had the source code. The game was named Memory Ace, and you can still find it in the Windows Store here.

Memory ace screenshot

Turns out I was able to re-use all of the code to create the cross-platform version with no problems. Here it is running on Ubuntu:

Memory Ace running on Ubuntu

I was also able to run it on Windows with no problems. I don't own a Mac, so I could not try it there.

If you want to have a look at the source code, you can find it here. Also, the source for ElectronCGI is here for Node.js and here for .NET.

Also, here are some blog posts with extra information: ElectronCGI 1.0 – Cross-platform GUIs for .Net Core and ElectronCGI – Cross Platform .Net Core GUIs with Electron.

You can also see here how easy it is to set up a project with ElectronCGI (the walkthrough uses an outdated version, but the process is identical).

So that's it. If I managed to grab your attention until now, can I kindly ask for your help?

I've been personally affected by the COVID-19 pandemic. I was working as a contractor at a company in the hospitality sector that was badly affected and had to let everyone go, me included.

I appreciate that you might not be in a position to offer me a job, but any help is appreciated; for example, if your company has open roles, you can suggest me (I'm well versed in .NET, Node.js, React, Angular and several other technologies). Maybe there's even a referral program.

Or maybe you can add some endorsements on my LinkedIn profile.

Or if you know of any roles I could be a good fit for let me know, here's my twitter (my DMs are open).

Take care and stay safe.

Posted on Oct 22 '18 by Rui Figueiredo (@ruidfigueiredo)
