Kevin Galligan for Touchlab

Kotlin/Native Concurrency Changes…

On the JetBrains Blog, Roman Elizarov just dropped a pretty big piece of news, if you're in the Kotlin/Native and Multiplatform world.

If you still have questions after reading this post and want our input, submit your KN concurrency questions and we’ll get back to you.

The explicitly thread-confined, "frozen" concurrency model is changing.

I've been talking about this concurrency and runtime model for the past 2+ years. From a safety perspective, there's a lot to like. Unrestricted shared memory access is problematic, to put it mildly. Enforcing some rules at runtime means extra safety, and forces you to think about how your JVM code is architected. Maybe it could be done differently?

Of course, if I take a step back, I've produced a lot of content explaining this model over the past couple of years. That would imply that, perhaps, it is difficult for developers to learn. It is certainly a unique model, and the fact that the JVM and Native live by different rules has indeed been a source of confusion.

Unpacking The Post

I want to highlight some parts of the post and discuss them a bit.

The most important bit was in the TL;DR:

"Existing code will continue to work and will be supported."

It is important to stress that this post is talking about future changes, and that your existing Native code will be compatible with them.

"To solve these problems, we've started working on an alternative memory manager for Kotlin/Native that would allow us to lift restrictions on object sharing in Kotlin/Native…"

This means the runtime-enforced thread-confinement model will be going away. There was some confusion about what "object sharing" meant. That's not just global objects. It's the full memory management model.

"We plan to introduce it in a way that is mostly compatible with existing code, so code that is currently working will continue to work."

"mostly compatible" might sound like a concern, but unless you have something really exotic going on, it should work fine. Freezing will still be there, but it'll be optional (like Javascript). The annotations will still have implementations, even if they no longer do anything.
Your code will continue to work, and in many cases, work faster.
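For anyone who hasn't run into freezing yet, here's a minimal sketch of what it looks like today (the class and values are made up for the example):

```kotlin
import kotlin.native.concurrent.freeze
import kotlin.native.concurrent.isFrozen

// Made-up class, just to show the mechanics.
data class SessionState(var token: String)

fun freezeDemo() {
    val state = SessionState("abc").freeze() // freeze() recursively freezes the whole object graph
    println(state.isFrozen)                  // true
    // state.token = "xyz"                   // today: throws InvalidMutabilityException at runtime
}
```

Under the future model, that mutation restriction goes away; the freeze call itself can remain in place without breaking anything.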

"and we will be looking at ways to improve Kotlin's approach to working with immutable data in the whole Kotlin language, not just in Kotlin/Native."

This part is interesting. The fact that the JVM and Native worked differently at runtime was one of the biggest issues. Improving immutable data in the language, and applying that to concurrent best practices, sounds like a good thing, but we'll have to see what emerges.

"Meanwhile, we'll continue to support the existing memory manager, and we'll release multithreaded libraries for Kotlin/Native so you can develop your applications on top of them."

This is important. For our apps, we use the multithreaded coroutines branch. We're careful to use it in such a way as to avoid the potential memory leaks discussed in Roman's post, but it is a core part of the library ecosystem. There will be ongoing support for it.
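For context, this is roughly the pattern we lean on with the multithreaded ("native-mt") coroutines branch. A minimal sketch, with a made-up parseUsers function standing in for real work:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Hypothetical parser, only here to have some work to move off the main thread.
fun parseUsers(json: String): List<String> = json.split(",")

suspend fun loadUsers(json: String): List<String> =
    withContext(Dispatchers.Default) {
        // Under the current model, state captured by this lambda is frozen as it
        // crosses the thread boundary, so it must not be mutated afterwards.
        parseUsers(json)
    }
```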

What now?

Well, personally, I'm going to find other things to talk about at conferences :)

Other than that, not a whole lot changes right now. There's no timeline on the memory manager updates, but you can assume it'll be quite a while. This isn't a few months away. That means, for the foreseeable future, if you write Kotlin/Native code, you'll need to understand the frozen model and how to write code in it. All the stuff we've been talking about still applies.

KMP continues to mature as a platform, and the concurrent code you write for Native now will continue to work whenever these updates arrive.

I think we (Touchlab) may start producing content around how to accomplish certain tasks, rather than deeper explanations around the fundamentals ("frozen", etc.). Tech recipes, basically. For example, how to use Flow with SQLDelight, how to handle networking in various scenarios, etc. You can mostly hide the details of the current memory model, and you can write code that will transition well when these changes do come.
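To give a flavor of the recipe style, here's a minimal sketch of observing a SQLDelight query as a Flow. BreedQueries and Breed are hypothetical stand-ins for whatever SQLDelight generates from your .sq files:

```kotlin
import com.squareup.sqldelight.runtime.coroutines.asFlow
import com.squareup.sqldelight.runtime.coroutines.mapToList
import kotlinx.coroutines.flow.Flow

// BreedQueries and Breed stand in for SQLDelight's generated code; the names are hypothetical.
fun observeBreeds(queries: BreedQueries): Flow<List<Breed>> =
    queries.selectAll()   // generated query
        .asFlow()         // re-emits whenever the underlying table changes
        .mapToList()      // turn each emission into a List<Breed>
```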

So, not a whole lot changes right now, but changes certainly are coming. Ultimately, this will improve adoption, which is good for the ecosystem. I'm sure I'll have more to say down the road…

Top comments (5)

Daniele Baroncelli

I think it would be very useful to have a wrapper class that handles the typical multi-threaded coroutine operation, one which would eventually reflect the changes to the new memory management model under the hood, when they come.

Kevin Galligan

Do you have an example of what you'd like to see? Most of the concurrent libraries handle freezing under the hood, so you don't really code anything different. It's being aware of the implications. So, in general, your code won't change at all. The changes coming are a loosening of restrictions.

Daniele Baroncelli

Hi Kevin, I still haven't had a real chance to dig into practical code, but I had read your Kotlin/Native tutorial on the Kotlin website and it left me more confused than I was before. I am glad to read at the end of this article that Touchlab will now be focusing on "producing content around how to accomplish certain tasks, rather than deeper explanations around the fundamentals", as it sounds to me like a sensible path to make things clearer.
One of the key points to clarify, in my opinion, is how you can achieve multithreading when making network calls, as I have also read that Ktor on K/N currently doesn't allow calls on background threads, and you are forced to suspend the call on the main thread. It would be important to understand how things will differ between the current memory management model and the next one. This is why I was talking about wrapper classes, as I think it would make sense that these changes wouldn't affect your own code. By calling a wrapper class, the changes would be confined to that wrapper class, rather than spread through your own code.
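To make the suggestion concrete, something along these lines is what I mean (just a sketch, and the names are made up):

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Hypothetical wrapper: app code only ever calls runInBackground, so if the threading
// and freezing rules change with the new memory manager, only this function changes.
suspend fun <T> runInBackground(block: suspend () -> T): T =
    withContext(Dispatchers.Default) { block() }
```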

Kevin Galligan

Ktor is a special case because of the way Ktor specifically is architected, and Ktor on Native needs work. If the new memory model landed today, Ktor would likely still have the same restriction. It has less to do with the memory model than with how Ktor currently integrates with URLSession.
We're still hoping they produce a version that works with the multithreaded coroutines. We (Touchlab) had to hack around that to use it.
Now, if your calls don't return huge results that need parsing, calling and suspending on the main thread isn't really a problem. That's the point of suspend: it doesn't block. Changing threads just to initiate a network call doesn't buy much (there's a small sketch of the main-thread pattern at the end of this comment). Where this becomes an issue is when you have large results that get parsed on the main thread, or when you're already on a background thread and attempt a call.
Ktor was designed around coroutines, which is a great design if coroutines ran perfectly on all platforms, but they do not. We've been discussing a simple networking library alternative for a while, one using sync calls and/or making suspending calls an option rather than the default. However, the value of such a library would be temporary, so it has never left the discussion phase.
I don't think I explained much there. Just brain dumped. Recipe content may help, but again, keep in mind that ktor is a very special case in this mix. So much so that we may move to pushing our network calls to expect/actuals again until the MT version is resolved.
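Here's the main-thread pattern as a minimal sketch (Ktor 1.x-style API, a placeholder URL, and Dispatchers.Main assumes the native-mt coroutines on an Apple target):

```kotlin
import io.ktor.client.HttpClient
import io.ktor.client.request.get
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch

class ConfigRepository(private val client: HttpClient) {
    // Launching on Main and suspending doesn't block the UI; the coroutine resumes
    // when the response arrives. Only heavy parsing of a large body would actually
    // tie up the main thread.
    fun refresh(scope: CoroutineScope) {
        scope.launch(Dispatchers.Main) {
            val body: String = client.get("https://example.com/config.json")
            println("got ${body.length} bytes")
        }
    }
}
```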

Daniele Baroncelli

Hi Kevin, indeed in our project we have pretty small results, typically 2-3 KB JSON responses (5 KB maximum).
If parsing 5 KB isn't a problem on the main thread (using a suspend function), I am wondering what a typical app situation is where coroutines running on a background thread are actually necessary. I expect that reading from a local SQLite database or from settings wouldn't require a background thread either.
Would you say that unless a function is CPU-intensive (e.g. image processing, big file parsing, etc.), it's fine to use coroutines on the main thread?
If that is true, then I suppose MT coroutines shouldn't even be an issue for most KMP developers.