Update Aug 12, 2021: This article has been updated with a new section about using the new memory model with kotlinx-coroutines.
If you've been following Kotlin/Native at all over the last couple of years, you'll know that its memory model has been controversial. Last year, the Kotlin team committed to redesigning it, and this year they promised a preview by the end of the summer. Well, there hasn't been an official announcement yet, but that preview is present in the Kotlin 1.5.30 early-access release.
TL;DR
- Update your Kotlin version to 1.5.30-RC (see the snippet below)
- Add kotlinOptions.freeCompilerArgs += listOf("-memory-model", "experimental") to your Kotlin/Native compilations
- Add kotlin.native.cacheKind=none to gradle.properties
- To use coroutines with the new model, add https://maven.pkg.jetbrains.space/public/p/kotlinx-coroutines/maven to repositories and use version 1.5.1-new-mm-dev1.
- Mutate unfrozen objects from different threads
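As a quick sketch of that first step (assuming a Gradle Kotlin DSL build that applies the kotlin("multiplatform") plugin), the version bump looks something like this:

// build.gradle.kts
plugins {
    kotlin("multiplatform") version "1.5.30-RC"
}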
Wait, what?!
Yup! The memory model is controlled with the -memory-model command-line flag. Pass experimental for the new model or strict for the existing one*. Due to current limitations, you also need to disable compiler caching with the kotlin.native.cacheKind=none gradle property.

*You can also pass relaxed, but it's probably not a good idea.
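For reference, the cache setting is a single line in your project's gradle.properties:

# gradle.properties
kotlin.native.cacheKind=none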
You can pass the flag to all your Kotlin/Native targets by doing something like this from Gradle:
// build.gradle.kts
import org.jetbrains.kotlin.gradle.plugin.mpp.KotlinNativeTarget

kotlin {
    targets.withType<KotlinNativeTarget>().all {
        compilations.all {
            kotlinOptions.freeCompilerArgs +=
                listOf("-memory-model", "experimental")
        }
    }
}
Now you can freely mutate unfrozen state across threads. Let's see what that looks like.
Testing the two models
In the current strict memory model, if you wanted to write a function to run code in a background thread, it might look something like this:
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.Worker
import kotlin.native.concurrent.freeze

fun <T> doInBackground(action: () -> T): T {
    val worker = Worker.start()
    val future = worker.execute(
        TransferMode.SAFE,
        { action.freeze() },
        { it() }
    )
    return future.result
}
The function takes a lambda, freezes it, and executes it in a new Worker, which runs on a background thread. Because the lambda is frozen, the only way we can have mutable state is by using atomics, as in the following test (which can run on the iosX64 target):
import kotlin.native.concurrent.AtomicReference
import kotlin.test.*
import platform.Foundation.NSThread

@Test fun oldMemoryTest() {
    val didRunLambda = AtomicReference(false)
    assertTrue(NSThread.isMainThread)
    doInBackground {
        didRunLambda.value = true
        assertFalse(NSThread.isMainThread)
    }
    assertTrue(didRunLambda.value)
}
Here we initialize an atomic boolean to false on the main thread, mutate it to true on a background thread, and assert that it's true from the main thread.
Code like the above is the primary way of handling mutable state across threads in the current memory model (although it's usually hidden deep in the machinery of a library like kotlinx.coroutines), and it still works in the experimental model. But we can also do new things we couldn't do before.
We might naively expect that the new model will just let us drop the atomic and do something like the following:
@Test fun newMemoryTest() {
    var didRunLambda = false
    assertTrue(NSThread.isMainThread)
    doInBackground {
        didRunLambda = true // *
        assertFalse(NSThread.isMainThread)
    }
    assertTrue(didRunLambda)
}
However, this will fail at the starred line with an InvalidMutabilityException. Our doInBackground() function freezes the lambda, and the new memory model still respects freeze semantics and doesn't allow frozen things to change. That includes the didRunLambda boolean, which is captured from the outer scope.
So let's create a new backgrounding function. Note that this function works only in the new model, and will fail with an IllegalStateException if you use the existing strict memory model.
fun <T> doInBackgroundUnfrozen(action: () -> T): T {
    val worker = Worker.start()
    val future = worker.execute(
        TransferMode.SAFE,
        { action }, // No more freeze() call
        { it() }
    )
    return future.result
}
Subbing this function into the test allows it to pass in the new model.
@Test fun newMemoryTestUnfrozen() {
    var didRunLambda = false
    assertTrue(NSThread.isMainThread)
    doInBackgroundUnfrozen {
        didRunLambda = true
        assertFalse(NSThread.isMainThread)
    }
    assertTrue(didRunLambda)
}
Coroutines
There's also now an early dev release of coroutines with the new model! It's available from the coroutines internal Maven repo, so you need to add it to your repositories:
repositories {
    // ...
    maven(url = "https://maven.pkg.jetbrains.space/public/p/kotlinx-coroutines/maven")
}
Then you can add the coroutines dependency with version 1.5.1-new-mm-dev1:
kotlin {
    sourceSets {
        val commonMain by getting {
            dependencies {
                // ...
                implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.1-new-mm-dev1")
            }
        }
        // ...
    }
}
Now you can scrap doInBackground() and doInBackgroundUnfrozen(), and just use withContext():
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

@Test fun newMemoryCoroutinesTest() = runBlocking {
    var didRunLambda = false
    assertTrue(NSThread.isMainThread)
    withContext(Dispatchers.Default) {
        didRunLambda = true
        assertFalse(NSThread.isMainThread)
    }
    assertTrue(didRunLambda)
}
Further thoughts
It's pretty neat seeing this in action! As the Kotlin team promised previously, existing code written around freeze() in the strict memory model still appears to behave the same in the new experimental model. But we also now have the ability to pass unfrozen things across threads. This should mean that, once the model is finalized, existing code won't need to migrate immediately. However, early adopters of the experimental model will likely need to wait for any concurrency libraries they depend on to update, or else they'll still need to handle the existing freeze() behavior. That work has already begun in kotlinx.coroutines.
Caveats
The experimental new memory model is an undocumented preview release. I haven't tried much beyond what's presented here, and I have no idea what limitations or possible issues there are. Use at your own risk! That said, it might be a nice time to try it out, especially if you maintain library code that currently handles freeze-related logic. Be sure to report any bugs you see, and give JetBrains feedback on whether it works for you.
Should I use this in production?
No.
Thanks for reading! Let me know in the comments if you have questions, or you can reach out to me at @russhwolf on Twitter or the Kotlin Slack. And if you find all this interesting, maybe you'd like to work with or work at Touchlab.
Top comments (1)
Looking forward to the availability of synchronized(lock) { … } 👀