The new memory manager is Alpha. It's not production-ready and may be changed at any time. Opt-in is required (see the details below), and you should use it only for evaluation purposes. We would appreciate your feedback on it in YouTrack.
In the new memory manager (MM), we're lifting restrictions on object sharing: there's no need to freeze objects to share them between threads anymore.
In particular:
- `@SharedImmutable` is no longer needed to share top-level properties between threads.
- `Worker.executeAfter` no longer requires `operation` to be frozen.
- `Worker.execute` no longer requires `producer` to return an isolated object subgraph.
- `AtomicReference` and `FreezableAtomicReference` do not cause memory leaks.

A few precautions:
- `AtomicReference` from `kotlin.native.concurrent` previously required freezing the value, and we suggested using `FreezableAtomicReference` instead. Starting with 1.6.20, `AtomicReference` on the new MM behaves exactly like `FreezableAtomicReference`. Alternatively, you can use `AtomicRef` from `atomicfu`. Note that `atomicfu` has not reached 1.x yet.
- `deinit` on Swift/ObjC objects (and the objects they refer to) will be called on a different thread if these objects cross the interop boundary into Kotlin/Native.

The new MM also brings another set of changes:
- Before using `@EagerInitialization`, consult its documentation.
- `by lazy {}` properties support thread-safety modes and do not handle unbounded recursion. This is in line with Kotlin/JVM.
- Exceptions thrown in `operation` in `Worker.executeAfter` are processed like in other parts of the runtime: by trying to execute a user-defined unhandled exception hook, or terminating the program if the hook was not found or failed with an exception itself.

To enable the new memory manager, update to Kotlin 1.6.20 or newer.
Add the compilation flag -Xbinary=memoryModel=experimental. In Gradle, you can alternatively do one of the following:
In `gradle.properties`:

```
kotlin.native.binary.memoryModel=experimental
```
```kotlin
// build.gradle.kts
kotlin.targets.withType(KotlinNativeTarget::class.java) {
    binaries.all {
        binaryOptions["memoryModel"] = "experimental"
    }
}
```
If kotlin.native.isExperimentalMM() returns true, you've successfully enabled the new MM.
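As a quick sanity check, you can print this flag at startup (a minimal sketch; the function lives in the `kotlin.native` package, and the `main` wrapper is just for illustration):

```kotlin
import kotlin.native.isExperimentalMM

fun main() {
    // Prints true only when the binary was built with
    // -Xbinary=memoryModel=experimental (or the equivalent Gradle option).
    println("New MM enabled: ${isExperimentalMM()}")
}
```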
To improve performance, consider also enabling the concurrent implementation of the garbage collector's sweep phase. See more details here. It will be switched on by default in Kotlin 1.7.0.
To take full advantage of the new MM, we released new versions of the following libraries:
- `kotlinx.coroutines`: 1.6.0 or newer (will automatically detect when running with the new memory manager).
  - Objects can now be shared between coroutines freely, even across `Worker` boundaries.
  - Unlike in the `native-mt` version, library objects are transparent for freeze. For example, if you freeze a channel, all of its internals will get frozen, so it won't work as expected. In particular, this can happen when freezing something that captures a channel.
  - `Dispatchers.Default` is backed by a pool of `Worker`s on Linux and Windows and by a global queue on Apple targets.
  - Use `newSingleThreadContext` to create a coroutine dispatcher that is backed by a `Worker`.
  - Use `newFixedThreadPoolContext` to create a coroutine dispatcher backed by a pool of N `Worker`s.
  - `Dispatchers.Main` is backed by the main queue on Darwin and by a standalone `Worker` on other platforms. In unit tests, nothing processes the main thread queue, so do not use `Dispatchers.Main` there unless it is mocked, which can be done by calling `Dispatchers.setMain` from `kotlinx-coroutines-test`.
- `ktor`: 2.0.0 or newer.

In your project, you can continue using previous versions of the libraries (including `native-mt` for `kotlinx.coroutines`). The existing code will work just like with the previous MM. The only known exception is creating a Ktor HTTP client with the default engine using `HttpClient()`. In this case, you get the following error:
```
kotlin.IllegalStateException: Failed to find HttpClientEngineContainer. Consider adding [HttpClientEngine] implementation in dependencies.
```
To fix this, specify the engine explicitly by replacing `HttpClient()` with `HttpClient(Ios)` or another supported engine (see the Ktor documentation for more details).
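For instance, in an iOS source set the fix might look like this (a sketch assuming a dependency on the matching engine artifact, e.g. `ktor-client-ios`; the configuration block contents are up to you):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.ios.*

// Selecting the engine explicitly avoids the HttpClientEngineContainer
// lookup that fails with the default-engine HttpClient() call.
val client = HttpClient(Ios) {
    // engine- and plugin-specific configuration goes here
}
```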
Other libraries might also have compatibility issues. If you encounter any, report to the library authors.
Known issues:
For the first preview, we're using the simplest garbage collection scheme: a single-threaded stop-the-world mark-and-sweep algorithm, triggered after a certain number of function calls, loop iterations, and allocations. This significantly hinders performance, and one of our top priorities is addressing these performance issues.
We don't have convenient tools to monitor GC performance yet. So far, diagnosing requires looking at GC logs. To enable the logs, add the compilation flag `-Xruntime-logs=gc=info` in a Gradle build script:
```kotlin
// build.gradle.kts
kotlin.targets.withType(KotlinNativeTarget::class.java) {
    binaries.all {
        freeCompilerArgs += "-Xruntime-logs=gc=info"
    }
}
```
Currently, the logs are only printed to stderr. Note that the exact contents of the logs will change.
The list of known performance issues:
- Swift/ObjC interop keeps objects that cross the boundary pinned via stable refs in the root set, which grows the GC pause time. To mitigate, consider using `autoreleasepool` around loop bodies (both in Swift/ObjC and Kotlin) that do interop calls.
- Tweak `kotlin.native.internal.GC.threshold` and `kotlin.native.internal.GC.thresholdAllocations` to force GC to happen less often. Note that the exact meaning of `threshold` and `thresholdAllocations` may change in the future.
- `Worker`s and unconsumed `Future`s have objects pinned to the heap, contributing to the pause time. Like Swift/ObjC interop, this also manifests in a growing number of stable refs in the root set. To mitigate:
  - Check `Worker.execute` calls whose resulting `Future` objects are never consumed using `Future.consume` or `Future.result`. Make sure to either consume all `Future` objects or replace these calls with `Worker.executeAfter` instead.
  - Check for `Worker`s that were started with `Worker.start` but never stopped via `Worker.requestTermination()` (note that this call also returns a `Future`).
  - Make sure `execute` and `executeAfter` are only called on `Worker`s that were started with `Worker.start`, or that the receiving `Worker` manually processes events with `Worker.processQueue`.
- In some of our measurements we observed performance regressions with a slowdown of up to a factor of 5. In some other cases we observed performance improvements instead. If you observe regressions more significant than 5x, please report them to this performance meta issue.
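The `Future` hygiene described above can be sketched as follows (a minimal illustration; `TransferMode.SAFE` and the doubling job are arbitrary choices):

```kotlin
import kotlin.native.concurrent.Future
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.Worker

fun runTask(): Int {
    val worker = Worker.start()
    // execute returns a Future; leaving it unconsumed keeps the result
    // pinned to the heap via a stable ref in the root set.
    val future: Future<Int> = worker.execute(TransferMode.SAFE, { 21 }) { it * 2 }
    val result = future.result // consuming the Future releases the pinned object
    // requestTermination also returns a Future; consume it as well.
    worker.requestTermination().result
    return result
}
```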
Some libraries might not be ready for the new MM and the freeze transparency of `kotlinx.coroutines`, so unexpected `InvalidMutabilityException` or `FreezingException` might appear.
To work around such cases, we added a `freezing` binary option that disables freezing fully (`disabled`) or partially (`explicitOnly`). The former disables the freezing mechanism at runtime (making it a no-op), while the latter disables automatic freezing of `@SharedImmutable` globals but keeps direct calls to `freeze` fully functional.
To enable this, add the compilation flag -Xbinary=freezing=disabled. In Gradle, you can alternatively do one of the following:
In `gradle.properties`:

```
kotlin.native.binary.freezing=disabled
```
```kotlin
// build.gradle.kts
kotlin.targets.withType(KotlinNativeTarget::class.java) {
    binaries.all {
        binaryOptions["freezing"] = "disabled"
    }
}
```
NOTE: this option works only with the new MM.
If you want not just to work around the problem but to track down the source of the exceptions, `ensureNeverFrozen` is your best friend.
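For example, you can mark an object that must stay mutable right at construction (a minimal sketch; `Session` and its property are made up for illustration):

```kotlin
import kotlin.native.concurrent.ensureNeverFrozen

class Session {
    var requestCount = 0

    init {
        // Throws FreezingException at the exact point where this object would
        // be frozen, instead of a later InvalidMutabilityException on mutation.
        ensureNeverFrozen()
    }
}
```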
If you encounter performance regressions with a slowdown of more than a factor of 5, report to this performance meta issue.
You can report other issues with migration to the new MM to this meta issue.