Create JSON manually with kotlinx.serialization

kotlinx.serialization is a great library for serialisation in Kotlin. It is mainly geared towards serialising objects to strings and back, but on closer look it also contains a comprehensive JSON library. Even after reading the documentation, though, using this part of the library can be confusing.

🕋 Serialisation to objects

Consider a data class:

@Serializable
data class Credentials(
	val publicKey: String,
	val privateKey: String,
)
The @Serializable annotation enables the encoding to string and back:

val credentials = Credentials("publicKey", "privateKey")

val stringValue = Json.encodeToString(credentials)

val credentialsDecoded = Json.decodeFromString<Credentials>(stringValue)

	Credentials(publicKey=publicKey, privateKey=privateKey)

With object serialisation, the library shines with its ease of use. With raw JSON, however, the usage becomes less obvious.

🎞 Serialisation to JSON string, manually

When creating a web request, a separate class for posting the data is not required. Instead, the request body can be created with the JSON features of the serialisation library, which offer a comprehensive set of functions for most JSON encoding problems.

Creating our credentials string, for example, would look like:

val credentials = JsonObject(
        mapOf(
            "publicKey" to JsonPrimitive("publicKey"),
            "privateKey" to JsonPrimitive("privateKey")
        )
)

val array = JsonArray(listOf(credentials))


There is also a DSL version of this construction, which might be preferred:

val credentials = buildJsonArray {
    addJsonObject {
        put("publicKey", "publicKey")
        put("privateKey", "privateKey")
    }
}



🌐 Serialisation for YAML and other formats

Only JSON, and some experimental formats, are supported out of the box. For others, like YAML, an external library that implements a custom format can be used.

With such a dependency (for example, the kaml library), a YAML string can be decoded into a @Serializable object as follows:

val yamlEncoded = """
        publicKey: "publicKey"
        privateKey: "privateKey"
    """.trimIndent()

// assuming the kaml library's Yaml entry point
val credentials = Yaml.default.decodeFromString<Credentials>(yamlEncoded)


	Credentials(publicKey=publicKey, privateKey=privateKey)

🌸 Pretty printing a JSON string

If JSON input arrives without line breaks, it can be useful to make it more human-readable. The Json instance can be configured with the prettyPrint property:

val format = Json { prettyPrint = true }
val input = """{"publicKey": "publicKey", "privateKey": "privateKey"}"""

val jsonElement = format.decodeFromString<JsonElement>(input)
val bodyInPrettyPrint = format.encodeToString(jsonElement)

	{
	    "publicKey": "publicKey",
	    "privateKey": "privateKey"
	}

As shown here, the input string needs to be decoded into a JsonElement and then encoded back to a string. Only then does the prettyPrint property achieve our goal.

⌛️ Conclusion

Kotlinx.serialization is a great tool for serialising objects and parsing JSON strings. Since it is a young library in a young language, all of its features might not be obvious at first, so studying the documentation before using it in code is encouraged.

Sample code is available in tonisives repo.



Add a sequence diagram to Notion

In order to express their ideas about a project’s architecture, developers can use UML sequence diagrams.

There are different ways to create and publish these diagrams, ranging from paid apps ($8/month, $100) to free Confluence apps.

Choosing between all of these options can be a formidable task. As a programmer, a good solution is to write the diagram as text, and later commit it or convert it to an image.

PlantUML ☘️

With PlantUML, sequence diagrams can be defined in a text format, and output to .svg or other image formats via a command line tool.

Consider the diagram:

Alice and Bob sequence diagram

This can be defined in a very human-readable format inside a diagram.puml file:

@startuml
Alice -> Bob: Authentication Request
Bob --> Alice: Authentication Response

Alice -> Bob: Another authentication Request
Alice <-- Bob: Another authentication Response
@enduml

And then converted into .svg using the command line tool:

java -jar plantuml.jar -tsvg diagram.puml

There are different extensions for PlantUML. In Atlassian products, one can write the definition and it will be rendered as an image in the document.

For the Notion workspace, there is no official integration. Instead, we can write a script and use their API.

Notion table 📝

First, the diagram should be described using either the PlantUML text format, or as a relational table.

If we want to use relations between UML objects, we need to use the latter.

Services table

In the first table, a single column with our services Alice and Bob needs to be described.

Alice and Bob services

Diagram table

In the second table, the actual diagram will be described with actors from the related services 👆🏽 table.

Alice and Bob diagram

The Note field will be used as the action name.

Output from the tables ⚙️

To convert the tables into UML, we can use a Python script to download the Notion table, convert it to PlantUML, and output the UML image. notion-py, the “Unofficial Python 3 client for Notion API v3”, can be used to:

  • request the tables from our Notion workspace
client = NotionClient(token_v2=token)
cv = client.get_collection_view(sequence_diagram_table_url)
  • translate the tables into PlantUML
for row in cv.collection.get_rows():
    service_origin_block = row.origin[0]
    service_end_block = row.end[0]
    # attribute names in the line below are illustrative
    pum += "\"{}\" -> \"{}\": {}\n".format(
        service_origin_block.title, service_end_block.title, row.note)
  • output the final image using plantuml.jar
subprocess.Popen(["java", "-jar", "plantuml.jar", "-tsvg", "out.puml"])


After calling our script:

python\?v\=0de421bd4e2b485fb624ff4edc527e0d Alice

we get our sequence diagram result

Alice and Bob diagram from Notion table

Optionally, the output could be uploaded to Notion or other services.

The script is available in tonisives repo. It is based on the official Notion blog post.

Conclusion 🖋

It is possible to create sequence diagrams in a text format, and convert them to images later.

What are the positives? 👍🏽

  • Can edit diagrams in a text format.
  • Don’t have to load a web page or a different app.
  • Can generate UML from command line, then transfer the image anywhere.

What are the negatives? 👎🏽

  • Cannot use a GUI program
  • Requires scripting and shell usage knowledge
  • More difficult diagrams can be hard to write



Callback styles for async tasks

For asynchronous tasks, the actions on completion need to be handled via a callback. There are different patterns to achieve this, with each having their own benefits and shortcomings.


Interfaces

One of the oldest callback styles is interfaces, or anonymous classes. They are used to great effect in Android. As an example, with the OkHttp library, a network request could be sent like this:

okHttp.newCall(request).enqueue(new Callback() {
  @Override public void onFailure(Call call, IOException e) {
    // handle the failure
  }

  @Override public void onResponse(Call call, Response response) {
    // handle the response
  }
});

Interface use is very convenient, because the request and callback can be written in one statement, and all of the outer class’s properties are available in the nested function.

However, shortcomings arise when handling multiple tasks. Consider a case where we need to wait for all of the requests to finish. Then, response-counting logic is required:

Response[] successfulResponses = new Response[requests.size()];
final int[] responseCount = {0};

for (Request request : requests) {
  okHttp.newCall(request).enqueue(new Callback() {
    @Override public void onFailure(Call call, IOException e) { }

    @Override public void onResponse(Call call, Response response) {
      // note: not thread-safe; this illustrates the awkwardness
      successfulResponses[responseCount[0]] = response;
      responseCount[0]++;
    }
  });
}

// wait for all of the responses
while (responseCount[0] != requests.size()) {
  // busy-wait or sleep
}

// all responses are here
System.out.println("All tasks finished");
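Even without a new API, the busy-wait above can be made safer with a CountDownLatch. A minimal Kotlin sketch, with threads standing in for network callbacks:

```kotlin
import java.util.Collections
import java.util.concurrent.CountDownLatch
import kotlin.concurrent.thread

fun main() {
    val taskCount = 3
    val latch = CountDownLatch(taskCount)
    val responses = Collections.synchronizedList(mutableListOf<String>())

    repeat(taskCount) { i ->
        // each thread stands in for an async request callback
        thread {
            responses += "response-$i"
            latch.countDown()
        }
    }

    latch.await() // block until every callback has fired
    println("All tasks finished: ${responses.size}") // All tasks finished: 3
}
```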

There is a better way to handle this kind of scenario.


CompletableFuture

Since Java 8 and Android API 24 (or with a support library), CompletableFuture is available. It aims to fix the interface shortcomings like scattered callback locations, deeply nested callbacks, and sequential task management.

With this new API, waiting for all of the answers can be done with the allOf() method:

CompletableFuture<Response>[] requests;

CompletableFuture<Void> tasks = CompletableFuture.allOf(requests);

CompletableFuture<Void> cf = tasks.thenRun(() ->
  System.out.println("All requests finished"));

// block until all of the tasks have run
cf.join();
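The same allOf() flow can be sketched runnably in Kotlin, with supplyAsync standing in for real network calls:

```kotlin
import java.util.concurrent.CompletableFuture

fun main() {
    // three simulated requests; supplyAsync stands in for enqueueing a call
    val requests = listOf("a", "b", "c").map { id ->
        CompletableFuture.supplyAsync { "response-$id" }
    }

    // allOf() completes once every request has completed
    CompletableFuture.allOf(*requests.toTypedArray())
        .thenRun { println("All requests finished") }
        .join() // block the main thread until then

    // the individual results are now available without blocking
    println(requests.map { it.join() }) // [response-a, response-b, response-c]
}
```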

Single completions can also be observed:

for (CompletableFuture<Response> request : requests) {
  request.thenAcceptAsync(response -> {
    System.out.println("request response: " + response);
  });
}

The CompletableFuture API is expansive, with different methods for creating, combining and executing tasks. Some extra benefits are:

  • Sequential task management
  • Integration with Kotlin coroutines, Streams API, RxJava

LiveData and RxJava

A recent paradigm shift in programming has been the introduction of the Observable pattern. It is now even Android’s recommended app architecture style.

What differentiates it from the previous styles is that a single callback is used for all updates of a property. A common scenario is a view state that advertises its value changes. Only the new value is advertised, regardless of the source of the change.

In our case it would mean that we wouldn’t get the response from the web request directly, but from a field in the ViewState object:

class ViewState {
  String response;
  String error;
}

MutableLiveData<ViewState> viewState = repository.getViewState();

// observe the view state. Observer count is unlimited
viewState.observe(this, state -> {
  // update the UI with the new state
});

// the repository makes the requests internally and updates the
// viewState object
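Outside Android, the single-callback idea can be sketched with the Kotlin standard library's Delegates.observable, used here as a rough stand-in for LiveData:

```kotlin
import kotlin.properties.Delegates

fun main() {
    val updates = mutableListOf<String>()

    // one callback observes every change, regardless of who sets the value
    var viewState: String by Delegates.observable("initial") { _, _, new ->
        updates += new
    }

    viewState = "loading"   // e.g. set by the repository before a request
    viewState = "response"  // e.g. set when the request completes

    println(updates) // [loading, response]
}
```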

RxJava’s possibilities are even greater than those of CompletableFuture, including a rich set of chaining operators. Before jumping in, though, one has to consider the learning curve of a new programming paradigm.


Conclusion

There are different use cases for all of the aforementioned async task callback styles.

Interfaces can be used for simple tasks or callbacks that are only run once and no combination is needed.

Chained tasks or more complicated process management is handled better with the CompletableFuture.

Observable pattern can be used for even greater flexibility and added benefits. It is a programming paradigm shift though, and weighing the benefits over the skill acquisition time is recommended.


Logging in a Java library

It can be useful to emit logs in a library. When doing so, one needs to consider when to emit, how to filter and who is responsible for printing/handling the logs. Correct logging should also be tested.

When to log

There are different reasons to emit a message, for instance on important events, undefined behaviour or different levels of debug events.

Any potentially useful message should be emitted. However, in order to not clutter the terminal, output should be refined.

Filtering the logs

A library logging level should be configurable according to user preference:

/** Possible logging levels. */
public enum Level {
    /** No log messages */
    NONE,
    /** Informational messages and errors only */
    INFO,
    /** Debug messages */
    DEBUG,
    /** All messages, including fine traces */
    ALL
}

It is expected that important and error messages are emitted by default, so Level.INFO should be the default setting.

However, if finer traces are required, the filter could be set to DEBUG:

Library.loggingLevel = Level.DEBUG

The library should then filter the messages according to level:

static void logDebug(String message) {
    // Java enums are compared via compareTo()
    if (loggingLevel.compareTo(Level.DEBUG) >= 0) {
        logger.debug(message);
    }
}
Example of filtering implementation in hmkit-android library.
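As a runnable sketch of such a filter, here is a Kotlin version (enum constants are Comparable in Kotlin, so >= works directly; the level names are illustrative):

```kotlin
// illustrative level names; declaration order defines the comparison order
enum class Level { NONE, INFO, DEBUG, ALL }

var loggingLevel = Level.INFO
val emitted = mutableListOf<String>()

fun logDebug(message: String) {
    // only emit when the configured level is DEBUG or finer
    if (loggingLevel >= Level.DEBUG) emitted += message
}

fun main() {
    logDebug("hidden")           // filtered out at the default INFO level
    loggingLevel = Level.DEBUG
    logDebug("shown")
    println(emitted)             // [shown]
}
```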

Choosing a logging framework

The user could be using any Java subsystem, or maybe emitting the messages to a web service. For this reason, the library should never output to System.out.println or android.util.Log. Instead, logs should be emitted through an interface, and it should be up to the user to choose where the messages are output in the end.

A popular logging facade is slf4j. Its api artifact should be included in our library:

implementation 'org.slf4j:slf4j-api:1.7.25'

Then logs can be emitted through a Logger instance:

// sample informational log
logger = LoggerFactory.getLogger(Library.class);
logger.info("Library initialised");

Our library’s user could then continue using their favourite logging framework and add an slf4j binding to see the messages. For instance, a Timber binding:

implementation 'at.favre.lib:slf4j-timber:1.0.0'

Testing the emitted logs

To verify the emitted logs, a test setup is necessary. MockK can be used to mock slf4j and verify calls to its Logger.

For this to work, each test could inherit from BaseTest, which initialises the mock:

lateinit var mockLogger: org.slf4j.Logger

@Before
fun before() {
  mockLogger = mockk()
  every { MyLoggerFactory.getLogger() } returns mockLogger

  every { mockLogger.warn(allAny()) } just Runs
  every { mockLogger.debug(allAny()) } just Runs
  every { mockLogger.error(allAny()) } just Runs
}

This class could also contain convenience lambda methods to assert the emitted logs:

fun debugLogExpected(runnable: Runnable, count: Int = 1) {
    // run the code under test, then verify the expected debug calls
    runnable.run()
    verify(exactly = count) { mockLogger.debug(allAny()) }
}

From the derived class’s test method, an assertion can then be written about the log message:

@Test fun invalidStartControlControlModeThrows() {
    debugLogExpected {
        val action = Library.resolve()
    }
}

This test will succeed if one debug message is emitted to the slf4j interface.

Please see auto-api-java as an example of using this pattern.


Conclusion

Logging in a library can be very beneficial. The maintainer should, however, be aware of the library user’s perspective and filter overly detailed logs by default. Log printing should be left to the user, and correct log emission should be tested with unit tests.


Kotlin: Concatenating nullable strings

A null String is concatenated as “null” in Kotlin:

val first:String? = "first"
val second:String? = null

println("First is $first and second is $second")
> First is first and second is null

What if the goal is to concatenate only if the second string is not null? There are a few options:


One of the solutions is to check for a null value with the elvis (?:) operator:

var result = "First is ${(first ?: "")} and second is ${(second ?: "")}"
> First is first and second is 

Now there is no “null”, but the whole “and second is” part is superfluous because the second string has no value. With an if case this segment can be omitted completely:

result = 
(if (first != null) "First is $first" else "") +
(if (second != null) " and second is $second" else "")
> First is first

This is the desired result, but the code is not easy to read with double if checks and concatenation of empty string “”. What if we could hide this logic with an extension function?

Extension function with lambda expression

Using an extension function with a function parameter enables us to separate the if case logic:

fun String.concatIfNotNull(fromObject: Any?, transform: (String) -> String) =
  if (fromObject != null) this + transform(this) else this

result = "".concatIfNotNull(first) {
  "First is $first".concatIfNotNull(second) { " and second is $second" }
}
> First is first

This code doesn’t have the if cases, but one might argue it is even harder to read than the first solution because of the extension’s dot notation and empty string “” concatenation in the beginning.

Top-level function with lambda expression

By defining a top-level function we don’t have to use the dot notation and extra string concatenation:

inline fun <T> ifNotNull(fromObject: T?, transform: (T) -> String): String =
 if (fromObject != null) "" + transform(fromObject) else ""

result = ifNotNull(first) {
  "First is $first" + ifNotNull(second) { " and second is $second" }
}
> First is first

This solution could seem like the most human-readable way of conveying the purpose of our code: appending a sentence only if its contents are not null.
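One more option, not among the original ones: collect only the non-null parts with the standard library's listOfNotNull and join them:

```kotlin
fun main() {
    val first: String? = "first"
    val second: String? = null

    // build only the sentence parts whose source values are non-null
    val result = listOfNotNull(
        first?.let { "First is $it" },
        second?.let { "and second is $it" }
    ).joinToString(" ")

    println(result) // First is first
}
```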


Conclusion

Kotlin string literals can become difficult to read if multiple if cases are used to construct them.

The developer can be aware of this and use alternatives like top-level functions or extension functions. This can make the code more self-documenting and easier to read.


Workflow automations

Some of the smaller, repetitive tasks of a programmer can be automated; by doing so, productivity is improved and frustrations reduced.

I am talking about tasks like moving the cursor, writing keystrokes and manipulating text and apps. My purpose is to use the trackpad less and keyboard more. By doing so you can focus more on your ideas and less on pointing the mouse cursor.


macOS enables some automations with its native apps, but I find third-party apps more convenient and feature-rich. I use:

  • Keyboard Maestro (KM) for activating apps, text manipulation, clipboard history and much more. It costs $36, but considering the time and focus it saves, it is worth it. Most of my automations use this app.
  • Karabiner-elements to map some keyboard keys to more useful functions.
  • ShiftIt to resize and move my windows with keystrokes.

Remapping redundant keys

Some of the keyboard keys are rarely used. They can be mapped to more useful ones to reduce finger gymnastics. Karabiner-elements app is used for that.

I use these complex modifications:

Karabiner-elements complex modifications

Most useful things here:

  • Map caps lock to control on hold, escape on click. Good for TouchBar users.
  • Left control to hyper key for extra modifier key. I use this to open and bring apps to front.
  • Left and right shift keys to parentheses ( and ) if pressed. Normal shift behavior if held.

There is more that can be achieved via karabiner-elements, for instance ctrl-hjkl for arrow keys navigation.

Code navigation

Moving the text cursor with the trackpad is slow and imprecise. There are solutions to make it more convenient, like using vim keybindings or native macOS text navigation shortcuts.

I went for a custom setup:

  • ctrl+w/s to scroll the document. This is a KM macro that simulates the scroll wheel.
  • ctrl+e/d to jump between text paragraphs. In IntelliJ this shortcut is “Move Caret Backward/Forward a paragraph”. Otherwise it is “alt+up/down”
  • ctrl+r/f to page up/down. Mapped in KM.
Code navigation shortcuts

I map different macros to these keybindings so they will work across all apps – code editor, web browser or notes.


Here are my KM code navigation macros.

Switching between apps

I use hyper+keystroke shortcuts to activate apps and move the cursor between them. This is convenient because you don’t have to visually identify the app icon, as with cmd-tabbing. Moving the cursor to the app is useful in a multi-screen environment.

I use hyper+g to search in google.

I use hyper+l to toggle clipboard history. This saves you time because you can copy multiple items and don’t have to navigate back and forth between apps.

Googling a solution. Trackpad is only used for selecting the text in the web page.

There are many more shortcuts I use in my workflow. Here are my global KM macros.


Conclusion

Automations have improved my workflow considerably. I can focus more on my ideas and less on navigating the OS or the code editor. I would recommend trying these out to any programmer who feels that expressing her coding ideas could be faster.


Car data points in Android Automotive

When following the Android Automotive overview, it might seem that not much of the car’s info is available to developers. It is even stated that only media apps are allowed, and there is no documentation about car properties.

Looking further into the car emulator and Android source code, it looks like more is on the table.

Emulator VHAL properties

Maybe the first thing to notice is that the car emulator has some data points available. For instance, door locks and seat positions:

Data points in the emulator

From this we could assume that these properties are also readable/settable. Let’s look at the package in the 10.0.0_r40 branch. This is the recommended branch when using a phone as an Automotive Development Platform.

Car properties

All of the car properties are defined in VehiclePropertyIds file. They can be read with CarPropertyManager. However, when trying to read the car VIN,

String vin = propertyManager.getProperty<String>(INFO_VIN, VEHICLE_AREA_TYPE_GLOBAL)?.value 

a SecurityException is thrown. This means the app needs to request user permission to access this data.


There are 3 files that should be used to combine a property with its permission:

  • VehiclePropertyIds documents the property and the permission it requires:

* Door lock
* Requires permission: {@link Car#PERMISSION_CONTROL_CAR_DOORS}.
public static final int DOOR_LOCK = 371198722;

  • Car defines this permission as a string:

* Permission necessary to control car's door.
* @hide
public static final String PERMISSION_CONTROL_CAR_DOORS =
        "android.car.permission.CONTROL_CAR_DOORS";

  • the system AndroidManifest sets its protection level:

<!-- Allows an application to control the vehicle doors.
     <p>Protection level: signature|privileged -->
<permission android:name="android.car.permission.CONTROL_CAR_DOORS"
    android:protectionLevel="signature|privileged"
    android:label="@string/car_permission_label_control_car_doors"
    android:description="@string/car_permission_desc_control_car_doors" />

The name of this permission should be used to ask for the user consent via the app’s AndroidManifest:

<uses-permission android:name="android.car.permission.CONTROL_CAR_DOORS"/>

However, asking for this permission doesn’t show any dialog, and reading the car doors value will fail.

Permission types

When looking at the Doors permission, its protectionLevel is set to

signature|privileged

This means that this property is available to system apps only.

There are 3 types of Car permissions:

  • normal – permission granted by default.
  • dangerous – user is asked for the permission.
  • signature|privileged – Only system apps have access to these properties.

From this it can be concluded that only normal- and dangerous-level properties could be accessible to developers.

⭐️ To disguise your app as a system app and access the signature|privileged properties, one can sign her application with the build keys. Be sure to use the same build, or build the emulator from the branch where the keys are located.

Car system service

All car-related services are encompassed in the Car System Service. This service can be accessed via adb, and properties can then be queried or set via dumpsys:

# 16200B02 is hex value of door lock property 371198722
adb shell dumpsys car_service get-property-value 16200B02  1

Use adb shell dumpsys car_service -h to get more info about available commands.
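The hex value that dumpsys expects can be derived from the decimal property ID with printf; for example, for DOOR_LOCK (371198722):

```shell
# convert the decimal property id to the hex form used by dumpsys
printf '%X\n' 371198722
# prints 16200B02
```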

List of properties and their permissions

Navigating between the 3 files that contain car properties and their permissions can be difficult. To help with that, here is a list of these items merged together from the 10.0.0_r40 branch:

INFO_MAKE : [normal]
INFO_MODEL : [normal]
INFO_MODEL_YEAR : [normal]
INFO_FUEL_TYPE : [normal]
FUEL_DOOR_OPEN : [normal]
CURRENT_GEAR : [normal]
NIGHT_MODE : [normal]
DISTANCE_DISPLAY_UNITS : [normal, normal, signaturePrivileged]
FUEL_VOLUME_DISPLAY_UNITS : [normal, normal, signaturePrivileged]
TIRE_PRESSURE_DISPLAY_UNITS : [normal, normal, signaturePrivileged]
EV_BATTERY_DISPLAY_UNITS : [normal, normal, signaturePrivileged]
VEHICLE_SPEED_DISPLAY_UNITS : [normal, normal, signaturePrivileged]
FUEL_CONSUMPTION_UNITS_DISTANCE_OVER_VOLUME : [normal, normal, signaturePrivileged]
PERF_VEHICLE_SPEED : [dangerous]
WHEEL_TICK : [dangerous]
FUEL_LEVEL : [dangerous]
EV_BATTERY_LEVEL : [dangerous]
RANGE_REMAINING : [dangerous]
FUEL_LEVEL_LOW : [dangerous]
INFO_VIN : [signaturePrivileged]
PERF_ODOMETER : [signaturePrivileged]
PERF_STEERING_ANGLE : [signaturePrivileged]
ENGINE_COOLANT_TEMP : [signaturePrivileged]
ENGINE_OIL_LEVEL : [signaturePrivileged]
ENGINE_OIL_TEMP : [signaturePrivileged]
ENGINE_RPM : [signaturePrivileged]
TIRE_PRESSURE : [signaturePrivileged]
TURN_SIGNAL_STATE : [signaturePrivileged]
ABS_ACTIVE : [signaturePrivileged]
TRACTION_CONTROL_ACTIVE : [signaturePrivileged]
HVAC_FAN_SPEED : [signaturePrivileged]
HVAC_FAN_DIRECTION : [signaturePrivileged]
HVAC_TEMPERATURE_CURRENT : [signaturePrivileged]
HVAC_TEMPERATURE_SET : [signaturePrivileged]
HVAC_DEFROSTER : [signaturePrivileged]
HVAC_AC_ON : [signaturePrivileged]
HVAC_MAX_AC_ON : [signaturePrivileged]
HVAC_MAX_DEFROST_ON : [signaturePrivileged]
HVAC_RECIRC_ON : [signaturePrivileged]
HVAC_DUAL_ON : [signaturePrivileged]
HVAC_AUTO_ON : [signaturePrivileged]
HVAC_SEAT_TEMPERATURE : [signaturePrivileged]
HVAC_SIDE_MIRROR_HEAT : [signaturePrivileged]
HVAC_STEERING_WHEEL_HEAT : [signaturePrivileged]
HVAC_ACTUAL_FAN_SPEED_RPM : [signaturePrivileged]
HVAC_POWER_ON : [signaturePrivileged]
HVAC_FAN_DIRECTION_AVAILABLE : [signaturePrivileged]
HVAC_AUTO_RECIRC_ON : [signaturePrivileged]
HVAC_SEAT_VENTILATION : [signaturePrivileged]
AP_POWER_STATE_REQ : [signaturePrivileged]
AP_POWER_STATE_REPORT : [signaturePrivileged]
AP_POWER_BOOTUP_REASON : [signaturePrivileged]
DISPLAY_BRIGHTNESS : [signaturePrivileged]
DOOR_POS : [signaturePrivileged]
DOOR_MOVE : [signaturePrivileged]
DOOR_LOCK : [signaturePrivileged]
MIRROR_Z_POS : [signaturePrivileged]
MIRROR_Z_MOVE : [signaturePrivileged]
MIRROR_Y_POS : [signaturePrivileged]
MIRROR_Y_MOVE : [signaturePrivileged]
MIRROR_LOCK : [signaturePrivileged]
MIRROR_FOLD : [signaturePrivileged]
SEAT_MEMORY_SELECT : [signaturePrivileged]
SEAT_MEMORY_SET : [signaturePrivileged]
SEAT_BELT_BUCKLED : [signaturePrivileged]
SEAT_BELT_HEIGHT_POS : [signaturePrivileged]
SEAT_BELT_HEIGHT_MOVE : [signaturePrivileged]
SEAT_FORE_AFT_POS : [signaturePrivileged]
SEAT_FORE_AFT_MOVE : [signaturePrivileged]
SEAT_BACKREST_ANGLE_1_POS : [signaturePrivileged]
SEAT_BACKREST_ANGLE_1_MOVE : [signaturePrivileged]
SEAT_BACKREST_ANGLE_2_POS : [signaturePrivileged]
SEAT_BACKREST_ANGLE_2_MOVE : [signaturePrivileged]
SEAT_HEIGHT_POS : [signaturePrivileged]
SEAT_HEIGHT_MOVE : [signaturePrivileged]
SEAT_DEPTH_POS : [signaturePrivileged]
SEAT_DEPTH_MOVE : [signaturePrivileged]
SEAT_TILT_POS : [signaturePrivileged]
SEAT_TILT_MOVE : [signaturePrivileged]
SEAT_LUMBAR_FORE_AFT_POS : [signaturePrivileged]
SEAT_LUMBAR_FORE_AFT_MOVE : [signaturePrivileged]
SEAT_LUMBAR_SIDE_SUPPORT_POS : [signaturePrivileged]
SEAT_LUMBAR_SIDE_SUPPORT_MOVE : [signaturePrivileged]
SEAT_HEADREST_HEIGHT_POS : [signaturePrivileged]
SEAT_HEADREST_HEIGHT_MOVE : [signaturePrivileged]
SEAT_HEADREST_ANGLE_POS : [signaturePrivileged]
SEAT_HEADREST_ANGLE_MOVE : [signaturePrivileged]
SEAT_HEADREST_FORE_AFT_POS : [signaturePrivileged]
SEAT_HEADREST_FORE_AFT_MOVE : [signaturePrivileged]
SEAT_OCCUPANCY : [signaturePrivileged]
WINDOW_POS : [signaturePrivileged]
WINDOW_MOVE : [signaturePrivileged]
WINDOW_LOCK : [signaturePrivileged]
VEHICLE_MAP_SERVICE : [signaturePrivileged, signaturePrivileged]
OBD2_LIVE_FRAME : [signaturePrivileged]
OBD2_FREEZE_FRAME : [signaturePrivileged]
OBD2_FREEZE_FRAME_INFO : [signaturePrivileged]
OBD2_FREEZE_FRAME_CLEAR : [signaturePrivileged]
HEADLIGHTS_STATE : [signaturePrivileged]
HIGH_BEAM_LIGHTS_STATE : [signaturePrivileged]
FOG_LIGHTS_STATE : [signaturePrivileged]
HAZARD_LIGHTS_STATE : [signaturePrivileged]
HEADLIGHTS_SWITCH : [signaturePrivileged]
HIGH_BEAM_LIGHTS_SWITCH : [signaturePrivileged]
FOG_LIGHTS_SWITCH : [signaturePrivileged]
HAZARD_LIGHTS_SWITCH : [signaturePrivileged]
CABIN_LIGHTS_STATE : [signaturePrivileged]
CABIN_LIGHTS_SWITCH : [signaturePrivileged]
READING_LIGHTS_STATE : [signaturePrivileged]
READING_LIGHTS_SWITCH : [signaturePrivileged]


Conclusion

Only when looking at the Android Automotive source code does it become obvious that different car data points can be available.

Developers need to keep in mind that most of them are for car manufacturers only. Of the public properties, some could be available by default, and some with user permission.


Part 2: Testing with MockK and Koin

One of the best things about MVVM is its separation of concerns, which by design enables testing each component in isolation. View, ViewModel and Model are all separated and thus easily testable.

When thinking of testing, unit testing comes to mind first, and for that, mocking of dependencies is required.

Mockito vs MockK

After investigating Google’s demo project, it seemed Mockito was the way to go with mocking and verifying tests.

What I soon realised was that it wasn’t the most convenient library to use in Kotlin, and I also had problems just getting it to work. I then discovered MockK, written in Kotlin, which was easy to set up and thus a perfect choice for my project.

Consider mocking a network response:


// Mockito
val call = successCall(contributors)
`when`(service.getContributors()).thenReturn(call)

// MockK
val call = successCall(contributors)
every { service.getContributors() } returns call

I chose MockK’s every / lambda style over Mockito’s `when`.

The only thing missing from MockK is verifying constructor calls, for which there is a GitHub issue. Because of this, I needed to refactor my code and inject the dependencies instead of constructing them.

MockK setup

Separate libraries are required for unit and instrumentation tests:

testImplementation "io.mockk:mockk:$version"
androidTestImplementation "io.mockk:mockk-android:$version"

This is enough to start mocking dependencies in Unit tests. For instance, mocking a network client:

val client = mockk<RepoClient>()

For instrumentation tests, you need to launch the real activity and thus need a separate Test App class and mocked Koin modules. Read about this in Instrumentation section below👇🏽👇🏽

Testing the Repository

All the classes can be covered with unit tests. In the @Before block, you should create the class with mocked dependencies:

@Before
fun before() {
    client = mockk<RepoClient>()
    repository = RepoRepository(client, /* more mocks.. */)
}

Then, in your test, you can mock answers from your dependencies and verify expected Repository behavior.

// mock the repository observer
val observer = mockk<Observer<Resource<List<Repo>>>>(relaxed = true)
// call getRepos() and observe the response
repository.getRepos().observeForever(observer)
// verify repos are fetched from network
verify { client.getRepos() }
// simulate that network data was stored to db (details omitted here)
// verify the getRepos() observer was called
verify { observer.onChanged(Resource.success(repos)) }

With this style you can write unit tests for all of your classes.

UI Instrumentation tests

Instrumentation tests are used to verify what is visible to the user. The app will launch with mocked ViewModel and the tests can then verify the UI state.


For instrumentation tests, you have to set up a custom Test App and its companion Test Runner, which is then used to run the tests. Needless to say, the setup is pretty complicated, but the tests are worth it after the initial hurdle.

@Before and @After

Before the UI test, the ViewModel should be mocked and its responses set. Then the tested activity/fragment should be launched.

@Before
fun before() {
    // mock the ViewModel
    loginRequest = MutableLiveData()
    loginViewModel = mockk(relaxed = true)
    every { loginViewModel.user } returns loginRequest

    module = module(true, true) {
        single { loginViewModel }
        // mock other dependencies
        single { mockk<MainViewModel>(relaxed = true) }
        // ... etc
    }
    // load the mocked module into Koin
    loadKoinModules(module)

    // launch the activity
    scenario = launchActivity()
}

After the test the activity should be closed and Koin modules unloaded so the next test can start with cleared objects.

@After
fun after() {
    // close the activity and unload the mocked Koin modules
    scenario.close()
    unloadKoinModules(module)
}

The test

Then, as in Unit tests, you can mock updates from ViewModel and verify expected View behavior. You can also simulate input from the view.

For instance, verifying a Toast message after invalid login:

// input wrong credentials
inputCredentials("wrong", "wrong")

// click the login button

// verify ViewModel's login is called
verify { loginViewModel.login(any(), any()) }

// simulate error response
loginRequest.postValue(Resource.error(getString(R.string.invalid_credentials), null))

// assert error toast shown

Similarly, all of the views can be tested.


Conclusion

Although the setup and complexity of Android tests could be improved, testing is essential for delivering a quality application.

I can say from my experience that, numerous times, seemingly irrelevant tests have failed after new code was written. Had those failures gone uncaught, they would have meant bugs in production.


Please have a look at the unit and instrumentation tests in the sample project’s source code.

Android Apps Architecture

Android App Architecture, Part 1

Coming from the ViewController world and unfamiliar with modern Android app architecture, I found it confusing to jump into the recommended MVVM architecture. Some parts of the official guide are also left for the reader to figure out, so I will write about my experience implementing it in my repo browser project.

Choosing dependencies

Besides the native Architecture Components dependencies, some parts of the setup are not defined in the guide. Two of the main ones are the networking and dependency injection libraries.


The guide uses Retrofit for networking. For me, it made sense to use Volley instead. I like being in control of the requests I write, rather than depending on Retrofit's abstraction of mapping requests to data objects. I can be sure that if my requests ever need customisation, Volley can handle it.

Dependency injection

There are many DI libraries, the most popular one being Dagger. To me, it seemed complicated to get started with and heavy on boilerplate. I didn't see any drawbacks to using Koin, so I went with that one instead.

Here are the final dependencies for my project.

Koin setup

After planning and creating the required Activity, ViewModel, Networking, Database and Repository objects, it was time to set up the Koin modules. For my use case, I created singletons and viewModels:

// repository Database
single { get<AppDatabase>().repoDao() }
// repository networking
single { RepoClient(context, get()) }
// the repository, merging repo data from database and network
single { RepoRepository(get(), get(), get()) }
// the viewModel
viewModel { (handle: SavedStateHandle) -> 
    RepoListViewModel(handle, get(), get()) } 

Notice the viewModel with a SavedStateHandle argument. This allows access to the saved state and arguments of the associated Activity.
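To make the single semantics concrete, here is a toy service locator in plain Kotlin: a lazily created singleton, resolved by type. This is a sketch of the idea only, not Koin's actual implementation, and all names are my own.

```kotlin
// Toy service locator: `single` registers a lazily created singleton,
// resolved by type, similar in spirit to Koin's `single { }` definitions.
class Locator {
    private val singles = mutableMapOf<Class<*>, Lazy<Any>>()

    fun <T : Any> single(type: Class<T>, create: () -> T) {
        singles[type] = lazy(create)
    }

    @Suppress("UNCHECKED_CAST")
    fun <T : Any> get(type: Class<T>): T =
        singles[type]?.value as? T ?: error("No definition for ${type.name}")
}

// reified helpers so call sites read like `locator.single { Dao() }`
inline fun <reified T : Any> Locator.single(noinline create: () -> T) =
    single(T::class.java, create)

inline fun <reified T : Any> Locator.get(): T = get(T::class.java)

fun main() {
    class Dao
    class Repository(val dao: Dao)

    val locator = Locator()
    locator.single { Dao() }
    locator.single { Repository(locator.get()) }

    // the same singleton instance is returned on every lookup
    println(locator.get<Repository>() === locator.get<Repository>()) // prints "true"
}
```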

Other required singletons were: thread executors, the Room database, and SharedPreferences.

Here is the Koin setup.

Repository setup

For fetching the repositories from Github and storing/reading them from the database, 3 components were required:

  • Network client
  • Database object
  • MediatorLiveData to merge network/db data

After that, accessing the data in LiveData format is straightforward:

val repoResource = object : NetworkBoundResource<List<Repo>, List<Repo>>(executor) {
    override fun saveCallResult(item: List<Repo>) =
        repoDao.insertRepos(item) // dao method name is an assumption

    override fun shouldFetch(data: List<Repo>?): Boolean {
        return data == null || data.isEmpty() || rateLimit.shouldFetch("repos")
    }

    override fun loadFromDb() = repoDao.getRepos()

    override fun createCall() = repoClient.getRepos()

    override fun onFetchFailed() = rateLimit.reset("repos")
}
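The rateLimit helper used in shouldFetch() above can be sketched as a simple per-key rate limiter. This is an assumption of how it works, not the project's actual class, with an injectable clock so the behaviour can be tested deterministically:

```kotlin
// Per-key rate limiter: shouldFetch(key) returns true at most once per
// timeout window; reset(key) allows an immediate refetch after a failure.
// This is a sketch, not the project's actual RateLimiter class.
class RateLimiter(
    private val timeoutMs: Long,
    private val now: () -> Long = System::currentTimeMillis
) {
    private val lastFetched = mutableMapOf<String, Long>()

    fun shouldFetch(key: String): Boolean {
        val last = lastFetched[key]
        val current = now()
        if (last == null || current - last > timeoutMs) {
            lastFetched[key] = current
            return true
        }
        return false
    }

    fun reset(key: String) {
        lastFetched.remove(key)
    }
}

fun main() {
    var time = 0L
    val limiter = RateLimiter(timeoutMs = 10_000, now = { time })
    println(limiter.shouldFetch("repos")) // prints "true"
    println(limiter.shouldFetch("repos")) // prints "false", still inside the window
    time += 11_000
    println(limiter.shouldFetch("repos")) // prints "true" again
}
```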


Check out the repository source code.
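The flow that NetworkBoundResource implements can be reduced to a synchronous sketch in plain Kotlin, without LiveData or threading. This is a simplification of the pattern, not the class from the architecture guide:

```kotlin
// Synchronous sketch of the NetworkBoundResource flow: serve db data,
// fetch from the network only when shouldFetch() allows it, save the
// result, and re-read from the db (single source of truth).
abstract class SimpleNetworkBoundResource<T> {
    protected abstract fun loadFromDb(): T?
    protected abstract fun shouldFetch(data: T?): Boolean
    protected abstract fun createCall(): T
    protected abstract fun saveCallResult(item: T)
    protected open fun onFetchFailed() {}

    fun load(): T? {
        val dbData = loadFromDb()
        if (!shouldFetch(dbData)) return dbData
        return try {
            saveCallResult(createCall())
            loadFromDb() // always re-read from the db after saving
        } catch (e: Exception) {
            onFetchFailed()
            dbData
        }
    }
}

fun main() {
    var db: List<String>? = null
    val resource = object : SimpleNetworkBoundResource<List<String>>() {
        override fun loadFromDb() = db
        override fun shouldFetch(data: List<String>?) = data.isNullOrEmpty()
        override fun createCall() = listOf("repo1", "repo2")
        override fun saveCallResult(item: List<String>) { db = item }
    }
    println(resource.load()) // prints "[repo1, repo2]"
}
```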

Populating the view

For the last part, the view is updated according to the LiveData<List<Repo>> result. If there is data, the ListView is populated with a DataBinding adapter:

viewModel.repos.observe(viewLifecycleOwner) {
    // update UI according to the Resource status
    when (it.status) {
        Status.SUCCESS -> { /* populate the list via the DataBinding adapter */ }
        Status.ERROR -> { /* show the error state */ }
    }
}

These are the main steps of creating an Android MVVM skeleton app. Of course there are more details (networking, database), which can be discovered in the GitHub repository.

Stay tuned for Part 2, where I will write about testing the app with Mockk.


Project code in GitHub.

Android Architecture NDK

Sharing native code between Android and Java projects

At High-Mobility, we have separate libraries for Android and Linux. Each of them uses native code that handles the transport protocol. After implementing JNI for both platforms, we realized it would make sense to use a shared JNI module instead.

Android setup

At first we started developing for Android, and went for the quick solution of including C submodules and setting up the JNI classes/Makefile.

We ended up with a structure where the C submodules and the JNI classes lived inside the Android project.

This structure already had the problem that the C developer had to work inside the Android project to update the native code. This meant he had to follow the Android project's branches and could not develop in the C repository independently.

Linux setup

Later we also needed a Linux library. Following the success of the Android project, we created a project with a similar structure, again embedding the C submodules and JNI code directly.

Now our C developer had even more problems. He had to follow the branches of both Android and Linux projects, and update the native module’s branches on both. Since our JNI code is the same, he also had to copy/duplicate the JNI code from one project to the other.

Solution: Shared JNI module

It was clear that a restructuring of the projects was necessary. We needed to create a new package containing the shared items: the C submodules and the JNI classes. The Makefiles differ between Linux and Android, so those were retained in the original projects.

This is the final project hierarchy:
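Sketched as a directory tree (the directory and file names here are assumptions, not the actual repository layout):

```
hmkit-core-jni/            # shared "Core JNI" package
├── src/                   # JNI classes, used by both Java projects
└── hmkit-core/            # C submodule with the transport protocol code

hmkit-android/
├── core-jni/              # submodule → hmkit-core-jni
└── Android.mk             # Android-specific Makefile

hmkit-linux/
├── core-jni/              # submodule → hmkit-core-jni
└── Makefile               # Linux-specific Makefile
```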

Now, when our C developer adds functions to the JNI code, he only has to update the shared “Core JNI” package. Our Java projects can then update to the new “Core JNI” branch.


HMKit Android:

HMKit Linux:

HMKit Core JNI: