
Artem Goncharov

MR/AR devices as the next generation of exocortexes

TL;DR

The next gen exocortex would proactively help us with whatever we are doing by taking some of the work onto itself, without us telling it to do so. The future App Store would be a Skill Store — we will download skills right into our MR/AR devices, which are the computing devices best equipped to become the next gen exocortex.

What is an exocortex?

An exocortex is an external information processing system that augments the brain’s biological high-level cognitive processes. [1]

Wikipedia has no such article — searching for it redirects me to the page about BCIs (brain–computer interfaces) [2], which is a slightly different notion: a BCI describes the interface between two computing devices — the brain and the exocortex itself — rather than the exocortex as a whole.
Strictly speaking, we can say that an exocortex is any external system that enhances our learning or thinking process. A notepad is one example: when we write, think, read what we wrote, then write again — we are using the simplest exocortex ever.

Currently, any personal electronic device is an exocortex, especially a mobile phone. We offload a lot of storage tasks to our phones, we use them as notepads and scheduling tools, and they help us learn and solve simple tasks.

What does an exocortex consist of?

I think the modern exocortex is a computing device with an interface to our internal computing device (the brain). So, at a very high level, it contains three parts:

  1. The computing device is just a CPU/GPU + memory. Nothing interesting here.
  2. Interfaces to other computing devices, including the human brain, are very diverse — ranging from Wi-Fi, mouse, keyboard, and monitor for a PC, and a touchscreen for mobile phones, to a BCI that interacts with brain neurons directly.
  3. Model — it's a tricky thing. The models inside the exocortex are computed on the computing device and should imitate models in our brain on multiple layers. One of these layers is the communication layer, where interaction design comes in [3]. The core part of the model in the device is a set of very high-level abstractions that have counterparts in our brain — we call them applications. For instance, take the application "Notes": it models a catalogue of notes, meaning that in the brain we have a model/idea of a catalogue/list of notes, where each note is a piece of content about something. So the exocortex supports our brain by taking over the computation of some models and providing the results over a protocol (the interaction design) through the interface from (2). A rough sketch of these three parts follows this list.
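
To make these three parts a bit more concrete, here is a minimal, purely illustrative Python sketch; every name in it (ComputeDevice, Interface, Model, Exocortex, and the "Notes" example) is my own invention, not an existing API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ComputeDevice:
    """Part 1: raw computing resources (CPU/GPU + memory)."""
    cores: int
    memory_gb: int

@dataclass
class Interface:
    """Part 2: a channel between the brain and the exocortex (touchscreen, voice, BCI, ...)."""
    name: str
    encode: Callable[[str], bytes]   # brain -> device
    decode: Callable[[bytes], str]   # device -> brain

@dataclass
class Model:
    """Part 3: a high-level abstraction mirroring a model in the brain (e.g. a notes catalogue)."""
    name: str
    compute: Callable[[str], str]    # the offloaded calculation

@dataclass
class Exocortex:
    device: ComputeDevice
    interfaces: Dict[str, Interface] = field(default_factory=dict)
    models: Dict[str, Model] = field(default_factory=dict)

    def run(self, interface_name: str, model_name: str, request: str) -> str:
        """Accept a request over an interface, compute a model, return the result."""
        iface = self.interfaces[interface_name]
        payload = iface.encode(request)
        result = self.models[model_name].compute(payload.decode())
        return iface.decode(result.encode())

# Example: a "Notes" model that simply stores and echoes a note.
notes = Model(name="Notes", compute=lambda text: f"saved note: {text}")
touch = Interface(name="touchscreen", encode=str.encode, decode=bytes.decode)
exo = Exocortex(ComputeDevice(cores=8, memory_gb=12), {"touchscreen": touch}, {"Notes": notes})
print(exo.run("touchscreen", "Notes", "buy artisan bread"))
```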

How does an exocortex work?

The main process looks like this: our brain gets some input from the external world or from inside the brain itself, decides to activate the exocortex, and sends some information to it over an available interface; the exocortex performs some model calculations and returns the result, which our brain then uses in its own calculations.

There are a lot of moving parts. The brain needs to switch between different activities here — thinking about the problem, deciding to use the exocortex, deciding which interface to use, encoding the data to be processed, passing the encoded data to the exocortex, periodically checking whether the results are available, and decoding the results. Of course, our brain is highly adaptable and we usually don't notice all these activities — they just happen. However, we still spend cognitive effort on them, and every switch means losing focus and a higher chance of being distracted by something else (a new message from a friend while translating a word in the translate app).
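
To make the number of switches visible, here is a toy sketch of that reactive flow; translate_model and the polling loop are hypothetical stand-ins for a real app, used only to mark where each switch happens.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def translate_model(word: str) -> str:
    """Stand-in for a model running on the exocortex (e.g. a translation app)."""
    time.sleep(0.1)                      # pretend the device is computing
    return {"bread": "pain"}.get(word, "?")

def use_reactive_exocortex(word: str) -> str:
    # 1. decide to offload the task and pick an interface (here: typing)
    # 2. encode the request for the device (type the word)
    request = word.strip().lower()

    # 3. pass the encoded request to the exocortex, which computes it
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(translate_model, request)

        # 4. periodically check whether the result is ready
        #    (every check is a focus switch and a chance to get distracted)
        while not future.done():
            time.sleep(0.02)

        # 5. decode the result and return to the brain's own calculation
        return future.result()

print(use_reactive_exocortex("Bread"))   # -> "pain"
```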

How to make a better exocortex?

I believe the fewer moving parts we have, the more efficiently we use our exocortex and the more enjoyable the process is. It's one of the reasons we started using phones as an exocortex — the switch time is shorter than with a PC, and the same goes for voice assistants, which make the switch for some tasks almost instant.
For a PC or a laptop, one needs to turn it on, find the app, run it, and use a mouse and keyboard as a rather slow input. For a mobile phone there is usually no hassle of finding the device and turning it on, and the input is quicker for some tasks and slower for others. For a voice assistant there is almost no switch at all — just ask the question (yes, we still need encoding, communicating, and decoding, but for most people speech is faster than typing, and there is no need to launch a specific app).

What would be the next steps to make exocortexes even more efficient? I see two ways of doing it:

  1. We can continue optimizing interfaces to be less disruptive; the final target here is probably a neural BCI that connects the brain and the exocortex directly. However, we would still need an interaction layer and a high-level protocol connecting models in the brain and in the computing device, so it could turn out that the direct interface doesn't save us much time after all.
  2. We can make the exocortex proactive, driven by its own AI. In this case, the exocortex should have the same inputs as our brain; it would then predict the tasks our brain is trying to solve, proactively pre-calculate some models, and provide the results to the brain. Of course, the AI should learn to provide what is really needed by this particular brain at this particular moment. This approach can considerably decrease the switch time or even eliminate it completely! A sketch of such a proactive loop follows this list.
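
As a rough illustration of the second way, here is a sketch of one tick of such a proactive loop; the predictor and the pre-computed phrases are hard-coded placeholders standing in for a real learning AI.

```python
from typing import Dict, List, Optional

# Hypothetical mapping from observed context to the task the brain is probably solving.
def predict_task(observations: List[str]) -> Optional[str]:
    if "bakery" in observations and "French speech" in observations:
        return "buy bread in French"
    return None

# Pre-computed help, keyed by predicted task (stands in for "pre-calculating some models").
PRECOMPUTED: Dict[str, List[str]] = {
    "buy bread in French": [
        "Une baguette, s'il vous plaît.",
        "Je voudrais un pain artisanal.",
    ],
}

def proactive_step(observations: List[str]) -> List[str]:
    """One tick of a proactive exocortex: observe, predict, pre-compute, offer."""
    task = predict_task(observations)      # the AI guesses what the brain is doing
    if task is None:
        return []                          # nothing useful to offer, stay silent
    return PRECOMPUTED.get(task, [])       # results are ready before the brain asks

# The device shares the brain's inputs (what you see and hear).
print(proactive_step(["bakery", "French speech", "shelf of bread"]))
```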

The next gen exocortex

I think the second way is very promising. Just imagine: you are somewhere in France, in a bakery, and want to buy a loaf of artisan bread, but unfortunately you can't speak French. Your AI-driven exocortex, however, can easily understand what task you are working on (buying bread) and proactively offers you some phrases that might be of use right now. You choose and say one of them — the AI notices which phrase you used and learns from it, making its next suggestions even more accurate. That way of working looks super useful to me.

The only problem with the second way is that the exocortex device needs the same input your brain has, and the output/results of the model calculations must be available to the brain constantly. Obviously, future MR/AR devices [4] are the best equipped for this task: they see and hear the same things you do, and the results of the model calculations are available to you at any time as an overlay on top of the real world.
Interestingly, in this case the models (or apps) would be very different from the apps we see every day on our phones or PCs. The future models would more closely resemble the real models in our brain, which are activated automatically by specific contexts — so it could be that multiple models are activated by the same context and fight for the chance to provide help. This resembles one of the models of consciousness, Global Workspace Theory, in which different objects in our field of view compete for the attention of different brain areas, and the one that activates the largest number of areas wins the attention [5]. For instance, when you cook, two models — one for cooking and one for healthy habits — could compete because the context activates both of them. In that case, you would get help either from both or from the one that wins.
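
Here is a toy sketch of that competition, assuming each installed model scores how strongly the current context activates it and only the strongest scores win the user's attention; the skills, triggers, and threshold are invented for illustration.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Skill:
    name: str
    triggers: Set[str]                 # contexts that activate this model

    def activation(self, context: Set[str]) -> float:
        """How strongly the current context activates this skill (0..1)."""
        return len(self.triggers & context) / len(self.triggers)

def compete(skills: List[Skill], context: Set[str], threshold: float = 0.5) -> List[Skill]:
    """Global-workspace-style arbitration: activated skills compete, the strongest wins attention."""
    scored = [(s.activation(context), s) for s in skills]
    scored = [(a, s) for a, s in scored if a >= threshold]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for _, s in scored]

cooking = Skill("cooking assistant", {"kitchen", "chopping", "stove"})
health  = Skill("healthy habits",   {"kitchen", "salt", "late evening"})
context = {"kitchen", "stove", "chopping"}

winners = compete([cooking, health], context)
print([s.name for s in winners])       # cooking wins; healthy habits stays below threshold
```

In a real device the activation scores would presumably come from the AI's own prediction of what the user is doing, rather than from simple keyword overlap as in this sketch.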

The result would be that the App Store for MR/AR devices as an exocortex would look like a Skill Store: users would install models that support the required activity into their exocortex (MR/AR device) — one for translating, one for calculating, one for drawing, one for woodworking, one for management, and so on.

For example, say I'm going to learn to play the guitar. I open the Skill Store and choose the skill "guitar playing" — it's downloaded to my brand new MR device, and once I pick up a guitar and look at the strings, the new skill is activated. It starts showing me chords right on the guitar, taps out the beat, draws my attention to mistakes, and offers some help to fix them.

Another example: when I install the skill "bouldering", go to an indoor bouldering center, and start climbing, my AR device starts helping me by counting the climbs and their difficulty, highlighting the holds of the route, and suggesting breaks when my heart rate spikes.
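
A minimal sketch of what an interface shared by such installable skills might look like; all names here (Skill, matches, assist, and the two example skills) are assumptions for illustration, not an existing SDK.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class Skill(ABC):
    """A skill downloaded from a hypothetical Skill Store into an MR/AR device."""

    @abstractmethod
    def matches(self, context: Dict[str, object]) -> bool:
        """Should this skill wake up for the current context?"""

    @abstractmethod
    def assist(self, context: Dict[str, object]) -> List[str]:
        """Which overlays/hints to show the user right now."""

class GuitarSkill(Skill):
    def matches(self, context):
        return "guitar" in context.get("objects", [])
    def assist(self, context):
        return ["overlay: Am chord on the fretboard", "metronome: 80 bpm", "hint: mute the 6th string"]

class BoulderingSkill(Skill):
    def matches(self, context):
        return context.get("location") == "bouldering gym"
    def assist(self, context):
        hints = ["highlight: holds of route 12"]
        if context.get("heart_rate", 0) > 170:
            hints.append("suggestion: take a break, heart rate is spiking")
        return hints

def installed_assistance(skills: List[Skill], context: Dict[str, object]) -> List[str]:
    """Run every installed skill whose context matches and collect its overlays."""
    return [hint for s in skills if s.matches(context) for hint in s.assist(context)]

device = [GuitarSkill(), BoulderingSkill()]
print(installed_assistance(device, {"objects": ["guitar"], "location": "home"}))
```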

A new way of developing models for the next gen exocortex

The way models/apps/skills are developed would change as well. Skills are not mobile apps; they are much more complex, and we may need to change the way we think in order to learn how to develop skills for the next gen exocortex.

For instance, we may apply systems thinking and notice that when we use a usual exocortex, like a mobile phone or a voice assistant, we use it as a technology within one of our practices. However, if the exocortex is equipped with AI, we can treat it as an active team member: at each moment it knows what practice we are performing and what the lifecycle of that practice's outcome is, proactively plays some sub-roles of our role, carries out its own practices, and delivers their outcomes, so that by the time we need them they are already ready.

Systems thinking can help us understand which practices the exocortex should help us with, what the lifecycle of each practice's outcome would be, and how to split the models in the most efficient way.

References

[1] Exocortex | Transhumanism Wiki | Fandom
[2] Brain–computer interface — Wikipedia
[3] Interaction design — Wikipedia
[4] Virtual Reality vs. Augmented Reality vs. Mixed Reality — Intel
[5] Frontiers | Global Workspace Theory (GWT) and Prefrontal Cortex: Recent Developments

Some resources about systems thinking: isss.org, eem.institute.

Disclaimer: Opinions expressed in this article are solely my own and do not express the views or opinions of any company including my previous or current employers.
