
Navigation and minimap in Mixed Reality app

By Andriy Yeroshevych

Navigation

In the early stages of developing navigation for augmented reality, there’s a natural desire to use well-established methods from traditional screen interfaces — menus, arrows, breadcrumbs, context menus, tooltips, and so on. Some of these do work, although not necessarily in the way originally assumed. However, the very approach of having interface elements “detached” from objects (real or virtual) breaks down when it comes to interacting with actual objects. Space, objects, sounds (which we’ll discuss in more detail), smells (which we won’t discuss yet), and motion — all of these must be the primary sources for any manipulations, interactions, and information.

That means if, for example, an operator creates a marker (a cube), then all related actions, additional information, and the current status should be shown directly on that cube. Any consequences of these actions should likewise appear on the object itself.
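To make the principle concrete, here is a minimal sketch in TypeScript with Three.js. The post doesn’t name the actual engine, so the library choice, the createMarker helper, and the label layout are all illustrative, not our production code. The status label is a child of the marker cube, so it moves, hides, and updates together with the object itself:

```typescript
import * as THREE from 'three';

// Create a marker cube whose status text lives on the object itself.
function createMarker(status: string): THREE.Mesh {
  const cube = new THREE.Mesh(
    new THREE.BoxGeometry(0.1, 0.1, 0.1),
    new THREE.MeshStandardMaterial({ color: 0x2e86de })
  );

  // Render the status text onto a canvas and show it as a sprite.
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext('2d')!;
  ctx.font = '32px sans-serif';
  ctx.fillStyle = 'white';
  ctx.fillText(status, 8, 44);

  const label = new THREE.Sprite(
    new THREE.SpriteMaterial({ map: new THREE.CanvasTexture(canvas) })
  );
  label.name = 'statusLabel';
  label.position.set(0, 0.12, 0); // hovers just above the cube
  label.scale.set(0.2, 0.05, 1);
  cube.add(label); // parenting keeps the status attached to the object

  return cube;
}
```

Because the label is parented to the cube, no separate UI layer has to track the object; the scene graph does it for us.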

Of course, virtual objects do not always have to replicate their real-world counterparts, since real objects can sometimes be poorly designed in the first place.

Buttons

The result of any action must remain in the user’s field of view. That might sound like a simple requirement, but it’s one of the hardest for designers — long accustomed to screen-based UIs — to adapt to. On a screen, it’s almost impossible not to see the outcome of an action (assuming a more or less well-thought-out workflow). However, in augmented reality, especially when we “tie” a button to a real object (for instance, a real cup with a virtual button labeled “I’m empty — press me to turn on the kettle”), it could easily happen that the result of pressing that button appears in a different room. The solution is obvious: display the action status on the button itself or somewhere close by. This way, objects become interconnected and form a new informational layer.
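A hedged sketch of the same rule for buttons, continuing the Three.js example above (startKettle and the 'statusLabel' sprite are assumptions, not a real API): the kettle may switch on in another room, but the feedback stays on the button the user is looking at:

```typescript
import * as THREE from 'three';

// Redraw the sprite label attached to an object (see the marker sketch above).
function setLabel(obj: THREE.Object3D, text: string): void {
  const sprite = obj.getObjectByName('statusLabel') as THREE.Sprite | undefined;
  if (!sprite) return;
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext('2d')!;
  ctx.font = '32px sans-serif';
  ctx.fillStyle = 'white';
  ctx.fillText(text, 8, 44);
  sprite.material.map?.dispose();
  sprite.material.map = new THREE.CanvasTexture(canvas);
  sprite.material.needsUpdate = true;
}

// The kettle may be in another room; the visible feedback is not.
function onButtonPressed(button: THREE.Object3D, startKettle: () => Promise<void>): void {
  setLabel(button, 'Kettle: starting…');
  startKettle()
    .then(() => setLabel(button, 'Kettle: on'))
    .catch(() => setLabel(button, 'Kettle: unreachable'));
}
```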

Sound

Sound plays a huge role in MR — no question about it. Moreover, much of what we’re used to doing graphically in a screen interface can (and sometimes must) be implemented via sound in MR.

If you close your eyes, you can still recognize what’s happening around you based on sound alone: whether there are people present, where they are, how large the room is, how many objects are in it, and where the windows and doors are. To augment reality effectively, you have to pay as much attention to audio as to visuals: the augmentation must be aural as well as visual.
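As an illustration, positional audio makes an object audible from where it actually is. The sketch below uses Three.js’s PositionalAudio; the attachHum helper, the audio file, and the falloff distance are hypothetical:

```typescript
import * as THREE from 'three';

// Give an object a looping hum that is heard from the object's position,
// with natural distance falloff. attachHum and the file URL are hypothetical.
function attachHum(camera: THREE.Camera, target: THREE.Object3D, url: string): void {
  const listener = new THREE.AudioListener();
  camera.add(listener); // the user's head is the listener

  const hum = new THREE.PositionalAudio(listener);
  new THREE.AudioLoader().load(url, (buffer) => {
    hum.setBuffer(buffer);
    hum.setRefDistance(0.5); // full volume within ~0.5 m, quieter farther away
    hum.setLoop(true);
    hum.play();
  });
  target.add(hum); // the sound now comes from the object, wherever it is
}
```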

Modal Dialog

The concept of modal dialogs, in and of itself, has sparked controversy for decades. Blocking the entire functionality of an application just to perform one action is a legacy of 1990s-era modal design. In the context of MR, it’s particularly odd: “blocking” reality — even if it were possible — sounds like something out of poorly conceived science fiction.

Minimap

Much like the minimaps in video games — a small depiction of the map for spatial orientation — we also added a “mini” version of the surrounding environment. Initially, it was a small-scale building model “standing” nearby, showing the user’s position with a little human figure. It looked impressive, but it wasn’t very useful. At least in the case of building inspections, the operator usually already has a good sense of orientation and doesn’t need a map. And if there’s a need to show something hidden behind walls, for example, it’s better to embed that information directly into the real-world setting.
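For reference, this is roughly what that first tabletop minimap amounted to, sketched again in Three.js (makeMinimap, updateMinimap, and the 1:100 scale are illustrative, and the sketch assumes the building model’s origin coincides with the world origin):

```typescript
import * as THREE from 'three';

const MAP_SCALE = 0.01; // 1:100 tabletop model

// Clone the scanned building at small scale and add a marker for the user.
function makeMinimap(building: THREE.Object3D): { map: THREE.Group; marker: THREE.Mesh } {
  const map = new THREE.Group();
  const model = building.clone();
  model.scale.setScalar(MAP_SCALE);
  map.add(model);

  const marker = new THREE.Mesh(
    new THREE.SphereGeometry(0.005),
    new THREE.MeshBasicMaterial({ color: 0xff3b30 })
  );
  map.add(marker);
  return { map, marker };
}

// Per frame: mirror the user's world position into minimap coordinates.
// Assumes the building model's origin coincides with the world origin.
function updateMinimap(marker: THREE.Mesh, camera: THREE.Camera): void {
  marker.position.copy(camera.position).multiplyScalar(MAP_SCALE);
}
```

Technically it is a few lines of scene-graph work; the problem was usefulness, not implementation.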

Then we encountered an interesting scenario involving the inspection of composite airplane parts. Some of them require ultrasonic scanning to detect internal irregularities at a distance. The final product of such a scan is a “density map,” which technicians typically print out and use — along with a ruler — while standing next to the part to figure out where any detected defects actually are. Having the ability to see directly on the part where everything is would be quite convenient, to say the least. That’s how we arrived at the idea of a “minimap.” But it turned out not to be “mini” at all — it was a true map. The goal was to display densities right on the physical object. So again, the minimap concept quickly gave way to a full-fledged visualization.
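A sketch of what that full-fledged visualization can mean in practice: the density map becomes a semi-transparent texture blended over the tracked mesh of the part. This is Three.js again; applyDensityOverlay and the assumption that the mesh’s UVs align with the scan grid are ours, not the original pipeline:

```typescript
import * as THREE from 'three';

// Blend the scan's density map over the tracked mesh of the physical part.
// Assumes the mesh's UVs are aligned with the scan grid.
function applyDensityOverlay(part: THREE.Mesh, densityMapUrl: string): void {
  new THREE.TextureLoader().load(densityMapUrl, (texture) => {
    part.material = new THREE.MeshBasicMaterial({
      map: texture,
      transparent: true,
      opacity: 0.6, // keep the real part visible beneath the data
    });
  });
}
```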

Another potential scenario, one of the first to appear in early AR advertising and promotion, is remote observation. It’s almost the same as the building minimap, except now it’s available not only to the operator but also to their supervisor. Having a sort of 3D immersive zoom sounds appealing, but in situations where truly two-way communication is required, regular video zoom typically suffices.

In the previous scenario, there may be no operator at all, which turns the setup into a monitoring system — for example, a visualization of a SCADA-type system used to manage complex mechanical installations. But the complexity of such systems makes their 3D representation even more challenging.

Conclusion

A minimap will likely find its place in augmented reality, just as it has in games. In screen-based interfaces — restricted as they are — an additional layer of information can be very helpful. In the realm of augmented reality, however, real objects are far more informative than virtual ones, and our natural ability to navigate space fundamentally outperforms not only a minimap but also a regular map.