Many people think that ubiatar would be better as an AR (Augmented Reality) application running on devices like Google Glass, worn by the Avatar and projecting a ‘virtual image’ superimposed on the external world.
Google Glass is a cool invention from Alphabet and, even though it has been removed from the consumer market, it is still available for professional applications.
This device, like all others of its kind, is a complex product. It allows the user to see the world while superimposing digital graphics on it. The resulting view for the user could be something like this:
Here the surgeon can see data about the patient while operating; it is a nice application.
If the surgeon moves his head, the data window moves with it; it normally does not ‘stick’ to an element of the scene.
There are also real AR applications where the graphic element ‘sticks’ to the real-world scene, so that when the user looks around it seems to stay in place, like in this example:
Here the digitally-generated road seems to lie on the table even when the point of view changes. The red squares are called ‘markers’ and are normally needed to properly ‘stick’ the digital graphics onto the physical scene.
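The difference between a head-locked overlay and a marker-locked (‘sticking’) one can be sketched in a few lines of code. This is a minimal illustration with invented coordinates, not actual AR-toolkit code:

```python
# Hypothetical sketch: head-locked vs marker-locked overlays.
# A head-locked overlay is drawn at the same screen position every frame;
# a marker-locked overlay is re-anchored to the detected marker each frame,
# so it appears to stay in place in the physical scene.

def head_locked(overlay_offset, marker_pos):
    # ignores the scene: same screen coordinates in every frame
    return overlay_offset

def marker_locked(overlay_offset, marker_pos):
    # re-anchor the overlay relative to the marker detected in this frame
    mx, my = marker_pos
    dx, dy = overlay_offset
    return (mx + dx, my + dy)

# Camera pans right: the marker appears to drift left across three frames.
frames = [(320, 240), (300, 240), (280, 240)]
for pos in frames:
    print(head_locked((10, -20), pos), marker_locked((10, -20), pos))
```

The head-locked overlay never moves on screen (so it drifts relative to the scene), while the marker-locked one follows the marker, which is exactly what the red squares in the example enable.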
Google Glass needs to be connected to a smartphone and is not cheap, costing around 1,500 to 2,000 dollars. That is not too much for such a complex device.
An even more expensive device is the Microsoft HoloLens:
It costs around 3,500 dollars, but it is capable of reconstructing the 3D structure of the physical world and ‘positioning’ complex 3D objects in it.
Truly professional, with the drawback of a small field of view: if the user turns the head too far to the side, the virtual 3D objects disappear.
Ubiatar normally runs on a standard smartphone and does not connect to devices like these.
The icons that are the basis of the ubiatar technology appear on the live video of the world where the Avatar is, and everything is shown on the smartphone held by the Avatar himself.
No glasses, no attachment of the smartphone to the Avatar’s body:
Is it a ‘too simple’ solution?
Would it not be better to connect to Google Glass or even the Microsoft HoloLens?
To really understand whether these solutions are ‘better’, we should think about the applications:
In the base situation, ubiatar is about a Usar at home or in the office directing an Avatar in the field.
By ‘field’ we mean any place in the world, from the Colosseum to a factory to a surgery operating room.
The Avatar acts as the ‘remote body’ of the Usar, who directs him/her to explore the surroundings and/or perform some procedures.
In this situation, we have a person looking at the world through another person’s smartphone: if that device were fixed to the body of the Avatar, each small movement would result in a large shift of the image at the Usar’s end.
In a very short time, motion sickness would arise.
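Some rough, assumed numbers make the point. With a typical smartphone camera field of view of about 60 degrees streaming to a 1280-pixel-wide view, even a tiny rotation of a body-fixed camera becomes a large jump at the Usar’s end:

```python
# Back-of-the-envelope sketch with assumed numbers (60-degree horizontal
# field of view, 1280-pixel-wide remote view): how far the streamed image
# shifts when a body-fixed camera rotates by a few degrees.

def image_shift_px(rotation_deg, fov_deg=60.0, view_width_px=1280):
    # approximation: the shift is proportional to the fraction of the
    # field of view swept by the rotation
    return rotation_deg / fov_deg * view_width_px

# A 5-degree twitch moves the whole remote image by roughly 107 pixels.
print(round(image_shift_px(5)))
```

A 5-degree twitch of the head or torso, barely noticeable to the Avatar, sweeps the remote image by around a hundred pixels, several times per second.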
It is a very different situation from that of a single user looking at the world through his own AR goggles, because the Usar would not be in control of the small movements of the Avatar’s head.
We field-tested this in many situations, and almost invariably the resulting experience has been really uncomfortable for the Usar who tries to direct the Avatar.
So, why did Google create these glasses and sell some of them?
Because the application was different.
Google Glass is very good as a single-user device. If you are in a surgery operating room and cannot touch anything, it is great. You just have to summon data from the medical devices (you can select them with voice commands) and look at it over the view of the room that YOU are controlling with YOUR head.
Nobody else has to see it, and no Usar has to direct you.
The same applies to any factory AR application where you are working alone and can activate 3D reproductions of the part you have to work on.
No other people are looking at your view; the application is perfect.
ubiatar is about telepresence.
Our solution is that of one person directing another, while the two look at the same physical-world scene.
If the Avatar used a smartphone fixed to his body, the experience would simply be uncomfortable.
There is another problem with AR glasses in real-world applications: you are not in a science-fiction movie.
In fiction, there are no real-life problems. Environments are not dirty and devices are not thrown around:
that is, in fact, fiction.
We know what the REALITY of factories is instead:
workers have dirty hands, tools are thrown around, and nobody has time to treat delicate optical devices with the great care they deserve.
How many hours of heavy work in an industrial environment will a Google Glass endure?
And a Microsoft Hololens?
Also, these devices cost thousands of dollars, while a simple smartphone can be bought for 40 bucks.
Last but not least, there is the problem of 3D models for real AR applications:
if the environment where the service is to be used is not limited to one of a few fixed sets, the work of creating the 3D or even 2D models for each set could be daunting.
Imagine an application for the servicing of cars. To properly offer an experience like this one:
you would need to create 3D models of each engine arrangement for each car model to be serviced: the total number of models could easily reach the thousands. It is something only Boeing has ever done, and just for the few configurations of a few production airplanes; even that was incredibly expensive.
With ubiatar, instead, the Usar simply shows one icon from a short menu, right on the spot where the operation is requested. No need to create thousands of 3D models.
In conclusion, AR glasses like Google Glass or the Microsoft HoloLens are a technological marvel, but they can be used only in single-user situations, in very clean environments, with ‘gentle’ operators, where the set of possible contents is not very large.
In all other real-world situations with one person directed by another, ubiatar on a regular smartphone is the most effective solution.