Augmented Reality Technology in Field Service and Maintenance Applications – by Alex Rapoport

Overview and History

The idea of overlaying digital information onto the human field of view is not new. The concept was first realized in the late 1960s by Ivan Sutherland, an associate professor at Harvard University. Sutherland built the first Virtual Reality (VR) and Augmented Reality (AR) Head Mounted Display (HMD), composed of a miniature CRT display, a video system and a mechanical tracker that provided head position and orientation. Subsequently, a large body of research sought to prove the applicability and usability of AR/VR HMD systems in industrial applications. Among the earliest notable achievements in this area was the work of Caudell and Mizell at Boeing in 1992. These researchers designed an HMD system that assisted workers in the airplane factory by displaying wire-assembly schematics in a see-through HMD display. A year later, Steven Feiner at Columbia University introduced KARMA, a see-through HMD system that incorporated AR instruction sequences to assist in the maintenance and repair of laser printers. What all of these early devices had in common was that they were bulky, impractical and expensive.

Today, Augmented Reality technology has reached a tipping point.  Technological advancements have enabled developers to pack sufficient computing power and advanced optical design into a small device capable of handling sophisticated image processing and mathematical algorithms. This has turned Augmented Reality technology into a practical solution for work.

Computer-Generated Vision

The key component of any Augmented Reality system is computer-generated vision technology.  This rapidly developing area of scientific research and innovation can be used in different applications to enrich physical vision through the use of artificially created graphics and data.

The three main categories of computer-generated vision can be summarized as follows:

Augmented Reality (AR)
Definition: Superimposes digital data on physical objects in the field of view; the digital data and graphics are spatially aligned to the physical elements.
Devices: Smartphones and see-through smart glasses.
Key technology: Image processing and tracking algorithms, optical see-through displays.
Target markets: Industrial, field service, medical, tourism.
Applicability for industrial use: Maintenance, service, assembly, on-the-job training.

Virtual Reality (VR)
Definition: The user is immersed in a synthetic, computer-generated environment and has no visual sense of the real world.
Devices: Oculus, Samsung Gear and similar (block vision of the real world).
Key technology: Different sensing and tracking technologies.
Target markets: Gaming, entertainment, design.
Applicability for industrial use: Training, simulators, sales.

Mixed Reality (MR)
Definition: Merges virtual objects into the real-world scene to produce a new surrounding in which virtual and real objects interact with the user in real time.
Devices: Only possible with optical see-through smart glasses (e.g., HoloLens, Meta).
Key technology: Hologram optics.
Target markets: Design, architecture, training, education.
Applicability for industrial use: Training, design.

While Virtual Reality concepts are better known to the general public, Augmented Reality technology has much broader opportunities for implementation. Market research by Digi-Capital™ estimates that the AR market will grow to $120B by 2020, while Virtual Reality will remain focused on gaming, entertainment and education/training implementations.

Augmented Reality is highly suited for industrial and field service applications. It enables true integration of digital data within human vision, using superimposed digital information to extend and enrich visual perception of physical objects.

Augmented Reality Technology

Implementation of Augmented Reality systems is based on two main principles, registration and tracking, explained below:

Registration

Registration is the initial spatial positioning of a digital element on a target physical object. The first step of the registration process is recognition of a real object or scene. The camera image of a 3D object or environment is processed in real time and transformed into a digital model.

2D AR – Image Recognition

The camera image is binarized and compared to a target image to identify pixel patterns and to find a homography between those patterns and the live image.

Due to its relative simplicity, this method is used in many augmented reality applications. Image recognition requires no prior preparation and can be used on any object.

However, a change in the user's (observer's) position with respect to the object will displace the spatial positions of digital elements or distort their homography. Progress in computer vision and environment-reconstruction algorithms (e.g., SLAM) is significantly improving 2D AR methods.

The main advantage of this method is its simplicity compared with 3D AR, while still allowing the positions of AR elements to be defined on a 2D image of a physical object.

3D AR – Model Recognition

This approach requires prior knowledge of the actual scene in the form of a 3D model. A 3D model of a target object (e.g., equipment) can be produced from CAD files or reconstructed from the real object. For example, SfM (Structure from Motion) technology can construct a 3D model of an object scanned with a smartphone camera. This method requires high-performance computing, not (yet) available in smart glasses.

Once the 3D model is uploaded into an AR system, different algorithms can be used to match the wireframe 3D model with the edges of the real object's image.

The model recognition method is applicable to stand-alone mechanical devices. However, it is impractical for electric wire panels, network racks, or entire machines that may incorporate customized subsystems or be integrated with other equipment.
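
To make the 2D registration step concrete, the sketch below applies a planar homography to the corners of a target image to find where an AR overlay should be drawn in the camera frame. This is a minimal NumPy illustration with a made-up homography matrix; real systems estimate the matrix from matched feature points rather than assuming it:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # (N, 3) homogeneous points
    mapped = homog @ H.T                    # project through H
    return mapped[:, :2] / mapped[:, 2:3]   # divide out the scale factor

# Hypothetical homography: the target image appears scaled 2x and shifted
# by (10, 5) pixels in the camera frame.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0,  5.0],
              [0.0, 0.0,  1.0]])

# Corners of a 100x100 target image -> where the AR overlay should be drawn.
corners = [(0, 0), (100, 0), (100, 100), (0, 100)]
print(apply_homography(H, corners))
```

Once the mapped corners are known, any digital element defined in target-image coordinates can be warped into the camera view the same way.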

Tracking

After the physical target object is recognized and the digital AR elements are spatially positioned (registered) on it, the AR system must keep those elements aligned as the user (observer) or the object moves. The AR system should track this alignment with six degrees of freedom (three for position and three for head orientation). Tracking can be done using sensors such as gyroscopes and accelerometers, or by using the optical input of the camera that captures the real-world view.
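
As a rough sketch of the sensor-based approach, the snippet below fuses a gyro rate (fast but drifting) with an accelerometer-derived angle (noisy but absolute) on a single axis using a complementary filter. All constants are illustrative, not taken from any particular device:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse one axis of orientation: integrate the gyro for responsiveness,
    then blend in the accelerometer estimate to cancel long-term drift."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Simulated head rotation: the true angle ramps at 10 deg/s for one second.
angle = 0.0
dt = 0.01
for step in range(100):
    true_angle = 10.0 * (step + 1) * dt
    gyro_rate = 10.0 + 0.5          # gyro reading with a constant bias
    accel_angle = true_angle        # accelerometer assumed noise-free here
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt)

print(round(angle, 2))  # close to the true 10 degrees despite the gyro bias
```

Integrating the biased gyro alone would end at 10.5 degrees; the accelerometer term keeps the estimate anchored, which is exactly the role absolute references play in full 6-DoF tracking.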

Display Technology

When combining the real and virtual worlds, two basic approaches are available for displaying digital data superimposed on the real world: optical see-through and video see-through. The comparison below summarizes these options:

Optical See-through

With optical see-through smart glasses, the real world is seen through waveguide lenses or semi-transparent prisms integrated into the transparent lens. The optics reflect computer-generated images into the user's eyes, combining them with the real-world view. Optical see-through provides the most realistic augmentation.

Challenges:

■    Occlusion. Digital objects "float" on top of physical objects. Rendering digital elements at the proper depth, or even behind real objects, requires extremely complicated algorithms.

■    "Swim" effect. The user's head movements force the AR tracking system to continuously adjust the spatial positions of digital objects. Latency in these adjustments destroys the illusion that the digital objects are fixed in the environment, causing them to "swim" around.

■    Depth perception. Our cognitive system takes more than 15 different stimuli into account in order to perceive spatial relationships between 3D objects. In optical devices, depth sensing can be achieved only by implementing binocular vision or a depth sensor, plus stereoscopic displays. Smart glasses not equipped with two cameras and/or depth sensors cannot produce correct spatial positioning and homography of digital elements.

Video See-through

In video see-through mode, the real-world view is captured by a camera (or two cameras), and the computer-generated digital elements are combined with the video representation of the real world. Video see-through allows AR to be presented not only on special smart glasses, but also on smartphones and tablets.

Challenges:

■    Latency. The video projection must be merged with digital objects and is therefore slightly delayed with respect to the real world. The increasing speed of mobile processors will reduce this latency until it is virtually undetectable.

■    Point of view. In many smart glasses the video camera (or cameras) sits in a slightly different position than the user's pupils. This displaces the views of the real and projected worlds, especially when the user observes objects at close range (up to 1 meter). Newer smart glasses with better ergonomic designs solve this issue.

■    Obscured vision. The video stream displayed on the smart glasses obscures the real-world view: the user may effectively see two pictures of the environment, the physical world and its video projection with AR. This issue is less important when a smartphone is used as the viewing device, or when smart glasses are used solely for presenting guidelines and information rather than for continuous operation.

AR in Industrial Applications

In field service, maintenance and assembly applications, emerging AR technology can play a significant role in changing how people collaborate to resolve technical issues, provide support and access technical documentation. AR enables context-sensitive display of digital information in real time, reducing the time needed to search for information during complex, time-critical processes.

With the deployment of IoT and Industrie 4.0, terabytes of data are being uploaded to the cloud. Factory operators may soon be overwhelmed by huge amounts of information, big-data analytics and real-time process data. Finding the right data where and when it is needed has already become a major challenge for field service. The combination of AR and smart glasses lets technicians streamline information at scale, presenting exactly the right information about the right process or machine, hands-free, at the right time and place. With smart glasses powered by an AR-enabled application, processes or equipment can be identified automatically, and real-time data, alarms and service manuals can be presented in front of the operator's eyes.
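
A minimal sketch of this idea, with hypothetical equipment IDs and lookup tables standing in for the IoT back end: once the AR system recognizes a piece of equipment, it assembles the overlay payload from live telemetry and documentation:

```python
# Hypothetical lookup tables standing in for IoT / asset-management services.
TELEMETRY = {"pump-17": {"pressure_bar": 6.2, "alarm": "seal temperature high"}}
MANUALS   = {"pump-17": "manual/pump-17/seal-replacement.pdf"}

def build_overlay(equipment_id):
    """Assemble the context-sensitive payload an AR headset would display
    once the recognized equipment ID is known."""
    return {
        "equipment": equipment_id,
        "live_data": TELEMETRY.get(equipment_id, {}),
        "manual": MANUALS.get(equipment_id),
    }

overlay = build_overlay("pump-17")
print(overlay["live_data"]["alarm"])
```

The point of the sketch is the selection step: the recognized equipment ID, not the operator, drives which slice of the terabytes of plant data is surfaced.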

Moreover, by connecting with experts at headquarters over real-time video, the smart glasses can help guide an operator through resolving a problem, displaying augmented visual instructions that show the operator how to set up equipment or replace parts step by step.

There are two types of industrial AR applications:

Online AR applications

In this type of application, AR elements must be presented in real time, without any prior configuration of the AR system.

For example, in a remote support interaction between an expert in the service center and a person on-site, the expert has no prior information about the on-site scene and systems and cannot prepare 3D models in advance.

In such situations, AR registration and tracking must be based on information reconstructed from 2D images received from the remote site.

New technologies that combine advanced 2D image recognition and tracking algorithms enable accurate AR features based on 2D transformations.

Preconfigured AR applications

It is possible to create AR elements and build 3D models in advance, improving the accuracy of registration and tracking and allowing the user to view AR information from different positions.

However, such an approach requires CAD files or reconstructed models of the object or scene, and building the model and AR elements involves considerable effort.

This type of augmentation can be used for training purposes, for installation of new or customized machines, or when viewing AR on identical devices (e.g., a standard office printer).

Emerging AR Technologies

As the computing power of mobile devices continues to grow, so does the viability of implementing advanced mathematical and computer vision algorithms (e.g., SLAM, SfM, RANSAC). These emerging technologies significantly improve reconstruction of the environment from camera images, enhance the accuracy of AR registration and tracking, and speed up rendering of computer-generated elements. These advanced algorithms, combined with optical see-through smart glasses and fast cellular network infrastructure, will make AR-based information and collaboration the standard in field service and industrial applications.
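
Of the algorithms named above, RANSAC is the simplest to sketch: it repeatedly fits a model to a random minimal sample and keeps the fit supported by the most inliers, which is how AR pipelines reject bad feature matches before estimating a homography. A toy 2D line-fitting version, illustrative only:

```python
import random

def ransac_line(points, iterations=200, threshold=0.5, seed=0):
    """Fit y = m*x + b robustly: sample two points, count inliers, keep best."""
    rng = random.Random(seed)
    best_model, best_inliers = None, 0
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, vertical line
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = sum(1 for x, y in points if abs(y - (m * x + b)) < threshold)
        if inliers > best_inliers:
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# Points on y = 2x + 1 plus two gross outliers (think: bad feature matches).
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -25)]
(m, b), inliers = ransac_line(pts)
print(m, b, inliers)
```

A least-squares fit over all twelve points would be dragged off the true line by the two outliers; RANSAC recovers it because the outliers never collect a majority of inliers.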

Conclusions

The implementation of AR technology for the two main types of industrial application can be summarized as follows:

Field service and maintenance
AR modeling technology: 2D AR modeling today, 3D in the future.
Display technology: Optical see-through and video see-through.
Viewing devices: Smart glasses, smartphones and tablets.
Implementation requirements: Instant implementation, integration with real-time data.

Design, assembly and training
AR modeling technology: 3D AR modeling.
Display technology: Optical see-through only.
Viewing devices: See-through smart glasses only.
Implementation requirements: Preconfigured, precise spatial positioning.

 

Alex Rapoport is Vice President of Marketing at Fieldbit, a company that provides a commercial, end-to-end software platform incorporating 2D-based Augmented Reality and develops new technologies for implementing advanced 2D- and 3D-based AR for industrial applications. The Fieldbit system can be used today on smartphones and smart glasses to provide visual, interactive support to remote operators and field technicians in real time, while enabling on-the-job knowledge capture. Fieldbit is an out-of-the-box, hardware-independent solution that enables machine manufacturers to support remote technicians dealing with different types of new, customized and legacy equipment.

Previously, Alex was a senior sales and marketing manager at companies such as Axeda, Metrolight, Electronics Line and PowerSines. He holds a B.Sc. in Electronics Engineering from the Technion, Israel, and an MBA from Manchester Business School, UK.

 

