Future trends in the AR-based consumer goods industry

The development of VR in B2B over the last few years was not the success story experts were hoping for. Initially heralded as a disruptive new technology by most industries, VR now occupies only a small portion of the immersive technologies used by businesses, because other technologies proved better suited to the tasks originally attributed to it. With the release of the Microsoft HoloLens in 2016, most potential VR use cases, like remote maintenance work, upskilling of the workforce, or remote collaboration, were taken over almost entirely by AR solutions.

VR still exists in the B2B space, but it occupies only certain niches, primarily sectors where immersive visualizations are beneficial. Real estate companies, for example, use VR to show clients properties that do not exist yet; architects use VR to design buildings and eliminate the need for expensive physical models; industrial designers and engineers create and simulate their products in VR without having to build physical, non-scalable prototypes. A variety of creative VR software is also used in the VFX and entertainment sectors. The biggest market for VR remains the consumer market, where new headsets like the Oculus Quest or the Valve Index sell out for months and are taking the entertainment sector by storm.

Even though Augmented Reality has frequently been in the public eye, it remains unclear when the technology will be fully accepted by the consumer market. In the B2B sector, AR has already become a staple technology with clearly defined use cases and benefits.


Augmented Reality has spread through all types of industries, from healthcare to automotive, and a large portion of major businesses have either already successfully integrated it or are experimenting with how to leverage Augmented Reality to their advantage.

So why is this technology nowhere to be found on the consumer end?

This is mainly due to the cost associated with high-quality AR experiences. The technology required to create these experiences is usually found only in high-end Augmented Reality head-mounted displays, like the Magic Leap or the HoloLens. These devices house specialized sensors and externally mounted cameras that deliver data to the display. The cameras and sensors are needed to understand the surroundings and help create a digital representation of them, which in turn enables impressive AR visualizations and interactions.

In addition, these headsets are extremely difficult to source, and only a few retailers and companies have the opportunity to purchase them.

Consumer AR has mainly been limited to AR experiences on mobile devices, which, apart from a few exceptions, do not live up to expectations. Mobile AR relies on image-based algorithms to locate the device and map its surroundings (simultaneous localization and mapping, or SLAM). While these techniques can produce good results with regard to localization (estimating the movement of the device), they usually fall short in the mapping department (understanding and reconstructing the environment). This is due to the inconsistent nature of image-based AR techniques: image-processing algorithms try to extract information about surfaces and real-world objects by finding regions in an image with a large variance of pixel information. These regions are susceptible to factors such as lighting or the texture of a surface, which can drastically change the effectiveness of the algorithms.
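To make the variance idea concrete, here is a minimal, purely illustrative sketch (the function name and patch size are my own, not from any real SLAM library) that scores image patches by pixel variance, the kind of signal image-based trackers latch onto:

```python
import numpy as np

def feature_strength(gray, patch=8):
    """Score each patch of a grayscale image by its pixel variance.

    High-variance patches (edges, corners, texture) are what
    image-based tracking can lock onto; flat, uniform regions score
    near zero, which is why textureless surfaces break mobile AR
    mapping. Illustrative toy, not a production feature detector.
    """
    h, w = gray.shape
    h, w = h - h % patch, w - w % patch              # crop to whole patches
    tiles = gray[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return tiles.var(axis=(1, 3))                    # one score per patch

# A noisy (textured) patch scores high; a flat patch scores zero.
rng = np.random.default_rng(0)
textured = rng.integers(0, 255, (8, 8)).astype(float)
flat = np.full((8, 8), 128.0)
scores = feature_strength(np.hstack([textured, flat]))
```

Running this on the two side-by-side patches gives a high score for the textured half and exactly zero for the flat one, mirroring why a blank wall defeats mobile AR mapping.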

So is this the end of consumer AR?

No. Even though a large portion of the AR industry has shifted its focus to B2B, some of the largest players in the space are doubling down on their efforts to bring quality AR experiences to consumers. Probably the most prominent of these players is the tech giant Apple. In past years the company has been buying up AR-related tech startups, and this year we finally got a glimpse of what Apple is trying to do with all of this aggregated knowledge and technology. At the beginning of the year the new iPad Pro 2020 became commercially available. While to most people this might be another run-of-the-mill iteration of the already well-known tablet computer, for AR developers this was the first opportunity to get their hands on an affordable high-end AR device.

 

Additionally, leaks from earlier this year suggested that Apple is working on dedicated AR wearables. The leaks revealed plans for Apple Glass, a pair of AR-dedicated smart glasses that could revolutionize the general approach to consumer AR. The biggest feature of this wearable may be that you cannot distinguish it from a regular pair of glasses. Previously, most AR headsets required a multitude of cameras and sensors to deliver a satisfying AR experience, which leads to an uncomfortable and conspicuous design that makes the headsets impractical for daily use. Nobody feels comfortable talking to someone with a huge device strapped to their head, especially when that device has an array of visible cameras and sensors pointed at them. So how does Apple plan to deliver high-end AR experiences without all this additional hardware? By relying solely on a specific type of sensor that was previously disregarded by most consumer AR manufacturers.

So what makes the new Apple devices deliver such quality AR?

Well, apart from dedicated hardware designed to accelerate AR computing, the iPad Pro 2020 also houses a special type of sensor that is said to be featured in all new Apple devices: a scanning LiDAR (Light Detection and Ranging) sensor, which, similar to radar, uses laser light to judge depth and distances. The sensor emits a train of laser pulses into the environment and records the timestamp at which each pulse returns. From the difference between emission and return time, the distance between the sensor and the reflecting object is determined.
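The distance calculation behind this is simple: a pulse travels out and back, so the one-way range is half the round-trip time multiplied by the speed of light. A minimal sketch (the timestamps below are made-up illustrative values, not real sensor output):

```python
# Time-of-flight ranging, the principle behind a scanning LiDAR:
# one-way distance = speed of light * round-trip time / 2.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(emit_ts: float, return_ts: float) -> float:
    """Distance in metres from emit/return timestamps in seconds."""
    round_trip = return_ts - emit_ts
    return C * round_trip / 2.0

# A pulse returning after ~20 nanoseconds hit something ~3 m away.
d = tof_distance(0.0, 20e-9)
```

At these timescales the sensor's clock has to resolve nanoseconds, which is part of what makes miniaturizing such hardware into a tablet notable.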

 

LiDAR has been around for a while; most notably, it was used on the Apollo 15 mission in 1971 to map the lunar surface. Since then, LiDAR has come a long way and is used in a multitude of applications, from 3D scanners and scientific instruments to self-driving vehicles.

How does this help Mobile AR?

As previously stated, LiDAR is extremely useful for measuring the distances between the sensor and objects in its surroundings. This depth information can be combined with camera data to create a digital reconstruction of the environment and its surfaces. The reconstructed environment can then drive all types of AR experiences and features that were previously available only to users with expensive AR HMDs.
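A sketch of how depth plus camera data becomes geometry: each depth pixel can be lifted into a 3-D point via the standard pinhole camera model. The intrinsics below are made-up toy values, and this is only the first step of a real reconstruction pipeline (meshing and tracking come after):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a per-pixel depth map (metres) into a 3-D point cloud
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.

    Minimal illustration of turning sensed depth into scene geometry;
    fx, fy (focal lengths) and cx, cy (principal point) are camera
    intrinsics, here chosen arbitrarily.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)          # (h, w, 3) camera-space points

# 2x2 toy depth map: every pixel sees a surface 1 m away.
pts = backproject(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The resulting point cloud is what surface detection, meshing, and occlusion tests are built on top of.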

 

The benefits of LiDAR-supported AR include:

  • Object occlusion
  • Increased tracking accuracy and stability (markerless and object-based)
  • Scene semantics (labeling and semantic understanding of surfaces)
  • Physics simulation and interaction of digital content with the scene
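Object occlusion, the first item above, reduces to a per-pixel depth test: a virtual pixel is drawn only where the virtual object is closer to the camera than the real surface the sensor measured. A toy sketch (real renderers do this per-fragment on the GPU; the numbers here are invented):

```python
import numpy as np

def occlusion_mask(real_depth, virtual_depth):
    """True where the virtual content should be visible, i.e. where
    it sits in front of the sensed real-world surface."""
    return virtual_depth < real_depth

# Real wall at 2 m across a 4-pixel frame; a virtual cube at 1.5 m
# (in front of the wall) on the left, 3 m (behind it) on the right.
real = np.full((1, 4), 2.0)
virtual = np.array([[1.5, 1.5, 3.0, 3.0]])
mask = occlusion_mask(real, virtual)
```

The right half of the cube is correctly hidden behind the wall, which is exactly the effect image-only mobile AR struggles to get right without reliable depth.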

What does this mean for the AR Industry?

The integration of LiDAR-supported AR technology and the availability of devices capable of running these algorithms may drastically change the landscape of both business and consumer AR.

For businesses, the resulting reduction in cost can lead to increased interest in utilizing AR, since the barrier to entry is now much lower than before. The technology may also find new use cases, especially in areas where the cost previously outweighed the benefits of AR.

For consumers, this means that sooner or later most of our mobile devices will be partially dedicated to delivering high-level AR experiences, whether we like it or not. Which areas of our day-to-day lives will be influenced most by AR solutions is still unclear.

 

For AR to be fully adopted into the mainstream, it will require a breakthrough application that leverages AR technology in a way traditional applications cannot. The big question is which application is going to be THE one. The scope of imaginable use cases is immense. One obvious demand is on-site navigation in unfamiliar surroundings, like big commercial centres or large train stations: picking the wrong exit after leaving the London tube, for example, can easily cost you half an hour of walking time. Education and clinical diagnostics could be two other promising options. At the end of the day, the business case will decide which of the conceivable applications becomes the main driver of the upcoming change in the AR market. But for now, at least we can be confident that if we do develop that application, people will be able to use it.
