The challenges implementing geospatial augmented reality on mobile devices


Posted: Tuesday 27 May, 2014

Augmented reality (AR) enhances the perception of reality by adding visual data to a live view of the real world. This live view may either be perceived directly by the eye and augmented with a head-up display (e.g., Google Glass), or it may be a video stream recorded by a digital camera and augmented on a display, typically of a mobile phone or tablet computer.

An AR system requires some form of graphics rendering - plain text in the simplest case - and the evaluation of sensor data to establish the device's relationship to the real world. Sensor inputs may include GPS for estimating location, compass and gyroscope readings for estimating the orientation of the device, and camera data for detecting known objects.
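
As a minimal sketch of how these inputs combine in geospatial AR (the function names and the simple pinhole-style projection below are illustrative assumptions, not a description of any particular product's pipeline), a GPS fix and a compass heading suffice to place a mapped asset on screen:

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        # Initial bearing from the device (lat1, lon1) to the asset (lat2, lon2),
        # in degrees clockwise from true north.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    def screen_x(asset_bearing, device_heading, fov_deg=60.0, width_px=1080):
        # Horizontal pixel position of the asset, or None if it lies outside
        # the camera's horizontal field of view.
        offset = (asset_bearing - device_heading + 180.0) % 360.0 - 180.0
        if abs(offset) > fov_deg / 2.0:
            return None
        return width_px * (0.5 + offset / fov_deg)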

Supported by dedicated hardware, graphics rendering can be fairly sophisticated on mobile phones. The largest challenge in improving AR solutions, however, lies in developing robust user interaction and environment perception. The remainder of this post discusses the limitations imposed by low-quality sensors, the limitations stemming from the reduced computing power of mobile devices, and the challenges of graphics rendering for AR.

The sensor data provided by mobile devices is not directly usable for environment perception due to the inferior quality of the measurements. Low-end gyroscopes and accelerometers suffer from poor signal-to-noise ratios, orientation-dependent biases and drift, and magnetometer readings are susceptible to interference from nearby ferromagnetic materials and may be too insensitive for the Earth's weak magnetic field.
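
A common mitigation, sketched below with hypothetical variable names, is to fuse the complementary strengths of the sensors: the gyroscope tracks fast rotation well but drifts, while the accelerometer's gravity estimate is noisy but drift-free.

    import math

    def fuse_pitch(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
        # Complementary filter: integrate the gyro for responsiveness,
        # and pull slowly towards the accelerometer tilt to cancel drift.
        return alpha * (pitch_prev + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

    # accel_pitch can be derived from the gravity vector, e.g.
    # math.atan2(-ax, math.sqrt(ay * ay + az * az))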

With advances in computer vision, digital cameras have become an increasingly popular means of obtaining accurate measurements, permitting, for example, the tracking of objects or of the device pose. However, the low-end cameras found in phones generally cannot match the performance of industrial or upmarket consumer cameras.

A major restriction is imposed by small lens sizes, as these permit little light to pass through. This results in long exposure times or high signal gains, both of which cause elevated image noise. Moreover, according to Abbe's diffraction limit, small lenses have rather limited spatial resolution; in other words, the ability to record fine detail is reduced.
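
For reference, Abbe's limit relates the smallest resolvable detail d to the wavelength of light and the numerical aperture NA of the lens:

    d = \frac{\lambda}{2\,\mathrm{NA}}

With green light (λ ≈ 550 nm) and a small aperture of, say, NA ≈ 0.2 (an illustrative figure, not a measured phone value), d is about 1.4 µm, on the order of the pixel pitch of typical phone sensors.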

The design of the imaging sensor also has a significant impact on the performance of computer vision systems, with spatial resolution (i.e., the megapixel count) being the least important factor. Limitations are imposed rather by the speed at which the electronics can read the image off the sensor (time to read one frame, frames per second), by thermal noise, and by the design of the shutter. Low-end devices invariably have a so-called rolling shutter: the image is read from the sensor while photons are still being aggregated, so the upper image area is recorded earlier than the lower area. This distorts moving objects (for example, vertical lines appear tilted) and significantly complicates computer vision tasks such as object detection and tracking.
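
The magnitude of this distortion is easy to estimate. The sketch below (the function name and parameter values are illustrative assumptions) approximates the apparent tilt of a vertical edge when the camera pans horizontally during sensor readout:

    import math

    def rolling_shutter_skew_deg(pan_rate_deg_s, readout_s,
                                 fov_deg=60.0, width_px=1080, height_px=1920):
        # Horizontal image shift accumulated between reading the top row
        # and reading the bottom row of the sensor.
        shift_px = pan_rate_deg_s * readout_s * (width_px / fov_deg)
        # Tilt of a vertical edge spanning the full image height.
        return math.degrees(math.atan2(shift_px, height_px))

    # e.g. a 30 deg/s pan with a 30 ms readout shifts the bottom row by
    # about 16 px relative to the top, tilting vertical edges by ~0.5 deg.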

To cope with rolling shutter, noise and low frame rates, low-resolution images are used for computer vision tasks. For this reason, augmented reality SDKs (such as Vuforia, Metaio, ARToolKit and so forth) have a hard-coded limit of 640 by 480 pixels. However, this impairs the ability to detect and track objects over a wide depth range in real-world space. In particular, distant objects may be too poor in visual features to permit a reliable pose estimate.
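
In practice the downscaling step is a one-liner. The sketch below uses OpenCV; the function name is ours, and OpenCV itself is an assumption rather than what these SDKs necessarily use internally:

    import cv2

    def downscale_for_tracking(frame_bgr, target=(640, 480)):
        # INTER_AREA averages source pixels, which also suppresses sensor noise.
        small = cv2.resize(frame_bgr, target, interpolation=cv2.INTER_AREA)
        return cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)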

As for object tracking algorithms, all current AR software development kits require the presence of well-structured planar objects and do not support pose estimation from general 3D structure. Estimating the pose of a 2D plane is computationally far cheaper, a crucial factor on devices with limited battery life.
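
The planar case reduces to a well-conditioned perspective-n-point problem. The sketch below uses OpenCV's solvePnP on the four corners of a known marker; all coordinates and camera intrinsics are made-up illustrative numbers:

    import numpy as np
    import cv2

    # Corners of a 20 cm square marker in metres; z = 0 encodes the plane assumption.
    object_pts = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]], dtype=np.float64)
    # Matching pixel coordinates detected in the camera frame (hypothetical values).
    image_pts = np.array([[310, 240], [420, 235], [425, 350], [305, 355]], dtype=np.float64)
    # Pinhole intrinsics: focal length 800 px, principal point at the image centre.
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    # rvec/tvec give the marker's pose relative to the camera.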

Graphics rendering for AR apps faces a number of challenges as well: rendered objects should appear faithfully anchored to the real world, which requires not only rendering lighting and shadows consistent with the real scene, but also accounting for real-world objects that may partially occlude virtual ones.
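
Conceptually, occlusion handling is a per-pixel depth test between the virtual scene and an estimate of real-world depth. A minimal sketch follows, assuming a real depth map is somehow available - obtaining one is itself the hard part on a phone:

    import numpy as np

    def composite_with_occlusion(camera_rgb, virtual_rgb, virtual_depth, real_depth):
        # Draw a virtual pixel only where it is closer than the real scene.
        visible = virtual_depth < real_depth   # boolean mask, shape (H, W)
        out = camera_rgb.copy()
        out[visible] = virtual_rgb[visible]
        return out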

Our developers are constantly honing our AR algorithms to mitigate these hardware limitations and provide the best user experience possible today. We are excited to see devices with enhanced capabilities continually entering the market, and we use these advances to further improve the experience provided by our augmented reality solutions.

To discuss this topic or other augmented reality techniques, please comment on our LinkedIn page or contact Ralf directly at ralf@augview.net.

 
Posted by Ralf Haeusler
Tags: computer vision, computer optics, mobile hardware, photogrammetry

 
