ToF vs LiDAR: what’s the difference?

Lately, there has been so much hype around the LiDAR in new Apple devices that it’s easy to forget that mobile Augmented Reality can work any other way. But it can and does, particularly with ToF sensors reaching new heights in Samsung phones.

Whether you’re a developer, looking for a new device, or just curious, it’s worth taking some time to unpack these acronyms and learn the ins and outs of cell phone depth sensing.

What is ToF?

ToF is short for Time of Flight.

Technically, ToF refers to using the speed of light (or even sound) to determine distance. The device measures the time it takes for light (or sound) to leave it, bounce off an object or surface, and return; multiplying that round-trip time by the speed of light (or sound) and dividing by two gives the distance from the device to the object or surface.
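
To make the arithmetic concrete, here is a minimal Kotlin sketch of that calculation; the round-trip time in the example is purely illustrative and not taken from any real sensor:

```kotlin
// Minimal sketch of the ToF distance formula: distance = (speed * roundTripTime) / 2.
const val SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

fun distanceFromRoundTrip(roundTripSeconds: Double, speed: Double = SPEED_OF_LIGHT_M_PER_S): Double =
    speed * roundTripSeconds / 2.0

fun main() {
    // Light returning after roughly 6.67 nanoseconds corresponds to an object about 1 meter away.
    val roundTrip = 6.67e-9
    println("Estimated distance: %.3f m".format(distanceFromRoundTrip(roundTrip)))
}
```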

Hence, the relationship is that all LiDAR is a type of Time of Flight, but not all Time of Flight is LiDAR. To keep things simple, when we talk about “ToF”, we mean optical distance measurement, excluding LiDAR.

So, if LiDAR and non-LiDAR optical ToF both use light for distance determination and 3D mapping, how do they differ?

What is LiDAR?

LiDAR stands for Light Detection and Ranging. This technology uses a laser, or a laser grid, as the light source in the equation described above.

A single LiDAR reading can be used to measure things like the width of a room, but multiple LiDAR readings can be used to create “point clouds”. These can be used to create three-dimensional models of objects or topographic maps of entire areas.

While LiDAR may be new to mobile devices, the technology itself has been around for some time. In non-mobile environments, LiDAR is used to do everything from mapping underwater environments to discovering archaeological sites.

How are LiDAR and ToF different?

The functional difference between LiDAR and other forms of ToF is that LiDAR uses pulsed lasers to build a point cloud, which is then used to build a 3D map or image. ToF applications create “depth maps” based on light detection, usually through a standard RGB camera.
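
To make the distinction concrete, here is a minimal Kotlin sketch that back-projects a depth map into a point cloud using a simple pinhole camera model; the focal length and the assumption that the principal point sits at the image center are illustrative choices, not details of any particular phone:

```kotlin
// Minimal sketch: converting a depth map (a 2D grid of distances) into a point cloud
// (a list of 3D points) with a simple pinhole camera model.
data class Point3(val x: Float, val y: Float, val z: Float)

fun depthMapToPointCloud(
    depth: Array<FloatArray>,   // depth[row][col] in meters
    focalLengthPx: Float        // focal length expressed in pixels (illustrative value)
): List<Point3> {
    val height = depth.size
    val width = depth[0].size
    val cx = width / 2f          // principal point assumed at the image center
    val cy = height / 2f
    val cloud = mutableListOf<Point3>()
    for (v in 0 until height) {
        for (u in 0 until width) {
            val z = depth[v][u]
            if (z <= 0f) continue                    // skip pixels with no valid depth
            val x = (u - cx) * z / focalLengthPx     // back-project pixel (u, v) into 3D
            val y = (v - cy) * z / focalLengthPx
            cloud.add(Point3(x, y, z))
        }
    }
    return cloud
}
```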

The advantage of ToF over LiDAR is that ToF requires less specialized equipment, so it can be used in smaller, less expensive devices. The advantage of LiDAR comes from the ease with which a computer can read a point cloud compared to a depth map.

The Depth API that Google created for Android devices works best on ToF-enabled devices and works by creating depth maps and recognizing “feature points”. These feature points, often boundaries between areas of different light intensity, are then used to identify different planes in the environment. This essentially creates a lower-resolution point cloud.
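
As a rough illustration of how a developer taps into this, here is a Kotlin sketch that enables depth on an ARCore session and reads one depth image; the class and method names are taken from the ARCore Android SDK as I understand it and may differ between SDK versions:

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session

// Sketch: enable ARCore's Depth API when the device supports it.
fun enableDepthIfSupported(session: Session) {
    val config = Config(session)
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        // AUTOMATIC lets ARCore use a hardware ToF sensor when present,
        // or fall back to depth-from-motion on single-camera devices.
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Sketch: read one depth image from a frame obtained via session.update().
fun readDepth(frame: Frame) {
    val depthImage = frame.acquireDepthImage16Bits()
    try {
        // Each pixel of the returned image encodes a distance from the camera.
        println("Depth map resolution: ${depthImage.width}x${depthImage.height}")
    } finally {
        depthImage.close()
    }
}
```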

How ToF and LiDAR work with Mobile AR

Depth maps and point clouds are interesting, and for some people and applications, they are enough. However, for most AR applications, this data needs to be contextualized. Both ToF and LiDAR do this by collaborating with other sensors on the mobile device. Specifically, these platforms need to understand the orientation and movement of the phone.

Making sense of the location of the device within a mapped environment is called Simultaneous Localization and Mapping, or “SLAM”. SLAM is used in other fields such as autonomous vehicles, but it is especially necessary for mobile AR applications that place digital objects in the physical environment.

This is especially true for experiences that stay in place when the user isn’t interacting with them and for the placement of digital objects that appear to be behind individuals and objects.

Another important factor in the positioning of digital objects in both LiDAR- and ToF-based applications is “anchors”. Anchors are digital points in the physical world to which digital objects are “attached”.

In world-scale applications like Pokémon Go, this is done through a separate process called “geotagging”. However, in mobile-based AR applications, the digital object is anchored to points in a LiDAR point cloud or to one of the feature points on a depth map.
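
As a sketch of what anchoring looks like in code, the following Kotlin snippet hit-tests the center of the screen against the planes ARCore has detected and attaches an anchor to the first valid hit; again, the API names are assumed from the ARCore Android SDK, and rendering the attached object is left out:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.TrackingState

// Sketch: create an anchor where the center of the screen intersects a detected plane.
fun anchorAtScreenCenter(frame: Frame, screenWidth: Float, screenHeight: Float): Anchor? {
    val hits = frame.hitTest(screenWidth / 2f, screenHeight / 2f)
    for (hit in hits) {
        val trackable = hit.trackable
        // Only anchor to a tracked plane that actually contains the hit point.
        if (trackable is Plane &&
            trackable.trackingState == TrackingState.TRACKING &&
            trackable.isPoseInPolygon(hit.hitPose)
        ) {
            // ARCore keeps updating the anchor's pose as its map of the world improves,
            // so an object attached to it stays "pinned" to that physical spot.
            return hit.createAnchor()
        }
    }
    return null
}
```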

Is LiDAR better than ToF?

Strictly speaking, LiDAR is faster and more accurate than other forms of Time of Flight. However, the difference only becomes significant in more technologically advanced applications.

For example, ToF and Google’s Depth API have a hard time understanding large, low-texture surfaces like white walls. This can make it difficult for applications that use this method to accurately position digital objects on some surfaces in the physical world. Applications that use LiDAR are less likely to have this problem.

However, applications involving larger or more structurally varied environments are unlikely to have this problem. Additionally, most mobile-based consumer AR applications involve using an AR filter on the user’s face or body, an application that is unlikely to run into problems because of large, featureless surfaces.

Why are Apple and Google using different depth sensors?

In releasing their LiDAR-compatible devices, Apple said they included sensors and other hardware to “open up more professional workflows and support professional photo and video apps.” The release also called their LiDAR-compatible iPad Pro “the world’s best augmented reality device” and touted Apple’s measurement apps.

Google hasn’t provided as explicit an explanation as to why its Depth API and new line of supported devices don’t use LiDAR. Besides keeping Android devices lighter and more affordable by working around LiDAR, there is also a major benefit in terms of accessibility.

Since Android runs on mobile devices made by many companies, requiring LiDAR would favor LiDAR-compatible models at the expense of all others. Also, since it only requires a standard camera, the Depth API is backward compatible with a much wider range of devices.

In fact, Google’s Depth API is device-independent, which means developers using Google’s AR experience-building platform can develop experiences that also work on Apple devices.

Have you explored depth sensing?

This article focused primarily on LiDAR and ToF in mobile-based AR experiences. This is largely because these more complex experiences require the most explanation. It is also because these experiences are the most fun and the most promising.

However, depth sensing approaches like these underpin many simpler and more practical experiences and tools that you could use every day without thinking too much about it. Hopefully reading about ToF and LiDAR will give you further appreciation for these applications.

by Abdullah Sam