The Race Towards Human-Level Perception in Self-Driving Cars

 

LiDAR: Laser-based 3D Road Mapping Jostles for Position

SUMMARY

In 2016, the World Health Organization reported that the annual number of road traffic deaths recorded worldwide had reached 1.25 million. In the UK alone, the total number of road casualties reached 186,209, and “driver error or reaction” was considered a factor in more than 65% of fatal crashes. Advanced Driver Assistance Systems (ADAS), first developed in the 1990s and introduced in vehicles in the 2000s, have helped to reduce the number of road accidents, but ADAS cannot fully address this problem, as over 90% of all car accidents result from human mistakes. Innovations in sensors and computing power have led automobile manufacturers to pursue full vehicle automation, and many are now considering introducing this technology to the mass market in the near future. Is the transfer of control from humans to their autonomous vehicles just around the corner? Self-driving cars will first need to be proven safe on the road. We review the technology involved in current self-driving car applications, including the sensors that give self-driving vehicles the ability to detect moving objects, driving paths, and road signs, and in particular the LiDAR sensors that create 3D maps of the vehicle’s surroundings.

 
 

Introduction

As Tesla unveiled its Model 3 in March 2016, its CEO Elon Musk declared that “All Model 3s will come standard with Autopilot hardware”. This announcement set the tone for the automobile industry’s current pursuit of full vehicle automation. Self-driving taxis are being deployed in California by the start-up Voyage, and the effort to develop autonomous vehicles has spread across the automotive sector: companies such as Volvo, Google (Waymo), Uber, Toyota, Tesla, and more recently Honda, Audi, BMW, and Apple have revealed their plans to introduce automated cars to the market.
In March 2017, the British Government committed about £100 million towards autonomous driving projects. However, a recent survey suggested that 73% of British consumers still believe that “fully self-driving vehicles will not be safe” (Deloitte). Though most cars are now equipped with Advanced Driver Assistance Systems (ADAS) that provide features such as “park assist”, the more complex levels of vehicle automation predicted to be achieved between 2025 and 2030 (Deloitte) will require further technological advances to ensure safety.
Although ADAS have reduced both the number and the severity of road accidents, they cannot prevent a driver’s mistakes; these systems can warn and assist, but they cannot take control of the vehicle to avoid a collision. With the total number of cars projected to reach the two billion mark globally by 2040, self-driving technology is expected to play an important role in improving the safety of the world’s roads. Tragic events such as the June 2016 fatal crash involving a Tesla vehicle that was running in “Autopilot” mode have graphically demonstrated that more progress must be made before fully automated driving systems can be considered completely safe.
In 2014, the US-based Society of Automotive Engineers (SAE) published a report titled “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems”. In this report, they defined six levels of driving automation, from “no automation” (Level 0) to “full automation” (Level 5), with ADAS representing Level 1. In order for a Level 5 car to be considered safe on the road, it must be able to identify dangerous situations, anticipate hazards and other road users’ mistakes, and react appropriately. Thus, an “autopilot” system needs to possess some basic senses.
Providing cars with the senses they need to understand their surroundings is a complicated process. Cameras can help the car to detect objects in front of it and record information about road signs and traffic lights; they are already used to provide lane-departure warnings in ADAS. However, cameras have their limits as sensors. For example, interpreting a pair of stereoscopic camera images to produce a 3D map of the car’s environment is an extremely complex task, and requires significant computing power. Fortunately, alternative types of sensors are available to complement cameras: radar sensors, ultrasonic sensors, GPS receivers paired with inertial navigation systems (INS), lasers, and advanced radars can all help vehicle automation systems map the road and identify nearby objects in 3D in real time.

Using radar sensors to prevent collisions

Commonly used to detect objects that are nearby and moving quickly, even through snow and fog, radar sensors are already fitted to many cars with adaptive cruise control systems. Radar can measure an object’s distance and angle and, via the Doppler effect, its relative speed. A radio wave, usually a Frequency Modulated Continuous Wave (FMCW), is emitted from the radar; after reflecting off a moving object, the wave returns with its frequency shifted in proportion to that object’s speed, and this information is extracted when the reflected signal reaches the radar’s detector. Travelling at 300 million meters per second, these radio waves provide effectively instantaneous measurements.
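To make that extraction step concrete, the sketch below shows how a triangular-chirp FMCW radar might combine the beat frequencies measured on its up-sweep and down-sweep into a range and a closing speed. It is a minimal illustration of the standard textbook relationships; the carrier frequency, bandwidth, chirp duration, and beat frequencies are hypothetical values chosen only to show the arithmetic, not the parameters of any particular automotive radar.

# Illustrative sketch: range and radial speed from a triangular-chirp FMCW radar.
# All numbers below are hypothetical and chosen only to demonstrate the arithmetic.

C = 3.0e8  # speed of light, m/s

def fmcw_range_and_speed(f_beat_up, f_beat_down, carrier_hz, bandwidth_hz, chirp_s):
    """Combine up- and down-chirp beat frequencies (Hz) into range and closing speed."""
    slope = bandwidth_hz / chirp_s             # chirp slope, Hz per second
    f_range = (f_beat_up + f_beat_down) / 2    # component caused by round-trip delay
    f_doppler = (f_beat_down - f_beat_up) / 2  # component caused by the Doppler shift
    distance_m = C * f_range / (2 * slope)
    speed_m_s = C * f_doppler / (2 * carrier_hz)
    return distance_m, speed_m_s

# Example: a 77 GHz radar sweeping 300 MHz over 1 ms sees beat frequencies of
# 38 kHz (up-sweep) and 42 kHz (down-sweep).
d, v = fmcw_range_and_speed(f_beat_up=38_000, f_beat_down=42_000,
                            carrier_hz=77e9, bandwidth_hz=300e6, chirp_s=1e-3)
print(f"target at ~{d:.1f} m, closing at ~{v:.1f} m/s")  # ~20 m, ~3.9 m/s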
Radar is not the only solution for the detection of moving objects. One alternative is photoelectric sensors based on eye-safe lasers, such as those developed by SICK. Their sensing range is more likely to be affected by the colour and reflectivity of the target, however, and they do not perform as well in poor weather conditions. Although some solutions exist that make photoelectric sensors more robust, such as those proposed by Allen-Bradley, solid-state radar-on-a-chip systems are common, small, and inexpensive, which makes them easy to implement on both the front and rear of a car. For instance, Google’s self-driving car is equipped with four radars mounted on its front and rear bumpers, enabling it to detect vehicles approaching from either end. Although these qualities make an FMCW system desirable for near-vehicle obstacle detection, at present most devices require a relatively large radio frequency (RF) bandwidth to detect objects at high resolution. Regulations governing the allocation of radio frequencies constrain the bandwidth available for such systems, and thereby limit their maximum resolution. While technology exists that “…combines both a frequency modulated, continuous wave (FMCW) system and a two-frequency Doppler (2FD) system”, the most common way to address the shortcomings of radar sensors is to include additional types of sensors, such as ultrasonic sensors.
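The link between RF bandwidth and resolution can be made explicit: the range resolution of a radar is approximately c / (2B), where B is the swept bandwidth. The short sketch below uses purely illustrative bandwidth figures (not actual regulatory allocations) to show why a bandwidth-constrained radar struggles to separate closely spaced obstacles.

# Back-of-envelope range resolution: delta_R = c / (2 * B).
# The bandwidth figures are illustrative, not regulatory values.
C = 3.0e8
for bandwidth_hz in (300e6, 1e9, 4e9):
    print(f"{bandwidth_hz / 1e9:.1f} GHz of RF bandwidth -> "
          f"~{C / (2 * bandwidth_hz) * 100:.0f} cm range resolution")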

Using ultrasonic sensors to detect nearby obstacles

Ultrasonic sound waves can be used to detect variations in an object’s distance with an accuracy on the order of a centimetre. The sensors are also small and cheap, and are already part of many cars equipped with “park assist” technology. Valeo, a multinational automotive supplier based in France, offers a complete line of park assist systems called “Beep & Park”. Using ultrasonic sensors, the on-board system detects any obstacle in front of or behind the vehicle (e.g., other vehicles, pedestrians, curb stones) and informs the driver of the obstacle’s proximity and location on a control screen.
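The ranging principle itself is simple: distance is half the round-trip time of flight multiplied by the speed of sound. The sketch below is a minimal illustration of that calculation; it assumes dry air at roughly 20 °C (about 343 m/s), whereas real park-assist systems must compensate for temperature, which is one reason accuracy bottoms out around a centimetre.

# Minimal sketch of ultrasonic ranging: distance = speed_of_sound * round_trip / 2.
SPEED_OF_SOUND = 343.0  # m/s, assuming dry air at ~20 degrees C

def echo_distance_m(round_trip_s):
    """Convert a round-trip echo time (seconds) into an obstacle distance (metres)."""
    return SPEED_OF_SOUND * round_trip_s / 2

# A 5.8 ms echo corresponds to an obstacle roughly 1 m away.
print(f"{echo_distance_m(5.8e-3):.2f} m")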

Using a GPS receiver with an inertial navigation system (INS) for navigation

Inertial navigation systems (INS) analyse the data collected by an accelerometer, a gyroscope, and a compass, and provide position and angle updates at a faster rate than a GPS, giving the car an estimate of its instantaneous position and speed. Initially developed for vehicles such as marine vessels, aircraft, submarines, guided missiles, and spacecraft, modules adapted for use in ground vehicles are now commercially available. The Oxford-based company OxTS designs and manufactures devices providing constant position, slip angle, orientation, velocity, and acceleration estimates at 100 or 250 Hz with excellent accuracy and reliability. Most ADAS in use in cars around the world today—including lane departure warning systems and collision avoidance systems—have been developed, tested, and validated using OxTS’s equipment.
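The reason an INS is paired with GPS is that it fills the gaps between satellite fixes: at 100 or 250 Hz the car dead-reckons its position by integrating accelerometer (and gyroscope) data, then corrects any accumulated drift when the next GPS fix arrives. The sketch below is a deliberately simplified, one-dimensional illustration of that dead-reckoning step; a production system would use a Kalman filter and full 3D kinematics, and all numbers here are hypothetical.

# Toy 1-D dead reckoning between GPS fixes, updated at 100 Hz.
DT = 0.01  # INS update period, seconds (100 Hz)

def dead_reckon(position_m, velocity_m_s, accelerations):
    """Propagate position and velocity through a sequence of accelerometer samples."""
    for a in accelerations:            # e.g. 100 samples between 1 Hz GPS fixes
        velocity_m_s += a * DT
        position_m += velocity_m_s * DT
    return position_m, velocity_m_s

pos, vel = 0.0, 13.9                   # ~50 km/h at the last GPS fix
pos, vel = dead_reckon(pos, vel, [0.2] * 100)  # one second of gentle acceleration
print(f"estimated position after 1 s: {pos:.1f} m, speed: {vel:.1f} m/s")
# When the next GPS fix arrives, the fused estimate is corrected for drift.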
The information collected by all the sensors in an autonomous car must be analysed in real time by a decision-making computer. Such data analysis requires significant computing power, as well as complex machine-learning algorithms. The car must be able to recognise road signs and interpret other road users’ behaviour and gestures. It must also be able to anticipate potential hazards and accidents. Human drivers can consider the ethical implications of a particular action, but it is not that simple for artificial intelligence. This is an important consideration, as every manoeuvre made in reaction to a hazardous situation implies trade-offs in terms of the risks it poses to different parties. Is the set of sensors described above sufficient to reconstruct the car’s environment well enough to support this decision-making, or does the computer require additional support? This question is currently dividing the automotive industry, with Tesla betting on smart software and processing capacity, while the other players prefer more powerful sensors that provide a direct 3D map of the car’s environment.

Mapping the road in 3D – The sensing route paved with lasers and advanced radars

Tesla recently revealed that in its Model 3, the data from the radars, ultrasonic detectors, and eight cameras dotted around the vehicle will be processed by a Tesla Neural Net running on a computer powered by Nvidia's Titan GPUs, claimed to be 40 times faster than the hardware used for autonomous driving on its previous models. This computer will construct a 3D map of the vehicle’s environment from the information gathered by all of the Model 3’s sensors, whereas in competitors’ automated driving systems the 3D representation of the car’s surroundings is created directly by an additional type of sensor: LiDAR. Google, Uber, and most of the other companies aspiring to make autonomous vehicles already use LiDAR.
Short for “light detection and ranging”, LiDAR technology employs a spinning laser that fires off millions of pulses of near-infrared light per second. The time taken for the light to return to the sensor is a direct measure of the distance to nearby objects, accurate to a few centimetres. It can provide accurate readings at distances of about 100 meters, while producing a 360-degree map at a high refresh rate.
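The sketch below illustrates how a single laser echo becomes a point in a 3D map: the round-trip time gives the range, and the beam’s azimuth and elevation at the moment of firing place the return in Cartesian space. The figures and axis conventions are illustrative, not those of any particular LiDAR unit.

# Minimal sketch: turning one LiDAR echo into a 3D point.
import math

C = 3.0e8  # speed of light, m/s

def lidar_point(round_trip_s, azimuth_deg, elevation_deg):
    """Convert a round-trip time and beam direction into (x, y, z) in metres."""
    r = C * round_trip_s / 2                       # range to the reflecting surface
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),       # x: forward
            r * math.cos(el) * math.sin(az),       # y: left
            r * math.sin(el))                      # z: up

# An echo arriving after ~333 ns corresponds to a surface roughly 50 m away.
print(lidar_point(round_trip_s=333e-9, azimuth_deg=30.0, elevation_deg=-2.0))

Note that a timing error of 100 picoseconds already corresponds to roughly 1.5 cm of range error, so the few-centimetre accuracy quoted above implies sub-nanosecond timing of every echo.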
Can Tesla’s LiDAR-less approach succeed? The company’s former partner Mobileye, an Israeli leader in computer vision, has claimed that demonstration vehicles can drive autonomously while relying on camera sensors alone. However, it has also noted that production-series cars using its technology (expected by 2021) will include additional sensors to deliver a robust, redundant automation solution based on multiple modalities (primarily radar and LiDAR). The LiDAR sensors currently in use on self-driving car prototypes remain bulky and expensive, costing up to tens of thousands of dollars each. To make them suitable for a mass market, the US automaker Ford and the Chinese tech giant Baidu have jointly invested $150 million in Velodyne, the world’s leading LiDAR supplier. Quanergy and Velodyne are working separately towards developing small, solid-state LiDARs that could cost hundreds of dollars each: a disruptive innovation that would be a game changer for the industry.
These solid-state sensors could measure only about 10 cm by 5 cm by 5 cm, small enough to be embedded into the front, sides, and corners of vehicles. Velodyne’s technology should begin mass production at its new megafactory in San Jose, California in 2018, while Quanergy is also planning to mass-produce its solid-state LiDAR solutions.
Though it is the market leader, Velodyne is now in a race with a number of new competitors. The start-up Innoviz has plans for an even smaller $100 solid-state device designed specifically for autonomous vehicles, and intends to begin production in 2018. Oryx Vision’s singular innovation, microscopic antennas that, unlike photodetectors, are not wavelength-restricted, provides a low-cost, coherent LiDAR solution: Oryx is able to assemble tens of thousands of these antennas into a single silicon-based, solid-state flash sensor, capturing economies of scale. The American start-up Luminar is developing a very high resolution, long-range LiDAR (up to 250 meters) that also allows for better detection of less-reflective objects, such as black cars. Because it comprises moving optics and a detector suited to a more focused laser wavelength, this technology is not expected to reduce the cost per device in the way that solid-state solutions are. Luminar is currently building a 50,000 square foot factory in Orlando, Florida. Two former Apple engineers recently founded a start-up called Aeva in Palo Alto, focused on developing cost-competitive LiDAR solutions; although their prototype is still under development, they plan to sell devices in 2018.
At a more fundamental research level, a group at the Massachusetts Institute of Technology, supported by a grant from DARPA, fabricated a LiDAR-on-a-chip measuring 0.5 mm by 6 mm that could also integrate on-chip lasers. These chips could eventually cost only $10 each, and interface with each other across free space at data rates of up to 40 Gb/s. Such data rates, and even higher rates, have been demonstrated using radio-over-fibre signal generation similar to mobile radio signals: a Danish research group recently reported a hybrid optical fibre-wireless transmission link achieving 100 Gb/s, five times higher than the best 5G outdoor trial reported to date.
Researchers from the University of California, Berkeley have also fabricated a chip-scale LiDAR embedded on a 3 mm by 3 mm silicon-photonic chip that uses a 5-V power supply and is coupled with novel self-sweeping lasers. This chip, developed in collaboration with Bandwidth10, Inc., a Californian company based in San Jose, performs 180,000 range measurements per second with a precision smaller than a millimetre.
The supremacy of LiDAR is not assured. On June 24, 2017, researchers at the Korea Advanced Institute of Science and Technology reported that, by shining a strong light of the same wavelength as that used by a specific LiDAR unit, they could effectively “erase” detectable objects from the output of the LiDAR’s detector.
Oculii and Arbe Robotics, two start-ups based in Dayton, Ohio (US) and Tel Aviv (Israel) respectively, have been developing high-resolution 4D radars, which provide information on an object’s 3D position as well as its speed (via Doppler-based measurements). Arbe Robotics states that its device has been designed to support Level 4 autonomous driving, featuring obstacle detection accurate at up to 300 meters and a real-time picture sampled up to 50 times per second. Although the company has not provided an official release date or production timeline, it has announced plans to sell its hardware and software as one package to car manufacturers, as well as to auto-part integrators such as Denso and Bosch.
Echodyne, a specialist in radar vision, is currently optimising its new 4D Metamaterial Electronically Scanning Array (MESA). Unlike conventional phased-array devices, which use a grid of antennas and phase-shifters to steer a radar beam in a desired direction, this miniaturised MESA (only slightly larger than an iPhone 6 Plus) uses very small, orientable, electrically controlled panels that can be tilted together to shift the beam in less than a microsecond, without the need for any phase-shifter. It can also determine velocity down to two meters per second while drawing only 35 watts of power. It has been shown to track trucks at ranges greater than three kilometres, to detect and track pedestrians at around 1.4 km, and to detect a DJI Phantom 4 drone flying at distances of up to 750 meters. This metamaterial-based technology is also being implemented in radars by Metawave.
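For context on what MESA dispenses with, the sketch below shows the conventional phased-array steering rule it replaces: in a uniform linear array, each element is driven with a phase offset of 2π·d·sin(θ)/λ relative to its neighbour so that the wavefronts reinforce in the chosen direction. The array geometry and operating frequency are hypothetical, and this is only the textbook relationship, not Echodyne’s or any vendor’s implementation.

# Illustrative per-element phases for steering a uniform linear array.
import math

def steering_phases_deg(n_elements, spacing_m, wavelength_m, steer_deg):
    """Phase (degrees) applied to each element to steer the beam by steer_deg."""
    step = 2 * math.pi * spacing_m * math.sin(math.radians(steer_deg)) / wavelength_m
    return [math.degrees(n * step) % 360 for n in range(n_elements)]

# A 77 GHz radar (wavelength ~3.9 mm) with half-wavelength spacing, steered 20 degrees.
wl = 3.0e8 / 77e9
print(steering_phases_deg(n_elements=8, spacing_m=wl / 2, wavelength_m=wl, steer_deg=20.0))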

Conclusion

Expected to hit the market within the next decade, fully autonomous vehicles are both benefiting from and stimulating the development of emerging sensing and computing technologies. In order to reach Level 5 autonomy in all driving situations and environments, both the sensing problem (detecting moving objects, driving paths, and road signs) and the mapping problem (providing the car with a complete picture of its surroundings, updated at an ultra-high refresh rate) must be solved. Though automotive companies seem to have reached a consensus on the use of well-established sensors such as cameras, radars, ultrasonic detectors, and GPS receivers, the details of their strategies for taking vehicles from ADAS to full autonomy differ. Will the future of automated driving be built on better sensors or better computers? Autonomous driving technology is at a crossroads, as recent developments in artificial intelligence, machine learning, and big data processing have led companies like Intel and Mobileye to play an increasingly important role in the automotive sector.

 
 
 
 
 
