Edge AI in Autonomous Vehicles

Edge AI means deploying AI applications in devices throughout the physical world. It’s called “edge AI” because the AI computation is done near the user at the edge of the network, close to where the data is located, rather than centrally in a cloud computing facility or private data center. Here we discuss what edge AI is and how it is used in autonomous vehicles.

Example of Edge AI – Autonomous Vehicles

Self-driving cars (autonomous vehicles): Self-driving cars need to process data quickly. They must handle tasks such as identifying traffic signs, pedestrians and other vehicles on the road, on the vehicle’s own hardware and in real time. This helps ensure the best conditions for the safety of passengers and other road users.

An Overview of Self-Driving Technology
Levels of autonomy

When talking about self-driving cars, most technical experts will refer to levels of autonomy. The level of autonomy of a self-driving car refers to how much of the driving is done by a computer versus a human. The higher the level, the more of the driving is done by the computer.

Level 0 (No Automation): All functionality and systems of the car are controlled by humans.

Level 1 (Driver Assistance): Minor things like cruise control, automatic braking, or detecting something in the blind spot may be controlled by the computer, one at a time.

Level 2 (Partial Automation): The computer can perform at least two simultaneous automated functions, such as acceleration and steering. A human is still required for safe operations and emergency procedures.

Level 3 (Conditional Automation): The computer can control all critical operations of the car simultaneously, including accelerating, steering, stopping, navigation and parking, under most conditions. A human driver is still expected to be present in case the system alerts them to take over in an emergency.

Level 4 (High Automation): The car is fully autonomous, without any need for a human driver, in some driving scenarios. For example, the car can fully drive itself when it’s sunny or cloudy, but not when it’s snowing and the lanes are covered.

Level 5 (Full Automation): The car is completely capable of self-driving in every situation.

Most of the self-driving systems we hear about in the news today, such as Tesla’s, are at Level 2: they can drive fairly well on their own, but a human driver is still required to ensure the safe operation of the vehicle. (Waymo’s vehicles operate at Level 4, but only within limited areas.)

Stages in Self-Driving

The self-driving cars of today use a combination of various cutting-edge hardware and software technologies to perform their driving. A typical self-driving system will go through 3 stages to perform its driving. Those 3 stages are sensing, understanding, and control.
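To make the pipeline concrete, here is a minimal Python sketch of that three-stage loop. The class and method names (sensors.read, perception.interpret, planner.next_command and so on) are hypothetical placeholders used for illustration, not part of any real self-driving stack.

```python
# A minimal, hypothetical sketch of the sense -> understand -> control loop.
from dataclasses import dataclass

@dataclass
class Command:
    """Illustrative shape of a control command sent to the actuators."""
    steering: float   # radians, positive = left
    throttle: float   # 0.0 to 1.0
    brake: float      # 0.0 to 1.0

def drive_loop(sensors, perception, planner, actuators, destination):
    """Repeat the three stages until the destination is reached."""
    while not planner.reached(destination):
        raw_data = sensors.read()                                   # Stage 1: sensing
        world_model = perception.interpret(raw_data)                # Stage 2: understanding
        command = planner.next_command(world_model, destination)    # Stage 3: control
        actuators.apply(command)
```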

Stage 1: Sensing

When we humans are driving, we use our eyes to see what’s around us. A self-driving car also needs eyes to see. The eyes of the self-driving car are its various sensors. Most self-driving cars are using one or some combination of 3 different sensors: cameras, radar, and LiDAR.

Cameras

Cameras are the most similar to our own eyesight. They capture continuous pictures, i.e. video, through their lenses just like we do. And just like our own eyesight, it helps a lot with driving if a car’s cameras can capture high-quality video (high resolution and high FPS).

Self-driving cars have cameras placed on every side: front, back, left and right, so they can see everything around them, a full 360 degrees. Sometimes a mix of different camera types is used: some wide-angle to give a wider field of view, and some narrow but high-resolution to see further.

The advantage of using cameras is that they’re the most natural visual representation of the world. A car sees exactly what a human driver would see, and more, since its internal computer can look through all of the cameras at once. Cameras are also very inexpensive. The downside is that the data a camera captures, images and video, doesn’t give much sense of how far other objects are from the car or how fast they’re moving. Cameras are also difficult to use at night, since they simply can’t see as much.
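As a rough illustration of the sensing side, here is a minimal sketch of polling several cameras with OpenCV. The device indices, resolution and frame rate below are assumptions for illustration; a production vehicle uses dedicated automotive camera interfaces rather than cv2.VideoCapture.

```python
# A minimal sketch of polling several cameras with OpenCV (cv2).
# The device indices (0-3) and the requested resolution/FPS are illustrative assumptions.
import cv2

camera_ids = [0, 1, 2, 3]  # e.g. front, back, left, right
captures = [cv2.VideoCapture(i) for i in camera_ids]

for cap in captures:
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)   # request high resolution
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
    cap.set(cv2.CAP_PROP_FPS, 30)             # request a high frame rate

def read_all_frames():
    """Grab one frame from every camera; returns None for cameras that failed."""
    frames = []
    for cap in captures:
        ok, frame = cap.read()
        frames.append(frame if ok else None)
    return frames
```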

Radar

Radar has been traditionally used to detect moving objects like aircraft and weather formations. It works by transmitting radio waves in bursts or pulses. Once those waves hit an object, they bounce right back to the sensor, giving data on the speed and location of the object. In self-driving cars, radar is used to detect the speed and distance of various objects around the car. It’s a perfect complement to the cameras, which can see what the objects are but not precisely where (how far away) they are. And just like the cameras, the radar will be used in 360 degrees around the car.

Radar also supplements cameras in low-light conditions such as night-time driving. Since radar is beaming out a signal, it doesn’t matter whether it’s 3 AM or noon; the signals travel and bounce back in exactly the same way. This is in contrast to cameras, which don’t work as well at night because of the lighting.

The drawback of radar is that the technology is currently limited in its accuracy. Current radar sensors offer a very limited resolution. So radar does give us an idea of the distance, location, and speed of other objects, but that idea is somewhat blurry — not as accurate as we’d like it to be.
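The two quantities radar provides, range and radial speed, follow from simple formulas, as the small worked example below shows. The round-trip time, carrier frequency and Doppler shift used are made-up illustrative numbers, not measurements from any specific sensor.

```python
# Range comes from the round-trip time of a pulse; radial speed comes from the Doppler shift.
C = 299_792_458.0  # speed of light in m/s

def radar_range(round_trip_time_s: float) -> float:
    """Distance to the target: the pulse travels out and back, hence the /2."""
    return C * round_trip_time_s / 2

def radial_speed(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Relative speed toward the sensor, from the Doppler relation f_d = 2*v*f_c/c."""
    return doppler_shift_hz * C / (2 * carrier_hz)

print(radar_range(1e-6))            # ~150 m away (1 microsecond round trip)
print(radial_speed(5_000, 77e9))    # ~9.7 m/s closing speed (77 GHz carrier, 5 kHz shift)
```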

LiDAR

LiDAR stands for Light Detection and Ranging. It works by sending out beams of light and then calculating how long it takes for the light to hit an object and reflect back to the LiDAR scanner. The distance to the object can then be calculated using the speed of light — these are known as Time of Flight measurements.

LiDAR sensors are typically placed on the top of the car, firing thousands of light beams per second. Based on the collected data, a 3D representation called a point cloud can be created to represent the environment around the car.
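As a rough sketch of how a point cloud is built, the snippet below converts one sweep’s time-of-flight returns and firing angles into 3D points. The (time, azimuth, elevation) tuple format is an assumption for illustration; real sensors emit vendor-specific packets.

```python
# A minimal sketch of turning LiDAR time-of-flight returns into 3D points.
import math

C = 299_792_458.0  # speed of light in m/s

def to_point(time_of_flight_s, azimuth_rad, elevation_rad):
    """Convert one beam's round-trip time and firing angles into an (x, y, z) point."""
    r = C * time_of_flight_s / 2          # range: light travels out and back
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# Building the point cloud is just applying this to every return in a sweep.
point_cloud = [to_point(t, az, el) for (t, az, el) in [
    (2.0e-7, 0.0, 0.0),          # ~30 m straight ahead
    (6.7e-7, math.pi / 2, 0.1),  # ~100 m to the left, slightly above the sensor
]]
```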

The big advantage of LiDAR sensors is their accuracy. A good LiDAR sensor can pick out details just a few centimeters across on objects that are 100 meters away. For example, Waymo’s LiDAR system is said to be able to detect which direction a person is walking, based on the accurate 3D point cloud that comes from LiDAR. The downside of LiDAR is the cost, which is currently far higher, sometimes 10 times higher, than that of cameras and radar.

Stage 2: Understanding

The understanding stage of a self-driving system is the brain — it’s where most of the major processing takes place. In the understanding stage, the goal is to take all of the information that came from the sensors and interpret it. That interpretation is aimed at gathering useful information that can help to safely control the car. The information can be things like:

What are all the objects around me, where are they, and how are they moving? So our system might detect things like people, cars, and animals.

Where am I? The system would determine where all the lanes are and whether the car is properly within the correct lane, as well as where the car is relative to the other cars on the road (too close, blind spots, etc.).

As of 2019, this information is acquired mostly through AI, and more specifically through Deep Learning for Computer Vision. Large neural networks are trained for tasks like Image Classification, Object Detection, Scene Segmentation, and Driving Lane Detection. The networks are then optimised so that the car’s computing unit can handle the real-time speeds required for self-driving.
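As one illustration of this kind of perception task, the sketch below runs a pretrained object detector from torchvision on a single camera frame. The article does not name a specific framework or model; Faster R-CNN here is just a common off-the-shelf choice, and the code assumes a recent torchvision release.

```python
# A minimal sketch of running object detection on one camera frame.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Faster R-CNN trained on COCO (people, cars, traffic lights, ...).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image_path: str, score_threshold: float = 0.5):
    """Return (label_id, score, box) tuples for confident detections in one frame."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
    return [
        (int(label), float(score), box.tolist())
        for box, label, score in zip(output["boxes"], output["labels"], output["scores"])
        if score >= score_threshold
    ]
```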

Keep in mind that a self-driving car may have data coming in from multiple different sources: cameras, radar, and LiDAR. So all of those regular Computer Vision tasks can be applied to each kind of sensor data, thereby gathering information about the environment around the car in a very comprehensive manner. This also creates a kind of redundancy — if one system fails, the other system still has a chance to make a detection.

The great thing about using Deep Learning for a lot of these tasks is the fact that the networks are trainable. The more data we give them, the better they get. Companies are leveraging this quite heavily — self-driving cars are being put on the roads with human drivers in them, where they can constantly collect new training data to improve themselves.

Computer Vision is really the meat of the self-driving car system. An ideal system will be able to accurately detect and quantify every single aspect of the car’s surrounding environment — moving objects, stationary objects, road signs, street lights — absolutely everything. All of that information is then used to decide how exactly the car should move next.

Stage 3: Control

Once the Computer Vision system has processed the data from the sensors, the self-driving car has all the information it needs to drive. The role of the control stage is to figure out how best to navigate the car based on the information extracted during the understanding stage.

The technical term for describing how a self-driving car navigates the road is path planning. The goal of path planning is to use the information captured by the Computer Vision system to safely direct the car to its destination while avoiding obstacles and following the rules of the road.

The car will have knowledge of its target destination based on the GPS — the data from the GPS contains the information for the long-range path. To move towards its target destination, the self-driving system will first “plan the path”, i.e. calculate the optimal (read: shortest-time) path to its target. That means deciding which roads to take and how fast to drive.
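As a toy illustration of this planning step, the sketch below picks the shortest-time route through a small, made-up road graph using Dijkstra’s algorithm; real planners work over far richer map data.

```python
# Shortest-time route through a road graph with Dijkstra's algorithm.
# The graph, travel times and place names are invented for illustration.
import heapq

def plan_path(graph, start, destination):
    """graph: {node: [(neighbor, travel_time_s), ...]}. Returns (total_time, route)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        time_so_far, node, route = heapq.heappop(queue)
        if node == destination:
            return time_so_far, route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, travel_time in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (time_so_far + travel_time, neighbor, route + [neighbor]))
    return float("inf"), []

roads = {
    "home":    [("main_st", 120), ("back_rd", 300)],
    "main_st": [("office", 180)],
    "back_rd": [("office", 60)],
}
print(plan_path(roads, "home", "office"))  # (300.0, ['home', 'main_st', 'office'])
```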

Once the optimal path has been determined, the next step for the system is to determine the best possible “next move”. This next move is always based on following the optimal path to the target destination. The next move could be to accelerate, brake, switch lanes, or any other regular driving move.

At the same time, any move it makes must follow the rules of the road and maintain the safety of the car’s passengers. If the Computer Vision has detected a red light up ahead, then the car should slow down or stop (depending on how far away it is).

All of these controls are sent directly to the car’s mechanical controls. If the car needs to switch lanes, a command to turn the wheel by a very specific amount is sent to the appropriate part of the car. If the car needs to brake, a command is sent to press the brakes with precisely the amount of pressure needed, slowing down enough to follow the optimal path while maintaining safety and obeying the rules of the road. This process of sensing, understanding and controlling is repeated, as often and as precisely as possible, until the car reaches its destination.
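As a toy example of how an understanding result becomes a control command, the sketch below computes a brake value from the distance to a detected red light and the current speed. The gains and comfort limits are invented for illustration; real controllers are far more sophisticated.

```python
# Turn (distance to a red light, current speed) into a brake command in [0, 1].
def brake_command(distance_to_light_m: float, speed_mps: float,
                  comfortable_decel_mps2: float = 3.0) -> float:
    """Return brake pressure in [0, 1]; 0 means no braking is needed yet."""
    if distance_to_light_m <= 0 or speed_mps <= 0:
        return 1.0 if speed_mps > 0 else 0.0
    # Deceleration needed to stop exactly at the light, from v^2 = 2 * a * d.
    required_decel = speed_mps ** 2 / (2 * distance_to_light_m)
    # Scale against what the car can do comfortably and clamp to the pedal range.
    return min(1.0, required_decel / comfortable_decel_mps2)

print(brake_command(distance_to_light_m=50.0, speed_mps=15.0))  # ~0.75
```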

Purpose of ML in Edge AI

Adding Machine Learning (ML) to Edge AI software can enhance IoT data analytics and decision making. The combination of machine learning and AI technology can filter the noise collected by IoT devices and store only the important data for analysis.
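As a simplified illustration of that idea, the sketch below keeps only readings that deviate noticeably from a running average and drops the rest as noise. A real deployment would use a learned model rather than this hand-written threshold.

```python
# Keep only "interesting" readings; treat values close to the running average as noise.
# The threshold and smoothing factor are arbitrary illustrative choices.
def filter_readings(readings, threshold=2.0):
    kept, running_mean = [], None
    for value in readings:
        if running_mean is None or abs(value - running_mean) > threshold:
            kept.append(value)  # unusual reading -> worth storing for analysis
        running_mean = value if running_mean is None else 0.9 * running_mean + 0.1 * value
    return kept

print(filter_readings([20.1, 20.2, 20.0, 27.5, 20.1]))  # keeps the first reading and 27.5
```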

Working of Edge AI Technology

For machines to see, perform object detection, drive cars, understand speech, speak, walk or otherwise emulate human skills, they need to functionally replicate human intelligence. AI employs a structure called a deep neural network (DNN) to replicate human cognition. These DNNs are trained to answer specific types of questions by being shown many examples of that type of question along with correct answers.

This training process, known as “deep learning,” often runs in a data center or the cloud due to the vast amount of data required to train an accurate model, and the need for data scientists to collaborate on configuring the model. After training, the model graduates to become an “inference engine” that can answer real-world questions.
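A minimal sketch of that train-then-deploy flow, assuming PyTorch and ONNX as the tooling (the article does not name any): a small network is trained on stand-in data and then exported so an inference engine on the edge device can run it. Shapes and hyperparameters are placeholders.

```python
# Train a tiny network, then export it for an inference runtime on the edge.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Deep learning" phase: show the network many (example, correct answer) pairs.
for _ in range(100):
    examples = torch.randn(16, 32)         # stand-in for real sensor features
    answers = torch.randint(0, 3, (16,))   # stand-in for the correct labels
    loss = loss_fn(model(examples), answers)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Graduation to an "inference engine": freeze the model and export it for the edge device.
model.eval()
torch.onnx.export(model, torch.randn(1, 32), "edge_model.onnx")
```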

In edge AI deployments, the inference engine runs on some kind of computer or device in far-flung locations such as factories, hospitals, cars, satellites and homes. When the AI stumbles on a problem, the troublesome data are commonly uploaded to the cloud for further training of the original AI model, which at some point replaces the inference engine at the edge. This feedback loop plays a significant role in boosting model performance; once edge AI models are deployed, they only get smarter and smarter.
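A minimal sketch of that feedback loop: the edge device runs inference locally and uploads only the inputs the model is unsure about, so the cloud can use them for retraining. The run_inference and upload callables, and the confidence cutoff, are hypothetical stand-ins for whatever runtime and transport a deployment actually uses.

```python
# Edge-side feedback loop: answer locally, send "troublesome" inputs back for retraining.
CONFIDENCE_THRESHOLD = 0.6  # arbitrary illustrative cutoff

def handle_sample(sample, run_inference, upload):
    label, confidence = run_inference(sample)
    if confidence < CONFIDENCE_THRESHOLD:
        upload(sample, label, confidence)  # low-confidence data goes to the cloud
    return label
```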

Sources:

https://www.taiwannews.com.tw/en/news/4596955 

https://blogs.nvidia.com/blog/2022/02/17/what-is-edge-ai/