Self-driving cars still have major perception problems

Yulong Cao is a PhD student in the Department of Computer Science and Engineering at the University of Michigan. This story originally appeared in The Conversation.

Nothing is more important to an autonomous vehicle than sensing what is happening around it. Like human drivers, autonomous vehicles need the ability to make split-second decisions.

Today, most autonomous vehicles rely on multiple sensors to perceive the world. Most systems use a combination of cameras, radar sensors, and LiDAR (light detection and ranging) sensors.

Onboard computers fuse this data to build a comprehensive view of what is happening around the vehicle.

Without this data, self-driving vehicles would have no hope of navigating the world safely.

Cars that use multiple sensing systems work better and more safely – each system can serve as a check on the others – but no system is immune to attack.

Unfortunately, these systems are not foolproof. Camera-based perception systems, for example, can be tricked simply by altering a traffic sign in a way that completely changes its meaning.

Our work in the RobustNet Research Group at the University of Michigan has shown that LiDAR-based perception systems can be fooled, too.

By strategically spoofing the LiDAR sensor's signals, an attacker can fool the vehicle's LiDAR perception system into "seeing" an obstacle that does not exist. If this happens, the vehicle could cause a crash by blocking traffic or braking abruptly.

Spoofing the LiDAR signal

A LiDAR-based perception system has two components: the sensor itself and a machine learning model that processes the sensor's data.

A LiDAR sensor emits light pulses and calculates the distance to objects in its environment by measuring how long it takes each pulse to bounce off an object and return to the sensor. This round-trip duration is also known as the "time of flight."
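As a minimal illustration of the time-of-flight calculation described above (the constant and function names here are purely illustrative, not taken from any particular LiDAR driver):

```python
# Speed of light in meters per second.
SPEED_OF_LIGHT = 299_792_458.0

def distance_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, given the pulse's round-trip time.

    The pulse travels to the object and back, so the one-way distance
    is half of the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after about 200 nanoseconds corresponds to an
# object roughly 30 meters away.
print(distance_from_time_of_flight(200e-9))  # ~29.98 m
```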

A LiDAR unit fires several thousand light pulses every second.

The machine learning model then uses the returned pulses to build a picture of the world around the vehicle, similar to how a bat uses echolocation to work out where obstacles are at night.

The problem is that these pulses can be spoofed. To fool the sensor, an attacker can shine their own light signal at it; that alone is enough to confuse the sensor.

However, it is harder to spoof the LiDAR sensor into "seeing" a vehicle that isn't there. To succeed, the attacker must precisely time the signals fired at the victim LiDAR.

This timing has to be accurate at the nanosecond level, because the signals travel at the speed of light. Small timing errors will stand out when the LiDAR calculates distance from the measured time of flight.
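To get a feel for how tight that timing budget is, here is a quick back-of-the-envelope calculation (purely illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

# A 1-nanosecond error in the spoofed return time shifts the computed
# distance by roughly 15 centimeters (halved because the sensor divides
# the round-trip time by two).
timing_error_s = 1e-9
distance_error_m = SPEED_OF_LIGHT * timing_error_s / 2.0
print(f"{distance_error_m:.3f} m")  # ~0.150 m
```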

Even if an attacker fools the LiDAR sensor, they still have to fool the machine learning model.

Work done at the OpenAI research lab has shown that machine learning models are vulnerable to specially crafted signals or inputs – what are known as adversarial examples.

For example, specially crafted stickers placed on traffic signs can fool camera-based perception.

We found that an attacker can use a similar tactic against LiDAR-based perception.

In this case it isn't a visible sticker, but spoofed signals specially crafted to fool the machine learning model into perceiving obstacles where none exist.

The LiDAR sensor feeds the hacker's fake signals to the machine learning model, which recognizes them as an obstacle.

The adversarial example – the fake object – can be crafted to meet the expectations of the machine learning model. For example, an attacker could create the signal of a truck that is not moving.

Then, to carry out the attack, they could place it at an intersection or on a vehicle driving in front of an autonomous vehicle.
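To make the idea of crafting such an adversarial point cloud more concrete, here is a heavily simplified sketch in Python/PyTorch. The detector, its output, the loss, and the way the injected points are initialized are all assumptions for illustration; this is not the actual attack pipeline used against any production system.

```python
import torch

def craft_spoofed_points(model, scene_points, num_fake_points=60,
                         steps=200, lr=0.01):
    """Gradient-based sketch: optimize a small set of injected LiDAR
    points so that a (hypothetical) differentiable detector reports an
    obstacle in front of the vehicle.

    model        -- assumed detector mapping an (N, 3) point cloud to a
                    scalar confidence that an obstacle is ahead.
    scene_points -- (N, 3) tensor of benign LiDAR returns.
    """
    # Start the fake points in a region a spoofing device could plausibly
    # target, e.g. a small cluster a few meters in front of the victim.
    fake = torch.randn(num_fake_points, 3) * 0.5 + torch.tensor([5.0, 0.0, 0.5])
    fake.requires_grad_(True)
    optimizer = torch.optim.Adam([fake], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        point_cloud = torch.cat([scene_points, fake], dim=0)
        obstacle_confidence = model(point_cloud)
        # Maximize the detector's belief that an obstacle exists.
        loss = -obstacle_confidence
        loss.backward()
        optimizer.step()

    return fake.detach()
```

A real attack would also have to respect what a spoofing device can physically emit – how many points it can inject, and at which angles and ranges – a constraint this sketch ignores.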

Two possible attacks

To demonstrate the spoofing attack, we chose an autonomous driving system used by many automakers: Baidu Apollo.

This product has more than 100 partners and has reached mass-production agreements with many manufacturers, including Volvo and Ford.

Using real-world sensor data collected by the Baidu Apollo team, we demonstrated two different attacks.

In the first, an "emergency brake attack," we showed how an attacker can suddenly halt a moving vehicle by tricking it into thinking an obstacle has appeared in its path.

In the second, a "vehicle freezing attack," we used a spoofed obstacle to fool a vehicle that had stopped at a red light into remaining stopped after the light turned green.

By exploiting these vulnerabilities in autonomous driving perception systems, we hope to sound an alarm for teams building self-driving technologies.
