AI Engine for autonomous driving

While Machine Learning is a wonderful tool for high abstraction-level problems, we do not think it should be considered the only or definitive solution for most problems related to Situation Awareness, because of several serious drawbacks:

  • We don’t really know why Neural Networks make the choices they do.
  • This black-box nature makes it much harder (yet not impossible) for these systems to meet functional safety requirements like those in the ISO 26262 standard.

  • Machine Learning is far from being infallible:

    Besides the problems of overfitting and under-fitting, Neural Networks operate in an opaque way. Several studies have revealed that artificial perturbations of natural images can easily make DNNs (Deep Neural Networks) misclassify objects, and that effective algorithms exist to generate such altered samples, called “adversarial images” (see the sketch after this list).

  • The Machine Learning process has to go through too much useless data: even if the inference phase is much lighter in terms of processing requirements than the training phase, it still requires the full raw data from the sensors as input. This huge amount of information takes up valuable network bandwidth and processing power, and/or costs too much in terms of time and energy consumption.
  • The Machine Learning approach relies heavily on the invisible labour of humans, who must tediously label the training data for it to work. These people often work in isolating or harsh conditions. As sensors evolve, the data labelling needs to be updated, so this titanic effort is also an endless one.
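
To make the adversarial-image point concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the best-known algorithms for generating such samples. It assumes a PyTorch image classifier; the `model`, `image` and `label` arguments and the `epsilon` value are placeholders, not part of any production system.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method: nudge every pixel by +/- epsilon
    in the direction that increases the classification loss.
    The change is imperceptible to humans, yet often enough to
    flip the network's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # model returns logits
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep valid pixel range
```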

The mainstream approach of the industry is to make extensive use of Machine Learning methods, relying on patterns found in training datasets to infer qualitative (statistical) conclusions about new samples.

Our unique processing approach is best described using the example of LiDAR data, as follows:

Making an autonomous car close to 100% safe requires a combination of split-second reactions and the kind of complex judgments about the surrounding environment that humans make instinctively.

An effective strategy might be to investigate what makes humans more successful than the current crop of automated vehicles and then replicate that process.

The human brain is a jumble of different systems and processes, much more complex than any logically laid out computer.

The human brain can be modelled as having two different processes for assessing risk and processing information from the outside world, and these two processes are located in different parts of the brain.

The more primitive of the two risk assessment processes is the responsibility of the amygdala.

This is the part of the brain that is responsible for the fight or flight reflex. It is intuitive and instinctual. The amygdala does not reason. It simply reacts.

In the neocortex, the human brain takes a more logical and deliberate approach to risk assessment. The neocortex needs as much data as possible before concluding.

We call these two different systems fast thinking (effortless, amygdala-type) and slow thinking (requires learning, neocortex-type). Together they form the basis of dual process theory.

The fast, amygdala-like thinking is called System 1; the slow, neocortex-like thinking is called System 2.

Just like humans need their senses to perceive the environment, autonomous cars use sensors such as LiDAR to gather information about the car’s surroundings. The question is what autonomous cars do with the data gathered from the sensors. The main trend is for the sensors to send the raw data to the brain (AI/Machine Learning) of the car. The AI then translates the different data points into objects to drive behaviour.

We at DGWorld think that this process is suboptimal.

The AI has to go through too much useless data. This extra information takes up valuable network bandwidth and costs too much in terms of time and energy consumption. The scale of the problem is easy to underestimate, as the rough numbers below show.
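
To give an idea of that scale, here is a back-of-the-envelope calculation. The figures assume a typical 64-beam LiDAR and are illustrative only, not the specification of any particular sensor:

```python
# Illustrative, order-of-magnitude figures for a 64-beam LiDAR.
points_per_second = 1_300_000   # ~1.3 million returns per second
bytes_per_point = 16            # x, y, z, intensity as 4-byte floats

raw_rate = points_per_second * bytes_per_point
print(f"Raw stream: {raw_rate / 1e6:.1f} MB/s")          # ~20.8 MB/s
print(f"Over 10 h:  {raw_rate * 36_000 / 1e9:.0f} GB")   # ~749 GB
```

And this is a single sensor; a full sensor suite multiplies these numbers, which is why pushing everything to a central AI is so costly.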

Another popular approach goes about things the opposite way: the AI receives processed information or “objects” from the sensors instead of raw data points.

The problem with this second approach is that the AI does not get rich enough information, especially for multi-sensor fusion purposes.

Also, because of the limited processing power available, the sensors typically cannot process data as fast or as accurately as the AI located in a central ECU, and with this approach there is significant duplication of data processing across the different sensors.

The inefficiencies of both approaches made us think differently.

Applying Dual Process Theory to the Perception challenges in Autonomous Cars

When applying Dual Process Theory, the car would use System 1 thinking to react to imminent danger and System 2 thinking for other more complex and deliberate tasks.

Right now, most of the attention in autonomous car development goes to System 2 thinking.

The AI/Machine Learning is an excellent parallel to the neocortex of the human brain.

But cars also need a parallel to the amygdala, which until now hasn’t existed, or at least not as an explicit design choice.

Our approach allows the Autonomous vehicle to use both System 1 and System 2 thinking simultaneously when using LiDAR as a perception sensor.

The LiDAR sensors feed raw data directly to embedded software running in real time on a tiny, low-power chip; no GPU or high-end computer is needed.

This chip becomes the Artificial Amygdala of the AI.

The AI, instead of working on raw data, can work on classified point-cloud data.

As this data is classified at the point level and not at the object level, it becomes a set of enriched raw data: a level of abstraction high enough to be useful, but low enough to reduce time-to-decision, processing power, energy consumption and communication bandwidth.
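
As an illustration of what such enriched raw data could look like, here is a hypothetical per-point record. The field names, and all class labels other than “moving” and “moveable”, are our own assumptions, not the actual DG-LIDAR output format:

```python
from dataclasses import dataclass
from enum import Enum

class PointClass(Enum):
    # "Moving" and "moveable" are the categories named in the text;
    # the other labels are our own illustrative assumptions.
    GROUND = 0
    STATIC = 1
    MOVING = 2
    MOVEABLE = 3

@dataclass
class EnrichedPoint:
    """One LiDAR return carrying a per-point class label: still
    low-level geometry, but pre-classified, so the downstream AI
    can skip a full segmentation pass."""
    x: float
    y: float
    z: float
    intensity: float
    label: PointClass
```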

Instead of using raw data from the LiDAR, the AI can use the output from our DG-LIDAR System.

The DG-LIDAR plays the role of the Amygdala and classifies data points from the LiDAR for each individual frame.

There is no need to wait for several frames before the system can make a decision.

Because the classification is deterministic (no learning or a priori knowledge is required), a high level of safety and ISO 26262 compliance is much easier to achieve.

The Artificial Amygdala is not meant as a replacement for the System 2 thinking that AI needs to do. Instead, DG-LiDAR is a parallel system.

It provides fast-acting System 1 thinking for critical situations.
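
A minimal sketch of how the two systems could coexist in software follows. Everything here is illustrative: the 2 m safety radius, the frame format (x, y, z, label tuples) and the `execute` and `planner.plan` hooks are assumptions, not DG-LIDAR interfaces.

```python
import math

SAFETY_RADIUS_M = 2.0  # illustrative reflex threshold, not a real spec

def system1_reflex(frame):
    """Deterministic System 1: a fixed rule applied to every frame.
    No model, no training, no history; if any point labelled
    'moving' is inside the safety radius, brake immediately."""
    for x, y, z, label in frame:
        if label == "moving" and math.hypot(x, y) < SAFETY_RADIUS_M:
            return "EMERGENCY_BRAKE"
    return None  # nothing imminent; System 2 stays in charge

def drive_loop(frames, planner, execute):
    for frame in frames:                  # one classified frame at a time
        reflex = system1_reflex(frame)    # fast path, runs every frame
        if reflex:
            execute(reflex)               # System 1 overrides
        else:
            execute(planner.plan(frame))  # slow, deliberate System 2 path
```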

But for System 1 to be useful, it needs to be smart: providing only basic ego-motion and free-space information is not enough.

For the Artificial Amygdala to be useful, it should be able to solve some of the key perception challenges of the Self-driving car:

  • Ego-Motion: understanding frame by frame how the vehicle is moving (if there is no reference map), or localizing the vehicle if there is a reference map.
  • 3D Mapping: creating a moving 3D map around the vehicle, allowing a virtual sensor frame to be built from the integration of hundreds of actual sensor frames. These two features together are commonly called SLAM, for Simultaneous Localization and Mapping.
  • Classification: each point from the LiDAR is classified, in real time, into one of several categories, including moving and moveable objects.

    (Yes, we can classify points as “Moveable objects” without requiring learning, a priori information or a reference map.)

  • Object detection: for the moving and moveable categories, the points are clustered and tracked over time, as the sketch below illustrates.
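
As an illustration of that clustering step, here is a naive Euclidean clustering sketch for the moving/moveable points of one frame. A production system would use spatial indexing and run on the embedded chip; this O(N²) flood fill only shows the idea, and the 0.5 m merge radius is an assumption:

```python
import numpy as np

def euclidean_clusters(points, radius=0.5):
    """Group nearby points into object candidates.
    points: (N, 3) array of moving/moveable points from one frame.
    Returns a list of clusters, each a list of point indices."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        frontier = [unvisited.pop()]      # seed a new cluster
        cluster = list(frontier)
        while frontier:                   # flood fill outward from the seed
            i = frontier.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[j] - points[i]) < radius]
            for j in near:
                unvisited.discard(j)
            frontier.extend(near)
            cluster.extend(near)
        clusters.append(cluster)
    return clusters
```

Tracking over time can then be as simple as matching cluster centroids between consecutive frames by nearest-neighbour association, which already yields a velocity estimate per object.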

Meeting High Public Expectations for Safety and Consistency

Before the public accepts autonomous cars, the error rate will need to be near zero. However, System 2 thinking will not get the industry to the point of public acceptance on its own.

A deterministic, low-power, fast and smart System 1 working together with an AI-based System 2 is safer than System 2 thinking alone.

Just like human drivers need their amygdalae to keep them safe, autonomous cars need System 1 thinking in the form of an Artificial Amygdala to keep them from getting into accidents. A dual process approach like the DG-LIDAR technology can not only make autonomous cars safer than human drivers; it can make autonomous cars really smart.

DGWorld 3D SLAM

While pinpointing the position of a Smart Machine and its surroundings may seem like a solved issue, the reality is that it remains a challenge when the objective is a mass-produced, affordable and robust solution.