The mainstream approach of the industry is to make extensive use of Machine Learning methods, relying on finding patterns in trained datasets to infer qualitative (statistical) conclusions on new samples.
While Machine Learning is a wonderful tool for high abstraction-level problems, we do not think it should be considered the only or definitive solution for most problems related to Situation Awareness, because of several serious drawbacks.
Besides the problems of overfitting and underfitting, Neural Networks operate in an opaque way. Several studies have revealed that artificial perturbations of natural images, so-called "adversarial images" generated by effective algorithms, can easily make Deep Neural Networks (DNNs) misclassify objects and hamper their functionality.
This black-box nature makes it much harder (yet not impossible) for these systems to meet functional safety requirements such as those in the ISO 26262 standard.
Our unique processing approach is best described using the example of LiDAR data, as follows:
Making an autonomous car nearly 100% safe requires a combination of split-second reactions and complex judgments about the surrounding environment, both of which humans perform instinctively.
An effective strategy might be to investigate what makes humans more successful than the current crop of automated vehicles and then replicate that process.
The human brain is a jumble of different systems and processes, much more complex than any logically laid out computer.
The human brain can be modelled as having two different processes for assessing risk and processing information from the outside world, and these two processes are located in different parts of the brain.
The more primitive of the two risk-assessment processes is the responsibility of the amygdala.
This is the part of the brain that is responsible for the fight or flight reflex. It is intuitive and instinctual. The amygdala does not reason. It simply reacts.
In the neocortex, the human brain takes a more logical and deliberate approach to risk assessment. The neocortex needs as much data as possible before reaching a conclusion.
We call these two different systems fast thinking (effortless, amygdala-type) and slow thinking (requires learning, neocortex-type). They are part of the dual process theory.
The fast, amygdala-like process is called System 1 thinking; the slow, neocortex-like process is called System 2 thinking.
We at DGWorld think that this mainstream process, in which the central AI works directly on raw sensor data, is suboptimal.
The AI has to go through too much useless data. This extra information takes up valuable network bandwidth and costs too much in time and energy. Another popular approach goes about things the opposite way.
In this second approach, the AI receives processed information or “objects” from the sensors instead of raw data points.
The problem with this approach is that the AI does not receive rich enough information, especially for multi-sensor fusion purposes.
Also, because of the limited processing power available, the sensors typically cannot process data as quickly or as accurately as the AI in a central ECU, and there is significant duplication of data processing across the different sensors.
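To make the cost of the raw-data approach concrete, the back-of-the-envelope comparison below contrasts the data rate of a raw LiDAR point cloud with that of an object list. All the sensor and scene figures are illustrative assumptions of our own, not measurements of any particular product:

```python
# Illustrative data-rate comparison: raw point cloud vs. object list.
# Every figure below is an assumed "typical" value, not a vendor spec.

POINTS_PER_SECOND = 1_300_000  # assumed: ~64-beam LiDAR at ~10 Hz
BYTES_PER_RAW_POINT = 16       # assumed: x, y, z, intensity as 4 floats

OBJECTS_PER_FRAME = 50         # assumed: tracked objects in a busy scene
BYTES_PER_OBJECT = 64          # assumed: pose, size, class, confidence
FRAMES_PER_SECOND = 10

raw_rate = POINTS_PER_SECOND * BYTES_PER_RAW_POINT             # bytes/s
object_rate = OBJECTS_PER_FRAME * BYTES_PER_OBJECT * FRAMES_PER_SECOND

print(f"Raw point cloud: {raw_rate / 1e6:.1f} MB/s")    # ~20.8 MB/s
print(f"Object list:     {object_rate / 1e3:.1f} kB/s")  # ~32.0 kB/s
```

The roughly three orders of magnitude between the two rates capture the dilemma: the raw stream is rich but expensive to move and process, while the object list is cheap but information-poor.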
The inefficiencies of both approaches made us think differently.
Applying Dual Process Theory to the Perception Challenges in Autonomous Cars
When applying Dual Process Theory, the car would use System 1 thinking to react to imminent danger and System 2 thinking for other more complex and deliberate tasks.
Right now, most of the attention in autonomous car development is on using System 2 thinking.
AI/Machine Learning is an excellent parallel to the neocortex of the human brain.
But cars also need a parallel to the amygdala, which until now has not existed, or at least not as an explicit design choice.
Our approach allows the autonomous vehicle to use both System 1 and System 2 thinking simultaneously when using LiDAR as a perception sensor.
The LiDAR sensors feed raw data directly to embedded software running in real time on a tiny, low-power chip; there is no need for a GPU or a high-end computer.
This chip becomes the Artificial Amygdala of the AI.
The AI, instead of working on raw data, can work on classified point-cloud data.
As this data is classified at the point level and not at the object level, it becomes a set of enriched raw data: a level of abstraction high enough to be useful, but low enough to reduce time-to-decision, processing power, energy consumption and communication bandwidth.
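As a minimal sketch of what point-level classification could look like as a data structure (the field names and the label set here are our illustrative assumptions, not the actual DG-LIDAR format):

```python
from dataclasses import dataclass
from enum import IntEnum

class PointClass(IntEnum):
    """Illustrative per-point labels; the real label set is an assumption."""
    GROUND = 0
    STATIC_STRUCTURE = 1
    MOVABLE_OBJECT = 2
    UNKNOWN = 3

@dataclass
class ClassifiedPoint:
    """One LiDAR return, enriched with a point-level class label."""
    x: float           # metres, sensor frame
    y: float
    z: float
    intensity: float   # raw reflectance value from the sensor
    label: PointClass  # added by the System 1 classifier

def movable_subset(cloud: list[ClassifiedPoint]) -> list[ClassifiedPoint]:
    """Let System 2 work on only the points that can actually move."""
    return [p for p in cloud if p.label is PointClass.MOVABLE_OBJECT]
```

Note that the enrichment adds only one small field per point, yet it lets downstream consumers discard most of the cloud (ground, static structure) before any heavy processing starts.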
Instead of using raw data from the LiDAR, the AI can use the output from our DG-LIDAR System.
The DG-LIDAR plays the role of the amygdala and classifies data points from the LiDAR for each individual frame.
There is no need to wait for several frames before the system can make a decision.
Because the classification is deterministic (no learning or a priori knowledge is required), a high level of safety and ISO 26262 compliance is much easier to achieve.
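A hedged sketch of what such a deterministic, single-frame pipeline could look like, reusing the ClassifiedPoint types from the sketch above (the ground rule here is a deliberately naive placeholder, not the actual DG-LIDAR logic):

```python
def classify_frame(points) -> list[ClassifiedPoint]:
    """Label every point of ONE frame using fixed, deterministic rules.

    No learned weights and no history from previous frames: the same
    input frame always yields the same output, which simplifies testing
    and safety argumentation. `points` is assumed to be an iterable of
    raw returns with x, y, z and intensity attributes.
    """
    labeled = []
    for p in points:
        if p.z < 0.2:                   # assumed ground threshold in metres
            label = PointClass.GROUND
        else:
            label = PointClass.UNKNOWN  # refined by further fixed rules
        labeled.append(ClassifiedPoint(p.x, p.y, p.z, p.intensity, label))
    return labeled
```

Because the function is pure, a test can assert bit-exact reproducibility on recorded frames, which is exactly the kind of evidence that ISO 26262 verification activities favour.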
The Artificial Amygdala is not meant as a replacement for the System 2 thinking that the AI needs to do. Instead, DG-LIDAR is a parallel system.
It provides fast-acting System 1 thinking for critical situations.
But for System 1 to be useful, it needs to be smart: providing only basic ego-motion and free-space information is not enough.
The Artificial Amygdala should be able to solve some of the key perception challenges of the self-driving car:
(Yes, we can classify points as "movable objects" without requiring learning, a priori information or a reference map.)
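As a hedged illustration of how a rule-based, single-frame test for movable objects could work (this toy heuristic is our own, for explanation only, and not DGWorld's actual classification method): point clusters that rise above the ground plane but stay within the geometric envelope of typical road users can be flagged as movable candidates.

```python
def is_movable_candidate(z_above_ground: float,
                         cluster_height: float,
                         cluster_width: float) -> bool:
    """Toy single-frame rule for 'movable object' candidates.

    Assumption: road users (pedestrians, cars, trucks) occupy a bounded
    height/width envelope, while the road surface and buildings do not.
    Purely geometric: no training data, no a priori map, no history.
    """
    above_ground = z_above_ground > 0.2              # not the road surface
    plausible_height = 0.5 <= cluster_height <= 4.5  # person .. truck
    plausible_width = cluster_width <= 15.0          # excludes walls/buildings
    return above_ground and plausible_height and plausible_width
```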
Meeting High Public Expectations for Safety and Consistency
Before the public accepts autonomous cars, the error rate will need to be near zero. However, System 2 thinking will not get the industry to the point of public acceptance on its own.
A deterministic, low-power, fast and smart System 1 working together with an AI-based System 2 is safer than System 2 thinking alone.
Just as human drivers need their amygdalae to keep them safe, autonomous cars need System 1 thinking in the form of an Artificial Amygdala to keep them out of accidents. A dual process approach like the DG-LIDAR technology can not only make autonomous cars safer than human drivers; it can make autonomous cars really smart.
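Architecturally, the two paths can run side by side, with the fast path allowed to pre-empt the slow one. A minimal sketch of such an arbiter follows; the interfaces and the stop reaction are illustrative assumptions, not a description of the DG-LIDAR implementation:

```python
from dataclasses import dataclass

@dataclass
class Command:
    throttle: float  # 0..1
    brake: float     # 0..1

def arbitrate(system1_hazard: bool, system2_plan: Command) -> Command:
    """Let the fast path (System 1) override the deliberate plan (System 2).

    System 1 answers one cheap, deterministic question per frame:
    'is there an imminent hazard in the vehicle's path?' If so, it
    pre-empts whatever System 2 planned; otherwise System 2 drives.
    """
    if system1_hazard:
        return Command(throttle=0.0, brake=1.0)  # reflexive emergency stop
    return system2_plan
```

Because the override rule is tiny and deterministic, it can be verified exhaustively and argued about independently of the neural network it supervises.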
While pinpointing the position of a smart machine within its surroundings seems like a solved issue, the reality is that it is still a challenge when the objective is a mass-produced, affordable and robust solution.
However, GNSS receivers provide an absolute position on Earth, which is very useful for high-level mapping and other user-interface purposes.
Interestingly, there is significant innovation in this field, including technologies that have the potential to provide much more precise location data, such as PPP-IAR (Precise Point Positioning with Integer Ambiguity Resolution).
In this field, too, there are significant improvements in precision at lower cost as new use cases emerge, but these cannot change the technology's fundamental working principles.
They provide robust zero-velocity information ("I am sure that I am not moving") that IMUs are unable to provide in a reliable or cost-effective way.
For best results, these different sensors and methods can be combined to obtain a more robust localization. They are often integrated with a previously created map, which in turn creates the new problem of keeping that map up to date.
While we believe these approaches are useful and should be used together, DGWorld's Sensor Fusion delivers an uncorrelated localization output (relative ego-motion if there is no reference map, absolute localization if there is one) thanks to our 3D SLAM (Simultaneous Localization and Mapping) algorithm, which relies only on perception data.
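Because the SLAM-based estimate relies only on perception data, it is statistically independent of the GNSS/IMU/odometry chain, so the two can be cross-checked: a large disagreement indicates a fault in one of the sources. A minimal sketch of such a plausibility check (the threshold and interfaces are our illustrative assumptions, not DGWorld's implementation):

```python
import math

def positions_agree(gnss_xy: tuple[float, float],
                    slam_xy: tuple[float, float],
                    threshold_m: float = 0.5) -> bool:
    """Cross-check two independently derived position estimates.

    An error in the GNSS/IMU chain (multipath, drift, wheel slip) is
    unlikely to appear in the perception-only SLAM estimate at the same
    time. Disagreement beyond the assumed threshold should trigger a
    degraded-mode response instead of silent trust in either source.
    """
    dx = gnss_xy[0] - slam_xy[0]
    dy = gnss_xy[1] - slam_xy[1]
    return math.hypot(dx, dy) <= threshold_m
```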