No lidar, no high-definition maps, and no hand-coded rules. It’s a solution Tesla CEO Elon Musk would approve of. A small startup based in Cambridge, U.K., taught a car to drive itself in a simulated environment before unleashing it on the streets using only cameras and an ordinary satellite navigation system.
Taking an end-to-end machine learning approach to building a self-driving car, Wayve.ai is perfecting a system that uses imitation and reinforcement learning, coupled with cameras and sensors, rather than building a new platform from scratch. The system follows a route entered into the navigation system.
According to the company, this model-based deep reinforcement learning system lets the car learn to drive like a human in new environments, drawing on data it has been given or on past experience.
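The learning loop described above can be sketched in miniature. This is an illustrative sketch only, assuming a reward shaped by how far the car travels before a safety driver intervenes; Wayve has not published this exact code, and the policy, simulator, and update step here are hypothetical stand-ins.

```python
import random

def run_episode(policy, max_steps=1000, intervention_prob=0.01):
    """Drive until the safety driver intervenes; reward = distance covered.

    The random draw stands in for a real takeover event; a real system
    would run the policy against camera input and detect the intervention.
    """
    distance = 0.0
    for _ in range(max_steps):
        distance += 1.0  # one simulated metre per control step
        if random.random() < intervention_prob:  # stand-in for a takeover
            break
    return distance

def train(policy, episodes=10):
    """Each intervention ends an episode; the policy would then be
    updated to push the next intervention further away."""
    rewards = []
    for _ in range(episodes):
        reward = run_episode(policy)
        # policy.update(reward)  # real system: gradient step on the reward
        rewards.append(reward)
    return rewards

rewards = train(policy=None)
print(len(rewards))  # one reward per episode, so prints 10
```

The key design point is that the reward signal comes from the safety driver, not from hand-coded rules or labeled maps, which is what lets the same loop run in any new area.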
In the current automotive landscape, most autonomous driving tech is built on platforms that make the car function according to the rules it carries. Wayve’s system, by contrast, creates the rules itself and evolves based on safety driver interventions rather than relying on ready-made data.
It is this differentiation that makes Wayve’s self-driving platform scalable to new areas. Today, most autonomous vehicles use high-definition maps, lidar, and other sensors to build a 3D model of the world in which they can identify objects and signs and work out where they are and where to go. Wayve’s tech can cut costs significantly by forgoing expensive lidar and data-intensive HD maps. Its algorithm means one can take a Wayve vehicle, put it in any new area, and it will leverage its past experience to adapt quickly to new environments, including rain and snow.
A common thread running through all the autonomous systems being built today is their vulnerability to cyber attacks. Those risks became acutely apparent after one of the most high-profile connected-car hacks to date. In July 2015, a pair of researchers – now working at GM’s Cruise Automation unit – took control of a Jeep Cherokee remotely. In 2016, in a similar incident, the same hackers returned, only this time they used a laptop plugged directly into the vehicle’s CAN bus to control the vehicle.
As traditional autonomous tech continues to be developed in various iterations, this vulnerability poses a major safety threat.
Eyeing high scale and high speed, automakers now prefer off-the-shelf code to save time and effort. This complex code is causing vehicle software to bloat, increasing the attack surface available to hackers.
According to the book Code Complete: A Practical Handbook of Software Construction, industry experience suggests between one and 25 defects per 1,000 lines of code. In addition, Fabbre estimates that over five years, roughly 0.05 vulnerabilities are discovered per 1,000 lines of code. For a vehicle containing 50 million lines of code, that works out to 2,500 vulnerabilities discovered on the platform.
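The arithmetic behind that 2,500 figure is a simple rate-times-size calculation, shown here as a back-of-the-envelope check (the rates themselves are the cited estimates, not measured values):

```python
# Back-of-the-envelope check of the figures quoted above.
lines_of_code = 50_000_000        # lines of code in a modern vehicle
vulns_per_kloc = 0.05             # vulnerabilities per 1,000 lines (cited estimate)

expected_vulns = lines_of_code / 1_000 * vulns_per_kloc
print(expected_vulns)  # 2500.0

# The same scaling applied to the 1-25 defects-per-KLOC range:
low_defects = lines_of_code / 1_000 * 1    # 50,000 defects
high_defects = lines_of_code / 1_000 * 25  # 1,250,000 defects
```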
The threat is rooted in an underlying challenge: numerous functions in today’s cars rely heavily on software, pulling in additional, often unnecessary, lines of code. Cutting out the excess and keeping the software as lean as possible would therefore be a viable solution for the automotive industry.
In today’s production practice, engineers often leverage software libraries whose functions can be pulled in with a click. Development time is slashed, but they inadvertently end up with code that is surplus to requirements.