Tesla using AI and ML

A Machine Learning Case Study on Tesla
Tesla is now a big name in the electric automobile industry, and it is very likely to remain a major topic of discussion for years to come. It is widely known for its advanced, futuristic cars. The company states that its cars run on their own AI hardware, and Tesla is also using AI to develop self-driving capability.



At the current rate of technological progress, cars are not yet fully autonomous and still need human intervention to some extent. Tesla is working extensively on the decision-making algorithms that would let its cars become fully autonomous, including an open partnership with NVIDIA on unsupervised ML for this development.




This step by Tesla would be a game-changer in both automobiles and machine learning for several reasons. The cars feed data directly into Tesla's cloud storage, which helps avoid data leakage. Each car sends the driver's seating position, local traffic conditions, and other valuable information to the cloud so the system can precisely predict the car's next move. The car is equipped with various internal and external sensors that capture this data for processing.
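To make the data flow concrete, here is a minimal sketch of what bundling a car's sensor state into a cloud-upload payload might look like. All field names and units are hypothetical; Tesla's actual telemetry schema is not public.

```python
import json

def build_telemetry_snapshot(seat_position, traffic_density, sensor_readings):
    """Bundle a car's sensor state into a JSON payload for cloud upload.

    Every field name here is a placeholder, not Tesla's real schema.
    """
    payload = {
        "driver_seat_position": seat_position,     # e.g. seat rail offset in cm
        "local_traffic_density": traffic_density,  # e.g. vehicles per km
        "sensors": sensor_readings,                # raw internal/external readings
    }
    return json.dumps(payload)

snapshot = build_telemetry_snapshot(
    seat_position=12.5,
    traffic_density=34,
    sensor_readings={"front_radar_m": 41.2, "cabin_temp_c": 21.0},
)
```

On the server side, payloads like this would be aggregated across the fleet before feeding prediction models.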
Autonomy Algorithms

Develop the core algorithms that drive the car by creating a high-fidelity representation of the world and planning trajectories in that space. In order to train the neural networks to predict such representations, algorithmically create accurate and large-scale ground truth data by combining information from the car's sensors across space and time. Use state-of-the-art techniques to build a robust planning and decision-making system that operates in complicated real-world situations under uncertainty. Evaluate your algorithms at the scale of the entire Tesla fleet.
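The "planning trajectories in that space" step can be illustrated with a toy sketch: generate candidate trajectories and pick the one that minimizes a cost function. The cost terms below (lateral deviation, forward progress) are invented for illustration; the real planner operates on a far richer world representation.

```python
def plan_trajectory(candidates, cost_fn):
    """Pick the candidate trajectory with the lowest total cost.

    Each trajectory is a list of (x, y) waypoints; cost_fn scores one
    trajectory. Both are stand-ins for much richer representations.
    """
    return min(candidates, key=cost_fn)

def comfort_and_progress_cost(trajectory):
    # Hypothetical cost: penalize lateral deviation, reward forward progress.
    lateral = sum(abs(y) for _, y in trajectory)
    progress = trajectory[-1][0] - trajectory[0][0]
    return lateral - progress

straight = [(0, 0), (5, 0), (10, 0)]
swerve = [(0, 0), (5, 2), (10, 0)]
best = plan_trajectory([straight, swerve], comfort_and_progress_cost)
```

Under this toy cost, the straight trajectory wins because it makes the same forward progress with no lateral deviation.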


Evaluation Infrastructure

Build open- and closed-loop, hardware-in-the-loop evaluation tools and infrastructure at scale, to accelerate the pace of innovation, track performance improvements and prevent regressions. Leverage anonymized characteristic clips from our fleet and integrate them into large suites of test cases. Write code simulating our real-world environment, producing highly realistic graphics and other sensor data that feed our Autopilot software for live debugging or automated testing.
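A core job of such evaluation infrastructure is catching regressions: replaying a suite of recorded clips through a new software build and comparing scores against the previous build. The harness below is a minimal sketch of that idea; the clip format, scoring function, and names are all assumptions, not Tesla's actual tooling.

```python
def run_regression_suite(clips, autopilot_fn, baselines):
    """Replay recorded clips through a driving stack and flag regressions.

    autopilot_fn maps a clip to an error score (lower is better);
    baselines holds the score each clip achieved on the previous build.
    """
    regressions = []
    for clip_id, clip in clips.items():
        score = autopilot_fn(clip)
        if score > baselines.get(clip_id, float("inf")):
            regressions.append((clip_id, baselines[clip_id], score))
    return regressions

clips = {"cut_in_01": [1, 2, 3], "merge_07": [4, 5]}
baselines = {"cut_in_01": 0.2, "merge_07": 0.5}
# A stand-in "stack" whose error happens to grow with clip length:
regressions = run_regression_suite(clips, lambda clip: 0.1 * len(clip), baselines)
```

Here the new build scores worse than baseline on `cut_in_01`, so that clip is flagged for live debugging.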



Neural Networks

Apply cutting-edge research to train deep neural networks on problems ranging from perception to control. Our per-camera networks analyze raw images to perform semantic segmentation, object detection and monocular depth estimation. Our birds-eye-view networks take video from all cameras to output the road layout, static infrastructure and 3D objects directly in the top-down view. Our networks learn from the most complicated and diverse scenarios in the world, iteratively sourced from our fleet of millions of vehicles in real time. A full build of Autopilot neural networks involves 48 networks that take 70,000 GPU hours to train. Together, they output 1,000 distinct tensors (predictions) at each timestep.
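The idea of many networks jointly emitting named prediction tensors per timestep can be sketched in a few lines. Each "network" below is just a function returning a dict of named outputs; the real stack's 48 networks and ~1,000 tensors are only mirrored in spirit.

```python
def collect_predictions(networks, frame):
    """Run every network head on a frame and gather its named outputs.

    Mimics, at toy scale, how multiple networks together emit many
    distinct prediction tensors at each timestep.
    """
    outputs = {}
    for net in networks:
        outputs.update(net(frame))
    return outputs

def segmentation_head(frame):
    # Hypothetical per-camera head: one class label per pixel.
    return {"semantic_mask": [[0] * len(row) for row in frame]}

def detection_head(frame):
    # Hypothetical head emitting boxes and a monocular depth map.
    return {"boxes": [], "depth_map": [[1.0]]}

preds = collect_predictions([segmentation_head, detection_head],
                            frame=[[0, 0], [0, 0]])
```

Counting the keys of `preds` is the toy analogue of counting the distinct output tensors of a full Autopilot build.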


Using Billions of Miles to Train Neural Networks


Training data is one of the fundamental factors that determine how well deep neural networks perform. (The other two are the network architecture and optimization algorithm.) As a general principle, more training data leads to better performance. This is why I believe Tesla, not Waymo, has the most promising autonomous vehicles program in the world.


With a fleet of approximately 500,000 vehicles on the road equipped with what Tesla claims is full self-driving hardware, Tesla’s fleet is driving about as many miles each day – around 15 million – as Waymo’s fleet has driven in its entire existence. 15 million miles a day extrapolates to 5.4 billion miles a year, or 200x more than Waymo’s expected total a year from now. Tesla’s fleet is also growing by approximately 5,000 cars per week.
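The extrapolation above is simple arithmetic, reproduced here as a sanity check (15 million miles/day works out to about 5.475 billion miles/year, which the text rounds to 5.4 billion):

```python
# Back-of-the-envelope check of the fleet-mileage claim.
miles_per_day = 15_000_000
miles_per_year = miles_per_day * 365       # 5,475,000,000 (~5.4 billion)

# The text says this is 200x Waymo's expected annual total,
# which implies a Waymo figure of roughly 27 million miles.
implied_waymo_annual = miles_per_year / 200
```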

There are three key areas where data makes a difference:

  • Computer vision
  • Prediction
  • Path planning/driving policy

Computer vision

One important computer vision task is object detection. Some objects, such as horses, appear on the road only rarely. Whenever a Tesla encounters what the neural network thinks might be a horse (or perhaps just an unrecognized object obstructing a patch of road), the cameras take a snapshot, which is uploaded later over Wi-Fi. It helps to have vehicles driving billions of miles per year because you can source many examples of rare objects. It stands to reason that, over time, Teslas will become better at recognizing rare objects than Waymo vehicles.
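One plausible way to "trigger snapshots at the right time" is to fire whenever the detector is unsure what it is looking at. The confidence-threshold rule below is a hypothetical stand-in; Tesla's actual trigger logic is not public.

```python
def should_snapshot(detections, confidence_floor=0.5):
    """Decide whether a camera frame is worth uploading for labelling.

    Fires when any detection falls below the confidence floor, i.e.
    the network is not sure what the object is. Purely illustrative.
    """
    return any(conf < confidence_floor for _, conf in detections)

frame_a = [("car", 0.97), ("pedestrian", 0.91)]            # all confident
frame_b = [("car", 0.96), ("unknown_object", 0.31)]        # maybe a horse?
```

Only frames like `frame_b`, containing a low-confidence detection, would be queued for Wi-Fi upload and human labelling.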

For common objects, the bottleneck for Waymo and Tesla is most likely paying people to manually label the images. It’s easy to capture more images than you can pay people to label. But for rare objects, the bottleneck for Waymo is likely collecting images in the first place, whereas for Tesla the bottlenecks are likely just labelling and developing the software to trigger snapshots at the right time. 



Path planning/driving policy

Path planning and driving policy refer to the actions that a car takes: staying centred in its lane at the speed limit, changing lanes, passing a slow car, making a left turn on a green light, nudging around a parked car, stopping for a jaywalker, and so on. It seems fiendishly difficult to specify a set of rules that encompass every action a car might ever need to take under any circumstance. One way around this fiendish difficulty is to get a neural network to copy what humans do. This is known as imitation learning (also sometimes called apprenticeship learning, or learning from demonstration).

The training process is similar to how a neural network learns to predict the behaviour of other road users by drawing correlations between past and future. In imitation learning, a neural network learns to predict what a human driver would do by drawing correlations between what it sees (via the computer vision neural networks) and the actions taken by human drivers.
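At its core, this is supervised learning over (observation, human action) pairs. Real imitation learning fits a neural network over continuous perception outputs; the frequency-table policy below keeps the idea visible in a few lines over discrete toy states.

```python
from collections import Counter, defaultdict

def fit_imitation_policy(demonstrations):
    """Learn a lookup policy from (observation, human_action) pairs.

    For each observation, predict the action humans took most often —
    a toy stand-in for training a network on driving demonstrations.
    """
    counts = defaultdict(Counter)
    for observation, action in demonstrations:
        counts[observation][action] += 1
    return {obs: acts.most_common(1)[0][0] for obs, acts in counts.items()}

demos = [
    ("green_light", "go"), ("green_light", "go"), ("green_light", "wait"),
    ("jaywalker_ahead", "stop"), ("jaywalker_ahead", "stop"),
]
policy = fit_imitation_policy(demos)
```

Given a new observation it has seen before, the policy simply imitates the majority human behaviour.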

Still frame from Tesla’s autonomous driving demo. Courtesy of Tesla.

Imitation learning recently met with arguably its greatest success yet: AlphaStar. DeepMind used examples from a database of millions of human-played games of StarCraft to train a neural network to play like a human. The network learned the correlations between the game state and human players’ actions, and thereby learned to predict what a human would do when presented with a game state. Using only this training, AlphaStar reached a level of ability that DeepMind estimates would put it roughly in the middle of StarCraft’s competitive rankings. (Afterward, AlphaStar was augmented using reinforcement learning, which is what allowed it to ascend to pro-level ability. A similar augmentation may or may not be possible with self-driving cars – that’s another topic.)

Tesla is applying imitation learning to driving tasks, such as how to handle the steep curves of a highway cloverleaf, or how to make a left turn at an intersection. It sounds like Tesla plans to extend imitation learning to more tasks over time, like how and when to change lanes on the highway.


