Is Musk's vision too grand? Experts say Tesla's "Robotaxi dream" requires breakthroughs in AI
Elon Musk plans to feed vast amounts of existing Tesla vehicle footage into Tesla's artificial intelligence system, letting the algorithms learn safe driving through a technique called "imitation learning." However, this imitation-learning approach to building a fully autonomous driving system currently has flaws, and experts say further progress will require breakthroughs in artificial intelligence that may take some time.
Musk is betting that robotaxis will drive Tesla into a new era of profitability, but some media analyses suggest his approach may be flawed.
On Monday, November 4th, The Wall Street Journal reported that Musk's plan for achieving autonomous driving revolves around what he calls "end-to-end artificial intelligence." Musk plans to transmit a large amount of existing Tesla vehicle footage to Tesla's AI system, allowing the algorithm to learn safe driving.
Musk's approach contrasts sharply with that of other companies developing autonomous driving technology. Industry leader Waymo, a subsidiary of Google, also relies heavily on AI, but it breaks the autonomous driving problem into more narrowly defined tasks and uses data from multiple sensors, such as lidar and radar, to give the car a richer view of its environment.
In short, Musk hopes to build an AI system that learns by watching humans drive, which calls for imitation learning; companies like Waymo instead improve their autonomous driving systems by having the AI drive and correcting its mistakes, which relies on reinforcement learning.
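The contrast between the two paradigms can be sketched with a toy single-weight "steering policy." This is purely illustrative and assumes nothing about Tesla's or Waymo's actual systems: the imitation learner is corrected toward a human's recorded action, while the reinforcement learner is nudged by a reward computed from the outcome of its own action.

```python
# Toy contrast: imitation learning (copy human examples) vs.
# reinforcement learning (trial and error scored by a reward).
# All quantities here are made up for illustration.

def imitation_update(w, observation, human_action, lr=0.1):
    """Behavior-cloning step: move the policy toward the human's action."""
    error = human_action - w * observation      # supervised error signal
    return w + lr * error * observation

def reinforcement_update(w, observation, lr=0.1, eps=1e-3):
    """Reward-driven step: the policy acts, is scored, and is nudged
    in whichever direction the reward improved (finite-difference estimate)."""
    def reward(weight):
        action = weight * observation
        return -(0.5 - action) ** 2             # toy reward: closeness to the "safe" action
    grad = (reward(w + eps) - reward(w)) / eps  # crude gradient of reward w.r.t. w
    return w + lr * grad

# Both learners converge toward the same safe behavior, but from
# different signals: a human label vs. an outcome-based reward.
w_imit = 0.0
w_rl = 0.0
for _ in range(200):
    w_imit = imitation_update(w_imit, observation=1.0, human_action=0.5)
    w_rl = reinforcement_update(w_rl, observation=1.0)
```

The practical difference is the data each signal demands: imitation needs logged human actions (which Tesla's fleet supplies in bulk), while reinforcement needs a way to score the AI's own driving and correct it.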
AI experts indicate that Tesla's method of building a fully autonomous driving system based on imitation learning requires breakthroughs in AI, which may take some time.
Flaws in Musk's Method
Musk believes Tesla's advantage lies in its vehicles' built-in cameras, which capture vast amounts of real-world driving footage. This gives the Robotaxi effort extensive real driving video data, including everything collected by the existing Tesla Full Self-Driving (FSD) system.
Training Tesla's AI using this passively recorded data requires a technique known as imitation learning. Computer scientist Timothy B. Lee states that to benefit from this data, Tesla's AI must watch millions of hours of human driving videos and attempt to mimic human actions.
Experts point out that Musk's method has flaws.
First, systems primarily trained through imitation learning may fail when faced with behaviors outside the training data range.
Second, Tesla's excessive focus on the "end-to-end AI" system has led to a complex internal black box, making it difficult to understand why the system behaves in certain ways and hard to find ways to correct those behaviors.
For example, Tesla's current Full Self-Driving system can operate on most city streets and highways but requires close driver supervision, as the system may make sudden and potentially fatal decisions, such as attempting to turn directly into the path of other vehicles, running red lights, or failing to stop for trains in foggy weather.
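The first flaw, failure on inputs outside the training data, is a well-known property of imitation learning sometimes called distribution shift. A toy sketch, with a made-up saturating "safe steering" rule standing in for real driving behavior:

```python
# Distribution-shift sketch: a policy cloned from human data in one
# regime can be confidently wrong outside it. The "safe action" rule
# is invented for illustration.

def safe_action(x):
    return min(x, 1.0)          # humans never steer past full lock

# Training data only covers the mild regime x in [0, 1] ...
train = [(x / 10, safe_action(x / 10)) for x in range(11)]

w = 0.0
for _ in range(1000):
    for x, y in train:
        w -= 0.05 * (w * x - y) * x     # plain behavior cloning

# ... so the cloned policy extrapolates a straight line it never
# saw contradicted, and is badly wrong in the extreme regime.
in_dist_error = abs(w * 0.5 - safe_action(0.5))    # tiny
out_dist_error = abs(w * 3.0 - safe_action(3.0))   # large
```

Nothing in the training signal tells the policy its extrapolation is unsafe, which is why rare situations absent from the fleet's footage remain a structural risk for a purely imitation-trained system.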
Federal automotive safety regulators recently announced that they are investigating the role of Tesla's Full Self-Driving system in fatal accidents. Waymo co-founder Anthony Levandowski stated that Elon Musk's goal of launching a fully autonomous driving system within a year is unrealistic. Creating the type of autonomous driving system Musk envisions may require further breakthroughs in artificial intelligence technology, and it is unclear when those breakthroughs will arrive.