Elon Musk: Yeah, exactly. Manufacturing exception. So I do think it’s quite a powerful sustainable advantage, because there’s simply no place to go to order the machines that make our next-gen car; those machines don’t exist.
Dan Levy: Great. Thank you. As a follow-up, your release does not mention Dojo. So if you could just provide us an update on where Dojo stands and at what point you expect Dojo to be a resource in improving FSD or do you think that you now have sufficient supply of Nvidia GPUs needed for the training of the system?
Elon Musk: I mean, the AI hardware question, that is a deep one. So we’re obviously hedging our bets here with significant orders of Nvidia GPUs. Although GPU is really the wrong word; it can’t produce graphics, so it’s not a graphics processing unit. It needs a new name, neural net processing unit or something like that. GPU is a funny word, kind of vestigial. A lot of our progress in self-driving is training-limited. Something important about training is that it’s much like a human: the more effort you put into training, the less effort you need at inference. Just like a person, the more you train in a subject, the classic 10,000 hours, the less mental effort it takes to do that thing.
If you remember when you first started to drive, how much of your mental capacity it took to drive: you had to be completely focused on driving. Then after you’ve been driving for many years, it only takes a little bit of your mind to drive, and you can think about other things and still drive safely. So the more training you do, the more efficient the inference. So we do need a lot of training, and we’re pursuing the dual path of Nvidia and Dojo. But I would think of Dojo as a long shot. It’s a long shot worth taking because the payoff is potentially very high, but it’s not a high-probability outcome; it’s not a sure thing at all. It’s a high-risk, high-payoff program. Dojo is working, it is doing training jobs, and we are scaling it up, and we have plans for Dojo 1.5, Dojo 2, Dojo 3, and so on.
So I think it’s got potential, but I can’t emphasize enough: high risk, high payoff. It still makes sense to pursue even at a low probability of success. I’m laboring the subject, but it’s a very interesting program with the potential for something special. There’s also our inference hardware in the car. We’re now on what’s called Hardware 4, but it’s actually Version 2 of the Tesla-designed AI inference chip. The terminology is a bit confusing: we’re about to complete the design of Hardware 5, which is actually Version 3 of the Tesla-designed chip, because Hardware 1 was Mobileye, Hardware 2 was Nvidia, and Hardware 3 was the first Tesla-designed chip. And we’re making gigantic improvements from Hardware 3 to Hardware 4 to Hardware 5.
I mean, there’s a potentially interesting play where, in the future, when cars are not in use, the in-car computer can do generalized AI tasks, can run a sort of GPT-4 or GPT-3 or something like that. If you’ve got tens of millions of vehicles out there, even in a robotaxi scenario where they’re in heavy use, maybe they’re used 50 out of 168 hours, that still leaves well over 100 hours of compute time available per week. It’s possible, with the right architectural decisions, that Tesla may in the future have more compute than everyone else combined.
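As a rough sketch of the idle-fleet arithmetic described above: only the 50-of-168-hour utilization figure comes from the remarks; the fleet size and per-vehicle inference throughput below are assumed placeholders for illustration.

```python
# Rough sketch of the idle fleet-compute arithmetic described above.
# Only the 50-of-168-hour utilization figure comes from the remarks;
# the fleet size and per-vehicle throughput are assumed placeholders.

HOURS_PER_WEEK = 168
DRIVING_HOURS_PER_WEEK = 50            # heavy robotaxi use, per the remarks
IDLE_HOURS_PER_WEEK = HOURS_PER_WEEK - DRIVING_HOURS_PER_WEEK   # 118 hours

FLEET_SIZE = 30_000_000                # "tens of millions of vehicles" (assumed value)
TOPS_PER_VEHICLE = 100                 # assumed inference throughput per car, in TOPS

fleet_idle_vehicle_hours = FLEET_SIZE * IDLE_HOURS_PER_WEEK
fleet_idle_tops_hours = fleet_idle_vehicle_hours * TOPS_PER_VEHICLE

print(f"Idle hours per vehicle per week: {IDLE_HOURS_PER_WEEK}")
print(f"Fleet-wide idle vehicle-hours per week: {fleet_idle_vehicle_hours:,}")
print(f"Fleet-wide idle TOPS-hours per week: {fleet_idle_tops_hours:,}")
```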
Martin Viecha: Thank you. The next question comes from Colin Langan from Wells Fargo.
Colin Langan: Great. Thanks for taking my questions. As we’re thinking about going into 2024, the press release talks about hitting 36,000 or slightly above in Q4, and the comments in the release talk about approaching the natural limits. It sounds like you’re continuing to try to whittle that away, but that sort of implies there’s not much left. In addition, you have the hourly wage increase, which I guess will add to that into next year. And I thought you said the raw material cost benefit is sort of almost played out. So is there an opportunity to continue to go below the 36,000, or should we be modeling that it kind of stays at this level into ’24?
Vaibhav Taneja: We are definitely aware of the cost increases which are coming through because of the wage increases. But like I said, we keep looking at other cost opportunities and try to figure out where else we can cut down. So there is definitely more opportunity to bring down costs further. I won’t specifically guide to a number we will try to get to, but there’s definitely more opportunity there.
Andrew Baglino: Yeah. We’re still chasing lots of cost opportunities on the design side for 2024, north of eight figures just in my organization, and Lars has got a bunch. And then from a commodities perspective, it’s such a long cycle time through the whole material supply chain that even with what we’ve already seen to this point…
Vaibhav Taneja: There’s more to come.
Andrew Baglino: There’s more to come on commodities reductions.
Lars Moravy: And there’s still some tailwind left on the commodities.
Andrew Baglino: That’s what I mean.
Lars Moravy: Aluminum and steel.
Andrew Baglino: Yeah, and battery material.
Elon Musk: It boggles my mind to think that if we make a 1% improvement in costs, that’s $1 billion. So it’s like, on average, if we reduce the cost by one penny, that’s $1 billion.
Andrew Baglino: What?
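As a back-of-the-envelope sketch of the arithmetic in the remark above: the $1 billion and 1% figures come from the call, and the implied annual cost base is simply derived from them; nothing else is assumed.

```python
# Back-of-the-envelope check on "a 1% improvement in costs is $1 billion".
# The implied annual cost base is $1B / 0.01 = $100B; nothing else is assumed.

savings_from_one_percent = 1_000_000_000      # $1 billion, per the remark
implied_annual_cost_base = savings_from_one_percent / 0.01

print(f"Implied annual cost base: ${implied_annual_cost_base:,.0f}")  # ~$100 billion
```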