Mobileye Global Inc. (NASDAQ:MBLY) Q1 2024 Earnings Call Transcript


So it’s very volatile in a quarter like this, when we deliver only 3.5 million chips. So we do expect an increase in the EyeQ ASP in Q2 and Q3, about $0.40 or $0.50. On the total year, yes, we expect EyeQ ASP to go down as it did in 2023, by approximately $0.50 to $0.75 year-on-year, continuing the normalization of the mix as compared to the very rich level that we had in 2021 and 2022. So this had a modest impact in 2023, and we expect a similar impact in 2024 for the full year.

Ananda Baruah: Very helpful. Thank you. Appreciate that.

Operator: Our next question comes from Adam Jonas with Morgan Stanley. Please proceed with your question.

Adam Jonas: Well, first, I just want to share my thoughts with the Mobileye team and the community and people of Israel during the ongoing situation in and [indiscernible]. Seven months ago, you posted on LinkedIn that Tesla’s decision to adopt an end-to-end generative AI approach to full self-driving through neural networks was “neither necessary, nor sufficient” for full self-driving programs. Do you still feel the same way today, Amnon?

Amnon Shashua: Yes, indeed. Now, in my prepared remarks I mentioned that on the EyeQ6 we’re going to have end-to-end both perception and actuation, and that does not contradict the point that we made. For Tesla, end-to-end is the sole technology. Our end-to-end is just one engine on top of multiple engines, in order to create a decomposable system that is explainable, that is modifiable, whose behavior you can explain to regulatory bodies, and whose driving experience you can customize for OEMs. And if you look at some of our competitors, like Waymo, they have the same view: there is a very, very strong reliance on neural networks, on data-driven networks, on language models, but at the end of the day it needs to be a system that is designed to be explainable and modifiable.

So we’re not against end-to-end. We’re against end-to-end being the sole engine for the system. Back at CES a few months ago, in January, I presented Mobileye’s end-to-end perception engine, what I call “multi to the power of five,” and how to build an end-to-end perception engine. This is running on the EyeQ6, and we also have another engine which includes actuation. So that goes from videos to actuation end-to-end, but it’s a component, a subsystem, of a more complex system.

Adam Jonas: Thanks, Amnon, for clarifying. And just as a follow-up, I know you’ve said that some of your design wins for SuperVision include internal combustion architectures, and some people on this call might be a little skeptical as to whether your OEM customers would have software-defined internal combustion vehicles. So I guess my question is, to the extent it’s theoretically and practically possible, when would you expect, based on your visibility today, to see a SuperVision fitment in a production internal combustion architecture vehicle?

Amnon Shashua: So of the Volkswagen Group wins with 17 models, nine of them are combustion engine models. So about 50% of the models are going to be combustion engine, and it doesn’t have to be a software-defined vehicle. It’s a system, just like ADAS is a system; it’s fully encapsulated in our ECU. So it doesn’t have to be a software-defined vehicle, and all the over-the-air updates are done through our ECU, so everything is self-contained.

Moran Shemesh: If I may follow up on this: I think that a few years ago, some OEMs said that their future plans in terms of software-defined vehicle architectures would be based on EVs, under the assumption that EVs would become the leading powertrain for their cars toward the back half of the decade. What has changed for some OEMs in the last year is that those plans are today maybe a little bit more moderate in terms of the EV percentage versus combustion engine or hybrid, but this still means that they are aligning their architectures to the powertrains in a more balanced way, as opposed to going all-in on EVs for future technologies.

Dan Galves: Thank you. Thank you, Adam. We can take one more question, Maria.

Operator: Okay. Our last question comes from Chris McNally with Evercore ISI. Please proceed with your question.

Chris McNally: Thanks so much, team. Last but hopefully not least. Maybe we could dive into some of the SuperVision detail on the potential wins for the second half. Would love to know, if we look at the wins by type of RFP, is it sort of the old model-by-model RFP approach, where we’ve seen the legacy OEMs kind of bid this out in the past? Or has maybe DXP, or sort of the wider Audi push, led to a broader fleet deployment for the potential RFPs, i.e., could we have hundreds of thousands of vehicles per OEM in the 2027-plus time frame?

Amnon Shashua: Yes. So normally, what we do is look at the plans OEMs have for launching specific vehicle models, but it’s more a platform question as opposed to specific vehicle models. Normally, a platform will include a few vehicle models that will be launched according to their plans. So we’re not going one by one in a rigorous process with each OEM. It’s a bundle of cars and car models, and the volumes can vary according to [indiscernible], of course. But when we have a deal, it can include multiple car models, as we had with Volkswagen Group, where with one announcement we covered 17 car models across multiple brands and all geographies, and so on and so forth.

Chris McNally: Really appreciate that. And maybe just a follow-up, if we could follow on to Adam’s question, sticking to this topic of, at least for now, supervised eyes-on performance, with the autonomous evolution to the side. In the past, I think Mobileye has discussed something like hoping for 10 times better miles per disengagement from SuperVision when compared to something like Full Self-Driving. I think a lot of those comments were pre-Version 12. Any thoughts on how you think SuperVision, again as a supervised eyes-on system, stacks up on the competitive statistics today?

Amnon Shashua: So, the current generation with EyeQ5 is improving all the time; we have an over-the-air update every two months or so. We are close to achieving 100 hours of mean time between interventions on highways, less so in urban, but there it’s more than one or two hours of mean time between interventions. On the EyeQ6 system, as I mentioned in my prepared remarks, just for the camera subsystem, it’s about 1,000 hours of mean time between interventions on highways. Now, I don’t know what the mean time between interventions is on Tesla’s Version 12. I don’t know if anyone has measured that, but these are the kinds of things that we measure as KPIs on how we progress.

Operator: There are no further questions at this time. I would now like to turn the floor back over to Dan Galves for closing comments.

Dan Galves: Thanks, everyone, for your time, and we will talk to you next quarter. And thanks to the Mobileye team for the session. Thank you.

Operator: This concludes today’s teleconference. You may disconnect your lines at this time. Thank you for your participation.

