So our strong ability in managing an extensive GPU-centric cloud with very high GPU utilization rates has continuously enhanced our AI infrastructure. As a result, we can help enterprises build and run their models and develop AI native apps at low cost on our cloud. Secondly, the ERNIE family of models has attracted many customers to our cloud. Over the past few months, we have consistently enhanced ERNIE’s performance, receiving positive feedback from customers. We also offer ERNIE models in different sizes to better accommodate customers’ needs regarding cost structures. And thirdly, we were the first company in China to launch a model-as-a-service offering, which is a one-stop shop for LLM and AI native application development. So ERNIE Bot [ph] makes it easy for enterprises to use LLMs. We are also providing toolkits to help enterprises easily trim or fine-tune their models and develop applications on our cloud.
So with the toolkits, customers can purpose-build models cost-effectively by incorporating their proprietary data, and they can directly use the ERNIE API to power their own applications as well. We can also help them support different product features using different models, adopting the MoE approach in app development. As a result, enterprises can focus on identifying customers’ pain points rather than expending their efforts on the underlying programming. So all of these initiatives have helped us establish a first-mover advantage in Gen AI and LLMs. For your last question, as more customers use our MaaS platform to develop AI native applications aimed at attracting users, substantial user and customer insights will be generated and accumulated on our cloud.
So these insights will also allow us to further refine the toolkits. As our tools become increasingly customer friendly and help enterprises effortlessly fine-tune models and create apps, customers will be more inclined to stay with us. Additionally, it is worth noting that at the current stage of employing large language models, it is crucial for customers to create suitable prompts for their chosen models. Since they have to invest considerable effort in building and accumulating their best prompts for using large language models, they will be disinclined to switch to another model, because they would have to reestablish their prompt portfolio. So as a result, with increasing adoption and active usage of our platform, customer satisfaction and switching costs will help us increase customer retention.
Charlene Liu : Thank you.
Operator: Our next question today will come from Miranda Zhuang of Bank of America Securities. Please go ahead.
Miranda Zhuang : Good evening. Thank you for taking my questions. My question is about AI chips: what is the impact on your AI development after the recent further chip restrictions from the U.S.? Is there any update on alternative chips? And given the chip concerns, how is Baidu developing its AI models, products and monetization differently versus overseas peers? What can be achieved and what may become difficult? And what will the company do to keep up with overseas peers in the next few years? Thank you.
Robin Li: In the near term, the impact is minimal for our model development, product inventions or monetization. As I mentioned last quarter, we already have the most powerful foundation model in China, and our AI chip reserve enables us to continue enhancing ERNIE for the next one or two years. And model inference requires less powerful chips; our reserve and the chips available on the market are sufficient for us to power many AI native applications for end users and customers. And in the long run, we may not have access to the most cutting-edge GPUs, but with the most efficient homegrown software stack, net-net, the user experience will not be compromised. There is ample room for innovation in the application layer, the model layer and the framework layer.
Our end-to-end, self-developed four-layer AI architecture, along with our strong R&D team, will support us in using less advanced chips for efficient model training and inference. This provides Baidu with a unique competitive advantage over our domestic peers. And for enterprises and developers, building applications on ERNIE will be the best and most efficient way to embrace AI. Thank you.
Operator: Our next question today will come from Ken Fong of UBS. Please go ahead.
Unidentified Analyst: Good evening, management. This is [indiscernible] on behalf of Kenneth. Thanks for taking my question. In recent days, we have seen numerous developments in text-to-video, or video generation, technology. So how do you envision this technology impacting the broader AI industry’s development in China, and what implications would it hold for ERNIE? Could you elaborate on your strategic road map for ERNIE moving forward? Furthermore, how does ERNIE currently perform in text generation, text-to-image and text-to-video generation tasks? And what improvements do you foresee in these areas? Thank you.
Robin Li: This is Robin. First of all, multimodality, or the integration of multiple modalities such as text, audio and video, is an important direction for future foundation model development. It is a must-have for AGI. And Baidu has already invested in this area, and will continue to do so in the future. Secondly, if we look at the development of foundation models, the market for large language models is huge and still at a very early stage. Even the most powerful language models in the world are still not good enough for a lot of applications. There is plenty of room for innovation. Smaller-sized models, MoE and agents are all evolving very quickly. We strive to make these offerings more accessible to all types of enterprises and solve real-world problems in various scenarios.
And thirdly, in the realm of visual foundation models, notably, a significant application with vast market potential is autonomous driving, in which Baidu is a pioneer and global leader. We have been using diffusion and transformer models to train our video generation models for self-driving purposes. We have also consistently made strides in object classification, detection and segmentation, thereby better understanding the physical world and its rules. This has enabled us to translate images and videos captured on the road into specific tasks, resulting in more intelligent, adaptable and safe autonomous driving technology. Overall, our strategy is to develop the most powerful foundation models to solve real-world problems, and we will continue to invest in this area to ensure our leadership position.
Thank you.
Operator: Thank you. And ladies and gentlemen, at this time, we will conclude the question-and-answer session and conclude Baidu’s fourth quarter and fiscal year 2023 earnings conference call. We do thank you for attending today’s presentation. You may now disconnect your lines.