Cheetah Mobile Inc. (NYSE:CMCM) Q3 2024 Earnings Call Transcript December 17, 2024
Operator: Good day and welcome to the Cheetah Mobile Third Quarter 2024 Earnings Conference Call. All participants will be in listen-only mode. [Operator Instructions] After today’s presentation, there will be an opportunity to ask questions. [Operator Instructions] Please note today’s event is being recorded. I would now like to turn the conference over to Helen Jing Zhu, IR for Cheetah Mobile. Please go ahead.
Helen Jing Zhu: Thank you, operator. Welcome to Cheetah Mobile’s Third Quarter 2024 Earnings Conference Call. With us today are our company’s Chairman and CEO, Mr. Fu Sheng; and our Director and CFO, Mr. Thomas Ren. Following management’s prepared remarks, we will conduct a Q&A session. Please note that the CEO script will be presented by an AI agent. Before we begin, I refer you to the Safe Harbor statement in our earnings release, which also applies to our earnings conference call today, as we will make forward-looking statements. I will now turn the call over to our CEO, Mr. Fu Sheng. Please go ahead, Fu Sheng.
Sheng Fu: Hello, everyone. Thank you for joining us today. Cheetah Mobile once again achieved accelerated revenue growth in Q3, driven by our service robotics and Internet businesses. This consistent growth results from our strategies to expand the use cases of our wheeled service robots and to expand into overseas markets, as well as from the resilience of our legacy Internet business. Industry demand for service robots continues to rise, especially in overseas markets and in restaurants, hotels, factories and offices. Business owners are increasingly using robots to support their staff and improve efficiency. In the past few weeks, I visited many customers and partners in Europe and Southeast Asia. In fact, I am still in Europe today, meeting with our local partners to further strengthen our presence. Building a strong local distribution network is very important for our global expansion, as it will set us apart from our peers.
That’s why I have spent significant effort on this initiative. During my conversations with local partners, I learned that our robots are helping them solve labor shortages. Some customers in Europe shared that using Cheetah’s robots has reduced employee absences and turnover. Meanwhile, some Japanese customers told us our robots are much more reliable than other players’ offerings and that they are switching to our products. In September, we launched a new robot for factory and fulfillment center use. This robot can autonomously move low-payload items from transit warehouses to assembly lines. We are currently optimizing the product to better meet the needs of customers in overseas markets. This highlights the importance of receiving feedback and input from local partners.
We believe this product will become an important part of our service robotics business in 2025. While the robotics industry is still in its early stages, it will become a trillion-dollar market. Robots are becoming essential helpers for humans, and this will happen in the B-end markets first. LLMs will speed up this growth by enabling robots to do more tasks and making them easier to deploy than ever before. Through our conversations with investors, we’ve noticed a lot of interest in how LLMs are making our robots smarter and driving steady revenue growth. Today, I will share what we’ve achieved so far with LLMs in our products and what’s coming next. First, we are using LLMs to improve the way our service robots interact through voice, building on our strong far-field voice recognition.
Our robots already hear users well. Now, with LLMs, they understand users’ questions more clearly and respond better, making the overall experience much smoother. For instance, in restaurants, our robots don’t just deliver food during busy hours. They can also help attract customers, boosting the return on investment for restaurant owners by taking on more roles. And because LLMs break down language barriers, we are expanding these voice-enabled robots to international markets. We are also working on [indiscernible], a system that lets customers set up tasks for robots using voice prompts. For example, you can tell a robot to check each table at 2 PM to see if anyone wants to order more food before the kitchen closes at 2:30 PM. The robot will go to each table, skip the ones without customers and even allow people to place orders if the ordering system is late. Without LLMs, this kind of functionality would be nearly impossible or would require writing a lot of complicated code.
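To make the idea concrete, below is a minimal sketch of how a spoken instruction like the one above could be turned into a structured, schedulable task by asking a large model to return JSON. It is illustrative only, not Cheetah’s actual agent system; the field names, prompt and mocked model reply are all hypothetical.

```python
# Minimal sketch (not Cheetah's actual system): turning a spoken instruction
# into a structured, schedulable robot task by asking a large model for JSON.
# All names here are hypothetical; the model call itself is mocked for illustration.
import json
from dataclasses import dataclass


@dataclass
class ScheduledTask:
    start_time: str   # when the robot should begin, e.g. "14:00"
    deadline: str     # when the task must finish, e.g. "14:30"
    action: str       # what to do at each stop
    skip_rule: str    # which stops to skip


PROMPT_TEMPLATE = (
    "Convert the user's instruction into JSON with keys "
    "start_time, deadline, action, skip_rule.\n"
    "Instruction: {instruction}"
)


def build_prompt(instruction: str) -> str:
    """Build the prompt that would be sent to a large model."""
    return PROMPT_TEMPLATE.format(instruction=instruction)


def parse_task(llm_json: str) -> ScheduledTask:
    """Parse the model's JSON reply into a task the robot scheduler can run."""
    data = json.loads(llm_json)
    return ScheduledTask(**data)


if __name__ == "__main__":
    instruction = ("Check each table at 2 PM to see if anyone wants to order "
                   "more food before the kitchen closes at 2:30 PM.")
    print(build_prompt(instruction))

    # A mocked model reply, standing in for a real large model call:
    mock_reply = json.dumps({
        "start_time": "14:00",
        "deadline": "14:30",
        "action": "ask_for_additional_orders",
        "skip_rule": "skip unoccupied tables",
    })
    print(parse_task(mock_reply))
```

The point of this pattern is that the natural-language instruction, rather than hand-written code, defines the task; the surrounding code only validates and schedules what the model returns.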
Second, we are using more multimodal models to improve our robots’ indoor autonomous driving. One area we are working on is enabling our robots to map out a large factory as they move and look around. Once the map is ready, the R&D team can mark key locations, and the robots can then navigate the factory on their own to deliver goods. If factory owners want to change these key locations, they can easily update the map whenever needed. Based on our initial testing, thanks to LLMs, the time it takes to set up our robots can be dramatically reduced from about two days to just two hours. We are already using vision-based autonomous driving technology in some cases and plan to expand it further.
For instance, our robots use vision-based autonomous driving technology to avoid people and obstacles and to understand their surroundings. Over time, we aim to achieve an end-to-end navigation system. This will allow our robots to handle more complex environments entirely on their own. Third, we are adding robotic arms to our robots to help them do specific jobs. Some of these arms can press buttons, which is useful for delivering things between floors, particularly in overseas markets where business owners are reluctant to adjust their elevator access control systems due to security concerns. Others can pick up and sort items, which is great for use in factories. These arms are powered by on-device multimodal models, making it easier to automate routine tasks.
When it comes to LLMs, we use advanced models through API calls to support some of the features we have discussed. At the same time, we are also developing our own models. In November, we launched an 8×7 billion-parameter mixture-of-experts model covering many languages, including Chinese, English, Korean and Japanese. We have made it open source and use it to power our robots, especially the agent OS features. Additionally, we have trained smaller on-device models to support indoor autonomous driving and to control robotic arms. Turning to LLM-based applications, we recently introduced AirDS, an AI-based data service platform that assists enterprises with their data and with building prompts for their LLM-based applications. AirDS was built on top of our insights from developing LLMs and LLM-based apps.
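As a rough illustration of the “advanced models through API calls” pattern mentioned above, the sketch below sends a guest’s question to a hosted chat model and returns the reply. Cheetah does not say which provider or model it uses; this example assumes the OpenAI Python SDK and a placeholder model name purely for illustration.

```python
# Illustrative only: the call pattern for using a hosted large model via an API.
# The provider, model name and system prompt below are placeholders, not
# anything Cheetah has disclosed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_guest_question(question: str) -> str:
    """Send a guest's spoken question to a hosted model and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a service robot in a restaurant. Answer briefly."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_guest_question("What time does the kitchen close?"))
```

In principle, a self-hosted model that exposes a compatible endpoint, such as an open-source mixture-of-experts model, could be swapped in by changing only the base URL and model name.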
So far, we have received positive customer feedback on our LLM-based applications, and we will continue to enrich our portfolio. Our goal is to offer relatively standardized SaaS products, allowing businesses to use LLMs to gain efficiency. Before handing the call over to Thomas for the financial highlights, I want to stress this: Cheetah Mobile is in a good position to tap into the growing market for service robots and LLM-based apps. We bring years of experience from the PC and mobile eras, as well as in expanding into international markets, and we have strong LLM expertise. We’ve already made solid progress in growing our revenue and cutting losses. This is just the beginning of Cheetah Mobile’s turnaround.
Thomas Ren: Thank you, Fu Sheng. Hello, everyone on the call. Please note that unless stated otherwise, all money amounts are in RMB terms. In Q3 2024, our financial results demonstrated solid execution on two strategic objectives: number one, accelerating revenue growth driven by both the sales of robots and the legacy Internet business; and number two, enhancing our operational efficiency to reduce our operating losses sequentially. In the third quarter of 2024, our total revenues increased 16.6% year-over-year, marking the second consecutive quarter of accelerating revenue growth, compared to 11.6% in Q1 and 12.3% in Q2. Our wheel-based service robots continue to be a key driver of growth. Additionally, our legacy Internet business remained resilient, achieving solid revenue growth and margin expansion.
On profitability, we made further progress. Non-GAAP gross profit rose 14% year-over-year and 7% quarter-over-quarter to RMB131 million, with non-GAAP gross margin expanding to 68% in the third quarter from 65% in Q2 and 63% in Q1. Non-GAAP operating loss was RMB61 million in the quarter, reduced from RMB63 million in Q2 and RMB66 million in Q1. We continue to focus on managing costs and expenses. Notably, we consolidated the teams of Cheetah and Beijing OrionStar, streamlining staff and services with overlapping functions. For example, in Q3, we reduced bandwidth and cloud costs, professional and legal service fees and certain labor costs related to G&A and operations. We are also investing decisively in AI, using AI to enhance our service robotics business.
For example, our non-GAAP R&D expenses increased 25% quarter-over-quarter in Q3, and about 60% of our revenues are now invested in R&D. As Fu Sheng has said in the past, while we focus on developing products that generate immediate revenue and profits, we remain attentive to the latest technological advancements. Another highlight is the continued strength of our legacy Internet business, which grew 26% year-over-year and 18% quarter-over-quarter. Operating margin excluding share-based compensation expenses for this segment improved to 10% from 6% a year ago. As of September 30th, 2024, we maintained a strong balance sheet with cash and cash equivalents of RMB1,831 million, or US$218 million, and long-term investments of about RMB886 million, or US$126 million.
In closing, we are making solid progress in expanding revenues and narrowing losses. We are confident in our investments in AI because we see significant market potential in integrating LLMs into our service robotics business. At the same time, we remain disciplined in reducing losses and driving efficiency in our AI operations. Thank you. This concludes our prepared remarks. Operator, we are now ready for the Q&A session.
Operator: Thank you. [Operator Instructions]
Unidentified Analyst: My question focuses on Cheetah’s robot business. May I ask, for the year 2025, what specific goals have been set for the robot business in terms of shipment volume, revenue growth and revenue proportion?
Unidentified Company Representative: Let me answer this question. Well, for 2025, some of our specific goals are still being formulated. I have also recently visited many channel partners overseas, and I think the overall direction can be described like this. First, our robot revenue will definitely increase over the next two and a half years, and its proportion of Cheetah’s total revenue should also increase. As for the specifics, we will draw on the experience we have recently accumulated in these overseas markets, make certain specific assumptions, and set concrete targets on that basis. Our view is that although the robot industry is very hot in the capital markets today, its growth, at least in the short term, will not be as explosive as the Internet was back then. I also have to tell everyone that it should be a continuous and gradually accelerating process, because for robots, which combine software and hardware, the investment required in physical deployment and channel construction is much larger than it was on the Internet, where growth was exponential. So for the robot business, we look more at long-term goals. For example, we may be able to make robot revenue account for more than half of the entire company within three years, and we aim to be among the top few globally, at least one of the top three such service robot providers.

This is our big goal. As for the detailed targets, we still need some discussion and careful analysis.
Unidentified Analyst: The cash cow business of Cheetah, the Internet business, performed very well this quarter, with both revenue and profit margin remaining stable. May I ask how we should think about the revenue and profit margin trends of the Internet business in 2025? Will it show steady growth, or gradually slow down and decline? I’d also like to raise a few questions about robot training. In the process of robot training, how do you overcome the problem of scarce data? What role can large models play with respect to training data? Cheetah has already deployed many robots globally, so in the future, is it possible to continuously train Cheetah’s robots in a data-driven manner to keep enhancing their intelligence?
Furthermore, in terms of training methods, how far are we from robots being able to self-learn by watching videos of humans performing tasks? Compared with leading overseas robot companies, how big is the gap for domestic companies in this respect?
Unidentified Company Representative: Well, the first one is a rather complex technical question. I’ll try to use my understanding, and our company’s, to analyze and explain it simply. The data requirements for robot training can be divided into several aspects, because we divide the robot into several components. One component is navigation, which is essentially small-scale indoor self-driving. Because the environment within an indoor space is relatively constrained, this has basically been solved through engineering techniques built on earlier models. However, with the addition of more underlying data, robots’ indoor navigation will become, how should I put it, more real-time in its implementation, and its reliance on sensors will become smaller.
This is what we are currently doing, right? One of the things we have been doing recently is advancing our robots’ indoor navigation in this direction. Our next robot can also be equipped with a higher-end chip, achieving indoor navigation purely through vision. This is also being advanced gradually. You can compare this with today’s new energy vehicles. In the end, it was Tesla’s urban self-driving that made significant progress, and adding LiDAR and various radars and modalities now actually seems less important than FSD. One important reason is the emergence of the transformer and the mechanisms of such large models. As the underlying implementation for end-to-end processing, this mechanism can handle many things.
So this is one aspect. Regarding data for this aspect, just as you said, we have deployed many robots, and they have been running in various scenarios, which actually gives us a considerable amount of data. And the road conditions they face are not as complex as those on highways, nor are the speed requirements as high. So for this part, we think it’s fine; data is not a major constraint, especially for a company like ours that already has many robots on the ground providing services every day. In indoor navigation, we don’t see a real bottleneck. The second part is the concept of self-learning skills that is currently quite popular. For this, I think it is still more theoretical at present. The computing requirements, and exactly what kind of new paradigm it is, don’t really have a particularly clear definition yet, right?
Some say it is a closed loop, some say it is human-like or a dual closed loop. The data in this area is indeed relatively scarce, because previously everything, including the robotic arms in factories that you mentioned, was built with automation code at its core rather than a data system at its core. Our approach is to take it step by step. I have expressed this viewpoint on many occasions: I believe that for humanoid robots, there is still a long way to go before they truly land and become commercialized products. It is unlikely they will achieve true commercialization without another five or ten years. Although many of their demonstrations look good, for them to truly become commercialized products there is still a long way to go.
So it’s about a pragmatic approach, and we will work from our own scenarios. For example, we start by completing some simple tasks in the interaction between a robotic arm and the real world. I won’t go into the specifics because they relate to our technical roadmap. Our idea is not to come up with a perfect product that can do everything and solve all universal problems. Instead, because we have scenarios on the ground today, we work more closely with those scenarios themselves to keep collecting this data and training on it. We think this requires some time, and the investment required may not be as small as people optimistically assume today. But I think we will complete one scenario after another, step by step. You can also see that for some foreign startups, the tasks they demonstrate with venture funding are very simple, but I think this approach is easier to implement.
If they try something like cooking and preparing meals right away, it’s basically a laboratory product, because there are too many performance constraints, which makes it very difficult. As for the second question, how far are we from robots completing tasks by learning from watching human videos? Well, it’s still quite far. Most of what we see now are demonstration videos. I can give you an example: there was one that was everywhere for a while. It would watch a task and then learn it, but its success rate is very low, maybe 70% or so in the papers that were published. Of course, there will be progress, and even that 70% is in a specific setting. For example, on a tabletop, it’s not the entire tabletop but a designated area.
So it’s not as imminent as people think. Consider autonomous driving. Many teams have been working on it since 2016 and 2017; it’s been eight years now. Even on a two-dimensional road surface, no autonomous driving company has achieved the L4 level today, right? At the beginning, everyone was very optimistic about autonomous driving, thinking that once recognition was solved, autonomous driving would be possible. But even Tesla has only announced that it will have robotaxis, including such cars, landing in 2026. I don’t think the timeline for robots to self-learn by watching humans perform tasks is any more optimistic than that of autonomous driving, because it involves a three-dimensional mechanical system with more mechanical structures.
This is our judgment on the major technological trend. As for how big the gap is between domestic robotics companies and foreign ones, I frankly don’t think it’s significant. Because today, with large models, once they are developed, domestic companies follow up very fast. At the AI level, the underlying algorithms can largely be shared, and once an algorithm achieves a breakthrough, it is not hard for everyone to learn it. The real difficulty lies in how to engineer the algorithm, how to use more data for training and how to train more efficiently. In fact, Chinese teams have an advantage in this regard. At least there is no gap in doing this kind of large-scale data engineering.
So today, I don’t think there is a big gap, at least in the training aspect of existing methods. If you use some of the domestic large model apps, the productization and other aspects are actually quite good. If a gap really exists, it might be on some new path. For example, if a new method emerges, I think there will be some gap domestically; it is very difficult to come up with particularly innovative methods here. But once a certain method emerges, the speed of domestic follow-up is very fast, and there is not much of a gap. This is my personal view.
Unidentified Analyst: Since the beginning of this year, the company has been reducing losses every quarter. May I ask about the subsequent pace and specific plan for reducing losses? Is there a clear timetable for achieving profitability?
Unidentified Company Representative: Reducing losses is definitely our top priority. Today, all enterprises are talking about cost reduction and efficiency improvement. We have certainly achieved a certain scale of loss reduction this year. But I want to say that because we participated in some research and development and training of large language models, the pace of loss reduction slowed a little. After our review this round, we will put more of our effort into the development of agents and the implementation of robot intelligence. Such R&D costs will be significantly lower than what we spent on large language models in the past. We definitely have an internal loss reduction plan and a timetable for profitability.
That said, due to the significant changes in the market and in technology, we may not be able to disclose this very clearly externally. As with the earlier question about progress in training, we have seen some business opportunities, and we also see that the support large language models provide to service robots, whether it is their underlying planning ability, task decision-making ability or their interaction ability, will improve customer satisfaction in various markets and expand those markets. So we also need to maintain a certain flexibility. If such an opportunity arises, we may have to make some additional investments in R&D, of course.
Overall, we will definitely aim to make the company profitable and create value for shareholders. This major goal is beyond doubt.
Unidentified Analyst: How do you view agentic AI, or AI agents? How difficult is this technology? For example, we have seen recently that Zhipu’s AI agent can independently help users search and place orders on Baidu Maps and Meituan. Can AI agents accelerate the application of large models on the C end and B end? How should we think about the value distribution here?
Unidentified Company Representative: Well, okay, thank you. This is quite a professional question. This term has become especially popular recently. Essentially, an AI agent is somewhat like certain traditional types of software. What was it called before? That kind of software essentially emerged because the model’s capability has not yet reached a certain level, so we need to use part of human chains of thought and part of human planning to guide the large model, as if I write the first four or five sentences and the model carries on from there. Including what you saw with Zhipu’s demo, I think it might represent a paradigm shift toward a new kind of software.
That is, previously, when we wanted to write a piece of software, we had to rely entirely on a large amount of code and logic to complete it. Today, a lot of that code logic is handled by large language models. The reduction in the R&D cost of this kind of software, and the improvement of its user experience at every step, is indeed a great opportunity. Our own Internet business has also made some attempts in this regard. But I want to say that for an AI agent, the question is whether it can complete these queries or place these orders with the same high level of stability and satisfaction as traditional systems. What does that mean? For example, if I give it an instruction, can it reliably search for exactly what I want?
This is a very crucial point for the landing of large models on the C end or B end today. It is not just an industry talking point; we discovered this problem when we were building these things ourselves. For example, if you use traditional code to implement it, when you tap to place an order, because human operations are very precise, the behavior is basically 100% consistent, right? With a large model, sometimes it might fail, sometimes it might not meet your expectations, and sometimes it might give you errors. Large models don’t know what they don’t know; they have hallucinations. So for an AI agent to truly land, a lot of effort needs to be invested here. Coming back to your question, can it accelerate the landing of large model applications on the C end and B end?
It definitely can, especially for many C-end and B-end apps. Some apps have already begun to use this. I can give a few examples, such as image translation apps and some educational apps; this has clearly started. As for the value distribution, I think it will still bring a wave of opportunity for application developers. That is my view. But personally, I’m not particularly optimistic about startups landing on the C end in China, because the capabilities of the large domestic players in this area are extremely strong, and they also move very, very fast. If you make a small innovation in an area they are familiar with, it will be quickly copied. Thank you.
Unidentified Analyst: I’d like to ask a few questions about the application of large models at the enterprise level. We have observed that Cheetah has conducted many explorations in applying large models this year, covering areas such as training, sales management and data services. May I ask how willing enterprise customers are to pay for large model applications at present? In office scenarios, is the application of large models still mainly limited to specific work scenarios with a higher error tolerance? With the advancement of large model technology, especially the emergence of AI agents, can this effectively mitigate the hallucination problem of large models, thus enabling large model applications to completely or partially replace manual work?
Unidentified Company Representative: Yes. Your question itself is very thorough; I think you have covered basically every point. First, the willingness of enterprise customers to pay for large model applications depends entirely on the input-output ratio the application can bring them. And one point needs attention: this input-output ratio has to be high, relatively high, because adopting these applications essentially means reengineering many of the enterprise’s internal processes and redefining some positions. If the value is not high enough, an enterprise is not willing to push it forward. Currently, the enthusiasm of enterprise customers to pay for large model applications is, how should I put it, becoming more and more rational.
What I know is that last year, many enterprises spent a lot of money on licensing large models or on private-deployment licenses. This year, it is obvious that they no longer pay for these, or pay very little. Instead, they ask more about what directly usable thing you can provide. As for your question about whether current large models are indeed limited to specific scenarios with a relatively high error tolerance, yes, for example in training and in sales summaries, because even if the output is slightly inaccurate, it is still acceptable; you roughly get the general idea. For something like training, if it can achieve an accuracy rate of more than 90%, a large number of people can accept it. But in some areas such as data insight, I think the entire industry is still exploring together.
Yes, AI agents can effectively mitigate the hallucination problem of large models, because they use traditional code or task planning to confine the large model’s capability to a particularly vertical environment. When the scope of the large model is particularly vertical, the probability of errors decreases, especially now that the capability of large models has reached this level and everyone is investing in this. Therefore, large model applications on the B end will gradually achieve more and more replacement of manual work. To tie this to the next part of your question, I’ll give an example: today, if we build a large model application that is too broad, theoretically it is difficult to satisfy customers.
This is what we found after exploring for quite a long time. Let me give an example. I don’t know whether you have bought the latest iPhone, right? I specifically bought one that can run the overseas version to try it out. You will find that what it can truly deliver is very limited, and the experience is not that good. The same goes for what Microsoft has pushed. I think both of these define their applications too broadly, because they are large companies; but anyway, they have a broad enough business footprint and are slowly moving in this direction. A company like ours must build very clear, vertical applications. Our idea is to penetrate one point, make the experience replicable, and then move to the next point. So basically, this is my understanding.
Thank you.
Unidentified Analyst: Recently, many people have been discussing the slowdown of the scaling law. What’s your view on this? What impact would a slowdown of the scaling law have on the development of the large model application industry?
Unidentified Company Representative: You really don’t need to worry too much about it. There are still some disputes on this. But one point is that whether the scaling law itself is slowing down or not is unknown. At least recently, especially in the past one or two months, everyone has been talking about the insufficiency of data, right? Because the good, clean data on the Internet that can be used for large model training is roughly, well, the industry is not certain, but what I heard is probably around 20 to 30, basically about that amount of data. Although there is more data out there, its quality is not high and it might make the model worse, right? Everyone in our industry is keeping an eye on this.
For example, the next GPT has not been released yet, right? You can see that this time, over those 12 days, it was basically productization enhancements of existing model capabilities. What does this mean? It might imply that, in a certain sense, the capabilities of today’s top models globally are not likely to increase easily for a period of time. It has been at least four or five months, right? Before this, we saw GPT-3, GPT-3.5 and then 4.0 and 4o. Each step came quite fast before, but now it has been a long time. So at least in reality today, regarding the growth and expansion of the capabilities of the industry’s top large models: we don’t know whether the scaling law itself is slowing down, but this growth is definitely slowing down.
But this is a good thing for startups, especially for companies like us that build applications. When model capabilities were improving rapidly, many of the things you built on an earlier model would be crushed as soon as the new model came out. There were indeed some such projects at the time: after they were finished, the new model arrived with that capability built in, and the product no longer sold. But today, because the growth of top model capabilities has slowed, everyone is thinking about how to better utilize these capabilities with agents. This is a major theme for building applications. So we think this is beneficial to us. We can also be at ease, right? We no longer have to take part in the competition over base models.
We just focus on doing the application itself well, because we entered the game relatively early and have spent a lot of effort on specific scenarios. For us, these scenarios are the best place to keep polishing the product, because in the end it still has to be combined with customers and the market to know how satisfactory it really is. So I think it’s a good thing for companies like us. As for what this means for the industry as a whole, I don’t have such a high-level perspective. Well, thank you.
Unidentified Analyst: We understand that President Fu’s attitude towards robot development is extremely pragmatic, focusing on wheeled robots that can achieve large-scale applications at the current stage rather than on bipedal robots. However, we have also noticed that Cheetah’s robots are integrating robotic arms and that the company is launching embodied intelligent robot products as defined by Cheetah. I would like to ask how you view the changes in robot industry trends over the past three months. For example, what role do large models play in promoting the implementation of robots? And how do you view the future competitive landscape of the robot industry?
Unidentified Company Representative: How do I feel about the changes over the past few months? In the robot industry, the concept of large models has become increasingly popular, and with the addition of Tesla’s Optimus, humanoid robots have gone through several waves of popularity. But I still hold my view: robots are too broad a term. When people mention it, they always have the fantasy of using the human form as the prototype to create a perfect product that can solve all problems. In fact, I think this path is at least very, very difficult. We can look back at autonomous driving. Initially, there were two routes competing. One route was led by Google’s Waymo, which pursued a highly capable vehicle with many sensors, expensive LiDARs and very refined algorithms.
That vehicle was quite capable of stable autonomous driving. The other route was Tesla’s approach at the time, which was essentially: I am not sure I can definitely achieve it, so I will start with visual sensors and keep iterating, and eventually it came down to end-to-end training. Up to today, we can basically say, right, that the second route, with its in-depth exploration of scenarios and continuous iteration on data, has turned out better than the initial perfect assumption of having the best engineers and the most advanced sensors. Currently, this approach seems much better. I think it should be the same for robots. It is unlikely that someone can take a completely different path and complete everything at once.
Just as humans still need to drive a car or use a cart or other tools to get practical things done, right? So although the concept of robots has been very popular recently, there is still a considerable gap between the concept and practical implementation. Regarding the hallucination problem of large models, it is like this: it is a fundamental algorithmic issue. In language, a somewhat imperfect answer is acceptable. But when it comes to actually performing actions or interacting with objects, there cannot be any errors, and once such errors occur, it becomes very difficult for the machine to be deployed, right? For example, in a restaurant or in a reception scenario, you will find that with no one supervising it, the robot has to operate for over 10 hours a day, hundreds of days a year, and cannot make mistakes across thousands of scenarios.
A small mistake will affect your customers’ confidence and whether they will recommend you to others. So I think large models are definitely helpful for the general direction of robots, but actual implementation should advance step by step in combination with scenarios, which is also why I am not very optimistic about humanoid robots. As for the future competitive landscape, this question is very broad. I believe that, as in the example I gave, more pragmatic robot manufacturers will keep emerging, and the competitors who keep polishing their products will win in the end. I am not optimistic about proposing a very big concept today and then treating it as a moonshot. I don’t think that can succeed, because there is a fundamental logic here: robotics is an industry with a highly integrated combination of hardware and software.
And the hardware system is very complex. It is not quite like a car; a robot has many mechanical structures inside, and the progress of mechanical structures in the hardware system is not supported by Moore’s Law. Software can double its performance or halve its cost in 18 months; hardware has to progress gradually. Cars have been around for over 100 years, right? And now there is also the smart car revolution. So in the future, anyway, we are firm in our own path, which is to keep combining with scenarios, adding robotic arms and other features in scenarios with urgent needs and pain points, completing high-quality, highly reliable actions, and achieving greater expansion from there.
Okay. Thank you.
Unidentified Analyst: We have noticed that the company has a large amount of net cash on its books. May I ask if there are any plans for share buybacks or dividends in the future?
Thomas Ren: Let me answer this question. This is Thomas, the company’s CFO, speaking. Thank you for your question. First of all, Cheetah has always had an open attitude towards shareholder returns, and we, the management, attach great importance to them. Historically, we have distributed dividends twice and have also carried out multiple share buyback programs. Those two dividend distributions were based on exits from important investment projects, where we obtained cash returns and passed them back to our shareholders. As for the future, whether there will be dividends or other means, there are many factors we need to consider. For example, as was mentioned earlier, we still have some investments to make in technology, including the fact that our business is transforming from to-C to to-B, and the development of AI large models and the robot business also requires certain investments.
At the same time, we also feel that the overall economic environment is currently rather uncertain. For us, maintaining a relatively sufficient cash reserve is quite important for the development of the business. Therefore, we will continue to maintain a relatively cautious financial strategy and ensure that the company has sufficient flexibility and resilience when facing market fluctuations. If, in the future, our Board of Directors approves a plan for share buybacks or dividends, we will make an announcement to the market as soon as possible. Thank you.
Unidentified Analyst: At present, large models in China are developing rapidly in terms of performance and efficiency, such as the accuracy of content generation, generation speed and reasoning cost. Have you felt any significant differences among the various models in actual use, or has the competition among large models gone beyond model capabilities themselves and extended more towards productization and ecosystem building?
Unidentified Company Representative: You should know that the domestic ecosystem is rather complex, and it’s not appropriate for me to evaluate which model is better. Everyone can try them for themselves and form their own impressions. I use them back and forth often, and it’s true that there are some differences in the experience. Your question is a very good one. I think the competition among large models exceeded model capabilities themselves from the very beginning, because the capability of the model itself, as we found when we recently launched a function called Data Treasure, depends on the data. And because this data is relatively public on the Internet, if you spend more effort on high-quality data and do more engineering work, your model’s capability can be quite good.
We actually have two models ourselves, one with four billion parameters and the larger mixture-of-experts model, and their results on the leaderboards are also quite good, because the competition among large models today still mainly comes down to how much attention you pay to this and how much you invest in data. And at this stage, it is certain that the competition must move towards productization and ecosystem building. Whoever can truly improve the product experience and increase user satisfaction can almost eliminate the gap in some benchmark indicators of the underlying model. I don’t think that matters much anymore. Look at the leaderboards: today this one is on top, tomorrow that one is on top, and then another one comes up for a while.
In fact, I think that phase may have passed; the competition has already moved towards productization and ecosystem building. In a recent interview I saw, when someone was asked what was lacking, the answer was the product, right? In this large model, this AI competition, the pace is very fast, so it has shifted from the initial technical enthusiasm, or technical comparison, to comparing products and ecosystems. This stage is very obvious. I have also seen that recently some domestic entrepreneurs have said that next year will be the year of ecosystem explosion and application explosion. In principle, I agree, because model capability has reached a relatively high level, there isn’t much difference among players, and it is not easy to improve further.
Just like the data issue we mentioned earlier, unless some special new paradigm emerges, the effort will now go into products and the ecosystem. And because capabilities have reached a certain level and everyone is investing, more attention will be paid to the user experience. So I think there will be significant progress in products and ecosystems next year. Thank you.
Unidentified Analyst: I’d like to ask a technical question about large model training. How capable are Chinese enterprises under the new paradigm of reinforcement learning in large model training? We understand that this new reinforcement learning paradigm is characterized by a lack of readily available open source models and academic papers for direct reference.
Unidentified Company Representative: This is really quite an academic question. Firstly, I think that under the reinforcement learning paradigm, the technical capabilities of Chinese enterprises are not bad. Reinforcement learning has existed for a long time, right? I think it is mainly because of the launch of OpenAI’s o1 that everyone discovered reinforcement learning might also be needed in language models. You can look at the past: after AlphaGo played Go, some major Chinese companies and teams also did well at Go. Basically, as I summarized just now, there may be only a one or two point difference on some specific evaluation indicators. I don’t think this really affects the final productization and implementation, including in our speech recognition and earlier vision work.
Unidentified Analyst: We can see more and more merchants in China using service robots, most commonly in restaurants and hotels. I would like to ask, what is Cheetah’s current market share in these two scenarios? From an overall market perspective, how far have robots penetrated the restaurant and hotel markets, and how much room for growth is there in the future? How much market share can Cheetah’s robots capture in these two segments in the next three years?
Unidentified Company Representative: Well, regarding market share rankings, there are relatively few reports in this industry and they are all rather sketchy. For restaurants and hotels, we started relatively late, because we initially focused on reception services, but I think we should be among the top few; that is roughly the share situation. From an overall market perspective, the penetration of robots in restaurants and hotels is still at an early stage. In China, it might be around 5% for hotels, perhaps a bit more there because there are not as many in restaurants. It is still an early-stage market with room for future growth, and in foreign markets I think it is even earlier.
During my visits with many of our customers in Europe this time, I saw that the market space in all respects is still very, very large. You can think of the overseas market as being where China was three to five years ago. In China today, some markets do not depend solely on the robot market itself. For example, as we all know, restaurants are focused on cost reduction and efficiency improvement, and in some cases their purchasing power has declined; that is a fact. But in the long term, I am very optimistic about these two scenarios, including some extended scenarios. What we do is not just delivery. We are also enhancing voice capabilities using large models, and when they reach a certain level, we believe this will greatly expand the work robots can do in restaurants and hotels.
Whether we can achieve this depends on the execution ability of our own team, and this has been my biggest realization over the past year or so. Because this is essentially a to-B business, a lot of effort is needed in building the sales channel, sales management and the sales team. This is also why I now often visit many customers, both overseas and domestic. I think our technical capabilities are definitely among the top in this industry; then it is about which scenarios we enter, what products we make, and whether we can do well in terms of volume and customer satisfaction. In fact, the real difficulty for us is not the technology and products; it is more the establishment of the sales channel and the entire sales network.
Regarding this wave of Chinese enterprises going global, I talked to a friend yesterday. I think the biggest difference from the last wave, when Cheetah Mobile’s apps went global, is that this time we must go deep, build local channels, understand the local market and formulate the corresponding strategy. If we can do this, I believe our moat will be much deeper than in the last wave, when Cheetah Mobile relied on online advertising and rankings for promotion. But the effort and difficulty required will also be considerable. This is our view, and I still have confidence that we can achieve a top position in the market within three years.
Operator: Ladies and gentlemen, at this time, we will conclude our question-and-answer session. And at this time, the conference has now concluded. We do thank you for attending today’s presentation and you may now disconnect.