Wallstreetcn
2023.07.22 02:16

Will the open-source large model LLaMA 2 play a role similar to Android's?

Without Android, there would be no mobile Internet. Based on the evaluations, LLaMA 2 is almost on par with GPT-3.5; in other words, it has become usable in terms of creating value. If it continues to develop, it will truly become the engine of a new intelligent ecosystem, playing a role similar to Android's.

In articles such as "Do AI Large Models Have No Business Model?", I have made this point many times: do not compare the future application mode of large models to the public cloud. The large model will ultimately be the core of a cloud operating system (a new general-purpose computing platform), and its landing form will look much like Watson did in the past: a systematic super application. If so, an open-source, cheap "Android" is clearly needed to really kick off that landing. Unexpectedly, less than a month later, LLaMA 2 appeared and performed well. So will LLaMA 2 play the role in the AI industry that Android played in the mobile Internet?

## Android and Historic Moments

The mobile Internet had two crucial historic moments: the release of the iPhone, which went down in history together with Jobs, and the release of Android in the same year, which is comparatively little known. In fact, the latter's impact on the mobile Internet was greater than the former's. Roughly 15% of the mobile phones sold globally each year are Apple phones; the rest are basically Android phones. In other words, without Android there is no mobile Internet.

So what exactly is an operating system like Android, and why is it so critical? Let us briefly review what an operating system is.

[Figure: the simplest diagram of an operating system. It is not very accurate, since the Kernel position usually contains very complicated modules, but it clearly shows what an operating system is.]

Android sits in the middle, in the Kernel position. Applications above it, such as WeChat and Douyin, have no direct access to the microphone, camera, memory, network, and so on; all use of physical devices must pass through a system such as Android.

## What Are the Benefits of This Division of Labor?

It reduces development and application costs. In the early days of the IT industry there was no such division of labor: usually one company (such as IBM) did everything, which placed very high demands on personnel. If the technical difficulty of developing an ordinary terminal application is 1, the difficulty of developing many operating-system modules is closer to 10; the two are not on the same dimension. Meanwhile, there is only one operating system but countless applications (one Android, a whole app store of apps), so a more efficient division of labor was needed; the industry divided again, and the operating system emerged.

An efficient division of labor drives the development of the entire ecosystem, and for the mobile Internet, Android was the most critical fulcrum of that development. If the WeChat team had had to build Android itself, we would likely never have seen WeChat.

If the diagram above is the operating system in the technical sense, then the operating system in the commercial sense is an engine that powers all applications, lowering their development costs and shortening their cycles. The large model plays exactly this role, but if there is only one (ChatGPT) and only OpenAI itself uses it, it is just like iOS: no variety of "phones" can be built on top of it, and the mobile Internet would never really have started. In the past it was impossible to build a Watson-like system on ChatGPT, because no single technological advance can offset all the data-risk concerns. Who is willing to upload all their data to OpenAI? Who is willing to depend on a completely opaque black box?

But until recently, the other large models were simply too weak. LLaMA 2 is changing this situation: judging from the evaluations, it has basically caught up with GPT-3.5, which means that on the dimension of creating value it has become usable, and on the dimension of landing, being open source lets it hedge those risk concerns. If it keeps developing, it will truly become the engine of a new intelligent ecosystem, playing a role similar to Android's.
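To make the "open source hedges the risk" point concrete, here is a minimal sketch of running LLaMA 2 entirely on one's own hardware via Hugging Face `transformers`, so that sensitive data never leaves the machine. This is an illustration, not a production setup: the model ID `meta-llama/Llama-2-7b-chat-hf` is gated and must be approved by Meta, and the prompt is invented for the example.

```python
# Minimal sketch: serving LLaMA 2 locally so that sensitive data never
# leaves the machine. Assumes the gated meta-llama/Llama-2-7b-chat-hf
# weights have been approved and downloaded, and that `transformers`,
# `torch`, and `accelerate` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # halve memory on GPU
    device_map="auto",          # let accelerate place the weights
)

# LLaMA 2 chat checkpoints expect the [INST] ... [/INST] wrapper.
prompt = "[INST] Summarize this internal report in three sentences. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The specific code matters less than the deployment model it represents: the same engine can be fine-tuned, audited, and kept in-house, which is exactly what a closed API cannot offer.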
I have also mentioned many times in this series that our usual way of looking at large models is problematic. If you can see the technology and the capabilities but cannot see that landing them requires a complex system for support, then you cannot work out how they really create value, and you end up arguing about whether the whole thing is hype, which misses the point.

## Will AI Applications Explode?

If the evaluation results hold up, then Watson-like systems in various fields need to be built starting now; any later may be too late. The large model is critical and creates the opportunity to build intelligent applications, but just as what is really valuable on the mobile Internet is WeChat rather than a bare Android, the large model also needs applications to grow on top of it. With a systems mindset it is easy to see the key points of this kind of systematic super application. Let us switch to another picture to illustrate.

[Figure: the basic architecture of Amazon Alexa. Alexa's infrastructure is essentially the same as Android's.]

Why use this figure? Because Alexa is so far the closest thing to an operating system in the cloud. OpenAI's plugins in fact follow the same pattern, and judged purely as a large model, OpenAI is far better. This progress in intelligence has greatly reduced the development cost of applications (the Alexa Skills Kit in the figure), but in terms of system completeness, such as intervening in and controlling devices at scale, OpenAI still has a long way to go.
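The Alexa Skills Kit and ChatGPT plugins share one pattern: the model decides *what* to do, and a thin host layer executes it against real devices and data sources. Below is a hedged sketch of that dispatch loop; `get_weather`, `SKILLS`, and the intent format are inventions for illustration, not Amazon's or OpenAI's actual interfaces.

```python
# Conceptual sketch of the skill/plugin pattern: the model emits an
# intent, the host executes it. The names and intent format here are
# illustrative inventions, not Alexa's or OpenAI's real APIs.
import json

def get_weather(city: str) -> str:
    # Stand-in for a real data source; a model cannot generate weather.
    return json.dumps({"city": city, "forecast": "cloudy", "high_c": 27})

SKILLS = {"get_weather": get_weather}

def handle_model_output(model_output: str) -> str:
    """Route a model-emitted intent such as
    {"skill": "get_weather", "args": {"city": "Beijing"}}
    to the matching skill, mirroring how a skills kit dispatches."""
    intent = json.loads(model_output)
    skill = SKILLS[intent["skill"]]
    return skill(**intent["args"])

print(handle_model_output('{"skill": "get_weather", "args": {"city": "Beijing"}}'))
```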
Based on this architecture diagram, it is easy to see the key points of a super application built on large models (a conceptual sketch appears below):

- **Large model:** sits in the middle and is responsible for providing general intelligence. It should be paired with other kinds of algorithms, such as perception and recommendation algorithms. In the figure, interaction is decomposed into speech recognition (ASR) and natural language understanding; the large model will replace these completely, but the architecture will not change.
- **AIoT infrastructure:** to support the large model in running well and connecting to everything else, AIoT amounts to nothing more than large-scale device management and communication. Together, these parts make up the Kernel of the old system, i.e. the role Android played (the middle of the figure).
- **User side (right side of the figure):** provides full perception, which must cover the time and space dimensions and include both on-site and historical data. When we interact with a smart speaker, we first have to call out a wake word, such as "Xiao Ai Tongxue"; however powerful the model behind it is, it cannot hear us otherwise. The interaction then requires the person's modeling data (history), location, and so on before good output can be produced. This part varies greatly across scenarios; a Watson-like scenario may require a person's 24-hour ECG, medical history, and genetic data. These parts are critical and require a combination of IoT and algorithms, and the large model plays a less central role here.
- **Application side (left side of the figure):** the smart-speaker scenario needs connections to various data sources (this is what ChatGPT plugins do); the weather cannot be generated by a large model. For Watson-like systems, this is where industry data and regulations come in.

Compared with the past, the application side is getting thinner, but the point is that the three parts together constitute the application under the intelligent cloud system. If an analogy is needed, creating this kind of new application is a bit like having to build phones for specific fields.
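To tie the three parts together, here is a conceptual sketch of how a systematic super application composes perception (user side), the large model, and domain data sources (application side). It implements the shape of the list above, not Alexa or Watson; every class and function name is invented for illustration.

```python
# Conceptual sketch of a "systematic super application":
# user side (perception + history) -> large model (reasoning)
# -> application side (domain data sources). All names are invented.
from dataclasses import dataclass

@dataclass
class PerceptionContext:
    """User side: on-site signals plus historical/modeling data."""
    wake_word_heard: bool
    location: str
    user_history: list[str]

def fetch_domain_data(query: str) -> str:
    """Application side: things a model cannot generate (weather,
    regulations, a patient's ECG) must come from real sources."""
    return f"external data for: {query}"

def large_model(prompt: str) -> str:
    """Middle layer: stand-in for a LLaMA-2-class model call."""
    return f"answer grounded in [{prompt}]"

def super_app(ctx: PerceptionContext, query: str) -> str | None:
    if not ctx.wake_word_heard:  # no perception, no intelligence
        return None
    grounding = fetch_domain_data(query)
    prompt = f"history={ctx.user_history} loc={ctx.location} {grounding} q={query}"
    return large_model(prompt)

print(super_app(PerceptionContext(True, "Beijing", ["asked about rain"]), "weather"))
```

The design point is that the model is only the middle box; without the perception context the call never fires, and without external data sources the answer cannot be grounded.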
There are countless fields that need this kind of systematic super application: medical care, education, taxation, enterprise services, retail, the military, the home, and so on. Long ago, DeepMind even tried to build such an application for Google's data centers, to manage the air conditioning. AI applications will indeed explode. In the past, the large model was the starting point and the obstacle, not the application itself; LLaMA 2 offers the possibility of crossing that barrier at low cost.

## Will General Profitability (Break-even) Come?

In the previous series of articles I have summarized this many times: from a business perspective, AI entrepreneurship over the past ten years has collectively failed, because it never established a single decent business model. For companies that build their own large models, short-term losses will intensify, because inputs keep growing and are frequently written off while bargaining power does not increase. So how will this change now?

It will change a little, but not for the companies that build large models; rather, for the companies that apply them. It becomes possible to create a highly effective systematic super application at lower cost, and one can even measure its effectiveness and commercial value simply by how many people's work it is equivalent to. If LLaMA 2 keeps progressing, it means this super application can always use a cheaper yet more powerful engine. At that point the overall cost is under control; the surrounding parts still need investment, but they do not produce the extreme input-output imbalance that large-model development does.

This time the effect is obvious. The AI algorithms of the past mostly solved painless problems and created no core commercial value: what core value can turnstiles and smart speakers create? Now it is different: large models have progressed technically to the point where using them versus not using them will be the difference between broadswords and machine guns. By analogy, who today can imagine an enterprise that uses no computers or Internet at all?

More importantly, the key to the data flywheel now appears to be in the hands of whichever enterprise builds the systematic super application, which raises the ceiling. Something worth pondering: Musk was a businessman first, and starting from the release of xAI I drew a picture like this:

[Figure: the three phases of the data flywheel]

The ultimate pursuit of every large model in every field must be to get this data flywheel running. Unfortunately, so far nothing except AlphaGo seems to have done so. But that does not mean the future will be the same: whoever gets it running first in a field will be the champion of that field.

Who is most likely to get this flywheel running in the current industry chain? Obviously, those who successfully land the systematic super application: they are the ones with the scenes and the users. Looking back a few years from now, we may well see LLaMA 2 as a moment like the Android release of 2007.

## Summary

In my first ten years I mainly built systems, in between I did strategic investment, and in the most recent ten years I have mainly worked on AI production and research. Perhaps because of that background, the more I look at the commercialization path of large models, the more I believe it lies in the systematic super application. I hope interested readers will contact me to discuss how to build new business models out of the new technological elements. Indeed, as the fellow from DeepMind said: **Forget the Turing test; the core question now is whether we can end the AI industry's ten years of losses. That is what matters!**