Wallstreetcn
2023.09.11 03:52

Buying more H100 cards, Meta will train a new model early next year with capabilities comparable to GPT-4.

To bridge the performance gap, Meta is developing a new artificial intelligence model to benchmark against OpenAI.

To narrow the gap with competitors like Google, Microsoft, and OpenAI, Meta plans to train a new model starting next year, which may be open-sourced.

According to insiders at Microsoft, the upcoming AI model Meta is developing is benchmarked against OpenAI's GPT-4. It is designed to help the company build services capable of generating complex text, performing analysis, and producing other outputs.

To build the data centers needed for this project, Meta is acquiring more H100 units. Despite its collaboration with Microsoft to offer Llama 2 on Microsoft's Azure cloud computing platform, Meta intends to train the new model on its own infrastructure.

Meta plans to begin training this AI model early next year, with the goal of making it several times more powerful than the Llama 2 model released two months ago. In July of this year, Meta launched Llama 2 in an attempt to challenge OpenAI's dominance in the large language model (LLM) market. However, Meta acknowledged in its Llama 2 paper that a significant performance gap remains between Llama 2 and closed-source models such as GPT-4 and Google's PaLM-2.

The most powerful version of Meta's Llama 2 model was trained with 70 billion parameters, the learned variables whose count is commonly used to gauge the size of an AI system. OpenAI has not disclosed the size of GPT-4, but it is estimated to be approximately 20 times larger, at 1.5 trillion parameters. Some AI experts suggest there may be alternative approaches to achieving GPT-4's functionality without reaching such a scale.
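The "approximately 20 times larger" figure follows directly from the two parameter counts cited above; a quick sketch of the arithmetic (using the article's reported and estimated figures, not official disclosures):

```python
# Parameter counts as cited in the article: 70B is Llama 2's largest
# released version; 1.5T is an outside estimate for GPT-4, not a
# figure OpenAI has confirmed.
llama2_params = 70e9       # 70 billion
gpt4_params_est = 1.5e12   # 1.5 trillion (estimate)

ratio = gpt4_params_est / llama2_params
print(f"Estimated size ratio: ~{ratio:.0f}x")  # roughly 21x, i.e. "approximately 20 times larger"
```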

Reportedly, Meta CEO Mark Zuckerberg is actively pushing for the model to be open-sourced, which would reduce its cost and make it more widely accessible.

However, Meta's open-source approach also raises potential concerns. Some legal experts worry that open-sourcing the model could increase the risk of using copyrighted information and could facilitate the generation or dissemination of false information.

Sarah West, a former advisor to the Federal Trade Commission (FTC), also voiced her concerns:

"You can't easily predict what a system will do or its vulnerabilities—some open-source AI systems only offer limited transparency, reusability, and scalability."