Wallstreetcn
2024.07.23 01:25

Meta's Llama-3.1-405B leaked, available for download, outperforming GPT-4o!

Evaluation data for Meta's Llama 3.1-405B has leaked ahead of release, and the figures suggest the model's performance surpasses GPT-4o. However, its inference cost is roughly three times that of GPT-4o mini while its coding performance is worse. A model with this many parameters is beyond what individual developers can deploy, making it more suitable for enterprises and the public sector. The weights have already leaked and can be downloaded via a magnet link, but they cannot run on ordinary GPUs. Some netizens are dismissive of Meta's new models, arguing that neither the cost-effectiveness nor the functionality is worth looking forward to. The leak reportedly originated from Microsoft's Azure GitHub, and given the model's heavy compute requirements, it is not as cost-effective as GPT-4o mini.

In the early morning of July 23, someone leaked the evaluation data of Meta's Llama 3.1-405B. The largest model in the Llama 3 series may be officially released tomorrow, along with a Llama 3.1-70B version.

This is a functional iteration on version 3.0, and according to the leaked data even the 70B base model outperforms GPT-4o.

Even a magnet link has leaked, with a total size of about 763.84 GB according to the "AIGC Open Community". The model was originally hosted on Hugging Face, but the repository was later deleted.

The download speed is also decent, around 14 MB per second, suggesting that quite a few people are already downloading the model.
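As a rough illustration (my own arithmetic, not from the article), the reported size and speed imply the full download still takes well over half a day:

```python
# Rough download-time estimate from the figures reported in the article.
# Illustrative only; real torrent speeds fluctuate with the number of seeders.

size_gb = 763.84      # reported size of the leaked weights
speed_mb_s = 14       # reported download speed, MB per second

hours = size_gb * 1000 / speed_mb_s / 3600
print(f"~{hours:.0f} hours to pull the full {size_gb} GB at {speed_mb_s} MB/s")
# -> roughly 15 hours for the complete download
```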

However, this model definitely cannot run on an ordinary GPU. With so many parameters, individual developers cannot afford to deploy it (unless they happen to have a few H100s); it is expected to be used mainly by enterprises and government or public-sector organizations.
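As a back-of-the-envelope sketch (my own estimate, not from the article), the memory needed just to hold the weights shows why this is multi-H100 territory:

```python
# Rough estimate of GPU memory needed just to hold Llama 3.1-405B's weights.
# Illustrative only; a real deployment also needs memory for the KV cache,
# activations, and framework overhead.

PARAMS = 405e9        # 405 billion parameters
H100_VRAM_GB = 80     # one NVIDIA H100 carries 80 GB of HBM

for name, bytes_per_param in [("FP16/BF16", 2), ("INT8", 1)]:
    weight_gb = PARAMS * bytes_per_param / 1e9
    gpus = weight_gb / H100_VRAM_GB
    print(f"{name:>9}: ~{weight_gb:,.0f} GB of weights -> at least {gpus:.1f} H100s")

# FP16/BF16: ~810 GB -> roughly 10+ H100s before any runtime overhead
# INT8     : ~405 GB -> still a multi-GPU server, not a workstation
```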

Some netizens poured cold water on Meta's upcoming model: compared with OpenAI's latest GPT-4o mini, the inference cost of Llama 3.1-70B is roughly three times higher, while its coding performance is much worse.

In terms of cost-effectiveness and functionality, they argue, there is little to look forward to in Meta's new model.

Some users even spotted the model on GitHub before it was quickly taken down, which suggests a few people may already have access to it.

Others believe the leak is genuine because it originated from Microsoft's Azure GitHub. Still, the model's enormous parameter count demands high-end GPUs, making it less cost-effective than GPT-4o mini.

Although the model is free, it is difficult to run; without enterprise-grade compute infrastructure, it is effectively unusable. In that sense, it is good news mainly for enterprises.

Some pointed out that even with heavy optimization, a Llama 3.1-405B quantized down to 5 bits still cannot fit on consumer-grade GPUs; the hardware requirements are simply very high.
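A quick check of that 5-bit claim (again my own rough arithmetic, not from the article): even at 5 bits per weight, the parameters alone dwarf the memory of any consumer card:

```python
# Sanity-check the 5-bit quantization claim: do 405B parameters fit
# in a consumer GPU's memory? (Weights only; no KV cache or activations.)

PARAMS = 405e9             # 405 billion parameters
BITS_PER_WEIGHT = 5        # e.g. a Q5-style quantization
CONSUMER_VRAM_GB = 24      # an RTX 4090-class consumer card

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
print(f"5-bit weights: ~{weights_gb:,.0f} GB vs {CONSUMER_VRAM_GB} GB of VRAM "
      f"(~{weights_gb / CONSUMER_VRAM_GB:.0f}x too large)")
# -> roughly 253 GB of weights alone, an order of magnitude more than a 24 GB GPU
```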

If the evaluation data is genuine, this is a huge benefit for much of the world: it is Meta's top model in the Llama 3 series, and all of its weights are open, meaning anyone can use this frontier-class AI model for free.

However, building generative AI applications on top of it still requires a solid AI compute foundation, high-quality data, and fine-tuning expertise.

Due to regulators and various laws, Meta has repeatedly delayed the release of the 405B model. So was this leak released intentionally by Meta? It would fit an old tradition: last year's Llama model leaked in the same way.

At that time, the "AIGC Open Community" tested the leaked files and confirmed they were indeed the original model. We look forward to tomorrow to see what Meta actually delivers.

Author: AIGC Open Community, Source: AIGC Open Community, Original Title: "Meta's Llama-3.1-405B Leaked, Available for Download, Performance Surpasses GPT-4o!"