Meta Releases Llama 3.1, Its Largest AI Model Yet, with 405 Billion Parameters

Meta announced the release of Llama 3.1, the largest open artificial intelligence language model to date, with more than 400 billion parameters. According to Meta CEO Mark Zuckerberg, the model could surpass GPT-4 in performance in the near future, and by the end of the year it will become the most popular AI assistant in the world.

Image Source: Reuters

According to The Verge, developing the new model required large investments. Llama 3.1 is significantly more complex than the earlier versions released just a few months ago. Its largest variant has 405 billion parameters and was trained using more than 16,000 Nvidia H100 accelerators. Meta does not disclose what it spent on development, but based on the cost of the Nvidia chips alone, it is safe to assume the figure runs into hundreds of millions of dollars.

Despite the high cost of development, Meta decided to make the model code open source. In a letter published on the company’s official blog, Zuckerberg argues that open-source AI models will overtake proprietary models, just as Linux became the open-source operating system that powers most phones, servers and gadgets today.

One of the key updates is the expanded geographic availability of the Meta AI service, which is built on Llama. The assistant is now available in 22 countries, including Argentina, Chile, Colombia, Ecuador, Mexico, Peru and Cameroon. In addition, whereas Meta AI previously supported only English, it now also supports French, German, Hindi, Italian, Portuguese and Spanish. It is worth noting, however, that some of the new features are currently available only in certain regions or for specific languages.

Image source: Meta

Meta AI has also gained an interesting “Imagine me” feature built on the generative AI model Imagine Yourself, TechCrunch reports. The model creates images based on a user’s photo and a text prompt that supplies the desired context — for example, “Imagine me as a surfer” or “Imagine me on the beach” — after which the AI generates the corresponding image. The feature is available in beta and is activated by typing the phrase “Imagine me”.

Image source: Meta

Meta AI will also receive new image editing tools in the near future. Users will be able to add, remove and change objects in images using text prompts. Starting next month, the developers promise to introduce an “Edit with AI” button that provides access to additional fine-tuning options. Later, new shortcuts will make it possible to quickly share AI-generated images to feeds, stories and comments in Meta apps.

As a reminder, Meta AI launched in September 2023. The service was originally based on the large language model Llama 2 and lets users obtain information, generate text, translate between languages and perform other tasks using artificial intelligence.
