Meta has released Llama 3.1, its largest AI model yet – the biggest version has 405 billion parameters

Meta announced the release of Llama 3.1, the largest open artificial intelligence language model to date, with more than 400 billion parameters. According to Meta CEO Mark Zuckerberg, the model could surpass GPT-4 in performance in the near future, and by the end of the year it will become the most popular AI assistant in the world.

Image Source: Reuters

According to The Verge, the development of the new model required large investments. Llama 3.1 is significantly more complex than earlier versions released just a few months ago. The largest version of the AI model has 405 billion parameters and was trained using more than 16,000 Nvidia H100 accelerators. Meta does not disclose how much it spent on development, but based on the cost of the Nvidia chips alone, it is safe to assume the figure runs to hundreds of millions of dollars.

Despite the high cost of development, Meta decided to make the model open source. In a letter published on the company’s official blog, Zuckerberg argues that open-source AI models will overtake proprietary ones, just as Linux became the open-source operating system that powers most phones, servers and gadgets today.

One of the key updates is the expanded geographic availability of the Meta AI service, which is built on Llama. The assistant is now available in 22 countries, including Argentina, Chile, Colombia, Ecuador, Mexico, Peru and Cameroon. In addition, where Meta AI previously supported only English, it now also supports French, German, Hindi, Italian, Portuguese and Spanish. It is worth noting, however, that some of the new features are currently available only in certain regions or for specific languages.

Image source: Meta

A notable addition is the “Imagine me” feature, which uses the generative AI model Imagine Yourself, TechCrunch reports. The model creates images based on a user’s photo and a text prompt describing the desired context – for example, “Imagine me as a surfer” or “Imagine me on the beach” – after which the AI generates the corresponding image. The feature is available in beta and is activated by typing the phrase “Imagine me”.

Image source: Meta

Meta AI will also receive new image editing tools in the near future. Users will be able to add, remove and change objects in images using text prompts. Starting next month, the developers promise to introduce an “Edit with AI” button that opens additional fine-tuning options. Later, new shortcuts will make it possible to quickly share AI-generated images to feeds, stories and comments across Meta apps.

As a reminder, Meta AI launched in September 2023. The service was originally based on the large language model Llama 2 and lets users look up information, generate text, translate between languages and perform other tasks using artificial intelligence.
