Meta✴ announced the release of Llama 3.1, the largest open AI language model to date, with more than 400 billion parameters. According to Meta✴ CEO Mark Zuckerberg, the model could soon surpass GPT-4 in performance, and he expects it to become the world's most popular AI assistant by the end of the year.
According to The Verge, developing the new model required substantial investment. Llama 3.1 is significantly more complex than earlier versions released just a few months ago. The largest version of the model has 405 billion parameters and was trained on more than 16,000 Nvidia H100 accelerators. Meta✴ does not disclose how much it spent on development, but judging by the cost of the Nvidia chips alone, it is safe to assume the figure runs to hundreds of millions of dollars.
Despite the high cost of development, Meta✴ decided to release the model as open source. In a letter published on the company's official blog, Zuckerberg argues that open-source AI models will overtake proprietary ones, just as Linux became the open-source operating system that today powers most phones, servers and gadgets.
One of the key updates is the expanded geographic availability of the Meta✴ AI service, which is built on Llama. The assistant is now available in 22 countries, including Argentina, Chile, Colombia, Ecuador, Mexico, Peru and Cameroon. In addition, whereas Meta✴ AI previously supported only English, it now also supports French, German, Hindi, Italian, Portuguese and Spanish. It is worth noting, however, that some of the new features are currently available only in certain regions or for specific languages.
A notable addition is the Imagine me feature, which relies on the generative AI model Imagine Yourself, TechCrunch reports. The model creates images from a user's photo and a text prompt placing them in the requested context: given a prompt such as "Imagine me as a surfer" or "Imagine me on the beach," the AI generates the corresponding image. The feature is available in beta and is activated by typing the phrase "Imagine me."
Meta✴ AI will also receive new image editing tools in the near future. Users will be able to add, remove and change objects in images using text prompts, and starting next month the developers promise an "Edit with AI" button that opens additional fine-tuning options. Later, new shortcuts will make it possible to quickly share AI-generated images to feeds, stories and comments in Meta✴ apps.
As a reminder, Meta✴ AI launched in September 2023. The service was originally based on the Llama 2 large language model and lets users look up information, generate text, translate between languages and perform other tasks using artificial intelligence.