What’s the Difference Between Meta’s Llama 2 and ChatGPT?

Meta, the company that owns Facebook and Instagram, has released the latest iteration of its large language model, dubbed Llama 2. But how does the new tool differ from OpenAI’s wildly popular ChatGPT?

A Meta sign in Menlo Park, Calif., on Oct. 29, 2021. (Shutterstock)
(TNS) — Meta released Llama 2, the latest version of its AI model, on Tuesday, betting that its open-source approach will attract users across tech, academia and beyond.

"Open source drives innovation because it enables many more developers to build with new technology," Meta CEO Mark Zuckerberg wrote in a Facebook post Tuesday. "It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues."

Users of the large language model, an algorithm trained on a huge amount of data to respond to user prompts, will be able to download it directly from the company or access it through cloud providers such as Microsoft, which has partnered with Meta to distribute the tool.

Making the entire model open source, meaning developers can inspect its code, stands in contrast to how some other AI companies operate.

San Francisco's OpenAI, maker of the ChatGPT chatbot software, and Google, which released its Bard AI earlier this year, have opted to keep the details of how those products work a guarded secret.

Meta said its Llama 2 software will also be available free of charge, in contrast to Google's and OpenAI's products.

OpenAI's products, in particular, have seen huge uptake by companies and the public alike since they began rolling out late last year. Salesforce CEO Marc Benioff said earlier this year that the company had already begun incorporating ChatGPT software into its products.

And while Meta has ground to make up in the AI race, making its model open source could let a wider array of startups and companies integrate the technology into their products without having to pay. It's a notable advantage at a time when venture capital funding can be in short supply.

Safety is a constant concern with AI models that have been shown to "hallucinate" — at times making up false information and passing it off as fact.

Meta said the new model had been safety tested, including by third parties that probed its limits with adversarial prompts. The company also pointed to a policy covering what the software can and cannot be used for, creating the possibility that, open source or not, access could be revoked from a user who puts the tech to ill purposes.

©2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.