July 18, 2023
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
Written by
Louis Martin
Kevin Stone
Peter Albert
Amjad Almahairi
Yasmine Babaei
Nikolay Bashlykov
Soumya Batra
Praj Bhargava
Dan Bikel
Lukas Blecher
Moya Chen
Guillem Cucurull
David Esiobu
Jude Fernandes
Jeremy Fu
Wenyin Fu
Brian Fuller
Cynthia Gao
Vedanuj Goswami
Naman Goyal
Anthony Hartshorn
Saghar Hosseini
Rui Hou
Hakan Inan
Marcin Kardas
Viktor Kerkez
Isabel Kloumann
Artem Korenev
Punit Singh Koura
Marie-Anne Lachaux
Thibaut Lavril
Jenya Lee
Diana Liskovich
Yinghai Lu
Xavier Martinet
Todor Mihaylov
Igor Molybog
Yixin Nie
Andrew Poulton
Jeremy Reizenstein
Rashi Rungta
Kalyan Saladi
Alan Schelten
Ruan Silva
Ranjan Subramanian
Xiaoqing Ellen Tan
Binh Tang
Ross Taylor
Andrew Kuan
Puxin Xu
Zheng Yan
Iliyan Zarov
Yuchen Zhang
Melanie Kambadur
Sharan Narang
Aurelien Rodriguez
Robert Stojnic
Thomas Scialom
Publisher
arXiv