AI at Meta’s Post

View organization page for AI at Meta, graphic

802,813 followers

As part of our focus on developing Llama 3 in a responsible way, we created a number of resources to help others use it responsibly as well. This includes new trust and safety tools like CyberSec Eval 2. Paper ➡️ https://go.fb.me/gy1h0d Cybersec Eval 2 expands on its predecessor by measuring an LLM’s susceptibility to prompt injection, automated offensive cybersecurity capabilities, and propensity to abuse a code interpreter, in addition to the existing evaluations for insecure Coding Practices and Cyber Attack Helpfulness.

  • No alternative text description for this image
Jakub Gania

Junior Full Stack Web Developer

1mo

Here is also a collection of many interesting resources about Llama 3. https://www.linkedin.com/feed/update/urn:li:activity:7192699251703873536

Muzammil Behzad, Ph.D.

AI Scientist @ Silo AI | AI/ML/CV Enthusiast | IEEE ICIP 3MT Winner | Valedictorian | Medalist

1mo

The safety-utility tradeoff adds a fascinating dimension as it sheds light on the delicate balance between securing LLMs against malicious prompts and maintaining their usefulness. Would be interesting to see how FRR rolls out.

Incredible progress with CyberSec Eval 2, enhancing Llama 3’s framework not just for performance but for security and ethical integrity as well. This kind of rigorous security evaluation is crucial as AI systems become more integrated into critical infrastructures!

David Sithole

Technology Advisor, Key Account Director, AI-ML & GPU Computing, Cybersecurity Enthusiast

1mo

Wow this is wonderful

Like
Reply
Siddhartha Bhomia

Director, Operational Risk Management- Technology and Cybersecurity at Citi New York City Metropolitan Area

1mo

Insightful!

See more comments

To view or add a comment, sign in

Explore topics