At this year's Mozilla Festival, David Manzano-Macho, PhD, our VP of Engineering, will participate as a panelist in Mozilla's Data Future Labs Showcase on Thursday, 13th of June. The Data Future Labs (DFL) Showcase highlights local builders around the world developing data tools and platforms that prioritize the needs and interests of their communities. This edition will be focused on shifting the narrative around what’s possible in data stewardship and Trustworthy AI in Europe and will include organizations like Imperial College London, Spawning, First Languages AI Reality, and the Data Provenance Initiative. #MozFest
Mozilla.ai
Technology, Information and Internet
Democratizing open-source AI to solve real user problems
About us
Our mission is to build, commercialize, and open source components and tools that make it easy for developers and users to develop AI agents to solve real-world use cases.
- Website
https://mozilla.ai/
- Industry
- Technology, Information and Internet
- Company size
- 11-50 employees
- Type
- Privately Held
- Founded
- 2023
- Specialties
- Artificial Intelligence, Open Source, and Machine Learning
Updates
-
Next week, Julie V. Belião, Senior Director of Product Innovation at Mozilla.ai, will speak on the panel "No English Please: How to move towards a truly multilingual AI" at the TAUS Massively Multilingual AI Conference 2024 on Wednesday, 19th of June. The panel will also feature Kareem Darwish from aiXplain, Christian Federmann from Microsoft, Gina Moape from Mozilla's Common Voice initiative, and Lucie Gianola from France's Ministère de la Culture.
English is the dominant language in tech and AI research: more than 90% of the data Large Language Models are trained on is English, so the models inevitably look at the world through English filters. The session will focus on how to overcome this language bias and move towards a more democratic and inclusive world of AI.
Thank you Jaap Van Der Meer and Anne-Maj van der Meer for the invitation. https://lnkd.in/gn6MTWc6
-
Perhaps we shouldn't be asking, "What if AI technology falls into the wrong hands?" but instead, "What if AI technology falls into the right hands?" See and hear Mozilla's Mark Surman address this question at this year's edition of re:publica. #TrustworthyAI
re:publica 2024: Mark Surman - Can AI be trustworthy?
https://www.youtube.com/
-
Over the past few months at Mozilla.ai, we engaged with a number of organizations to learn how they are using language models in practice. We spoke with 35 organizations across sectors including finance, government, startups, and large enterprises; our interviewees ranged from ML engineers to CTOs, capturing a diverse range of perspectives.
Our interview summary notes for the 35 conversations amounted to 18,481 words (approximately 24,600 tokens), almost the length of a novella. To avoid confirmation bias and subjective interpretation, we decided to leverage language models for a more objective analysis of the data: by providing the models with the complete set of notes, we aimed to uncover patterns and trends without our pre-existing notions and biases. For this, we used Llama-3-8B-Instruct-Gradient-1048k by Meta and Gradient; Phi-3-medium-128k-instruct by Microsoft; and Qwen1.5-7B-Chat by Alibaba Cloud.
To read the GenAI trends across 35 organizations, check out our latest learnings by Stefan French! #machinelearning #LLM #GenAI
Uncovering GenAI Trends: Using Local Language Models to Explore 35 Organizations
blog.mozilla.ai
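The word-to-token arithmetic in the post above (18,481 words ≈ 24,600 tokens) and the choice of long-context models can be sketched with a rough heuristic. A minimal Python illustration, assuming the common approximation of ~0.75 English words per token; the context-window figures for the first two models come from their names, while the Qwen1.5-7B-Chat window is an assumption, not stated in the post:

```python
# Heuristic: English text averages ~0.75 words per token,
# so tokens ≈ words / 0.75. An estimate, not a real tokenizer.
WORDS_PER_TOKEN = 0.75

def estimate_tokens(word_count: int) -> int:
    return round(word_count / WORDS_PER_TOKEN)

# Context windows (tokens) of the models mentioned in the post;
# the Qwen figure is an assumption for illustration.
CONTEXT_WINDOWS = {
    "Llama-3-8B-Instruct-Gradient-1048k": 1_048_000,
    "Phi-3-medium-128k-instruct": 128_000,
    "Qwen1.5-7B-Chat": 32_000,  # assumed default window
}

notes_tokens = estimate_tokens(18_481)  # interview notes from the post

for model, window in CONTEXT_WINDOWS.items():
    verdict = "fits in one pass" if notes_tokens < window else "needs chunking"
    print(f"{model}: {verdict} ({notes_tokens:,}/{window:,} tokens)")
```

Under this heuristic the full set of notes (~24,600 tokens) fits comfortably inside each window, which is what makes a single-pass "analyze all 35 conversations at once" prompt feasible.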
-
#Hiring: Take a look at the three Lead roles on our ENG team:
- Machine Learning Engineer Lead
- Platform Engineering Tech Lead
- Engineering Lead of Professional Services
We're looking for mission-driven individuals who have experience leading technical teams. https://lnkd.in/dZHm_wiB #machinelearning #AIJobs #techjobs
Mozilla.ai Open Lead Roles
linkedin.com
-
Picking a Summarization Model: Abstractive or Extractive?
Finding a good model for summarization is a daunting task, as the typical intuition that larger-parameter models generally perform better goes out the window. For summarization, the input is likely to be long, so finding models that deal efficiently with longer contexts is of paramount importance. In our business case, creating summaries of conversation threads (much as you might see in Slack or an email chain), the models need to extract key information from those threads while accepting a context window large enough to capture the entire conversation history.
For our use cases, where the final interface is natural language, we found abstractive summaries (which identify important sections in the text and generate highlights) far more valuable than extractive ones (which pick a subset of sentences and stitch them together). We want summary results that don't need to be interpreted from the often incoherent text snippets extractive summaries produce.
How do we evaluate a "good" summary? The most difficult part is identifying evaluation metrics useful for judging the quality of summaries. Evaluation is a broad and complex topic, and no single metric can answer the question, "Is this model a good abstractive summarizer?" For our early exploration, we stuck to tried-and-true metrics, limiting our scope to metrics that can be used with ground truth. Ground truth for summarization consists of documents that are either manually summarized or bootstrapped from model-generated summaries approved by humans. These metrics include: ROUGE, BLEU, METEOR, and BERTScore.
Read more: https://lnkd.in/ewATFbzN #LLM #machinelearning #textsummarization
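The ground-truth metrics named above are normally computed with established libraries, but the core idea behind ROUGE-1 (unigram overlap between a candidate summary and a reference) can be sketched in a few lines of plain Python. This is a simplified illustration with whitespace tokenization and no stemming, not the official implementation, and the example sentences are made up:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate summary and a reference.

    Simplified ROUGE-1: lowercase whitespace tokenization and clipped
    unigram counts; the official metric adds stemming and more.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "the meeting agreed to ship the release on friday"
candidate = "the team agreed to ship on friday"
print(f"ROUGE-1 F1: {rouge1_f1(candidate, reference):.3f}")  # 0.750
```

Recall-oriented variants of the same count answer "how much of the reference did the summary cover," which is why ROUGE is a natural first metric once human-approved reference summaries exist.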
-
What does "open" mean in the AI era? I'm pleased to share our latest collaborative paper with the Columbia Institute of Global Politics, where we brought together 40+ top AI scholars and practitioners to create a framework for understanding and advancing openness in AI. 🌐
Our new paper presents a descriptive framework to understand how each component of the foundation model stack contributes to openness. 📚
Historically, open source technology has been a cornerstone for innovations ranging from art creation to vaccine design to app development used globally. Openness will be just as crucial for the future of AI development. 💡
Website: https://lnkd.in/esb86Xgg
Paper: https://lnkd.in/eGeb2Uzt
Adrien Basdevant, Ayah Bdeir, Camille François, Yann LeCun, Kevin Bankston, Brian Behlendorf, Merouane Debbah, Sayash Kapoor, Helen King-Turvey, Nathan Lambert, Stefano Maffulli, Nik Marda, Justine Tunney, Govind Shivkumar, Deborah Raji, Aviya Skowron, Amba Kak, Irene Solaiman, Stella Biderman, Alix Dunn, Martin Tisné, Mozilla, Mozilla.ai
-
Mozilla.ai reposted this
📢 At the Élysée gathering of France's top AI talent around President Emmanuel Macron, Marina Ferrari, Secretary of State for Digital Affairs, announced the launch of #CommunsDémocratiques (Democratic Commons): the first global #AI programme in the service of #démocratie. --- Led by France with Make.org, Sciences Po, Sorbonne Université and the CNRS - Centre national de la recherche scientifique, the initiative is carried out with the support of world-renowned international partners: Hugging Face, The Aspen Institute, Mozilla.ai, Project Liberty's Institute and GENCI. --- Marina Ferrari, Secretary of State for Digital Affairs: "France stands for an approach to AI that is open, fair and responsible. The "Democratic Commons" project by Make.org with Sciences Po, Sorbonne Université and the CNRS - Centre national de la recherche scientifique will provide solutions for evaluating and correcting bias in AI systems, ensuring their responsible use in democratic processes." The programme will bring together more than 50 researchers and engineers over two years with a clear goal: harnessing the power of #AI in the service of democratic resilience. --- Make.org Axel Dauchez, Alicia Combaz, David Mas, Alexis Prokopiev, Solène Lécuyer, CNRS, Antoine Petit, Benjamin Piwowarski, Sciences Po, Jean-Philippe Cointet, Martial Foucault, Sorbonne Université, François Yvon, Raja Chatila, Gaël Lejeune, GENCI, Philippe Segers, Hugging Face, Yacine Jernite, Mozilla.ai, Moez Draief, Project Liberty's Institute, Paul Fehlinger, The Aspen Institute, B Cavello, Yale University, Hélène Landemore, Berkman Klein Center for Internet & Society at Harvard University, Inria, Djamé Seddah, OECD.AI, Karine Perset, Columbia University, Asma MHALLA, Omidyar Network, Michelle Barsa, Bpifrance, Paul-François Fournier, Vincent Rapp, Marina Ferrari, Jean-Noël Barrot, Bruno Bonnell, Georges-Etienne Faure, Guillaume Avrin.
-
The paper "Towards a Framework for Openness in Foundation Models" is out now! It surveys existing approaches to defining openness in AI models and systems, and proposes a descriptive framework to understand how each component of the foundation model stack contributes to openness.
The spectacular team of collaborators includes Mozilla.ai's Victor Storchan as well as Adrien Basdevant, Camille François, Kevin Bankston, Ayah Bdeir, Brian Behlendorf, Merouane Debbah, Sayash Kapoor, Yann LeCun, Mark Surman, Helen King-Turvey, Nathan Lambert, Stefano Maffulli, Nik Marda, Govind Shivkumar, and Justine Tunney.
The paper results from Mozilla and the Columbia Institute of Global Politics bringing together over 40 leading scholars and practitioners working on openness and AI for the Columbia Convening. Spanning prominent open source AI startups and companies, nonprofit AI labs, and civil society organizations, these leaders focused on exploring what "open" should mean in the AI era. The resulting framework enables, without prescribing, an analysis of how to unlock specific benefits from AI based on desired model and system attributes. The paper also adds clarity to support further work on this topic, including work to develop stronger safety safeguards for open systems. #opensource #machinelearning
Towards a Framework for Openness in Foundation Models
foundation.mozilla.org
-
#Hiring Alert: We're scaling up our ENG team with three Lead roles:
- Machine Learning Engineer Lead
- Platform Engineering Tech Lead
- Engineering Lead of Professional Services
We're looking for passionate and mission-driven individuals who have experience leading technical teams. Join us on our mission to democratize open-source AI to solve real user problems! Follow us and stay updated. #machinelearning #AIJobs #techjobs
MZAI - Engineering Lead Roles
linkedin.com