We’re committed to building and deploying AI responsibly and helping others do the same, which is why we’re sharing details about our approach in this inaugural report.
Today we’ve released our first Responsible AI Transparency Report. This inaugural annual report provides insight into how we responsibly build and release generative AI applications, support our customers’ responsible AI journeys, and learn, evolve, and grow as a responsible AI community. It illustrates the governance we believe is essential and serves as a supplement to the system-level transparency that we recognize is important too. We welcome dialogue around the report and on our ongoing efforts to promote transparency in our AI governance processes.
Your AI moderation systems need work
From a CIO's perspective, accidental or intentional leaks of confidential company documents from shared drives must be prevented from being loaded into generative AI models, where they could become public training data. Forwarding attachments to personal email has been tracked for IP theft for decades now, and USB port logs are monitored for thumb-drive usage and downloads. There should be a similarly simple and effective network policy control that stops corporate documents from contaminating public generative AI models, and the design needs to sit at the productivity-tool level. Also, if it doesn't already, I assume O365 will at some point support Llama and Gemini models alongside the MS Copilot/ChatGPT models, and who knows how many more models will show up for enterprise clients to use.
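The egress control described above could be sketched as a simple policy check at the network or proxy layer. This is a minimal illustrative sketch, not any real product's API: the domain list, the `"confidential"` label, and the `is_upload_allowed` function are all hypothetical assumptions for the example.

```python
# Hypothetical sketch of a network-egress policy check: block uploads of
# documents labeled confidential when the destination is a known public
# GenAI endpoint. The domain list and label scheme are illustrative only.

BLOCKED_GENAI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "api.openai.com",
}

def is_upload_allowed(destination_host: str, document_labels: set) -> bool:
    """Deny the upload only when a confidential document targets a
    public GenAI endpoint; all other traffic passes through."""
    if destination_host in BLOCKED_GENAI_DOMAINS and "confidential" in document_labels:
        return False
    return True

# A confidential file headed to a public chatbot is blocked, while the
# same file sent to an internal service, or a non-sensitive file sent
# anywhere, is allowed.
```

In practice this kind of control would be enforced by a DLP or secure web gateway product using sensitivity labels rather than a hand-maintained domain list, but the core decision logic is the same shape.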
Interesting quote on the values of Responsible AI - “…transparency, accountability, fairness, inclusiveness, reliability and safety, and privacy and security…”
It's commendable to see Microsoft taking proactive steps towards responsible AI deployment. Transparency and accountability are key in building trust and ensuring ethical use of AI.
Best of luck!
As a consumer of Microsoft's responsible AI work, I feel assured that AI assets released by Microsoft follow responsible AI rigor. It has been a learning experience to understand everything that goes into making AI assets secure for all consumers.
Kudos for the development! Just a thought: Technology continually advances for the betterment of humanity, with AI often perceived as the next evolutionary step. However, it's crucial to ensure that AI complements human capabilities rather than replacing them, thereby preventing potential adverse outcomes.