Our inaugural Responsible AI transparency report


If you were to come across an image of two farmers using an AI system in a corn field, it might not be immediately evident that this image was generated with AI.  

But if this image were created with Microsoft Designer, it would have Content Credentials embedded in the metadata, labeling the image as AI-generated and noting the exact date and time of creation. Because these Content Credentials are cryptographically signed and sealed, they’re also tamper-evident. The provenance, or origin, of Designer-created images can be checked on multiple websites, including Microsoft’s Content Integrity Check tool.  
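
The tamper-evidence works because the provenance metadata is covered by a digital signature: if either the metadata or the signature is altered, verification fails. The sketch below is a minimal illustration of that idea in Python, using the third-party cryptography package and an Ed25519 key pair. It is not the C2PA Content Credentials format or Microsoft's implementation, and the field names are invented for illustration.

```python
# Minimal sketch of signature-based tamper evidence for provenance metadata.
# Illustrative only: not the C2PA / Content Credentials format.
# Requires the third-party "cryptography" package (pip install cryptography).
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical provenance record a generator might attach to an image.
credentials = {
    "generator": "example-image-tool",       # made-up name
    "ai_generated": True,
    "created_utc": "2024-05-01T12:00:00Z",
}

# The issuer signs the serialized metadata with its private key.
signing_key = Ed25519PrivateKey.generate()
payload = json.dumps(credentials, sort_keys=True).encode("utf-8")
signature = signing_key.sign(payload)

# A verifier holding only the issuer's public key can detect any change.
public_key = signing_key.public_key()

def is_untampered(metadata: dict, sig: bytes) -> bool:
    """Return True only if the metadata matches the originally signed payload."""
    try:
        public_key.verify(sig, json.dumps(metadata, sort_keys=True).encode("utf-8"))
        return True
    except InvalidSignature:
        return False

print(is_untampered(credentials, signature))   # True
tampered = {**credentials, "ai_generated": False}
print(is_untampered(tampered, signature))      # False: the edit is detected
```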

This is just one example of how Microsoft is approaching the development and deployment of AI responsibly, with our broader approach and more product examples included in our recently released Responsible AI Transparency Report. We’re committed to promoting transparency in AI across public and private sectors, as well as contributing to the growing body of public knowledge.  

To advance AI responsibly, we need transparency in many different forms – including transparency about the technology underpinning AI systems, who’s involved in making decisions, and where AI is deployed. Our Responsible AI Transparency Report is a step toward more clarity around our AI governance processes and focuses specifically on how we build generative AI systems responsibly, how we make decisions about releasing these systems, and how we learn and thus evolve our Responsible AI program.  

In the report, we describe our responsible AI work using the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST). Building on our cross-company governance to establish roles and responsibilities, we map potential risks of generative AI applications through AI impact assessments, privacy and security reviews, and red teaming. We establish metrics to measure risks such as groundedness and content risks, as well as ways to evaluate the effectiveness of potential mitigations. Then we manage these identified risks, and any new risks that arise, through platform- and application-level mitigations like content filters, appropriate human oversight, and ongoing monitoring. 
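
As a rough illustration of how mapping, measuring, and managing can fit together in practice, the sketch below models a hypothetical risk register: each mapped risk carries a metric, a threshold, and a mitigation, and a release gate checks measured scores against those thresholds. The structure, names, thresholds, and scores are all invented for illustration; this is not Microsoft's internal tooling.

```python
# Hypothetical sketch of a map -> measure -> manage loop as a risk register.
# All names, thresholds, and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str          # mapped risk (e.g., from an impact assessment or red teaming)
    metric: str        # how the risk is measured
    threshold: float   # minimum acceptable score before release
    mitigation: str    # platform- or application-level control

RISK_REGISTER = [
    Risk("ungrounded answers", "groundedness score", 0.95,
         "groundedness filter + citations"),
    Risk("harmful content", "content-safety pass rate", 0.99,
         "content filters + human oversight"),
]

def release_gate(measured: dict[str, float]) -> bool:
    """Manage step: block release while any measured risk is below its threshold."""
    ok = True
    for risk in RISK_REGISTER:
        score = measured.get(risk.metric, 0.0)
        if score < risk.threshold:
            print(f"BLOCK: {risk.name} at {score:.2f} < {risk.threshold} "
                  f"-> apply mitigation: {risk.mitigation}")
            ok = False
    return ok

# Example measurement run with made-up scores.
print(release_gate({"groundedness score": 0.88, "content-safety pass rate": 0.995}))
```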

Map. Measure. Manage.

One example of this process in action involves Copilot Studio, which integrates generative AI into Microsoft 365 to enable customers without programming skills to build their own copilots. During the map, measure, manage process, our engineering team identified the risk of these copilots giving “ungrounded” answers in response to prompts, meaning the copilot’s outputs contained information that wasn’t present in the input sources. Our engineering team took steps to manage this risk, including improving groundedness filtering and introducing citations. These approaches helped improve the accuracy of copilot responses to topically appropriate questions from 88.6% to 95.7%.   
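
A crude way to picture a groundedness measurement is to test whether each sentence of a generated answer is supported by the retrieved source passages, and to report the fraction that is. The sketch below does this with simple word-overlap scoring; real groundedness evaluation, including whatever Copilot Studio uses, is far more sophisticated, and the texts and threshold here are invented.

```python
# Toy groundedness check: flag answer sentences with little overlap with the sources.
# Word overlap is a stand-in for real entailment/grounding models; illustrative only.
import re

def sentences(text: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def overlap(sentence: str, source: str) -> float:
    sent_words = set(re.findall(r"\w+", sentence.lower()))
    src_words = set(re.findall(r"\w+", source.lower()))
    return len(sent_words & src_words) / max(len(sent_words), 1)

def groundedness(answer: str, sources: list[str], min_overlap: float = 0.6) -> float:
    """Fraction of answer sentences supported by at least one source passage."""
    sents = sentences(answer)
    supported = sum(
        1 for s in sents if any(overlap(s, src) >= min_overlap for src in sources)
    )
    return supported / max(len(sents), 1)

sources = ["Refunds are available within 30 days of purchase with a receipt."]
answer = "Refunds are available within 30 days with a receipt. Shipping is always free."
print(f"groundedness: {groundedness(answer, sources):.2f}")  # second sentence is ungrounded
```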

Some high-risk AI systems require a level of attention and oversight beyond what’s laid out here, which is why we established our Sensitive Uses team in 2017. As our transparency report explores, the Sensitive Uses team has received over 900 submissions since 2019, including 300 in 2023 alone.  

In the past few years, we’ve also released 30 responsible AI tools that include more than 100 features to help our customers ensure that their own AI systems are designed, developed, and deployed responsibly. We use metrics from these tools to inform our own decision-making about how effective our risk mitigations are and whether a product is ready for launch. 

There is no finish line for responsible AI. 

We’ve learned over the years that a human-centered approach to building AI systems results in not just a more responsible product, but a better product overall. At Microsoft, we have a team of more than 400 people working on responsible AI, half of whom do so full-time. Distributing Responsible AI Champions throughout the company helps ensure that the job of advancing AI responsibly doesn’t fall solely to a single team, but rather is a function of every team across the organization.   

People remain at the center of our responsible AI progress. 

While technology sector-led initiatives are an important force in advancing responsible AI, we know that we can’t do it on our own. Governments play an important role in charting the path forward on AI.  Microsoft has long said that we need laws regulating AI that protect people's fundamental rights while allowing for positive uses of the technology to continue. Governments should continue to convene stakeholders to develop best practices and contribute to the development of standards. 

We also understand that regulation can and should be context-specific: when AI systems conceived in advanced economies are used in developing ones, these systems may not work as intended or may even cause harm. In 2023, we worked with more than 50 internal and external groups to better understand how AI innovation may impact regulators and individuals in developing countries. We remain committed to these and other efforts to ensure that we maximize AI’s potential societal benefits while minimizing potential harms.

AI can be a powerful tool that transforms how we live and work. Today, radiologists are using AI to detect breast cancer up to four years before it develops, and students are using AI to tailor stories to their reading level. What happens tomorrow is our collective choice. By designing systems with people in mind, working in collaboration with a broad range of stakeholders, and iterating as we go, we can advance AI responsibly, for the benefit of us all.

Read the full report here: Responsible AI Transparency Report.
