Today we’re proud to announce the launch of our #AI Intersections Database.🎉 The database is a live tool that facilitates connections between AI and human rights issues, while providing a space for advocates to forge partnerships and find supporters. Learn more about the project and how you can be featured here. ⤵️ https://mzl.la/4aYT526
Mozilla’s Post
More Relevant Posts
-
Policing and Climate Change | Global Crime Governance | Associate Professor @ ANU RegNet | Associate Editor Policing & Society
The disruptive potential of #generativeAI is undeniable, but we need to ensure our regulators and policymakers understand these technologies and have the knowledge and skills required to govern them effectively. That is a major focus of our ongoing work at the School of Regulation and Global Governance (RegNet), and our new Graduate Certificate and Master of Technology Governance programmes have been developed with this purpose in mind.
Powerful #artificialintelligence is a "sociopath", with the ability to "trash human rights [and] attack democratic societies", writes Federal MP Julian Hill. Which is why we're right to worry about uncontrolled #generativeAI technologies. As he pens in this op-ed, "Governments must act in the public and national interest to establish guardrails and determine how and where to apply both the accelerator and the brake". And fast: https://lnkd.in/gVTGaiNX
Australia needs an AI commission to tame sociopathic technology
https://www.themandarin.com.au
-
AI governance and human rights: Resetting the relationship
This research paper aims to dispel myths about human rights; outline the principal importance of human rights for AI governance; and recommend actions that governments, organizations, companies and individuals can take to ensure that human rights are the foundation for AI governance in future. https://lnkd.in/dXH6BEFg -Posted by OneUp
-
2022 Conflict Coach of the Year | Conflict Management Specialist | Coach | Mediator | Consultant | Trainer
AI may not eliminate systemic bias. Rather, it may scale it! In this episode, Brené Brown and Dr S. Craig Watkins discuss what is known in the AI community as the “alignment problem”: who needs to be at the table to build systems that are aligned with our values as a democratic society? And, when we start unleashing these systems in high-stakes environments like education, healthcare, and criminal justice, what guardrails, policies, and ethical principles do we need to make sure that we’re not scaling injustice? https://lnkd.in/gu7Hr8nE
-
Without deliberate intervention, the well-known biases that exist within #AI systems can perpetuate major social inequities in healthcare, home ownership, law enforcement, hiring, and more. But by understanding and recognizing AI’s inclination toward biases, leaders can develop tools and processes to advance value-neutral data and mitigate societal harm. As this article in the World Economic Forum points out, such solutions are much less complex than attempting to solve the problem of inherent bias in human minds. https://lnkd.in/gA9tBKBx
AI bias may be easier to fix than humanity’s. Here's why
weforum.org
-
*** #20Talks *** Developments in AI and the future of democracy work hand in glove. What is the role of politics and public institutions? Our talk with Nataša Pirc Musar, PhD, President of the Republic of Slovenia, interviewed by Julia Hodder, explores the privacy implications of AI and its impact on democracy, and the global response to its fast adoption around the world. Watch the full video at https://europa.eu/!fjgdVf #EDPSXX
-
I am honoured to have been chosen to talk to you, dear EDPS - European Data Protection Supervisor. Data protection and AI are the topics of today and the future. Continue the good work, Wojciech Wiewiorowski. #dataprotection #artificialintelligence #aiact
-
What are the real human rights risks of artificial intelligence? Head over to our blog post to read more about the concerns the human rights community has about AI, and how we can “build a resilient civil society ecosystem that speaks up for the human rights of individuals before technological advancement.” ✨ https://bit.ly/NTAIrisks
-
Helping organizations adopt GenAI via strategic planning, project management, and ongoing support to stay ahead of their competitors | Partnering with MSPs to enhance their service offerings
Unpacking the New Executive Order on AI - Part 3: Advancing Equity and Civil Rights
This is the third post in a multi-part series in which I am unpacking each section of the recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, for those unfamiliar with it or for those who appreciate a summarized version.
Key takeaways from Section Three - Advancing Equity and Civil Rights:
1. Guidance for Fair AI Use: Establish guidelines to prevent AI from promoting discrimination by landlords, federal benefits programs, and federal contractors.
2. Tackling Algorithmic Discrimination: Foster best practices to curb civil rights violations stemming from algorithmic biases.
3. Fairness in Criminal Justice: Formulate guidelines to ensure fair AI applications within the criminal justice system.
Next, I will be unpacking Section 4 - Standing Up for Consumers, Patients, and Students.
#ArtificialIntelligence #AIEthics #CivilRights #JusticeSystem #AlgorithmicFairness #TechPolicy #ExecutiveOrder #missionimpossible #GenAI #DataManagement
Researcher at The Bridge Initiative | Mentor at Georgetown Pivot Program
Tagging Georgetown CCT and Georgetown University McCourt School of Public Policy to explore possible synergies