Without deliberate intervention, the well-known biases that exist within #AI systems can perpetuate major social inequities in healthcare, home ownership, law enforcement, hiring, and more. But by understanding and recognizing AI’s inclination toward bias, leaders can develop tools and processes to advance value-neutral data and mitigate societal harm. As this article from the World Economic Forum points out, such solutions are much less complex than attempting to solve the problem of inherent bias in human minds. https://lnkd.in/gA9tBKBx
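One concrete way teams operationalize the kind of bias checks described above (this sketch is illustrative, not from the linked article) is to audit decision rates across demographic groups. The function and data below are hypothetical; the metric shown is the widely used "demographic parity difference":

```python
# Illustrative sketch (hypothetical data): compare the rate of favorable
# decisions across demographic groups. A large gap flags potential bias
# worth investigating before deployment.

def demographic_parity_difference(decisions, groups):
    """Return the max difference in positive-decision rate between groups.

    decisions: list of 0/1 outcomes (1 = favorable, e.g. loan approved)
    groups:    list of group labels, parallel to decisions
    """
    counts = {}
    for d, g in zip(decisions, groups):
        total, positive = counts.get(g, (0, 0))
        counts[g] = (total + 1, positive + d)
    rates = {g: pos / tot for g, (tot, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: approvals skew toward group "A" (3/4 vs 1/4).
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(round(gap, 2))  # prints 0.5; values near 0 suggest parity
```

A check like this is simple enough to run on any logged decision data, which is part of why auditing AI systems can be more tractable than auditing human judgment.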
Sasha Braude’s Post
-
Independent GenAI Consultant | Helping Organizations Lead with GenAI | Strategic Planning, Use Case Identification, Project Management, Change Management, and Data Readiness
Unpacking the New Executive Order on AI - Part 3: Advancing Equity and Civil Rights

This is the third post in a multi-part series in which I am unpacking each section of the recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, for those unfamiliar with it or for those who appreciate a summarized version.

Key takeaways from Section Three - Advancing Equity and Civil Rights:
1. Guidance for Fair AI Use - Establish guidelines to prevent AI from promoting discrimination by landlords, federal benefits programs, and federal contractors.
2. Tackling Algorithmic Discrimination - Foster best practices to curb civil rights violations stemming from algorithmic biases.
3. Fairness in Criminal Justice - Formulate guidelines to ensure fair AI applications within the criminal justice system.

Next, I will be unpacking Section 4 - Standing Up for Consumers, Patients, and Students.

#ArtificialIntelligence #AIEthics #CivilRights #JusticeSystem #AlgorithmicFairness #TechPolicy #ExecutiveOrder #missionimpossible #GenAI #DataManagement
-
Chief Commissioner - Data and Marketing Commission (UK) | Specialist Advisor - Law At Work (Channel Islands) | Atadgen (Jersey) | Trustee - Jersey Hospice Care
Worth reading this letter and noting the long list of signatories. Blind spots in this area are very real and lead to intended and unintended consequences for the whole of society. The potential negative consequences (often affecting the most vulnerable) may not be intended, but when you only involve elite tech leaders and policy-makers in decision making, they are entirely predictable. It is interesting to note that the US has sought to address this concern by running a year-long project to talk with and hear from a wide range of individuals, groups, and communities on the issue of AI (https://lnkd.in/eJdjGAXk). Who is not in the room is as important as (perhaps even more important than) who is.
Open Letter to the UK Prime Minister
ai-summit-open-letter.info
-
How will President Biden's Executive Order on AI actually help with public safety, discrimination, equity and fairness in development, and job displacement? We will review the Executive Order in more detail this week. Stay tuned.
-
The law of averages

When we seek the average, which is what happens in consensus-ruled organisations, there is a downward trajectory of intelligence and decisions. This is acceptable if average is desired. Average removes the laggards and the edge dwellers.

If we want to be exceptional, if we long for excellence, if we want to be a stand for something, it cannot be found in the average.

AI works on the law of averages. We are entering a period where we will be overcome by average.

If you seek the exceptional, if you want to be a change-maker and new-world creator, then be prepared to take risks, rock the boat, push the envelope, and not appeal to everyone. You cannot have both.

https://lnkd.in/g_ZV82cu

#syntropicworld #syntropicenterprise #beautyofbeginnings
-
Gordon McKay Professor of Computer Science at Harvard John A. Paulson School of Engineering and Applied Sciences
I am curious if colleagues working on #AIpolicy have thoughts about this work. More and more jurisdictions have policies saying that people who receive negative decisions made by or with the aid of algorithms should have a right to an "appropriate grievance redressal mechanism," but the details are left unspecified.

Naveena Karusala has just completed a study of (currently algorithm-free) application and contestation processes in the context of housing/land benefits in the US and India. The goal is to inform future technology design. One finding is the key role of accompaniment (as defined by Paul Farmer): "We find that transparency of information and processes is insufficient to go through with contestation; accompaniment helps applicants logistically and emotionally navigate public services and contestation, influence other stakeholders to achieve better outcomes, and escalate and coordinate collective forms of contestation."

Naveena’s work suggests that rather than focusing on crafting explanations of algorithmic decisions, #AI could be used throughout the process to help ensure that people who are eligible for benefits correctly receive them: identifying individuals who may not apply on their own, helping prepare and contextualize information to make applications relevant and informative, navigating contestation strategies, and more.

This work reminds me of a conversation with Joanna Bryson a few years back, when I was looking for examples of how people go about contesting algorithmic decisions: how do they process the decisions, how do they decide that contestation is the way to go, and how do they navigate the contestation processes? At the time, Joanna and some of her colleagues shared examples of high-profile contestations, but I wanted to understand whether we are designing processes such that contestation is an option for anyone other than superheroes.
Understanding Contestability on the Margins: Implications for the Design of Algorithmic Decision-making in Public Services
eecs.harvard.edu
-
A lot of AI conversations are very tactical at the moment, but what does that really mean for enterprises and regulated organizations? My dear friend Kristina Podnar has you covered. As a warmup for the event below, listen to a recent podcast on navigating the AI landscape https://lnkd.in/gqvAUp4x
Can you avoid the AI pitfalls and hedge against the unknowns, especially as laws and regulations unfold in real time? Learn from Kristina Podnar at #cmskickoff24 in Florida in January as she'll talk about how AI changes everything when it comes to content creation. Kristina is the author of "The Power of Digital Policy" and we're very happy to have her with us for the first time in person since the Boye Philadelphia 12 conference!
-
I’ve taken a deep dive into the complex world of ethical Artificial Intelligence use in health and social care. In this video I explore 4 key approaches you could implement immediately to ensure that you are using AI ethically in health and social care settings. After watching this short video, let me know your thoughts. #EthicalAI #Healthcare #SocialCare https://lnkd.in/ePhBstsr
Ethical AI in Health and Social Care: 4 Key Approaches
https://www.youtube.com/
-
👀 If you're interested in the future of #AI and #tech regulation, you won't want to miss this interview with Kara Swisher on CNBC. She discusses where we're headed with AI, the need for regulatory guardrails, and the positive impact on healthcare. Despite policymakers playing catch-up, regulation is on the horizon, and it will impact adoption and innovation across healthcare. Check it out here: https://lnkd.in/eN48uY-M #airegulation
CNBC's Fast Money (@CNBCFastMoney) on X
twitter.com
-
In our fast-paced, AI-driven world, trust has become the cornerstone of our interconnected lives. As Reggie Townsend insightfully points out, trust is the key to nurturing meaningful relationships and ensuring the smooth operation of civil societies. He emphasizes, "Trust in AI must be established right from the get-go, even before the first line of code is written." http://2.sas.com/6042w2Kpi #ArtificialIntelligence #ResponsibleAI #TrustworthyAI
-
Some clear thoughts from Kriti Sharma about AI regulation and the recent EU legal framework. No doubt that the right guardrails will open possibilities for healthcare systems and patient access to care. Thanks for setting the tone, Kriti! #biopharma #ai
This week, I joined Ian King on the Ian King Show on Sky News to discuss AI regulation. I championed the need for industry and government to converge to put in place a framework that mitigates risks while also unlocking the opportunities AI offers in a safe and transparent way. At Thomson Reuters, we firmly believe that having the right guardrails in place will ensure we reach communities with the benefits of AI, such as facilitating access to justice and driving financial inclusion. I also asked the Prime Minister for a dedicated initiative on skills redesign in the UK. Watch more here! https://lnkd.in/erZ7HvYh
-