OECD Washington Center’s Post

OECD Washington Center reposted this

OECD.AI

Today at the #aisafetysummit, OECD DSG Ulrik Vestergaard Knudsen shared ideas from the OECD Expert Group on AI Futures about approaches to mitigating risks should humans "lose control" of AI systems. Five key priorities surfaced:

1. Create liability rules for AI-caused harms. Right now, legal uncertainty makes some actors hesitant to adopt AI while failing to provide the right incentives to ensure systems are designed and deployed safely.

2. Increase investment in R&D on AI ethics and safety, dangerous-capabilities assessments, and explainability. The last point is especially important: if humans do not understand how these "black box" models work, or the key sources and rationale behind any decision, recommendation, or prediction they make, we cannot ensure that their outputs are aligned with human values and preferences.

3. Control the training and deployment of advanced, powerful AI models. This would help ensure that models are not publicly deployed before rigorous evaluation.

4. Establish oversight and tracking of highly capable AI systems. One example would be to require registration and oversight of large pools of compute. For the time being, training frontier models requires access to enormous amounts of compute, and tracking these resources could reveal who is developing frontier systems and where, even for systems that are not registered.

5. Research and discuss which human preferences and values AI systems should prioritise. This may sound straightforward, but it is very complex.

VISIT THE OFFICIAL WEBSITE: https://lnkd.in/ekrY994Y

Audrey Plonk Karine Perset Celine Caira Luis G. Aranda Jamie Berryhill Orsolya Dobe Jacqueline Lessoff Yuki Yokomori Lucia Russo Noah Oder Fabio Curi M Besher M. John Leo Tarver ⒿⓁⓉ Riccardo Rapparini Rashad Abelson Angélina Gentaz Valéria Silva

#oecd #trustworthyai #elonmusk #aisafety #artificialintelligence #aipolicy #aisystems #oecdai


AI should never be used to predict social outcomes, because it: (1) creates an even greater demand for personal data; (2) drives a massive transfer of power and control from domain experts to tech companies; (3) makes decisions harder to explain, undermining transparency; and (4) only needs to create a veneer of intelligence, which is not good enough for predicting social outcomes. (Source: @random_walker)

Jordan Panayotov

Founder | Inventor | Creator | Strategic Advisor | SDGs, CSR, ESG, Sustainability, Impact Assessment, Artificial Intelligence (AI) for Good

8mo

Ulrik Vestergaard Knudsen A few points I'd like to make:

1. AI, like any other technology, is neither 'good' nor 'bad'. The people who use technologies have good or bad intentions.

2. All policies, programs, projects, investments and business activities have social impacts, which ultimately are impacts on health and wellbeing. AI just magnifies these impacts.

3. Given the exponential growth of AI, we can expect nothing but significant growth in the magnitude of these impacts.

4. Apparently, everyone excitedly talks about 'using AI for Good', 'making the world a better place', and ensuring that AI is 'safe and fair'.

5. However, nobody explains: What is good? Good for whom? How do we judge fairness? How do we make the world a better place? How do we make sure that AI has humans' best interests in mind?

6. Given all the points above, regulation of AI at the applications layer is not fit for purpose to ensure 'safe and fair' use for good, because: ... (I'll continue in a reply to this comment due to the word limit)

Marina Yastreboff

GAICD, MAICD | 2023 Asia Pacific Community Leadership Award (Innovation Sector) | Technology Transfer & Innovation | Legal and Compliance Professional

8mo

Thank you for sharing your insights with us Ulrik Vestergaard Knudsen, we look forward to learning more from the summit and to how we can contribute to the good work from the Australasian region. AUSCL Australasian Society for Computers + Law Audrey Plonk Karine Perset Celine Caira Luis G. Aranda Jamie Berryhill Orsolya Dobe Jacqueline Lessoff Yuki Yokomori Lucia Russo Noah Oder Fabio Curi M Besher M. John Leo Tarver ⒿⓁⓉ Riccardo Rapparini Rashad Abelson Angélina Gentaz Valéria Silva, OpenAI #oecd #trustworthyai #elonmusk #aisafety #artificialintelligence #aipolicy #aisystems #oecdai

Chris Marsden

Professor of Artificial Intelligence (AI), Technology, and the Law at Monash University

8mo

Useful checklist. Hasn't all of this now been covered by the White House EO and associated programmes (3-5) and the EU AI Act (1-2)?

Rufo Guerreschi

Towards a global constituent assembly for AI and digital communications

8mo

How can mitigating loss-of-control risk ever be even remotely pursued without some new global organization painstakingly enforcing certain bans worldwide, as was done for nuclear weapons? What are we talking about?

Kaare N.

#Public policy #Innovation #Sustainability #Leadership

8mo

Super important, let’s not "lose control" to the dude next to Ulrik Vestergaard Knudsen

Steve MacFeely

Director of Data and Analytics, WHO

7mo

No mention of data?!
