The Tech Coalition will fund new research on generative AI and online child sexual exploitation and abuse (OCSEA) through its Tech Coalition Safe Online Research Fund. The first project to be funded will be research from the University of Kent on the impact of generative AI child sexual abuse material (CSAM) proliferation, focusing on how generative AI CSAM may reshape attitudes, norms, and behaviors among people who engage with CSAM and on how the perpetration and prevention ecosystems may respond. Additional projects will be chosen by the end of the year for funding in 2025. Application details will be announced in the coming months, so stay tuned!

The new funding was announced today at an industry briefing on generative AI with key stakeholders hosted by the Tech Coalition. This was the second briefing of its kind and took place in London with select UK child safety experts, advocates, and members of law enforcement and government. Among them were representatives from the Home Office, Internet Watch Foundation (IWF), Lucy Faithfull Foundation, the National Center for Missing & Exploited Children (NCMEC), and Safe Online, as well as 14 Tech Coalition member companies, including Adobe, Amazon, Bumble Inc., Google, Meta, Microsoft, OpenAI, Roblox, Snap Inc., and TikTok.

These briefings are designed to develop a shared understanding of the potential risks predatory actors pose to children through the misuse of generative AI and the ways companies are currently addressing those threats, as well as to identify and initiate new opportunities for stakeholder collaboration on this issue. Read more here: https://lnkd.in/epR9RYhW
Tech Coalition’s Post
-
So excited to continue our TC Safe Online Research Fund collaboration with Tech Coalition - thank you Sean Litton & Kay Chau for being forward-looking and propelling GenAI to the top of our research agenda around prevention of and response to online child sexual exploitation and abuse (CSEA). It was great to be at the GenAI industry briefing, with so many relevant takeaways for our research agenda:
· We must build on past knowledge on child safety - we are not starting from zero, and that is our key current advantage
· Foundational research is critical - deepen our understanding of the behaviors of users, bad actors, risks to child safety, etc.
· Bringing more visibility and transparency into model development - safety frameworks around foundational training datasets
· Research on how mitigation measures apply across cultural contexts and languages, as well as looking into model alignment
· How GenAI platforms can be used to signpost support, prevention messages, diversion measures, etc.
· The need to bring in knowledge from communities outside and diversify perspectives
· Work collaboratively - researchers, child protection and safety experts, industry, law enforcement, etc.
Thanks to Natalie Shoup on our team for leading this exciting work. Stay tuned for more info on this collaboration.
Tech Coalition | Tech Coalition Announces New Generative AI Research
technologycoalition.org
-
It is clear there are advantages to the use of AI, but there is still so much that is not yet known about these tools. Children using AI are potentially exposing themselves to new risks of harm online, and their lives may be reshaped more fundamentally by these tools in the future. More work is needed to fully understand how children can safely interact with these new technologies, and what strong safeguards should look like. I will continue to raise these issues in the implementation of Ofcom’s Children’s Code under the Online Safety Act regime, and in my engagement with Ministers on tackling child sexual abuse and exploitation in the UK. https://lnkd.in/e-defv8k
-
Safeguarding, PSEA & Child Protection Manager & Equality, Diversity & Inclusion Lead in International contexts
I've just stumbled upon a crucial report from the Internet Watch Foundation (IWF) that sheds light on the alarming use of AI in generating child sexual abuse imagery. This report is a must-read for anyone who's interested in the intersection of AI and our daily lives. Here are some of the key findings: In just one month, a staggering 20,254 AI-generated images were discovered on a dark web forum dedicated to child sexual abuse. Out of this distressing total, the IWF analysts chose to assess the 11,108 images that were deemed most likely to be criminal, leaving 9,146 AI-generated images that either didn't feature children or depicted children in non-criminal contexts. Shockingly, 2,562 of the assessed images were identified as criminal pseudo-photographs, and another 416 were categorized as criminal prohibited images. These findings underscore the urgent need for awareness and action to address the misuse of AI technology in this profoundly disturbing manner. I encourage you to explore the full report for a deeper understanding of the issue: https://lnkd.in/dwtKu9F2 #AI #ChildSafety #internetsafety #Safeguarding #DigitalSafeguarding
How AI is being abused to create child sexual abuse material (CSAM) online
iwf.org.uk
-
I'll reiterate, Trigger warning: child abuse using AI. Thankfully, these harmful things have been removed in this instance, but it's one more reason why we really need to continue pushing for regulation and responsible AI
MBA | CEO | Strategist | Ecologist | Mom | Researcher | Directrice générale | Aula Fellow for AI | AI Safety Community Junior Researcher, Future of Life Institute | Guest Researcher at NTNU: DigiKULT
Trigger warning: child abuse using AI. TL;DR:
- Researchers have found that the LAION open datasets and others used by major popular AI image generators - including at least Google's and Stable Diffusion's - contain links and labels for millions of child abuse images.
- The datasets have been copied and propagated around the world.
- This has been known in some cases at least since 2021.
- There is no technical fix for finding all the images in the data.
- The datasets have been taken down from Hugging Face and elsewhere.
- The article mentions algorithmic disgorgement as a partial potential fix. (There’s currently no mechanism for reliably taking info out of a trained AI system, including with algo disgorgement.)
- Depending on where you are, it is extremely illegal/criminal to even accidentally possess some of the material made available by the dataset, including in the US and in Canada.
- Bad people have already begun exploiting this nightmarish situation.
- Survivors may be re-victimized as past images are used to create new ones.
- Who will be held to account under our laws?
I don’t have answers. I’m feeling deeply shocked. https://lnkd.in/dRWUkg7G
Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
404media.co
-
📈 Since early 2023, cases of perpetrators using generative AI to create child sexual abuse material and exploit children have been increasing. Perpetrators have already exploited ‘open-source’ versions of AI image generators that allow users to produce any images – including illegal ones. 📌In the near term, a range of safety measures are necessary to make existing tools safer. ➡️Tomorrow our Executive Director Iain Drennan is joining the Tech Coalition’s Generative AI Industry Briefing in London to look at generative AI risks and the best ways to address them, as well as to identify and initiate new opportunities for collaboration. #GenAI #safeAI #safetybydesign #sharedgoals
-
Today we convened an industry briefing on the impact of generative AI on online child sexual exploitation and abuse (OCSEA). We brought together key U.S. stakeholders in the ecosystem to develop a shared understanding of the potential risks predatory actors pose to children through generative AI and the ways companies are currently addressing those threats, as well as to identify and initiate new opportunities for stakeholder collaboration. Reps from 27 of our Member companies, including Adobe, Amazon, Discord, Google, Meta, Microsoft, NAVER Z (ZEPETO), Niantic, Inc., OpenAI, Pinterest, Snap Inc., TikTok, VERISIGN, Verizon, VSCO®, Yahoo, and Zoom, joined select child safety experts, advocates, and members of law enforcement.

As generative AI develops and the child safety ecosystem evolves, our Members are building a deeper understanding of the issues and challenges, so they can continue to be proactive in their efforts to reduce risk, incorporate safety by design, and innovate solutions to help keep children safe. Additionally, the tech industry and the stakeholders with whom industry engages to thwart OCSEA will continue to adapt their approaches and systems to address this new threat, as they have with past changes in technology. For this reason, today’s briefing culminated in several new multi-stakeholder efforts, among them:
- Red teaming: With input from the U.S. Department of Justice, we will help companies explore ways to test for and mitigate OCSEA risks.
- Information sharing: We will advance use of the Lantern program to securely share information that supports robust safety evaluations and mitigation methods for generative AI CSAM and related OCSEA incidents.
- Industry classification system: We will review and update the Industry Classification System to address different types of AI-generated OCSEA.
- Reporting: We will work with the National Center for Missing & Exploited Children (NCMEC) to help develop a process to efficiently and effectively refer cybertip reports of AI-generated OCSEA to NCMEC.

Our work to understand the impact of generative AI on OCSEA began earlier this year when we started bringing Members together regularly to identify emerging challenges and share learnings. In addition, together with Thorn, we co-hosted a webinar to convene experts on understanding child safety risks with generative AI, and at the Crimes Against Children Conference we brought together industry to identify and address generative AI challenges. We look forward to continuing to facilitate discussions about OCSEA and the rapidly changing space of generative AI. See the full blog post to learn more: https://lnkd.in/er_9JkQR
Tech Coalition | Tech Coalition Hosts Generative AI Briefing for Key U.S. Stakeholders
technologycoalition.org
-
The fight against the sexual exploitation of children is a continuous one. New technologies present new arenas where this fight must necessarily be brought. Enter Julie Inman-Grant, Australia's eSafety Commissioner, whose #SafetyByDesign approach to the development and use of generative AI is an important part of the discussion on the potential harm that generative AI can cause in the area of child sexual exploitation. By way of example: "The inability to distinguish between children who need to be rescued and synthetic versions of this horrific material could complicate child abuse investigations by making it impossible for victim identification experts to distinguish real from fake." AI is here to stay. How we implement it, and which concerns we address while doing so, is where we're at now. #stopchildtrafficking https://lnkd.in/ghvx_-is
Curbing the Potential Harms of Generative AI
linkedin.com
-
According to a new study from the Stanford Internet Observatory, LAION, an open-source dataset used by many companies including Google, contains thousands of CSAM images. “If you have downloaded that full dataset for whatever purpose, for training a model for research purposes, then yes, you absolutely have CSAM, unless you took some extraordinary measures to stop it,” said David T., lead author of the study and Chief Technologist at the Stanford Internet Observatory. Highlighting the importance of involving subject-matter experts, David says, “They made an attempt that was not nearly enough, and it is not how I would have done it if I were trying to design a safe system.”
To avoid future disasters such as this, the AI community needs major improvements in data governance. They should:
* Ensure that everyone who publishes significant datasets involves external experts in their curation process.
* Ensure that those who use the datasets run tests to avoid ethical issues and inadvertent liability.
LAION was probably only the tip of the iceberg. The fact that companies - including those that claim to be champions of open source, like Meta or Mistral - don't release their datasets suggests that there's a lot more going on inside those companies that doesn't receive nearly as much scrutiny. On the company's Discord, LAION employees said, "It is not realistic to expect from us better standards than the industry at large has", highlighting the role of both large and small players. https://lnkd.in/g2e5akev #AI #artificialintelligence #aisafety #airisk #airiskmanagement
Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
404media.co
-
All 50 states call on Congress to address AI-generated CSAM - The Verge: Attorneys general from all 50 US states are urging Congress to establish a commission focused on investigating the impact of AI on child exploitation. They propose that the commission develop solutions to prevent the creation of AI-generated child sexual abuse material (CSAM). The attorneys general highlight how bad actors can use AI tools to create deepfakes and realistic sexualized images of children, emphasizing the need for stricter regulations. They argue that while there have been efforts to regulate AI in areas such as national security and education, the safety of children should not be overlooked. #ai #artificialintelligence #intelligenzaartificiale
All 50 states call on Congress to address AI-generated CSAM
theverge.com