This week, Thorn’s Head of Data Science, Dr. Rebecca Portnoff, joined a remarkable panel at the All Tech Is Human event in NYC! Together with industry and civil society leaders Matthew Soeth, Afrooz Kaviani Johnson, Sean Litton, and Juliet Shen, the panel discussed the misuse of generative AI to sexually abuse children and the Safety by Design principles we have collaborated on to stop this misuse. Were you at the event? We'd love to hear your takeaways. A huge thank you to all who attended and contributed to these vital conversations! Together, we can build a safer digital world. #TechForGood #TrustAndSafety #GenerativeAI #ChildSafety
About us
We are Thorn. Our mission of defending children from sexual exploitation and abuse is deeply embedded within our core—a shared code that drives us to do challenging work with resilience and determination. Here, you’ll work among the best hearts and minds in tech, data, and business, creating powerful products that protect children’s futures. Unleash your own formidable talents while learning among peers and growing every day. All in a supportive environment of wellness, care, and compassion. Build your career as we help build a world where every child can be safe, curious, and happy.
- Website
- http://www.thorn.org
- Industry
- Non-profit Organizations
- Company size
- 51-200 employees
- Headquarters
- Manhattan Beach, CA
- Type
- Nonprofit
- Founded
- 2012
- Specialties
- technology innovation and child sexual exploitation
Locations
- Primary
- Manhattan Beach, CA 90266, US
Updates
-
The misuse of generative AI technologies is harming real children. From creating realistic #deepfakes from benign images to generating new abuse material depicting past victims, these technologies can exploit and harm children in profound ways. As a society, we have the ability to prevent the use of generative AI to further sexual harms against children. That’s why we collaborated with gen AI leaders to commit to Safety by Design principles in the development and deployment of generative AI models. Learn more about how we can work together to mitigate the misuse of generative AI: https://lnkd.in/e3MdJrwX
-
Thank you Quantifind for hosting an amazing Women in Risk Event! This year, Thorn VP of External Affairs Pam Smith had the pleasure of speaking on a panel about our work to combat child sexual abuse online. Her key event takeaways:
1. Technology is critical in addressing criminal activity.
2. AI is a topic of high interest, both in how it is changing harms and how it can stop them.
3. Financial sextortion is a key emerging threat intersecting with multiple risk areas.
Our fifth Women in Risk event was a success! A sincere thank you to our esteemed panel speakers, Megan Hodge, Rafi Aliya Crockett, CAMS, and Pam Smith, for sharing their inspiring stories of disrupting risks and championing women in the risk space. To all who attended, your presence and active participation enriched the event and reinforced our collective commitment to growth and learning. Your passion fuels the spirit of empowerment at #WIRE as we continue in this journey to combat financial crime. Join us as we uplift, support, and empower one another. Join #WIRE today: https://lnkd.in/gG3qFJDR
-
As an organization building technology to defend children from sexual abuse, it is a great honor to be recognized for the excellence of the solutions we create. Our donors and supporters make this possible — empowering online platforms to combat the spread of child sexual abuse material and child sexual exploitation at scale. We are thrilled to be recognized in the General Trust & Safety category of this year’s Marketplace Risk Solution Provider Excellence Program. A big thank you to the Marketplace Risk Advisory Board for this acknowledgment. Congratulations to all our fellow honorees! We look forward to connecting with everyone at the conference next week and continuing to work together to enhance safety across online platforms. #TrustAndSafety #MarketplaceRisk2024 #SolutionProviderExcellence
Founder & CEO, Marketplace Risk | Mission-driven thought leader | Marketplace + digital platform trust & safety, risk mgmt + legal strategy | 20 years advising + counseling founders, leaders, investors + boards.
Today we announced the honorees of the Solution Provider Excellence Program. The goal of the Program is to recognize solution providers with proven track records navigating the many complex challenges that marketplaces and digital platforms face, and to help platform leaders identify solution providers that have been peer reviewed by other platforms and the Marketplace Risk Advisory Board. You can meet all of the honorees and learn about their solutions at the 2024 Marketplace Risk Management Conference in San Francisco next week! And, you can read more about the Program and this year's honorees here: https://lnkd.in/drE8eTjT
This year's honorees are:
- Identity Authentication / Verification: Enformion, Incognia, Persona, Prove and Mesh
- Payment Platforms / Financial Services: Trolley
- Fraud Prevention / Chargebacks: EverC, Unit21
- Screening / Background Checks: Sterling
- General Trust & Safety: ActiveFence, Thorn
- Multi-Vendor Platform: randevu.tech
- Content Moderation: Stream
-
Signed into federal law on May 7, 2024, the Revising Existing Procedures On Reporting via Technology Act (or the #REPORTAct) represents a crucial advancement in the regulatory framework guiding how digital platforms manage and report child safety incidents. Read our blog post where we break down its components and explain how it could impact our mission to defend children from sexual abuse: https://lnkd.in/giBuqdst
The REPORT Act Is Now Federal Law – Here's What It Means for Child Safety
-
“It was really important to us leading this initiative that we include transparency as part of how these principles get enacted. That it’s not enough to have your moment to say, ‘I am committing to these,’ but that you have to actually do the work, you have to actually show that you’re doing the work and share back with the public.” Dr. Rebecca Portnoff, Vice President of Data Science at Thorn, joined Lily Jamali on Marketplace Tech to discuss new design principles aimed at fighting child sexual abuse. Tune in via Marketplace by APM: https://lnkd.in/e7fcWjRZ
Rethinking the lifecycle of AI when it comes to deepfakes and kids - Marketplace
https://www.marketplace.org
-
Thorn reposted this
We're privileged to announce a world-class lineup of speakers for our Safety By Design panel at The Future of Trust & Safety on Tuesday, May 14, at Betaworks in New York City. Child safety has a long-standing tradition of industry collaboration and support to reduce Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation and Abuse (CSEA) content. Generative AI adds a new layer of risk, creating opportunities for abuse through the misuse of the technology. This panel will explore current efforts to reduce risk and exploitation of minors in online spaces. Panelists include Dr. Rebecca Portnoff (Head of Data Science, Thorn), Sean Litton (President & CEO, Tech Coalition), Afrooz Kaviani Johnson (Child Protection Specialist, UNICEF), and Juliet S. (Community Advisory Board, Integrity Institute). The panel will be moderated by Matthew Soeth (Head of Trust & Safety and Global Affairs, All Tech Is Human). Apply to join us live in New York City: https://lnkd.in/erj4aDKz
The Future of Trust & Safety: May 14, 2024 | New York City — All Tech Is Human
alltechishuman.org
-
We are excited to share our latest white paper, a collaborative effort with All Tech Is Human, presenting recommended mitigations to enact the Safety by Design principles that generative AI leaders have committed to. As we harness the power of AI to create content at unprecedented scales, we must also confront the serious implications such technologies have on the safety and well-being of children. Dive into our comprehensive guide to implementing #genAI safety and design principles effectively. Join us in shaping a safer digital future: https://lnkd.in/e3MdJrwX
-
#GenerativeAI holds immense potential but also presents significant risks to child safety. Some of these threats can be mitigated through “red teaming,” a practice in which an independent group challenges an organization's strategies, policies, or systems by assuming an adversarial role or perspective. Learn more about how child safety red teaming fits into a Safety by Design approach to #AI development in our latest white paper. Download your copy: https://lnkd.in/gSaAqzYv
Importance of Child Safety Red Teaming for Your AI Company | Thorn.org
-
You may have heard the news about our historic partnership with generative AI leaders to prevent the misuse of generative AI in perpetuating sexual harms against children. We couldn’t reach this milestone without the generosity of donors big and small. As a nonprofit, we rely on funding from people and foundations around the world who care as deeply as we do about defending children from sexual abuse. Thank you to our longtime Thorn supporters and new ones for believing that we can build a better world together.