Whether we’re talking about a classroom full of 12-year-olds in my hometown of Cleveland, Ohio (U.S.), or a Zoom room full of aspiring technologists logging in from all over Harare, Zimbabwe, how we educate students will dictate our collective future. And it’s not just what we teach them directly about AI; it’s also what we teach them indirectly through our actions: When can we employ AI ethically? What policies do we champion to protect students’ rights? How can it bolster or hamper their education? And most importantly: how can we supply young learners with a holistic education in AI so they can help create true equity?

Often, the answers to those questions don’t bode well for our future. In the United Kingdom, administrators at some schools use AI-driven software that “actively listens” to students in bathrooms. They say it will help them address bullying and vaping, but surveillance in what should be a private place violates students’ privacy and risks heaping additional harm on trans and nonbinary students. This is a prime example of how AI impacts Community Health and Collective Security, an area that covers the drivers of community health, including individual freedoms, collective safety, education, and overall well-being.

Remote-learning technology also imparts the wrong lessons about AI. From apps that monitor web browsing to video platforms, students have experienced unprecedented surveillance in the years since COVID-19 first struck the globe. In an investigation of education tools used for remote learning around the world, researchers found that 89 percent of the 163 EdTech products reviewed employed data practices that either risked or outright infringed upon the rights of students as young as three, sharing their personal information with advertisers and other data brokers. Kids deserve privacy, too.

Paradoxically, students are discouraged from using AI tools of their own volition. We’ve seen students punished when AI checkers falsely concluded that they had used AI to write papers. And in the United States where I live, Black and Asian American students are more likely to be accused of cheating than their white counterparts. Even as I work to educate my own child about the proper uses of AI both inside and outside the classroom, I worry that the education system will continue to use this technology to reinforce harms long inflicted on students of color.

But I have hope. There can be a future where well-informed technologists drive the ways we use AI, and educating the next generation through a responsible lens will help us get there, creating evangelists for trustworthy AI along the way. I turned to two of Mozilla’s own Responsible Computing Challenge (RCC) fellows to discuss the road map that gets us there. RCC helps educators develop and pilot computing, humanities, library and information science, and social science curricula that teach students how to apply a responsible, ethical lens to their work.

A curriculum focused on responsible tech and ethical AI helps students address long-standing social and economic challenges, enabling them to think beyond coding to how technology can positively impact society.

Jibu Elias

Responsible Computing Challenge Lead, India



Dr. Chao Mbogho (she/her) and Jibu Elias (he/him) are our RCC leads in Kenya and India, respectively. Dr. Mbogho is an award-winning computer scientist and educator who researches ways to improve computer science education. She also runs a mentoring nonprofit for tech upskilling, KamiLimu, and was the first Kenyan to win the OWSD-Elsevier Foundation Awards for Early-Career Women Scientists in the Developing World. Elias is an AI ethicist, activist, and researcher with wide-ranging expertise on the AI ecosystem in India. He also serves as the research and content lead for INDIAai, a government-sponsored AI initiative.

Here, Dr. Mbogho and Elias discuss the upsides of collaboration, how AI can help grow the next generation of responsible technologists, and what they would do if they had magical powers.


Jibu Elias. Photo credit: Sibi Manjale


Photo courtesy of Chao Mbogho

Rankin: Why is it important to create curricula that center responsible tech and ethical AI?

Elias: Frequently, AI models are developed and deployed without thorough consideration of their potential consequences for society, leading to issues like bias in AI, which reflects the existing inequalities in our society. The typical response to such problems in the tech community involves applying more technology, often without addressing the underlying socio-economic issues. If we are to develop AI tools that are exponentially more powerful, it is crucial that the next generation of technologists can approach their work from socio-economic and ethical perspectives. The best way to shape this mindset is through education, by integrating these considerations into curricula. In countries like India, the quality of computer science education beyond top universities like the Indian Institutes of Technology (IITs) is often poor, due to underpaid teachers and a focus on producing cheap labor for the tech service industry rather than fostering holistic development. A curriculum focused on responsible tech and ethical AI helps students address long-standing social and economic challenges, enabling them to think beyond coding to how technology can positively impact society.

Mbogho: In Kenya, university tech curricula mostly include just a single course on ethical computing. Thus, while students learn computing skills, they are not well prepared to combine those technical skills with responsible innovation, nor to push for this change at their workplaces. It is important to support educators in building curricula that center responsible tech and ethical AI so that a) the educators can upskill on the realities of an industry where technology has to be human-centered, lest it cause more harm than good; b) they can shape curricula to be holistic and multidisciplinary; and c) they can better prepare students to always build technology with responsibility in mind, and to ultimately become champions of responsible technology in their communities.

Rankin: Why is collaboration important for effectively tackling issues at the intersection of AI and education?

Mbogho: Effective responsible computing is multidisciplinary, calling for input from fields like technology, philosophy, ethics, law, and social justice. This approach enhances the outcomes of teaching and innovation, and encourages healthy conversations and buy-in from different stakeholders. In designing curricula to include responsible computing, working across departments or institutions not only expands the potential for wide adoption, but also ensures that various contexts, backgrounds, and abilities are considered, creating opportunities for checking assumptions and biases. Lastly, collaboration among students creates opportunities to learn from others’ viewpoints and builds a culture of teamwork.

Elias: This interdisciplinary approach also ensures that AI tools are both educationally sound and practically beneficial, as educators can guide technologists in creating technologies that truly enhance learning. Furthermore, collaboration fosters innovation by combining resources, reducing duplicated efforts, and enabling more effective scaling of successful initiatives.

Rankin: What are some ways AI can support education?

Elias: AI can transform education by providing personalized learning experiences tailored to individual needs, strengths, and learning styles. It can automate administrative tasks, like grading, freeing educators to spend more time teaching and engaging with students. And virtual reality (VR) and augmented reality (AR) can create immersive learning environments that simulate real-world scenarios, making learning more engaging. AI also plays a crucial role in supporting students with disabilities and learning differences by providing tools that enhance accessibility and inclusion.

Mbogho: The era of ChatGPT has sparked controversy, raising discussions around academic integrity and concerns about ethics, inclusivity of data, and accuracy. However, this and other emergent tools present an opportunity to discuss the importance of human skills, such as creativity and adaptability, which AI may not replace. The emergence of AI underscores the importance of education systems strengthening their training in the non-technical skills that are crucial for success in the 21st century; effective and ethical use of AI is not a practice that replaces humans, but one that complements human skills.

Rankin: If you had a magic wand, what is the first thing you would use it to fix, as it relates to AI and education?

Mbogho: I would use my magic wand to create equitable access to education and tools, and to ensure that anyone pursuing a course in computing gets a holistic education that combines tech skills and human skills.

Elias: A significant gap still exists in resource and knowledge accessibility, especially in the Global South, between urban schools with substantial resources and schools in underserved rural areas that lack basic educational infrastructure. My first action would be to address the access and equity issues in AI-driven education. I would ensure that every student, regardless of background or location, has equal access to AI-enhanced education.

Rankin: It’s no magic wand, but the work of RCC is integral as we strive to create a future iteration of AI that doesn’t recreate the biases it currently exacerbates.

This post is part of a series that explores how AI impacts communities in partnership with people who appear in our AI Intersections Database (AIIDB). The AIIDB maps the spaces where social justice areas collide with AI impacts, and catalogs the people and organizations working at those intersections. Visit the AIIDB to learn more about the intersection of AI and Community Health and Collective Security.

