Raising Children in the Era of Artificial Intelligence - Part Two

Artificial Intelligence (AI) has the potential to fundamentally rework basic aspects of the modern world: healthcare, energy, entertainment, governance, gaming; the list goes on. As this process unfolds, we, the adults, must prepare our children to inhabit a future rife with new risks and opportunities. At PwC, we take a Responsible Technology approach to emerging technology to comprehensively model this landscape.

In this second of two blogs (Part One available here), I cover issues related to jobs, health and security with input from Jonnie Penn, a researcher at the University of Cambridge who studies AI's impact on society as a fellow for Google.

1. Jobs and Skills 

PwC research suggests that the factor most highly correlated with potential job automation is the education level of the worker who currently performs it. Hence, to ensure that our children have ‘future-proof’ skills, some form of government intervention will almost certainly be needed. In the medium term, this could include a revision of the UK’s primary-school curriculum, both in terms of content (e.g. data literacy, probability, neural networks) and teaching methods (e.g. a focus on interpersonal skills), as well as deeper government investment in vocational education, training and retraining, with a focus on emerging tech and AI. There are warnings, however, that this effort could backfire if made in haste. “A desire to ‘optimise’ children has led to ugly outcomes in the past,” warns Penn. “In the early 20th century, parents in California and the UK considered eugenics to be the key to unlocking their community’s prosperity. Time has shown that intuition to be both scientifically and morally misguided.” Instead, Penn argues, we should listen to children and young people to ensure that they are part of the civic process.

2. Health and Wellbeing

Children in wealthy families today might grow up surrounded by AI-powered voice assistants that sound or act human. How does this interaction influence children’s well-being? A research group at MIT is investigating how children perceive AI technology by studying how they interact with virtual assistants such as Amazon Alexa and Google Home. Early evidence suggests that these interactions may alter children’s perception of their own intelligence in comparison to the agent’s “mind”. Further research will be needed to understand how such agents might serve as responsible companions for children, whether embedded in toys, games or otherwise. “There is still no substitute for time spent playing in nature,” Penn warns. “The benefits of that activity have had more ‘research and development’ than today’s virtual assistants.” He points to the lack of knowledge around potential side-effects of digital technology usage as one reason why Steve Jobs, Bill Gates and other tech leaders famously restricted their own children’s use of devices.

3. Privacy, security and integrity

Data privacy and data ethics are both hotly debated subjects in contemporary AI research, particularly when children’s privacy is at stake. In 2016, a prominent toy company launched an AI-powered product that gave the manufacturer unprecedented access to children’s lives. One year later, it was pulled from shelves due to consumer outrage. If properly implemented, AI-powered toys and games could become a force for good that empowers children to benefit from personalised and adaptive learning. For this to happen, however, we must regulate these technologies “as we do with refrigerators, cars, and other products,” says Penn, “before products are used and not afterwards.” At PwC, we explore how to design and deploy responsible AI that meets strict ethical and legal standards.

While it is important that we prepare our education system to meet the needs of a changing job market, health and well-being cannot be sacrificed to accomplish these ends. Our experience at PwC has shown that once a topic is brought to public attention, the debate usually generates action plans with tangible outcomes. To begin this process, we should listen to our children more. As my PwC colleague Rob McCargow has done, we can ask them, “How do you feel about AI?” The answers might lead to surprising outcomes!

Image credit: DALL·E

Catarina Squillante

Advisory | Innovation | Artificial Intelligence | Strategy | Digital Transformation | Young Leader | Triathlete


I loved the post! I would add to the vision of CREATIVITY: how children can use AI to expand their cognitive abilities, broaden their knowledge, and accelerate their learning, enabling them to harness their creative essence in even more amazing ways. Furthermore, this applies to education within schools as well! Utilizing AI as an assistant to address doubts and delve deeper into academic subjects, while aligning with the demands of the contemporary world. Thank you for the article; it has generated numerous insights for me!

Veronica Costache

Women of Excellence Awardee WEF | Life and Business Skills Reset Expert | CEO Alexz Educational | Author


Love it! 👏

Interesting read... on the "use it" theme, I have created workbooks for children (8-11) to develop their creative thinking skills & AI literacy. Children need first to understand how this tech works and how they can grow next to it.

FWIW: as part of my work on an ecosystem that is designed to decentralise AI ( https://devdocs.webizen.org/ ) - whereby I'm refactoring work that's been developed generally over a long period of time: I found it difficult to get any useful feedback / input about digital identity governance, which is in turn required for AI agents and with respect to my work #HumanCentricAI. I wrote two posts about it recently. This is the latter one, the other is linked in it. I suspect that the means to address these sorts of problems is non-trivial. (Although the ecosystems I'm defining should have a positive impact generally for future users of that system, if I manage to get it done!) Article: Is Chat GPT the most useful tool accessibly available to solve social problems? Here is a recent example. https://www.linkedin.com/pulse/chat-gpt-most-useful-tool-accessibly-available-solve-social-holborn

Maria Luciana A.

Head of AI Public Policy and Ethics @ PwC UK | Award winning Responsible AI pioneer I LinkedIn Top Voice


And here is part two, Simon Humphreys
