Generative AI is here. How should your safeguarding plan change?

With the launch of ChatGPT and Google Bard, Generative AI has entered our mainstream lives, followed by an explosion of tools that build on the technology. Children will be curious about these tools, and rightly so. In this blog, we explain why it is wiser to indulge children’s curiosity about AI: rather than curtailing their use of or exposure to AI, we should put safeguarding measures in place to keep kids safe when they use it.

There’s no denying that Generative AI is fundamentally changing how we work, interact and engage online. It is fast becoming a ubiquitous tool: children are using it to draft university applications, get help with homework, and the like. Universities are, in fact, adopting measures to make their staff and students AI literate. Young children, too, will use these tools so as not to fall behind their peers.

In fact, considering the career implications of the proliferation of AI tools, we should encourage our kids to learn AI-based tools and acquire the skills the job market demands. Lightcast’s research shows that three times more job adverts ask for AI skills than a decade ago, and jobs where the employer’s description requests AI skills tend to pay almost 20% more than the same jobs that don’t mention them. As Fortune reports, AI will also fundamentally change how we work, and managerial and team structures will look very different. If we want our children to succeed as professionals when they grow up, we have to set them up for success right now.

It is also important to acknowledge that Artificial Intelligence (AI) has been a part of our lives already, only in much subtler ways: it powers the recommendations on most learning and social media platforms, drives video games and toys, and is a staple of chatbots. All of these tools and technologies were already shaping how children consumed content and interacted in the virtual world.

The Positives: AI for a More Inclusive Classroom

Before we dive into the updates your safeguarding plan needs in the wake of the Generative AI explosion, we want to look at the overall impact of AI. More importantly, it’s key to understand that removing or curbing access to such tools is a drastic measure. For the reasons mentioned above, our children will fall behind if they don’t get adequate exposure to, and skills in, working with AI.

AI can also help us — parents, educators and school leaders — create a more rounded educational experience for all types of students. Homework assistance is just the tip of the iceberg. Generative AI can unlock creative ways to customise learning to a child’s style and pace, which can be extremely useful for neurodiverse kids, whether or not they have a diagnosis. A personalised learning experience can help them absorb material more effectively.

Children can also use AI to experiment with art, music, stories, software and the like, which can help them work through creative blocks and navigate the learning process more joyously. AI tools also promise a better interface for communicating through text, speech and visuals, particularly for children with disabilities.

With this wealth of benefits on the table, we cannot dismiss AI or separate our children from the technology. But we also need to acknowledge the threats children might face in the process.

Some risks include:

  1. Phishing attacks

Phishing attacks are rampant on the internet, and it is often impossible to tell whether the link you are about to click is legitimate. Generative AI tools are becoming sophisticated at an alarming pace and will make phishing attacks even more subtle, so protecting against them should be a first priority.
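For school IT staff or older pupils who want to see the mechanics, here is a minimal Python sketch of one classic phishing tell: a link whose domain is a near-miss of a trusted one. The TRUSTED_DOMAINS set and the similarity threshold are hypothetical examples for illustration, not a vetted allowlist or a production-grade filter.

```python
# Minimal sketch: flag links whose domain closely imitates a trusted domain.
# TRUSTED_DOMAINS is a hypothetical example list, not a real allowlist.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"school.example.org", "gov.uk", "bbc.co.uk"}

def looks_suspicious(url: str, threshold: float = 0.85) -> bool:
    """Return True when the link's domain is a near-miss of a trusted domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a trusted domain: nothing to flag
    # A high similarity score to a trusted domain suggests a lookalike.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(looks_suspicious("https://school.examp1e.org/login"))  # True: '1' swapped for 'l'
print(looks_suspicious("https://school.example.org/login"))  # False: trusted domain
```

Real phishing protection belongs in a school’s filtering and monitoring systems, but even a toy check like this makes a useful classroom demonstration of why “looks right” is not the same as “is right.”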

  2. Impersonation

The risk of impersonation already existed because of deepfake technology, in which extremely realistic videos or audio impersonating specific individuals are distributed. Generative AI can take it up a notch by creating fake profiles on social media, circumventing CAPTCHA protection, and so on. Once a fake profile is made, it can be used to spread misinformation, launch targeted attacks and more; the possibilities are endless.

When we think about this scenario in a school setting, such attacks could also become bullying or harassment if they target a specific kid. Because the output is so realistic, a distorted video clip can be circulated and passed off as real. Such misuse of children’s photos and videos isn’t unheard of, and misinformation can spread the same way.

  3. Career Planning

Generative AI, as mentioned before, will change the way we work. Just as smartphones and the internet eliminated many jobs and created new ones, so will AI. Children’s future professional lives are very hazy right now: some jobs that exist today will not exist when they grow up, and many new ones will have emerged.

This affects how we educate children today and what skills they will have to offer the job market when they graduate. The Department for Education comments, “The education sector needs to prepare students for changing workplaces, including teaching them how to use emerging technologies, such as generative AI, safely and appropriately.”

  4. Exacerbation of the digital divide

Dr Philippa Hardman, an affiliated scholar at the University of Cambridge, highlights a pertinent point: we like to believe AI is a tool that democratises education and makes it equitable, but in reality, access to AI tools is a major determinant of an individual child’s success.

In her words, “As the WEF’s Future of Jobs Report 2023 recently reported, the most likely future is one where AI literacy and machine learning skills are at a premium, making AI education a new and valuable currency and making pupils who don’t develop these skills vulnerable not just to inequity in education but also - and as a direct result - in access to employment.” 

Some children will simply not be able to develop these skills as fast as their peers, because they may not have access to devices at home, among other possible reasons.

  5. Bias, discrimination and wellbeing concerns

The output of any AI tool is only as good as the data it learns from. If the data has inherent biases, the tool is likely to discriminate against certain individuals. This is a widely known and discussed flaw of AI. Datasets are continuously improving, so outputs get better and biases are reduced, but in the context of education it is extremely important to tread carefully.

Plenty of AI tools, otherwise considered recreational or entertainment, distort the faces of individuals. While this may seem harmless, it can trigger body image issues and self-consciousness about looks in young children. The impact of any AI tool on wellbeing and mental health should also be taken into account.

The Way Forward for Safeguarding

We need to pre-empt any threats to safeguarding that this shift may pose. The Government has already issued some guidelines, and we will draw and build upon that foundation to explain the changes needed.

  1. “Safety by design” approach: One of the primary principles that should inform your safeguarding policy is “safety by design.” Instead of bolting on safeguarding mechanisms later, as a reactive measure, we should build them into the design of the safeguarding policy or the tool itself: for example, using clean data to train algorithms, or introducing technical barriers to harmful content through ethical AI. Such measures can be included in the development, deployment and maintenance stages of the AI tool itself.
  2. Curbing data-related cyber risks: Generative AI stores and learns from the data we feed into it. Everyone, from children to teachers, should be instructed not to enter personal or sensitive data into generative AI tools. Treat anything you type in as data you have shared with the internet: if you wouldn’t want it public, don’t share it with ChatGPT. Strengthening cyber security around the sensitive data schools already hold is equally key (see the sketch after this list for one simple preventative check).

  3. Addressing threats including online sexual abuse and exploitation, and biased or false information: Everyone in the education sector should be made aware of the extent to which Generative AI can create believable content: credible scam emails, doctored photographs, even realistic video of events that never happened. They should also discuss the ways in which false or misleading information can be generated. It is worth looking into a database of resources or tools that help identify risks in AI tools, or that track the different functions AI tools can now perform.

For example, showing how Midjourney can be used to create believable images that aren’t real, and then walking through a few ways it can be misused, will alert everyone to what to watch out for.

  4. Encourage children to proceed with caution: Encourage children to question the authenticity of the content they consume, and make them aware of rampant malpractice, so that they can make a judgement call on whether to believe something they come across on the internet.
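As promised above, here is a minimal Python sketch of the kind of preventative check a school could run before a prompt is sent to any generative AI tool: a simple screen for obvious personal data. The patterns and the example prompt are illustrative assumptions; a real deployment would need far broader coverage and human review.

```python
# Minimal sketch: screen a prompt for obvious personal data before it is
# sent to a generative AI tool. Patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK phone number": re.compile(r"(?:\+44|0)\d{9,10}\b"),
    "date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the kinds of personal data detected in the prompt, if any."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

# Hypothetical example prompt containing two kinds of personal data.
prompt = "Write a report about Sam, born 04/09/2011, email sam@example.com"
detected = find_pii(prompt)
if detected:
    print("Hold on, this prompt appears to contain:", ", ".join(detected))
else:
    print("No obvious personal data found; still review before sending.")
```

A check like this could sit in a browser extension or a school proxy, but its real value is pedagogical: it makes “stop and think before you paste” a habit, for staff as much as for pupils.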

As educators, you may also want to draw up a plan for the acceptable use of Generative AI. This guidance should be rooted in equitable access and lay down the types of tools students may use and the extent of school work they can do with their help.

As far as your safeguarding policy is concerned, we recommend reviewing the points above and drawing up checks to ensure that AI tools promote privacy by design, do not misuse data, keep clear records of usage, and so on. The emergence of Generative AI necessitates that you revisit how you look at safety and put safeguards in place.

Reference Guidelines:

If you want to create an action plan, here are some resources you can consult:

  1. UNICEF’s Policy Guidance on AI for Children 
  2. World Economic Forum’s AI for Children toolkit 
  3. Department for Education guidelines

Generative AI has entered the mainstream with tools like ChatGPT, prompting the need to adjust safeguarding plans for children. Embrace their curiosity about AI, but put safeguards in place. AI is changing work and learning, so equip kids with AI skills. The risks include phishing, impersonation, career uncertainty, the digital divide, bias and wellbeing concerns. Safeguard by designing safety into AI tools, curbing data risks, addressing threats and encouraging cautious content consumption. We cannot ignore Generative AI and its impact, but we can be proactive in ensuring our children’s safety as they navigate the novelty.

Kritika M Narula

Kritika is a research and media professional based in India.