Dr. Zhao contributes expert input to the Internet Matters report

On July 19, Internet Matters published its latest report, Me, Myself and AI, exploring how children are using AI tools and the unique risks they may face.
Dr Zhao was one of four experts invited to provide additional input to inform the report's recommendations.
The previous Internet Matters report, published in February 2024, found that while 54% of children actively engaged with generative AI tools, using them for schoolwork or homework, only 40% of schools had spoken to their students about using AI in relation to schoolwork or homework.
The new report shows that using genAI to help with schoolwork continues to be the most common use among children, while a significant proportion are also turning to genAI for advice (nearly 25%) or companionship (over 35%), with even higher proportions among vulnerable children.
A key concern highlighted by this research is that children are using AI chatbots in emotionally driven ways, including for friendship and advice, despite many of the popular AI chatbots not being built for children to use in this way. Almost a quarter (23%) of children who use AI chatbots have sought advice from the tools, and over a third (35%) of children who have used AI chatbots said chatting with an AI chatbot feels like talking to a friend, with this figure rising to 50% for vulnerable children.
The report offers timely confirmation of the risks children face, including: i) over-reliance and emotional attachment; ii) exposure to inaccurate and harmful advice; and iii) high trust in AI advice and blurred boundaries in children's relationships with these technologies.
While these findings highlight the need to strengthen safeguarding in technologies not designed for children, they also underscore the importance of supporting children's agency in their interactions with these technologies.
For example, to mitigate children's over-reliance on these technologies, it is critical to provide mechanisms that encourage critical thinking. Designs could be explored that encourage shared interactions with caregivers or educators, rather than solitary use. Future designs could also include AI prompts that ask children to reflect on their own opinions, feelings, or solutions before or after receiving responses, emphasising that their ideas matter. Finally, as has been explored in screen-time mitigation, technologies could include designs for disengagement, such as built-in pauses or soft cues encouraging breaks, to reduce compulsive usage and foster balance with offline life.
However, while these mechanisms have the potential to nudge children towards behaviour change, we must not neglect the need to encourage deeper reflection, helping children recognise their capacity to exercise agency and to develop their personal values.
For example, to facilitate children's critical thinking, designers could use AI to teach critical-thinking skills by prompting children to self-reflect, e.g., "How might someone double-check this answer?" or "Who else could you ask about this?"
Greater transparency could also be offered, providing traceable sources or summaries of how the AI formed its response, encouraging verification and learning about source quality.
As part of the report's launch, Internet Matters hosted an online panel attended by industry experts and policymakers. If you missed the panel, stay tuned via their website or give the report a read.