In just a couple of years, the conversation around generative AI has shifted from “should we use AI?” to “how do we use it ethically and responsibly?” While AI offers many opportunities, its integration into early childhood education and care (ECEC) demands careful thought and reflection.
This discussion has become more urgent than ever with increasing concerns around child safety and changes to the National Quality Framework. The changes, which came into effect from 1 September, state that services must have policies and procedures around the safe use of digital technologies and online environments (Regulation 168), including AI.
The Productivity Commission’s interim report Harnessing data and digital technology states that AI can “speed up productivity growth” and help workers operate more efficiently and effectively, therefore boosting productivity. At the same time, it cautions against heavy-handed regulation, an approach currently under review by the federal government and by those who work in child protection.
While Associate Professor Kate Highfield, from the University of Canberra Faculty of Education, cautions against moral panic, she says the key is to “put the humans that we work with first, over the technology.”
Responsible use of AI, she says, requires a deep commitment to privacy, transparency with staff and families, and taking steps to ensure it supports, rather than replaces, relational and professional expertise.
Services should have an AI policy in place that demonstrates how systems are managed in an ethical and responsible way. It is also important to allocate key personnel who will be responsible for overseeing strategy, policy and risk management.
How is AI used in ECEC?
As outlined in our 2023 Amplify article on this topic, generative AI can act as a helpful administrative assistant, making tasks less time-consuming and more efficient. This may include assistance with spelling and grammar, summarising information, polishing ‘tricky’ emails, or translating languages for families, as well as the use of design software, such as Canva, for newsletters or posters.
The AI functions embedded in ECEC proprietary software such as Storypark, OWNA or Mana can also assist with streamlining programming and planning specific to the sector. These platforms can help with communication between educators and families, including updates and newsletters; story writing; and language translation.
Prioritising privacy and data protection
One of the most critical considerations when using AI is privacy and data protection. The Office of the Australian Information Commissioner (OAIC) and other regulatory bodies strongly recommend that educators avoid uploading personal or sensitive information into open-access AI tools. This can include full names, dates of birth, images, or biometric data (unique physical or behavioural characteristics) of children or adults. A simple but effective practice is to use only first names or initials when necessary.
Assoc. Prof. Highfield stresses the importance of choosing AI tools and platforms that have clear, transparent privacy policies and robust data security measures.
Proprietary, organisation-endorsed tools such as Microsoft Copilot, or ECE-specific platforms that have a dedicated AI policy, are often better choices than public-facing tools, which can lack transparency around where they store their data, or how data can be removed or managed.
Large language models (LLMs) may use the information they are fed to learn and improve. While this makes them more powerful, it also means that any data input could become part of their training set. This can introduce a significant safety risk, especially when it comes to sensitive personal data about children, families or staff.
Storypark specifies that their AI partners “do not use customer data to train or improve their models” and delete data after processing. Chief Customer Officer, Katie Dowle, says the company has “responsible AI commitments”, which have a strong emphasis on privacy, security, and ethical use.
“We only share the data that is needed at the point of request,” Katie says. “We don't send things like children's photos that are not relevant to that request... We're not sending any additional data about the child or the teacher. It's really confined to that specific request, and then it is subsequently deleted.”
She says transparency with customers is maintained through resources such as an AI fact sheet: “The fact sheet shows what happens to data, how it’s deleted and not used for training. We really spell out the information and try to break it down for customers.”
Assoc. Prof. Highfield applauds this type of transparency and also stresses the importance of services communicating clearly with families and staff about the implementation of AI, how it is used in the service, and the steps taken to protect children's digital footprints.
“Families have a right to know and to raise concerns,” she says.
Building digital literacy and critical thinking
While AI can be a big time-saver, continuous critical reflection is paramount. This is where professional judgement comes in, according to Assoc. Prof. Highfield. While an AI model might write a piece of documentation, she says it is important to question: “Is that accurate? Is that fair? Is that appropriate?” An educator’s role, she adds, is not to simply accept AI output but to use it as a starting point.
Fact-checking is imperative because AI systems are known to ‘hallucinate’ or produce inaccurate, nonsensical or false results, despite presenting them in a plausible way.
Bias is another key concern, as AI systems may have been trained on flawed or unrepresentative data and can perpetuate and amplify stereotypes or lead to discrimination. This is particularly worrying for children and families experiencing vulnerability, who are more susceptible to discrimination, or who may simply be overlooked in planning due to algorithmic bias if they are not properly represented in datasets.
Another ethical consideration when using AI is its environmental impact. Data centres consume significant amounts of power and drinkable water for temperature control. Building the centres also comes at a cost, with substantial loss of vegetation and e-waste, all of which gives pause for thought.
Professional development and planning
The responsible use of AI is not just about having the right tools, but also the right skills. Services should consider creating professional development opportunities for staff to improve their digital literacy skills and encourage critical thinking.
ACECQA's "Questions to guide reflection on practice" is an excellent starting point for discussion. The questions posed include:
Using AI for everyday practice
Assoc. Prof. Highfield warns of the risk of “over-reliance” on AI and says the key is using it to support practice without sacrificing professional autonomy or the genuine relationships educators build every day with children and families.
While AI can be a valuable tool for tasks such as translating a story or refining an observation, it should never replace the core professional responsibilities of an educator.
The foundation of effective pedagogy remains an educator's ability to draw upon research, deep content knowledge, and evidence-based practices. After all, it is through nuanced observation and thoughtful reflection that an educator truly gains a profound understanding of a child's unique identity and learning journey.
Warnings and considerations when using AI:
ECEC services should:
*Unless agreed to by the child’s family and the platform states how and where those photos are securely stored, and when they will be deleted.
Further reading and resources
Office of the Australian Information Commissioner: Guidance on privacy and the use of commercially available AI products | OAIC
eSafety Commissioner: Generative AI – position statement | eSafety Commissioner
ACECQA: Policy Guidelines – Safe Use of Digital Technologies and Online Environments
ACECQA: Artificial Intelligence (AI) and Online Safety – Questions to Guide Reflection on Practice
NSW Government: Artificial intelligence (AI) procurement essentials | info.buy.nsw
Productivity Commission: Harnessing data and digital technology
CELA: AI and documentation: What are the ethical considerations?
CELA training: Effective use of technology with children webinar