Why AI in education is inevitable, and what sector players must prepare for

LoHo Learning

Artificial intelligence (AI) is penetrating education as an enabler of learning. While its application remains on the wish list of many learning institutions, the urgency of integrating it cannot be ignored.

This calls for preparedness: retooling the human resources within the education ecosystem and putting the necessary technical infrastructure in place.

The Guidance for Generative AI in Education and Research report (2023) by the United Nations Educational, Scientific and Cultural Organisation (UNESCO) underlines the wide-ranging capacities for information processing and knowledge production enabled by AI tools. These have huge implications for education, because the tools replicate the higher-order thinking that constitutes the foundation of human learning.

The report further underscores that generative artificial intelligence (GenAI) tools are increasingly able to automate basic levels of writing and artwork creation, necessitating that education policy-makers and institutions revisit why, what and how we learn in this new phase of the digital era.

GenAI tools are a double-edged sword. On one hand, they have the capability to improve learning outcomes and delivery efficiencies. On the other, they can deepen digital poverty in less endowed institutions, especially those in the Global South.

This is because GenAI tools rely on voluminous amounts of data and massive computing power, in addition to iterative innovations in AI architectures and training methods, resources mostly available to the largest international technology companies and a few economies (the United States, the People's Republic of China and, to a lesser extent, Europe).

The rapid spread of GenAI in technologically advanced countries and regions has exponentially accelerated the generation and processing of data, and has simultaneously intensified the concentration of AI wealth in the Global North.

Human-centred approach to AI

Therefore, to harness the potential benefits of GenAI in education, it first needs to be regulated. Regulating GenAI for educational purposes requires multi-sectoral synergy and policy measures based on a human-centred approach to ensure its ethical, safe, equitable and meaningful use.

The 2021 Recommendation on the Ethics of Artificial Intelligence provides the requisite normative framework to start addressing the multiple controversies around GenAI, including those that pertain to education and research.

It is based on a human-centred approach to AI, which advocates that its use should be at the service of the development of human capabilities for inclusive, just, and sustainable futures. Such an approach must be guided by human rights principles, and the need to protect human dignity and cultural diversity.

Furthermore, the 2019 Beijing Consensus on Artificial Intelligence (AI) and Education affirms that the use of AI technologies in education should enhance human capacities for sustainable development and effective human-machine collaboration in life, learning and work.

Moreover, there are calls for further actions to ensure equitable access to AI to support marginalised people, including persons with disabilities, and address inequalities while promoting linguistic and cultural diversities. The consensus suggests adopting whole-of-government, inter-sectoral and multi-stakeholder approaches to the planning of policies on AI in education.

Consequently, AI and Education: Guidance for Policy-makers (UNESCO, 2022) emphasises that a human-centred approach means examining both the benefits and risks of AI in education and the role of education as a means of developing AI competencies.

It proposes concrete recommendations for formulating policies that steer the use of AI to: first, enable inclusive access to learning programmes, especially for vulnerable groups; second, support personalised and open learning options; third, improve data-based provision and management to expand access and enhance quality in learning; and fourth, monitor learning processes and alert teachers to failure risks.

According to Mark Irura, Technical Advisor for FAIR Forward AI for All at GIZ Kenya, if ethical and meaningful use of AI is to be realised within the Kenyan education sector, "there will be a need to localise the natural language to generate content, for example in Kiswahili, and a need to invest in data banks that are consistently updated to enhance accurate and factual reporting".

He said this at the Kenya EdTech Summit 2023, organised by EdTech East Africa on September 20-21.

What about copyright infringement and AI?

Regulating the use of copyrighted materials in the training of GenAI models, and defining the copyright status of GenAI outputs, are emerging as new responsibilities of copyright law. It is worth noting that, at present, only China, European Union (EU) countries and the United States have adjusted their copyright laws to account for the implications of GenAI.

The US Copyright Office, for instance, has ruled that the outputs of GenAI systems such as ChatGPT are not protectable under US copyright law, arguing that "copyright can protect only material that is the product of human creativity" (US Copyright Office, 2023).

The European Commission underlines that within the European Union, the proposed EU AI Act requires developers of AI tools to disclose the copyrighted materials they use in building their systems (European Commission, 2021). China, through its regulation on GenAI released in July 2023, requires the labelling of outputs of GenAI as AI-generated content and only recognises them as outputs of digital synthesis.

What is at stake for the education sector?

Now that GenAI is anticipated to penetrate deeper into the education sector, learning institutions need to develop the capacity to leverage its potential benefits and mitigate its attendant risks. Based on such understanding, stakeholders, including governmental regulatory agencies, providers of AI-enabled tools, and institutional as well as individual users, can validate the adoption of AI tools. Moreover, teachers and researchers need to begin, or continue, strengthening their capacity to apply GenAI properly.

Ms Marangu is a communication and public policy analyst