My op-ed on “How AI will transform education”

This piece was originally published in Turkish here. Translated via DeepL

How AI will transform education

We are in the early years of the generative AI revolution. It is hard to predict what kind of transformation will take place in which sector, but it will undoubtedly affect every industry. Education is one of the most critical ones.

 

I hesitate to make predictions about a field as huge as education, but I will try anyway in this article.

I consider myself most competent in higher education, so that is where my anecdotal examples and general observations will come from.

As I started writing this article, I kept thinking about this news story: In 1986, the National Council of Teachers of Mathematics in the US adopted a policy recommending that “calculators be integrated into the school mathematics program at all grade levels for classroom work, homework, and assessment.” A group of math teachers protested.

It is conceivable that debates such as the one over the use of ChatGPT, a hot topic today, will similarly fade after a while.

Before I go any further, let me briefly touch on the difference between classical AI and generative AI.

Although the two are not wholly separable, classical artificial intelligence (AI) is mainly used for tasks such as logical inference, problem-solving, simulation, and optimization. Generative artificial intelligence (GAI) is a more advanced kind of AI system. Using techniques such as deep learning and neural networks, it learns from large data sets and uses this knowledge to produce new, original content. GAI can create realistic and authentic images, text, music, and more based on existing data.

The excitement we are experiencing right now exists precisely because these advanced AI tools have become possible.

When I asked the AI tools about the topic of this article

For this article, I asked ChatGPT, Gemini, and Claude the same question:

What can you say about the possible role of generative AI in education and the transformations it will bring?

They gave almost the same answers, and they mostly ignored the “generative” part of the question. I will go through their answers and add my comments.

First, it should be noted that the adjective “generative” in the current wave of generative AI (GAI) may soon lose its meaning.

What we mean by AI will simply be GAI, which reminds me of the period when what many users meant by the internet was social media.

We will see increasing convergence between various AI models. The scenarios below point to a combination of generative AI and other kinds of models.

What transformations are coming?

The most talked about development is “personalized learning.”

Machine learning algorithms can analyze students’ interests, learning styles, and performance to deliver personalized learning experiences. Adaptive learning systems can adjust the content and difficulty level according to each learner’s needs.
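As a rough illustration of what “adjusting difficulty to the learner” can mean in practice, here is a minimal sketch of an adaptive rule. The thresholds, window size, and scoring are invented for the example and are not taken from any real platform.

```python
# Minimal sketch of an adaptive difficulty rule (illustrative only).
# The thresholds and scoring below are assumptions for this example.

from dataclasses import dataclass, field

@dataclass
class LearnerState:
    level: int = 1                              # current difficulty level (1 = easiest)
    recent_scores: list = field(default_factory=list)

def update_level(state: LearnerState, score: float, max_level: int = 5) -> int:
    """Raise or lower the difficulty based on the last few scores (0.0-1.0)."""
    state.recent_scores.append(score)
    window = state.recent_scores[-3:]           # look at the last three exercises
    average = sum(window) / len(window)
    if average >= 0.8 and state.level < max_level:
        state.level += 1                        # doing well: offer harder material
    elif average <= 0.4 and state.level > 1:
        state.level -= 1                        # struggling: offer easier material
    return state.level

# Example: a learner who starts strong and then begins to struggle.
learner = LearnerState()
for s in [0.9, 0.85, 0.9, 0.3, 0.2, 0.35]:
    print(update_level(learner, s))
```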

When virtual and augmented reality applications are added, students can gain experience in interactive and multidimensional learning environments.

The second significant transformation relates to intelligent assessment and feedback processes.

Closely related to the first point: Automated grading systems can assess students’ written work quickly and consistently. Feedback mechanisms can identify students’ strengths and weaknesses and provide personalized recommendations. They can also monitor students’ performance and provide up-to-date reports to teachers and parents.

The workload of teachers and students can be lightened

The third important heading concerns educational resources and tools.

Smart search engines can help learners quickly find resources that suit their needs. They can also enrich the learning experience with content recommendations and customized learning materials. At the same time, AI-powered tools can lighten teachers’ workloads and allow them to work more efficiently.

For example, detailed lesson materials, exercises, examples, and explanations can be created based on the outlines prepared by teachers.
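A minimal sketch of the kind of workflow meant here, assuming the openai Python client and an API key in the environment; the model name and the prompt wording are placeholders for illustration, not a recommendation, and the output would still need an instructor’s editing.

```python
# Sketch: turning a teacher's outline into draft lesson material with an LLM.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment;
# the model name and prompt text are placeholders, not endorsements.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

outline = """
Week 3: Introduction to media literacy
- What is a source? Primary vs. secondary
- Verifying images and quotes online
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a teaching assistant. Expand lesson outlines into "
                    "draft material with short explanations, examples, and exercises."},
        {"role": "user", "content": outline},
    ],
)

print(response.choices[0].message.content)  # draft material, to be revised by the teacher
```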

At this point, Google Gemini added the following: With GAI, virtual laboratories and experiments can be created, 3D models and animations can be used, or interactive stories and games can be developed.

Another topic is accessibility and inclusion.

The argument goes something like this: Online learning platforms and virtual classrooms can democratize access to education by overcoming geographical barriers. Furthermore, voice command systems, text-to-speech tools, and other assistive technologies can make education more inclusive for students with special needs. Likewise, students with particular creative talents can receive more specialized instruction.

In addition, the development of educational chatbots, easier language learning and translation, more fun and interactive lessons, and more time for teachers to focus on their students are also mentioned.

“Don’t let the students see this.”

But what is the practical situation?

Since we are talking about a revolution in its early stages, the current analysis may quickly lose its validity. However, it would not be inaccurate to say that a change of mentality in education is required for these things to happen.

In this process, teachers or trainers need to be trained first. In some cases, trainers lag behind students.

Instructors may lack the competence we might call AI literacy. A good example is the reaction of some instructors who, on seeing ChatGPT for the first time, said, “Students should not see this.”

GAI tools will not automatically produce better teaching material; they need the experience of an instructor to make them do that.

Even when instructors are ready, it is important to note that GAI tools themselves are not yet stable. Given internal development problems, regulation by public institutions, and crisis-like events at the companies behind them, it is difficult to call these tools dependable.

Even if stability is achieved, we need to consider infrastructure problems on the users’ side.

GAI systems often require good internet connections and powerful computers. Unlike social media, they are not really based on a “freemium” business model: good service requires money. This will bring a new version of digital inequality, and the promised transformations in education will arrive later for some disadvantaged groups.

Finally, it is essential to draw attention to the systematic biases in the data sets. The consequences of these biases may be most visible in educational content related to the social sciences and communication. Even when they are not intentional, we have to recognize that the content produced will be Western-centric and that it will take time for data sets to expand to cover all geographies.

Educators need to be aware of and prepared for these aspects of content production, and to gain experience using GAI tools.

A return to pen and paper?

With transformation on the horizon, assessment processes are the first dimension of the issue. Academia is a relatively slow-transforming institution, and its first encounter with a new technology tends to be reactive. Before any creative transformation, more thought is given to fitting GAI into the existing system, or to defending against it.

One of the first reactions is a return to classic exam formats. Students who are now typing on keyboards and have partially lost the ability to write by hand are expected to return to pen and paper.

Programs like Turnitin, which try to determine the rate of plagiarism, have started to report an AI-similarity score as well. However, Turnitin itself states that, unlike plagiarism, the AI-similarity score is not certain and that there is a high probability of “false positive” results. A faculty member who does not understand the process well may draw false conclusions.
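To see why this matters, here is a rough back-of-the-envelope calculation. All of the numbers are assumptions chosen for illustration, not Turnitin’s published figures; the point is only that even a modest false-positive rate produces many wrongly flagged essays.

```python
# Back-of-the-envelope: why "AI similarity" flags need careful interpretation.
# Every number below is an assumption for illustration, not a measured figure.

students = 200              # essays submitted in a course
ai_users = 20               # students who actually relied heavily on AI (assumed)
true_positive_rate = 0.80   # flagged when AI really was used (assumed)
false_positive_rate = 0.05  # honest work flagged anyway (assumed)

flagged_guilty = ai_users * true_positive_rate
flagged_innocent = (students - ai_users) * false_positive_rate

total_flagged = flagged_guilty + flagged_innocent
share_innocent = flagged_innocent / total_flagged

print(f"Flagged essays: {total_flagged:.0f}")
print(f"Share of flags that are honest students: {share_innocent:.0%}")
# With these assumptions, roughly a third of the flagged essays are false alarms.
```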

Translations produced by advanced translation programs can sometimes be flagged as AI content as well. Most initial reactions remain at the stage of “evaluating” the learner; there has not yet been a step toward developing or personalizing content for the student. The real promise of AI goes beyond assessment, toward a higher-quality and systemic transformation of education.

In this process, the contribution of every stakeholder, from AI developers to the public, from trainers to students, is vital. There is no ready-made roadmap that will be realized automatically.

Confronting artificial intelligence

It is very important to confront AI, accumulate and share the practices we propose, and take into account the constraints mentioned above.

I want to emphasize this: Each discipline will have its roadmap. Top-down solutions often do not work. What I do as a communicator may not work for a lawyer colleague. We need the contribution of different stakeholders.

Let’s finish with a few examples:

Shortly after ChatGPT was first released, my History of Thought course had a make-up exam. In this course, which is my most classic course in terms of assessment, I carried the same questions over from the final exam to the make-up exam and asked the students to put these questions to ChatGPT. The students would then examine the answers, verify them, and find sources accordingly.

At that time, there were no plugins for ChatGPT, so this method was possible.

Various academic tools are now available through its app store, a small example of how fast AI tools are developing.

Another time, students were asked to try a “prompt engineering” experiment. Our campus does not seem to have entered the data sets yet: when the prompt “santralistanbul” is entered, an image of our campus does not appear. The students’ homework, then, was to find prompts that would make an image of our campus appear.

Some of our students are currently designing a customized GPT. We do not have the coding knowledge to implement the design ourselves, but at least we are looking for a GPT that meets specific needs.

 

