
The concept of a technological “singularity” has been around for decades, and the general idea is straightforward. Technology, especially artificial intelligence, develops the ability to improve itself over time until it can eventually “out-think” people. At that point, technology takes over: this is the singularity. Artificial intelligence itself is a concept roughly 70 years old. Science fiction, in print and on film, old and new, has long explored this possibility; I like Isaac Asimov’s novels because they read more like science prediction than science fiction. The only question has always been: when?
Enter the large language models (LLMs): the tools that have brought the day-to-day application of artificial intelligence to everyone’s fingertips, or voice commands. All of this has been driven by advances in the design and fabrication of electronic chips, together with human ingenuity. I must qualify “everyone” here, because I know that in my country, Uganda, and in many other developing countries, the majority are in fact excluded by widening global and national digital divides.
What are the implications? One of the most challenging is the disruption of current approaches to education, many of which, in fairness, are severely outdated. Artificial intelligence applications spew out fluent and convincing answers to literally any question, never mind that some of those answers are hallucinated or biased. This has reduced months of research, analysis, and writing drudgery to hours, minutes, and seconds. That in itself is not bad (setting aside the hallucinations, for which mitigation techniques keep improving, and the intrinsic source-driven biases of LLMs). The danger is that students worldwide, at all levels, are handing their classwork and assignments over to AI applications. The common thinking is that these tasks develop and refine the very skills and abilities believed to keep humans ahead of AI.
This is a grim reminder that we need to revisit and understand the objectives of education, and to explore how those objectives are achieved and evaluated. We also need to appreciate that there are two ways the singularity can be reached. One is by AI getting better and outperforming humans in all respects. The other is by human beings regressing through over-dependence on AI for answers, direction, and decisions. It appears to me that, through our educational systems, we as human beings are taking the latter path: a self-subjugation to a seemingly superior entity.
I am tempted to take this to its absurd conclusion. LLMs are trained on what people put out online, and what people put out online is increasingly generated by LLMs. Who is in charge, and what is the endgame? That is food for thought for each of us.