The concept of a technological “singularity” has been around for decades. The general idea is straightforward: technology, especially artificial intelligence, develops the ability to improve itself over time. (AI itself is a concept that goes back about 70 years.) Eventually, technology could “out-think” people, and at that point it would take over: this is the singularity. Science fiction has long delved into this possibility. I like Isaac Asimov’s novels because they read more like science prediction than science fiction, and films, both old and recent, explore the theme as well. The only question has always been: when?

Enter large language models (LLMs): the tools that have brought the day-to-day application of artificial intelligence to everyone’s fingertips, or voice command. All of this is driven by human ingenuity and by advances in the technology for designing and fabricating electronic chips. I must qualify “everyone” here because I know that in my country, Uganda, as in many developing countries, the majority are actually excluded by widening global and national digital divides.

What are the implications? One of the most challenging issues is the disruption of current approaches to education; in fairness, many of these approaches are severely outdated. Artificial intelligence applications spew out fluent, convincing answers to literally any question, never mind that some are hallucinations or reflect bias. They have reduced months of research, analysis, and writing drudgery to hours, minutes, even seconds. This in itself is not bad (setting aside the hallucinations, for which mitigation techniques keep improving, and the intrinsic biases LLMs inherit from their sources). The danger is that students worldwide, at all levels, are using AI applications to do their classwork and assignments for them. Yet these tasks exist precisely to develop and refine the skills and abilities believed to keep humans ahead of AI.

This is a grim reminder that we need to revisit and understand the objectives of education, and to explore how those objectives are achieved and evaluated. We also need to appreciate that there are two ways the singularity can arrive. One is AI getting better and outperforming humans in all respects. The other is human beings regressing through over-dependence on AI for answers, direction, and decisions. It appears to me that, via our educational systems, we as human beings are taking the latter path: a self-subjugation to a seemingly superior entity.

I am tempted to take this to its absurd conclusion. LLMs are trained on what people put online, and what people put online is increasingly generated by LLMs. Who is in charge, and what is the endgame? This is food for thought for each of us.
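To make that loop concrete, here is a minimal toy sketch in Python (my own hypothetical illustration, not anything rigorous). Each “generation” of online content is produced by resampling the previous one, the way a model trained on the web echoes what it has seen; the count of distinct items can only fall, mirroring the “model collapse” degradation researchers have reported when models train on their own output.

```python
# Toy sketch of the feedback loop: each "generation" of online
# content is drawn from the previous one, with duplicates (the
# model's favourite outputs). Hypothetical illustration only.
import random

random.seed(1)

# Generation 0: a thousand distinct "human-written" pieces of content.
content = list(range(1000))

for generation in range(1, 11):
    # The next wave of "online text" is resampled from what is
    # already online; no step in the loop adds anything new.
    content = [random.choice(content) for _ in range(1000)]
    print(f"generation {generation:2d}: "
          f"{len(set(content)):4d} distinct items remain")

# Diversity only drains away: once LLM output feeds the next
# LLM's training data, the pool of ideas shrinks generation by
# generation.
```

Run it and the number of distinct items falls every round, with nothing in the loop to replenish what is lost.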

2 thoughts on “The Singularity – Is Artificial Intelligence already taking over Humanity?”

  1. Dear Tusu,
    I find your blog quite addictive – blame it on my current exposure to the bigger world of tech through my ongoing Stanford MBA. Back to the article: those last two paragraphs brought back memories of the learning culture even at grad school. As you aptly describe our current crossroads, it becomes clear that the AI singularity isn’t a distant sci-fi fantasy; it’s unfolding now, with both promise and peril woven into its fabric. The rapid emergence of AGI-adjacent systems means we’re already living through a transformation in which machines can outthink us in specific domains. The Times, wired.com, reddit.com, and several other platforms have scholars writing about this trend. Your words challenge us not only to witness this moment, but to figure out how to shape it.

    Your most resonant point is the framing of this era as a human reckoning. Geoffrey Hinton, among many others, has voiced legitimate concerns that unaligned AI could eclipse our relevance entirely; not to mention how reduced intense, targeted learning activity is already shrinking our brains. But you remind us that it’s not too late. We must anchor our organisations and learning institutions, with their cultures, values, and purposes, around intentional stewardship. If we don’t treat AI as a collaborator designed with our humanity in mind, we risk losing control of the narrative.

    Your closing call, to keep agency, design carefully, and never forget what makes us human, is exactly what’s needed. This isn’t just a technical challenge; it’s a moral and cultural one. Thank you for shedding light on that path. Your voice adds clarity, courage, and conviction at a moment when we urgently need all three.

  2. Thanks, Maxima. Your additional analysis is spot on – if indeed anything can be spot on in these times, when subjectivity defines what is accepted as reality.

