
No, this is not an attempt to predict how the world with Artificial Intelligence might look in the year 2060. This is about looking back at AI in the past—because it does have a history, often forgotten in today’s hype. More specifically, this is a look back at my own journey through this field.
The year is 1988, and the place is Uppsala University in Sweden. I had just graduated with a Master of Science in Computing Science and taken up a job as researcher and lecturer at the university. I volunteered to teach one of the courses that sounded particularly interesting: a four-week course in Artificial Intelligence. To give some context about the time: my faculty had just received its first Macintosh computers—dual-floppy models without hard drives. We didn’t have access to seriously powerful computers, and there wasn’t much data available. The Internet existed, but the World Wide Web hadn’t been invented yet. Universities shared publications via FTP servers and bulletin boards, which felt revolutionary at the time.
I took the challenge of teaching Artificial Intelligence very seriously—probably a bit over the top, to be honest. Even at that time, the field of AI was remarkably wide, spanning both symbolic (algorithmic) and non-symbolic (e.g. neural networks) approaches. I did a lot of reading before starting to prepare my lectures. The next challenge was finding a suitable textbook for the course. Going through everything I could find, I concluded that none of the books were good enough. So, being a bit mad, I decided to write my own textbook for the course. In the end, I worked like crazy: write a chapter, Xerox it, hand it out to the students, deliver my two-hour lecture, grab a couple of hours of sleep, then jump onto the next chapter. Rinse and repeat.
I tried to cover a lot of ground in my lectures: neural networks (yes, they did exist back then), different variants of mathematical logic, natural language understanding, and so on. I remember giving a logic proof of the “three wise men” problem during one lecture. We had six sliding blackboards at the front of the lecture hall, and I needed nine blackboards to get through the problem. (I temporarily got a bit lost in the middle of the proof, but managed to get back on track.)
I’m wired in such a way that I understand things better when I see them work. That was a bit of a problem with AI back then. The computers were rubbish, there wasn’t much data to work with, and there weren’t any decent tools available. This was particularly tricky when it came to neural networks, which I could pretty much only view from a theoretical perspective. Thirty-five years later, the contrast is stark. Our computers are cheap and powerful, there’s an abundance of data, and the tools—particularly for neural networks and machine learning—are amazing. Creating a simple neural network back then was seriously painful; today, frameworks like TensorFlow and PyTorch make it possible to build complex neural networks in a much more straightforward way, with good support for handling large datasets for training.
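To illustrate the contrast, here is a minimal sketch of what defining and training a tiny feed-forward network looks like in PyTorch today. The architecture, data, and hyperparameters are arbitrary illustrative choices, not taken from any real project:

```python
import torch
import torch.nn as nn

# A tiny two-layer network: 4 inputs -> 8 hidden units -> 1 output.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)

# Synthetic data standing in for a real training set.
X = torch.randn(64, 4)
y = torch.randn(64, 1)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# A few training steps: forward pass, loss, backward pass, weight update.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```

Back in 1988, even a toy example like this would have meant writing the forward pass, the derivatives for backpropagation, and the update loop by hand, on hardware that struggled with far less.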
In 1990, I realised I wasn’t cut out for the academic world and instead started working as a software developer at a Swedish consulting company. There was still a lot of noise—and hype—around what was then known as expert systems. This was precisely when the so-called “second AI winter” was beginning. I did some work with expert systems, and I have to confess I developed a degree of scepticism about these applications. I remember developing an application for troubleshooting old telephone switching equipment. Whilst there had been enormous hype in the market through the 1980s, by the mid-1990s few people wanted to touch anything labelled “expert system.”
It has been fascinating to watch how the field of artificial intelligence has developed over the years. The big mass-market breakthrough really came with large language models (LLMs) and generative AI for image and video generation. It's worth remembering, though, that many of the underlying technologies have been around for a long time; it is the abundance of computing power and data that has truly unlocked their potential.
I still carry a healthy dose of cynicism about the AI hype, even whilst knowing full well what amazing value it can deliver. Perhaps the most useful insight from my 35 years in this field comes when I speak to people who refer to AI as magic: It’s not magic—it’s maths!
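To make that concrete, here is roughly what a single artificial "neuron" computes: a weighted sum of its inputs plus a bias, passed through a simple nonlinearity. The weights, bias, and inputs below are arbitrary values chosen purely for illustration:

```python
import math

# One "neuron": a weighted sum of inputs plus a bias, squashed by a sigmoid.
weights = [0.4, -0.6, 0.1]
bias = 0.2
inputs = [1.0, 0.5, -1.5]

weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias  # = 0.15
output = 1 / (1 + math.exp(-weighted_sum))  # sigmoid activation, roughly 0.54
print(output)
```

Stack enough of these simple calculations on top of each other, train the weights on enough data, and you get the systems that look like magic today.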
Get in touch with us today to claim your free one-day consultation for new customers. Explore how Realitech’s expertise in Test-Driven Integration, agile delivery, and technology transformation can help reduce risk, accelerate progress, and deliver real value to your projects.