Disruption Part 2 – A Game of Queens

Before reading the rest of this article, a little Public Health warning: this is very geeky. In fact, it may work as a bit of a geek detector—if you find what follows amusing rather than bewildering, then you’re probably a geek too!

The story starts in the mid-to-late 1980s. Artificial intelligence was a popular topic, with much of the academic world focusing on symbolic (that is, algorithmic) approaches to AI. The non-symbolic approaches using neural networks hadn’t gained much traction in the circles I moved in. Within the symbolic AI world, considerable attention was devoted to logic programming, where a program was expressed as a formal logic specification and its execution corresponded to proving a logical theorem. Prolog, which had been gaining momentum since the late 1970s, was very much the programming language of the day.

I graduated with my Master of Science in Computing Science from Uppsala University in Sweden in 1988. I spent a couple of years after that doing research and teaching at the same university. Most of our research projects were related to logic programming. One of the main challenges was to find scalable solutions, including using parallel computation models.

A popular example problem was the N-Queens problem: imagine a chessboard with N rows and N columns, on which you must place N queens so that no queen can attack any other (a queen attacks along its row, its column, and both diagonals). The problem was interesting because it illustrated how rapidly computational complexity can grow: as N increased, the amount of computation needed grew very fast. The traditional approach was to place one queen on the board, then the next, ensuring at each step that no two pieces could attack each other. After repeating this process a number of times, you would quite likely reach a situation where the next queen couldn’t be placed anywhere. In that case, you had to backtrack: go back a step and try a different placement before proceeding forwards again, and you might well have to go back several steps before moving forward. Fortunately, programming languages like Prolog are very good at handling this kind of backtracking, which meant that the code for the N-Queens problem was quite short and straightforward.
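The programs of that era were written in Prolog, but the same one-queen-at-a-time backtracking idea can be sketched in Python. This is purely an illustration of the classic approach, not anyone's original code:

```python
def solve_n_queens(n):
    """Place queens one row at a time, backtracking on conflicts."""
    placement = []  # placement[row] = column of the queen in that row

    def safe(row, col):
        # A new queen conflicts if it shares a column or a diagonal with
        # any queen already placed (rows are distinct by construction).
        return all(col != c and abs(row - r) != abs(col - c)
                   for r, c in enumerate(placement))

    def place(row):
        if row == n:
            return True  # all n queens placed
        for col in range(n):
            if safe(row, col):
                placement.append(col)
                if place(row + 1):
                    return True
                placement.pop()  # backtrack: undo and try the next column
        return False  # no column works in this row; caller must backtrack

    return placement if place(0) else None

print(solve_n_queens(8))  # → [0, 4, 7, 5, 2, 6, 1, 3]
```

The `placement.pop()` line is the backtracking step that Prolog performs implicitly; in the worst case the search revisits an exponential number of partial placements, which is why large N was so painful for this approach.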

At this time, there was a massive research project under way in Japan: the Fifth Generation Computer Systems (FGCS) project, which started in 1982 and had a budget of hundreds of millions of US dollars. Much of the focus was on symbolic AI and logic programming. Together with a couple of colleagues at my university, I managed to get an article on parallel execution of logic programs accepted at an FGCS conference in Tokyo. (The article wasn’t particularly revolutionary and wouldn’t improve anyone’s life significantly.) So I had the good fortune to fly over to Tokyo to attend the conference in November 1988.

I remember that one of the flagship exhibitions was a purpose-built, high-performance machine designed specifically for executing logic programs. Someone told me they had “vacuumed all of Japan for RAM memory” to build the machine, and that they had to power-cycle it every morning to reset it. One of the demonstration examples was our N-Queens problem. I can’t remember the details exactly (and I haven’t found them published), but I think it managed to solve the problem for N=64. The team behind the machine was very proud of the accomplishment. Note that it used the classic one-queen-at-a-time approach, which let Prolog handle the backtracking transparently.

Fast-forward a couple of years. I had left the academic world and started working for a Swedish IT consultancy. Somehow I managed to convince my manager that I should attend the AI conference AAAI-90 in the summer of 1990 in Boston. At this point, the main AI buzzword in the market was Expert Systems, which was one of the things my company was working with at the time. What we didn’t realise then was that we were standing right at the edge of what would become known as the Second AI Winter, as the Expert Systems boom was about to collapse.

One presentation at the conference was titled “Solving Large-Scale Constraint Satisfaction and Scheduling Problems Using a Heuristic Repair Method”, authored by four researchers at NASA. Hidden behind that title was an idea for solving certain problems in a quite different way than before, and one of those problems was indeed our favourite N-Queens problem. Instead of placing the queens on the chessboard one by one, their approach was essentially to dump all the queens onto the board (in a reasonably clever way) and then repair any conflicts (queens that could attack each other) as and when needed.
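The repair idea became known as the min-conflicts heuristic. A simplified sketch in Python, my own illustration rather than the authors' algorithm (in particular, I start from a random placement, whereas the paper uses a cleverer greedy initialisation):

```python
import random

def min_conflicts_queens(n, max_steps=100_000, seed=0):
    """Solve N-Queens by repair: start with a full placement, then
    repeatedly move a conflicted queen to the column in its row that
    minimises conflicts (the min-conflicts heuristic)."""
    rng = random.Random(seed)
    # One queen per row; cols[row] = its column. Random initial placement.
    cols = [rng.randrange(n) for _ in range(n)]

    def conflicts(row, col):
        # Count queens in other rows sharing this column or a diagonal.
        return sum(1 for r in range(n) if r != row and
                   (cols[r] == col or abs(cols[r] - col) == abs(r - row)))

    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(r, cols[r]) > 0]
        if not conflicted:
            return cols  # no queen attacks another: done
        row = rng.choice(conflicted)
        # Repair: move this queen to a minimum-conflict column,
        # breaking ties randomly to avoid getting stuck in cycles.
        counts = [conflicts(row, c) for c in range(n)]
        best = min(counts)
        cols[row] = rng.choice([c for c in range(n) if counts[c] == best])
    return None  # step budget exhausted (rare for reasonable n)
```

Because each repair only ever looks at one row at a time, the number of repairs needed grows roughly linearly with N in practice, which is what made the approach scale so dramatically compared with exhaustive backtracking.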

I remember the presenter saying that they couldn’t afford any sophisticated computing resources and had to make do with a pretty standard workstation, a SPARCstation 1 from Sun Microsystems. They applied the approach to N=10, which took a few milliseconds, then to N=100, again just milliseconds, and scaled all the way up to N=1,000,000, still finishing almost instantly as I recall. At this point, some people in the audience were red in the face with fury, and I was half afraid someone would go up and punch the presenter. Several of them had spent their academic careers optimising the same incremental approach the FGCS team had used, and here these researchers had pulled the rug out from under their feet, effectively turning a problem with exponential worst-case complexity into one with near-linear behaviour in practice, on bog-standard hardware.

So what was the revolutionary approach these researchers came up with? Rather than refining the conventional one-by-one method, they simply tackled the problem from a completely different angle. The key lesson is that when you’re grappling with a genuinely difficult problem, it’s often worth attacking it in a radically different way rather than endlessly polishing your first approach. A fresh perspective can yield a solution that’s not just different, but orders of magnitude better. As a fascinating side note, the paper also discusses some of the challenges with neural networks, more than two decades before deep learning became mainstream!

Reach out to us!

Get in touch today to claim the free one-day consultation we offer new customers. Explore how Realitech’s expertise in Test-Driven Integration, agile delivery, and technology transformation can help reduce risk, accelerate progress, and deliver real value to your projects.