Dr Michelle Tempest

How was Artificial Intelligence born? How will AI evolve and affect the future of life, including healthcare?


With so much hype and expectation placed upon the shoulders of the two-letter acronym AI, you might expect the letter ‘A’ to stand for Atlas. After all, it was Atlas of Greek legend who carried “the weight of the world” on his shoulders. At this point in human evolution, AI is a ray of hope for our future. Excitement revolves around AI solving global issues too complex for human minds to comprehend, such as how to save the sea from plastic pollution. This week our Prime Minister, Theresa May, set a target for a “whole new industry around AI-in-healthcare”.

But where did AI come from?

The first use of the term AI can be traced back to 1956 in the American state of New Hampshire, when a summer conference was laid on by John McCarthy, an assistant professor of mathematics at Dartmouth College in Hanover. Together with three other researchers, Marvin Minsky of Harvard, Nathan Rochester of IBM and Claude Shannon of Bell Telephone Laboratories, McCarthy submitted a funding proposal to the Rockefeller Foundation that stated: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” As a shortened advertising pitch for the conference, the term Artificial Intelligence was coined.

McCarthy was an aficionado of symbolic logic, a branch of mathematics that represents concepts as symbols. He wanted to expand the horizon of computers to be more than number crunchers and data processors, and push them into the next frontier of manipulating symbols to reason deductively from hypothesis to conclusion. It is the same pattern as Aristotle’s classic example of logical reasoning: knowing that all men are mortal (major premise) and that Socrates is a man (minor premise), the valid conclusion is that Socrates is mortal. McCarthy was optimistic that computers could be far more than plain vanilla automation. However, it is not clear that anything was accomplished during that summer conference, as the promised final report was never delivered. In fact, the same overly optimistic overtures have dogged AI’s roller-coaster journey ever since: AI has travelled repeatedly from exaggerated highs of optimism into deep dips of disappointment, quickly followed by new discoveries, renewed funding and a fresh climb.
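
To make the idea concrete, here is a minimal sketch in Python (purely illustrative, not McCarthy’s own notation) of a machine reasoning deductively over symbols, using the Socrates syllogism as a toy knowledge base:

    # A toy knowledge base in the style of Aristotle's syllogism (illustrative only).
    facts = {("man", "Socrates")}              # minor premise: Socrates is a man
    rules = [("man", "mortal")]                # major premise: all men are mortal

    def deduce(facts, rules):
        """Apply every rule to every matching fact until nothing new can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for kind, subject in list(derived):
                    if kind == premise and (conclusion, subject) not in derived:
                        derived.add((conclusion, subject))
                        changed = True
        return derived

    print(("mortal", "Socrates") in deduce(facts, rules))   # True: Socrates is mortal

The point is not the toy example itself but that the conclusion is reached by manipulating symbols, not by crunching numbers.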

By the mid-1960s AI had a funding stream with deep pockets: the US Department of Defense

Millions of dollars were poured into nascent academic AI labs at MIT, Stanford University and Carnegie Mellon University, and into some commercial research labs such as SRI International. The consistent flow of money supported multiple graduate students, who went on to collaborate with other universities around the world. But by 1974 there was mounting criticism from the US Congress about unproductive projects that were not delivering enough bang for their buck. As a result, the US government cut off exploratory research into AI and the British government quickly followed suit. The following years were bleak and it became a struggle to obtain funding for AI research, a period that has since become known as the ‘AI winter’.

It took until the 1980s for ‘expert systems’ to be developed: AI computer programs that deconstructed tasks into symbolic form as facts, rules and relationships. But these systems remained heavily reliant upon human programmers to painstakingly encode that knowledge, and they were plagued by a common problem: the vast number of possible sequences. Combinatorial explosion made it too difficult to examine all the options. Take the everyday example of Lego: a mere six eight-studded bricks of the same colour can be combined in 915,103,765 ways!
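
A rough back-of-the-envelope sketch in Python, using the commonly quoted figure of roughly 35 legal moves per chess position purely for illustration, shows why examining every option quickly becomes hopeless:

    # Illustrative only: how the number of possible sequences explodes with depth.
    branching = 35                      # roughly the average number of legal chess moves
    for depth in (2, 4, 6, 8):
        print(f"{depth} moves ahead: {branching ** depth:,} possible sequences")

Looking just eight moves ahead already yields more than two trillion sequences, which is why symbol systems that tried to enumerate every option hit a wall.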

By the late 1990s AI had advanced further thanks to increasing computational power, in accordance with Moore’s Law, so more advanced statistical techniques could be employed. The most famous step change came when Deep Blue became the first chess-playing system to beat a reigning world chess champion, defeating Garry Kasparov on 11 May 1997. It came as a surprise to the chess world and has been etched into the memory of the Russian grandmaster, who was forced to eat his words after previously quipping “if any grand master has difficulty playing a computer – I’d happily offer advice.” It was hailed within the AI community as a major confidence boost for the entire sector and put AI firmly back on the global stage.

Machine Learning

The next breakthrough came with the upgrade to ‘machine learning’. Learning is perhaps the key to all human intelligence, and it is more than just knowledge. Symbol systems require the code to be written upfront, with knowledge captured, stored and used; learning requires a more dynamic approach that can solve novel problems and iterate improvements with training and practice. Machine learning has taken inspiration from neuroscientists who study the neural networks of the brain.
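
As a flavour of what ‘learning from examples’ means in practice, here is a minimal sketch in Python (illustrative only, and far simpler than any real system): a single artificial neuron nudging its weights until it reproduces the logical OR function, rather than having the rule coded upfront.

    # A single artificial neuron learning logical OR from examples (illustrative only).
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                         # a few passes over the training data
        for (x1, x2), target in examples:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output             # learn by nudging weights towards the answer
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error

    for (x1, x2), target in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        print((x1, x2), "->", prediction)       # matches the target for every example

The rule was never written down; it emerged from repeated exposure to examples, which is the essence of machine learning.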

In the 2000s, IBM worked on an AI machine to answer questions such as those posed in natural language by the TV host of the quiz game Jeopardy! The team developed a machine consuming four terabytes of disk storage and named it after IBM’s first CEO, Thomas Watson. In 2011 ‘IBM Watson’ went head to head against former Jeopardy! winners Brad Rutter and Ken Jennings, with neither human nor machine having access to the Internet. It was a nail-biting time for the AI world, with a first prize of one million dollars. ‘IBM Watson’ consistently outperformed its human opponents and won. It showed that AI neural networks had been able to mimic the human ability not only to understand the question but also to ‘best guess’ the answer. By 2013 ‘IBM Watson’ software was used in its first commercial application, supporting management decisions for lung cancer treatment at Memorial Sloan Kettering Cancer Center in New York.

March 2016 brought another exciting challenge for AI when it competed at the game of Go. Go is far more difficult to play than other games, such as chess, and uses black and white pieces on a nineteen-by-nineteen board. The game dates back to ancient China and was considered an essential art for a cultured Chinese scholar, even getting a mention as a worthy pastime in the Analects of Confucius. Go is prohibitively difficult for traditional AI methods such as alpha-beta pruning, tree traversal and heuristic search. The AI machine developed for the challenge was named AlphaGo, and it achieved a significant milestone when it won four out of five games in a match against Go champion Lee Sedol. This feat should not be underestimated, as it leveraged two advancements.

First, it harnessed more powerful processing for matrix and vector calculations by using graphics processing units (GPUs), which came about thanks to the gaming industry. Second, it had the ability to spot patterns after learning from and searching through thousands of games. It then combined these two feats in a famous move, so well known in the AI world that it is simply termed ‘move 37’, in the second game against Lee Sedol. It was at this moment that AlphaGo turned perceived wisdom about Go on its head. AlphaGo played an entirely unexpected yet beautiful move. No human player had ever played that move, and legend has it that Lee Sedol had to leave the room momentarily in shock. AlphaGo had recognised patterns and played a novel move in a moment of genius, which not only turned the course of the game but perhaps changed history forever - AI had been creative!
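
For a sense of why GPUs mattered, here is a minimal sketch in Python with NumPy; the layer sizes and random weights are hypothetical, chosen only to mirror a nineteen-by-nineteen board, but they show the kind of matrix-and-vector arithmetic that sits at the heart of such pattern-spotting networks.

    import numpy as np

    # Illustrative only: scoring moves with a tiny, randomly weighted network.
    board_features = np.random.rand(361)           # a 19 x 19 board flattened to 361 numbers
    hidden_weights = np.random.rand(128, 361)      # one hidden layer of 128 learned units
    output_weights = np.random.rand(361, 128)      # one score per candidate move

    hidden = np.maximum(0, hidden_weights @ board_features)   # matrix-vector product + ReLU
    move_scores = output_weights @ hidden                      # score every possible move
    print("highest-scoring move index:", int(np.argmax(move_scores)))

Every step is just multiplying and adding large arrays of numbers, which is exactly the workload GPUs were built to accelerate.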

So, it is clear that the optimism about AI has been worth the wait. Although Hollywood has a tendency to anthropomorphise AI, the question from here should be less about fearing that AI will supplant humans and more about what AI will do for the world. In a similar way to Brunel building a bridge, the most important quandary became not whether it could be built but how to build it so that it safely ensures passage from one side to the other. AI is not just intellectually fascinating; it is morally crucial to consider how it evolves, because it will affect the future of life, including healthcare.

About the Author

Dr Michelle Tempest MA LLM MB BChir (Cantab) ACAT has expertise in medicine, psychiatry, psychotherapy, business, law and politics.

She has been a Partner at Candesic since 2013 and has led multiple projects reviewing market opportunities for investors and for public and private providers looking to develop beneficial partnerships. She has delivered projects for NHS Trusts (acute, community and mental health), Private Hospitals, Specialist Hospitals, Private Patient Units (PPUs), Community Providers, Care Homes and Care at Home.

In 2006 she edited the book 'The Future of the NHS' and more recently has delivered strategy projects for the UK government on ‘new ways of working’. She has an expert interest in medical technology companies, and has worked with several MedTech companies on expansion plans and advised throughout the entire life cycle of deals. Previously Michelle worked as a hospital doctor and liaison psychiatrist for over a decade, and continues to lecture in 'medical ethics and law' at Cambridge University.

Twitter @DrMTempest
