Bigger, Better, Faster

Improvements in AI are beginning to grab more headlines and attention, yet this surge is fairly recent. The idea of inanimate objects possessing intelligence resembling that of human beings has been around for a long time; such fantasies even appear in Greek mythology.

Yet computational artificial intelligence did not develop into a field of its own until the 20th century. Progress in mathematical logic laid the foundations not only for the first modern computers but also for the possibility of building an artificial brain. In the 1950s, scientists from an array of fields, including mathematics, engineering and even economics, began discussing the creation of an artificial brain, and in 1956 that discussion blossomed into an academic discipline at a conference at Dartmouth College in New Hampshire, where the now vastly popular term “artificial intelligence” was born.

For a while, however, the field of AI did not seem to live up to its promise. It took years of improvement before many more economists, psychologists and other professionals would take it seriously. But helped by advances in other technologies, AI became more sophisticated and has now exploded in popularity. This is evident in how several tech giants, from Google to Apple, are recruiting talented people in the field to advance their own projects, notably self-driving cars.

The improvements in AI behind its rise to fame are largely due to developments in three key areas of computer science: big data, better algorithms and better (and cheaper) computing power.

The availability of high volumes of information generated through digital channels enables companies, governments and other organisations to uncover insights that, perhaps, could not have been identified before. Consider online shopping. A traditional bookseller can see what the bestsellers are and even tie purchases to individual customers through loyalty programs or other schemes. But with online shopping, a whole array of additional information suddenly becomes available. Online retailers can see not only what customers bought, but also what they looked at, which promotions most influenced them, which genres they preferred, and how much time they spent browsing the site before making a purchase. Retailers can then use this information to produce advertisements and promotions tailored to specific individuals or groups. These vast insights produced by big data are exactly what the online retail giant Amazon relies on to sell its products.
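
As a rough illustration of the kind of tailoring described above, the sketch below groups a handful of made-up browsing records by customer and picks a promotion based on each customer’s most-viewed genre. The customers, genres and promotion rule are all invented for the example; a real retailer’s pipeline would be far larger and more sophisticated.

```python
from collections import Counter, defaultdict

# Made-up browsing records: (customer, genre viewed) pairs.
views = [
    ("alice", "sci-fi"), ("alice", "sci-fi"), ("alice", "history"),
    ("bob", "cookery"), ("bob", "cookery"), ("bob", "thriller"),
]

# Tally which genre each customer looks at most often.
genre_counts = defaultdict(Counter)
for customer, genre in views:
    genre_counts[customer][genre] += 1

# Pick a tailored promotion: discount each customer's favourite genre.
for customer, counts in genre_counts.items():
    favourite, _ = counts.most_common(1)[0]
    print(f"{customer}: 10% off new {favourite} titles")
```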

For AI, masses of collected data provide machines with the means to learn and execute certain tasks. This is how AlphaGo was able to defeat Go champion Lee Sedol: by sifting through a massive catalogue of data on previously played games and the moves available to it. Just as human brains rely on past experiences and examples to process new scenarios, machines can do the same with the help of big data.
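
To make the idea of “learning from past examples” concrete, here is a toy sketch. It is not AlphaGo’s actual method, which combines deep neural networks with tree search; it simply picks whichever move has the best win rate in a made-up log of previously recorded games.

```python
# Toy move selection by historical win rate.
# Each record: (move played, 1 if that game was won, 0 otherwise).
past_games = [
    ("corner", 1), ("corner", 0), ("corner", 1),
    ("centre", 1), ("centre", 1), ("edge", 0),
]

stats = {}
for move, won in past_games:
    wins, total = stats.get(move, (0, 0))
    stats[move] = (wins + won, total + 1)

# Choose the move that won most often in the recorded data.
best = max(stats, key=lambda m: stats[m][0] / stats[m][1])
print(best)  # -> "centre" (won both of its recorded games)
```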

As well as big data, advances in algorithms have enabled machines to, essentially, learn how to perform tasks independently. Research into artificial neural nets, which dates back to the 1950s, has played a major role in today’s drastic improvements in AI. Just as in a biological brain, layers of artificial neurons process information to produce particular outputs. It takes several such layers, with millions of neurons, to recognise a human face, for example. Critically, Geoff Hinton of the University of Toronto found ways to mathematically optimise these neural processes, an approach now known as deep learning, which is an important part of Google’s search engine and has the potential to be applied to a wide variety of other uses. “What got people excited about this field is that one learning technique, deep learning, can be applied to so many different domains,” says John Giannandrea, head of machine intelligence at Google.
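
The sketch below shows, in miniature, what “layers of artificial neurons” means in practice: a tiny feed-forward network in which each layer multiplies its inputs by a weight matrix and applies a non-linearity. The layer sizes and random weights are arbitrary choices for illustration; real networks used for tasks like face recognition have many layers and millions of learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of artificial neurons: weighted sum followed by a ReLU non-linearity."""
    return np.maximum(0.0, inputs @ weights + biases)

# A toy three-layer network: 8 inputs -> 16 neurons -> 16 neurons -> 2 outputs.
sizes = [8, 16, 16, 2]
params = [(rng.normal(size=(m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

x = rng.normal(size=(1, 8))      # one example with 8 input features
for w, b in params:
    x = layer(x, w, b)           # information flows forward through each layer

print(x)                         # the network's (untrained) output for this input
```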

But building a neural network for AI software requires plenty of computing power. The introduction of GPUs (graphics processing units), originally developed for video games, allowed neural networks to connect hundreds of millions of nodes and has proven instrumental in the development of AI. Banks of GPUs running neural networks are how Netflix can make reliable recommendations to its subscribers.
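
For a rough sense of why GPUs matter here, the sketch below moves a small network onto a GPU, where the large matrix multiplications inside each layer run in parallel. It assumes PyTorch is installed and falls back to the CPU if no CUDA-capable GPU is present; the network shape and batch size are invented for the example and are nothing like the scale of a real recommendation system.

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A small feed-forward network; production models are far larger.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 2048),
    torch.nn.ReLU(),
    torch.nn.Linear(2048, 10),
).to(device)

batch = torch.randn(512, 1024, device=device)   # a batch of 512 inputs
with torch.no_grad():
    scores = model(batch)                        # all 512 forward passes run in parallel

print(scores.shape, "computed on", device)
```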

Replicating the capabilities of the human brain in digital form has proven difficult. But years of research have now given the field of AI a new outlook. Plenty of companies, even some outside Silicon Valley, are looking to build AI into their products and services. Further advances are on the horizon, as the fantasy of computers performing human tasks starts to come true. If it does, what will the impacts be?

This article features in the Special Report titled ‘Will Humans Need Not Apply?’ (February 2017)
