Definition of Artificial Intelligence (AI)

Definition: AI is math and code that makes decisions about data.

Keep those words in mind every time you hear the word AI. Every time you hear the words “machine learning” or “deep learning.” There is no simpler or more practical way to define artificial intelligence. Every time AI is used, someone somewhere is relying on math and code to make decisions about data.1

Math and code are the subject, and makes decisions about data is the predicate. The noun and the action. That’s what AI is and what it does.

The math and code will be one of a variety of algorithms. An algorithm is simply a combination of math and logic, with the logic written in code. Those algorithms go by different names: deep artificial neural networks, support-vector machines, Gaussian models, multilayer perceptrons, LSTMs, convolutional networks, logistic regression, random forests. AI specialists will bandy about a swarm of terms related to these algorithms, their properties, and the technology used to implement them. People outside the field of AI just need to remember that all these terms name algorithms, and those algorithms are used to make predictions about data.
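To make "math plus logic, written in code" concrete, here is a minimal sketch of one of the algorithms named above, logistic regression, reduced to its decision step. The weights and inputs are made up for illustration, not learned from real data:

```python
import math

def sigmoid(z):
    # Math: squash a raw score into a number between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

def decide(features, weights, bias, threshold=0.5):
    # Math: a weighted sum of the input data, passed through the sigmoid.
    score = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
    # Logic: turn the score into a yes/no decision about the data.
    return score >= threshold

# A decision about one data point, using invented weights.
print(decide([1.2, 0.7], weights=[0.8, -0.4], bias=0.1))
```

Every algorithm in the list above is more elaborate than this, but the shape is the same: numbers go in, math produces a score, logic turns the score into a decision.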

Predictions eat information you have and spit out information you don’t have. For example, you can use historical data on traffic on a bridge to forecast future traffic on that bridge. You could also use the pixels in an X-ray to predict the likelihood that those pixels indicate the presence of cancer. In both cases, you used old data to make a prediction, to generate new information.
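The bridge-traffic example can be sketched in a few lines: fit a straight line to historical daily crossing counts and extrapolate it one day forward. The counts below are invented for illustration:

```python
def fit_line(xs, ys):
    # Ordinary least squares for the slope and intercept of a line.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

days = [1, 2, 3, 4, 5]                      # information you have
crossings = [1000, 1020, 1040, 1060, 1080]  # information you have
slope, intercept = fit_line(days, crossings)
forecast = slope * 6 + intercept            # information you didn't have
print(round(forecast))  # → 1100
```

Old data in, new information out: that is all a prediction is, whether the model is a straight line or a deep neural network.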

Here’s what’s cool: making decisions about data, making those predictions, can be very costly. AI makes it less expensive. In fact, AI can make some predictions very, very cheap. AI is what is called a general-purpose technology (the notorious GPT), similar to semiconductors, electricity, or steam power: a technology that transforms many industries at once.

That is, any industry that cares about predictions, that wants to know something about the future, that uses data to inform its decisions, that needs to make some kind of diagnosis about the health of humans or machines: all these industries will be affected by AI, and many of them will be transformed utterly. (Caveat: I’m not saying this will happen tomorrow, but it will happen over the next 10-20 years, spreading much as the Internet did.)

Other people might define AI as “a branch of computer science dealing with the simulation of intelligent human behavior in computers.” Well, we can make sense of that in our definition. Intelligent human behavior happens in context, in response to an environment. A doctor makes a diagnosis based on the symptoms she observes. Another word for simulation is imitation, emulation or mimicry. To imitate intelligent behavior, we must know the context, we must take in that data, the symptoms. In response to that context, we will make a prediction about what an intelligent person would do. So simulating intelligent behavior is just making a prediction based on data. That applies to every AI problem.

AI and its discontents dominate the news about technology. On the one hand you have hype, which says that AI is incredibly powerful and will either save or destroy us, and on the other you have nay-saying skeptics moaning about AI’s flaws and predicting an AI winter.2

It’s easy to get lost in long laments about how AI doesn’t learn like children do. That it requires too much data to learn how to make an accurate prediction. That it is easily flustered by small changes in data.

Footnotes

1) In this post, we use the terms “decisions about data” and “prediction” interchangeably.

2) Positive AI hype is probably best exemplified in IBM’s Watson ads. Watson can do anything, including designing your dress and choosing your future spouse. Most large tech companies and all AI startups are at least a little guilty of that sort of hype. Negative AI hype is best illustrated by Elon Musk, Eliezer Yudkowsky, Nick Bostrom, Max Tegmark, and the flock of doomsday prophets who foresee an AI apocalypse. The third group, the skeptics of AI’s current abilities, includes psychology professor Gary Marcus. Slicing more thinly, you will find many AI experts who have forged emotional relationships with a particular algorithm, usually one they wrote their dissertation about, who will throw shade on those lucky algorithms receiving more attention in the media. Sometimes this is combined with a commercial interest in promoting a company based on their special sauce, or an attempt to justify their poor technical decisions. Finally, you have a group of AI practitioners who roughly know what AI in its current state is capable of, and also what it can’t do. Yann LeCun is a good example of this group. These people are skeptical of the hype, horrified by the doomsday prophets, puzzled by the broad-brush statements of the skeptics, and in most cases privately astonished by the accelerating advances in AI that are either too incremental or academic to be covered by the media. That is, AI is at the same time both overhyped and underrated.
