AI’s ‘intelligence’ doesn’t materialise out of nowhere.

In the same way you can’t expect a human to pluck the ability to speak a foreign language out of the ether, you can’t expect an AI to learn something it hasn’t been taught or told how to seek.

Building a strong AI is a balance between finding data to learn patterns from, and expert input on what form those patterns should take. The strongest AIs combine the best of both, learning from vast amounts of data and using sophisticated ways of analysing patterns within that data.

A powerful combination

In September 2020, The Guardian published an article written entirely by an AI called GPT-3. The article was crafted with such human-like fluency it was near-impossible to tell that it was computer generated.

Much like an illusionist's trick, it looks like magic, but it isn't. Instead, it's the combination of domain-specific engineering, vast amounts of data, and the human touch.

First, the foundation of GPT-3 was tuned to generating long-form text. It was then fed billions of words' worth of data to learn from, ultimately leading to the article.

The end result, impressive as it is, is simply derived from a combination of the AI being taught specifically what to do and having a lot of data to learn from in order to do it.

On top of this, the content of the published Guardian article was actually handpicked by humans from several GPT-3 articles. So the result was more polished due to its creators’ final collation — it couldn’t quite achieve the perfect result solo, but it shows how a combination of humans and AI can be far stronger than either alone.


Expecting the unexpected

When building a strong AI, there is also the element of uncertainty to consider. This is important because an AI that gives definite answers can be misinterpreted if a human treats its outputs as more certain than they actually are.

Let’s say you have an AI that is providing a forecast of the number of widgets you will sell in Oxford next week. It could predict that you will sell 20 widgets in that time frame and location, leading you to stock a little over 20 widgets in order to meet that demand.

But this kind of forecast is just an estimate, and there is a level of uncertainty in that. It may often be close, and occasionally exactly right, but it won't be right every time. This can leave you either overstocked or with insufficient goods to meet the demand. Here, one situation may be preferable to the other — depending on the costs of warehousing or the perishability of the stock — but with a single sales estimate, it's hard to balance your exposure to these asymmetric costs and benefits.

Instead, a stronger AI system can be built that gives a probabilistic forecast. For instance, on 90% of similar weeks in Oxford, you will sell between 15 and 25 widgets. This deeper information would put you in a better position to make stocking decisions that balance risk and reward.
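As a minimal sketch of what such a probabilistic forecast might look like, suppose (purely hypothetically) that weekly widget sales in Oxford follow a Poisson distribution with a mean of 20. A 90% prediction interval can then be read off the distribution directly:

```python
import math

def poisson_cdf(k, lam):
    """Cumulative probability P(X <= k) for a Poisson(lam) variable."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def central_interval(lam, coverage=0.90):
    """Smallest interval [lo, hi] leaving at most (1 - coverage)/2
    probability in each tail of a Poisson(lam) distribution."""
    lower_tail = (1 - coverage) / 2
    lo = 0
    while poisson_cdf(lo, lam) < lower_tail:
        lo += 1
    hi = lo
    while poisson_cdf(hi, lam) < 1 - lower_tail:
        hi += 1
    return lo, hi

lo, hi = central_interval(20, coverage=0.90)
print(f"On 90% of similar weeks, expect between {lo} and {hi} widget sales")
```

The Poisson assumption and the mean of 20 are illustrative stand-ins for whatever distribution a real forecasting model would produce; the point is that an interval, unlike a single number, tells you how much stock is needed to cover demand at a chosen level of risk.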

An AI with this foundation is particularly useful in any sector that relies on forecasts to balance supply and demand, such as fast-moving consumer goods, where better-informed decision-making can help reduce waste and increase profit margins.

The applications of this AI are also valuable for getting an idea of what an extreme scenario looks like. For instance, if the AI is predicting average sales of 20 widgets, are actual sales of 2 widgets unsurprising or a once-in-a-decade event? Taking the time to build uncertainty into the core of your AI can save you from unforeseen events further down the line.
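Continuing the same hypothetical Poisson model with mean weekly sales of 20, the question "are actual sales of 2 widgets unsurprising?" has a concrete answer — you can compute the probability of such a low week directly:

```python
import math

# Hypothetical model: weekly sales ~ Poisson(20).
# P(X <= 2) = e^-20 * (20^0/0! + 20^1/1! + 20^2/2!)
p_low = math.exp(-20) * (1 + 20 + 20**2 / 2)
print(f"P(sales <= 2) = {p_low:.1e}")  # roughly 5e-07: far rarer than once a decade
```

Under this model, a two-sale week is so improbable that observing one would suggest the model's assumptions no longer hold — a prompt to investigate, rather than an ordinary bad week.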

Confident decision making

When you're using an AI to pick between distinct outcomes, you may want it to provide more than one answer, to account for uncertainties like those in the widget sales scenario. There are some situations where this is critical. For example, when you're in a self-driving car piloted by an AI.

This AI takes a vast number of images and then interrogates itself as to what it’s seeing in the images so that it can make decisions, such as “this image is of a green light, therefore I can go.” But what happens if there is some ambiguity in what the AI is processing? As humans, if we were unsure whether a traffic light was red or green, we would most likely err on the side of caution and not go.

So, if an AI perceives a 60% chance that it is seeing a green light, a 30% chance it's amber, and a 10% chance it's red, simply going with the most likely option could be catastrophic: even a small chance of the light being red carries an enormous cost.
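One way to act on this nuance is to make decisions from the model's full probability output rather than its single most likely label. The probabilities, threshold, and function below are illustrative, not taken from any real self-driving system:

```python
def traffic_light_action(probs, min_confidence=0.99):
    """Proceed only when the classifier is overwhelmingly confident the
    light is green; otherwise fall back to the safe default of stopping."""
    if probs.get("green", 0.0) >= min_confidence:
        return "go"
    return "stop"

# The 60/30/10 split from the example: green is the most likely label,
# but nowhere near certain enough to act on.
probs = {"green": 0.60, "amber": 0.30, "red": 0.10}
print(traffic_light_action(probs))  # prints "stop"
```

The safe default and the 99% threshold encode the asymmetry of the situation: a needless stop costs a few seconds, while a wrong "go" could cost lives.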

A strong AI is able to report outcomes with this kind of nuance, which means you can tune your decision making – whether human or algorithmic – to respond to those levels of confidence. This enables you to understand extremes and respond to scenarios in order to mitigate losses and reduce undesirable outcomes.

Achieving strong AI

Utilising AI to its maximum potential requires a strong understanding of its intended purpose as well as meticulous consideration of how you are going to use its analysis to make decisions.

This means you should be actively involved in building an AI that will meet your needs. You can’t leave AI to its own devices and assume it knows what it’s doing. People have to navigate uncertainty all the time and the same goes for AI.

Engage with your AI’s development: consider how you’re going to use it and give the AI the structure and data needed to excel in the task at hand. To see how the Smith Institute can help you accelerate innovation with strong AI, get in touch.