May 31, 2021

Artificial Intelligence becomes more human

I spoke earlier this year about artificial general intelligence - the idea that a machine could understand or learn any intellectual task a human being can - and I'd like to return to it, because the topic is moving from the abstract to the concrete much faster than I had expected.

Specifically, I’m talking about GPT-3.

GPT-3 stands for Generative Pre-trained Transformer 3. Developed by OpenAI, which was co-founded by Elon Musk and Sam Altman, GPT-3 was pre-trained on hundreds of billions of words drawn from the internet and from books; the training compute alone is estimated to have cost around US$4.6 million. The result is one of the largest neural networks ever created - 175 billion parameters - which uses that training to generate text of its own.

GPT-3 hasn't been trained for a specific use-case, nor with labelled right and wrong answers; you simply give it a text prompt and it continues the text from there. That means it can be turned to virtually anything - from writing an article to creating an advertisement.
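
To make that concrete, here is a minimal sketch of what prompting GPT-3 looks like in practice, based on OpenAI's API as documented around the time of writing (the openai Python package; the key and the prompt are placeholders of my own, not OpenAI examples):

    import openai

    # Placeholder - in practice the key comes from an OpenAI account.
    openai.api_key = "YOUR_API_KEY"

    # Whatever text you give as a prompt, GPT-3 simply continues it.
    response = openai.Completion.create(
        engine="davinci",   # the base GPT-3 model exposed through the API
        prompt="Write a short tagline for a reusable coffee cup:",
        max_tokens=32,
        temperature=0.7,    # higher values give more varied completions
    )

    print(response["choices"][0]["text"].strip())

The sketches later in this piece follow the same pattern; only the prompt changes.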

If you're interested in understanding how GPT-3 works, I'd suggest reading the paper OpenAI published in July 2020.

GPT-3 Applications

At the GPT-3 launch, OpenAI listed numerous examples of how the AI could be used. One was using GPT-3 to read a legal text and condense it into a paragraph which an 8-year-old could understand.
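
OpenAI's launch demo isn't reproduced here, but a prompt in that spirit might look like the sketch below; the legal passage and the wording are my own illustration, not OpenAI's:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    legal_text = (
        "The party of the first part shall indemnify and hold harmless "
        "the party of the second part against all claims arising hereunder."
    )

    # Ask for a child-friendly rewrite of the dense passage above.
    prompt = (
        "Rewrite the following text so that an 8-year-old could understand it.\n\n"
        "Text: " + legal_text + "\n\n"
        "Simple version:"
    )

    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=80,
        temperature=0.3,  # a low temperature keeps the rewrite close to the source
        stop=["\n\n"],    # stop at a blank line so the model doesn't run on
    )

    print(response["choices"][0]["text"].strip())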

Since the launch, the AI community has found numerous further uses for GPT-3. For example, one developer has shown how simple apps can be built from plain, non-technical English instructions.

This use-case is potentially game-changing, because it means that anybody could program without knowing how to code.
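
The developer's actual prompt isn't public as far as I know, but the underlying trick is to show GPT-3 a few examples of "description followed by code" and let it complete the next one. A toy sketch, again with illustrative examples of my own:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Few-shot prompt: two description -> HTML pairs, then a new description.
    prompt = (
        'Description: a red button that says "Stop"\n'
        'Code: <button style="background: red; color: white;">Stop</button>\n'
        "\n"
        'Description: a heading that says "Welcome" above a search box\n'
        'Code: <h1>Welcome</h1><input type="search" placeholder="Search...">\n'
        "\n"
        'Description: a blue link to example.com labelled "Read more"\n'
        "Code:"
    )

    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=64,
        temperature=0,              # keep the output as predictable as possible
        stop=["\nDescription:"],    # stop before the model invents a new example
    )

    print(response["choices"][0]["text"].strip())

GPT-3 was never trained on "description to HTML" as a task; the pattern in the prompt is enough for it to continue in kind.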

Another example, from September 2020, was GPT-3 being used to write an entire article. The prompt was:

“GPT-3, please write a short op-ed of around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.”

The result was an op-ed published in The Guardian, a British media outlet.

GPT-3 Limitations

So, why hasn't more been made of GPT-3 yet? First, GPT-3 is currently expensive to run. At the time of writing, one experiment, an app called Philosopher AI, has had to pause because of the cost of running GPT-3.

Secondly, the results are still far from perfect: GPT-3 struggles with reasoning and logic-based tasks, and it will confidently generate statements that are simply untrue, which makes it unsuitable for fact-based news articles.

Lastly, outside of OpenAI, nobody has access to the model itself: the trained weights are closely guarded, there are no plans to make them open source, and GPT-3 can only be reached through OpenAI's API. That means the model cannot be tweaked to suit a specific project.

All of these constraints will ease over time, either because OpenAI addresses them in future versions or because competitors do.

However it evolves, GPT-3 is a concrete and significant step towards artificial general intelligence: an AI without a single narrow purpose, one that can take instructions and work out its own way of performing a task, much as a human would.