
OpenAI's GPT-3 can do more than you think

Aug 18, 2022

I share a bit about my experiments with the beta of OpenAI's language prediction model (GPT-3) and about its potential in the future.


OpenAI's GPT-3 is a new language prediction model that has been making waves in the tech community. I've been experimenting with it and I'm impressed with what it can do.
GPT-3 is designed to predict the next word in a sentence, based on the previous words in the sentence. This is similar to how humans learn language. We don't just memorize words, we learn the rules of grammar so that we can generate new sentences.
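The "predict the next word from the previous words" idea can be sketched with a toy bigram model. This is not how GPT-3 works internally (GPT-3 is a large transformer network), but it illustrates the basic prediction task:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent follower of `word` seen in training."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else ""

# Tiny made-up corpus, just for illustration
model = train_bigram("the cat sat on the mat because the cat was tired")
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

GPT-3 does the same kind of thing, but conditions on the entire preceding context rather than a single word, which is what lets it handle grammar and long-range meaning.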
GPT-3 is different from other language models because it uses a neural network instead of a statistical model. This means that it can learn relationships between words that are not explicitly stated in the training data.
For example, consider the prompt "The cat sat on". If you ask GPT-3 to predict the next word, it will likely predict "the" (article). If you ask it to predict the next two words, it will likely predict "the mat" (article + noun).

This ability to generalize from limited data is what makes GPT-3 so powerful. It allows us to build models that can learn complex tasks without needing large amounts of training data.
One area where GPT-3 could be particularly useful is natural language processing (NLP). NLP is a field of computer science and artificial intelligence that deals with understanding and generating human language.
Current NLP models are often limited by the amount of training data they have access to. For example, if you want to build a model that can understand medical texts, you need a large corpus of medical texts to train your model on. But this data is often proprietary and difficult to obtain.
With GPT-3, we can train NLP models on much smaller datasets because GPT-3 can learn from very little data. This could open up NLP to a whole new set of applications where data is scarce or proprietary.

Another area where GPT-3 could be useful is building chatbots. Chatbots are computer programs that simulate human conversation. They are often used as customer service agents or virtual assistants.
Current chatbots are limited by their lack of understanding of human conversation. They often rely on simple keyword matching algorithms, which can lead to frustrating conversations for users.
With GPT-3, we can build chatbots that understand human conversation by learning from actual human conversations.
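The keyword-matching approach criticized above looks roughly like this (a deliberately naive sketch with hypothetical rules), and the final call shows exactly how it goes wrong:

```python
def keyword_bot(message: str) -> str:
    """Naive keyword matching: the first matching rule wins, context is ignored."""
    rules = {
        "refund": "You can request a refund in your account settings.",
        "password": "Use the 'Forgot password' link to reset it.",
    }
    text = message.lower()
    for keyword, reply in rules.items():
        if keyword in text:
            return reply
    return "Sorry, I didn't understand that."

# The bot matches "refund" even though the user is asking for the opposite:
print(keyword_bot("I do NOT want a refund, just cancel the duplicate charge."))
```

A model like GPT-3 conditions on the whole sentence instead of isolated keywords, so it can distinguish "I want a refund" from "I do not want a refund".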

What's the catch?

The funny thing here is that I didn't actually write the article - just the summary at the start. Could you spot that it wasn't written by a human?
All I did was provide some input about the topic along with some tags. This was inspired by maraoz, so creds to him for the clever idea. I wonder what improvements have been made since summer 2020.

There are a few hypothetical uses for this technology, and I'm eagerly watching what kinds of services can be thought up and created.

Per Andersson

Senior Cloud Engineer at GESHDO with a passion for Cloud and IT-infrastructure.