Experimenting with GPT-3

About a year and a half ago, OpenAI rolled out GPT-3, a massive text-prediction transformer model that shattered many assumptions about the difficulty of understanding and generating written language. GPT-3’s predecessors (GPT and GPT-2) had shown that producing sensible responses to a variety of input texts was possible with enough data, but GPT-3 took that …