GPT-3: The Good, the Bad, the Mind-Blowing | Episode 54


In today’s episode of the Animalz podcast, I enlist Animalz Head of R&D, Andrew Tate, to talk through the good, the bad, and the downright mind-blowing aspects of GPT-3, the uncannily powerful natural language model from OpenAI.

In today's episode, we:

  • Share examples of the type of content you can create with GPT-3
  • Analyze the quality and caliber of its writing
  • Answer the question on every content marketer’s lips: should we be worried?

Listen to the episode above, or check it out in your favorite podcast app.


Key Takeaways

1:37 - What the heck is GPT-3 anyway?

Andrew explains the concept of a natural language processing model, and explores GPT-3's ability to create original writing from any given prompt.

"So GPT-3 is a natural language processing model. And basically what it is is it's gone off and it's read pretty much everything that there is on the internet. About I think it's something like 175 million parameters or something along those lines."

6:06 - Getting GPT-3 to write titles, URLs and blog posts

Andrew prompts the model to create titles, URLs and blog post excerpts, and GPT-3 spits back a host of perfectly plausible suggestions.

"A few examples would be like 'the best ways to calculate churn.' And then it gave me 'a primer on churn: defining forecast and improving.' 'When churn affects profitability.' 'The effect of churn on lifetime value and churn rate.' 'Retention rate versus churn rate.' 'Churn rate, how to calculate customer churn and stop it.' And apparently that one would be written on Investopedia."

9:56 - Writing a novel with AI

Ryan gets GPT-3 to finish writing an excerpt from his novel, and the results are eerily convincing.

"I'll just read a quick excerpt from it, "He leaned back against the tree, watching the boat bob as the water flickered with moonlight. The branches above rocked with each gust of breeze, the innumerable leaves whispering, "I'm here. I'm still here." He smiled at the irony of it. He still lived.""

11:34 - Don't head to your underground bunker... yet

GPT-3 excels at short-form creative content, but loses its way with long-form factual content.

"For actual content marketing, good writing on the internet, it's not going to be a replacement for writers. It's more going to be a tool that writers use to probably augment their current process and also to just make them better."

15:22 - Automating the less interesting parts of marketing

The biggest opportunities with GPT-3 lie in its ability to augment and support a human writer. Andrew explores content repurposing, turning existing articles into promotional tweets.

"Say for the article that you've just written, okay, here's the title of that article and here's the meta description. And then you say generate a tweet for this, and it will generate a tweet with emojis, with the right hashtags in it as well. And you can say, generate 10 tweets and it will do it in five seconds. And that's your social media for that article already generated."

17:13 - GPT-3 as your creative co-pilot

GPT-3 can function as a powerful aide for brainstorming new article ideas, meshing concepts to create novel angles and framings in a matter of seconds.

"In the same way you might get a dozen people in a room to brainstorm ideas, the value of their output is not necessarily coming up with a beautiful, fully fleshed out article. It's finding those novel framings that you would never have come up with yourself, taking the component parts of whatever you want to write about, meshing them together in all kinds of weird ways and seeing what new stuff comes out. And GPT-3 can do that at scale, in a fraction of the time, with a vast literary canon."

21:15 - With great power comes great responsibility

Anyone building with GPT-3 needs to be conscious of the potential for bias in the model's output.

"When you're using this on the back end, one of the things which OpenAI are very explicit about is that because this is trained on the entire internet and the internet is not a very nice place a lot of the time, you have to be very careful of toxicity and biases in whatever you are generating."


Full Transcript

Ryan: (00:06)
In today's episode of The Animalz Podcast, I've enlisted Animalz Head of R&D Andrew Tate to talk through the good, the bad, and the downright mind-blowing aspects of GPT-3, the uncannily powerful natural language model from OpenAI. We share examples of the type of content you can create with GPT-3, we analyze the quality and caliber of its writing, and we answer the question on every content marketer's lips: should we be worried?

Ryan: (00:43)
Welcome to another episode of The Animalz Podcast. I am joined today by the AI that we have trained to replace Animalz head of R&D, Andrew Tate.

Andrew: (00:54)
Nice to be on first time around as an AI, at least.

Ryan: (00:58)
I'm sure you'll learn a lot from the experience and your second time will be even better.

Andrew: (01:01)
Yeah, exactly. And then I'll replace you.

Ryan: (01:04)
Yeah. Well, as you can probably guess, we are talking today about GPT-3, the topic that has sent Twitter, my emails, all of our Slack channels absolutely mad in the last couple of weeks. Especially as we are a content marketing agency, anything at the intersection of technology and writing is going to be very interesting for us. And that's already proven to be the case for GPT-3. So, Andrew, as somebody who is much more versed in these matters than I am, what actually is GPT-3?

Andrew: (01:37)
Yeah. So GPT-3 is a natural language processing model. And basically what it is is it's gone off and it's read pretty much everything that there is on the internet. I think it's something like 175 billion parameters, or something along those lines. And it's basically like a neural network and it's gone off, read all this stuff, and then tried to condense it down to the rules of language. And the point of that is that it's trying to learn how we write and how we communicate, in terms of writing, at least. And then there are the uses for it, which I guess we can come on to. And the point is that from that model that it's generated, can it predict, from any given word you give it, what the next word is going to be, and the word after that, and the word after that, so that it can build sentences and paragraphs and an entire corpus of writing, basically.
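Andrew's description of predicting "the next word and the word after that" is the autoregressive loop at the heart of models like GPT-3. A toy sketch of that loop, with a hand-written bigram table (all words and probabilities invented for illustration) standing in for the 175 billion learned parameters:

```python
# A toy version of the loop Andrew describes: predict the next word,
# append it, and repeat. GPT-3 does this with a huge neural network;
# here a tiny hand-written bigram table stands in for the model.

BIGRAMS = {
    "content": {"marketing": 0.7, "is": 0.3},
    "marketing": {"is": 0.6, "works": 0.4},
    "is": {"changing": 1.0},
}

def generate(start, steps):
    """Greedily pick the most likely next word, one step at a time."""
    words = [start]
    for _ in range(steps):
        options = BIGRAMS.get(words[-1])
        if not options:
            break  # no known continuation; stop early
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("content", 3))  # content marketing is changing
```

GPT-3 differs in that it scores every token in its vocabulary with a neural network and samples from that distribution, but the generate-append-repeat loop is the same idea.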

Ryan: (02:46)
So obviously the thing to point out is that this is brand new stuff it's creating. It's taking, as you say, the rules that it understands human writing to follow, and it's using that to create brand new material that has likely never existed in any capacity anywhere in human history.

Andrew: (03:02)
Yeah, exactly. So, yeah, it's not just plagiarizing what everybody else has written, or not in the strict definition of the word plagiarism, at least. Yeah. It basically is making up stuff as it goes along, but trying to frame it within these language rules and then frame it within whatever you give it to start off with as well, which is called a prompt within GPT-3. And a way of understanding its creativity in this respect is that it comes up with names and it comes up with titles. And if you go and Google these people, they don't exist. And it comes up with URLs that don't exist and companies that don't exist, but all of these things fit within whatever you gave it to start off with. It's not just making up random stuff, it's making up creative answers to what you prompted it to do.

Ryan: (03:51)
And it's been fascinating playing with it. The URL thing you mentioned there, I saw the examples you generated. It obviously follows the precise schema of a URL, but it's even kind of idiosyncratic in the way that different companies' URL structures tend to be. So some use sub-folders, some use sub-domains. It's just absolutely strange. So maybe we can talk through some of the examples of things we've actually generated. Stuff we've put in, and the haunting output it has generated for us as a result.

Andrew: (04:21)
So yeah, you and I have played around with it, actually probably in different ways. So I've been thinking about it in terms of the work we do at Animalz and content and business writing, and seeing what I could generate through that. So the first thing I did when I got access last Monday was literally type out, within the playground that they give you access to, "Please write me a blog post on single sign-on authentication." And that's what I wrote, just that as a single prompt.

Ryan: (04:50)
Being polite was crucial for that as well.

Andrew: (04:52)
Yeah, exactly. Yeah. It's important to be polite to GPT-3. And it spat out like three or four hundred words, which made absolute sense. It was about SSO. It defined what it was, told you what you should be using it for, the benefits of it. You could put that on the internet if you were running an SSO company or writing content for SSO. And nobody would know that it was written by a computer. Nobody would know that the only thing a human did was say, "Please write me a blog post about SSO." I found it hilarious more than anything else, like, "Wow, this is incredible that it did this from such a simple prompt." And then after that I've been playing around with it in different ways. One of them was to try and generate titles for articles based on what is already out there. So one of the interesting things about GPT-3 is it's what's called a few-shot learner. And the idea with that is that it has this model of language.

Andrew: (06:06)
And if you give it a prompt, it does a good job, but actually if you give it a few examples of what you want, it will do a drastically better job, and by a few I just mean like two or three. It doesn't need like a hundred different examples, it just needs a couple of them. So I wrote a quick kind of app: you put a search term into it, the app then goes away and grabs the first few titles for that search term in Google, uses those as the prompt for GPT-3, and GPT-3 then outputs a ton of different titles for you, which in theory are then SEO-optimized to the point that they could rank on page one of Google, which is pretty nice. So a few examples would be like 'the best ways to calculate churn.' And then it gave me 'a primer on churn: defining forecast and improving.' 'When churn affects profitability.' 'The effect of churn on lifetime value and churn rate.' 'Retention rate versus churn rate.' 'Churn rate, how to calculate customer churn and stop it.' And apparently that one would be written on Investopedia.
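The title-generator app Andrew describes can be sketched roughly like this. Fetching the top-ranking titles is omitted, and the API call is left as a comment, since the exact endpoint and parameter names are assumptions to verify against OpenAI's documentation; only the few-shot prompt assembly is concrete.

```python
# Sketch of the title-generator flow: top-ranking titles for a search
# term become few-shot examples, and GPT-3 is asked to continue the
# list. Function name and prompt format are illustrative, not the
# actual app described in the episode.

def build_title_prompt(search_term, example_titles):
    """Turn a few real SERP titles into a few-shot prompt."""
    lines = [f"Article titles about '{search_term}':"]
    lines += [f"- {title}" for title in example_titles]
    lines.append("-")  # a dangling bullet invites the model to continue
    return "\n".join(lines)

prompt = build_title_prompt(
    "how to calculate churn",
    ["A Primer on Churn: Defining, Forecasting, and Improving",
     "Retention Rate vs. Churn Rate"],
)

# The completion call would go roughly here -- endpoint and parameter
# names are assumptions to check against OpenAI's current docs:
# completion = openai.Completion.create(
#     engine="davinci", prompt=prompt, max_tokens=100)
```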

Ryan: (07:13)
Nice, it specified the website as well.

Andrew: (07:16)
Yeah, exactly. It specifies websites. And so one of the things I did was 'the best Facebook ads,' and, yeah, it specified that one of those should be on AdEspresso: '10 tips to kickstart your Facebook advertising strategy on AdEspresso.'

Ryan: (07:31)
I feel like I've read that. Yeah.

Andrew: (07:32)
Yeah, exactly. It sounds exactly like the kind of articles you would expect to come up on page one of the search results. And that's the thing, it all makes complete sense when you're reading through it. Then from that, I think the way that we've been trying to go, you and I in different ways, is generating longer and longer content and seeing how that comes out. So at the end of last week I spun up a quick website with a few examples, where basically what I did was just give it a title and a one-sentence kind of description of what I wanted the article to be like, and then just wanted to see what it wrote for me. So one of the examples I had was this one, which is 'graffio morphism.' 'Graffio morphism is the phase we are in right now, all the tools and processes for remote worker analogs of an office totem.' So that sentence doesn't actually make that much sense. For one, graffio morphism is a word I made up, and the sentence itself is completely esoteric, right? Like you don't really... You could write that in a much better way, for sure.

Andrew: (08:51)
But from that it generated: 'New technologies or new tools are what fuel innovation and change. Tools are connected to mindsets and these, in turn, reflect historical context. For example, when we replaced voicemail and physical calendars with emails and to-do lists, we took an old behavior and replaced it with a new one. But rather than doing away with the old, we merely added the new. This was the phase of adaptation and it's still with us. The phase of adoption can only come once we remove the old from the new.' And that is the kind of idea that I was trying to get to with this weird prompt: the idea that, yeah, we need to move to new remote tools rather than just trying to copy what office people did beforehand. And it makes sense. And I wrote like maybe 750 words by bootstrapping off that. So whatever I liked about the output, I would then use as the next input, the next prompt. And it would kind of build and build the article over time. And I know you've done that in creative writing.
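The bootstrapping workflow Andrew describes, where each liked completion becomes part of the next prompt, looks something like this sketch. The `complete` function is a stub standing in for a real GPT-3 call, and in practice a human edits between rounds.

```python
# Sketch of the bootstrapping loop: whatever you like from each
# completion becomes part of the next prompt. `complete` is a stub
# standing in for a real GPT-3 call so the loop itself is runnable.

def complete(prompt):
    # Stub: a real implementation would send `prompt` to GPT-3 here.
    return " ...and the model continues the thought."

def bootstrap(seed, rounds, keep=lambda text: text):
    """Grow a draft by completing, keeping what you like, repeating."""
    draft = seed
    for _ in range(rounds):
        continuation = complete(draft)
        draft += keep(continuation)  # `keep` is where a human edits
    return draft
```

The `keep` hook is where the human judgment lives: pass a function that trims or rewrites each continuation before it joins the draft.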

Ryan: (09:56)
Yeah, so while Andrew was busy diligently trying to apply this to our business, I was completely self-indulgently trying to write my novel with it. I took basically the opening chapter of a little book I've self-published, and I was just curious to see what it outputted. And this was the thing that, above all else, made me sit up and pay attention to it. Because when you use business terminology, I think, it's quite easy, even if it sounds like it makes sense, to spot disconnects between terms it's mashed together, or in the overall framing of the piece, because there's objective information in there. I think the thing with creative writing is, there's so much more you can get away with, effectively. And the output it gave me, it may as well have been a novel I'd read somewhere.

Ryan: (10:44)
I'll just read a quick excerpt from it: 'He leaned back against the tree, watching the boat bob as the water flickered with moonlight. The branches above rocked with each gust of breeze, the innumerable leaves whispering, "I'm here. I'm still here." He smiled at the irony of it. He still lived.' And it goes on to introduce a couple of interesting plot points about abandoning a village that's destined to die at nightfall, which was nowhere within my original prompt. I feel like if I played around with this enough, you could actually iteratively build a story structure out of the output it gave you. Yeah, absolutely fascinating. So, I mean, as people that write for a living in many different forms, should we be worried about this? Should we be moving to our underground bunkers right about now?

Andrew: (11:34)
Yeah. So I think not, basically, at the moment. My first impressions haven't really changed on this, which is that it is absolutely amazing, but it is not as amazing as a good human writer. And I think that what you and I have seen playing around with this over the last week or so is that short-form stuff with good prompts, it's excellent at. Like, incredible. The thing which blew me away was I did a quick tweetstorm on this last week, and the first few tweets in the tweetstorm were written by me, and then I used those as the input to GPT-3 to see what it came up with. And it came up with some absolutely full-on, proper tweets, which fit perfectly with the rest of the tweetstorm, and just added them on. They made sense. So why not add them on? So that kind of short-form stuff I think it's really great at. But it loses its way when it's asked to write longer-form stuff. At the moment, who knows what will come of it.

Ryan: (12:50)
That's the perpetual caveat with this, isn't it?

Andrew: (12:53)
Exactly. Yeah. What GPT-4 or -5 or GPT-N will show, who knows. But at the moment, I think there are kind of three ways of thinking about it, which briefly would be: the bottom end of content, the kind of stuff where writers get paid a few dollars for two or three hundred-word SEO-optimized articles, like content-factory kind of stuff. Those could easily be replaced by this. Give it a good prompt and it will spout exactly what you want. But for actual content marketing, good writing on the internet, it's not going to be a replacement for writers. It's more going to be a tool that writers use to probably augment their current process and also to just make them better, I think, over time as well.

Ryan: (13:49)
That's a good point, that even within human writing there is already a massive gulf in the output that people create. In the same way that some people create absolute literary masterpieces, like my book, of course, at the other end of the spectrum you do have even spun content, one of these old kind of black-hat SEO techniques where you basically just take existing articles and mix them up a little bit and try and create something that's technically new as a result of it. So even if GPT-3 is no match for the Shakespeares of our world, or even any vaguely competent writer, there is still this subset of content in the world which is bad to begin with. And that is definitely something that GPT-3 is going to be able to pluck away at and do a better job at, I think, in less time as well.

Andrew: (14:37)
Yeah. So you've read about copycat content before, and this is almost like a beautiful example of copycat content, like the finished article of it, kind of thing, where it's not plagiarism, but it's just taking what's out there and making something new, but not something drastically new, and not something which takes full-on creativity to do. And then your point about the time cost: you can definitely write an article with this in less than an hour, but it's not going to be a good article, and the amount of time that you would then have to put into editing it and revising it, if you want to produce something good with this, probably takes that away, I think.

Ryan: (15:22)
Okay. One of the things you pointed out is there actually is a really valuable use case for some of the lower-skilled parts of content marketing, in the sense that content repurposing is a great example. A big part of my job is spent actually taking parts of existing articles and doling them out into new formats, be it social media content, meta descriptions, information that is designed as a summary of or an insight into the broader, larger piece of content. That's something that this could do beautifully, I'd imagine.

Andrew: (15:54)
Yeah. So an example from this week, yeah, would be tweets, it's excellent at it. You can imagine. I haven't tried it on LinkedIn, but it would be the same kind of thing, especially with this few-shot learning technique, where, for your company, you probably have a certain style guide for social, whether that's kind of whimsical or it's very businesslike, whatever. You give it a few examples of what you've done before. Maybe your tweet style is full of emojis and hashtags. So you give it three examples of good ones that you've done before. And then, for the article that you've just written, you say, okay, here's the title of that article and here's the meta description. And then you say generate a tweet for this, and it will generate a tweet with those emojis, with the right hashtags in it as well. And you can say, yeah, generate 10 tweets, and it will just do it, and it will do it in five seconds. And that's your social media for that article already generated.
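That few-shot tweet-repurposing flow can be sketched as a prompt builder. The example tweet, hashtag, and article details below are invented; the point is the shape of the prompt: a few style examples, then the new article's title and meta description, then the instruction.

```python
# Sketch of the tweet-repurposing prompt: a few past tweets set the
# brand voice, then the new article's title and meta description are
# appended with an instruction. All example content is illustrative.

def build_tweet_prompt(past_tweets, title, meta_description, n=10):
    """Assemble a few-shot prompt asking for `n` promotional tweets."""
    parts = ["Example tweets in our brand voice:"]
    parts += [f"- {tweet}" for tweet in past_tweets]
    parts.append(f"Article: {title}")
    parts.append(f"Summary: {meta_description}")
    parts.append(f"Write {n} tweets promoting this article:")
    return "\n".join(parts)

prompt = build_tweet_prompt(
    ["🚀 New post! How we doubled organic traffic #contentmarketing"],
    "GPT-3 for Content Marketers",
    "What a 175-billion-parameter language model means for writers.",
)
```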

Ryan: (16:56)
I worry that if we fed it too much LinkedIn, we'd create the world's most arrogant, self-aggrandizing AI in the process. That's one of the risks of this, I think. Another use case you pointed out was this idea of actually using it almost like a copilot or someone to brainstorm with. How are you thinking about that?

Andrew: (17:13)
Yeah. That's kind of the way that it seems most interesting to me in terms of how it could help a writer. Every writer understands the concept of writer's block. You've got a certain way through an article and you're just kind of stuck on it and you're not entirely sure where you're supposed to go next. And it's kind of the idea like within engineering you have pair programming, right? Like you'll sit there with somebody and you'll bounce ideas off them, or maybe they'll take over for a bit and you'll help them out. And it can kind of act like that where it's kind of, like you say, like a copilot, where you can say, "Okay, I've written this. I've got some ideas, put it into GPT-3 and see what it comes out with. Maybe it will come out with a new concept that you haven't thought of."

Andrew: (18:07)
When you and I were playing around with this yesterday, it generated a thought about the future of the CMO position within marketing and the difference between traditional marketing and the future of marketing. And it's kind of like, "That's an interesting idea." To build out that idea takes a human. I don't think GPT-3 is going to be able to build out that entire idea, but it just allows you to generate new ideas. It allows you to continue to build an article over time when you might have previously been stuck. It might give you that kind of push in one direction, and you can run it a hundred times and generate a hundred new ideas. And it might only be one or two that are interesting, but still, that's a really novel tool for a writer to have, I think.

Ryan: (19:04)
In the same way that you might get a dozen people in a room to brainstorm ideas, the value of their output is not necessarily coming up with a beautiful, fully fleshed-out article or whatever. It's finding those novel framings that you would never have come up with yourself, taking the component parts of whatever you want to write about, meshing them together in all kinds of weird ways, and seeing what new stuff comes out of it. And GPT-3 can do that at scale, in a fraction of the time, with a vast literary canon that virtually no number of people could actually pull together and use. It's fascinating. The output then obviously doesn't need to be particularly legible, it just has to plant a seed that you're interested in, that you can go away and explore and build on yourself.

Andrew: (19:46)
Yeah, exactly. That's a great way of putting it, like planting a seed. The thing that reminded me of was the early days of Animalz, when we weren't remote. We were located in an office in New York, which meant that for every article we were writing, there was somebody sitting next to you that you could bounce ideas off, and when you went through the editing process, it was about sitting there and talking through an article a bit more. It kind of reminds me of that. But when you're remote, it's a bit more difficult to do that and to have people to bounce ideas off, because most communication is structured. But with this, it's like, "Yeah, I'll just have a chat with GPT-3 and see what it comes out with." I mean, you can actually literally have chats with this thing. It's programmed to be able to do that. But yeah, you can put ideas in there, see what it comes up with, and run with it.

Ryan: (20:36)
I've had a genius idea. None of us really likes sitting on Slack, getting disturbed a lot. Can we plug this in to pretend to be us in Slack and then go off and live our merry lives?

Andrew: (20:47)
I absolutely am sure that somewhere, of all the people that have been using this over the last couple of weeks, somebody has built a Slack bot to just pretend to be [crosstalk 00:20:57].

Ryan: (20:57)
That's where the real money is, I think. That's how you monetize this properly. Yeah. Amazing. Is there anything else you want to cover? Anything people should be aware of, do you think?

Andrew: (21:06)
I think over the next few weeks, we'll see more and more tools come out. I saw, in fact, VWO? WVO?

Ryan: (21:13)
Oh, visual website optimizer?

Andrew: (21:15)
Yeah, exactly. They've come out with something, I think just this morning, which is like an A/B testing tool. You've got a headline on your site and you want to see what other options there are and A/B test them. It will suggest different headlines for you. It's a great thing. So we'll see more and more things like that come out over the next few days or the next few weeks. When you're using this on the back end, one of the things which OpenAI are very explicit about is that, because this is trained on the entire internet and the internet is not a very nice place a lot of the time, you have to be very careful of toxicity and biases in whatever you are generating. For short-form stuff, which may be prompted in this few-shot learning way by tweets, by business writing, it's probably not going to be such a big issue.

Andrew: (22:10)
But it's something we have to be really, really careful about, and you can imagine it becoming more and more of an issue in longer-form stuff. So that's something which we're thinking strongly about in terms of what we might build with this and how we might use it, just to make sure that we're not adding to the problem over time. So that's something to think about. But for me, yeah, I'm just looking forward to seeing new ways people are using it. I saw yesterday, somebody has linked this up to another type of AI, what's called a generative adversarial network, which is what you see with these completely generated faces that are indistinguishable from photos. Someone has linked this up, so you just type in, "Please give me a woman with brown eyes and blonde hair," and then it'll go out and generate a photo of that kind of person. I've seen it with generating images, generating complete programs. I think there's a lot of opportunity for this, not just within writing, but within a lot of different kind of [inaudible 00:23:26] of business for sure.
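A deliberately naive sketch of where such a safety check sits in a publishing flow. The blocklist terms are placeholders, and a real pipeline would use a proper content filter rather than keyword matching, but the check occupies the same point in the flow: between generation and publication.

```python
# Naive sketch of screening generated text before it ships. The
# blocklist terms below are placeholders, and keyword matching is far
# weaker than a real toxicity classifier; this only shows where the
# gate sits in the flow.

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms

def is_safe(generated_text):
    """True if no blocklisted term appears in the generated text."""
    words = {w.strip(".,!?").lower() for w in generated_text.split()}
    return words.isdisjoint(BLOCKLIST)

def publish_if_safe(generated_text):
    """Publish clean output; route anything flagged to human review."""
    if not is_safe(generated_text):
        return None  # a real system would queue this for review
    return generated_text
```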

Ryan: (23:28)
Well, above all else, I feel privileged to be in an industry where we get to toy with this and learn from it and actually see the potential this has to make our lives better, more interesting, more fun, more creative in a lot of cases. So yeah. I mean, thank you, Andrew, for setting me up with it. I wouldn't have made head nor tail of how to use GPT-3 without your guidance.

Andrew: (23:48)
Yeah, no worries. Yeah. I'm glad that other people are using it. And yeah, thanks to OpenAI for giving us access. Greg Brockman gave us access to this to try and see what we can build out. And the kind of thesis of Animalz is to make content better, make content more helpful. So we're thinking about how we can use this to do that.

Ryan: (24:09)
In terms of GPT-3 as well, you will hear much more from us about it in the coming weeks, I suspect. We're working on an article written by and about GPT-3, which is going to be interesting. It's been fun, and again, slightly terrifying, seeing the output be just good, really good, in places. So I'll link to that in the show notes when it's published. And, who knows, we may even see some Animalz-related innovations using GPT-3 in the future as well.

Andrew: (24:37)
Yeah. Let's hope so.

Ryan: (24:39)
Cool. Well, thank you, as always, Andrew. I will talk to you soon.

Andrew: (24:42)
Thanks very much, Ryan.