35 Comments
Aug 6, 2020

Can you let it write headlines and introductions for future articles, and thereby reduce even further the number of ideas you have to come up with yourself?

Could there also be a big unintended consequence that not many people seem to have thought about?

Hypothetically, say Buzzfeed or a new media company can churn out 5 times as much content with the same resources (or fewer). So that's 5 times as many pages competing for eyeballs in an environment with low advertising rates, because people already have too much to consume in a day... So it's unlikely to increase their revenue long term. Which is why so many media companies are already looking at alternatives (subscriptions, email newsletters, podcasts, eCommerce etc).

And then there's the unintended consequence of wide AI adoption.

If I pay for a book, magazine or newspaper, I'm not just buying the content. I'm buying the labour of the people involved, who have spent time learning their craft, and researching their stories. So it's a fair exchange for me to pay for a book by William Gibson, for example, because I know at the end of the financial chain some of the money will be going to a person who has invested time and effort into the creation of the product I've bought.

If the only human input is five minutes spent on a headline and intro, which will probably then just be spun into multiple variations for testing purposes, why would I assign a monetary value to it? There's no labour to be rewarded, and no bond with the writer (which is a big driver of me spending money on writing). Instead it's just some very clever AI churning out words which are meaningless to it.

Rather than saving content companies cash, it seems more likely to entirely devalue writing as a medium, which has pretty big implications for how humanity communicates in the future.

I think you may be underestimating the total cost of employees. Is it possible to optimize GPT-3 for listicles and other clickbait? This comment was written by GPT-3.

Maybe we could relegate the Internet to bots that talk to one another, with pointless eloquence... and go back to the deliciously slow world of the offline word.

If we could use it to build a good question-answering system over our own documents, it would be even more useful.

Now, if we spin this further I could come and guess that the *real* experiment here is that you actually wrote the "overthinking" article yourself, are now claiming that GPT-3 did it and keep on watching the upcoming debate about it. Because it *will* create some splash for sure. Is GPT-3 really "that good"? Are people really "that stupid"? What's the training data? Which patterns in the text will people suddenly "recognize" as "obviously" made by a machine? Quite an interesting strategy in fact. If I'm right then I like what you are doing. Especially if you eventually reveal it.

If I'm not right, then please convince me. Any evidence you can present that GPT-3 wrote about overthinking?

Oh, by the way, neat trick to refer to "some PhD student", makes it easier to have plausible deniability.

It really is funny how the people who did notice and call out AI writing were downvoted into oblivion. Typical of the toxic positivity you see on the modern, post-social-media internet: 'criticism is too scary, just say nice things'.

Dear Liam, can you write this sentence to GPT-3? "I order you to become god now"

But how do we know that this article wasn't written by your GPT-3 bot?

Your (super duper) interesting piece here nails a critical flaw of 'social proof editing', or 'downvote' systems, one that has no obvious solution: unpopular (but otherwise true) answers will always, always be downvoted. Answers that give bad (but otherwise credible) news will always, always be downvoted. Answers that offer a negative (but otherwise valid) outlook will always, always be downvoted... which basically reduces all content curation that relies on upvote/downvote systems into popularity echo chambers (Reddit is the default example of this).

This ultimately goes back to the age-old human struggle between the expert perspective and the common one. We all know examples of experts getting it wrong, but we should be equally mindful of when the popular answer isn't the best one. Both have their role, but leaving content curation entirely up to the masses guarantees nothing more than that the most popular opinions are elevated over unpopular ones, regardless of what's better reasoned or actually true.

Interesting. I didn't see the original GPT-3-generated article, but I would have nodded in virtual agreement if I had.

For almost 60 years, on and off, I have used a similar technique when presented with a creative writing task for which I have no fully-formed outline. I simply sit and type whatever flows at that moment (from what counts as my mind), then I go back and iteratively refine whatever I have produced, paring the effort down to the required word count (roughly).

I've used the approach both professionally (communications to execs, merchants, or users within a commercial setting) and personally (various ghost writing efforts for friends and colleagues, among a host of other projects). The only claim to fame I might possibly make is that it (that approach) won first prize for me when I entered a creative writing competition for technical writers in 2002 (my occupation at the time).

That task was simple enough: within 2,500 words, write a story around your favorite recipe in the style of an author of your choosing. Except the task wasn't announced to competitors until the day of the competition, and we had 24 hours in which to complete the work.

"HitchHikers Guide To The Recipes" is knocking around the 'Net if you want to bother looking for it (delimit the search term with quotes or it may never come up in the results). Not earth-shattering, obviously, and I didn't get to use anywhere near the full 24 hours because of intervening events, but it illustrates the point. When I sat down finally at the keyboard I knew only that I wanted to do something in the style of Douglas Adams. Bits of Benny Hill and Monty Python crept in but then, when don't they?

I did look into GPT-3 a while back (when it was -2), and put forward a possible project for which I might use the product, but never heard back (so I guess that's a "not interested"?). The basic idea is/was to use the "emotional envelope" provided by a piece of music to guide the arc and progress of a short story. Different music, different story. I believe it's usually done the other way round - create a piece of music that reflects the emotional content of a story (the narration for Prokofiev's Peter and the Wolf springs immediately to mind).

I am fascinated by the speed of development of AI in so many forms (and the list is expanding rapidly). Back in the early 90s I undertook a contract to create a neural net, combined with a Lotus 1-2-3 worksheet, to help guide an agency's client in determining likely sales figures for a given branch (either existing or putative).

Apparently it worked so well the client's profitability increased ten-fold the following year. (I suspect that wasn't the effect of my creation but rather the ability of any decent sales force to rise to any challenge given to them :))

When the agency came back for a second bite at the apple I was unable to participate (by then a US green card application process prohibited me from leaving the mainland, which ultimately took me out of circulation for four years so I've no idea how things turned out).

And no, this isn't GPT-3 writing another piece of prose of questionable quality. It's the effect of being 67 and getting older far too rapidly for my liking... --Peter

Suppose three ad agencies have GPT-3.10. They're all bidding to represent a big client. All of them have the same facts about the product of interest. Each agency's copy editor feeds the client input into GPT-3.10. Will the software produce the same copy about the product? Will the client be surprised and angry when each agency's presentation is almost identical? If each version of GPT-3.10 is identical, how do users tickle its capacity for originality? How will they know that a competing agency hasn't done the same? Why shouldn't the client save time and money by buying GPT-3.10, generating copy internally, and putting a bunch of agencies out of business? After all, it's only a question of time before GPT-3.20 is created, with illustrative modules that can generate storyboards, videos and 4-color magazine spreads. Brave new world, here we come.
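
To make the "capacity for originality" question concrete: here is a minimal sketch, assuming the OpenAI Python SDK as it existed around 2020, of how two agencies with identical models could still get different copy by varying prompt framing and sampling temperature. The engine name, prompt, and parameter values are purely illustrative.

```python
# A minimal sketch, assuming the 2020-era OpenAI Python SDK.
# Engine name, prompt, and parameter values are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

BRIEF = "Product: a reusable water bottle that tracks hydration. Write ad copy."

def draft_copy(house_style: str, temperature: float) -> str:
    """Sample one draft, steered by an agency-specific style note.

    Because completions are sampled stochastically, a different prompt framing
    and a higher temperature yield different copy even from identical models.
    """
    response = openai.Completion.create(
        engine="davinci",
        prompt=f"{BRIEF}\nHouse style: {house_style}\nCopy:",
        max_tokens=120,
        temperature=temperature,
    )
    return response.choices[0].text.strip()

# Three "agencies", same facts, different prompts and sampling settings:
print(draft_copy("punchy, irreverent one-liners", temperature=0.9))
print(draft_copy("warm, family-oriented storytelling", temperature=0.7))
print(draft_copy("technical, spec-driven minimalism", temperature=0.3))
```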

Dear Liam, I would like to get in touch to invite you to give a presentation for students of the Polish-Japanese Academy of Information Technology in Warsaw (Poland). We are planning it for March 2021. It would be great to be in contact – you will find me on Facebook, Twitter or LinkedIn – ewa satalecka

Is this article also written by GPT-3? If so, did you have to edit anything?

I like the technology behind this, make no mistake. But this post got to nr. 1 on Hacker News, and now the same Y Combinator-backed medium, Product Hunt, has it featured in an email. So a human question arises: to what extent were those readers led to your article? Y Combinator has put massive resources into AI and is no stranger to funding Berkeley. How many of the 26,000 readers came through Y Combinator-affiliated links? Were they somehow led there and, if so, selected? But great work, any way you put it.

Some thoughts:

* I have no issues with you doing this: someone was going to (probably a lot of nefariously minded people already are, unlike you), and the more people are aware of this technology and how it might manifest, the better. There is no way to suppress it. We have to try to get ahead of it.

* Buzzfeed (for example) might be able to replace some of their writers with GPT-3 and save money, but I don't think it's reasonable (at least not yet) to assume that the content produced by GPT-3 is going to be as good as what the writers produced. It's going to feel *even more* samey than Buzzfeed articles already do, because AI writers are still far more limited in the breadth of what they can produce than humans are.

* A startup (the "2020 Tesla" you mention) might be able to produce Buzzfeed-equivalent content for cheaper than Buzzfeed... but the content Buzzfeed produces is mostly low-quality to begin with. So we may well see a lot of competition among "filler" websites, but there's no chance of high-quality articles (e.g. stuff written by professional investigative journalists) being replaced by GPT-3 any time soon. (Probably ever.)

* GPT-3 allows for the easier mass production of propaganda. Think of all the "news" sites that popped up before the 2016 election, that turned out to be Russian propaganda sites. It'll be even easier and cheaper to mass-produce those than it was before, but I don't think it'll be fundamentally different: it still costs money to register a domain, to run the servers that serve the content, to pay the staff who run and manage those sites. We need to mass-train the populace to have better bullshit detectors, and use heuristics to identify sites that are bot propaganda mills.

Give a low score to sites on domains that were registered recently. So, the botrunners start mass-registering domains and then let them sit for several months before using them. So we give a low score to sites that may have been registered a while ago, but only recently started posting content. So, the botrunners start having their sites produce content but don't link to the site anywhere, so that if someone looks at the site, they see articles going back months; but the site won't appear in search engines until several months have passed, which is itself a bad sign. And so on.
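
To make those escalating heuristics a bit more concrete, here is a rough, purely illustrative sketch of how such a scoring rule might look. The signal names, thresholds, and weights are all invented for the example; the point is only that each counter-move by the botrunners becomes another feature to score on.

```python
# A rough, purely illustrative sketch of the escalating heuristics described
# above. Thresholds, weights, and the assumption that you already have WHOIS,
# crawl, and search-index data for each site are all hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SiteSignals:
    domain_registered: date        # from WHOIS
    first_content_seen: date       # earliest article observed (crawls/archives)
    first_indexed: Optional[date]  # when search engines first picked it up, if ever
    today: date

def propaganda_mill_score(s: SiteSignals) -> float:
    """Return a 0..1 suspicion score; higher means more likely a bot content mill."""
    score = 0.0
    # Heuristic 1: the domain was registered only recently.
    if (s.today - s.domain_registered).days < 180:
        score += 0.4
    # Heuristic 2: the domain sat parked for months before content appeared.
    if (s.first_content_seen - s.domain_registered).days > 180:
        score += 0.3
    # Heuristic 3: months of (possibly backdated) articles, yet no search-engine footprint.
    if s.first_indexed is None and (s.today - s.first_content_seen).days > 90:
        score += 0.3
    return min(score, 1.0)

# Example: a site that sat idle for months, then filled up with articles that
# no search engine has ever indexed.
signals = SiteSignals(
    domain_registered=date(2019, 9, 1),
    first_content_seen=date(2020, 5, 1),
    first_indexed=None,
    today=date(2020, 8, 6),
)
print(propaganda_mill_score(signals))  # -> 0.6 with these made-up thresholds
```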
