Writing with Machines

How AI will—and won’t—change writing

"a robot using a typewriter, woodcut, black and white"

I’ve noticed a pattern.

When most people see Lex, the AI-powered word processor we launched last week, they have one of two reactions:

The first is, “Wow, this is cool! I want this to do all my writing for me.”

The second is, “Wow, this is horrible! Any writing produced by an AI is obviously terrible and no one should waste their time reading it. You have built something that destroys the arts.”

Both reactions miss the point. Large language models like GPT-3—the AI that currently powers Lex—are not going to replace writers in any meaningful sense. Sure, they might generate some copy that gets published somewhere, but they lack the active ingredient of writing: a human being who has something they feel is important to say. The less connection a set of words has to revealing something about a real human being’s intent, the less point there is in reading them.

It might seem like you could just describe the general parameters of what you want the AI to talk about and have that be “enough” intent to make the writing succeed, but I’m not so sure. Writing is thinking, and every word has meaning. Some forms of writing are higher stakes and require more craft than others, but all writing has to matter to at least one reader for it to have any value. It’s hard for me to imagine purely AI-generated writing mattering much to many people outside a few specialized use cases, because an AI fundamentally lacks intent.

But that doesn’t mean AI is worthless to writers. In fact, I believe AI will forever change the way people write almost everything. It’s just that the way this will happen is different from how it seems.

Take, for example, this essay. I’m writing it using Lex. Before I started composing this paragraph I typed “+++” to see what the AI would suggest based on everything that came before. It generated something about how humans would do what they do best and no longer construct individual sentences, but instead would convey the main ideas and let the machines execute them at a lower level. I happen to completely disagree with this, but it was helpful nonetheless. It got me thinking about specialization and gains from trade. It got me thinking about efficiency. It helped show me an important related concept, even if it got the details wrong.

Specifically, it led me to this thought: I think humans will still craft individual sentences for a long time. But I also think we will lean on AI to remind us of related ideas we might not otherwise consider. What we do best is take in information from our environment, synthesize it, and connect it with related ideas in novel ways. But it’s much harder for us to come up with something out of nothing. It helps a lot to have something to react to.

In psychology this principle is called “priming,” and it’s a well-studied phenomenon. Basically, when you’re thinking about one thing, related concepts become easier to think about too. It’s why after you watch a scary movie you have a hard time going to sleep, or why after you see a commercial for a delicious-looking dessert you suddenly have a sweet tooth.

The way I see it, AI is already incredibly good at priming us while we write. It can point us toward ideas adjacent to the ones we’re currently exploring, far better than we could find them on our own.

(Originally I was going to say the principle in psychology was called “recognition over recall”—that we are better at recognizing the meaning of ideas once they’re in front of us than remembering things unprompted—but I asked the AI in Lex what it thought I should write, and it came up with “priming,” which is a completely different yet equally interesting direction to take it. So I changed it.)

I think this is the way we’ll use AI to write in the future. We’ll interact with AI in a continuous, back-and-forth way. We’ll write a few sentences, and then we’ll ask it for suggestions. We’ll write a few more sentences, and then we’ll ask it again. We’ll keep going back and forth like this, and over time the AI will get better as it learns more about our work and our lives.

To some people this might seem strange, but I bet using a typewriter or a computer once seemed strange and threatening, too. Socrates was famously against writing itself. AI is the next step.

Writing gave words permanence. Typewriters reduced the friction between having a thought and transferring it onto a page. Computers reduced the cost of editing thoughts after you typed them. The internet made it possible to send your words anywhere, instantly, for zero cost. And now AI will help you think through ideas more completely as you type them.

I believe the best way to use AI for writing is not to use it as a replacement for human creativity, but as an amplifier for it.

(That’s a pretty good line, annoyingly, because it was written by the AI!)

The reason I feel so strongly that AI can help humans think things through more deeply is that A) I’ve experienced it, and B) I think an “idea” is best understood as a node in a network of other ideas, and large language models like GPT-3 are the best maps we currently have of that network. In general, having a map makes you much better at navigating terrain.

Of course, there are dangers as well. AI can easily amplify our biases and worst tendencies. It confidently makes things up. It can’t add new nodes to the network of ideas, so if all we’re doing is relying on the AI to generate the writing, then the writing will be fundamentally and literally derivative.

Some people think this is temporary, and that future versions of AI will be so good at writing (and really at creating all forms of media) that we won’t even need a human in the loop. I suppose this could happen, but I think writing is different from domains like chess or image generation, and it will take longer to reach that state. If you get the hands slightly wrong in an image, it often doesn’t ruin the overall effect. Chess is a constrained, adversarial game. With writing, any small incorrect fact or slip-up undermines the whole thing. I think the “last mile” of many forms of AI is going to be hard to conquer, so the areas where generation is totally automated will be those where little mistakes aren’t as noticeable or important. Writing is often not in that category.

For me, the goal is always to use AI to help people think better. Writing is thinking. That’s it. And if we don’t forget that, I believe the future of writing is going to be very exciting indeed.


Before we go, a quick Lex update!

It’s been a wild week. For those of you interested in some inside baseball for Lex and how we plan on turning it into a business, read on.

I flew to New York this week to spend time with Dan so we could take a step back and think about the big picture of where Lex should go from here. The launch obviously resonated beyond what we initially anticipated, and for big moments like this it’s important, even on a remote team, to get together in person.

I’m glad we did. I write this from the airport on my way home, and I feel better than ever about what’s coming next. What I don’t feel so great about is my pace of shipping new features 😅

That being said, I’m working on something right now that we’re going to release soon that I think is ridiculously cool. I won’t spoil it, but I do want to share the general philosophy that guides the Lex product roadmap.

Like I said at the end of the demo video, we believe AI is a fundamental technology paradigm shift on par with the shift from mainframes to personal computers, or from isolated to internet-connected computing. The first temptation when launching a new product is to refine it and build all the table-stakes features (commenting, formatting, track changes, versions, etc.), but I actually think that would be a mistake.

My goal right now is to try to identify the ways AI can most fundamentally transform the writing process. The more different and valuable it is, the better. If we’re just Google Docs with a bit of commodity AI slapped on, it’s not going to be very interesting over the long run. Ideally, Lex should end up feeling so different that it’s almost a new product category, while being familiar enough that you already know the basics of how to use it.

Can’t wait to show you what’s next :) 

Also, since you’re still here, a fun stat: we crossed 43k signups! This thing just keeps on going. It’s incredible.

Thank you so much for all of your support—this would be impossible without you.

See ya next week!
