Defensible Use of AI in Writing (Like This)
If AI helps me get ideas from my head to readers' heads faster, that's good. Society thrives when ideas are shared, critiqued, and built on. But authors shouldn't take credit for other people's ideas -- they should synthesize others' ideas, including the "blurry JPEG of the web" that LLMs provide. The question is: what makes AI-assisted writing defensible? When is it helpful to society, and when is it plagiarism? When can you still be considered the author of AI-assisted writing?
Academic writing generally has clear guidelines for how co-authorship and acknowledgements work. In the fields I worked in, people who edit or provide feedback on a paper only get an acknowledgement, not co-authorship. Primary authors and co-authors work together on the writing as well as the research behind it, often with a senior supervisory author listed last.
For AI-assisted writing, is the LLM the primary writer, an editor, a senior co-author, a junior co-author, or just something that deserves an acknowledgement? Obviously it depends, so how can we make the roles of the humans and the AI clear to readers? Which of these roles is "defensible"?
The Stakes
Paul Graham writes that the expectation of writing in prestigious jobs historically forced individuals to learn how to think. Classical liberal education is a mechanism for improving thinking using writing, deep reading, and other techniques. If we outsource too much of the writing process, we risk losing that mechanism. Graham emphasizes the stakes of a shift to AI as primary author by citing Leslie Lamport: "If you're thinking without writing, you only think you're thinking." I'd add -- if you're publishing without either thinking or writing, you're not really contributing anything.
But that doesn't mean we should avoid the productivity benefits of AI entirely. The question is how to use it defensibly.
Defensible Patterns
Seth Godin uses AI not to do his writing but to challenge it. He explains: you "say to Claude, please find the internal inconsistencies. Please ask me five hard questions. Please criticize the structure." This use case is defensible because the human maintains agency. Godin suggests that the real opportunity is to "find a way to use human effort to create more value." I like that motivation -- creating more value, getting my ideas into others' heads faster and better -- and I now use AI to help me do all of my work faster, including writing.
Ryan Law asserts that "Generative AI struggles to provide any information gain." A defensible use of AI, therefore, involves "front-loading the article structure." When the core of the content comes from a human with lived experience, an AI can provide context and feedback, strengthen arguments, and improve how well the ideas are communicated. Conversely, relying on the AI to generate the core structure, then rewriting or adding to it, is less likely to yield novel or valuable writing. As I'll describe below, my process starts with a brain dump of ideas, then uses AI's ability to organize, more than synthesize, to convert those ideas into a coherent story.
Laura Mohiuddin expands this into a "Defensible Content Playbook," which shifts the focus from transactional blogging to "creating defensible, quotable intellectual property." A threshold for the value of professional content is that whatever you write should be good enough and novel enough for an AI system to cite you in the future.
On the far edge of defensible, Venkatesh Rao has an interesting blog that's primarily vibe-written. An LLM is the primary author, and he acts as a senior co-author, throwing out ideas for the AI to run with and providing feedback, but not actually typing the final version. I think that's defensible specifically because he's so transparent about the process: he generally includes a description of his prompts and processes at the end of each post. That's not what I do or want to do, but I appreciate it, and the posts are often thought-provoking.
The Joy of Creation
Anil Dash says that the joy of creation is a fundamental reason for writing: "I write because it brings me joy. I'm not going to ask Claude to write my blog posts in the same way that I won't ask it to solve my daily New York Times puzzles." I enjoy getting my thoughts clear enough to write them down, and I don't want to lose that part of the process -- but I'm happy to delegate the parts of writing that can sometimes be a slog.
Conversely, I enjoy software product development more than programming per se, so I'm happy to have the AI write code for me, both for my professional work and for hobby projects.
My Process
Here's how I wrote this article, using AI for certain parts, to model one approach to transparency.
My blog posts start on my personal website, which is built with Hugo, a website and blog generator. (I later copy each post to Medium for more readers.) I write in Cursor, the AI-centric development environment and text editor. Cursor's Agent sidebar lets me have the AI read or edit what I'm working on.
- My first step is to use an AI command: `/new-blog-post Defensible Use of AI in Writing (Like This)`. That generates the Hugo front matter for my post (roughly like the sketch after this list), saving me a few minutes of boilerplate typing.
- I brain-dump bullet points into the document, in whatever order things come to me. Nothing is polished, but I want to get everything out of my head.
- I go to an AI with Deep Research capabilities (e.g., Google Gemini) and ask it to find what else has been written on the topic, focusing on specific quotes and references over generalizations. Deep Research reads to me like a lit review put together by someone who doesn't really know what I want, and who writes in a very opinionated style, very different from mine. I copy-paste some quotes and the URLs into my draft. At this point, I'd call this a first, very rough draft.
- I use another command, "rewrite blog outline", to generate a readable second draft. The AI reads my prior (not AI-assisted) writing, and a summary of my writing style, then uses that to convert my notes into a reasonably well-structured blog post. I ask it to minimize wording changes, to maintain references, and to suggest opportunities for illustrations.
- I do a deep rewrite editing pass to make sure that everything reflects my views and approach.
- I use a "critique blog post" command to have the AI write three critiques of my article, one from a supportive reader, one from a neutral reader, and one from a critical reader. This is generally great feedback about my writing, and gives more opportunities to rewrite, tighten, and clarify.
- I've also been testing a humanizer skill that finds remaining signs of AI-generated phrasing and tries to clean them up. I review and either accept or reject the suggested changes -- so far, it has been mostly helpful.
- I do one final re-read myself, then publish it.
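To make the first step concrete: Hugo front matter is a short metadata block at the top of the post's Markdown file. What `/new-blog-post` generates looks roughly like this sketch; the specific fields and values shown are illustrative, not the command's exact output.

```yaml
---
title: "Defensible Use of AI in Writing (Like This)"
date: 2025-01-15          # illustrative date
draft: true               # Hugo skips drafts unless told to build them
tags: ["ai", "writing"]   # illustrative tags
---
```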
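The "critique blog post" step is essentially a stored prompt. This isn't the exact wording of my command, but a minimal sketch of its shape, based on the description above:

```markdown
Read the draft blog post in the current file and write three short critiques:

1. A supportive reader: what works, what they would quote.
2. A neutral reader: what is unclear, unsupported, or easy to skim past.
3. A critical reader: the strongest objections to the argument.

Do not rewrite the post. Return concrete suggestions I can accept or reject.
```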
The AI isn't just editing -- it's helping with organization and suggesting connections and some of the content. But I'm maintaining control over the ideas, the voice, and the final product.
What Makes It Defensible
What makes AI-assisted writing defensible? Summarizing the above, I think we can state a few principles:
- Transparency: Be clear about how AI is being used, particularly if you're using it in a way that would necessitate co-authorship in an academic context.
- Human agency: Maintain control over ideas, voice, and final decisions.
- Information gain: Provide original insights, data, or synthesis that the AI couldn't generate on its own.
- Value creation: Use AI to help you get more of your ideas into more people's heads.
- Preserve the thinking: Don't outsource the parts of writing that help you think -- the initial brain dump, the synthesis, the final decisions.
As noted above, thinking about the roles of contributors to academic writing may also be helpful:
| Academic writing role | Contribution | Credit the AI? | Notes |
|---|---|---|---|
| First author | main ideas, arguments, final wording | definitely | give full credit, à la Rao, or it's plagiarism |
| Senior/final author | suggested directions, major review, inspiration | definitely | should specifically call out contributions |
| Secondary author | limited writing, contributed to the work | yes | should note briefly |
| Editor | grammar, spelling, layout, workflow | not needed | |
| Acknowledgement | feedback on writing | yes | could be footnote/endnote |
If you're using AI to help you think faster or communicate better, while maintaining your voice and your ideas, that seems defensible. If you're using it to avoid thinking, or to pass off others' ideas (even if synthesized by an LLM) as your own, that's not.
Note: This post was primarily human-authored, with AI assistance for research, editing, and organization. The AI filled a Secondary author role. The core ideas and final voice are mine.