"Sharing the prompt" is a category error. It assumes the value of a piece is in the instructions given to the model, rather than the proprietary input or the iterative editing that follows. There is a hard line between using an LLM to generate content from a void and using it to synthesize specific ideas.
If someone asks a model to "write a post about X," they are outsourcing the thinking, which results in the homogenized voice everyone is tired of.
Treating the act of refining text as a confession of shame misses the point of how writing works. Whether a draft begins as a model output, a dictation, or a scribbled note, the final responsibility belongs to the person who hits publish.
Improving prose to remove predictable patterns is the work of an editor. This process ensures the content is worth reading and respects the audience's time.
Comparing a software tool to "poisoning a well" turns a debate over style into a moral crisis that doesn’t fit the situation. If the information is accurate and the writing is clear, the water in the well is fine, regardless of the pump used to get it there. If the water tastes good, complaining about the plumbing is just a distraction.
Parent's complaint is explicitly not about the style of the prose: use whatever you want to check your grammar and reduce redundancy. The poisoning-the-well complaint is about content that isn't intended to express anything at all, the old “why would I read what nobody bothered to write?”
The issue is that you're conflating the process of transcription with the act of expression. If I feed an LLM my own raw research notes and technical observations and use it to help structure those thoughts into a readable essay, I haven't "avoided writing".
The "why would I read what nobody bothered to write" argument only applies to people who ask a bot to hallucinate an opinion from scratch. It doesn't apply to authors using the tool to clarify their own ideas.
LLM-generated text that is a hallucinated-from-scratch opinion is practically indistinguishable from LLM-generated text that is rooted in your research notes.
I find putting the former into my brain abhorrent to such an extent that I am willing to forgo reading the few instances of the latter. I'd much rather have your raw research notes and observations.
> If I feed an LLM my own raw research notes and technical observations and use it to help structure those thoughts into a readable essay, I haven't "avoided writing".
> The "why would I read what nobody bothered to write" argument only applies to people who ask a bot to hallucinate an opinion from scratch. It doesn't apply to authors using the tool to clarify their own ideas.
You're wasting my time if you share LLM writing. If you're going to do it that way, share your notes and your prompt. Otherwise, you're being inconsiderate.
>Comparing a software tool to "poisoning a well" turns a debate over style into a moral crisis that doesn’t fit the situation. If the information is accurate and the writing is clear, the water in the well is fine, regardless of the pump used to get it there. If the water tastes good, complaining about the plumbing is just a distraction.
Speaking of poisoning wells, have you heard of this thing called Search Engine Optimization? Absolutely ruined the Internet.
For example, it ignores the gazillion Medium(-like) "articles" that are little more than the output of a prompt. Here AI is not just about style; it's about content too. If you open such a post, maybe hoping to learn something, and realize it's AI slop, you might close it. Making that slop harder to recognize is poisoning the well in such cases.
I built something similar: an event-delegation framework built around a pub/sub architecture, where custom hats (personas) define specialized workflows through topic-based subscriptions.
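Roughly, the shape is something like this (a minimal Python sketch of the pub/sub idea; `Broker`, `Hat`, and the topic names are illustrative, not the actual framework's API):

```python
# Minimal pub/sub sketch: "hats" (personas) subscribe to topics on a
# broker and handle only the events they care about. All names here
# are illustrative placeholders.
from collections import defaultdict
from typing import Callable

Event = dict
Handler = Callable[[Event], None]

class Broker:
    def __init__(self) -> None:
        self._subs: dict[str, list[Handler]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Handler) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: Event) -> None:
        # Event delegation: the broker fans each event out to every
        # handler subscribed to its topic; hats never call each other.
        for handler in self._subs[topic]:
            handler(event)

class Hat:
    """A persona whose workflow is defined by its topic subscriptions."""

    def __init__(self, name: str, broker: Broker, topics: list[str]) -> None:
        self.name = name
        for topic in topics:
            broker.subscribe(topic, self.handle)

    def handle(self, event: Event) -> None:
        print(f"[{self.name}] handling {event}")

broker = Broker()
Hat("reviewer", broker, ["code.pushed"])
Hat("planner", broker, ["issue.opened", "code.pushed"])
broker.publish("code.pushed", {"repo": "example", "sha": "abc123"})
```

The point of the topic-based subscriptions is that adding a new persona is just another `subscribe` call; the publisher doesn't need to know which hats exist.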
I keep asking ChatGPT to read and summarize the HN front page while driving, and it keeps blundering. I don’t know if there’s a business for you in this, but I would pay.
Of course I always have questions about the subject, so it becomes the whole voice chat thing.
Interesting. I recently added the ability to receive a daily email digest. Would just need a way to read it out. I'll look into what a conversational voice chat might look like.
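For prototyping the read-it-out part, an offline TTS library like pyttsx3 gets you surprisingly far (a rough sketch; `fetch_daily_digest` is a hypothetical stand-in for however the digest text is produced):

```python
# Rough sketch: read a daily digest aloud with pyttsx3 (offline TTS).
import pyttsx3

def fetch_daily_digest() -> str:
    # Placeholder: in practice this would pull the same text that
    # goes into the daily email digest.
    return "Top story: example headline. Second story: another headline."

engine = pyttsx3.init()
engine.setProperty("rate", 175)  # words per minute, tune to taste
engine.say(fetch_daily_digest())
engine.runAndWait()
```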
https://github.com/mikeyobrien/ralph-orchestrator