
i don't buy this logic. if i have studied an author greatly i will be able to recognise their patterns and write like them.

ex: i read a lot of shakespeare, understand his patterns, where he came from, his biography, and i will be able to write like him. why is it different for an LLM?

i again don't get what the point is?




You will produce output that emulates the patterns of Shakespeare's works, but you won't arrive at them by the same process Shakespeare did. You are subject to similar limitations as the LLM in this case, just to a lesser degree (you share some 'human experience' with the author, and might be able to reason about his thought process from biographies and such).

As another example, I can write a story about hobbits and elves in a LotR world with a style that approximates Tolkien. But it won't be colored by my first-hand WW1 experiences, and won't be written with the intention of creating a world that gives my conlangs cultural context, or the intention of making a bedtime story for my kids. I will never be able to write what Tolkien would have written because I'm not Tolkien, and do not see the world as Tolkien saw it. I don't even like designing languages.


that's fair and you have highlighted a good limitation. but we do this all the time - we try to understand the author, learn from them and mimic them, and we succeed to a good extent.

that's why we have really good fake van Goghs that most people can't tell from the real thing.

of course you can't do the same as the original person but you get close enough many times and as humans we do this frequently.

in the context of this post i think it is for sure possible to mimic a dead author and give steps to achieve writing that would sound like them using an LLM - just like a human.


You're still confusing "has a result that looks the same" and "uses the same process"; these are different things.

Why do you say it has a different process? When I ask it to do integrals it uses the same process as I do.

Not everything works like integrals. Some things don't have a standard process that everyone follows the same way.

Editing is one of these things. There can be lots of different processes, informed by lots of different things, and getting similar output is no guarantee of a similar process.


The process is irrelevant if the output is the same, because we never observe the process. I assume you are arguing that the outputs are not guaranteed to be the same unless you reproduce the process.

If we are talking about human artifacts, you never have reproducibility. The same person will behave differently from one moment to the next, one environment to another. But I assume you will call that natural variation. Can you say that models can't approximate the artifacts within that natural variation?


It's relevant for data it hasn't been trained on. LLMs are trained to be all-knowing which is great as a utility but that does not come close to capturing an individual.

If I trained (or, more likely, fine-tuned) an LLM to generate code like what's found in an individual's GitHub repositories, could you comfortably say it writes code the same way as that individual? Sure, it will capture style and conventions, but what about our limitations? What do you think happens if you fine-tune a model to write code like a frontend developer and ask it to write a simple operating system kernel? It's realistically not in their (individual) data but the response still depends on the individual's thought process.


I don't know if LLMs are trained to imitate sources like that. I also don't know what would happen if you asked it to do something like someone who does not know how to do it. Would they refuse, make mistakes, or assume the person can learn? Humans can do all three, so barring more specific instructions any such response is reasonable.

> Humans can do all three, so barring more specific instructions any such response is reasonable.

Of course, but reasonable behavior across all humans is not the same as what one specific human would do. An individual, depending on the scenario, might stick to a specific choice because of their personality etc. which is not always explained, and heavily summarized if it is.


>If I trained (or, more likely, fine-tuned) an LLM to generate code like what's found in an individual's GitHub repositories, could you comfortably say it writes code the same way as that individual? Sure, it will capture style and conventions, but what about our limitations? What do you think happens if you fine-tune a model to write code like a frontend developer and ask it to write a simple operating system kernel? It's realistically not in their (individual) data but the response still depends on the individual's thought process.

Look, I don't think you understand how LLMs work. It's not about fine-tuning. It's about generalised reasoning. The key word is "generalised", which can only happen if it has been trained on literally everything.

> It's relevant for data it hasn't been trained on

LLMs absolutely can reason about and conceptualise things they have not been trained on, because of their generalised reasoning ability.


> LLMs absolutely can reason about and conceptualise things they have not been trained on, because of their generalised reasoning ability.

Yes, but how does that help it capture the nuances of an individual? It can try to infer but it will not have enough information to always be correct, where correctness is what the actual individual would do.


i think there's a lot to be said about the process as well - the motivations, the intuitions, the life experiences, seeing the world through a certain lens. this makes for more interesting writing even when you are inspired by a past author. if you simply want to be a stochastic parrot that replicates the style of hemingway, it's not that difficult, but you'll also _likely_ have an empty story. you can extend the same concept to music.

I don’t see why editing is any different. If a human can learn it, why not an LLM?

Even if the visualization of the integration process via steps typed out in the chat interface is the same as what you would have done on paper, the way the steps were obtained is likely very different for you and the LLM. You recognized the integral's type and applied the corresponding technique to solve it. The LLM found the most likely continuation of tokens after your input, given all the data it has been fed, and those tokens happen to be the typography for the integral steps. It is very unlikely that you are doing the same, i.e. calculating probabilities of all the words you know and then choosing the one with the highest probability of being correct.
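To make the "most likely continuation" mechanism concrete, here is a toy sketch of what greedy next-token selection looks like. The vocabulary and scores are made-up stand-ins; a real model computes logits over tens of thousands of tokens with a neural network, but the final step (softmax, then pick the winner) has this shape:

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution over tokens."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(logits):
    """Greedy decoding: return the token with the highest probability."""
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Toy scores a model might assign after the prompt "d/dx x^2 =":
logits = {"2x": 4.0, "x": 1.5, "2": 0.5}
print(next_token(logits))  # → 2x
```

The point of contention above maps onto this: the model emits "2x" because that token scores highest given the training data, not because it recognized a power rule and applied it.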

> the way the steps were obtained is likely very different for you and LLM

this is not true, any examples?


I explained in detail why it is true, and what the opposite would imply for you as a human being.

You are not able to write like Shakespeare. Shakespeare isn't really even a great example of an "author" per se. Like anybody else you could get away with: "well I read a lot of Bukowski and can do a passable imitation" or "I'm a Steinbeck scholar and here's a description of his style." But not Shakespeare.

I get that you're into AI products and ok, fine. But no you have not "studied [Shakespeare] greatly" nor are you "able to write like [Shakespeare]." That's the one historical entity that you should not have chosen for this conversation.

This bot is likely just regurgitating bits from the non-fiction writing of authors like an animatronic robot in the Hall of Presidents. Literally nobody would know if the LLM was doing even a passable job of Truman Capote-ing its way through their half-written attempt at NaNoWriMo.


> Literally nobody would know if the LLM was doing even a passable job of Truman Capote-ing its way through their half-written attempt at NaNoWriMo

As I look back on my day, I find myself quite pleased with this line.


You can understand his biography and analyses of how Shakespeare might have written. You can apply this knowledge to modify your writing process.

The LLM does not model text at this meta-level. It can only use those texts as examples; it cannot apply what is written there to its generation process.


no, it does, and what you said is easily falsifiable.

can you provide a _single_ example where an LLM might fail? let's test this now.


Yes, what I said should be falsifiable. The burden is on you to give me an example, but I can give you an idea.

You need to show me an LLM applying writing techniques that do not have examples in its corpus.

You would have to use some relatively unknown author; I can suggest Iida Turpeinen. There will be interviews with her describing her writing technique, but no examples that aren't from Elolliset (Beasts of the Sea).

Find an interview where Turpeinen describes her method for writing Beasts of the Sea, e.g.: https://suffolkcommunitylibraries.co.uk/meet-the-author-iida...

Now ask it to produce a short story about a topic unrelated to Beasts of the Sea, say, the moon landing.

A human doing this exercise will produce a text with the same feel as Beasts of the Sea, but an LLM-produced text will have nothing in common with it.


> You need to show me an LLM applying writing techniques that do not have examples in its corpus.

why are you bringing in this constraint?


Because the entire point is the LLM cannot understand text about text.

If someone has already done the work of giving an example of how to produce text according to a process, we have no way of knowing if the LLM has followed the process or copied the existing example.

And my point of course is that copying examples is the only way that LLMs can produce text. If you use an author who has been so analyzed to death that there are hundreds of examples of how to write like them, say, Hemingway, then that would not prove anything, because the LLM will just copy some existing "exercise in writing like Hemingway".


> Because the entire point is the LLM cannot understand text about text.

you have asked for an LLM to read a single interview and produce text that sounds similar to the author based on the techniques in that single interview.

https://claude.ai/share/cec7b1e5-0213-4548-887f-c31653a6ad67 - here is the attempt. i don't think i could have done much better.


There is no actual short story behind the link? moon_landing_turpeinen.md cannot be opened.

You could not have done better? Love it. You didn't even bother rewriting my post before pasting it into the box. The post isn't addressed as a prompt; it's me giving you the requirements for what to prompt.

Also, because you did that, you've actually provided evidence for my argument: notice that my attitudes about LLMs are reflected in the LLM output. E.g.:

  "Now — the honest problem the challenge identifies: I'm reconstructing a description of a style, not internalizing the rhythm and texture of actual prose. A human who's read the book would have absorbed cadences, sentence lengths, paragraph structures, the specific ratio of concrete detail to abstraction — all the things that live below the level of "technique described in interviews.""

That's precisely because it can't separate metatext from text. It's just copying the vibe of what I'm saying, instead of understanding the message behind the text and trying to apply it. It also hallucinates somewhat here, because its argument is about humans absorbing the text rather than the metatext. But that's also to be expected from a syntax-level tool like an LLM.

The end result is... nothing. You failed the task and you ended up supporting my point. But I appreciate that you took the time to do this experiment.


my bad, apparently claude doesn't share the .md. here it is: https://pastebin.com/LPW6QsLE

> "Now — the honest problem the challenge identifies: I'm reconstructing a description of a style, not internalizing the rhythm and texture of actual prose. A human who's read the book would have absorbed cadences, sentence lengths, paragraph structures, the specific ratio of concrete detail to abstraction — all the things that live below the level of "technique described in interviews.""

a human would have to read all the text, and so would an LLM, but you have not allowed this with your previous constraint. should we then allow an LLM to reproduce something that is in its training set?

why do you expect an LLM to achieve something that even a human can't do?


Why are you taking the LLM-hallucinated version of the argument as truth? I even clearly stated how the LLM-version of my claim is a misunderstood version of the argument.

Do you remember the point we're arguing? That a human can understand text about a way of writing, and apply that information to the _process_ of writing (not the output).

If you admit the LLM can't do this, then you are conceding the point.

I don't know why you're claiming that humans can't do this when we very clearly can.

An illustrative example: I could describe a new way of rhyming to a human without an example, and they could produce a rhyme without an example. However, describing this new rhyming scheme to an LLM without examples would not yield any results. (Rhyming is a bad example to test, however, because LLM corpora have plenty of examples.)


>> i again don't get what the point is?

The point is that you don't become Jimi Hendrix or Eric Clapton even if you spend 20 years playing in a cover band. You can play the style and sound like them, but you won't create their next album.

Not being Jimi Hendrix or Eric Clapton is the context you are missing. LLMs are Cover Bands...


This is the plot of a short story by Borges called “Pierre Menard, Author of the Quixote.”

There's a relatively common pattern of "new tech idea => Borges already explained why that approach is conceptually flawed".


