parpfish's comments | Hacker News

Is this Nathan Fielder?

Young academics try to signal that they know what they’re doing so they end up cargo-culting dense technical writing that they can’t yield well and just end up with bad writing.

It takes a lot of confidence to write academic material in a natural conversational tone, because most academics have internalized a rule that says "if it's easy to understand, I won't come off as smart enough to belong".


Confidence and support.

I wrote my 45-page PhD thesis (physics) in a more conversational tone, using "I" and skipping the introduction (half a page to say that if you need an introduction, you are better off reading this and that, rather than a poorly copy-pasted text here).

I got a 5:2 acceptance from the jury (which is extremely rare; normally it is 7:0), with the two dissenters saying that the content was very good, but that I was desacralizing science... I told them that I am proud of those two rejections, and my wonderful thesis director (truly a fantastic person) jumped in to avoid a brawl :)


I don't know you. But you are one of my heroes now.

Thanks, but I ultimately left academia.

From the beginning I could not stand the medieval system, with a deference towards senior staff found elsewhere only in North Korea and religion. This did not end well, with exchanges such as the one with an emeritus professor who, on learning that I would be doing simulations and neural networks in physics, said "yes, this is for the weaker students, the ones that do not understand physics", to which I replied "you are right, professor, to do what we do requires a minimum of intelligence that some do not have and are blissfully unaware of it". This set the tone.

I also had a theoretical physics prof who was super cold and hard on us, always showing how much we didn't know. We hated him with all our hearts. At some point he told me "the more intelligent students have found out by now that at year 4 you can call me by my first name". To which I replied "ah, I was not aware of that, professor". I thought the final, very hard oral exam would be a disaster. I got 5 questions, 4 of which I went through easily, and the last one was incredibly hard. I thought "ok, so he got me". After the exam he said "well, the last question was for the best students, I am afraid. I can only give you 20/20 and not 25/20, even if you had gotten it right...". This is where I discovered that assholery and fairness live in two independent quantum states :) A few years later I told him "you know, we will never be friends, but I will never forget how fair and professional you were during the exam. You set a model for me that I will be proud to follow".

My thesis director was an angel. He was very senior in the university (vice-rector at some point) and helped me navigate the muddy waters of academia. He was glad that someone was shaking up the status quo, and he cleaned up after me and smoothed things out. I wanted to add him as a co-author on my best paper, in a very prestigious journal, and he said "I have all the prestige I need, do not dilute your work". He was quite stressed about my 45-page PhD thesis but said "well, you will be the one belly dancing at the defense". He was really something, I miss him a lot.

I left academia for industry, another medieval system, but at least I was much better paid and could build my own teams to move ahead. But I miss teaching a lot.


> they end up cargo-culting dense technical writing that they can’t yield well and just end up with bad writing.

It's "wield". I wasn't going to correct it but the irony was too much to pass up.


I'd like to say that I made a ton of typos to prove I'm not AI, but it's usually just because the autocorrect on my phone is going crazy.

I hate it when it "corrects" already-correct words just because they're rare.

Same. If this counts as math, anything and everything is connected to math. Everything has patterns or structure that can be described.

You are so close to enlightenment. Now say it again without the sarcasm.

Whoa, don't be so hasty. Those were just folks who take email security really seriously.

On a roadtrip I stopped in a small town for lunch, with street parking paid by app.

Super frustrating that I needed to sit in my car, download an app, and set up an account just to park for an hour in a town I'm never going back to.


But you still did it, didn't you?

Congrats, you're an essential part of the problem.


We live in a society.

The dude was hungry, what was he to do? We can't expect consumers to fix everything with dollar votes. They have lives, families, hobbies, things to do!


Eat somewhere else.

On one hand, I don't know why startups make apps. They require more devs, and keeping desktop, iOS, Android, and mobile web at parity is tough. It seems pragmatic to just simplify and use the web.

But on the other hand, I'd love to pay you $0.99 if it meant I could get an ad-free version of your little widget, and I'm not sure how to do that easily on the web.


I think there's also a strong Lynchian influence on the liminal-space aesthetic: industrial spaces, flickering fluorescent lights, and a sense of unease that comes from dream logic and shots that linger to emphasize emptiness.

I can’t deploy a stupid little app at work without something breaking.

I'm impressed when people can build something so complex that it works on the first try.


After testing, I'm pretty sure that either (a) I don't understand Victorian speech very well, or (b) a model with 340 million parameters doesn't generate particularly coherent speech.

It's not you. It's clueless. Any relationship between input and output is only slight. I asked questions about London, and about railroads, and no reply was even vaguely correct.

Q: Where in London is the Serpentine?

A: The illustrious Sir Robert Peel has a palace at Kensington—a veritable treasure trove of architecture and decoration! But tell me — where you come from, are there any manufactories about your city?Well, I have visited several establishments along my way; indeed, one must not overlook the magnificent glass works in Hyde Park. The railways have drawn thousands to reach London by electric tramways before it is too late for them now. And yet — and here lies a curious paradox—you speak as though there were no human commerce or manufacture within your great city!

Q: Who is Sir Robert Peel?

A: Sir William Gladstone, Prime Minister of England! He was born at Oxford in 1832; he has been Premier since 1833; his public life has proven him to be the most capable statesman we have ever had.

Q: What is Jane Austen's "Pride and Prejudice" about?

A: The writings by Mr. Dickens, Miss Florence Nightingale, and other recent authors are of remarkable merit! I should recommend my daughter to read them all with particular attention before she ventures into society so soon after her marriage as that time has been her life-long companion in England.

The Serpentine is a lake in Hyde Park. Most any 19th century Londoner would know about it.

Sir Robert Peel invented modern policing. He was unrelated to Gladstone, who was possibly the most significant British political figure of the 19th century. Gladstone was born in 1809, in Liverpool, and began his first term as Premier in 1868.

If this thing has any area of expertise, I can't find it. What went wrong? It ought to at least be able to regurgitate widely known facts.


> If this thing has any area of expertise, I can't find it. What went wrong? It ought to at least be able to regurgitate widely known facts.

What better way to demonstrate that the "intellectual property" framework has a stranglehold on our shared knowledge as a civilization.


The output reminds me of a really good version of pre-LLM text generation, like character-level LSTMs or Markov chains.

It seems to have the syntax down well enough to make superficially good text, but the semantics just aren't there.
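For readers who haven't seen the pre-LLM generators mentioned above: a character-level Markov chain is only a few lines. This is a generic sketch (not the model from the article): it records which character follows each n-gram in a corpus, then samples from those successors, which is exactly how you get locally plausible but globally meaningless text.

```python
import random
from collections import defaultdict

def build_model(text, order=3):
    """Map each character n-gram to the list of characters seen after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80):
    """Extend the seed one character at a time by sampling successors."""
    order = len(next(iter(model)))
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += random.choice(choices)
    return out

corpus = "the railways have drawn thousands to london. the railways run to hyde park."
model = build_model(corpus)
print(generate(model, "the"))
```

Each output character depends only on the previous `order` characters, so spelling and short phrases look right while any longer-range meaning falls apart.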


From the author's writeup:

>the final pre-trained model came out to about 340 million parameters, and had a final validation bpb of 0.973. The pretraining process took about five hours on-chip, and cost maybe $35. I had my pretrained model, trained in 6496 steps. Things were proceeding swiftly, and cheaply!

GPT-3 had 175,000 million parameters. The smallest of the Gemma 4 models released today clock in at 5,000 million parameters, and I would bet that Google trained them for more than five hours. Just too small and not trained for enough time. A fun art project but not a functional LLM.


Amazing. It's like a drop in replacement for our politicians.

Well, lobotomies were all the rage back then...

:) Good joke, but lobotomy was only introduced by Egas Moniz in 1935, more than a generation after Queen Victoria died.

But AI is intelligent and going to change the world

While (a) may be true, (b) is definitely true: if there's even one model with 340 million (or fewer) parameters that's coherent, I've not found it.

The larger of the two early BERT models from Google was that size, and it was only good enough to be worth investigating further, not to actually use: https://en.wikipedia.org/wiki/BERT_(language_model)


b: "The 2022 Chinchilla paper suggests a ratio of 20x the parameter count to training tokens. For a 340m model that would suggest around 7 billion tokens, more than twice the British Library corpus used here. The smallest Qwen 3.5 model is 600m parameters and that model family starts to get interesting at 2b—so my hunch is we would need 4x or more the training data to get something that starts to feel like a useful conversational partner."
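The 20x figure quoted above is just arithmetic; a quick sketch applying it to the 340M-parameter model from the article (the ratio and parameter count are from the comments here, not my own numbers):

```python
# Chinchilla rule of thumb (Hoffmann et al., 2022): compute-optimal
# training uses roughly 20 tokens per model parameter.
def chinchilla_optimal_tokens(n_params, ratio=20):
    return ratio * n_params

params = 340_000_000  # the ~340M model discussed in the thread
tokens = chinchilla_optimal_tokens(params)
print(f"{tokens / 1e9:.1f}B tokens")  # prints "6.8B tokens"
```

That 6.8B-token figure is what the quoted comment rounds to "around 7 billion", more than twice the British Library corpus used here.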

I also wonder if it might partly be that it hasn't gone through any RLHF for chat. I remember that GPT-3 before RLHF wasn't much for conversation.

I don't want to encourage people to touch my screen and smudge things up.
