Hacker News | parodysbird's comments

The yearly dots are the individual data points, so a line between e.g. 2021 and 2022, despite being the main visual marker, does not indicate a change that happened during 2021 leading into 2022, but rather the change measured in 2022 relative to 2021.

GPT-3.5 came out in early 2022, and ChatGPT later the same year. There was a lot of change and community discussion that year around this sort of thing.


No, we do not try to answer the original trolley problem in earnest. We immediately reject it and move on.


He's not a developer. He's really talking about consumer tech.


OK, let's try, "I have no idea how any of this works. However, I have come to the conclusion that it doesn't, and can't."

Closer? I'm just going by the headline here.


This is not a criticism he mentions explicitly, but the issue I see with the valuation is that these products cannot be used in a production system by a responsible engineer. As in, I can't have LLMs autonomously plugged in as part of my product: the failure modes will never be predictable enough. Now, Microsoft will probably be able to charge a fortune at the enterprise level, and managers will dream of a day when LLMs can replace all those weirdo devs at their company, but that will stay a dream.

All of the valuable uses are personal. It makes me personally feel more productive at work. It helps me personally understand some topic better. It gives me an idea in a personal project.

That's all really cool, but that is not what the valuation is about. The valuation is built on science-fiction hype and a bubble around agentic this or that, or AGI, or whatever, and it is driving very questionable decisions that may waste trillions of dollars and enormous amounts of energy.

The plus side is that there is some really cool personally useful tech here, and we will probably end up with very good open source implementations and cheaper used GPUs once the bubble bursts.


You also can't responsibly have a prolific apprentice (intern, first- or second-year, etc.) plugged directly into production.

On the other hand, today most money for SWEs goes to people who aren't "staff engineer" level. What if most money went to staff engineers who directed these interns and paired with new human apprentices to learn staff eng?

For the past few thousand years, the apprentice/journeyman/master model was how trades worked, with an "up or out" structure that kept the role pyramid the right shape for skill to shape the outcomes.

These days, far too many people stay at apprentice-level skill for their entire careers, mostly because enterprises have failed to treat engineering management as a skilled trade, and so are unable to value and develop staff-engineer-caliber contributors. The enterprise learning machine is broken by false "efficiency".

LLMs change this. Staff-engineer-caliber SWEs can direct these indefatigable assistants, as if each staff engineer had a team of ten who never need mental breaks to remain productive. There will, of course, be some number of junior devs with enough affinity for the role that they will stick with the apprentice model and work up to the staff engineer level. (And there will always be solo or boutique teams of app/SaaS SWEs.)

As for the enterprise engineering management that couldn't tell the difference between a staff engineer and an apprentice: the LLM multiplies the difference to the point that the outcomes are evident even to a non-technical observer.

So one possible timeline is a rise in the median human skill level through attrition: those who lack the skill or critical thinking to leverage the machine assistants as force multipliers, or who cannot survive direct mentored skill-up training and observation from the staff engineers, wash out.

You talk about personal value. I broadly agree with you, and am adding who I think those persons could be (or would have to be, given the current level of "thinking" by these tools) for the hype to deliver on value. (At a higher level of machine, closer to "AGI", this scenario changes.)

As would be evident from downsizing a mediocre team and watching output go up and become more reliable, these forces could, if they play out this way, raise dollars per human, productivity, and quality, and enable a return to the millennia-proven apprenticeship model for the trade.


This is a great point. Now I have extra reason to cheer on the bubble bursting.


He's just queried records with dead=FALSE. That is not a list of active recipients of Social Security payments. All it's showing is that some portion of people die without the death ever being officially logged on their record in the Social Security database.

Edit: Also, it's not clear that the death field is the sole criterion for determining eligibility for payments (i.e. for determining that the recipient is alive)
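
A toy sketch in Python (all field names hypothetical, not the actual SSA schema) of why "not marked dead" is a strictly larger set than "active recipient":

```python
# Hypothetical records: a death flag left unset does not imply an
# active (or even living) beneficiary.
records = [
    {"id": 1, "dead": False, "receiving_payments": True},   # alive, being paid
    {"id": 2, "dead": False, "receiving_payments": False},  # died, death never logged
    {"id": 3, "dead": True,  "receiving_payments": False},  # death logged
]

# What a dead=FALSE query counts: every record not marked dead.
not_marked_dead = [r for r in records if not r["dead"]]

# What was implied: every such record is an active recipient.
active_recipients = [r for r in records if r["receiving_payments"]]

print(len(not_marked_dead), len(active_recipients))  # 2 1
```

The gap between the two counts is exactly the population whose deaths were never logged.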


Yep, it doesn’t sound like that’s the sole criterion based on this thread, which includes a NYT article (from 2023!) showing fewer than 50k people over 100 collecting payments despite over 18M records. https://xcancel.com/ThatsMauvelous/status/189135619250239902...

I can’t say how annoyed it makes me that Elon’s initial reaction to anything odd seems to be “fraud!!” rather than curiosity


> I can’t say how annoyed it makes me that Elon’s initial reaction to anything odd seems to be “fraud!!” rather than curiosity

To be fair, being curious about things isn't going to get as much support as screaming "GUBBERMINT FRAUD!" (a red rag to the GOP bull) when you're trying to trash any government departments that stand between you and more money.


Academic papers are supposed to be about the author... They're meant to be an author's work put forth to an intended community of "colleagues", not students or the general public. No one should expect a general learner to turn to academic research papers as their main source of learning material.


I personally never thought academic papers were about the author. They might have turned into an ego game, but I always assumed that the goal of academic writing was to effectively communicate complex ideas and research findings to the reader. If not, then it's no wonder our voices are the first ones the public ignores in a time of crisis.


Because the use of Bayesian models does not require Bayesian epistemology, which, unfortunately, a lot of people conflate.

Here is a decent paper by a statistician about this issue: https://arxiv.org/abs/1006.3868


That's missing the point--philosophy as a whole is not required for science. Invertebrates were adjusting their beliefs to observations (science) long before the first philosopher. I'm not confusing Bayesian models with Bayesian epistemology.

What this is at its essence is that science has allowed us to evolve, learn to kill lions and bears, create agriculture, build ships, cure diseases, travel to the moon, build AI, etc. And all this time while science has been empowering humans and saving lives, science has been under attack by philosophy. You have a scientist saying, "I observe that solar and lunar patterns are more consistent with the earth revolving around the sun" and a philosopher saying "ackchyually the bible says the sun revolves around the earth". When evidence (collected through scientific methods) for a hypothesis becomes overwhelming, the last refuge of ignorance is the philosopher saying, "ackchyually, you don't know that because nothing is truly knowable".

Epistemology is an attempt to understand how we know things, and Bayesian epistemology is probably the best description of how we know things based on science. It's a description, based on observation of how scientists practice science, of how science works.

So when philosophers come in and say Bayesian epistemology doesn't work, they're saying science doesn't work. It's yet another attack on science by philosophers.

And as I said in my other post, Popper's criticism of Bayesian epistemology is actually smart: he does understand what he's talking about; it just doesn't ultimately matter much, because the practice of science de facto works, even if the philosophical model says it doesn't. If all the nuance of Bayesian epistemology and Popper's ideas isn't captured, it easily becomes a straw-man argument for philosophers to say that science doesn't work. When it comes down to it, the way people talk about Popper and Bayesian epistemology is just a more sophisticated version of "ackchyually, you don't know that because nothing is truly knowable".

I'm not defending Bayesian epistemology, per se. I'm defending science, as it's practiced, because as I said, science is fucking important. Now, more than ever, in the era of anti-vaxxers and climate change denial, we desperately need people to believe in science.


> Invertebrates were adjusting their beliefs to observations (science) long before the first philosopher.

To underscore the bad science this leads to when taken as assumed truth, let alone as a hypothesis: there is very little evidence, justification, or explanation that any process an invertebrate uses executes a calculation obeying the very specific axioms of probability and updating its state in accordance with Bayes' theorem. Stimulus response is not Bayes' theorem. Updating a state from new inputs is not Bayes' theorem.


Learning from observation is the basis of science, and invertebrates certainly do that.

A lot has changed since invertebrates started doing that. Not only have we evolved more senses than the first invertebrates, we've also developed methods such as Bayesian inference to combine the results of multiple observations, as well as numerous methods for removing confounding variables such as control groups and regression analysis. Unsurprisingly this has led us to discover a lot more with science, with a lot more accuracy, than invertebrates.

And yes, updating a state from new inputs is not literally Bayes' theorem, which is why nobody said it was. However, the process of updating a belief's confidence from new inputs, as it is done today, can be modeled using Bayesian inference. No, invertebrates don't do that--which is, again, why I never said they did.
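
To make the distinction concrete, a minimal sketch of what modeling a belief-confidence update with Bayesian inference actually involves (numbers purely illustrative):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Start unsure (prior 0.5); observe evidence 4x as likely if H is true.
posterior = bayes_update(0.5, 0.8, 0.2)
print(round(posterior, 2))  # 0.8
```

Each new observation feeds the old posterior back in as the next prior; stimulus response alone involves no such likelihood-weighted calculation.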

It's a bit tiresome to be corrected by people who clearly don't understand that Bayes' theorem, Bayesian inference, and Bayesian epistemology are all named after the same guy because they're built on each other, in that order. Yes, they aren't all the same thing, but if you're jumping in with that as if it's a correction, you certainly don't understand the concepts.


Could you give an example of where a philosopher has impeded science in the way you describe? Where it has been not just irrelevant, but obstructive? Irrelevant is fine - science and the philosophy of science have different goals. You might as well say that chemistry is irrelevant to mathematics.


> Bayesian epistemology is probably the best description of how we know things based on science.

This is wrong, and it's a bit ironic you are so adamant on a point that is bad philosophy and leads to bad science as a way of insisting that philosophy has no relevance for science.


Okay, what's your explanation for how we can trust science?


You're grossly misrepresenting both science and philosophy. Science is a conscious and self-referential effort; it has nothing to do with animals learning how to survive in their environment. And philosophy is definitely not Bible thumping.


> Science is a conscious and self referential effort

Is this supposed to be a meaningful sequence of words?

> it has nothing to do with animals learning how to survive in their environment.

As an animal who would have died in childbirth and taken my mom with me were it not for science, I disagree.


He's talking about Bayesian philosophy of science, not science, which ultimately does not rely on Bayesian epistemology.


Agreed--in fact, science doesn't rely on philosophy at all. If the entire field of philosophy disappeared, science would go on functioning just fine. In fact, science has generally been hindered by philosophy--it's seemingly impossible to discuss scientific methodology without some wanker interjecting "well ackchyually nothing is knowable". Animals with nervous systems were learning from observation before humans invented enough language to epistemologize, and will continue to do so with or without philosophers.

Bayesian epistemology is an attempt to model why science works--it relies on science, not the other way around.


Bayesian epistemology is not used in almost any domain in science, it does not model why science works, and it does not rely on science: it relies on metaphysics.


Science does in fact rely on philosophy; that's how we got the scientific method.


The scientific method may at one time have been conceived by philosophers, but we are centuries past that time, and in recent centuries all the refinements and improvements to science have been made by scientists. The philosophical roots of the scientific method have changed so much that they would be considered invalid today.

The reverse is not true--scientists have written a lot of philosophy--and since they tend to ground their philosophy in reality rather than in speculative logic, it tends to be better philosophy than the philosophers'.


I don't think you know what you're responding to, but in any case, regarding Deutsch, he "laid the foundations of the quantum theory of computation, and subsequently made or participated in many of the most important advances in the field, including the discovery of the first quantum algorithms, the theory of quantum logic gates and quantum computational networks, the first quantum error-correction scheme, and several fundamental quantum universality results."

