
No. US courts consider both, to the extent that it’s a bright-line divider between “conservative” judges and “liberal” ones, where the former are far more likely to profess strict adherence to the text of the law (particularly constitutional law).

In any case, there is always a difference between the “intent” of a large and diverse body of politicians, and the actual text of a law. Any practical legal system must take it into consideration.


> where the former are far more likely to profess strict adherence to the text of the law (particularly constitutional law)

This is a fiction and just an excuse conservative justices use to make conservative rulings when they don't like a law.

They are perfectly willing to abandon the text of the law whenever it doesn't advance a conservative agenda. The shining example of this is the Voting Rights Act: something never amended or repealed by Congress, but slowly dismantled by the court counter to both the intent and the text of the law.

And if you don't believe me, I suggest reading over the Shelby County v. Holder [1] decision because they put it in black and white.

> Nearly 50 years later, they are still in effect; indeed, they have been made more stringent, and are now scheduled to last until 2031. There is no denying, however, that the conditions that originally justified these measures no longer characterize voting in the covered jurisdictions.

IE "We know the law says this, and it's still supposed to be in effect. But we don't like what it does so we are canceling it based on census data".

[1] https://supreme.justia.com/cases/federal/us/570/529/


> This is a fiction and just an excuse conservative justices use to make conservative rulings when they don't like a law.

Isn't this the other way around? If you cite "the spirit of the law" then you're ignoring the text in order to do whatever you want.

Finding a "conservative" judge who does the latter is evidence that the particular judge is hypocrite rather than any argument that ignoring what the law actually says is the right thing to do.

But you also picked kind of a bad example, because that wasn't a case about how to interpret the law, it was about whether the law was unconstitutional.


That's an uncharitable reading. Citing "the spirit of the law" is not automatically ignoring the text in order to do whatever you want. It can be "how do I apply this archaic text about oxen (or whatever) to current events". Maybe the meaning is that stealing stuff in general is frowned upon, not just oxen. Or should we focus on how a Chevrolet Corvette is definitely not an ox?


Interpreting constitutions and very old laws requires a different frame than modern laws that merely say something different than you'd like them to.

Why does the First Amendment say "freedom of speech, or of the press" and make no mention of radio or TV or the internet? Because, of course, it was enacted in 1791. The drafters can't be expected to have listed things that didn't exist yet and it's obvious to everyone that it's meant to apply to this category of things even if we're now using fiber optics and satellites instead of dead trees.

But if you're interpreting a law from 20 years ago instead of 200, and nothing relevant has changed that the drafters couldn't have predicted when it was enacted, then the fact that someone is doing what you said instead of what you meant is entirely down to you being bad at saying what you mean, and that ought to be on you rather than on them.


I’m not saying it’s true or false. Hypocrisy is universal to politics, and it’s trivial to find examples throughout US history on all sides of the political spectrum. I’m just saying that the issue of strict interpretation is so fundamental to the US legal system that it’s a core philosophical debate for judges.


Is this a different meaning of "conservative" and "liberal" from the political sides, or is this reply blatantly partisan?


On the contrary, this doesn’t sound impressive at all. It sounds like a cowboy coder working on relatively small projects.

300k LOC is not particularly large, and this person’s writing and thinking (and stated workflow) is so scattered that I’m basically 100% certain that it’s a mess. I’m using all of the same models, the same tools, etc., and (importantly) reading all of the code, and I have 0% faith in any of these models to operate autonomously. Also, my opinion on the quality of GPT-5 vs Claude vs other models is wildly different.

There’s a huge disconnect between my own experience and what this person claims to be doing, and I strongly suspect that the difference is that I’m paying attention and routinely disgusted by what I see.


300k especially isn’t impressive if it should have been 10k.


Yes, well put. And that’s a common failure mode.


I would guess that roughly 0.000087% of devs on the planet do it in 10k (if that's even possible) and 37.76% would do it in 876k, so 300k is probably somewhere in the middle :)


To be fair, codebases are bimodal, and 300k is large for the smaller part of the distribution. Large enterprise codebases tend to be monorepos, have a ton of generated code, and have a lot of duplicated functionality for different environments, so the 10-100 million line claims need to be taken with a grain of salt; a lot of the subprojects in them are well below 300k even if you pull in defs.


I'm fairly skeptical of the LLM craze, but I deeply respect Peter Steinberger's work over the years; he truly is a gifted software developer in his own right. I'm sure his personal expertise helps him guide these tools better than many could.


(OP) 1/3rd of the code is tests.

There's an Expo app, two Tauri apps, a CLI, and a Chrome extension. The admin part to help debug and test features is EXTREMELY detailed and around 40k LOC alone.

To give some perspective to that number.


Yeah, I read the post. Telling me that there's a Chrome extension and some apps tells me nothing. Saying that the code is 1/3 tests is...something, but it's not exceptional, by any means.

I've got a code base I've been writing from scratch with LLMs; it's of equivalent LOC and testing ratio, and my experiences trusting the models couldn't be more different. They routinely emit hot garbage.


> A HUGE amount of the population in my quickly-regressing country don't believe that COVID was the killer that it in fact was.

I don't know what country you're referring to, but there's ample data that beliefs about Covid's lethality are highly partisan in the USA, and you, too, might be misinformed. In particular, the political left wildly overestimates the lethality of Covid (both historically and in the present). See, for example, [1]. Other sources [2,3] reporting on the same data also validate the overall partisanship, but unfortunately don't show the correct answer in a way that makes it easy to see the pattern.

[1] https://www.allsides.com/blog/partisan-divide-among-republic...

[2] https://www.brookings.edu/articles/how-misinformation-is-dis...

[3] https://news.gallup.com/opinion/gallup/354938/adults-estimat...


None of this refutes what I asserted.


To the extent that you asserted anything specific at all, it was that "a HUGE amount of the population" in your country don't believe that the virus was "the killer it in fact was".

I just showed you that a) there's a large misconception about the lethality of the virus, and b) people on the left side of the US political spectrum tend to systematically exaggerate the threat. In particular, "the killer it in fact was" is often not a factual statement, but a partisan exaggeration of reality.


You've shown that there's a large misconception about the hospitalization rates of the virus, not its lethality.

https://ysph.yale.edu/news-article/study-finds-large-gap-in-...

There are studies that show that the "far right" (since you insist on interpreting this through a partisan lens) had a much higher death rate after the introduction of COVID-19 vaccines. I'm going to make a wild assumption here: the far left and the far right want to avoid death at roughly equal rates. I interpret the finding above as a partisan underestimation of the lethality of COVID.

80% of Republicans believed (according to Gallup) that COVID death rates were falsely inflated. Only 47% of Republicans believed that COVID is more deadly than seasonal influenza, whereas 87% of Democrats did.

Again, you refute a thing I didn't claim.


> You've shown that there's a large misconception about the hospitalization rates of the virus, not its lethality.

Hospitalization is upstream of death. You don't just get the virus and fall over dead. More to the point, to the extent that one group incorrectly believes that risk of hospitalization is higher than it is, it reflects their overall incorrect belief that the mortality of the virus is higher than it is.

> There are studies that show the the "far right" (since you insist on interpreting this in a partisan lens) have a much higher death rate, after the introduction of covid-19 vaccination rates.

No, there aren't. You're referring to this study [1], which was conducted in two states (Ohio and Florida), and was overgeneralized on NPR, MSNBC and other left-wing media outlets.

The study ran only until December 2021, and found an overall excess death rate of 2.8% for Republican voters, which was 15% higher than the excess death rate for Democratic voters, according to their model (in other words, Democratic voters had an excess death rate of ~2.4% during the same period). The claim you're making derives only from the May-December 2021 period, where they found a roughly 8% difference in excess death rates between parties, on a baseline of approximately 25%.
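
A quick back-of-envelope check of those numbers as stated (a sketch only; it assumes the "15% higher" figure is a relative difference, not percentage points):

    # Sanity check of the figures as quoted above (assumption: "15% higher"
    # means a relative difference, not percentage points).
    republican_excess = 0.028                      # 2.8% excess death rate
    democratic_excess = republican_excess / 1.15
    print(f"{democratic_excess:.2%}")              # ~2.4%, matching the parenthetical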

In other words: both parties saw excess death rates of approximately 25%, and the "republican" part of the set was 8% higher [2]. But when you look at the data by state [3], there's hardly any difference for Florida, so this study is really describing a difference only in a subset of Ohio voters.

Again, you've probably been misinformed about what you think you know. When you actually look at the data, the results are far less dramatic than reported in the media.

[1] https://pubmed.ncbi.nlm.nih.gov/37486680/

[2] https://www.ncbi.nlm.nih.gov/core/lw/2.0/html/tileshop_pmc/t...

[3] https://pubmed.ncbi.nlm.nih.gov/37486680/#&gid=article-figur...


> We also used to enforce antitrust law.

I've been reading the Chernow biography of Rockefeller, and this simply isn't true. We've almost never enforced "anti-trust law", and it's basically never been particularly effective.

The Sherman Act was widely considered a failure (even after passage in 1890). It did little or nothing to affect the fate of Standard Oil, which actually grew for a decade after passage, to over 90% control of the market by 1904. This is despite the state of Ohio engaging in a much more successful legal attack, based on technicalities of the trust charter, which had nothing to do with the federal law.

The thing that actually brought down Standard Oil was...competition. By the time the company was actually broken up under the Sherman Act in 1911, it had declined to ~60% market share. The overall story is essentially the same as today: the law ends up being used to punish declining companies for prior bad behavior.


Laws mostly don't work by actually filing cases. They mostly work by deterring malicious activity because people don't want a case brought against them. The cases are only for the ones who fail to be deterred, which is pretty uncommon for large corporations because they can afford lawyers to tell them what not to do.

The problem comes when you erode what was meant to be a strong antitrust law through decades of narrowing interpretations and then it's not deterring them anymore.

> the law ends up being used to punish declining companies for prior bad behavior.

The solution to this one is to take the politicians out of it and allow customers to file antitrust class actions.


Or maybe it's something that parents can do with their children, since that's clearly the intent. It's also the convention for "letters to Santa" since...forever.

Honestly, it doesn't take much of a good faith effort to see this.


The first link is not a clinical definition, and the second link is not a "meta study" (it's a Substack article with absolutely no rigor). Moreover, the first link cites prevalence numbers wildly in conflict with the second link:

> Approximately 6 in every 100 people who have COVID-19 develop post COVID-19 condition

vs., for example:

> From one center in Wuhan, 1,359 survivors completed 3-year follow up and 54% had at least one persistent symptom of Long Covid

This only underscores the lack of clinical definition. Both of these suffer from the same fundamental error, which, again, is that there's no precise definition of the syndrome. They include symptoms that are common amongst healthy people, mix them with less-common things that are associated with Covid (e.g. anosmia) and try to call this a disease state. See the WHO's grab-bag list of possible inclusion criteria:

> Over 200 different symptoms have been reported by people with post COVID-19 condition. Common symptoms include: fatigue, aches and pains in muscles or joints, feeling breathless, headaches, difficulty in thinking or concentrating, alterations in taste.

So literally having "headaches" or "aches and pains" is enough to claim Long Covid, according to the WHO.

The Topol/Aly substack engages in the same logic, and you will see that the referenced charts and graphs cover everything from fatigue to heart attack. Aly, in particular, has based his entire long covid research on a single dataset of (largely elderly, unhealthy prior to infection) VA patients that he refuses to release, and routinely engages in statistical fishing expeditions for new "symptoms" within that dataset.


> And in our modern world, universities are still the best place for such apprenticeship.

I spent a good portion of my life in Universities -- and went as far as one can go in terms of educational credentials and taught at the university level -- and I cannot disagree more.

Universities produce job skills incidentally, if at all. It's simply not their goal [1]. Even today, at the best CS programs in the country, it's possible to get a degree and still not be better than a very junior engineer at a software company (and quite a few graduates are worse).

> We started with implementing simple data structures and algorithms and solving simple puzzles all the way to implementing toy OSes, databases, persistent data structures, compilers, CPUs, discrete simulations, machine learning models.

This was not my experience, nor is it what I have seen in most university graduates. It's still quite possible for a CS grad to get a degree having only theoretical knowledge in these topics, and no actual ability to write code.

This leaves open the question of where "the best place" is to learn as-practiced programming [2], but I tend to agree with the root commenter that the best programmers come up through a de facto apprenticeship system, even if most of them spend time in universities along the way.

[1] Their goal is to produce professors. You may not realize this if you only went as far as the undergraduate diploma, but that is mostly what academics know, and so it is what they teach. The difference between the "best" CS programs and the others is that they have some professors with actual industry experience, but even then, most of them are academics through and through.

[2] Code academies suck in their own ways.


> Universities produce job skills incidentally, if at all. It's simply not their goal [1]. Even today, at the best CS programs in the country, it's possible to get a degree and still not be better than a very junior engineer at a software company (and quite a few graduates are worse).

Having been self taught in both software and electrical engineering, I’ve experienced a lot of this.

In EE, it’s amazing how many graduates come into the job without ever having used Altium/KiCAD/Cadence for a nontrivial project, or who can give you a very precise definition of impedance but don’t know how to break out an engineering calculator to set design rules for impedance-controlled differential pairs. Or worse yet, people who can give you all the theory of switched-mode power supplies but can’t read datasheets and select parts in practice.


Yeah, the practical part is what does it. Students need time on their particular niche's software programs. Outside of Altium/KiCAD/Cadence there are also Mastercam, ANSYS HFSS, LTspice/SIMetrix, Keysight, CATIA, Synopsys, and Dymola, among others.


I agree; however, the model was clearly designed so that the university considers first employment to be the apprenticeship, and the university education to be the theoretical background that makes it possible to follow and keep up in an apprenticeship. So really, the issue is that companies don’t properly invest in training juniors… because they will leave after 2 years anyway… which is because companies won’t give them pay bumps equivalent to a change in position, which is also their fault, leading them to hire pricier individuals who just left another company looking for a pay bump instead. They pay the same in the end but trade individuals around pointlessly to do it, and have to retrain them on their software stack.

Kinda funny when you think about it.


I'll disagree with your "disagreement" - of course, I went to a relatively unique school: Waterloo computer engineering with co-op in the 90s. 8 study semesters, 6 work semesters. Clearly lets you see what "work" is like, and which parts of your studies seem relevant. Obviously, no one will use 100% of their engineering courses - they're designed to cover a lot of material but not specialize in anything.

True, grad school was focused on making professors - I did a master's, ended up being a lecturer for a while. Now a 20+ year software developer in the valley. But undergrad was focused on blending theoretical and practical skills. If they didn't, employers would have stopped coming back to hire co-op students, and would have stopped hiring the students at such a high rate when they graduate.

I COULD have learned a lot of software myself - I was already coding in multiple languages before attending and had a side-software-contract before ever going in - and that was before the "web", so I had to buy and study books and magazines and I was able to do that reasonably well (IMHO).

Yet I never regretted my time in school. In fact, I had a job offer with my last employer before going back to grad school, and they hired me as a summer contractor at a very nice hourly rate back then.


Thank you for saying this clearly. I love universities. They are so far from supporting apprenticeships. Even PhDs — they don't do enough work for the senior professors to count as apprenticeships. Maybe postdocs. But the system is not great — we need guilds.


Yea, I started to learn how to program in my early teens and made a lot of progress just messing around on my own. Then I went to University for a CSE degree and spent 4 years basically doing applied math. Yuck. Finally, once I got out of University and into industry, I started learning practical things again: debugging, build systems, unit testing, application development, and so on. My programming skill growth quickly restarted.

Looking back, I'd consider my University degree to be essentially a 4 year pause on growing my programming skills.


I studied computer science in a university, not because I wanted to learn programming, but because I wanted to study computer science.

I admit that most development tasks don't need the knowledge you get from a CS degree, but some do.

But in computer science, it's also totally possible to be self-taught. I've learnt a lot on my own, especially after university. Computer science is good for that because it's generally accessible: you don't need an expensive lab or equipment, you can just practice at home on your laptop.


Polytechnic schools seem to do this well, research universities not so much.


> Even today, at the best CS programs in the country, it's possible to get a degree and still not be better than a very junior engineer at a software company (and quite a few graduates are worse).

I think it's important to differentiate the personal achievement of students and the training offered by their universities. For instance, the courses offered by CMU and MIT are super useful - insightful, practical, intense, and sufficiently deep. That said, it does not mean that every MIT/CMU graduate will reap the benefit of the courses, even though many will.

It goes without saying that it does NOT mean people can't teach themselves. I'm just saying universities offer a compelling alternative to training next gen of engineers.


> I'm old enough to remember what it was like to deploy a Rails application pre-Docker: rsyncing or dropping a tarball into a fleet of instances and then `touch`ing the requisite file to get the app server to reset.

If this is what you remember, then you remember a very broken setup. Even an “ancient” Capistrano deployment system is better than that.


Or there was “git push heroku main” or whatever it was back in the day. Had quite a moment when I first did that from a train – we take such things for granted now of course...


Honestly this is still a great way to deploy apps and still some of the best DX there is, IMO.

Costs a crap ton for what it is, but it is nice.


Yeah, it also wasn’t difficult to do the equivalent without heroku via post-commit hook.

Honestly, even setting up autoscaling via AMIs isn’t that hard. Docker is in many ways the DevOps equivalent of the JS front end world: excessive complexity, largely motivated by people who have no idea what the alternatives are.
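
To make that concrete: once you've baked an AMI with the app on it, "autoscaling via AMIs" is roughly one API call. A minimal sketch using boto3, where the launch template name and subnet IDs are hypothetical placeholders (not anything from this thread):

    # Minimal sketch: point an Auto Scaling group at a prebaked AMI.
    # Assumes a launch template ("rails-app") referencing that AMI already
    # exists; all names and subnet IDs below are hypothetical.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="rails-app-asg",
        LaunchTemplate={"LaunchTemplateName": "rails-app", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    )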


I was working on Rails apps before AMIs or Heroku.


Me too. I'm not responding specifically to you with the parent comment. That said, "autoscaling", as a concept, didn't really exist prior to AWS AMIs (or Heroku, I guess).

My point is that a lot of devs reach to Docker because they think they need it to do these "hard" things, and they immediately get lost in the complexity of that ecosystem, having never realized that there might be a better way.


My recollection is that this is what many Capistrano setups were doing under the covers. Capistrano was just an orchestration framework for executing commands across multiple machines.

More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.


> More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.

Well, OK, so you remember a bad setup that was bad for whatever reason. My point is that there's nothing about your remembered system that was inherent to Rails, and there were (and are) tons of ways to deploy that didn't do that (just like any other framework).

Capistrano can do whatever you want it to do, of course, so maybe someone wrote a deployment script that rsynced a tarball, touched a file, etc., to restart a server, but it's not standard. The plain vanilla Cap deploy script, IIRC, does a git pull from your repo to a versioned directory, runs the asset build, and restarts the webserver via signal.


This was before Git! (Subversion had its meager charms.) Even after Git became widespread, some infra teams were uncomfortable installing a dev tool like Git on production systems, so a git pull was out of the question.

The main issue that, while not unique to Rails, plagued the early interpreted-language webapps I worked on was that the tail end of early CI pipelines didn't spit out a unified binary, just a bag of blessed files. Generating a tarball helped, but you still needed to pair it with some sort of an unpack-and-deploy mechanism in environments that wouldn't or couldn't work with stock cap deploy, like the enterprise. (I maintained CC.rb for several years.) Docker was a big step up IMV because all of the sudden the output could be a relatively standardized binary artifact.

This is fun. We should grab a beer and swap war stories.


If you call a stable, testable, and "reproducible" (by running it locally or on some dev machine) tarball worse than a git pull, then you are the one killing the solutions that work in an unpredictable and unsafe world. I think beer and swapping stories is a good idea, because I would love to learn what to avoid.


Capistrano lost its relevance when autoscaling went mainstream (which was around 15 years ago now), yet people kept using it in elastic environments with poor results.


The parent wasn’t describing an autoscaling deployment system.

Rails has a container-based deployment if you actually need that level of complexity.


GP was talking about pre-Docker deployments. You could totally deploy immutable Rails AMIs without either Docker or Capistrano.


AMIs were still pretty novel at the time I started (around 2007 like the GP). The standard deployment in the blogs/books was using Capistrano to scp the app over to like a VPS (we did colo) and then run monit or god to reboot the mongrels. We have definitely improved imho!


Totally, around that time I did that too (although I was working with LAMP stacks, so no Capistrano), but with the rise of AWS, Capistrano got outdated. I know that not everyone jumped on board with the cloud that early, and even for the ones that did, there was an adaptation period where EC2 machines were treated just like colo machines. But Ruby also used to be the hipster thing before 2010 so... :)

Anyway, never liked Capistrano, so I'm probably biased.


This is not a scientific paper. It is a "narrative review", which is another way of saying "editorial". It superficially looks like science, but doesn't do any of the methods, controls or (correct) statistics that you would expect in a legitimate meta-analysis. Do not take it seriously.

The "dedication" at the top should clue readers into what is going on -- it's not even trying particularly hard to look legitimate -- but alas.

Edit: this line, in particular, made me laugh. It is one of the more egregious examples of "pretending to do statistics" that I have seen recently:

> Table 1 summarizes the studies in the last decade examining a potential link between neuropsychiatric reactions and finasteride exposure. When prescribed mainly for AGA, all reports suggest that finasteride can cause depression, anxiety, suicide ideation, and suicides. Assuming a null hypothesis (finasteride does not affect mood) and a 50% chance of 1 result against this hypothesis, the probability of getting all 8 studies concluding against the null hypothesis by chance is 0.5^8 = 0.0039.
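
For reference, the entire quoted "analysis" is just this one multiplication (a sketch of the arithmetic, nothing more):

    # The quoted calculation: the probability that 8 out of 8 studies point
    # the same way by chance, treating each as an independent 50/50 coin flip.
    p = 0.5 ** 8
    print(p)  # 0.00390625, i.e. the paper's "0.0039"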


From your description it sounds like a review article? Which is common and normal. I haven't read it though because I'm not bald.


It isn't a review article, though the difference is more subtle, because a review article is also editorial in nature.

The difference here is that this article is by someone with an axe to grind (again, see the "dedication", which is never done in a legitimate review; and the clownish application of statistics, which is so completely absurd that it implies incompetence, malice, or both). I have no faith that this writer made a legitimate attempt to impartially weigh the evidence on this question.


Doesn’t mean this isn’t correct. These problems are definitely real.


> These problems are definitely real.

Maybe they are, maybe they aren't. Proof is required.


IMHO proof of safety of the drug should be required from the maker before they profit from selling it.

https://en.wikipedia.org/wiki/Precautionary_principle


Indeed. This is why the drug passed phase 1 (safety) trials when it was first approved in 1992 for treatment of prostate enlargement. Moreover, it subsequently passed additional rounds of clinical trials for the hair loss indication in 1998, and has been involved in more than 30 different clinical trials overall.

Now, I'm not going to argue that there can't be rare side effects that are only discovered with time. I'm also not going to claim that the original trials were perfect, or that research into the question isn't justified. But you're trying to assert that they didn't test for safety, and that's just factually incorrect.


I should take back my statement -- I had a knee-jerk reaction to someone saying that proof of harm is required for a drug, when I've seen so many cases of drugs being inadequately tested and then causing harm, and I think the precautionary principle is often not followed anywhere close to adequately when it comes to new chemical stuff we do to our bodies. We probably mostly agree in principle. I'm not saying they didn't do safety testing. I would suspect that the safety testing was flawed, as it has been in every other case that I have looked into, and failed to catch possible harms that may now be happening.

Whether those harms outweigh the benefits overall remains to be seen and likely will never be known unless it's really really bad, which is likely not the case here.

I'd agree more research is probably justified, but there's likely little profit in it for anyone.


TFA cites eight peer-reviewed studies finding a link. Sounds like decent evidence to me.


> TFA cites eight peer-reviewed studies finding a link. Sounds like decent evidence to me.

There have been over 30 randomized controlled trials of this drug, and the author picked only eight papers, none of which were randomized, none of which were controlled, and all of which were based on mining self-reported data from patient databases.

Come now.


I checked closer and you're correct. Good call; best to stick with meta studies (e.g., https://doi.org/10.1080/09546634.2021.1959506).


I’ve personally experienced it. There’s so much data out there it now comes with a warning label.


There's an FDA-mandated warning label for every drug. The current label for Finasteride [1] includes depression only as a "postmarketing experience", which is based on the same data you're reading here (self reports), and is not reliable for determining causality. [2]

There is no current listing for suicide, suicidal ideation, etc.

[1] https://www.accessdata.fda.gov/drugsatfda_docs/label/2012/02...

[2] "Because these reactions are reported voluntarily from a population of uncertain size, it is not always possible to reliably estimate their frequency or establish a causal relationship to drug exposure"


"Misery loves company" the paper


> ...is used for a purely cosmetic purpose

I know you probably didn't mean it this way, but Finasteride started as a treatment for prostate enlargement, and is still used for that.

