Healthcare organizations that can't (easily) send data over the wire while remaining in compliance
Organizations operating in high stakes environments
Organizations with restrictive IT policies
To name just a few -- well, the first two are special cases of the last one
RE your hallucination concerns: the issue is overly broad ambitions. Local LLMs are not general purpose -- if what you want is local ChatGPT, you will have a bad time. You should have a highly focused use case, like "classify this free text as A or B" or "clean this up to conform to this standard": this is the sweet spot for a local model
This may be true for some large players in coastal states but definitely not true in general
Your typical non-coastal state-run health system does not have model access outside of people using their own unsanctioned/personal ChatGPT/Claude accounts. In particular, even if you have model access, you won't automatically have API access. Maybe you have a request for an API key sitting in security review, or in the queue of some committee that will get to it in 6 months. This is the reality for my local health system. Local models have been a massive boon by enabling this kind of powerful automation at a fraction of the cost, without having to endure the usual process needed to send data over the wire to a third party
That access is over a limited API and usually under heavy restrictions on the healthcare org side (e.g., only use a dedicated machine, locked-down software, tracked responses, and so on).
Running a local model is often much easier: if you already have the data on a machine and can run a model without touching the network, you can often do it without any new approvals.
HIPAA systems at any sane company will not have "a straight connect" to anything on Azure, AWS or GCP. They will likely have a special layer dedicated to record keeping and compliance.
Aren’t there HIPAA-compliant clouds? I thought Azure had an offering to that effect, and I imagine that’s the type of place they’re doing a lot of things now. I’ve landed roughly where you have, though: text stuff is fine, but don’t ask it to interact with files/data you can’t copy-paste into the box. If a user doesn’t care to go through the trouble to preserve privacy -- and I think it’s fair to say a lot of people claim to care but their behavior doesn’t change -- then I just don’t see it being a thing people bother with. Maybe something to use offline while on a plane? But even then, I guess United will have Starlink soon, so plane connectivity is gonna get better
It's less that the clouds are compliant and more that risk management is paranoid. I used to do AWS consulting, and it wouldn't matter if you could show that some AWS service had attestations out the wazoo or that you could even use GovCloud -- some folks just wouldn't update priors.
SnipVex clipjacking wallets is almost beside the point; the real failure is a printer vendor treating software like a side gig. Printer and hardware companies get a pass on basic infosec hygiene that would be unacceptable for open source maintainers.
until that changes, airgap your weird hardware setups I guess
Also, this is a perfect storm for lateral movement. USB-borne worms still work frighteningly well in small biz environments, especially ones with no centralized IT and people plugging printers directly into Windows desktops with admin perms. Here SnipVex is just a cherry on top -- a nice, opportunistic payload for the growing class of infostealers targeting crypto wallets
Maybe I got lucky, but in 2017 I bought a Brother DCP-L2520DW laser printer. No matter what OS, computer or network I connect it to, it just works for everyone involved, always. I don't think I've had a single issue with it since I got it, and I did next to nothing to set it up: I basically installed CUPS on my desktop to get it working, and on Windows/macOS it just works.
Not affiliated, just happy user, at least some companies seem to be able to deal with it, regardless if it's open source (my stack) or not (my wife's Apple-stack).
I bought almost the same model, but a few years later. I also enjoy how effortless connecting this printer to Linux is, though I do have to install the brlaser driver manually.
But I did some research before buying (including here on HN) and Brother printers were praised for being reliable and having no problems with Linux drivers.
IPP Everywhere linked in the other comment, but there's also Mopria certified printers (https://mopria.org/certified-products). Which use WPP drivers on Windows.
I don't necessarily disagree, but isn't this because of extremely bad firm/soft/hardware design by the printer companies that then have to be supported by the open source stack?
They do happen all the time, though. One piece of software I work on frequently fails in CI when a dependency updates, because it often triggers Defender's automated "new threat" detection system for some days after it's released. After another week or so it's fine, but it's a pain in the neck.
Go look at the "build log" in your compromised jenkins server and download the (already compromised) build artifact and make sure it matches the mega.co.nz file?
Do you expect the average software engineer to be able to look at a .exe, pull up a disassembler, and know that all the assembly maps back to the source code?
The person who originally reported it was not super technical, so if your software engineer can’t reproduce the customer’s steps to see the same error, then you probably need better software engineers.
You say "Jenkins server" as if there's a CI setup involved.
I wouldn't be surprised if, in many cases, these companies just have whoever touched the code last run a build on their computer and ship that. (Which probably explains how some of the malware got there.)
It's not hard to replicate downloading a zip archive from the official location and find someone knowledgeable to look at it if you aren't yourself. A non-software-engineer did just that.
Real question is whether this is just symbolic or whether the French state will actually redirect procurement pipelines + vendor mandates around these principles. I'd be more impressed if this came bundled with policy teeth, e.g. requiring all software vendors to deliver open-by-default interfaces or pushing funding toward open infra maintenance. Otherwise it's hardly much more than a manifesto
It will take time, but yes. There are already numerous case studies. LibreOffice is already running on more than 500k gov computers. Anecdotal story: as a researcher I worked with a few French PhD students, and they tend to send me LibreOffice documents and spreadsheets.
As far as I can see, awful UI never stops people from using software that is "mandated" or "default". I mean have you seen Windows? MS Office? Web sites? Mobile apps??
I get it, but LibreOffice is awful in a much worse way than Office. On macOS, the fonts and images just look low-res and blurry. There's no polish, even though that is probably quite an easy fix.
But only if they don't know the alternative is better. Everyone knows about MS Office, so they will complain about / demand that instead. People put up with shitty software when they don't know about an alternative
MS Office is far from a good piece of software itself, though. Frankly, the number of sub-menus and other bullshit I constantly have to fix for my parents does not make for a great experience either.
Mind, I barely actually use any Excel/Word/PowerPoint software, but I often have the feeling that a lot of user complaints for these types of things simply come down to: "It's not what I'm used to, therefore it's terrible."
Yep. With known software there's always this "learned helplessness" of dismissing problems with "ah yeah, this is how it is". Even when it's quirky, inconsistent or just broken.
With new stuff, the blame will always lie on the new software, even in situations where it's lack of skill or attention from the user.
I remember a University I used to work at as a dev moving a few classes of a few loud professors from open source Moodle to a paid product, and professors basically replicating Moodle's discussion board functionality by creating public wikis and hoping students wouldn't mess up when editing.
One day one professor approached me wanting a way to prevent students from messing up the "fake discussion board". He got a mouthful from the Dean who was nearby and was footing the bill of a few thousand per month on the expensive SaaS.
I'm convinced this happens in a lot of projects. If you're e.g. Microsoft, you can pay a few people to contribute maliciously to a GPL competitor's coding and governance full time.
It's trivial to throw a million or two dollars at making sure some project ultimately goes nowhere (but survives), and that particular bugs don't get fixed or particular features don't get added. I've got no story to tell, and I've never heard solid evidence of it happening, but it would just be unbelievably tempting to do.
Notice how they say “No PR” on every single repo? So for sure no PR was opened.
Putting in a bit more energy, you are redirected to a whole other system which I have never seen anywhere else (and in this case, unique doesn’t mean good). After 5 minutes of trying to navigate what is probably the least intuitive software forge I’ve ever had the displeasure to witness, you understand that these guys clearly live in a different UI/UX bubble from the rest of us.
Seems like they use gerrit. A lot of larger projects use gerrit for their code review. It is different, yes, but many prefer it over GitHub's "pull request" paradigm which really sucks for high velocity contributors.
This is bad faith. You are not obligated to contribute any sort of code to point out problems in an open source project.
When I go to a restaurant and order a steak, and it arrives and tastes awful, the waiter does not have the right to say to me "if you don't like it, cook it yourself". The chef does not have the right to say to me "tell me exactly what I did wrong, since you're claiming you're an expert on steaks".
No. Anyone can complain about a thing, and the fact that they haven't tried to fix the code themselves is utterly irrelevant.
The difference is that at a restaurant you’re paying for it. If you show up at a soup kitchen and complain that it wasn’t seasoned just right, that’s fully on you.
When my grandfather died, the only thing he asked to be buried with was a small pouch of roasted chestnuts. He used to say they reminded him of long cold walks home through wartime forests that smelled like smoke and bark.
Anyway after the funeral, I cracked one open by the fire and it was still sweet. RIP baba
This is a clever hack and a cute abuse of SQL joins to brute-force what’s essentially a 2-ply MDP over a finite space.
The core idea, btw -- using precomputed transition/score tables to simulate and optimize turn-by-turn play -- is a classical reinforcement learning method
What would be interesting here is to flip it: train a policy network (maybe a tiny, 2-layer MLP) to approximate the SQL policy. Then you could distill the SQL brute-force policy into something fast and differentiable.
I’d love to see a variant where the optimizer isn’t just maximizing EV, but is tuned to human psychology -- e.g., people like getting Yahtzees more than getting 23 in Chance. You could add a utility function over scores.
Anyway this is a great repo for students to learn expected value optimization with simple mechanics.
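To see the precomputed-table idea in miniature: here is a toy sketch (Python, not the repo's actual SQL or scoring tables -- the expected-sum objective and all function names are invented for illustration) that brute-forces all 2^5 keep/reroll decisions for a single reroll, which is essentially the enumeration the SQL join performs:

```python
from itertools import product

DIE_EV = 3.5  # expected value of a single fair d6

def expected_sum(dice, keep_mask):
    """Expected total after rerolling every die not marked 'keep'."""
    kept = sum(d for d, keep in zip(dice, keep_mask) if keep)
    rerolled = sum(1 for keep in keep_mask if not keep)
    return kept + rerolled * DIE_EV

def best_keep(dice):
    """Brute-force all 2^5 keep decisions -- the same exhaustive
    enumeration the SQL join does against its precomputed tables."""
    masks = product([False, True], repeat=len(dice))
    return max(masks, key=lambda m: expected_sum(dice, m))

# With an expected-sum objective, the optimizer keeps exactly the
# dice worth more than 3.5: here, the two sixes.
mask = best_keep([6, 6, 1, 1, 1])
```

Swapping `expected_sum` for a table lookup over full Yahtzee category scores (or for the psychology-weighted utility suggested above) is where the real version's complexity lives.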
Not surprised to see this. What's interesting to me in all this is the misplaced faith in emergent structure.
Roam bet on the idea that if you link enough atomic notes, structure will self-organize.
Which is such a weird fantasy if you spend a few minutes thinking about it. Try writing code like that, or building a company, or just about anything else! Why should notetaking and archive development be any different?
It's clear you need some sort of editorial hand to create something maintainable and future-proof. Like, Zettelkasten had Luhmann’s obsessive discipline behind it. Evidently Roam had, um, enthusiasm and JavaScript?
And yeah, it’s telling that the comparison is to IDEs. Imagine an IDE that dumped every snippet you typed into a graph database and expected you to recompile coherence out of it by browsing links. That's what Roam felt like after the honeymoon.
In general, most of Roam's target audience should want to lean harder into opinionated workflows. There’s a reason tools like Linear or Notion are winning: they’re structured enough to relieve cognitive load, flexible enough to adapt. Roam tried to be Emacs, but it turns out most users don’t want to configure their own productivity dialect.
Also, lol at the idea of "automated taxonomy". The entire knowledge management industry keeps rediscovering ontologies like they’re new. We are probably going to reinvent OWL at some point and give it a name like "neuroschema" or something
Aren't you describing (and Roam using) what is essentially brain mapping, which is a well-established technology based on how our memories actually work?
I'm not a fan of neurophysiology analogies because they veer into pseudoscience, but I'll play along.
Roam implemented static bidirectional links and called it associative memory. In reality, it's closer to mind-mapping software with backlinks. Without mechanisms for reinforcement (surfacing old notes intelligently), pruning (forgetting irrelevant junk), or plasticity (reorganizing in response to use), the system becomes a junkyard of half-formed thoughts.
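To make the reinforcement-and-pruning point concrete, here's a hypothetical sketch of what such a mechanism could look like. The scoring scheme, the half-life constant, and the field names are all invented for illustration -- this is not anything Roam (or Obsidian) actually implements:

```python
HALF_LIFE_DAYS = 30.0  # invented constant: tune per user

def relevance(age_days, hits):
    """Score a note: accesses reinforce it, time since the last
    access decays it. Untouched notes fade toward zero instead of
    permanently cluttering whatever surface the tool shows you."""
    return hits * 0.5 ** (age_days / HALF_LIFE_DAYS)

def surface(notes, k=3):
    """Return the k most 'alive' notes; the long tail below the
    cutoff is the natural candidate set for pruning or archiving."""
    return sorted(notes, key=lambda n: -relevance(n["age_days"], n["hits"]))[:k]
```

Note the behavior this buys you: a heavily-accessed note from a year ago still loses to a lightly-accessed recent one, which is exactly the "forgetting" dynamic these tools lack.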
I think this is the key mistake in Roam's design (and, in many ways, Obsidian and friends). They appeal to a dream some people have that maybe if you never forget anything, you'll get smarter forever. (Or something like that.)
The problem is that there are many benefits to having a mind which forgets things. That property lets us grow and change over time -- and move on from old ideas or old ways of thinking. Not necessarily because they're bad, but because we become a different person from the person who had that thought.
Trauma is an extreme case of this. It's essentially a disorder of memory, where we etch some old memory in stone. Because we don't let ourselves forget it, we inevitably build structure / thought patterns around that memory. "This one time __" -- "As a result, deep down I believe that I am fundamentally ___ (unsafe / unworthy / stupid / unlovable / ...)". Trauma work is in many ways a slow process of learning to unclench your mind from those past experiences, to allow yourself to "move on" from them. (I.e., forget the emotional impact they have today.)
It's also kind of obvious in software or architecture. You can't just keep adding to an old structure forever. Software gets harder to build the bigger it gets. Same with buildings, books, teams and more. If everything new needs to fit with everything that has come before, it's an O(n^2) job. Of course Roam suffers from this too. The "remember everything forever" default is naive and silly. Our brains don't work best like that.
There is no reason to forget. Your brain does memory crystallization whether you like it or not, this is not something that is up to you. There is no upper bound to memory as far as we know. https://notes.andymatuschak.org/Spaced_repetition_memory_sys...
You are just making a very silly "Appeal to nature" argument. Your notes, just as your memories, change and morph. For your memories, every time you access them, for your notes, every time you notice something you could improve. Old notes should not bother you, just ignore them if they're not relevant. They take a negligible amount of space on your devices. Personally, every note I've taken serves a purpose, even if their purpose is to just fill a spot so that I may be continually aware I've tackled a particular subject before even if it has not had any relevance for years.
> There is no reason to forget. [...] You are just making a very silly "Appeal to nature" argument.
I don't see it that way. I see it as a healthy, useful expression of continuous death.
In software, we don't start every program by first importing every line of code ever written. Why not? The computer has room for all that code. Why don't we import it all into our workspace? The reason, in my mind, is that each line of code in a computer program has a cognitive cost to it. A sort of, conceptual gravity, which makes reaching for further away ideas much more difficult.
When brainstorming, often a blank page is the best canvas for a new idea. We start companies with new stationery. New workbooks. We even have sayings for this -- "blue sky thinking" or "greenfield projects". I.e., projects which don't inherit older, more established structures or code.
There's a balance of course. We also don't start everything from scratch either. In code we pull in libraries as we need them, and lean on our programming languages and operating systems. But you have to strike the right balance between new and old. Too much old and you're stifled by it. Too much new and you're trying to boil the ocean.
I think humans are like that too. I think our ability to crystallize new thoughts depends on our capacity to let go of old ones. I don't think the best minds spend their lives hoarding all the best knowledge. For my money, the old people I like the most are people who can be in the here and now. Knowledgeable, sure. But also present. Open to surprise. Philosophically, you want to combine what's happening right now with the best ideas from the past. And let the rest go.
At least, that's how I think of it for myself. If I'm a different person in 20 years from who I am now, I wish whoever I become the best of luck. I hope for them to be unburdened by all the cognitive misadventure I'm probably going through right now.
Correct. What I meant specifically is that we are unaware of a hard limit to memory, one that we have not found due to factors like our lifespans and cognitive decline, so it should not be something to worry and fuss over due to its current irrelevancy.
I personally find pleasure in reading my old notes, even ones that contain outdated ways of thinking, incorrect assumptions, etc. If anything, it helps me reflect on the growth that's occurred. I agree it's not necessarily productive to log everything all the time, though.
Me too. But again, it's nice to re-read old notes which are "lost to time". The author of this piece is clearly finding the past is actively influencing the present:
> At least for me — and most of the people I know — we got a garbage dump full of crufty links and pieces of text we hardly ever revisit. And we feel guilty and sad about it.
It'll never work if you can't leave things behind.
Yeah, me too! But old notebooks can just be left on the shelf and forgotten. I don’t think that’s really true of Roam. At least, not how a lot of people use it.
Really, I think the user in that case needs to be much more choosy about what they put in the database. It will save them time and greatly improve the signal-to-noise ratio.
Heh. This landing page takes me somewhere between DeepMind circa 2014 and Tesla's AI Day press decks.
I mean, if you're actually training humanoids in under an hour with sim-to-real transfer that "just works", then congrats, you've solved half of embodied AI
The vertical integration schtick (from "metal to model") echoes early Apple, but in the robotics space that usually means either 1) your burn rate is brutal and you're ngmi, or 2) you're hiding how much is really off-the-shelf
Clearly the real play here, assuming it's legit, is the RL infra. K-Sim is def interesting if it's not just another wrapper over Brax/Isaac. Until we see actual benchmarks on, say, dexterous manipulation tasks trained zero-shot on physical hardware, it's hard to separate "open-source humanoid stack" from the next pitch that ends in "-scale"
Actually, we use COTS components for basically everything, that's how the price is so low. It's just that we do a lot to make sure we understand how everything works together from software to hardware
IMO humanoid companies do make a lot of big claims, which is why it's important to make everything open-source. You don't have to take my word for it; you can just read the code
IME the COTS angle cuts both ways. It brings costs down and makes iteration faster, but what's the moat then?
If the value is in integration, that’s fine, but integration is fairly fragile IP. Open source is good reputationally, but it accelerates the diffusion of your edge unless the play is community + ecosystem lock-in or being the canonical reference impl (cf. ROS, Hugging Face)?
Well, the point of being open-source is that I don't think there is much of a moat in the hardware, in the limit, and it's better to accelerate the ecosystem and start building standards. It's very similar to Tesla - electric cars are easier to build than gas cars, so the moat has to come from branding / integration / software (for reference, before K-Scale I worked on the FSD ML team at Tesla, which informed a lot of my thinking about what the right business model for this would look like).
I think humanoids are in their infancy. Eventually most of the margin will come from software capabilities, which we do plan to charge a lot of money for (like, download a software package and your robot can clean your house, that's probably worth something). But in order for that business model to work we need to have commodity, standardized hardware.
This all makes sense and is honestly the most coherent humanoid startup thesis I've seen outside of figure.ai. You're right that the unit economics of hardware are a trap unless you can commoditize the complements. And humanoid hardware clearly wants to become a commodity, but no one's finished the job yet and it seems brutally difficult (see: the ghost of Willow Garage)
The Tesla analogy makes sense to me, but with a caveat: they still spend billions on CapEx and own verticals like battery chemistry and drivetrain design. In this case you’re betting that the value collapses upward into software, like the shift from phones to apps, but for that to work, your software has to deliver an exponential delta per dollar
With that, I think the real risk is that your "clean your house" package is deceptively hard in the long tail, and you end up with the iRobot Roomba UX. Novelty fades fast when it constantly gets stuck under the couch, or whatever the equivalent of that is for humanoids. To be fair, iRobot/Roomba is a household name, but still "only" a ~$1.5B company, which seems meager compared to ambitions in this space
As an aside, I would love to see an RFC-style doc on how you think humanoid software standards should emerge. ROS is still a Frankenstein, and someone needs to kill it gently lol
The dangerous failure mode for humanoid robots is that they get off-balance and the usual compensation mechanism fails, so now you have a heavy chunk of metal slamming down. You don't want to be at the bottom of a flight of stairs that a robot is walking on.
If I made ~15M USD/yr and was much younger, I’d strongly consider buying this, specifically because it seems wide open. Others will just buy it and won’t think about the cost, but they’ll probably consider the community. You can’t have community for something like this unless it’s open. If it’s open you’ll get early adopters which can help develop the community.
You must focus on making it better and cultivating a community first.
You do not need 15M USD/year to buy our robots. With 15M USD you could get ~1666 Kbots, or ~15,015 Zbots.
For reference, for the current Kbot to be 10% of your annual income, you would need to make $90,000 a year. And we plan to drive the cost down much much lower for the hardware.
I suppose if one is teaching or evangelizing constructor theory, this could be sort of like an interactive textbook
Needless to say, constructor theory hasn't really earned a stable foothold in mainstream physics, and there's a lot of hype in this space, but that's not a criticism of this particular project, just good to know for anyone not familiar
The quantum gravity + graviton tasks stuff especially: without a falsifiable physical model backing it, this can feel like mathematized cosplay. But that has more to do with constructor theory than with this project
Would love to see someone do a pluggable backend so you could test different "task ontologies" against each other.
Mainly I came here to say that categories can likely be used to great effect here a la Geroch
For instance, you can start by modeling tasks as morphisms between substrate states (objects), and then enforce composition explicitly. Define constructors as functors that map tasks and substrates while preserving structure.
For quantum or irreversible effects, use monads to encapsulate branching and decoherence. Then one could represent task sequences as categorical diagrams and check for commutativity. Or embed substrates via Yoneda to expose behavior in terms of available tasks
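A toy version of the "tasks as morphisms" idea in the sets-and-relations setting: model each task as a relation between substrate states, make composition explicit, and check that a diagram commutes. All the names and states here are illustrative, not from any constructor-theory library:

```python
def compose(r, s):
    """Relational composition: all pairs (a, c) such that
    (a, b) is in r and (b, c) is in s, for some shared b."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

# Toy substrate states and tasks (a task = a relation between states)
f = {("raw", "clean")}    # task: prepare
g = {("clean", "done")}   # task: transform
h = {("raw", "done")}     # the claimed composite task

# The diagram commutes iff composing the two legs yields the same relation
assert compose(f, g) == h
```

Composition of relations is associative, so chains of tasks form genuine morphisms; checking commutativity of larger diagrams reduces to comparing composed relations the same way.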
>In this work, we show how to formulate fundamental notions of constructor theory within the canvas of process theory. Specifically, we exploit the functorial interplay between the symmetric monoidal structure of the category of sets and relations, where the abstract tasks live, and that of symmetric monoidal categories from physics, where concrete processes can be found to implement said tasks.
Right: Maya is a "language continuum" in the sense that geographically proximate speakers tend to understand each other well, and intelligibility goes down as you move further away from any given individual on the continuum
Prior to travel being affordable for everyone, and broadcast media like television that can reach everywhere, languages were shaped by the same local forces everywhere. So you'd get that effect pretty much everywhere in the world.
Even a lot of things that we think of as "the" version of a language are often effectively a particular dialect out of a complicated tapestry of local dialects being something that "everybody" has to learn because it is the language spoken by your rulers. It happened to "win" because the people speaking that dialect also won the local military conflicts and became the language of the court.
Parisian French isn't the same as Standard (Court) French, and it sounds different in the South because it's only been the majority language there for a century. It's superimposed on top of another language's phonology. It's not a dialectal continuum thing.
Not wrong, but note that the difference is much less than language differences between different English speakers in England, even at short geographical distances.
I'll be frank: I think this idea that ${faveLang} is for misunderstood geniuses who truly understand computers where mainstream languages like Python are for dunces who only know how to glue together APIs is a large part of why such languages as Perl are nearing extinction. It turns out that there are people working on challenging problems in domains you've never heard of in Python -- and pretty much every other language. Give it a rest
In the real world, the ability of a lone genius to cobble together a script in an hour is actually not that much of an edge -- it is more important for people to write something that others can understand and maintain. If you can do that in Perl, great, and if writing Perl makes you happy: also great. But beware that smug elitism turns people off, it kills communities and also tends to signal a pathological inversion of priorities. All this should be in service to people, after all
I wonder how much of catering to the lowest common denominator / being a team player is an internalization of corporatism's reduction of the worker to a fungible, interchangeable cog.
As a solo dev it is a massive advantage to use sophisticated languages and tools without worrying if the dumbest person on my team can use them. It’s a strategic advantage and I run rings around far larger companies.
I agree with you that it is sad there isn't more diversity in languages and tools, and that organizations are generally using the same terrible slop. We could have such nice things
You lose me with the smugness. Make no mistake, you aren't smarter or better than someone else purely by virtue of your willingness to hack on BEAM languages or smlnj or Racket or whatever languages you like.
There are probably people smarter than you working in sales at $bigcorp or writing C# on Windows Server 2008 at your local utility. Novice programmers often have an instinct to rewrite systems from scratch when they should be learning how to read and understand code others have written. Similarly, I associate smugness of this form with low capacity for navigating constraints that tend to arise when solving difficult problems in the real world. The real world isn't ideal, sorry to say
That sounds like post facto rationalization, sour grapes, and perhaps a bit of learned helplessness. To paraphrase you ‘We can’t have nice things because nice things are in reality bad and unrealistic. People who do have nice things are not special.’
I could readily believe that your stated reality is true of the majority of solo devs, but it’s not true for me or those that I know. I understand that my sampling is biased and probably not the normal experience. I don’t seek to show off for my anonymous HN account and instead wanted to say that sometimes we can have nice things and it can work out successfully.
It's not learned helplessness et al, just a plea to drop the smug elitism if you want people to take you seriously. I actually want nice things, I hate writing brittle systems in languages that offer no meaningful guardrails, and setting up Rube Goldberg contraptions to get a poor approximation of e.g. basic BEAM runtime functionality.
Any success I have had in getting very boring companies to adopt nice things at all has not come from insulting people's intelligence and acting like I'm the smartest person in the room. I despise this kind of elitism that is rampant in certain technical communities. It turns people off like nothing else and serves no purpose other than to stroke your own ego -- it's pointless meanness.
I worked in applied research at a few very big companies and did have a measured amount of success getting some advanced tech adopted, so I know what it takes to move the needle. My lesson, and one I wish I had learned sooner, was that the effort was not worth it. I had assumed that the lack of adoption was due to lack of exposure to ideas, but having exposed these ideas to a large number of people, I reluctantly came to the conclusion that it was more a lack of innate intelligence. I honestly wish it weren't so.
My goal has not been to fix big companies for a long time; I was just musing on the rationale and commented to see what other people think on the topic.
> reduction of the worker to a fungible interchangeable cog
I see this trope a lot on HN, and I don't understand it. All of the highest skilled developers that I have met are the quickest to adapt to new projects or technologies. To me, they look like a "fungible interchangeable cog".
And every solo dev that I ever met thinks they are God's gift to the world -- "tech geniuses". Most of them are just working on their own Big Ball o' Mud, just like the rest of us working on a team.
If only the highest-skill devs can quickly learn new projects, then they are no longer interchangeable.
Your sampling of solo devs could very well be biased, similarly so could my sampling. Not working on a big ball of mud is a massive perk of being solo dev. It’s my company and I’ll refactor if I want to.
> In the real world, the ability of a lone genius to cobble together a script in an hour is actually not that much of an edge
Any macro/multiplier is that way. You don't miss it until someone shows you how to do it.
In the last six months alone, the scenarios where I had to call upon Perl to slam-dunk something insanely laborious number in the dozens.
Its just that if you don't know this, or don't know it exists, you grow up being comfortable doing manual work with comfort.
Sheer amount of times, I have seen some one spend like half a day doing things which can be done using a vim macro in like seconds is beyond counting at this point.
Sometimes a language/tool gets to evolve with the operating system right from its birth. The relationship between Unix, vim, and Perl is that way.
This is a unique combination, which I don't think will ever change -- unless, of course, we move away from Unixy operating systems to something entirely new.
You are missing my point. For transparency, you are talking to someone who writes Racket in emacs on my Linux desktop, has used Rust macros to clean up awful code in widely used open source packages, and regularly generates code for all manner of purposes in lots of different languages. I know the slam dunk feeling of generating exactly the code that will topple a problem -- and I also know it's not actually that big an edge!
It matters little that you can generate code in an hour that would take your colleague days. It is nice for you and it provides a short lift for your team, but in the limit what matters is maintainability. Peter Hintjens writes fondly of metaprogramming and code generation, but also warns that it makes it difficult for others to work with you, and that it's easy to fall into the trap of building abstractions for their own sake. The "edge" in technical work comes from seeing both the forest and the trees, and from not missing that technical work is in service of humans, first and foremost.
I am glad you enjoy writing Perl, and I like encountering people passionate about it in my work. But I still think there are good reasons why it's in decline, and Perl users should reflect more on that rather than assuming people aren't using it because they are dumb / not technical enough / don't think about problems as creatively or deeply.
I personally believe there are two types of Perl use - development, which you're talking about here, and sysadmin/devops.
For the first category you’re right - these days there’s not much difference between Perl vs Java vs Rust because abstractions are the same.
But for the second category, I totally agree about where OP's smugness comes from - there's such an ocean of difference between using tools like Perl, awk, sed, jq, and bash to transform Unix command inputs and outputs that it really is a massive superpower. Try doing a day's work of a Unix admin with these tools compared to writing Java to do it. Oceans, I say!
But I don't think their being a "basement dweller genius", as you put it, is because of Perl - the smugness is there for the same reason BOFH Unix sysadmins got their reputation: their tools are literal superpowers compared to GUI tools etc., and they can't believe everyone doesn't use them!
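To make the "ocean of difference" concrete, here is a minimal sketch of the kind of one-liner work being described - the log format and values are invented purely for illustration:

```shell
# Sum response bytes per status code from access-log-shaped input.
# In Java this would be a file of boilerplate; here it is one awk pass.
printf '%s\n' \
  'GET /a 200 512' \
  'GET /b 404 0' \
  'GET /c 200 1024' |
awk '{ bytes[$3] += $4 } END { for (s in bytes) print s, bytes[s] }' |
sort
```

A Perl one-liner (`perl -lane '...'`) would do the same job; the point is that the whole program fits on the command line.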
I use nearly all of these tools, with the exception of Perl. I go to great lengths to make sure I have access to them because they're so critical for quality of life. I love them and I understand why people love them.
Here's the reason: these languages/tools are tactically very powerful. Tactics are immediate and decisive. Tactics are effectively tricks in the sense that if you can "spot the trick", you can -- with a tiny amount of work -- reduce a formidable problem to virtually nothing. Having a vast toolkit that facilitates such tricks is incredibly powerful and makes you appear to have superpowers to colleagues who aren't familiar with them.
But tactics are definitionally short-term. You deploy them in the weeds, or at least from the forest, (hopefully) never from the skies. Tactics aren't concerned with the long term, nor how things fit together structurally. They are not concerned with maintainability or architecture.
This is why it isn't actually that important that you can cobble together a 15-line Perl script in an hour to do something that would take any of your colleagues a week. Years from now, when you are gone and someone runs into a similar but slightly different problem, someone will find your Perl script, not understand it, and rewrite it all in Java anyway. Or assume it's too hard and give up. Maybe they will adapt your Perl script, but more likely it'll be seen as a curiosity.
It sucks, because there is beauty in that approach of solving problems. As I said in another comment, I wish there were more diversity in tooling and languages. But at the same time, it's important to consider that people are fundamental. All of this is in service to that. And I personally would rather build software that people use over the long term.
I think there's a deeper truth here. Perl was notoriously difficult to write C language extensions for. Languages like Ruby and Python really took off because they had a much more approachable and useful C interpreter API which, honestly, made gluing various library APIs into the language far easier. This was the key to turning a very slow and memory-hungry scripting language covering a fraction of POSIX into a useful domain-extension and embedded language.
Ruby did better at the domain extension part and Python was better at the embedded language part. Perl 6 went entirely the other way. I think this was the real driver of popularity at the time. This also explains why gem and pip are so different and why pip never matured into the type of product that npm is.
True, but I don't remember it being nearly as convenient to distribute those modules, as it still required the whole build environment on the target, and you still had to deal with Perl's exceptionally efficient but ancient and cumbersome object and type system.
XS wasn't _that_ bad once you got the hang of it, but I do remember Ruby 1.6 coming out and being blown away by how much better the experience of creating distributable C modules was. The class system was flat and easy to access, you could map Ruby language concepts into C almost directly, and the garbage collection system was fully accessible.
Perl 6 started being discussed right around this time, and I think it was clear in the early years that it wasn't going to try to compete on these grounds at all, focusing instead on more abstract and complex language features.
Anyways... even seeing your name just brings me back to that wonderful time in my life, so don't get me wrong, I loved Perl, but that was my memory of the time and why I think I finally just walked away from Perl entirely.
I don't know what caused this reaction. Was the OP being smug or elite? I did not read it that way. If anything, in my experience, C++ and Rust folks are way more smug/elite compared to Perl hackers.
In my experience, the biggest problem with Perl is readability. Python crushes it. Without list comprehensions, Python is also very slow in for loops. But no worries: most people writing Python don't care too much about speed, or they are using C libraries like NumPy, Pandas, or SciPy. I write this as someone who wrote Perl for years, personally and professionally. Later in my career, I came to Python and realised it was so much easier to read and maintain large code bases compared to Perl. To be fair, much like C, it is possible to write very clear Perl, but people quickly get carried away using insane syntax. With Python, the whole culture, from the bottom up, is about readability, simplicity, and accessibility. I think my only "gripe" about Python is that there are no references like Perl's, but you can fake it with single-item lists.
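The single-item-list trick looks something like this (a minimal sketch; the names are made up for illustration):

```python
# A one-element list is mutable, so it can stand in for a Perl-style
# reference: the callee mutates the cell and the caller sees the change.
def increment(ref):
    ref[0] += 1

counter = [10]      # "reference" to the value 10
increment(counter)  # caller's value is updated in place
```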
Probably the lines "Its just that the culture in Python world isn't made up of people who think about those problems or even in that dimension."
and "Most of the bad rep Perl gets is because programmers who only interact with http endpoints and databases tend to not understand where else it could be useful."
Healthcare organizations that can't (easily) send data over the wire while remaining in compliance
Organizations operating in high stakes environments
Organizations with restrictive IT policies
To name just a few -- well, the first two are special cases of the last one
RE your hallucination concerns: the issue is overly broad ambitions. Local LLMs are not general purpose -- if what you want is local ChatGPT, you will have a bad time. You should have a highly focused use case, like "classify this free text as A or B" or "clean this up to conform to this standard": this is the sweet spot for a local model