> Static types, algebraic data types, making illegal states unrepresentable: the functional programming tradition has developed extraordinary tools for reasoning about programs.
But none of these things are functional programming? This is more the tradition of 'expressive static types' than it is of FP.
What about Lisp, Racket, Scheme, Clojure, Erlang / Elixir...
Agreed, but the article begins with the previous quote, and is entitled "what functional programmers get wrong", so I feel like there's some preliminary assumptions being made about FP that warrant examining.
In practice, much of the article seems to be about the problems introduced by rigid typing. Your quote, for instance, is used in the context of reading old logs using a typed schema if that schema changes. But that's a non-issue in the FP languages mentioned above since they tend towards the use of unstructured data (maps, lists) and lambdas. Conversely, reading state from old, schema-incompatible logs might be an issue in something like Java or C++, which certainly are not FP languages as the term is usually understood.
So overall, not really an FP issue at all, and yet the article is called "what functional programmers get wrong". The author's points might be very valid for his version of FP, but his version of FP seems to be 'FP in the ML tradition, with types so rigid you might want to consult a doctor after four hours'.
> In practice, much of the article seems to be about the problems introduced by rigid typing … But that's a non-issue in the FP languages mentioned above since they tend towards the use of unstructured data
Haskell doesn’t have this problem. None of the “rigid typing” languages have this problem.
You are in complete control of how strictly or how leniently you parse values. If you want to derive a parser for a domain-specific type, you can. If you want to write one, you can. If you want to parse values into more generic types, you can.
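To make that concrete, here's a minimal sketch with aeson; the types and field names are hypothetical, purely to illustrate the derived / hand-written / fully generic options:

```haskell
{-# LANGUAGE DeriveGeneric, OverloadedStrings #-}
import           Data.Aeson (FromJSON (..), Value, decode, withObject, (.:), (.:?), (.!=))
import qualified Data.ByteString.Lazy as BL
import           GHC.Generics (Generic)

-- Derived parser: strict, exactly the shape of the type.
data User = User { name :: String, age :: Int } deriving (Show, Generic)
instance FromJSON User

-- Hand-written parser: exactly as lenient as you choose
-- (here, a missing "age" field defaults to 0 instead of failing the parse).
data LooseUser = LooseUser { luName :: String, luAge :: Int } deriving (Show)
instance FromJSON LooseUser where
  parseJSON = withObject "LooseUser" $ \o ->
    LooseUser <$> o .: "name" <*> o .:? "age" .!= 0

-- Fully generic: any well-formed JSON parses into aeson's Value type.
anyShape :: BL.ByteString -> Maybe Value
anyShape = decode
```

The strictness lives in the instance you choose to derive or write, not in the language.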
This is one of those fundamental misunderstandings that too many programmers have, and it seems like it’ll never die.
But naturally, the more typing you opt into, the more of typing's benefits and costs you accrue. I'm not sure what the misunderstanding is. Of course you could write a C program composed entirely of void*, but why would you? Equally, of course you could write a Clojure program with rigid type enforcement, but again, why would you? You're fighting against the strengths of your tools.
You don't pick up Haskell just to spend all your time passing around unstructured data, any more than you opt into the overhead of TS just so you can declare everything as `any`.
> You do not understand how typing works in Haskell.
Possible! But a little hyperbolic, perhaps. I think we're more likely to be talking past one another.
> You are free to work with primitives as much as you like.
Sure, but what's the point? If all of your functions are annotated as `myFun :: a -> b`, where a and b are arbitrary type variables, why are you writing Haskell? You're effectively writing Ruby with extra steps. You're neither getting the benefits of your rigidly typed language nor the convenience of a language designed around dynamism.
Yes, ML-esque type systems are quite neat and flexible. But the more granular the typing you opt into, the more of the usual cost of typing you incur. Typing has inherent cost (and benefit!). And if you're not into the typing, ML-esque typed languages are a curious tool choice.
So to return to the original point, if you're passing data around in Haskell, you have more likely than not opted into some level of typing for that data - else why use Haskell - and will run into the exact issues with rigid type systems mentioned above. Can you parse and typecast and convert and whatnot? Sure. But no one ever said that you couldn't, and that's precisely the busywork that dynamic languages are generally designed to lead you away from.
> Possible! But a little hyperbolic, perhaps. I think we're more likely to be talking past one another.
Direct, certainly. More than would be typically expected socially. But I don't think it's hyperbolic — I think there is genuine fundamental misunderstanding here.
I just don't think I have the misunderstanding that you think I do. I spent most of my programming career working in statically typed systems, including two years in Rust recently. Nothing in that article is new or surprising to me. Some of it is downright elementary.
If I may be so bold, I'd posit the misunderstanding is on your part. No one is saying things are impossible to model in rigidly typed systems - this is your key misapprehension about what is being said. What I'm saying is that different languages have different paths of desire, and the kinds of problems identified in the original article are more the kind of problems that tend to crop up with heavy use of types, than they are the kind of problem that has much of anything to do with functional programming.
You're thinking categorically, but I am not, so we're talking at cross-purposes. Perhaps too much static typing has crept into your thinking! (I jest, of course! :) )
If that's the case, then yes I think we're talking past each other. Although it's hard to square this with the argument you've been making — if you understood King's point, I don't understand how you can be arguing that Haskell idiomatically leads you into rigidity at version boundaries. The whole thrust of King's article is that this is a mischaracterization.
> What I'm saying is that different languages have different paths of desire, and the kinds of problems identified in the original article are more the kind of problems that tend to crop up with heavy use of types, than they are the kind of problem that has much of anything to do with functional programming.
I don't think this is correct at all. I don't think TFA has anything at all to do with types or FP (despite the clickbaity title), as numerous other people here have already pointed out. The article isn't attacking rigid types. The author's point is that no single-program analysis — typed or untyped — covers the version boundary (or system boundaries more generally).
A Haskell service that receives an unknown enum variant doesn't have to crash — you parse the cases you care about and ignore the rest. The "path of desire" you're describing isn't a property of the language.
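For what it's worth, a minimal aeson sketch of that pattern; the Event type and its variant names are made up for illustration:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson (FromJSON (..), withText)
import Data.Text (Text)

-- An enum-ish field where new variants may appear at a version boundary.
data Event = Created | Deleted | Unrecognised Text
  deriving (Show)

instance FromJSON Event where
  parseJSON = withText "Event" $ \t -> pure $ case t of
    "created" -> Created
    "deleted" -> Deleted
    other     -> Unrecognised other  -- keep variants this build doesn't know about, don't fail
```

An unknown value like "archived" decodes to Unrecognised "archived" rather than failing the parse.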
I suppose "path of desire" here is a matter of opinion. In my experience, crashing on unknown inputs is not idiomatic Haskell, nor is it desirable.
> Has anyone defined a strict subset of C to be used as target for compilers? Or ideally a more regular and simpler language, as writing a C compiler itself is fraught with pitfalls.
The main reason you'd target C is for portability and free compiler optimisations. If you start inventing new intermediate languages or C dialects, what's the benefit of transpiling in the first place? You might as well just write your own compiler backends and output the machine code directly, with optimisations around your own language's semantics rather than C.
Imho, C89 is the strict subset that a compiler ought to target, assuming they want C's portability and free compiler optimisations. It's well understood, not overly complex, and will compile to fast, sensible machine code on any architecture from the past half century.
Say I am writing a transpiler to C, and I have to choose whether I will target C89, or some arbitrary subset of C23. When would I ever choose the latter?
The only benefit I could think of is where you're also planning to write a new C compiler, and this is simplified by the C being restricted in some way. But if you're doing this, you're just writing a frontend and backend, with an awkward and unnecessary middle-end coupling to some arbitrary subset of C. What's the benefit of C being involved at all in this scenario?
And say you realise this, and opt to replace C with some kind of more abstract, C-like IR. Aren't you now just writing an LLVM clone, with all the work that entails? When the original point of targeting C was to get its portability and backends for free?
> Having done this for a dozen experiments/toys I fully agree with most of the post, would be nice if the addition of the must_tail attribute could be reliable across the big 3 compilers, but it's not something that can be relied on (luckily Clang seems to be fairly reliable on Windows these days).
This may be a stupid question, but if the function must tail, that's just a jump, no? Why not use goto?
> Obviously there was a solution, probably an easy one, but I didn’t even look for it
It's hard to take this seriously. It's the most obvious setting possible. Settings > Privacy & Security > Full Disk Access > tick the apps you want to have it.
What's even the complaint here? That Mac has solid app permissions, but you can't be bothered to open the settings?
I said it was likely an easy solution. Glad to see my intuition was correct!
I also said it was the “final straw”. No worries at all if you’re not familiar with that expression. It means that there were lots of similar slights previously, and that the event I mentioned, while minor, was the one that finally pushed me to make the decision I made.
> I also said it was the “final straw”. No worries at all if you’re not familiar with that expression. It means that there were lots of similar slights previously, and that the event I mentioned, while minor, was the one that finally pushed me to make the decision I made.
This sort of patronizing assholery is childish and unbecoming. Your comment would've been better without it.
> This kind of crap ticks me off and makes me respond in kind. I should be better, sure, but sometimes I'm not.
I think we're all struggling to identify any other possible interpretation of, and I quote, "obviously there was a solution, probably an easy one, but I didn’t even look for it". Your words are not ambiguous - you knew this would be an easy issue to solve, and you did not bother trying to solve it. And you say this as though it's someone else's fault.
Should Tim Apple come to your desk personally every morning and ask which MacOS defaults it would suit you to remove? Are we to understand that the obvious security benefits of sandboxing filesystem access pale in comparison to any inconvenience for you, even if that inconvenience is you merely having to bother to open the settings?
You're being totally unreasonable, and you're acting mean when your unreasonableness is picked up on. Learn to take a note, particularly when you're in the wrong, rather than becoming an irrationally defensive ball of spittle and venom. It'll serve you better in the long run.
The Canadian and Australian news link taxes are a naked handout to powerfully connected individuals like Rupert Murdoch. They're completely incoherent as policy without that fact.
Big Tech spinelessly folded when they should have just banned news links instead. Google has no obligation to index or link to extortionist news media at all. Watch Murdoch U-turn in ten seconds when no one can find his trash online.
In general, there's far too much compliance with protectionist mandates from corrupt foreign governments. One silver lining of the mostly dark cloud of deglobalisation is the fact that US businesses should no longer care what Australian or Turkish or Russian laws say at all, if they're not in those markets.
macOS has been getting a lot of flak recently for UI reasons (justifiably so), but I honestly feel like they're the closest to getting it right with granular app permissions.
Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps. Not because OS vendors want to control what apps do, but because users do. If the FOSS community continues to ignore proper security sandboxing and distribution of end user applications, then it will just end up entirely centralised in one of the big tech companies, as it already is on iOS and macOS by Apple.
I knock on your door.
You invite me to sit with you in your living room.
I can't easily sneak into your bedroom. Further, my temporary access ends as soon as I exit your house.
The same should happen with apps.
When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2. Further, as soon as I exit the process, the permission to access dir1 should end as well.
A better example would be requiring the mailman to obtain written permission to step on your property every day. Convenience trumps maximal security for most people.
Attempt at real life version (starts with idea they are actually not trustworthy)
- You invite someone to sit in your living room
- There must have been a reason to begin with (or why invite them at all)
- Implied (at least limited) trust of whoever was invited
- Access enabled and information gained heavily depends on house design
- May have to walk past many rooms to finally reach the living room
- Significant chances to look at everything in your house
- Already allows skilled appraiser to evaluate your theft worthiness
- Many techniques may allow further access to your house
- Similar to digital version (leave something behind)
- Small digital object accessing home network
- "Sorry, I left something, mind if I search around?"
- Longer con (advance to next stage of "friendship" / "relationship", implied trust)
- "We should hang out again / have a cards night / go drinking together / ect..."
- Flattery "Such a beautiful house, I like / am a fan of <madlibs>, could you show it to me?"
- Already provides a survey of your home security
- Do you lock your doors / windows?
- What kind / brand / style do you have?
- Do you tend to just leave stuff open?
- Do you have onsite cameras or other features?
- Do you easily just let anybody into your house who asks?
- General cleanliness and attention to security issues
- In the case of Notepad++, they would also be offering you a free product
- Significant utility vs alternatives
- Free
- Highly recommended by many other "neighbors"
- In the case of Notepad++, they themselves are not actively malicious (or at least not known to be)
- Single developer
- Apparently frazzled and overworked by the experience
- Makes the updates they can, yet also supports a free product for millions.
- It doesn't really work with the friend you invite in scenario (more like they sneezed in your living room or something)
> When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2.
What happens if the user presses ^O, expecting a file open dialog that could navigate to other directories? Would the dialog somehow be integrated into the OS and run with higher permissions, with notepad then being given permission to the other directory that the user selects?
Pretty sure that's how it works on iOS. The app can only access its own sandboxed directory. If it wants anything else, it has to use a system-provided file picker that returns a security-scoped URL for the selected file.
Because security people often don't know how to balance security and usability, and we end up with software that is crippled and annoying to use.
I think we could get a lot further if we implement proper capability based security. Meaning that the authority to perform actions follows the objects around. I think that is how we get powerful tools and freedom, but still address the security issues and actually achieve the principle of least privilege.
For FreeBSD there is capsicum, but it seems a bit inflexible to me. Would love to see more experiments on Linux and the BSDs for this.
FreeBSD used to have an ELF target called "CloudABI" which used Capsicum by default.
Parameters to a CloudABI program were passed in a YAML file to a launcher, which acquired what were in practice the program's "entitlements"/"app permissions" as capabilities and passed them to the program when it started.
I had been thinking of a way to avoid the CloudABI launcher.
The entitlements would instead be in the binary object file, and only reference command-line parameters and system paths.
I have also thought of an elaborate scheme with local code signing to verify that only user/admin-approved entitlements get lifted to capabilities.
However, CloudABI got discontinued in favour of WebAssembly (and I got side-tracked...)
A capability model wouldn't have prevented the compromised binary from being installed, but it would have completely prevented that compromised binary from reading or writing any file (or any other system resource) that Notepad++ wouldn't ordinarily have had access to.
The original model of computer security is "anything running on the machine can do and touch anything it wants to".
A slightly more advanced model, which is the default for OSes today, is to have a notion of a "user", and then you grant certain permissions to a user. For example, for something like Unix, you have the read/write/execute permissions on files that differ for each user. The security mentioned above just involves defining more such permissions than were historically provided by Unix.
But the holy grail of security models is called "capability-based security", which is above and beyond what any current popular OS provides. Rather than the current model, which just talks about what a process can do (the verbs of the system), a capability talks about which objects a process can operate on (the nouns of the system). A "capability" is an unforgeable token, managed by the OS itself (sort of like how a typical OS tracks file handles), which grants access to a certain object.
Crucially, this then allows processes to delegate tasks to other processes in a secure way. Because tokens are unforgeable, the only way a process could possibly have gotten the permission to operate on a resource is if some other process delegated that permission to it. And when delegating, processes can further lock down a capability, e.g. by turning it from read/write to read-only, or they can completely give up a capability and pass ownership to the other process, etc.
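A toy sketch of that delegation-plus-attenuation idea, using ordinary Haskell values to stand in for capabilities (a real capability OS enforces unforgeability in the kernel, much like file descriptors; this is only meant to illustrate the shape of it):

```haskell
-- Hypothetical capability value: holding one is what grants access.
data FileCap = FileCap
  { capRead  :: IO String                 -- the holder may read the underlying file
  , capWrite :: Maybe (String -> IO ())   -- Just if the holder may also write
  }

-- Mint a read/write capability for one specific file.
mkFileCap :: FilePath -> FileCap
mkFileCap path = FileCap
  { capRead  = readFile path
  , capWrite = Just (writeFile path)
  }

-- Attenuate before delegating: the recipient gets the same object, minus write access.
readOnly :: FileCap -> FileCap
readOnly cap = cap { capWrite = Nothing }
```

A collaborator that is only ever handed the readOnly version simply has nothing in hand that can write, which is the "nouns rather than verbs" point above.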
> Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps.
Linux people are NOT resistant to this. Atomic desktops are picking up momentum and people are screaming for it. Snaps, flatpaks, appimages, etc. are all moving in that direction.
As for plain development, sadly, the OS developers are simply ignoring the people asking. See:
Yet we look at phones, and we see people accepting outrageous permissions for many apps: they might rely on snooping on you for ads, or anything else, and yet the apps sell, and have no problem staying in stores.
So when it's all said and done, I do not expect practical levels of actual isolation to be that great.
> Yet we look at phones, and we see people accepting outrageous permissions for many apps
The data doesn't support the suggestion that this is happening on any mass scale. When Apple made app tracking opt-in rather than opt-out in iOS 14 ("App Tracking Transparency"), 80-90% of users refused to give consent.
It does happen more when users are tricked (dare I say unlawfully defrauded?) into accepting, such as when installing Windows, when launching Edge for the first time, etc. This is why externally-imposed sandboxing is a superior model to Zuck's pinky promises.
In the case of iOS, the choice was to use the app with those permissions or without them, so of course people prefer to not opt-in - why would they?
But when the choice is between using the app with such spyware in it, or not using it at all, people do accept the outrageous permissions the spyware needs.
For all its other problems, App Store review prevents a lot of this: you have to explain why your app needs entitlements A, B and C, and they will reject your update if they don't think your explanation is good enough. It's not a perfect system, but iOS applications don't actually do all that much snooping.
I assumed the primary feature of Flatpak was to make a “universal” package across all Linux platforms. The security side of things seems to be a secondary consideration. I assume that the security aspect is now a much higher priority.
The XDG portal standards being developed to provide permissions to apps (and allow users to manage them), including those installed via Flatpak, will continue to be useful if and when the sandboxing security of Flatpaks are improved. (In fact, having the frontend management part in place is kind of a prerequisite to really enforcing a lot of restrictions on apps, lest they just stop working suddenly.)
Many apps require unnecessarily broad permissions with Flatpak. Unlike Android and iOS apps they weren't designed for environments with limited permissions.
It's truly perverse that, at the same time that desktop systems are trying to lock down what trusted, conventional native apps can and cannot do and/or access, you have the Chrome team pushing out proposals to expand what browsers allow websites to do to the user's file system, like silently/arbitrarily reading and writing to the user's disk—gated only behind a "Are you sure you want to allow this? Y/N"-style dialog that, for extremely good reasons, anyone with any sense about design and interaction has strongly opposed for the last 20+ years.
I intensely hate that a stupid application can modify .bashrc and permanently persist itself.
Sure, in theory, SELinux could prevent this. But seems like an uphill battle if my policies conflict with the distro’s. I’d also have to “absorb” their policies’ mental model first…
I tend to think things like .bashrc or .zshrc are bad ideas anyways. Not that you asked but I think the simpler solution is to have those files be owned by root and not writable by the user. You're probably not modifying them that often anyways.
I'm sure that will contribute to the illusion of security, but in reality the system is thoroughly backdoored on every level from the CPU on up, and everyone knows it.
There is no such thing as computer security, in general, at this point in history.
There's a subtlety that's missing here: if your threat model doesn't include the actors who can access those backdoors, then computer security isn't so bad these days.
That subtlety is important because it explains how the backdoors have snuck in — most people feel safe because they are not targeted, so there's no hue and cry.
The backdoors snuck in because literally everyone is being targeted.
Few people ever see the impact of that themselves or understand the chain of events that brought those impacts about.
And yet, many people perceive a difference between “getting hacked” and “not getting hacked” and believe that certain precautions materially affect whether or not they end up having to deal with a hacking event.
Are they wrong? Do gradations of vulnerability exist? Is there only one threat model, “you’re already screwed and nothing matters”?
I'm sure you're right; however, there is still a distinction between my own state using my device against me and unaffiliated parties or foreign states using it against me, or, more likely, simply to generate cash for themselves.
A distinction without a difference. One mafia is as bad as another. One screws you in the short term, the other screws you in the long term, and much worse.
The problem in both cases is the massive attack surface at every level of the system. Most of these proposals about "security" are just rearranging deckchairs on the Titanic.
If you can't keep a nation state out (and you're referring to your own state, right?) then you can't keep a lone wolf hacker out either, because in either case that's who's doing the work.
It now seems to be best practice to simultaneously keep things updated (to avoid newly discovered vulnerabilities), but also not update them too much (to avoid supply chain attacks). Honestly not sure how I'm meant to action those at the same time.
In the early days, updates quite often made systems less stable, by a demonstrable margin. My dad once turned off all updates on his Windows machine, with the ensuing peril that you can imagine.
Sadly, it feels like Microsoft updates lately have trended back towards being unreliable and even user hostile. It's messed up if you update and can't boot your machine afterwards, but here we are. People are going to turn off automatic updates again.
Unless there's an announcement of a zero day, update a month after each new release. Keeps you on a recent version while giving security systems and researchers time to detect threats.
The easiest way to action this as a user would seem to be local package managers that include something like Dependabot's cooldown config. I'm not aware of any local package managers that do something like this?
Debian stable. If you need something to be on the bleeding edge install it from backports or build from source. But keep most of your system boring and stable. It has worked fine for me for years.
I don't think you understand Debian. There's a new release every two years. A few months before every release there's the so-called package freeze on the testing branch. Whatever version the packages are at that point is the version they will have for the next stable release. Between releases the only updates are security updates.
Do you mean I should worry about the fixed CVEs that are announced and fixed for every other distribution at the same time? Is that the supply-chain attack you're referring to?
You basically need to make a trade-off between 0days and supply chain attacks. Browsers, office suite, media players, archivers, and other programs that are connected to the internet and are handling complex file formats? Update regularly, or at least keep an eye out for CVEs. A text editor, or any other program that doesn't deal with risky data? You're probably fine with auto update turned off
>Using notepad++ (or whatever other program) in a manner that deals with internet content a lot - then updating is the thing.
Disagree. It's hard to screw up a text editor so badly that you have buffer overflows 10 years after it's released, so it's probably safe. It's not impossible, but based on a quick search (though an incomplete one, because Google is filled with articles describing this incident) it doesn't look like there were any vulnerabilities that could be exploited by arbitrary input files. The worst was some dubious vulnerability around being able to plant plugins.
I agree with you regarding particular exploits by arbitrary input files against Notepad++ in particular.
I was trying - poorly, it seems - to make a more general point regarding exposure to the internet, across "whatever other program" too: something like 7-zip, VLC, syncthing, or whatever other open source tools you may like, and how the way you use them exposes you to the possibility of attack.
I.e. if you are interacting with "the wild west of the internet", then the balance of update/not-update shifts more towards update. But if not, then the balance shifts towards not-update.
But you are correct that either way it depends on the program in particular.
Supply chain attacks have an impact on more systems, so it's more likely that your system is one of them. Opening a poisoned text file that contains an exploit targeting your text editor and exactly matching your version is a rare event, compared to automatically contacting a server for an executable to run, without you ever being asked.
The irony is that karma posts are so easy. Take something most of your audience already agrees with, triple down on some reductionist caricature of it, and smother it in pithy glibness. The shorter the better. Particularly effective if you set up a false dichotomy vis-a-vis the person you're replying to. It's a reflexive style of engagement for many, and HN is not immune to it.
I aim to avoid it these days, with varying degrees of success. I don't need fictitious internet points, I want to hear other people's genuine thoughts on a subject of interest. Or sometimes just to share something I thought was neat.
But since all social media are Pavlovian conditioning for points, you rarely get any fruitful exchange. And it seems to be getting rarer and rarer, sadly.
I wonder how one would structure social media to avoid it. HN is good, but the karma system is a double edged sword. Would it increase the quality of the discussion to retain the use of points for ranking posts, but hide point counts completely? Perhaps they could be represented by words: "Positive response", "negative response", but only past -3 and +3, with no changes in wording beyond that score?
Wrt my own posts I like the karma system as feedback for how well I'm getting my point across. Helps to understand what communication style resonates with people. I'd say the biggest flaw is not that it rewards snarky popular opinions, but that it overly rewards first movers on a topic.
I do think that pithy is good. The real world also rewards people who can convey an idea succinctly. ("Healthcare for all" for example is an effective rallying cry despite lack of implementation details.)
If it were an effective rallying cry, it would have worked at any point in the last forty years.
Politics is not assessed in terms of how the slogans sound, but what they achieve. Universal healthcare is further away today than it was in the '90s, and Democrats are less 'rallied' than ever.
Interestingly, the -fere in interfere comes from the Latin ferīre, meaning 'to hit', 'to strike', etc. My first guess would have been something like facere/fāre or -fer, but that quickly falls apart on reflection (to do across? between-bearer?).
Inter + ferire = to strike one another. Makes sense.
Bonus point: the aforementioned -fer ('bearer', like conifer or aquifer) is distantly related to ferīre, as it is to English to bear, Greek phérō ('to carry'), Slavic brat ('to take'), Sanskrit bhárati ('to carry'), etc. I suppose ferīre itself must be the result of semantic drift along the lines of 'to carry/bear' -> 'to bring forth [blows]' -> 'to strike/hit'.
> Inter + ferire = to strike one another. Makes sense.
I guess, but I don't really think of interfering as a mutual thing. I see interfere more like intervene or interpose, where the subject of the verb inserts himself between two other things. (As, in the example above, "my" neighbor places himself into the middle of the relationship between me and my television.)
If I'm interfering with you, it is not necessarily the case that you are also interfering with me. And it certainly couldn't be said that "we are interfering [end of sentence]" in the same way that it could be said "we are fighting".
The use of with to mark an indirect object does tend to suggest that the sense of the verb was more mutual at an earlier point, though.