You can then do however much research you feel is appropriate whenever a particularly privileged extension gets updated (check for transfers of ownership, etc.)
- brave://flags/#brave-extension-network-blocking
You can then create custom rules to filter extension traffic under brave://settings/shields/filters
e.g.:
! Obsidian Web
! Block all network traffic originating from the extension...
*$domain=edoacekkjanmingkbkgjndndibhkegad
! ...except requests to localhost
@@||127.0.0.1^$domain=edoacekkjanmingkbkgjndndibhkegad
- Clone the GitHub repo, do a security audit with Claude Code, build from source, update manually
> Clone the GitHub repo, … build from source, update manually
I’d be OK doing that once per extension, but then I’ve got multiple PCs (m), multiple browser profiles (p), OS reimages (r), and each locally installed extension (e) doesn’t sync. Manually re-installing local extensions
m × p × r × e
times is too much for me. :-( (And that’s even if I’m only running Chrome, as opposed to multiple browsers or Chromium derivatives.)
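To make that multiplication concrete, here's a quick back-of-the-envelope calculation; all the counts are made up purely for illustration:

```python
# How many manual installs does "build from source" imply?
machines = 3      # m: PCs
profiles = 2      # p: browser profiles per machine
reimages = 2      # r: OS reimages over the extension's lifetime
extensions = 10   # e: locally installed extensions

manual_installs = machines * profiles * reimages * extensions
print(manual_installs)  # 120 installs to redo by hand
```

Even with these modest numbers it's triple digits, and every new machine or reimage multiplies it again.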
Yeah, that one's too much for me too. I used to do this years ago, but not anymore, especially since I found out Brave supports network blocking for extensions, which is something you generally set up once and then forget about. I'm just giving people tools and ideas I didn't see mentioned elsewhere in the comments; it's up to everyone to figure out their particular threat scenarios and tradeoffs individually.
This could probably be automated though if someone wanted to tackle it. git pull, agentic code review, auto-build from source, install.
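A minimal sketch of what that loop might look like, assuming a hypothetical agentic-review CLI step (the `claude -p` invocation and the `make build` command are placeholders to swap for your actual tools, and the browser-specific "load unpacked extension" step is omitted):

```python
import shlex
import subprocess

def build_pipeline(repo_dir, build_cmd=("make", "build")):
    """One update cycle as a list of commands: pull, agentic review, build."""
    return [
        ["git", "-C", repo_dir, "pull", "--ff-only"],
        # Placeholder agentic-review step; point it at the recent changes.
        ["claude", "-p",
         f"Security-review the changes just pulled into {repo_dir}; "
         "group findings by severity and likelihood of malicious intent."],
        list(build_cmd),
    ]

def run_pipeline(steps):
    # check=True stops at the first failing step, so a review that
    # exits non-zero blocks the build.
    for cmd in steps:
        subprocess.run(cmd, check=True)

# Dry run: print the commands instead of executing them.
for cmd in build_pipeline("~/src/some-extension"):
    print(shlex.join(cmd))
```

The weak link is getting the review step to fail loudly (non-zero exit) on a genuine finding rather than burying it in prose, which circles back to the prompt-engineering point below.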
I don't know, but if there were, I wouldn't expect them to do anywhere near as good a job or – perhaps somewhat counterintuitively – be anywhere near as reliable. Static rules only go so far when it comes to this stuff. And assuming that you're starting from a trustworthy base, and Claude Code (or similar) can focus its attention on recent changes to the repo in particular, I imagine sneaking actual malware in there would be pretty hard without throwing up a bunch of red flags.
EDIT: The main challenge here is more likely to be the noise, as the LLM is more likely to flag too much than too little, so I'd recommend putting together a prompt that has it group whatever it finds by severity and likelihood of malicious intent.
EDIT 2: Re Anthropic link above – worth pointing out that finding intentionally introduced malware when you have access to the source code and git history is a hell of a lot easier than finding a 0-day. The malware has to exfil data eventually or do ransomware stuff; good luck hiding that without raising the alarm, and any attempt at aggressive obfuscation will raise the alarm on its own. I'm not saying it's impossible, I'm saying I think it's very, very hard.
Thank you! "Dead simple and super effective" is pretty much exactly what I was going for, so this is great to hear. If you ever find yourself using `pprint` debugging for any reason other than having forgotten that Playback exists, I would consider this a failure on my part and would very much appreciate an open issue or a message on the Clojurians Slack.
This is spot-on. In a perfect world every major OS would have proper, granular mandatory access control enabled by default and applications would come with a profile specifying precisely which resources they require – at least regarding the more critical stuff like keys and cookies – with attempts to access anything else triggering an optional notification. Hopefully macOS will become more granular that way and Apple will continue pushing and improving what they began with Catalina.
Meanwhile, in a less than perfect world there's XFENCE [0], previously known as LittleFlocker. It's basically LittleSnitch for files. It was originally developed by Jonathan Zdziarski and later sold to F-Secure.
The challenge is to set it up in such a way that the level of interaction is kept to a minimum while still providing a reasonable level of protection.
I might write a detailed blog post / howto about it, but meanwhile here's the TL;DR if someone wants to try this blacklist/greylist approach:
1. Set an 'Allow any app – rwc' rule for /Users to override the default 'Watch – rw' rule there, which would otherwise result in a ton of popups. This does not override the more specific watch rules for some critical resources like loginitems, etc.
2. Add watch rules for additional critical resources, like ~/.gnupg, ~/.ssh, ~/bin, possible password manager directories, Firefox/Chrome directories to prevent cookie extraction, etc.
3. Temporarily add a watch rwc rule for ~/, thus overriding the Allow rule for /Users.
4. Run any network connected software with a potentially large attack surface like browsers, torrent clients, vpn clients, etc. and give them the required permissions to your home directory using the popups. Make sure to put them through their paces in terms of file system access to cover all possible use cases.
5. When they are usable without any more popups, remove the temporary watch rule and add 'Deny rwc to /Users' rules for each one, thus overriding the general Allow rule for /Users we created above. An application-specific watch rule would be nice here instead, but sadly that doesn't seem to be possible – watch rules apply to all applications.
Execute steps 3–5 for any other untrusted software you might want to install/run.
When combined with LittleSnitch to catch possible attempts at data extraction, this reduces the risk of rogue applications extracting/damaging critical data and limits the potential damage of possible RCE vulnerabilities in network connected software. And it does this with a minimum of interaction – after the initial setup phase.
I've been running LittleFlocker/XFENCE for a couple of years now, and the setup described above for maybe a year, and it works like a charm: currently on Mojave, previously High Sierra, all the way back to El Capitan, if memory serves.
A whitelist approach would of course be more secure, but that's way too stressful and distracting for me.
I love the clojure community efforts, but clojure.spec is so confusing and bloat-y to me. :(
It reminds me of Frama-C and specification of C programs, which isn't exactly what I would want to do all the time to have some safety guarantees on my program. I feel that a strong type system would provide way more benefits.
Much as I love Clojure, I agree that clojure.spec, while very powerful, has a mess of a UX. That's why – shameless plug – I wrote Ghostwheel [1], which, to me, turns it into a whole other thing, especially when gen-testing, spec-instrumentation and tracing are used together.
Having inferred types in addition to this would be even better, but types are no replacement for generative testing or vice versa.
They really complement each other quite nicely, but also have a large overlap in terms of how much they can reduce the need to do manual debugging and enable clean changes/refactorings with minimal effort.
Author of Ghostwheel here – clojure.spec is certainly not a replacement for static typing, but it goes a long way to covering many of the same use cases, in fact longer than one might think at a cursory glance.
With Ghostwheel you write your function specs much like you'd write a type signature, and you get automatic generative testing (including higher-order function support) and side effect detection, which – when combined with spec instrumentation (plus the upcoming evaluation tracing for the test execution) – can often tell you quite precisely where you screwed up, in a much more immediate and granular manner than a simple unit test or mucking about in the REPL could. It really is quite a different experience from plain Clojure.
That being said, I'd love types in addition to this and I'm keeping a keen eye on ReasonML.
While that is true of the basic building blocks of the language, clojure.spec, for example, does have an overly verbose syntax for function specs (I would define it that way, even though it doesn't introduce new basic syntax elements), and that was one of the main motivations for writing Ghostwheel.
I wouldn't see this as some sort of development in Clojure though and I don't really know if that's what cutler meant anyway. Maybe he'll elaborate.
The font in the screenshots with the ligatures is in fact Fira Code and the font in the REPL screenshots without ligatures is Ubuntu Mono.
I guess those could be considered additional syntax, but they're also completely optional and interchangeable with the corresponding, more Clojure-like keywords.
Yeah, I wouldn't consider them syntax, either. There's a lot we can do to make our coding environments easier to work in, and that's independent of the source code itself. I'm glad to see more exploration in this space.
- https://github.com/beaufortfrancois/extensions-update-notifi...