Hacker News | thatsaguy's comments

What I take from this, though, is that EVs might be subject to more stringent regulation, at least for charging, in the future.

At least in the EU, cars with a compressed fuel tank (hydrogen, LPG or methane) cannot be parked in a closed or underground parking lot, for one safety reason: risk of explosion. The risk is pretty low, and if you have such a tank you're also subject to scheduled inspections and replacement of the tank, but you still cannot park underground. Not many people are aware of or respect this limit, since it's not easy to tell whether a car has a dual-fuel option.

Now imagine a lithium fire in a garage (maybe... your garage?).


> Now imagine a lithium fire in a garage

Much less scary than gas leaking, filling part of a large underground parking garage, then getting ignited and collapsing the entire multi-story residential building built on top of the garage.

Also, while looking for examples, I learned that compressed gas (hydrogen, methane) is much less of a problem because it's lighter than air, so it doesn't accumulate as much, which means these cars are sometimes allowed. LPG, being heavier, sticks around near the ground, also posing a suffocation risk.


I'm not sure I'd dismiss the risk so easily. Burn temperatures reaching 600°C, vented electrolytes causing smaller burst explosions nearby, the difficulty of suppressing the fire, other cars full of flammable materials close by, and the toxicity of some formulations?


LPG cars are legal to park in garages throughout most of the EU. Belgium and France have some limitations (local regulations apply / a safety valve is required).


Portugal had that limitation up until the point where installations started to be tightly regulated; I suspect the same holds for the rest of the EU.


Incidentally, SSDs also benefit from read and write locality. Although there's no seek penalty, there's still a large benefit from dispatching multiple reads and writes from/to the same cell. You get those for free by trying to minimize "seeks", although the underlying logic behind this optimization is simpler.
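The effect is easy to picture: sorting pending requests groups offsets that fall in the same block, so they can be dispatched together. A toy Python sketch (the block size and names are illustrative, not tied to any real device):

```python
# Toy illustration: bucket pending read offsets by block, so reads that
# hit the same block/cell can be served together (sizes are made up).
BLOCK_SIZE = 4096

def group_reads(offsets):
    """Sort offsets and group them by the block they fall into."""
    groups = {}
    for off in sorted(offsets):
        groups.setdefault(off // BLOCK_SIZE, []).append(off)
    return groups

# offsets 50 and 100 share block 0; 8200 and 8300 share block 2
batches = group_reads([8200, 100, 4100, 8300, 50])
```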


Sixel is nice (and I use it all the time), but we do have better nowadays.

iTerm's image handling [1] is superior: it simply encodes a modern, more efficient image format and sends it to the terminal to be decoded. PNG was always at least as efficient as sixel for indexed colormaps, and you can send JPEGs for drastically more efficient true-color images.

The OSC 1337 protocol is not unique to iTerm: mlterm also supports it, and I remember a few others as well. You can use "ranger" to get image previews on a remote server without jumping through hoops.

[1] https://iterm2.com/documentation-images.html
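For a sense of how simple the protocol is, here is a minimal Python sketch of the escape sequence as documented at [1] (the `inline` and `size` arguments come from that page; error handling is omitted):

```python
# Build an iTerm2 OSC 1337 inline-image sequence:
#   ESC ] 1337 ; File = args : <base64 data> BEL
import base64

def osc1337_image(data: bytes) -> str:
    payload = base64.b64encode(data).decode("ascii")
    return f"\033]1337;File=inline=1;size={len(data)}:{payload}\a"

# Usage: print(osc1337_image(open("photo.png", "rb").read()), end="")
```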


Some terminal emulators support specific formats, but sixel's strength is that I don't have to install those specific emulators.

Also, what if I use an OS where they don't work? (Windows 10?)


I have nothing against sixel, of course. But to put things in perspective, sixel support by itself is not great: definitely more widespread than iTerm's, but not universal.

But when developing a new program that wants to output graphics to the terminal, OSC 1337 is dead easy (you'll likely not need any new dependency or image-handling routines), while for sixel you'd probably need libsixel or have to write your own encoder.

As a developer, I lean strongly towards iTerm's handling (and, to be honest, as a user as well; PNG encoding is faster too!).


I've been using FF with resistFingerprinting on since it became available. Letterboxing does break a lot of websites and apps, sometimes making them unusable due to incorrect positioning and scaling of elements.


I don't know about other servers, but on the systems I manage, Gmail itself has been the #1 spam source for at least three years. If you look at my previous posts, I commented on this two days ago.

There's literally _zero_ value in DNSBL/DKIM/SPF because of this. Email sent from reputable sources is actually _more_ likely to be spam than email from small mail servers.

Content-based filters are the only thing that work against this sort of spam.


Following your logic, his idea wouldn't work, and he wouldn't make $115k a month. But clearly it does [if he isn't lying], so I'm not following.


I'm not contradicting his assertions. I actually confirm them: there is now a lot of spam coming from gmail. He's clearly one of the reasons.


Address reputation was always big, even back then.

In recent years, though, I can't really recommend using any of the DNSBLs anymore. I've encountered more cases where legitimate servers were blocked due to netblock vicinity, or indeed previous ownership, than actual spam caught.

Greylisting will still catch dynamic allocations almost as effectively, while you won't reject legitimate mail due to server and/or DNSBL issues.


Have you tried using a quorum of DNSBL (e.g. barracudacentral.org, cbl.abuseat.org, truncate.gbudb.net) to reduce false positives?

In other words, if at least two DNSBL queries agree, then reject, or feed this information to the rest of the spam pipeline?
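A minimal sketch of that quorum idea, assuming plain A-record lookups against the zones named above (the exact query zones may differ from these display names, and real setups would add timeouts and caching):

```python
import socket

# A DNSBL is queried by reversing the IP's octets, appending the zone,
# and doing an ordinary A lookup; any answer means "listed".
ZONES = ["barracudacentral.org", "cbl.abuseat.org", "truncate.gbudb.net"]

def dnsbl_name(ip: str, zone: str) -> str:
    """1.2.3.4 -> 4.3.2.1.zone"""
    return ".".join(reversed(ip.split("."))) + "." + zone

def listed(ip: str, zone: str) -> bool:
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True                      # any A record means "listed"
    except socket.gaierror:
        return False

def quorum_reject(ip: str, threshold: int = 2) -> bool:
    """Reject only if at least `threshold` lists agree."""
    return sum(listed(ip, z) for z in ZONES) >= threshold
```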


This is pretty common, and systems like SpamAssassin do this for you by batching responses and computing a score.

I found this to be pretty much worthless if you already have greylisting, even for high-quality curated lists such as Spamhaus SBL/XBL.


From my perspective (~500 employee mail server), greylisting had a much larger impact at the time, thanks to the spambots/viruses attempting direct connection to mail servers. Extremely effective, zero false positives, much lighter on resources. I did use both, of course, so that I could keep a record of how effective the systems were.
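For context, the core of greylisting fits in a few lines; a toy in-memory sketch (real implementations like Postgrey persist and expire the triplets; the delay and reply codes here are illustrative):

```python
import time

seen = {}      # (ip, sender, recipient) -> timestamp of first attempt
DELAY = 300    # require the retry to come at least 5 minutes later

def check(ip, sender, recipient, now=None):
    """Temp-fail the first attempt for an unseen triplet; accept retries.
    Legitimate MTAs retry after a delay, most spambots never do."""
    now = time.time() if now is None else now
    first = seen.setdefault((ip, sender, recipient), now)
    return "450 try again later" if now - first < DELAY else "250 ok"
```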

Today the situation has flipped. Most of the spam we get comes from authoritative servers (i.e. Gmail, Yahoo, etc.), making SPF/DKIM/etc. next to worthless from a spam perspective (still marginally useful against forgeries), while Bayesian (or, in general, trainable) filters are essentially the only thing that can differentiate it reliably.

With a modern setup, you can get basically next to zero spam and no false positives. In fact, honest email marketing (i.e. mailing lists you've actually subscribed to) is, in my experience, the only thing that throws these filters off.


Thanks. One thing we also found is that spammers tend to be poor at following RFC standards, in ways that Gmail etc. will happily accept, but which are obviously broken.

For example, we use our own https://github.com/ronomon/mime to detect and reject email with missing multi-parts (no terminating boundary delimiter). All of this has been spam so far, and we have yet to see a false positive. I don't think SpamAssassin has a rule for this (yet)?

Another example is illegal header characters, which are almost always spam, with a handful of false positives (usually machine-generated).
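Both checks are simple to approximate; a hedged sketch of the two ideas (not the ronomon/mime implementation, just the same principle):

```python
import re

def missing_terminator(body: bytes, boundary: bytes) -> bool:
    """Multipart bodies must end with the '--boundary--' close delimiter."""
    return b"--" + boundary + b"--" not in body

def illegal_header_chars(headers: bytes) -> bool:
    """Flag bytes outside printable ASCII (plus TAB/CR/LF) in headers."""
    return re.search(rb"[^\t\r\n\x20-\x7e]", headers) is not None
```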


That is an interesting approach. Care to let us know how you go from https://github.com/ronomon/mime to some kind of SMTP server plugin (for Postfix, for example)?


Thanks, you might find Haraka to be easiest since it's already Javascript.

Postfix may require a process callout; you might need to write a milter.


I agree that greylisting was the cat's meow back then. I set up a VM running CentOS with Postfix and Postgrey as an MTA for our "work" email server, and the result was a massive reduction in spam.


But do these operators actually behave deterministically? Does the page actually contain a match for my terms? In my experience, this has become less and less true.

The first obvious mistake in the list is that Google hasn't defaulted to AND in ages. A list of two terms will match some random combination: one of them, maybe both, and occasionally neither (and no, this doesn't happen due to fetch/indexing lag, stemming, or autocorrect).

To get truly inclusive searches you need to quote the terms individually. People keep pointing at verbatim search, but verbatim performs phrase search, which is not what you want in most cases.

DDG suffers from the same problem. I curse them both. I've used some JS to quote individual terms before submitting the search, to get useful results back for technical terms.
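The trick translates to a couple of lines (the comment above mentions JS; here is the same idea in Python, with the search URL as an assumption of this sketch):

```python
from urllib.parse import quote_plus

def strict_query(terms):
    """Quote each term individually: required terms, not one phrase search."""
    return " ".join(f'"{t}"' for t in terms)

def search_url(terms):
    return "https://www.google.com/search?q=" + quote_plus(strict_query(terms))
```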

But really, the number of times I now get pages which do not contain the exact terms I'm looking for is subjectively increasing.


I also second PragmataPro; I've been using it for 5+ years now. The ligatures are fully optional, and the font comes in several variants for software that doesn't allow selecting OpenType features.

It's a slab serif, condensed. You have to like the style. Some people prefer the opposite (wide with ample interline spacing).

It packs a lot of columns onto the screen while remaining perfectly readable, and basically every glyph is hand-tuned for small pixel sizes. There's not much else designed with that amount of care.

The closest font on a stylistic basis I've seen is Iosevka.


Fusion 360 has the better pricing model, it's that simple. $500 is not too expensive even for a hobbyist: you'll likely spend more on consumables in a single year. And there's simply nothing else of that value at that price point. There are no OSS solutions that can offer reasonable CAD design yet.

I'm not a fan of Fusion: its UI is not well designed. It's slow and clunky. It forces the "cloud" down your throat for ABSOLUTELY NO GOOD REASON. Seriously: wtf? I frequently need to work offline, and Fusion doesn't really get out of my way.

Onshape, which is 100% web based, is better in several ways: it's actually faster to boot, which is scary! (Not in everything, but for most things it is.) It starts in a tenth of the time. The interface is much more streamlined and efficient to work with. For my purposes, Onshape is only lacking in "variable" management (there's no substitute for a simple spreadsheet here; even FreeCAD is superior in this regard).

Am I recommending Onshape? No. I hate cloud-based solutions. The lock-in is absolutely obvious here: if you're offline you're doomed. Import/export is read-only.

However, for ~$500 a year, Fusion gets you a complete CAD/CAM solution with decent CFD. Onshape starts at $1,000+ and only gets you a (good) CAD system, and you still need to pay more for CAM.

If you're starting, Fusion is the better deal. It's that simple.

I'm not sure why Onshape is squandering the opportunity to promote their solution here. Their CAD is good, but not good enough to justify the gap with Fusion's offer. They could capture a nice maker segment if they provided a slightly cheaper tier.

Now, why not FreeCAD? The problem I see is that FreeCAD is still quite a bit behind for day-to-day work. I use it for toy projects, and it's pretty limiting by itself. I've written before that FreeCAD really starts to shine only when combined with CadQuery or OpenSCAD: with parametric sketching, "visual" interfaces only get you so far, and CadQuery gives FreeCAD a considerable edge for complex designs.

The problem still stands, though: once you start with 3D printing, you realize consumables and electricity are not zero-cost anymore. 3D printing, and hardware design in general, is expensive. Investing some money into 3D design tools is logical, but you want something that works reasonably for your hardware. It's a chicken-and-egg problem. I'm a developer; there's no way I could work on a 3D CAD the size of FreeCAD in my spare time and get anywhere useful.

All that being said, FreeCAD 0.17 surpassed SolveSpace for all my purposes this year, which is a great achievement. At some point FreeCAD will become viable enough and will in turn start attracting enough money to staff full-time people.

