Hacker News | badsectoracula's comments

Some time ago I wondered whether the common "me at foobar dot com" trick you still see a lot of people use actually helps at all, especially now with LLMs, so I searched for some common "obfuscation" techniques and found this site (not the 2026 update but the previous one - this was a few months ago). Then I wrote a simple LLM query with a bunch of examples from the site[0] (the tool is just a frontend for a command-line program that uses llama.cpp and Mistral Small 3.1 in Q4_K_M quantization, since it loads relatively fast and is fine for simple prompts). AFAICT it could reveal anything that didn't rely on CSS tricks or JavaScript.

Like others mentioned, though, I personally haven't been bothered by email harvesting for years now since spam filters seem to do a decent job. I have my email posted in plaintext here (where I bet it is harvested very often) and in various other places, and the occasional spam I get is eclipsed by "spam" from services I've actually signed up for (*cough* LinkedIn *cough*).

[0] https://i.imgur.com/ytYkyQW.png


IMO a better approach would be individualized addresses.

Imagine that someone visiting your blog who wants to e-mail you can burn some CPU cycles to "earn" an address that hasn't been given out to anybody else, e.g. user+TOKEN@example.com, where it is algorithmically unlikely for them to guess a different TOKEN that will work. Then if abuse occurs, you can just retire that one address. (In a non-interactive context, like a paper ad, you could just generate one yourself.)
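A minimal sketch of the server side of such a scheme, using an HMAC so tokens can be verified without storing them. The secret, the `label-token` format, and the 12-hex-digit truncation are all assumptions for illustration, and the proof-of-work "earning" part is omitted:

```python
import hmac
import hashlib

SECRET = b"server-side secret key"  # assumption: kept private by the mail server

def make_address(label: str, user: str = "user", domain: str = "example.com") -> str:
    """Derive a per-recipient address like user+LABEL-TOKEN@example.com.

    The token is an HMAC of a label (e.g. who the address was given to), so
    nobody can guess a second valid token without the secret, and the server
    can recompute it to verify incoming mail without a database of addresses.
    """
    token = hmac.new(SECRET, label.encode(), hashlib.sha256).hexdigest()[:12]
    return f"{user}+{label}-{token}@{domain}"

def is_valid(address: str) -> bool:
    """Check an incoming To: address against the scheme.

    Retiring an abused address would then just mean blacklisting its label.
    """
    local = address.split("@", 1)[0]
    try:
        _, rest = local.split("+", 1)
        label, token = rest.rsplit("-", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, label.encode(), hashlib.sha256).hexdigest()[:12]
    return hmac.compare_digest(token, expected)
```

A client that knows the scheme could generate `make_address("blog-2026")` style addresses on demand; everything else would bounce at `is_valid`.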

Naturally, this would be best with an e-mail client that is aware of the scheme, and with a mail-service that has some API for generating new addresses, such as if you want to cold e-mail somebody and use a new from/return address.

Some years ago I had the fanciful idea of doing it with a phone-app, where it manages creating new addresses as-needed, disabling them, and keeping notes about who you gave them to.


Sounds like a similar approach to this service: https://addy.io/

I use it all the time in conjunction with Bitwarden to generate unique emails per site. You can keep notes on each email, and they show up in a small banner in the forwarded email. And each one is individually disable-able, so you can easily cut it off if you see spam from it.

I was really interested in this space and made my own homegrown tool for this. I used it for a while until I discovered Addy and switched over. IIRC there are similar services by Mozilla, Apple, and Proton.


I would expect an LLM-based scraper to be better at parsing an email address from your instructions than some of the more inattentive people whose emails you might want to receive. So I think some of the dumber mitigation measures from the article, which still block the simple regex bots, are probably a better bet now.
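To illustrate the gap: a hypothetical harvester built on a naive regex (the pattern below is a typical guess, not taken from any real bot) picks up plain addresses but misses even the dumbest "at/dot" spelling, which any LLM or human reads without effort:

```python
import re

# A typical "simple regex bot" pattern: matches plain addresses only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

pages = [
    "contact: alice@example.com",       # plain: harvested by the regex
    "contact: bob at example dot com",  # "dumb" obfuscation: invisible to the regex
]

for page in pages:
    print(EMAIL_RE.findall(page))
# → ['alice@example.com']
# → []
```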


The thing is, Microsoft got its position of dominance exactly because they did that - and that was because by doing this, the users' programs kept working. Remember that users outnumber developers by far and the last thing Microsoft wanted was for people to not upgrade Windows because they broke their previously working programs.

This was even more important at a time when Microsoft had actual competition in the OS space and people weren't able to just go online and download updates.


> The thing is, Microsoft got its position of dominance exactly because they did that

Yeah, right. No bribes, no preinstalled software...

They dominated by ... accident.


Nobody said it was by accident. They had deals with PC manufacturers for decades where licenses were sold based on the number of PCs the manufacturer shipped, regardless of whether a PC ran MSDOS or another OS (like DRDOS), making all other options more expensive unless the client asked for them.

But the important thing is that the clients pretty much never asked for other OSes because all of their software worked on MSDOS - and later Windows. People bought computers to run their software, so if the software they wanted to run needed MSDOS or Windows, they'd buy the machines that ran that OS.

And by extension, if the software they wanted to run wouldn't run on the next version of MSDOS or Windows, they wouldn't have a reason to upgrade MSDOS or Windows. But from a user's perspective MSDOS/Windows was the best choice because everything supported it.

Microsoft didn't rely on backwards compatibility alone (especially since "backwards compatibility" requires something to be compatible with in the first place), but it was an incredibly important part of their strategy.


There was (and still is) VerInstallFile; however, it was introduced in Windows 3.1, and it is possible installers wanted to also support Windows 3.0 (since there wasn't much of a time gap between the two, many programs tried to support both), so they didn't use it.

TBH I think a more likely explanation is that they needed to somehow identify separate instances of that data structure, and they thought to store some ID or other in it so that when they encountered it next they'd be able to do so without keeping copies of all the data in it and comparing their data with the system's.

^^ The voice of experience, here.

FWIW a two-digit amount of MB is usually at least 16MB (though with low hundreds of MHz it was probably at least 32MB, if not 64MB), and most such systems could easily do 1024x768 at 16-bit, 24-bit or 32-bit color. At least my mid-90s PC could :-P (24-bit color specifically; I had some slow Cirrus Logic adapter that stored the framebuffer in triplets of R,G,B, probably to save RAM but at the cost of performance).
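The framebuffer sizes involved are indeed small next to those RAM figures; a quick back-of-the-envelope calculation (assuming tightly packed pixels with no row padding):

```python
def framebuffer_bytes(width: int, height: int, bytes_per_pixel: int) -> int:
    """Raw framebuffer size, assuming packed pixels and no row padding."""
    return width * height * bytes_per_pixel

# 1024x768 with packed 24-bit R,G,B triplets (3 bytes/pixel):
print(framebuffer_bytes(1024, 768, 3))  # 2359296 bytes = 2.25 MiB
# The same mode at 32-bit (4 bytes/pixel) needs 3 MiB; 16-bit needs 1.5 MiB.
print(framebuffer_bytes(1024, 768, 4))  # 3145728 bytes = 3 MiB
```

So even a 16MB machine could afford the 24-bit mode several times over; the packed-triplet layout trades unaligned pixel accesses for that 0.75 MiB saved versus 32-bit.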

> Proprietary software needs a stable ABI.

Open source software also needs a stable ABI because:

a) I don't want to bother building it over and over (not everything is in my distro's repository, a ton of software has a stupid build process, and not every new version is better than the old one)

b) a stable ABI implies a stable API, and even if you have the source, it is a massive PITA to have to fix whatever the program's dependencies broke just to get it running, especially if you're not the developer who wrote it in the first place

c) as an extension of "b", a stable API also means more widely spread information/knowledge about it (people won't have to waste time learning how to do the same tasks in a slightly different way with a different API), and thus it's much easier for people to contribute to software that uses that API


There is also UMU Launcher[0], which is basically all of that without the Steam integration/dependencies, so you can run games from GOG and other stores (it is a command-line tool, but launchers like Heroic can use it behind the scenes). I used to install dxvk etc. manually, but in recent months I switched to it as it tends to work much more seamlessly for games (I did disable its autoupdates though).

[0] https://github.com/Open-Wine-Components/umu-launcher


Wine devs do not want to work with people who have looked at ReactOS[0] (see the end of the linked page), so any collaboration is one-way (or happens by ignoring the guidelines), and the likelihood of the two projects merging is zero.

[0] https://gitlab.winehq.org/wine/wine/-/wikis/Clean-Room-Guide...


Surprised no one responded to the 7th comment in that linked email thread; the author brought up a good point about making progress without using any disassembled Windows binaries.

> One could argue DRI bowed out too soon. Then again, it’s questionable whether it would have won against Windows anyway. Microsoft was the larger company and had OEM agreements with all of the major PC makers.

Well, it is also that Windows, even at version 1.0, was much more capable than GEM: better documentation, better tools, a better API and better functionality.

GEM was really just a shell over DOS, and applications were actually DOS programs that called a special interrupt handler to make API calls. While this allowed any language that could produce EXE files and call interrupts to be used to write GEM apps, it also meant that GEM inherited all the limitations of DOS, like the inability to run multiple applications at the same time (DR did eventually make GEM/XM, which allowed switching between applications, but it was still only one application active at any given time). Windows, meanwhile, not only could run multiple applications, it also had a software-based virtual memory system that allowed applications to swap both data and code in and out to fit in the available memory (this required custom compiler support, so unlike with GEM you couldn't use any old compiler, but on the other hand you could write more complex applications).

The GEM API was also very barebones: you could create windows, but all you could do with them was draw inside. Dialog boxes were a completely separate thing that could take a tree of "objects" to draw inside them, but even then the functionality was limited (the object types are hardcoded, and while there is a "custom" type, all it does is provide a callback for drawing). You could work around some of the limitations by implementing the functionality yourself - for example, there is a call to draw an object tree (object trees are actually a flat array of fixed-size structures where the first three fields hold 16-bit indices into the tree for each object; this probably saved some bytes of memory at the cost of flexibility, and TBH the extra code needed to work with the tree probably ate back those saved bytes, if not made things worse) - so I think (I never tried it) you could draw buttons, etc. in a window when you receive the WM_REDRAW message, but there is no event message propagation.

Meanwhile on Windows everything is a "window" in a window tree with a consistent approach to how things are handled. On GEM everything is a special case.

I get the impression that the GEM developers basically had some idea of what their desktop would look like and implemented the functionality to do just that and nothing else, with little room for flexibility or later expansion.

EDIT: also the graphics functionality was very limited, e.g. with hardcoded colors. Here are some GEM API docs in case anyone is interested:

https://www.seasip.info/Gem/vdi.html (low level API, draw graphics, input devices, etc)

https://www.seasip.info/Gem/aes.html (relatively high level API, make windows, define dialogs, messages, etc)

https://www.seasip.info/Gem/aestruct.html (some structures)

https://www.seasip.info/Gem/aesmsg.html (event message types)


Atari ST user here. AES is, as you say, rather bare bones. In some ways it's more analogous to the X Window System's "Xt" Intrinsics than to any "widget" toolkit, in that it gives you the facilities for constructing trees of drawn objects, registering applications, communicating between applications, receiving events, opening windows, redrawing windows, etc., but for actual active widgets it only provides premade alerts, dialogs, windows, menubars, and a file selector. Those pieces are in fact made from object trees like you say. So yes, with your own event handling you absolutely could write your own widgets directly into the window by drawing an object tree, and more savvy developers did that.

I suspect there was maybe an intent to eventually build something higher level on top of it; that just never happened, or was never standardized. There are in fact some C-level libraries for the Atari ST that do exactly that, but they're more recent inventions.

It's not a bad architecture, just incomplete. It wasn't aiming for -- nor would they have had the budget to build -- the same space as MS Windows, in that it wasn't a full and complete environment in that way. Even on the Atari ST, where they controlled the whole stack instead of being hosted on top of MS-DOS.


I wrote a hypertext system that created dialog boxes on the fly and used the callback for the custom object type to implement links.

You can do a lot of stuff with the system as it is, since it does expose a lot of its internals (and when you need to replicate functionality, there isn't that much in there to replicate, so it is perfectly doable), but my point is that it wasn't as flexible or capable as Windows 1.0.

It wasn't just Microsoft's marketing skills that made Windows overshadow GEM, it was also that Windows was genuinely a better product - both from a technical and a functional perspective.


> The GCWZero was a MIPS console too

There have been a couple of GCWZero clones in more recent years (e.g. from Anbernic) running the same (or a derivative) Linux-based OS with the JZ4770 MIPS SoC and software compatibility. Too bad Ingenic never released a successor to the SoC though.

