Absolutely fantastic. I actually laughed out loud a few times.
My only suggestion is to make the shuffle animation shorter. At first I thought you were actually doing some server work when I clicked it, and got concerned.
Also if you sell these in real life I would buy them.
For tracking military ships it's much better to use radar imaging satellites (e.g. see [0]). They can cover a larger area, see ships very well, and are almost unaffected by weather.
I wouldn't be surprised if China has a constellation of such satellites to track US carriers, and that it's why the Pentagon keeps them relatively far from Iran: it's likely that China confidentially shares targeting information with them.
China has Huanjing [0], which is officially for "environmental monitoring" but almost certainly has enough resolution to track large ships (at least in the later versions; apparently the early ones had poor resolution).
And even if it didn't, Russia has Kondor [1], which is explicitly military, and we know they have been sharing data with Iran.
Strava tracks can also be spoofed, and there's no guarantee they appear on a schedule either.
I just find this to be on the sensationalist side of "data" journalism, lacking any sort of contextualization or threat-level assessment.
Unless there is evidence of more sensitive locations that haven't been published alongside this story, it looks like a seriously unserious piece of journalism to me.
Heh, establishing an "opsec failure guy" on the boat with software on his Garmin that can be activated on days with special secrecy demands to translate his runs to a plausible fake location? I like that idea. It would actually fit a one-off like the Charles de Gaulle quite nicely!
Clouds only affect a narrow range of the electromagnetic spectrum. Plenty of satellite constellations use synthetic aperture radar, for example, which can see ships regardless of cloud cover. There are gaps in revisit rates, especially over the ocean, but even that has come way down.
I think that's pretty unlikely. The nuke sites are very far inland, and it's impractical from a political point of view to put that many troops that far inland.
I think it's more likely they'd seize Kharg Island. It has direct strategic value (~95% of oil exports) and is a much simpler target. That's why they bombed it today: shaping operations.
One of the scary things is that not even this really works. Ignoring supply-chain attacks, most people treat any client as an effective black box. When was the last time you read through the code of a messaging app? How do you know it's safe? Maybe _you_ read through it, but 99% of people don't.
I run LibreWolf, which is configured to ask me before a site can use WebGL, which is commonly used for fingerprinting. I got the popup on this site, so I assume that's how they're doing it.
"Available to userspace" is a much different thing than "available to every website that wants it, even in private mode".
I too was a little surprised by this. My browser (Vivaldi) makes a big deal about how privacy-conscious it is, but apparently browser fingerprinting is not on their radar.
This is pretty standard. Usually the conditions are performance benchmarks, but they may also include an IPO. Typically it's done in multiple tranches, e.g. 15B at the start, 5 more if you gain 500M users, 5 more if your profit exceeds X, and the rest at IPO (I'm oversimplifying).
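As a rough sketch of that tranche mechanism (all figures and milestone names here are invented for illustration, not from any actual term sheet):

```python
# Hypothetical milestone-based tranche schedule; every number is made up.
TRANCHES = [
    ("signing",       15_000_000_000, lambda m: True),
    ("user growth",    5_000_000_000, lambda m: m["new_users"] >= 500_000_000),
    ("profitability",  5_000_000_000, lambda m: m["profit"] >= 10_000_000_000),
    ("IPO",            5_000_000_000, lambda m: m["ipo_completed"]),
]

def released_funding(metrics):
    """Sum the tranches whose milestone conditions are currently met."""
    return sum(amount for _, amount, met in TRANCHES if met(metrics))

# 600M new users unlocks the growth tranche, but profit and IPO are unmet:
metrics = {"new_users": 600_000_000, "profit": 2_000_000_000, "ipo_completed": False}
print(released_funding(metrics))  # → 20000000000 (signing + growth)
```

The point is just that each tranche is an independent condition check; the investor never has to claw anything back, they simply don't release the next slice.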
So they caught two very small cases, and both were literally posting publicly about how they were doing a thing against the rules. Seems like they can't actually do anything if the culprits aren't total idiots?
Also, what is this 5x payment penalty? What mechanism do they have to enforce it?
Do you really think a $30 Hetzner host can sustain that level of traffic performantly? Don't get me wrong, I love Hetzner, but I would be very surprised if the numbers work out there.
It shouldn't. The issue is that most developers would rather spin up another instance of their server than solve the performance issue in their code, so now it's a common belief that computers are really slow to serve content.
And we are talking about static content. You will be bottlenecked by bandwidth before you are ever bottlenecked by your laptop.
To be fair, computers are slow if you intentionally rent slow and overpriced ones from really poor-value vendors like cloud providers. People who started their careers in this madness may be genuinely unaware of how fast modern hardware has become.
With a 2025 tech stack, yes. With a 2005 tech stack, no. Don't use any containers, no (or only limited) server-side dynamic scripting languages, no microservices, or anything like that.
Considering the content is essentially static, this is actually viable. Search functions might be a bit problematic, but that's a solvable problem.
Of course you pay with engineering skills and resources.
Is there any feasible way to implement search client-side on a database of this scale?
I guess you would need some sort of search-term-to-document-id mapping that gets downloaded to the browser, but maybe there's something more efficient than trying to figure out in advance what everyone might be searching for?
And how would you do searching for phrases or substrings? I've no idea whether that's doable without a server-side database that has the whole document store to search through.
I think the key thing here is the context and size; the searchable content of even a lot of e-mails is quite dense and small. I'm not a search expert but I'd look at precalculated indexes on very short substrings (3-4 characters maybe?), have the client pull those it needs for a particular query and then process client-side from there. (Doesn't even need figuring out in advance what people will search for, though that'd certainly improve things.)
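A minimal sketch of that precomputed short-substring idea, using a trigram inverted index (names and the toy corpus are invented; a real deployment would shard and compress the postings so the client only fetches the lists it needs):

```python
from collections import defaultdict

def trigrams(text):
    """Break text into its set of overlapping 3-character substrings."""
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)}

def build_index(docs):
    """Map each trigram to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for gram in trigrams(text):
            index[gram].add(doc_id)
    return index

def search(index, docs, query):
    """Intersect trigram postings, then verify candidates with a real substring check."""
    grams = trigrams(query)
    if not grams:
        return set()
    candidates = set.intersection(*(index.get(g, set()) for g in grams))
    # Trigram intersection can over-match, so confirm each candidate.
    return {d for d in candidates if query.lower() in docs[d].lower()}

docs = {1: "Meeting about oil exports", 2: "Export schedule update", 3: "Lunch plans"}
index = build_index(docs)
print(sorted(search(index, docs, "export")))  # → [1, 2]
```

This handles substring and phrase queries with no advance guessing about query terms: the client pulls only the postings lists for the query's trigrams, and the final verification pass runs locally.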
Theoretically, just thinking about the problem... you could probably embrace offline-first and sync to IndexedDB? After that, search would become simple to query. It obviously comes with its own challenges, depending on your user base (e.g. not a good idea if it's only a temporary login, etc.).
There have been demos of using SQLite client-side, with the database hosted in S3 and HTTP range requests used to fetch only the rows needed for the query.
There might be some piece I'm missing, but the first thing that comes to mind would be using that, possibly with the full-text search extension, to handle searching the metadata.
At that point you'd still be paying S3 egress costs, but I'd be very surprised if it wasn't at least an order of magnitude cheaper than Vercel.
And since it's just static file hosting, it could conceivably be moved to a VPS (or a pair of them) running nginx or Caddy or whatever, if the AWS egress were too pricey.
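The full-text side of that is straightforward with SQLite's FTS5 extension. This local sketch skips the HTTP-range-request part entirely and just shows the query shape (the table and column names, and the sample rows, are invented):

```python
import sqlite3

# In-memory stand-in for the statically hosted database file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE emails USING fts5(sender, subject, body)")
conn.executemany(
    "INSERT INTO emails VALUES (?, ?, ?)",
    [
        ("alice@example.com", "Flight logs", "Attached are the flight logs."),
        ("bob@example.com", "Lunch", "Are we still on for lunch?"),
    ],
)

# MATCH runs the full-text query; bm25() ranks results by relevance.
rows = conn.execute(
    "SELECT sender, subject FROM emails WHERE emails MATCH ? ORDER BY bm25(emails)",
    ("flight",),
).fetchall()
print(rows)  # → [('alice@example.com', 'Flight logs')]
```

In the range-request setups mentioned above, the same query runs in the browser and only the index pages FTS5 actually touches get fetched from the static host.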
There are several implementations of backing an SQLite3 database with lazy-loaded, cached network storage, including multiple that work over HTTP (IIRC usually with range requests).
Those basically just work.
Containers themselves don't, but a lot of the ecosystem structures around them do. Like having reverse proxies (or even just piles of ethernet bridges) in front of everything.
Or if requests ping-pong across containers to be handled. That will certainly make a laptop unable to handle this load.
I just fired up a container on my laptop... running on kubernetes... running in a linux VM. It's lightly dynamic (no database or filesystem I/O).
While I've also got enough other stuff running that my 15 min load average is at 4 and I've got 83% RAM used ignoring buffers/caches/otherwise.
I went and grabbed a random benchmarking tool and pointed it at it with 125 concurrent connections.
Sustained an average of 13914 reqs/s. Highest latency was 53.21ms.
If there are 10,000 people online at any given time hitting the API on average once every 3 seconds (which I believe are generous numbers), you'd only be around 3.3k reqs/s, or about 24% of what my laptop could serve even before any sort of caching, CDN, or anything else.
So... if a laptop can't serve that sort of request load, it sounds more like an indictment of the site's software than anything.
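Spelling out the back-of-envelope math above (the 10,000-user and 3-second figures are the assumptions stated in the comment; the capacity number is the benchmark result quoted there):

```python
concurrent_users = 10_000         # assumed users online at any moment
seconds_between_requests = 3      # assumed per-user request interval
laptop_capacity = 13_914          # measured reqs/s from the benchmark above

offered_load = concurrent_users / seconds_between_requests
print(f"{offered_load:.0f} reqs/s offered")                  # → 3333 reqs/s offered
print(f"{offered_load / laptop_capacity:.0%} of capacity")   # → 24% of capacity
```

So even with generous traffic assumptions, the already-loaded laptop has roughly 4x headroom before any caching or CDN enters the picture.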
No it won't. This is static content we're talking about. The only things limiting you are your network throughput and maybe disk I/O (assuming the content doesn't fit compressed in RAM). Even for an around-the-globe-roundtrip latency, we're still talking a few hundred ms.
Some cloud products have distorted an entire generation of developers' understanding of how services can scale.
I think it’s more helpful to discuss this in requests per second.
I’d interpret “thousands of people hitting a single endpoint multiple times a day” as something like 10,000 people making ~5 requests per 24 hours. That’s roughly 0.6 requests per second.
A laptop from 10 years ago should be able to comfortably serve that. Computers are really really fast. I'm sorry, thousands of users or tens of thousands of requests a day is nothing.
There may be a risk of running into thermal throttling in such a use-case, as laptops are really not designed for sustained loads of any variety. Some deal with it better than others, but few deal with it well.
Part of why this is a problem is that consumer-grade NICs often offload quite a lot of work to the CPU that higher-end server-spec NICs handle themselves, as a laptop isn't really expected to have to keep up with 10K concurrent TCP connections.
I would use a $100/mo box with a much better CPU and more RAM, but I think the pinch point might be the 1 Gbps unmetered networking that Hetzner provides.
They will sell you a 10Gbps uplink however, with (very reasonably priced) metered bandwidth.
A profitable customer? How would Hetzner know if you're profitable or not?
I've hosted side projects on Hetzner for years and have never experienced anything like that. Do you have any references of projects to which it happened?
I'm not sure how one even gets 250 TB/mo through a 1 Gbps link: fully saturated, it moves only about 324 TB in a 30-day month, so that's ~77% utilization around the clock. In any case, running your networking near saturation for the full month is outside most people's definition of "fair use".
Yeah but they still advertise with unlimited traffic.
"All root servers have a dedicated 1 GBit uplink by default and with it unlimited traffic"
https://docs.hetzner.com/robot/general/traffic/
> When we announced these products in November, we planned on being able to share specific pricing and launch dates by now. But the memory and storage shortages you've likely heard about across the industry have rapidly increased since then. The limited availability and growing prices of these critical components mean we must revisit our exact shipping schedule and pricing (especially around Steam Machine and Steam Frame).
Oof... sounds like they're all going to be $$$. That sucks and really steals the thunder from the Steam Machine. Gaming HW is going to suck for many years.
Yes, that's why I said China needs to reach parity on node size; once they do, production will ramp up. Currently they're behind, so they're just supplying the captive local Chinese market and not competing for market share outside China.
Not just gaming hardware, everything where the electronics are a predominant part of the unit cost (read: all gadgets) is going to be seeing a big crunch in the next ~2 years (optimistically).
Approximately 100% of RAM manufacturing capacity on Earth has been reallocated to feed the slop machines; anything consumers get is effectively a production cast-off.