I think there are also laws-of-physics limits given the current architecture. It's like looking at a 10GB video file and saying: it has to compress to 500MB, right? I mean, it has to, right?
Unless we invent a completely NEW way of encoding video, there's no way to get that kind of efficiency. If tomorrow we're using quantum pixels (or something), sure, 500MB is plenty, but not with existing techniques.
In other words, you cannot compress a 100GB GGUF file into, say, 5GB.
There surely are limits, but I don't think we have a good idea of what those are, and there's nothing to indicate we're anywhere close to them. In terms of raw facts, you can look at the information content and know that you need at least that many bits to represent that knowledge in the model. Intelligence/reasoning is a lot less clear.
100GB to 5GB would be 20x. Video has seen an improvement of that magnitude in the days since MPEG-1.
It's interesting to consider that improvements in video codecs have come from both research and massively increased computing power, basically trading space for computation. LLMs are mostly constrained by memory bandwidth, so if there was some equivalent technique to trade space for computation in LLM inference, that would be a nice win.
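As a toy illustration of that space-for-compute trade (not any specific codec or inference stack): store weights as 8-bit integers plus one float scale, cutting memory traffic, and pay a multiply to dequantize at use time. The numbers here are made up.

```python
# Toy sketch of a space-for-compute trade (8-bit weight quantization):
# weights are stored as int8 plus one float scale, and dequantized
# with a multiply at use time.

def quantize(weights):
    scale = max(abs(w) for w in weights) / 127  # map largest weight to +/-127
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    return [q * scale for q in quants]

weights = [0.12, -0.5, 0.33, 0.9]
quants, scale = quantize(weights)
approx = dequantize(quants, scale)
# Reconstruction error is bounded by half the quantization step.
assert all(abs(a - b) <= scale / 2 + 1e-12 for a, b in zip(weights, approx))
```

The memory saving comes from storing one byte per weight instead of four; the cost is the extra multiply per weight at inference time.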
It could be interesting to understand the actual content of the QR code.
part1 is a static id, so likely linked to the membership.
part2 seems to be a timestamp. Maybe we can try to forge the value to "now - 10 seconds".
And if the implementation has been done right, "part3" should be a signature over part1 and part2, not a "salt" (so forging part2 would be detected and the code rejected).
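A minimal sketch of that scheme in Python, assuming part3 is an HMAC over part1 and part2 with a server-side key (the key, member ID, timestamp, and tag length here are all made up):

```python
import hmac
import hashlib

SECRET = b"server-side key"  # assumption: known only to issuer and scanner

def sign(part1: str, part2: str) -> str:
    """Tag over part1|part2, truncated so it fits a small QR field."""
    msg = f"{part1}|{part2}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

def verify(part1: str, part2: str, part3: str) -> bool:
    return hmac.compare_digest(sign(part1, part2), part3)

member_id = "M123456"        # hypothetical static id (part1)
issued_at = str(1700000000)  # hypothetical timestamp (part2)
tag = sign(member_id, issued_at)

print(verify(member_id, issued_at, tag))            # True: genuine code
print(verify(member_id, str(1700000000 - 10), tag))  # False: forged part2
```

Without the key, backdating part2 to "now - 10 seconds" produces a tag mismatch and the code is rejected.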
Judging by the size of the QR code, part3 is too short to be a signature. Probably the token is just registered in a centralized system that the QR code scanner checks against to see whether the code is valid.
It's a good idea for future satellites, but upgrading existing satellites is probably not feasible.
And these polar-orbit satellites typically live a lot longer than the relatively short-lived Starlink satellites, potentially opening you up to a (perhaps unlikely?) scenario where Starlink moves to new, incompatible hardware for inter-satellite communications, and your satellite is then made obsolete.
Vertical integration is not cheap, but it does have its upsides.
That would require replacing all the satellites with new ones capable of doing that, which doesn't seem feasible. Starlink also doesn't have great coverage of the polar regions.
“We're passing over terabits per second [of data] every day across 9,000 lasers,” SpaceX engineer Travis Brashears said today at SPIE Photonics West, an event in San Francisco focused on the latest advancements in optics and light. "We actually serve over lasers all of our users on Starlink at a given time in like a two-hour window.”
Again though, you can't do "Starlink from the satellites, so we don't rely on a specific ground station" unless all of the satellites support talking to Starlink, which they don't. That means they'd have to be replaced.
As I understand it, the Starlink network has a number of ground stations, and an active inter-satellite "mesh," thanks to laser links, which would allow it to route around the loss of one or more ground stations? (although obviously it requires at least one ground station to be live in order to access the non-Starlink-connected Internet)
The lasers began being integrated between 2020 and 2021, so it's likely SpaceX has made decent progress equipping their network with this capability, although I can't find the latest figures for the proportion of satellites that have lasers.
It sounds like there's something I'm missing if we can't do "Starlink from the satellites, so we don't rely on a specific ground station"
Do you understand what problem is being solved here?
There are satellites in orbit today that have nothing to do with Starlink. Some of these have been up for a long time. We're talking weather satellites and research satellites. The ones in a polar orbit can only use one of two ground stations to communicate back with the earth, simply due to their location. One of those ground stations has lost its fiber-optic connection, so it can't be used at full bandwidth right now.
None of that so far has anything to do with Starlink. We're talking about a system of satellites that already exists and predates Starlink, sometimes by decades.
The person that started this thread proposed: "Maybe just use Starlink from the satellites, so we don't rely on a specific ground station." In other words, have these already existing satellites integrate with the (relatively new) Starlink system.
So they're saying that we somehow make those old satellites, which are already in orbit and have their own communication systems designed to interface with ground stations, stop using the ground station and start using Starlink instead.
Limiting characters can also be a feature: it stops users from putting emojis in their password (so much fun!) only to realize later that they can't log in, because they don't know how to input emojis from their desktop computer.
Hopefully passwords will be gone soon (at least that's my hope).
Don't get me wrong, it's a stupid limitation indeed -- but sometimes decisions like these are meant to stop users from doing stupider things :)
I'm no fan of oversimplification, but Apple's audience is everyone, not just power users. In that case, I'd prefer having advanced options surfaced to me as "advanced" so I can do what I want, but leave the simple experiences for the simple folk.
You might have half a point about emojis, but that's not what Apple is doing here.
Is there a reason I, as a Swede, should be prevented from using my full native alphabet in my passwords, for example?
As an example, you know how people sometimes suggest using a short sentence as a password? Here's a phrase in a local dialect, which means "and in the river there's an island"
Å i åa e ä ö
Notice how only 2 of those letters are available in ASCII.
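A quick way to see that in Python: filter the phrase's words through an ASCII-only check, as a restrictive password field effectively does.

```python
# Which words of the Swedish phrase survive an ASCII-only field?
phrase = "Å i åa e ä ö"
ascii_words = [w for w in phrase.split() if w.isascii()]
print(ascii_words)  # ['i', 'e']
```

Everything else in the phrase is silently unusable once the field demands ASCII.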
ASCII only is not a feature, and I honestly doubt anyone would try to argue that it is if this was about any company other than Apple. Try to look past the "who" and focus on the "what".
I'm wondering if the author contacted the JAXA team. Maybe they would share how the data is encoded?
If the information is secret, it's probably encrypted (SpaceX eventually did that once radio amateurs decoded the video stream), but if it's not, maybe JAXA would be happy to help?
The number of certificates issued/inserted on 25th December is almost the same as on any other day in December, despite it being a bank holiday in most Western countries. That makes me happy: the industry has successfully made certificate renewal fully automatic.
For Let's Encrypt, all renewal requests are done with ACME clients, so this is not a surprise.
I'm curious to know what share of DigiCert and Sectigo certificates are actually renewed with an ACME request (both support it).
Sectigo is ZeroSSL for all intents and purposes (as far as I'm concerned), so probably close to all of them were ACME clients.
DigiCert has been pushing ACME for a while now, but it's a bit annoying: you (my company) needed to prepay or have a line of credit for it, or some similar annoyance that made it less seamless than LE/ZeroSSL.
I think for DigiCert, any of the certs with 89/90-day expiry are ACME renewals with near-100% certainty.
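As a rough version of that heuristic (the 89/90-day window is just the rule of thumb above, not anything official):

```python
from datetime import datetime

def likely_acme(not_before: str, not_after: str) -> bool:
    """Flag certs whose validity period is ~90 days as likely ACME-issued."""
    fmt = "%Y-%m-%d"
    days = (datetime.strptime(not_after, fmt)
            - datetime.strptime(not_before, fmt)).days
    return days in (89, 90)

print(likely_acme("2025-01-01", "2025-04-01"))  # True: 90-day cert
print(likely_acme("2025-01-01", "2026-01-01"))  # False: 1-year cert
```

In practice you'd read the validity period straight off the certificate rather than from date strings, but the classification logic is the same.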
I don't have a definite reason you (or anybody in particular) should choose DigiCert, but I can give you a couple of ideas of where technically they might be a good choice and ISRG (Let's Encrypt) are not.
Firstly there may be policy issues and you can pay Digicert to care whereas you can't pay Let's Encrypt to care about your problems. Meta for example pays (paid?) their issuer to obey their private extra requirements on top of the rules for the Web PKI when it comes to names in the famous facebook.com 2LD.
Secondly, trust issues. Obviously for a mainstream browser or similar, ISRG are trusted, but maybe you've got a fleet of multi-function printers from 2015 across 54 offices, and alas none of them trust Let's Encrypt for the TLS servers. Yes, this was a dumb purchase, but you don't have a time machine, and the people who maintain this fleet keep telling you the next version will definitely fix it, so meanwhile you're buying DigiCert certificates.
This is admittedly a rare use case, but is needed e.g. for setting up a DNS-over-HTTPS server.
ZeroSSL seems to support IPv4 SANs, but fails to validate IPv6 addresses; I tried emailing their support several times about this but they never replied. I finally got a working certificate via GeoCerts (https://www.geocerts.com/), a DigiCert reseller, but doing so required manual validation. For the record, GeoCerts support was fantastic.
I'm pretty sure that in five years, local LLMs will be a thing.