
Captura is a great piece of software - I've used it for years, and I still use the latest release today.

It is a complete, all-in-one tool - very straightforward UI, lots of formats supported (especially through ffmpeg integration) and very easy to use in terms of window or screen area selection for recording - and more importantly for my use-cases, it's portable (no install, no admin rights needed). Really a great example of what's possible in that space.

I didn't participate in the project, but I've checked out the PRs and issues list every now and then and it's been frustrating seeing the author struggle against the store republishing issues for literal years. The issue tracking that (#405[1]) is not a happy read for sure.

Captura's MIT licensing effectively gave people a "license to steal", and the fact that it's so easy to publish something and sell it on the Microsoft Store made for a bad combination.

I've however been really disappointed by Microsoft's non-response throughout that republishing debacle. Republishing free software is a difficult topic to get right for edge cases, sure, but the Captura case was obvious to rule on and Microsoft did nothing for years - it was clear that there was no process for this kind of scenario, and that the default response was to do nothing. It took the author taking down the project for them to react, and even then I'm convinced that's only because whoever handled that case assumed that the republisher was the one taking it down, not the project author.

[1] https://github.com/MathewSachin/Captura/issues/405


Years ago, I made a significant amount of money with a game on the Windows Mobile app store. I'm French and, at that time, was unable to get the right documents to receive my money into my bank account. It was as if the Microsoft store had only been conceived for US citizens. I kept trying to contact anyone at Microsoft through various communication channels, but received zero response while the money kept growing.

Fortunately, I won a Microsoft challenge about app development. I had the opportunity to go to Seattle and attend the presentation of the next Microsoft mobile OS (Windows Phone). During a coffee break, I took the opportunity to explain my situation to the presenter. He was very sorry and gave me an operational contact. Days later everything was resolved and I finally received my money.

Lesson learned: Microsoft is a huge bureaucracy, but you can manage to find genuinely involved and competent people. As a French person, I know how to deal with bureaucracy: avoid it if you can. I switched to other development platforms and never went back to Microsoft.


I don't think it's likely. The real migration has been and will continue to be towards Discord servers and similar "smallish" live chat-based communities on centralized services.

The age of small self-hosted forums is unfortunately behind us, and I don't see them reviving any time soon.


The closed-source Discord that has bots built into the platform, that's the solution?

I wish more people would try Discourse, which is threaded like a real messageboard without all the automation bells and whistles.


Forums work just fine. You get to have your own community with your own rules on your own website with your own advertisement. If it helps you can also make your own tools. For local communities they are also amazingly fast.

If HN was a subreddit I would never visit it.


Those Discord servers, even if hosted by a centralized company, are community-run forums. You can set "your" server to be invite-only so you can filter who gets in.

The underlying software is not really important, it's what you can do with it.


To simplify equations that create ratios between distance, time and energy. That's literally it. Using Planck units is a math trick to remove a bunch of constants from a bunch of equations.
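The "trick" is just dimensional analysis: the Planck units are the unique combinations of ħ, G, and c with the right dimensions. A quick sketch using standard CODATA values:

```python
import math

# Approximate CODATA values
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# The Planck length: the only way to combine hbar, G, and c
# into a quantity with the dimension of length.
planck_length = math.sqrt(hbar * G / c**3)
print(planck_length)  # ~1.616e-35 (meters)
```

Declaring ħ = G = c = 1 makes combinations like this equal to 1, which is exactly what removes the constants from the equations.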

The Planck length doesn't really have any known physical importance, at least no more than a meter does. There are some things that happen around a Planck length (for example, current theories predict that black holes need to be bigger than a Planck length to exist, or alternatively the Planck length is the point where quantum uncertainty completely overtakes any classical theory), but particles can "move" distances less than a Planck length (with lots of caveats, mainly because quantum uncertainty makes the notion of "moving" in the classical sense kinda weird and barely applicable, but still, they _can_).

A common belief is that the Planck length is some kind of "minimal distance", which leads people to think of it as the "pixel of the universe", but there's no actual theoretical framework saying that. It's just a common misconception.


The Planck length is where quantum gravity overtakes non-gravitational theories; ordinary quantum effects happen on a scale about 25 orders of magnitude larger. For comparison, the observable universe is about that much bigger than you are.


There's a good chunk of the article about that. "Constant-time" is a bit of a misnomer; "secret-independent resource consumption" is a better way to say it. If you just add a "sleep(rand)" or equivalent then you can still break the secret using statistics or side-channel attacks.
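To sketch the difference (function names here are made up for illustration; `hmac.compare_digest` is Python's real secret-independent comparison):

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as a byte differs, so the run time
    # depends on how many leading bytes of the attacker's guess are
    # correct. Adding sleep(rand) on top only adds noise, which averages
    # out over enough measurements.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # Secret-independent resource consumption: always touches every byte,
    # regardless of where (or whether) the inputs differ.
    return hmac.compare_digest(a, b)
```

Both return the same answers; only the naive one's running time depends on the secret.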


I don't think that's a healthy mindset to have.

Just because something is widely used doesn't mean it's more secure (example: libwebp). The security issues tend to happen mostly when creating optimizations that bypass the "obviously secure" way to do things, but rely on an internal state that gets broken by another optimization down the line. This is way less frequent in "smaller" projects, just because they didn't need to go through that round of optimizations yet.

For this question specifically, though, I think Ladybird is extremely interesting as an attempt at creating a security-focused C++ project. Between the constant fuzzing that each part of Ladybird already undergoes (courtesy of the OSS-Fuzz initiative), the first-class citizenship of process separation for the different features of the browser, the modern C++ used in Ladybird (which prevents a lot of the issues cropping up in commonly used media decoding libraries), the overall focus on accuracy over performance in library implementations, and the compiler-level sanitization utilities (UBSan, ASan) that are enabled by default, I think it's less likely that a critical security-impacting bug would exist in Ladybird's .ico support than in WebKit's, for example.


The advances of renewables are definitely dependent on strong compute and widespread global (or at least widespread local) networking. Smart grids need all of the different tools from different manufacturers to talk to each other, which an agreed-upon networking protocol like IP on top of an existing internet backbone makes very easy to implement. On-demand energy storage and supply from battery banks - or even hydroelectric storage or (as mentioned in the article) gas-powered backup turbines - needs the analytics power to study and predict load variations on all of the collected data, which existing data centers built for search or even apps make trivial to acquire.

Now, there could have been technical solutions for these issues found in the 80s or 90s, but more likely than not the engineers back then would not have thought about this beyond the obvious corporate aspects - after all, it's much more "practical" to focus on connecting the big power plants together and just vaguely estimate the required consumption.

It's probable that, if you sat in a design meeting in the 80s and proposed, as "an energy-efficient solution", to have a small computer in every consumer's breaker box that monitors the incoming electricity, sends the info through a combination of radio and country-spanning wires to a central server, and then does analytics on it to predict when power is needed, you'd have been laughed out of the room. And yet, that's how smart grids work nowadays and it is the correct solution to this problem.

The improvements in energy storage and solar/wind renewables would have started decades earlier, sure, and that would have mattered. But I don't think anybody would have been able to use them the way we can use them today if it was done twenty or thirty years ago, and the technology might have died down instead. It's hard to know for sure, but it's a very plausible outcome.


LIGO by itself serves "very little" purpose, just watching for gravitational events that are theoretically possible and checking if they happen for real. It actually confirmed that these events exist a while ago[1], and now it's "just" looking for more things we already know exist.

In my opinion, the point of LIGO is no longer detecting these gravitational events. Doing that is cool, and will bring us more data points about the events, but it's unlikely to ever yield "new science" like that.

Instead, LIGO (and Virgo) excel at exactly the kind of thing we see in this article: pushing the barrier of what we can do in this hyperspecific use-case, doing "new" engineering that would not make sense for commercial projects yet, and finding out how to implement cool solutions. The sizeable amount of funding and the sharply focused goal will lead to new techniques and technologies that might have an impact.

Now, is there a guarantee that this new technology will have a larger impact than "better detectors"? No. Actually, there's no guarantee about anything coming out of LIGO ever, any more than there's a guarantee that the LHC[2] or ITER[3] or the ELT[4] will give us new science. But putting all of your eggs in the same basket is a bad way to make more science, and there's enough room in science budgets to try a few dozen monumental projects and see what sticks.

[1] https://en.wikipedia.org/wiki/First_observation_of_gravitati...

[2] https://en.wikipedia.org/wiki/Large_Hadron_Collider

[3] https://en.wikipedia.org/wiki/ITER

[4] https://en.wikipedia.org/wiki/Extremely_Large_Telescope


There is a chance that LIGO will detect something we don't expect. I believe that has happened quite a bit with optical microscopes and telescopes.

Edit: Actually you said that but consider it unlikely which I absolutely can't argue with.


thank you


I don't think you understand the policy.

Unity isn't doing what Unreal did and asking for a cut of every sale. That would be surprising, that would be impactful, but it would be a manageable amount of money.

Unity is asking for a fee on every _install_ of the game, even if it amounts to one single sale of the game. Quoting their FAQ[0]:

> Q: If a user reinstalls/redownloads a game / changes their hardware, will that count as multiple installs?

> A: Yes. The creator will need to pay for all future installs. The reason is that Unity doesn’t receive end-player information, just aggregate data.

That means that if you sell a game for $1, a user reinstalls it 6 times over the course of their ownership of the game (changing computers, uninstalling/reinstalling to free up space every now and then, adding or removing mods, etc...), and you've made more than 200,000 sales... you now owe Unity $1.20 for that user. And you only earned $1 from that user. Sure, you can pay Unity $1500/year/developer to raise the threshold to $1 million and lower the fee to $0.15 per install, but that's not a "deal". That's a racket.
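To make that arithmetic concrete (the $0.20/install rate is the standard figure quoted at the time; this is just a back-of-the-envelope sketch, not Unity's actual billing logic):

```python
FEE_PER_INSTALL = 0.20  # quoted rate once past the 200,000-install threshold

def net_revenue_per_user(sale_price: float, installs_by_user: int) -> float:
    # Revenue from a single user, minus the per-install fees they trigger,
    # assuming the developer is already past the install threshold.
    return sale_price - installs_by_user * FEE_PER_INSTALL

# A $1 game that one user installs 6 times over its lifetime:
print(round(net_revenue_per_user(1.00, 6), 2))  # -0.2 -> a net loss on that sale
```

One reinstall-happy user is enough to flip a sale from profit to loss, which is the core of the complaint.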

This is absolutely, completely unsustainable. And not like "this will cost some money to companies" unsustainable, more like "it will be cheaper to hire people to port our game to another engine" unsustainable, as the person handling the business side of Among Us stated yesterday[1]:

> This is legit the kind of math I'm doing too. I've learned that Unity's fee won't be retroactive which is delightful, but also Among Us gets enough dls per month that I could just hire two whole people to port AU away from Unity instead of them taxing us for 0 added value.

And that's not accounting for business models like bundles or game passes. AGGRO CRAB, the developers behind the upcoming "Another Crab's Treasure" (announced earlier this year and coming out early next year), have stated that they are unsure whether their Game Pass release will be sustainable for them[2]:

> This means Another Crab's Treasure will be free to install for the 25 million Game Pass subscribers. If only a fraction of those users download our game, Unity could take a fee that puts an enormous dent in our income and threatens the sustainability of our business.

To be more exact and add some context: as reported by journalist Stephen Totilo[3], Unity executive Marc Whitten stated that the fee would be charged to Microsoft:

> As for Game Pass and other subscription services, Whitten said that developers like Aggro Crab would not be on the hook, as the fees are charged to distributors, which in the Game Pass example would be Microsoft.

This will, however, disincentivize Microsoft from publishing games made by studios who use Unity, because Microsoft would potentially be publishing at a loss, which of course they don't want to do. So, not the exact same impact on developers, but it amounts to the same thing for them.

And of course, that's not accounting for all of the "freemium" models out there. And I don't mean microtransactions; I mean games that lock part of the content behind a paywall but offer access to most of the game for free. That business model, sustainable so far, can now literally end up with Unity asking for more money than you earn in a year. (And yeah, this is a real risk for some companies. Anduo Games, the company behind the NSFW game Third Crisis, is still wondering if they're in that exact situation and what they can do about it[4].)

This move is just bonkers. It's a business model where just using a game engine means you can end up owing more money to a company than you earned over the lifetime of your game.

And this is why people are not happy about this move.

[0] https://forum.unity.com/threads/unity-plan-pricing-and-packa...

[1] https://twitter.com/forte_bass/status/1701696983617180010

[2] https://twitter.com/AggroCrabGames/status/170169103683230926...

[3] https://www.axios.com/2023/09/13/unity-runtime-fee-policy-ma...

[4] https://itch.io/post/8572430 (the link is to a comment and is fully SFW, the game this comment is about isn't)


The point is that today, the key isn't in Google's or Amazon's or Meta's servers, but on the phones of people. That means that you literally don't have the key if you don't have the phone. And governments don't want that, they want the keys in order to eavesdrop but without being noticed (and stealing the phone would get you noticed).

So your only option to comply with this is to remove the phone-only key storage option and move all of the keys onto your servers, which is what we mean by "breaking end-to-end encryption".

The issue is that to comply with the rules, you have to secure that server so only the good guys can get in, and only if the warrant is legit, while also allowing fast access for time-sensitive cases such as terrorism and secret cases such as NSA investigations. You also have to make sure that there's absolutely no way for people to access that server if they don't have the approval.

Oh, and also that server (or these servers) contains the keys to read every message from every citizen of your country (including politicians), which is probably worth as much as your GDP.

So you need to build the equivalent of a safe containing one trillion dollars that can't be accessed for any reason except all of the reasons mentioned above. Except that this theoretical trillion dollars is made of special dollars: if you mess up and let people in without anyone noticing they got in, they can "steal" the trillion dollars and start spending them, and nobody would notice that they're being spent. And just about every country on earth would love to "borrow" your trillion dollars, especially if you can't ever realistically prove they did it.

Easy, right?


Has there ever been a public key sign-countersign encrypted tap method?

I.e. Authorized tap requestors have keys (law enforcement, intelligence) and sign a request (including timestamp), storing a copy for audit.

The approval system (courts, FISA) validates that request, countersigns if they approve (including timestamp), storing a copy for audit.

The system owners (messaging services, etc.) then validate both signatures and provide the requested tap information, creating a tap record (including content scope and timestamp), storing a copy for audit.

Ideally, then all audit logs get publicly published, albeit redacted as needed for case purposes.

Part of the central issue is deciding "Who should be responsible for security?" Imho, if governments want to mandate a scheme like this, it sure as shit shouldn't be the tech companies. The government should have to manage its own keys, or deal with consequences of leaking them (while allowing the tech companies to retain independent records of individual requests).

As much as it pains me to say this... this wouldn't be the worst use case for a blockchain...
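The request/countersign/fulfil flow described above could be sketched like this (a toy model: HMAC stands in for real public-key signatures, and all the key names, field names, and audit format are made up for illustration):

```python
import hashlib
import hmac
import json
import time

# Toy stand-in keys; a real system would use public-key signatures
# (e.g. Ed25519) so verifiers don't hold signing capability.
REQUESTOR_KEY = b"law-enforcement-secret"
COURT_KEY = b"court-secret"

audit_log = []  # every step stores a copy for audit

def sign(key: bytes, payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def make_request(target: str) -> dict:
    # Authorized requestor signs a timestamped tap request.
    req = {"target": target, "ts": time.time()}
    req["requestor_sig"] = sign(REQUESTOR_KEY, {"target": target, "ts": req["ts"]})
    audit_log.append(("request", dict(req)))
    return req

def countersign(req: dict) -> dict:
    # The approval system validates the request and countersigns it.
    expected = sign(REQUESTOR_KEY, {"target": req["target"], "ts": req["ts"]})
    if not hmac.compare_digest(expected, req["requestor_sig"]):
        raise ValueError("invalid requestor signature")
    req["court_sig"] = sign(COURT_KEY, {"target": req["target"], "ts": req["ts"]})
    audit_log.append(("countersign", dict(req)))
    return req

def fulfil(req: dict) -> str:
    # The system owner validates both signatures before producing the tap.
    expected = sign(COURT_KEY, {"target": req["target"], "ts": req["ts"]})
    if not hmac.compare_digest(expected, req.get("court_sig", "")):
        raise ValueError("missing or invalid court countersignature")
    audit_log.append(("tap", req["target"]))
    return f"tap-record-for-{req['target']}"

approved = countersign(make_request("case-123"))
print(fulfil(approved))  # tap-record-for-case-123
```

An uncountersigned request is rejected at the fulfilment step, and every stage leaves an audit entry behind.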



Yes! Exactly like what you've apparently thought about and worked on for a long time. Neat!

>> To decrypt it, multiple parties need to come together and combine their keys, all the while creating an audit log of why they are accessing this or that portion.

To me, this is the technical solution that best mirrors the ideals of the pre-technical reality.

And I consider myself an encryption absolutist! But I think the powers arrayed against it are too strong (and in some areas, too morally correct) to fully resist.

Which devolves to creating a compromise, and hopefully one better than "Government has no keys, any of the time" or "Government has all keys, all the time."


So instead of stealing a single key, the FSB has to steal three?


The client-side devices / cameras / whatever would send the encrypted copies off-prem, to be decrypted in the case of proper due process and authorization. But it would require interactively querying a distributed database that is managed by agencies or networks representing civilian interests, and these agencies would rate-limit the querying and disclose every query, who did it and why.

We need more transparency in our governments and security agencies (including the FSB and CIA). Start with transparency on why they need certain data. More here:

https://community.qbix.com/t/transparency-in-government/234/...


Yes. In addition to two of those keys being attributable to the federal government.

Which, at least in the US DoD's case, already manages the world's largest PKI system.

The key difference with the UK scheme would be (1) the tech company would retain the final decryption key and (2) any use of that decryption key would be required (technically and legally) to generate a public audit record (albeit obfuscated if the court order so requires).


And what happens when the NSA or the FSB or some other equivalent just breaks into where the keys are stored, or beats it out of an employee, and bypasses the entire logging mechanism?

Your security guard having a clipboard where everyone signs in at the gate doesn't matter if someone dug a hole under the fence.


You mean when the {other nation's foreign intelligence agency} penetrates {nation's intelligence agency} and {nation's court system}?

And still creates a logging trail because the log system is intrinsically linked to fulfilling a request?


"Intrinsically linked" doesn't exist. Encryption is math, math you can do on a piece of paper (in theory). Anything you set up to log the fact that people did that math is always going to be meaningless if people take the numbers and do the math away from your logging system.

Now, you can say "but you can't ever access the numbers, just order the computer to do the operation". And also "to order the operation, you need 2FA and a signature from a judge and the president". And, of course, "the numbers needed for decrypting are split between three different servers, each with its own security system, and they can't be forced to talk to each other without the president's signature being added to a public log". And that's all well and good, but consider this: I install a listener on the RAM of each of the three servers. I wait until it does a totally legit, totally approved thing that gets logged. I now have the numbers copied somewhere. I do the decrypting for everything else away from the servers.

Sounds like a difficult operation? You're talking about three numbers worth a trillion dollars if they ever get out. Spy missions have been done that were harder to pull off for less benefit.

You just thought of [technical solution] to prevent listening through the RAM? Great, you just solved one _very obvious_ part of the attack surface. Now to address the ten thousand other parts identified by your threat model, and I really hope you did a perfect job while designing that threat model, because one blind spot = all of the keys are out forever. Also, no pressure, but your team of 10 or 100 or even 1000 people working on that threat model is immediately going to be pitted against teams of the same size from every government ever, so I hope your team has the best and most amazing engineers the world will ever see. And that's not considering the human aspect of all of this, because, well, one mole during the deployment, one developer paid enough by an adversary to make an "accidental" typo that leaves a security hole, one piece of open-source software getting supply-chain attacked during deployment, and your threat model is moot.
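The structural point about split keys can be sketched with the simplest possible scheme, XOR-based n-of-n secret sharing (a toy model, not what any real proposal specifies): the shares are only safe while they stay apart, and any process that sees them combined, even once during a "legit" operation, has the whole key forever.

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int = 3) -> list:
    # n-of-n XOR secret sharing: the first n-1 shares are pure random
    # noise, and the last is the key XORed with all of them. Any n-1
    # shares together still reveal nothing about the key.
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))
    return shares

def combine(shares: list) -> bytes:
    # The structural weakness: to use the key at all, the shares have to
    # meet somewhere - here, in this process's memory. Anyone observing
    # that memory at that moment walks away with the full key, and the
    # per-server audit logs never record it.
    return reduce(xor, shares)

key = secrets.token_bytes(32)
shares = split_key(key)
assert combine(shares) == key  # the approved, logged operation... and the leak point
```

Fancier threshold schemes (Shamir, MPC) raise the bar for *where* the shares meet, but the moment plaintext keys materialize anywhere, the same problem applies.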


So many arguments against this boil down to 'Anything less than perfection isn't perfect.'

That's true.

But it also misses the benefits of a less-than-perfect but better-than-worst-case system.

By your argument, TLS shouldn't exist.

And yet, it does, is widely deployed, and has generally improved the wire-security of the internet as a whole. Even while having organizational and threat surface flaws.

I agree with you that no government entity should have decryption keys in their possession.

However, I disagree that there should be no way for them to force decryption.

There's technical space between those two statements that preserves user privacy while also allowing the legal systems of our society to function in a post-widespread personal encryption age.


That's completely missing the point. This is not about perfection, this is about the threat level.

Decryption is always going to be technically possible. A government can always get possession of a phone, invest a lot of time and skill to get the key out of it, and then use that. This is what happened in that one famous Apple case, and this is what is always going to happen when people use E2E encryption. The point I made in my other posts was that once you get the key, you have the key, and that doesn't change just because the key is on the phone. That's your threat model when you use E2E encryption.

TLS works the same way. The encryption keys are ephemeral, but they're temporarily stored on your computer and on the server you're communicating with. If you want to attack a TLS connection (and you can!) you need to obtain the key from either the server or the client, and that's your threat model when you use TLS.

This is a completely fine and acceptable threat model as long as the keys are stored in a disparate sea of targets: either on hundreds of millions of possible client/server machines for TLS, or on each person's phone (each one a different model, from a different maker, running different apps) for E2E. In such a distributed model, nobody can realistically get every key out of every phone at once. This limits every single attack to a couple of high-profile targets, and therefore makes the impact of successful attacks way, wayyyy lower.

The issue arises when you decide to forbid end-to-end encryption, and instead mandate a global way to decrypt everything without needing access to the phone itself. This changes the threat model in a way that makes it unsustainable.

Again, I know I've repeated that vault analogy, but it's a great way to explain attack surfaces and threat models. It's fine if everyone has a vault at home with their life savings in gold inside, because nobody can realistically rob every vault from everyone at once. It's still fine if every city has a vault where people store their gold, because while a few robberies might happen, it's possible to have high enough security to make robbing the vault not worth it. It starts being a bad idea to ask everyone to put their gold into one large central vault that "only the government" has access to, because the money you need to spend to protect that vault is going to be prohibitive (and no way the government isn't going to skimp on that at some point). And finally, it's an awful idea to do that with magical gold that you can steal by touching it with a finger and teleporting out with it, because all of that gold is going to disappear so fast you'd better not blink, and losing that combined pile of gold is going to impact every citizen ever.

It's a matter of threat modeling: the moment there's a way to access absolutely everything from a single entry point with possibly avoidable consequences for the attacker, then that entry point becomes so enticing that you can't protect it. You just can't. No amount of effort, money, and technical know-how is going to protect that target.


> TLS works the same way.

TLS does not use ephemeral keys, from a practical live-connection perspective, because the root of trust is established by chaining up to a trusted root key.

Ergo, there are a set of root keys that, if compromised, topple the entire house of cards by enabling masquerading as the endpoint and proxying requests to it.

And that's exactly the problem you're griping about with regards to a tap system. One key to rule them all.


Hacking the root certificates of TLS doesn't allow you to read every TLS-encrypted conversation ever, thankfully. It just allows you to set up a MITM attack that looks legit. And sure, that is bad, but it's not "immediately makes everything readable" bad.

That's why I call TLS keys "ephemeral" under this threat model.

The goal of anti-E2E legislation isn't to be able to MITM a conversation - again, government agencies can already set that up with the current protocols fairly easily. The goal of the legislation is to make it so that, "with the correct keys that only the good guys have", you can decrypt any past message you want that was already sent using the messaging system, without needing access to either device.

If the governments only settled with an "active tap system" that works like a MITM for e2e encrypted channels, we wouldn't be having this discussion or we wouldn't be talking about new regulations. Because again, that is already possible, and governments are already doing it.


That's why I put the live caveat. Granted, decryption of previously recorded conversations and decryption of new conversations are two different threat models.

Out of curiosity, can MITM of new connections be set up fairly easily with current protocols? (let's say TLS / web cert PKI and Telegram)

For the TLS case, they'd need to forge a cert for the other end and serve it to a targeted user. Anything broader would risk being picked up by certificate transparency logs. That limits the attack to targeted, small-scale operations and requires control of key internet routing infrastructure. Not ideal, but at least we're limiting mass continuous surveillance.

For Telegram, the initiation is via DH [0] and rekeyed every 100 messages or calendar week, whichever comes first, with interactive key visualization on the initial key exchange [1]. That seems a lot harder to break.

[0] https://core.telegram.org/api/end-to-end

[1] https://core.telegram.org/api/end-to-end/pfs#key-visualizati...


And not just TLS and certificate authorities but also DNSSEC. Still, it is pretty worrying to have one CA like Let's Encrypt be behind so many sites, or seven people behind DNSSEC:

https://www.icann.org/en/blogs/details/the-problem-with-the-...

But here is how they protect it:

https://www.iana.org/dnssec/ceremonies

On the other hand, data is routinely stored in centralized databases and they are constantly hacked:

https://qbix.com/blog/2023/06/12/no-way-to-prevent-this-says...


The issue is that whatever "audit" or "protection" method you create, whatever technology you use to ensure only the "good guys" get the information and the "bad guys" can't, it's only layers added on top of the real issue:

The final key is always going to be a single number. Once the key is out, it's out. There's nothing you can do about it being out, and no way to know it's out unless your audit system somehow caught it beforehand.

And that key (or these keys, which doesn't change much between "one number" and "two billion numbers" in terms of difficulty of stealing or storing them) is going to be worth trillions of dollars.

Again, the bank vault thing is an apt analogy (up to a point): you can add all of the security "around" the vault - guard rounds, advanced infrared sensors, reinforced concrete with woven kevlar in it, etc... But if someone ever gets the dollar bills in their hands, then they've got the bills. And if they somehow manage to bypass the security systems and not get noticed as they go in for the steal, you have no way to know who they are or that they did it.

Now, that is completely fine for a standard bank vault: after all, you need to physically send someone in, it's pretty rare for people to actually want in the vault so security can be pretty slow and involved, it doesn't have that much "money" inside (I'm pretty sure no bank vault in the world contains more than a handful of millions at any given time), and above all it's "physical" stuff inside: you'd immediately see if it's gone, it's not like someone who got in the vault can "magically" copy the bank notes and leave with the money while leaving the vault seemingly intact.

It's less fine for a "server" vault, where not only do you store everything so it's worth trillions, but people need to access it all the time because "investigations" and "warrants", and in a fast way because "terrorism", and if there's a breach or a mole or anything like that then people can copy all of the data inside and leave the server seemingly intact.

I think believing there's a technical solution is misunderstanding the problem, and anyone pretending they've "solved" it is always going to minimize one risk or the other. The governments and regulators don't get that yet, because it looks like it's just a technological issue of building "the vault". But the real issue - the fact that "the vault" doesn't matter when the consequences of stealing its contents are risk-free for bad guys but so immensely impactful for citizens - is why technical solutions won't ever be enough.


I understand the analogies.

What I don't understand is, in the absence of some sort of scheme, how a justice system functions.

How would you compel production of evidence when duly authorized?


Note about the more productive approach: Mozilla published a blog post[0] back in late June about what solutions are already available and advising for the law to reinforce these existing mechanisms instead of mandating a new browser-level blocking thing that can easily be abused.

[0] https://blog.mozilla.org/netpolicy/2023/06/26/france-browser...


Thanks for linking. OK, if fraud protection is the actual reason (I'll go with Mozilla on that): this makes no sense at all and is disproportionate. However, I think the bill is a Trojan horse in this case.

