You know that feature in JetBrains (and possibly other) IDEs that highlights non-errors, like code that could be optimized for speed or readability (inverting ifs, using LINQ instead of a foreach, and so on)? As far as I can tell, these are just heuristics, and it feels like the perfect place for an “AI HUD.”
I don’t use Copilot or other coding AIs directly in the IDE because, most of the time, they just get in the way. I mainly use ChatGPT as a more powerful search engine, and this feels like exactly the kind of IDE integration that would fit well with my workflow.
As someone who knows basically nothing about cryptography - wouldn't training an LLM to work on encrypted data also make that LLM extremely good at breaking that encryption?
I assume that doesn't happen? Can someone ELI5 please?
Good encryption schemes are designed so that ciphertexts are effectively indistinguishable from random data -- you should not be able to see any pattern in the encrypted text without knowledge of the key and the algorithm.
If your encryption scheme satisfies this, there are no patterns for the LLM to learn: without the key, every continuation of the ciphertext is equally likely, so trying to learn the encryption scheme from examples is like trying to predict next week's lottery numbers.
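A toy way to see this (a sketch using a one-time pad rather than a production cipher, since the effect is easiest to demonstrate there):

```python
import os
from collections import Counter

# One-time pad: XOR with a fresh, uniformly random key. With such a key,
# the ciphertext is itself uniformly random no matter how patterned the
# plaintext is -- the "indistinguishable from random" property.
def encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Maximally patterned plaintext: one byte repeated 100,000 times.
plaintext = b"A" * 100_000
key = os.urandom(len(plaintext))
ciphertext = encrypt(plaintext, key)

# Byte frequencies in the ciphertext come out flat: each of the 256
# values appears roughly 100_000 / 256 ≈ 390 times. A next-byte
# predictor trained on this has nothing to latch onto.
counts = Counter(ciphertext)
print(len(counts), min(counts.values()), max(counts.values()))
```

Despite the plaintext being a single repeated byte, the ciphertext's byte histogram is statistically flat, so no amount of training data helps the model do better than chance.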
This is why FHE for ML schemes [1] don't try to make ML models work directly on encrypted data, but rather try to package ML models so they can run inside an FHE context.
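For intuition on what "running inside an FHE context" means, here is a toy homomorphic computation using textbook RSA. This is an illustration only: textbook RSA is insecure as written and only supports multiplication, whereas real FHE schemes (BGV, CKKS, TFHE, etc.) support both addition and multiplication, which is enough to evaluate a neural network on ciphertexts.

```python
# Toy homomorphic computation with textbook RSA (illustration only).
p, q = 61, 53
n = p * q                 # public modulus, 3233
e, d = 17, 2753           # public / private exponents (e*d ≡ 1 mod 3120)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n   # multiply ciphertexts -- no key, no decryption
print(dec(c))               # recovers a * b = 42
```

The party doing the multiplication never sees `a`, `b`, or the private key; only the key holder can decrypt the result. FHE generalizes this so that arbitrary circuits, not just multiplication, can be evaluated on encrypted inputs.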
I didn't mean to suggest otherwise! That's why I also linked the CryptoNets paper - to show that you're transforming the inference to happen inside an FHE context, not trying to learn encrypted data
Yes, you can do CryptoNets. What I’m saying is that you don’t have to: you can simply use FHE to train the network in a fully encrypted manner, with both the network and the data FHE-encrypted, so the training itself is an FHE application. It would be insanely slow, and I doubt it can be done today even for “small” LLMs due to the high overhead of FHE.
> This is why FHE for ML schemes [1] don't try to make ML models work directly on encrypted data, but rather try to package ML models so they can run inside an FHE context.
I don't think @strangecasts was trying to say you couldn't. I believe their point was that you can't have a model learn to coherently respond to encrypted inputs with just traditional learning mechanisms (so without FHE). Doing so would require an implicit breaking of the encryption scheme by the model because it would need a semantic understanding of the plaintext to provide a cogent, correctly encrypted response.
From my understanding of cryptography, most schemes are designed under the assumption that _any_ function without access to the secret key has only a vanishingly small chance of recovering the correct message (on the order of 2^-key_length). Since LLMs are also just functions, it is extremely unlikely for cryptographic protocols to be broken _unless_ LLMs enable entirely new types of attacks altogether.
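To put 2^-key_length in perspective, a back-of-envelope calculation, assuming a (generous, hypothetical) attacker who can test a billion keys per second:

```python
# Time to exhaust a 128-bit keyspace at 10^9 guesses per second.
keys = 2 ** 128
rate = 10 ** 9                       # guesses per second (generous)
years = keys / rate / (3600 * 24 * 365)
print(f"{years:.2e} years")          # ~1e22 years; the universe is
                                     # only ~1.4e10 years old
```

Any learned model that "guesses" keys or plaintexts is bound by the same odds unless it exploits a structural weakness in the scheme itself.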
Has anyone considered that it might've just been a novelty? A fancy paperweight? I wonder if, in a couple thousand years, archeologists will wonder why some people owned a 10x10cm cube of tungsten...
> Has anyone considered that it might've just been a novelty? A fancy paperweight?
This just got me thinking: it would likely take us hours to explain to an ancient Roman what a "paperweight" is. The fact that paper a) exists, b) is our primary writing medium, and c) can be made so lightweight that a slight breeze can blow a stack of it away would be mind-blowing.
Paperweights would be of even greater use to Romans. Their texts were often wound up on scrolls, so an item that stopped a scroll from rolling itself back up when unrolled would be rather useful.
Maybe that wouldn't be the worst thing. Maybe Chrome capturing the majority of the iOS browser market would finally be the proverbial straw that breaks the camel's back and pushes regulators toward forcing Google to sell Chrome.
Or… Sundar Pichai has lunch with Trump, brings with him a few nice cigars and a Google-sponsored yacht (I hear he’s still short on those), explains to him how that’s all just a liberal media fake news campaign against good American products, and they decide to axe regulatory bodies instead.
The main reason is that I wanted to be able to support single-VPS K3s clusters all the way down to ~512MB of RAM.
K3s already takes up about 100MB, which leaves only ~400MB for the application, so trying to run Canine on that machine would probably add too much bloat.
Don't know anything about this project, but usually the reason is "because there could be N clusters", and sometimes "because I want the cluster to come and go frequently" (e.g. for testing workloads).