
I don't think this is true. It's something that could be useful, with some sort of ACME-like automated issuance, but should definitely be issued from a non-WebPKI certificate authority.

If that's all you want to accomplish, you don't need WebPKI. Just generate a private key and a self-signed certificate.

(This is basically how Let's Encrypt / ACME accounts work)
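
To make that concrete, here is a minimal Go sketch (the subject name and the one-year validity are placeholders, not anything the WebPKI or ACME requires) that generates a key and a self-signed certificate:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // The private key never needs to leave the machine.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "my-account-key"}, // placeholder
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
        }
        // The template doubles as its own parent: that's what makes it self-signed.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }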


> This is basically how Let's Encrypt / ACME accounts work

That's how they're implemented. How they "work" is a trivial pushbutton thing as documented by a well-known and trusted provider who cares deeply about simple user experience.

"Just self-sign a cert" is very much not the story XMPP wants their federated server operators to deal with.


How do I convince the tens of thousands of other servers that my private key can be trusted without some kind of third party trust architecture?

There's DANE, but outside of maybe two countries it's impractical to set up, because DNS providers keep messing up DNSSEC.


If you are trusting a user because they are the same one that originally contacted you, you don't need a third party. It's TOFU (trust on first use).

I can't believe this was downvoted. Seriously, a certificate binds a public key to a set of attributes (mainly an identity). If you don't need the attributes, you don't need a certificate!
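
The TOFU check itself is small enough to sketch in a few lines of Go (the in-memory map is a stand-in for whatever persistent pin storage you'd actually use):

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // pins maps a peer name to the SHA-256 of the raw public key (or cert)
    // presented on first contact.
    var pins = map[string][32]byte{}

    func verifyTOFU(peer string, rawKey []byte) error {
        sum := sha256.Sum256(rawKey)
        pinned, seen := pins[peer]
        if !seen {
            pins[peer] = sum // first use: trust and remember
            return nil
        }
        if pinned != sum {
            return fmt.Errorf("key for %s changed since first contact", peer)
        }
        return nil
    }

    func main() {
        k := []byte("peer-public-key-bytes")
        fmt.Println(verifyTOFU("alice", k)) // <nil>: first contact, pinned
        fmt.Println(verifyTOFU("alice", k)) // <nil>: matches the pin
    }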

Can you link your PRs here?

Kubernetes is such a huge project that there are few reviewers who would feel comfortable signing off on an arbitrary PR in a part of the codebase they are not very familiar with.

It's more like Linux, where you need to find the working group (Kubernetes SIG) who would be a good sponsor for a patch, and they can then assign a good reviewer.

(This is true even if you work for Google or Red Hat)


My PRs are submitted under my real name and my HN account is not under my real name, so I cannot share.

For my etcd changes I did submit to the correct SIG, but nobody reviewed them.


I have exactly the same two problems, haha. I wonder why they seem unable to fix them.


Sam Altman skipped any attempt to prove his own statements right, so...


So be better than him..?


It's not worth anyone's time to meticulously fact check known (and I'm being kind here) 'exaggerator' Sam Altman, because by the time you're done, he's already spread 10 more 'exaggerations'.


Sam Altman has been a joke for a while now; I've only heard his investors defend him, hoping for their next round increase - is that who you are?


It's not enabled by default, but you can turn on gRPC reflection:

* https://github.com/grpc/grpc-java/blob/master/documentation/...

* https://grpc.io/docs/guides/reflection/

You can then use generic tools like grpc_cli or grpcurl to list available services and methods, and call them.
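
On the server side in grpc-go it's one extra call at startup; a minimal sketch (the port is arbitrary):

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/reflection"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatal(err)
        }
        s := grpc.NewServer()
        // ...register your service implementations here...
        reflection.Register(s) // expose the reflection service
        log.Fatal(s.Serve(lis))
    }

Then, for example:

    grpcurl -plaintext localhost:50051 list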


Various supporting pieces for pod-to-pod mTLS are slowly being brought into the main Kubernetes project.

Take a look at https://github.com/kubernetes/enhancements/tree/master/keps/..., which is hopefully landing as alpha in Kubernetes 1.34. It lets you run a controller that issues certificates; the certificates are automatically plumbed down into pod filesystems, and refresh is handled for you.

Together with ClusterTrustBundles (KEP 3257), these are all the pieces that are needed for someone to put together a controller that distributes certificates and trust anchors to every pod in the cluster.
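
From the pod's point of view, consuming all of that is just reading files. A sketch in Go, with the caveat that the mount paths here are made up and depend entirely on how the volumes are projected:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "os"
    )

    func main() {
        // Hypothetical mount points for the projected trust bundle and
        // the workload certificate.
        caPEM, err := os.ReadFile("/var/run/trust/ca-bundle.pem")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            log.Fatal("no usable trust anchors in bundle")
        }
        cert, err := tls.LoadX509KeyPair("/var/run/certs/tls.crt", "/var/run/certs/tls.key")
        if err != nil {
            log.Fatal(err)
        }
        cfg := &tls.Config{
            Certificates: []tls.Certificate{cert}, // this pod's identity
            RootCAs:      pool,                    // verify peers against the bundle
        }
        _ = cfg // hand this to your HTTP or gRPC client/server
    }

Since the certificates rotate, real code would re-read them on each handshake (e.g. via tls.Config.GetCertificate) rather than loading once at startup.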


From the sync.Pool documentation:

> If the Pool holds the only reference when this happens, the item might be deallocated.

Conceptually, the pool is holding a weak pointer to the items inside it. The GC is free to clean them up if it wants to, when it gets triggered.
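
A small Go demo of that behavior (whether a given item survives depends on GC timing, so treat the output as illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "runtime"
        "sync"
    )

    var bufPool = sync.Pool{
        New: func() any { return new(bytes.Buffer) },
    }

    func main() {
        b := bufPool.Get().(*bytes.Buffer)
        b.WriteString("hello")
        bufPool.Put(b)

        // The pool only weakly retains its items: after GC cycles run,
        // Get may fall back to New and hand out a fresh, empty buffer.
        runtime.GC()
        runtime.GC()
        fmt.Println(bufPool.Get().(*bytes.Buffer).Len()) // likely 0
    }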


If they are using multitenant Docker / containerd containers with no additional sandboxing, then yes, it's only a matter of time and attacker interest before a cross-tenant compromise occurs.


There isn't realistic sandboxing you can do with shared-kernel multitenant general-workload runtimes. You can do shared-kernel with a language runtime, like V8 isolates. You can do it with WASM. But you can't do native binary Unix execution and count on sandboxing to fix the security issues, because there's a track record of local privilege escalations (LPEs) in benign system calls.


GKE does ship with both Ingress and Gateway controllers integrated; they set up GCP load balancers, with optional automatic TLS certificates.

I think you need to flip a flag on the cluster object to enable the Gateway controller.
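
If I remember right, it's a one-liner with gcloud (cluster name and location are placeholders):

    gcloud container clusters update my-cluster \
        --location=us-central1 \
        --gateway-api=standard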

