Hacker News | intangible's comments

If you were already experienced and productive, it does very little for you beyond summaries, a little boilerplate, and possibly search help.

If you were unproductive, it allows you to be more "productive" while stalling or reversing your learning and growth.

Of course, person number 2's newfound "productivity" comes at the expense of leeching productivity away from the experienced and productive people, by overloading them with reviewing and validating non-deterministically generated spaghetti.

It amazes people who think pumping out code is the hard part of a project, when in fact that's the easiest part...

We've apparently collectively forgotten that lines of code is one of the worst metrics for measuring productivity.


> If you were already experienced and productive, it does very little for you beyond summaries, a little boilerplate, and possibly search help.

It sounds like you already have your mind made up about AI, but I disagree. The rest of your comment makes assertions and assumptions about points I did not make, so I'll leave those for someone else to address.


Hey, you were the one who said you were confused - it sounds more like you've already made up your mind and don't want to hear otherwise.


Google ripped away the ability to access your original Google Photos files via any programmatic method; the manual Google Takeout export is all that's left.

This was the biggest reason I also had to move away from Google Photos when all I really wanted was protection from getting my account accidentally G-structed with zero way to contact a human to get my files back.


If only Trump had 4 years to do anything other than kick the can down the road yet again like every prior president.

Biden took the political hit by doing the right thing and ending the war - No prior president did that.


I think Mercedes is the only company willing to take liability for their self-driving features. https://www.theverge.com/2023/9/27/23892154/mercedes-benz-dr...

If your Tesla crashes while using FSD, it's completely your fault, according to them.


The difference is Tesla says you have to pay attention and enforces this, but it can attempt to drive on any street, make turns and lane changes, and navigate to your destination. Whereas Mercedes has a much, much simpler system that only works in a straight line going less than 40mph on certain freeways (so only in traffic).


Isn't this just for relatively low-speed driving, on freeways? IIRC it was limited to under 40 MPH, but on roads described in a way that it sounded like only freeways would qualify. I think there are lots of cars that can drive well under those conditions, though others won't assume the liability.


It's difficult to say if the person is recoverable, but as long as you continue chest compressions without major interruptions, some blood will continue to circulate and should mostly stave off irreversible brain damage. Even without rescue breaths, there's a massive benefit https://hsi.com/blog/are-rescue-breaths-necessary-during-cpr


I've been running a 3-node k3s cluster on NUC-style machines (actually Ryzen devices) with SUSE MicroOS https://microos.opensuse.org/ for my homelab for a while, and I really like it. They made some really nice decisions on which parts of k8s to trim down and which networking / LB / ingress components to use.

The option to use SQLite in place of etcd on a single-node setup makes it super interesting for even lighter-weight homelab container environments.
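For anyone curious, the SQLite datastore isn't something you have to configure: k3s falls back to it whenever you start a server without any clustering flags. A minimal single-node sketch:

```shell
# k3s uses an embedded SQLite datastore by default when no cluster/datastore
# flags are given, so a single-node server is just the stock install:
curl -sfL https://get.k3s.io | sh -

# Embedded etcd only kicks in if you opt into HA, e.g.:
#   k3s server --cluster-init
```

The install URL and flags above are the stock upstream ones; check the k3s datastore docs for your version before relying on this.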

I even use it with Longhorn https://longhorn.io/ for shared block storage on the mini cluster.

If anyone uses it with MicroOS, just make sure you switch to kured https://kured.dev/ for the transactional-updates reboot method.

I'd love to compare it against Talos https://www.talos.dev/ but Talos's lack of support for a persistent storage partition (only separate storage device https://github.com/siderolabs/talos/issues/4041 ) really hurts most small home / office usage I'd want to try.


Thanks for your perspective.

How has your experience been with Longhorn? Performance, flexibility, issues, maintenance...? I'm interested in moving away from a traditional single-node NAS to a cluster of storage servers. Ceph/Rook seem daunting, and I'd prefer something easy to set up and maintain, that's performant, reliable and scales well. Discovering issues once you're fully invested in a storage solution is a nightmare I'd like to avoid. :)


Ceph is a nightmare if you don’t set it up exactly how the docs say - and in fairness, the docs are excellent.

My advice, having done Ceph/Rook, Longhorn, and now Ceph via Proxmox, is to use the latter, assuming you have access to an actual host. Proxmox-managed Ceph is a dream, and exposing it to VMs and then K8s via RBD is easy.

Longhorn is fairly easy to set up, but its performance is terrible in comparison.


Thanks for the insight. I've tried Proxmox before, and as much as I appreciate what it does, it's mostly a black box. I prefer a more hands-on approach, as long as the docs are comprehensible, and the solution is simple enough to manage without a special degree. Ceph always seemed daunting, and I've mostly ruled it out, which is why these k8s-native solutions are appealing.

Good to know about the performance. This is not critical for my use case, but I guess I'll need to test it out for myself, and see whether it's acceptable.


Ceph is ... well, it's an amazing journey. Especially if you were there when it started, and watched as each release made it more capable & crazier.

From the early days of trying to remember WTF the CRUSH and "rados" acronyms actually meant and focusing on network and storage concerns [0] ... architectural optimizations [1] ... a RedHat acquisition ... adjusting to the NVMe boom [2] ... and then Rook, probably one of the first k8s operators trying to somehow operate this underwater beast in a sane manner.

If you are interested in it ... set it up manually once (get the binaries, write config files, generate keyrings, start the monitors, set up OSDs and RGW, and then you can use FileZilla or any S3 client). For some more production-ish usage there's a great all-in-one docker image https://quay.io/repository/ceph/ceph?tab=tags .
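To give a flavor of what "set it up manually once" entails, here's a rough sketch of bootstrapping a single monitor, loosely following the Ceph manual-deployment docs. The fsid, hostname, IP, and device are placeholders; consult the docs for the full sequence (ceph.conf, admin keyring, etc.):

```shell
# Generate a cluster fsid and a monitor keyring
FSID=$(uuidgen)
ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
  --gen-key -n mon. --cap mon 'allow *'

# Build the initial monitor map and initialize the monitor's data dir
monmaptool --create --add node1 192.168.1.10 --fsid "$FSID" /tmp/monmap
ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
systemctl start ceph-mon@node1

# OSDs are then created per device, e.g.:
ceph-volume lvm create --data /dev/sdb
```

Doing this by hand once makes it much clearer what Rook or cephadm are automating on your behalf.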

[0] oh, don't forget to put the intra-cluster chatter on an internal network, or use a virtual interface and traffic shaping, to have enough bandwidth for replication and restore - and know what to do when PGs are not peering and OSDs are not coming up

[1] raw block device support, called BlueStore, which basically creates a GPT partitioned device, with ~4 partitions, stores the object map in LevelDB - and then later RocksDB

[2] SeaStore, using the Seastar framework, SPDK and DPDK, optimize everything for chunky block I/O in userspace


> I prefer a more hands-on approach ... which is why these k8s-native solutions are appealing

"K8s-native" here implies Rook, which is in no way hands-on for Ceph.

> as long as ... the solution is simple enough to manage without a special degree

Ceph is not simple, that's my point. I assume you've read the docs [0] already; if not, please do so _in their entirety_ before you use it. There are so many ways to go wrong, from hardware (clock skew due to BIOS-level CPU power management, proper PLP on your drives...), to configuration (incorrect PG sizing, inadequate replication and/or EC settings...) and more.

I'm not trying to dissuade you from tackling this, I'm just saying it is in no way easy or simple. Statements like "k8s-native solutions" always make me cringe, because it usually means you want to use an abstraction without understanding the fundamentals.

To be clear, I have read the docs, set it up on my own, and decided I didn't want to try to manage it. I ran a ZFS pool for a few years on Debian and shifted over to TrueNAS Scale last week; not because I was unable to deal with ZFS' complexity (knock on wood, the only [temporary] data loss I ever had was an incorrect `rm -rf`, and snapshots fixed that), but because of continual NFS issues. I may yet switch back; I don't know - I just no longer had the time to troubleshoot the data layer. Ceph makes ZFS look like child's play in comparison.

[0]: https://docs.ceph.com/en/latest/


By "k8s-native" I meant solutions that were built from the ground up with k8s in mind. Rook is built on Ceph, and as such requires knowledge and maintenance of both, which seems much more difficult than managing something like Longhorn.

I think you misunderstood me. I'm not inclined to give Ceph/Rook a try because of its complexity, regardless of the state of its documentation. I was referring to Proxmox in my previous comment, in the sense that I don't need a VM/container manager/orchestrator with a pretty UI. If I'm already running k3s, I can manage the infrastructure via the CLI and IaC, and removing the one layer of abstraction that Proxmox provides is a positive to me. Which is why adding just a storage backend like Longhorn to a k3s cluster seems like the path of least resistance for my use case.

Ultimately, I don't _want_ to deal with the low-level storage details. If I did, I'd probably be managing RAID, ZFS, or even Ceph. For my current NAS I just use a single JBOD server with SnapRAID and MergerFS. This works great for my use case, but I want to have pseudo-HA and better fault tolerance, and experiment with k8s/k3s in the process.

So I'm looking for a k8s-native tool that I can throw a bunch of nodes and disks at, easily configure to serve a few volumes, and that gives me somewhat performant, reliable, and hopefully maintenance-free block- or object-level access. Persistent storage has always been a headache in k8s land, but I'm hoping that nowadays such user-friendly and capable solutions exist that will allow me to level up my homelab.


Ahhh, yes, I misunderstood - sorry!

Also, I feel obligated to ask “are you me?” at your stated homelab journey. I also went from mergerfs + SnapRAID, albeit shifting to ZFS. I also went with k3s, but opted for k3os, which is now dead and thus leaving me needing to shift again (I’m moving to Talos). Finally, everything in my homelab is also in IaC, with VMs built by Packer + Ansible, and deployed with Terraform.

Happy to discuss this more at length if you have any questions; my email is in my profile.


I've run Rook/Ceph, and I run Longhorn right now. I wish I didn't, and I'm actively migrating to provider-managed PVs.

My advice for on-prem is to buy storage from a reliable provider with a decent history of hybrid flash/HDD, so that you can take advantage of storage tiering (unless you just want to go all flash, which is a thing if you have money).

If you must use some sort of in-cluster distributed storage solution, I would advise you to exclude members of your control plane from taking part, and I would also dedicate entirely separate drives and volumes for the storage distribution so that normal host workload doesn't impact latency and contention for the distributed storage.


Good points, thanks. What makes you wish you didn't use Rook/Ceph/Longhorn?

In a professional setting, and depending on scale, I'd probably rely on a storage provider to manage this for me. But since this is for my homelab, I am interested in a DIY solution. As a learning experience, to be sure, but it should also be something that ideally won't cause maintenance headaches.

Keeping separate volumes makes sense. I can picture three tiers: SSDs outside of the distributed storage dedicated to the hosts themselves, SSDs part of distributed storage dedicated to the services running on k3s, and HDDs for the largest volume dedicated to long-term storage, i.e. the NAS part. Eventually I might start moving to SSDs for the NAS as well, but I have a bunch of HDDs currently that I want to reuse, and performance is not critical in this case.


>Good points, thanks. What makes you wish you didn't use Rook/Ceph/Longhorn?

It seems like my volumes are constantly falling into a degraded state and then rebuilding. Resizing volumes requires taking the attached workload down, and then it seems to take forever (15m+) for my clusters to figure out that the pod is gone and a new pod is trying to attach.

Really, it's a PITA and all of the providers' storage classes seem better than Longhorn. Ceph I had less experience with but very similar problems - long-gone pods held a lock on PVCs that had to be manually expunged, or wait for a very long timeout.


I've had similar issues with Mayastor (another in-cluster storage solution). It's under heavy development, so I've assumed the more mature options were better.

I'm working on v2 of my homelab cluster, and I'm going with plain old NFS to a file server with a ZFS pool. Yes, I will have a single node as a point of failure, but with how much pain I've had so far I think I'll be coming out ahead in terms of uptime.


I can't speak to performance because the workloads aren't really intense, but I run a small 3 node cluster using k3s and Longhorn and Longhorn has been really great.

It was easy to set up and has been running reliably with very minimal maintenance since.


If you use Longhorn, make sure to enable the network policies when installing the helm chart. For some odd reason, these are disabled by default, which means ANY pod running on your cluster has full access to the Longhorn manager, API, and all your volumes.

https://github.com/longhorn/charts/blob/v1.5.x/charts/longho...
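For reference, turning them on at install time should look something like this. The `networkPolicies.enabled` value is what the v1.5.x chart exposes; double-check the values.yaml for your chart version:

```shell
# Install Longhorn with its bundled NetworkPolicies enabled,
# restricting in-cluster access to the manager/API
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace \
  --set networkPolicies.enabled=true
```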


I wouldn't really treat it as a replacement for a NAS, mostly only for the container workloads running on kubernetes itself... Ideally, any apps you develop should use something more sane like object storage (Minio etc) for their data.

I push it pretty minimally right now, so no great performance testing myself, and I do run it in synchronous mode, so its write performance is likely going to be limited by the 1 Gbps network it syncs over.


Funny: I've been running a Talos cluster for the past six months, and just today decided to look into k3s. Talos has a lot of really nice things, but I have found that the lack of shell access can be frustrating at times when trying to troubleshoot.


Curious, what have you run into that you couldn’t troubleshoot with a privileged pod with host networking?
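For anyone unfamiliar with the technique being referenced: kubectl can drop an ephemeral debug pod onto a node that shares the host's network/PID namespaces and mounts the host filesystem at /host. The node name below is hypothetical:

```shell
# Ephemeral debug pod on a specific node; host root ends up at /host,
# and host network/PID namespaces are shared by default
kubectl debug node/talos-worker-1 -it --image=alpine:3.19

# Inside the pod, e.g.:
#   chroot /host   (not useful on Talos, but /host is still browsable)
```

Of course, as the thread notes, this only works while the k8s API itself is reachable.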


I had a situation where the etcd cluster got hosed, making it basically impossible (at least with the ways I know) to interact with the k8s API at all. So I didn't have any way to get a privileged pod running.


Ah gotcha. Haven’t had to deal with that yet. Maybe possible to add a static pod via the machine config? But yeah it’s basically throwing out a bunch of linux admin muscle memory.


Kudos to you. I feel like setting things up on real hardware is somehow needed in order to make things concrete enough to fully understand. At least for me (I fully admit this may be a personal flaw), working with a VM in the cloud is a little too abstract - even though eventually this is where things will land.


How did you go about deploying k3s to MicroOS? Did you go the route of installing it via combustion and a systemd unit?[0]

To me it seems strange that a systemd unit is used, but I didn't know if I was missing something about the way MicroOS worked.

[0]: https://en.opensuse.org/SDB:K3s_cluster_deployment_on_MicroO...


Re: Talos persistent storage, why not run it as a VM and pass in block devices from the hypervisor? You also then gain the benefit of templated VMs that you can easily recreate or scale as needed.


Another really nice immutable Linux system that I'm using is VyOS. It's targeted primarily as a router OS, but you can now run containers on it, which makes it pretty versatile.

Basically, it's an image based OS that configures everything from a single config file on boot. https://docs.vyos.io/en/latest/introducing/about.html


Same thing for OpenWrt, I think. I believe it works by using a read-only squashfs plus a tmpfs, with overlayfs layering the writable tmpfs on top of the read-only filesystem. But I'm not sure that fits the definition used here for an immutable OS:

> We could say that a Linux LIVE-CD is immutable, because every time you boot it, you get the exact same programs running, and you can't change anything as the disk media is read only. But while the LIVE-CD is running, you can make changes to it, you can create files and directories, install packages, it's not stuck in an immutable state.
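An illustrative sketch of that layering (paths follow OpenWrt convention, where the squashfs image appears at /rom and the writable layer at /overlay; exact mounts vary by build):

```shell
# Conceptually, OpenWrt's root is an overlayfs merge of a read-only lower
# layer (/rom, the squashfs) and a writable upper layer (/overlay, which is
# jffs2 on flash, or tmpfs before the flash layer is initialized):
mount -t overlay overlayfs:/overlay \
  -o lowerdir=/rom,upperdir=/overlay/upper,workdir=/overlay/work /mnt
```

A factory reset is then just wiping the upper layer, which is arguably the "immutable-ish" property being discussed.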


Quad9 was founded by IBM, PCH (Packet Clearing House - they operate a lot of smaller countries' authoritative dns), and GCA (Global Cyber Alliance), but it's a standalone Non-Profit. https://quad9.net/about/

I prefer to use them over Google and Cloudflare since their whole non-profit mission is DNS.

They also offer a 9.9.9.10 service if you don't want the "Security block list" (I don't like blacklists I don't control).
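If you want to compare the two endpoints yourself, a quick check with dig (from bind-utils/dnsutils) looks like this; a domain on the security blocklist should fail on 9.9.9.9 but resolve on 9.9.9.10:

```shell
dig @9.9.9.9  example.com +short   # filtered: known-malicious domains blocked
dig @9.9.9.10 example.com +short   # unfiltered: no security blocklist
```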


I'm a little late to this post, but found a talk about this projection by the creator (good English subtitles): https://www.youtube.com/watch?v=YsQtLASlDKE


> I can only think of one area that the state leaves alone, religion

I actually wouldn't say that... Consider how we've munged together religion and laws relating to Marriage.

