I've never liked the directory.d/* infrastructure. In so many cases, even with a properly configured sshd_config, the resulting configuration file is not so large that it benefits from being split up.
You have to deal with ordering issues, symlink management in some cases, and unless the "namespace" of sorting number prefixes is strictly defined, it's never something that's convenient or durable to "patch" new files into. The proliferation of 99_* files shows the anti-utility this actually provides.
I much prefer configuration files with a basic "include" or "include directory" configuration item. Then I can scope and scale the configuration in ways that are useful to me, rather than through some fragile distribution-oriented mechanism. Aside from that, after xz, I don't think I want my configurations "patchable" in this way.
Config directories are there to solve change management problems like idempotency.
If you have one big file, then different tools, or even the same tool at different points of its life cycle, can leave old config not correctly removed, apply new config multiple times, or even corrupt the file entirely.
This isn't an issue if you're running a personal system where you hand-edit those config files. But when you have fleets of servers, it becomes a big problem very quickly.
With config directories, you then only need to track the lifecycle of files themselves rather than the content of those files. Which solves all of the above problems.
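That file-lifecycle model can be sketched in a few lines of shell (the directory and option names here are illustrative, not from any real deployment): enabling a setting means creating a drop-in, disabling means removing it, and both operations are naturally idempotent.

```shell
#!/bin/sh
# Illustrative drop-in lifecycle: the state is the presence of a file,
# not a line buried somewhere inside a monolithic config.
set -eu

CONF_D=$(mktemp -d)   # stand-in for e.g. /etc/ssh/sshd_config.d

enable_feature() {
    # Idempotent: writing the same file twice leaves the same state.
    printf 'PasswordAuthentication no\n' > "$CONF_D/50-no-passwords.conf"
}

disable_feature() {
    # Idempotent: removing an absent file is not an error with -f.
    rm -f "$CONF_D/50-no-passwords.conf"
}

enable_feature
enable_feature      # applying twice is harmless
disable_feature
disable_feature     # removing twice is harmless too
```

The point of the sketch: the management tool never has to parse or diff config content, it only has to create or delete whole files.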
I never managed a fleet. I mean I occasionally manage up to 30 instances, does that count?
Either way, my notion of doing it properly is to have a set of scripts (ansible/terraform?) that rebuild the configuration from templates, rewrite it, and restart everything. Afaiu, there's no "let's turn that off by rm-ing and later turn it on again by cat<<EOF-ing", cause there's no state database that could track it, unless you rely on [ -e $path ], which doesn't feel very straightforward for e.g. state monitoring.
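A minimal sketch of that rebuild-from-templates model (the placeholders, variable names, and paths are invented for illustration): render the whole file from a template plus per-host variables, then swap it in atomically so nothing ever reads a half-written config.

```shell
#!/bin/sh
# Hypothetical template rebuild: the rendered file is always a pure
# function of the template and the host variables; no in-place edits.
set -eu

WORK=$(mktemp -d)

# Per-host variables (in real life: one spec file per instance).
HOSTNAME_VAR=web01
PORT_VAR=2222

# Template with placeholders.
cat > "$WORK/sshd_config.tmpl" <<'EOF'
# Generated file; edit the template, not this output.
Port @PORT@
# host: @HOSTNAME@
PasswordAuthentication no
EOF

# Render: substitute placeholders into a temp file, then move it into
# place; mv within the same filesystem is atomic, so readers never see
# a partially written config.
sed -e "s/@PORT@/$PORT_VAR/" -e "s/@HOSTNAME@/$HOSTNAME_VAR/" \
    "$WORK/sshd_config.tmpl" > "$WORK/sshd_config.new"
mv "$WORK/sshd_config.new" "$WORK/sshd_config"
```

Re-running the build with the same inputs always yields the same output, which is exactly the "no hidden state" property being argued for.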
(I do the same basically, but without ansible. Instead I write a builder script and then paste its output into the root shell. Poor man's ansible, but I'm fine.)
So as I understand it, these dirs are only really useful for manual management, not for fleets where you just re-apply your "provisioning", or whatever the proper term is, onto your instances, without temporary modifications. When you have a fleet, any state that is not in the "sources" becomes a big problem very quickly. And if you have "sources", there's no problem of turning section output on and off. For example, when I need a change, I rebuild my scripts and re-paste them into the corresponding instances. This ensures that if I lose any server to a crash, hw failure, etc, all I have to do is rent another one and right-click a script into its terminal.
So I believe that gp has a point, or at least that I don't get the rationale that the replies to gp suggest in this thread. It doesn't feel that important for automatic management.
I've found the template system starts to show shortcomings if you need differences between different classes of systems within your fleet (e.g. VDIs for data scientists vs nodes in an ML pipeline pool).
Yeah, a good templating language will allow you to apply conditionals, but honestly, it's just generally easier to have different files inside a config directory than to manage all these different use cases in a single monolithic template, or across multiple templates whose shared components you need to remember to keep in sync.
At the end of the day, we aren’t talking about solving problems that were impossible before but rather solving problems that were just annoying. It’s more a quality of life improvement than something that couldn’t be otherwise solved with enough time and effort.
Although I think all of that sounds very Ansible, and that's exactly why I avoid using it. It is a new set of syntax, idioms, restrictions, gotchas, while all I need is a nodejs script that joins initialization template files together, controlled by json server specs. With first-class ifs and fors, and without the "code in config" nonsense. I mean, I know exactly what was meant to be done manually; I just need N-times parametrized automation, right? Also, my servers are non-homogeneous as well, and I don't think I ever had an issue with that.
It's not just Ansible. I've run into this problem many times using many different configuration systems. Even in the old days of hand-rolling deployment scripts in Bash and Perl.
Managing files instead of one big monolithic config is just easier because there’s less chance of you foobarring something in a moment of absentmindedness.
But as I said, I’m not criticising your approach either. Templating your config is definitely a smart approach to the same problem too.
Not sure if I understand, because my templates are also multi-file and multi-js-module, where needed. It's only the result that is a single root-pasteable script (per server) which generates a single /etc/foo/foo_config file per service. So I think I'm lost again about changes.
You don't want instance-local changes. Do you? Afaiu, these changes are an anti-pattern cause they do not persist in case of failure. You want to change the source templates and rebuild-repropagate the results. Having ./foo.d/nn-files is excessive and serves little purpose in automated mode, unless you're dealing with clunky generators like ansible where you only have bash one-liners.
You’re not missing anything. A lot of the problem is due to clunky generators.
But then those clunky generators do solve different problems too. Though I'm not going to debate that topic here right now, beyond saying that no solution is perfect, and thus choosing a tech stack is always a question of which tradeoffs you want to make.
However, on the topic of monolithic config files vs config directories, the latter does provide more options for how to manage your config. So even if you have a perfect system for yourself which lends itself better to monolithic files, that doesn't mean config directories don't make life a lot easier for a considerable number of other system configurations.
The .d directories are important on Debian and Ubuntu where packaging needs to provide different snippets based on the set of installed packages, the VM environment, other configuration inputs like through cloud-init and so forth, and update them during upgrades, but also (as per policy) preserve user customisations on anything in /etc.
Since pretty much every file has different syntax, this is virtually impossible to do any other way.
The conf.d isn’t because the config file is large. It’s because it’s easier to disable or enable something with an “echo blah > conf.d/10-addin.conf” or an “rm conf.d/50-blah.conf” than it is to do sed -i or grep blah || echo blah >>
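The difference in effort shows up even in a toy example (the file and option names here are invented): the append-to-one-file variant needs a guard or it duplicates lines on every run, while the drop-in variant is idempotent by construction.

```shell
#!/bin/sh
set -eu
DIR=$(mktemp -d)
CONF="$DIR/app.conf"
mkdir "$DIR/conf.d"
printf 'LogLevel info\n' > "$CONF"

# In-place append: the classic "grep || echo >>" guard, run twice to
# simulate a tool re-applying its config. Easy to get subtly wrong
# (anchoring, whitespace, commented-out copies of the same line...).
grep -q '^UseFeature yes$' "$CONF" || printf 'UseFeature yes\n' >> "$CONF"
grep -q '^UseFeature yes$' "$CONF" || printf 'UseFeature yes\n' >> "$CONF"

# Drop-in: writing the same file twice cannot duplicate anything,
# and no guard is needed at all.
printf 'UseFeature yes\n' > "$DIR/conf.d/10-feature.conf"
printf 'UseFeature yes\n' > "$DIR/conf.d/10-feature.conf"

grep -c '^UseFeature yes$' "$CONF"                  # prints 1, thanks to the guard
grep -c 'UseFeature' "$DIR/conf.d/10-feature.conf"  # prints 1, by construction
```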
Exactly: if your templating logic accidentally produces a syntax error, now you can't log in to SSH. There's much less chance of that scenario with include directories. This applies for infrastructure as code scenarios, changes made by third party packages, updates of ssh, manual one-off changes, etc.
If any logic produces a syntax error anywhere in the sshd_config include chain, ssh is broken now. And you will have templating logic in automatic configuration one way or another, at least for different dns/ips.
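Whichever layout you generate, the lockout risk can be reduced by validating the candidate config before reloading: sshd's real test mode (`sshd -t -f <file>`) parses a config file and exits non-zero on a syntax error. A sketch, with the candidate path being a temp file for illustration:

```shell
#!/bin/sh
# Validate a candidate sshd config before installing it: `sshd -t -f`
# parses the file and exits non-zero on any syntax error, so a broken
# config never replaces the working one.
set -eu

CANDIDATE=$(mktemp)
printf 'Port 22\nPasswordAuthentication no\n' > "$CANDIDATE"

if command -v sshd >/dev/null 2>&1; then
    if sshd -t -f "$CANDIDATE" 2>/dev/null; then
        echo "config parses, safe to install and reload"
    else
        echo "config rejected, keeping the old file" >&2
    fi
else
    echo "sshd not installed here; skipping the check"
fi
```

(Note that `sshd -t` may also complain about missing host keys when run as an unprivileged user, so in practice this check runs on the target host as root, right before the reload.)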
I don't grep this argument at all. It feels like everyone's comparing to that "regular [bad] detergent" in this thread. A templating system will be as good and as error-prone to change and as modular etc as you make it, just like any program.
It applies only to local patchers (like e.g. certbot's nginx plugin) and manual changes, but that's exactly out of scope of templating and configuration automation. So it can't be better, cause these two things are in an XOR relationship.
Edit to clarify: I don't disagree with foo.d approach in general. I just don't get the arguments that in automation setting it plays any positive role, when in fact you may step on a landmine by only writing your foo.d/00-my. Your DC might have put some crap into foo.d/{00,99}-cloud, so you have to erase and re-create the whole foo.d anyway. Or at least lock yourself into a specific cloud.
It's still possible to break the config with a syntax error, but there are fewer kinds of syntax errors possible if you aren't writing into the middle of an existing block of syntax. For example, there's no chance that you unintentionally close an existing open block due to incorrect nesting of options or anything like that.

Plus, if you are writing into the middle of an existing file, there's a chance you could corrupt other parts of the file besides the part you intended to write. For example, if you have an auto-generated section that you intend to update occasionally, you will need to make sure you only delete and recreate the auto-generated parts and don't touch any hand-written parts, which could involve complicated logic with sentinel comments, etc. Then you need to make sure that users who edit the file in future don't break your logic.

In addition, it's harder to test your automation code when you're writing into an existing file, because there are more edge cases to deal with regarding the surrounding context.
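That sentinel-comment dance looks roughly like this (the markers, file contents, and option names are invented), and every step is a place where hand-written lines can get caught in the blast radius:

```shell
#!/bin/sh
# Replace only the auto-generated region between sentinel comments,
# leaving hand-written lines outside the markers untouched.
set -eu
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# hand-written
LogLevel info
# BEGIN AUTOGEN
StaleOption yes
# END AUTOGEN
# more hand-written
MaxStartups 10
EOF

# Delete the old generated block (markers included), then append a
# fresh one at the end; a real tool would preserve its position.
sed -i.bak '/^# BEGIN AUTOGEN$/,/^# END AUTOGEN$/d' "$CONF"
{
  printf '# BEGIN AUTOGEN\n'
  printf 'FreshOption yes\n'
  printf '# END AUTOGEN\n'
} >> "$CONF"
```

Note how much has to go right: the marker regexes must match exactly, no user may ever edit or delete a marker line, and the range delete silently eats everything if the END marker goes missing. A drop-in file makes all of that machinery unnecessary.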
Templating doesn't write in the middle. Writing in the middle is a medieval rudiment of manual configuration helpers. Automated config generation simply outputs a new file every time you run "build" and then somehow syncs it to the target server. All "user" changes go into templates, not outputs. What you're talking about can exist, but it is a mutable hell that is not reproducible and thus cannot be a part of a reliable infrastructure.
If this is not how modern devops/fleet management works, I withdraw my questions cause it's even less useful than my scripts.
1. They add huge configuration files where 99% of the content is commented out.
2. Sometimes they invent whole new systems of configuration management. For example, Debian does that with Apache httpd.
I don't need all of that. I just need a simple 5-line configuration file.
My wish: ship an absolutely minimal (yet secure) configuration. Do not comment anything out. Ask users to read the manuals instead. Ship your configuration management systems as separate packages for those who need them. Keep it simple by default.