
Are you used to downloading a zip or tar.gz from upstream, full of bundled dependencies?

I don't like that.

Since this is open source and I can get the source, I'll just download a full OS image or containerized binaries from third parties. Mmmm, so spectacular.

Apps running in chroots? We've had that since the 90s; the relaxed security model of Xorg and its client/server architecture was there already, and it was a reason to leave Windows 3.11/95 behind.

But marketing campaigns have a spectacular effect in the current money-driven culture. Yes. Their network effect is spectacular, I would say.

Edited: missing question mark



Subuser is not a revolutionary piece of software. You are right that it is a LOT like a zip or tar.gz file full of bundled dependencies. It is, however, a big improvement on just that. Subuser contains the subusers, so they cannot mess with the rest of your system, making it a lot easier to trust those bundled dependencies. It also provides some rather primitive update mechanisms for those bundled dependencies. But its main advantage is the containment and the "blank slate".

In the future, I would like to expand on the vision to include the ability to do deduplication, so that all those zip files with bundled dependencies take up less space. And deduplication, done right, will save bandwidth too. Indeed, when you have these "deduplicated zip files", with deduplication done algorithmically, you'll save even MORE space and MORE bandwidth than with a traditional dependency-resolving package manager. Take Debian, for example. Debian uses packages. These packages have dependencies. The dependencies are shared. This saves space over a case where you have a bunch of zip files with unshared dependencies. But when you update a dependency, even if only one line of one file has changed, that dependency must be downloaded completely anew. With algorithmic deduplication, updating one line of one file means you only have to download one line of one file anew.

Space savings are there for a similar reason.
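
To make the "algorithmic deduplication" idea concrete, here is a toy sketch in Python (my own illustration of the concept, not subuser code): chunk boundaries are chosen by a rolling hash of the content, so editing one line only changes the chunk(s) around the edit, and a client that already has the old chunks only needs to fetch the new ones:

    import hashlib

    def chunks(data, mask=0xFF):
        """Split bytes into chunks, cutting where a toy rolling hash hits a magic value."""
        out, start, h = [], 0, 0
        for i, b in enumerate(data):
            h = ((h << 1) ^ b) & 0xFFFFFFFF  # rolling hash over the current chunk
            if (h & mask) == mask or i == len(data) - 1:
                out.append(data[start:i + 1])
                start, h = i + 1, 0
        return out

    def chunk_ids(data):
        # Content-addressed chunk names: identical chunks get identical ids.
        return [hashlib.sha256(c).hexdigest() for c in chunks(data)]

    old = b"line one\nline two\nline three\n" * 100
    new = old.replace(b"line two", b"line 2!", 1)  # edit a single line

    missing = set(chunk_ids(new)) - set(chunk_ids(old))
    print(len(missing), "of", len(set(chunk_ids(new))), "chunks must be fetched anew")

Only the handful of chunks overlapping the edit get new hashes; everything else deduplicates against what you already have, which is where both the space and the bandwidth savings come from.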


It is revolutionary - if you manage to build the ecosystem. Remember that Docker itself was not considered revolutionary ("yawn... use LXC").

I would really recommend you look at nix, zeroinstall, click packages and http://0pointer.net/blog/revisiting-how-we-put-together-linu...


zeroinstall and click are interesting. But zeroinstall doesn't do anything on the security front. Click is rather confusing; I haven't found a webpage for it yet, or any information about how to install and use a click package on Debian (I think that this is because it's not possible/supported). However, I don't consider nix or Red Hat's various efforts (they have announced a new universal package format every year or two for a decade now) to be very serious. The problem with these efforts is that they always want to impose some opinion on HOW things should be packaged. And I don't think that is useful as a global standard.


You should really Kickstarter this project if you're serious. I think this can really be something if pushed hard. For all the hate it gets, systemd is a one-man effort that changed Linux. And so was git.

There are people out there who would love to financially support this if you ask and have a good sense of what you want to do. IMHO your post above (about zeroinstall and click) is jumping the gun.

Would love to see what you come up with... once the excitement has died down ;)


I keep wondering how much traction systemd would have gotten without having a long-standing project like udev latched onto it (never mind the ConsoleKit replacement, logind).

I'm just waiting for some project to be wholly dependent on the existence of networkd or some such...


0pointer... after breaking half of the internet... and needing to patch the other half... now silently releases a PID 2 (see the release notes for the last release), because people were right about the architecture.

Fortunately, I'm still free to use my own non-market-targeted configuration management tool, written by myself, to pick how I boot my personal systems.

Sorry, I won't be buying any more blog posts from that domain.


> to include the ability to do deduplication

But... Won't it be an ugly kludge?

From what I understood, Subuser introduces the duplication problem just because of the way it works. It didn't exist with traditional package management when done right (because shared components are packaged separately), and it didn't exist with Docker (because it uses layers). If software (any software) essentially adds some issue and then tries to fight it off, it's very likely that things will be ugly in an architectural sense.

> But when you update a dependency, even if only one line of one file has changed, that dependency must be downloaded completely anew

Are you aware of debdelta?


> and it didn't exist with Docker (because it uses layers)

There's a big spectrum of 'solving the deduplication problem' and Docker is towards the 'when all you have is a sledgehammer' end.

Saying "you can manually arrange Dockerfiles so that they cooperate and share layers" is not solving things in an architectual sense! You essentially have to be using a single custom tool (Dockerfiles) to create your images and then you need to apply thinking power to consider how best to arrange your images (e.g. having a 'base' package.json to install a bunch of things common to many apps, then additional an additional package.json per-app).

It's getting better with 1.10 (layer content is no longer linked to parent layers, so ADDing the same file in different places should reuse the same layer), but it's still pretty imperfect. I created https://github.com/aidanhs/dayer to demonstrate the ability to extract common files from a set of images and use them as a base layer, which is another improvement. Even better would be a pool of data split with a rolling checksum, like the one bup creates - short of domain-specific improvements (e.g. LTO when compiling C), I think this is probably the best architectural thing you can do.
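
For the curious, the common-file extraction idea can be sketched in a few lines of Python (a toy illustration of the concept, not dayer's actual code): hash every file in each unpacked image tree, and the files that are identical across all of them are candidates for a shared base layer:

    import hashlib
    import os

    def file_hashes(root):
        """Map relative path -> content hash for regular files under root."""
        hashes = {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path) and not os.path.islink(path):
                    rel = os.path.relpath(path, root)
                    with open(path, "rb") as f:
                        hashes[rel] = hashlib.sha256(f.read()).hexdigest()
        return hashes

    def common_files(image_roots):
        """Files with the same path AND content in every unpacked image."""
        per_image = [set(file_hashes(r).items()) for r in image_roots]
        return sorted(path for path, _ in set.intersection(*per_image))

    # e.g. common_files(["./img-a-rootfs", "./img-b-rootfs"]) -> candidate base layer

A rolling-checksum pool goes one step further, deduplicating within files rather than only between byte-identical ones.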


Oh, sorry. Yes. I think I'm with you on this. I didn't mean to comment on the quality of this approach. I just meant that Docker has its means of avoiding duplication by having shared base layers - so we can say the problem [mostly] didn't exist in the systems Subuser started from - but it's another matter entirely whether the approach it takes is good or not.


No, algorithmic deduplication is not a kludge! It is the opposite of a kludge. It is a beautiful way of letting the computer do hard work for you! Rather than trying to deduplicate things by hand (aka traditional dependency management), you let the computer do it for you :)


Oh, no, I guess I wasn't clear on what I meant. Sorry.

Data compression (which deduplication is essentially a subset of) absolutely isn't a kludge. I meant that introducing the duplication (by design) and then inventing some workaround to get back to square one - that's the suspicious part. It adds complexity where there could be none.


Square one being, in your opinion, cramming all of the dependencies together in one place? There is a big advantage to immutable dependencies, though... Your code NEVER breaks. If the dependencies are immutable, and the architecture stays the same, then your code will run. Of course, you can still use apt-get or another package manager to build your images, so the whole updating-of-dependencies thing is no worse than where you started. It's just that you have the option of not changing things, and not breaking things as well.


I must be misunderstanding something.

Isn't it Subuser that crams all the dependencies together, in one place (an image)? Then there was that proposal to deduplicate images, in the sense that shared files are automatically detected and stored only once, saving disk space. But then, isn't Subuser completely unaware of any metadata a particular file may have, so it can't really tell the difference between libz and libm, or know that all those binary-different libpng16.so files have 100% compatible ABIs and are interchangeable (while not all libpng12.so files do)?

From what I understood, Subuser is a package manager (plus permission manager) that doesn't know a thing about what it's packaging - only the large-scale image.

Package managers keep all libraries separate; that's the whole reason package managers were invented in the first place. If a package manager uses some database, dependencies may lie together in the filesystem, but they're completely separate in the package manager's database. If the package management system uses the filesystem as its database, then packages are separate in that regard, too. There's immutability as well, sometimes enforced, sometimes along the lines of storing `dpkg -l | grep '^ii' | awk '{ print $2 "=" $3 }'` output to save the exact state.


I will not trust some binary logic that interacts with my data just because it runs with a different numeric UID on my system. That's the point.

People are calling operating system images, pulled from third-party repositories under certain terms, "containers".

If we leave all that behind, then yes, it's better if your Firefox cannot read your SSH private key, for example. But I don't need Docker for that, sorry.

I only need a third-party blob when I don't want to embrace the PostgreSQL source code and its manual, which are free, and make myself more free tomorrow.

But the... how many millions was it?... dollar campaign behind certain technologies (and providers) has more power than any thinking, when it comes to network effects.


Running something in subuser is no more of a binary blob than installing it through apt-get. Indeed, it uses apt-get by default! https://github.com/subuser-security/subuser-default-reposito...


Your needs and point-of-view are pretty specialised. But just because you don't need to use it doesn't mean no one else could benefit.


You realize that I (the author of subuser) literally live off of money I beg from my parents? I get NO money from developing this. As for Xorg security, I've worked on that: http://subuser.org/news/0.3.html#the-xpra-x11-bridge


So you put your effort into market-trending topics.

I can understand.

A money-driven culture does not exclude those who don't have the money; they are simply part of the game.

This doesn't solve any problem that I currently have, or that I cannot solve using tools I've already tested in my own usage, but I hope you had fun writing the software and sharing it.


Is there a better forum for "truly free software"? I honestly don't know of one, and I would love to find it if it exists. Personally, I too am dissatisfied with the commercial aspects of startupy-venture-opensourcedom. There has come to be a great deal of dishonesty and tension. The itch that I'm scratching here is, for example, the problem of the FreeCAD project, which is really hard to build and run from source, as it uses some non-standard runtime dependencies. The typical solution has been to have all the developers run Ubuntu. Subuser should make things better, because we can run Ubuntu in subuser and then run subuser everywhere... This is better than using virtual machines due to the performance and integration requirements of the software at hand.

The other thing that is good about subuser is the isolation that it provides. If you download some new FreeCAD plugin from some random person, and that person made a mistake and their plugin damages your system, then that sucks. But subuser can help contain that damage. It also offers some protection against malicious software. Short of a kernel bug, it's pretty hard to break out of subuser's containers. So if you EVER download development versions of software from non-vetted sources, you should consider installing and running that software through subuser. It will make you safer, and if enough people do it, and make subuser images, then it'll also save everyone in the free software world a lot of time :)


Hey, timthelion, sorry if any comment was harsh.

I did take a closer look at your software, even though I avoid Docker, and it looks nicely engineered and documented.

Congrats. Really. Thanks for your effort and for sharing it.

And so as not to leave just another comment, regarding the apt-get stuff in other comments: I've been building my packages and repos the same way since the 90s... but I was not an early Docker adopter, because I was unable to run private registries. I cannot compare them, sorry :-)

Have a nice day, and sorry for hijacking your nice project with my docker rage.



