
If an AI model is allowed to emit copyleft code verbatim in proprietary software, you can effectively create a GPLv3 stripper. I don't think that ultimately serves your goal of intellectual sharing.


Independent of whether the virus was lab-leaked or natural in origin, the conduct of Peter Daszak was completely inappropriate, and he should face consequences.

Spearheading and cosigning the statement "We stand together to strongly condemn conspiracy theories suggesting that COVID-19 does not have a natural origin," when you have a specific career and financial interest in that statement being true, is a conflict of interest, is not scientific, and should be addressed publicly. But then privately saying "you, me and him should not sign this statement, so it has some distance from us and therefore doesn't work in a counterproductive way," and adding, "We'll then put it out in a way that doesn't link it back to our collaboration so we maximize an independent voice," displays clear intent to manipulate public opinion. Unbelievable.


The first rule of ditch digging is to stop digging ditches.

I love my hometown, but it was a slog wading through all the people who ultimately wanted to keep me under their thumb. Many of them were so close that I just couldn't see it until a lot of damage had been done. Everyone has feelings of jealousy, whether they are conscious of it or not, and the closer you are, the more likely it is you will be competing over resources; the conflicts of interest are just inherent.

They told me to work hard, but their idea of working hard was often just making the same damn mistake over and over again, as long as no one had to admit they were wrong, as long as authority was respected and hierarchy maintained. Whether your brow is high or low, I think you just have to learn to break free from those people, and learn to think for yourself, because they are everywhere, because it is an intrinsic consequence of social organization.


I absolutely agree that we need to move away from the idea of physical folders. The file hierarchy IS a very useful way of categorizing your data, but a file should be able to belong to multiple categories. That's why we have hard links, but I digress.

I wrote an Emacs package called SFS (Search File System) to solve exactly this set of problems. It is just an interface on top of the excellent Recoll full-text indexer. Among other things, SFS allows you to create hierarchies of queries, the search analog of a folder hierarchy. A parent "directory" is just the logical OR of all the named queries it contains. Everything works great for me, but the installation process is a bit painful (especially cross-platform), and indexing can initially take a serious amount of time and space, so I would say it's still a ways from being really comfy to use. But it is totally possible to have your own little Google for your file system. I have GIFs on the GitHub page to at least give you an idea:
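The query-hierarchy idea is easy to sketch outside of Emacs. Here's a minimal, hypothetical Python model (none of these names come from SFS itself; the query syntax is just illustrative) where a parent node's effective query is the OR of its children's:

```python
# Toy model of a query hierarchy: a parent "directory" node's
# effective query is the logical OR of its children's named queries.
class QueryNode:
    def __init__(self, name, query=None, children=None):
        self.name = name            # display name for the node
        self.query = query          # leaf nodes carry a concrete query string
        self.children = children or []

    def effective_query(self):
        """Leaf: its own query. Parent: OR of all child queries."""
        if not self.children:
            return self.query
        parts = [c.effective_query() for c in self.children]
        return "(" + " OR ".join(parts) + ")"

# A "projects" directory containing two named queries:
tree = QueryNode("projects", children=[
    QueryNode("emacs", query='ext:el AND "use-package"'),
    QueryNode("kernel", query="ext:c AND runtime_pm"),
])
print(tree.effective_query())
# Listing "projects" then just means running that OR-query
# against the full-text index.
```

In this model, "opening a directory" is a search operation, which is the whole point.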

https://github.com/Overdr0ne/sfs

Enjoy


Hard links prove quite fragile (many tools will remove the link then replace it with a new unlinked file) and occasionally dangerous.

Tagging may work better, though you'd need a tags-aware toolchain to work with them if the metadata are associated with the filesystem. An external tagstore might be resilient but could find itself out-of-sync with filesystem state.


Hmm, do you have some example of these dangerous scenarios? I suspect a lot of tools have come to think of the "file path" as the identifier for a file. So of course they would break if that classification were to be reorganized, like if two parent categories were to swap. I would call that an abuse of the file system though. But the status quo is what it is. Most file systems already have lots of other metadata built in that can be used to access the inode or whatever you call your data structure. My point is, accessing data in a more general case is a search operation.

As far as an external tag store, that is basically what a search index is. And Recoll is full-text, so each file has a shit-ton of tags associated with it. You then just pass the -m flag to the indexer, and it monitors for file modifications, and updates the index accordingly. I have not noticed a significant performance impact there. Mostly just the initial index operation sucks.


Some I know of, some I'm presuming, and there are all but certainly others.

Hardlinked directories create all kinds of mischief. That's the principal issue. It's often entirely disabled. Recursive directory trees are all kinds of fun. (More so than even the symlinked version.)

Given that a hardlink exists, a tool that operates by 1) removing the file (deleting the local directory entry for the hardlinked inode), 2) creating a new file (same name, new inode, not hardlinked), and then 3) populating it with new content creates the problem of a presumably identical hardlink existing where that's no longer the case.
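That delete-then-recreate failure mode is easy to demonstrate in a few lines of Python (filenames invented for the example):

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "config.txt")
link = os.path.join(d, "config-link.txt")

with open(orig, "w") as f:
    f.write("v1")
os.link(orig, link)  # hardlink: both names point at the same inode
assert os.stat(orig).st_ino == os.stat(link).st_ino

# A "safe save" tool that removes and recreates the file, as many
# editors do, silently detaches the hardlink:
os.unlink(orig)                      # 1) drop the directory entry
with open(orig, "w") as f:           # 2) same name, brand-new inode
    f.write("v2")                    # 3) write the new content

assert os.stat(orig).st_ino != os.stat(link).st_ino
print(open(link).read())  # still "v1": the link never saw the update
```

Editors that write to a temp file and rename it over the original produce the same silent detach, since rename also replaces the directory entry.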

Hardlinks with relative directory references will reference different files, or configurations, or executables, or devices, from different points on the filesystem.

... or within different filesystem chroots.

Hardlinks might be used to break out of a chroot or similar jail. A process which could change the hardlink could affect other processes outside the jail.

As for tags: these are generally not the same as what most people have in mind as a full-text index, or at the very least they're a special class of index. I'm thinking of a controlled vocabulary, generally instantiated as RDF triples, though folksonomies and casual tagging systems are also often used.

The problem occurs when you've got a tagged data store that's being modified by non-tag-aware tools. There are reasons why that might be permitted and/or necessary, though also problematic. My sense is that robust tagging probably needs implementing at the filesystem level.


Yeah, not a fan of recursive directory trees. Sysfs, for example, is pretty wonky, especially when you're searching for some specific attribute of a device. Those aren't hard links or real files, for that matter, but it's the same idea. Hard-linked directories break the category system.

Now, the same name for a different inode in two directories is a point well taken, but I would argue the name does not fully describe the inode; it's just one component of the metadata for that file. People are just so unaware of all the other metadata because the interface rarely shows it to them. So many people have taken to packing all that data into the filename. Version numbers, code names... it's one way to achieve portability, I guess, but what an ugly compromise! And with all the virtual environments now for Python and such, it's quite easy to find yourself using the wrong version of something if you don't really know what you're doing and just look at the filename.

Hard linking links all that metadata, which of course includes the unique ID that open returns, so I think it's okay. I would just like to see our file interfaces better adapted to showing all that important metadata in a comfier way.
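For what it's worth, that shared metadata is easy to observe from Python (paths invented for the example):

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "report-v2-final.txt")  # name abused as metadata
b = os.path.join(d, "report.txt")

with open(a, "w") as f:
    f.write("data")
os.link(a, b)  # second name for the same inode

# The two names share one inode, so everything except the name itself
# is common: permissions, timestamps, size, owner, link count...
os.chmod(a, 0o600)
sa, sb = os.stat(a), os.stat(b)
assert sa.st_ino == sb.st_ino                # same inode
assert stat.S_IMODE(sb.st_mode) == 0o600     # chmod via one name shows via the other
assert sa.st_nlink == 2                      # link count reflects both names
```

Only the directory entry differs; everything the inode owns is one copy.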


The same name / different inodes problem isn't a filesystem issue, it's a tools issue. Specifically, the fact that tools which modify files (editors, shells, archival utilities, scripting languages, any random executable) only see the local filehandle, not the fact that it's "supposed" to be a single chained copy across multiple directories.

There might be some way to muck around with that using attributes (at least in theory, I don't know of any that do this now), but presently, the only way to accomplish this is through workflow and integrity-checking systems (e.g., that "filename" at any of numerous specified points should be identical to and/or a hardlink of a specified canonical source).

Oh, and one more: since hardlinks apply only to a single filesystem, any cross-filesystem references are impossible.

I think you also end up with issues in almost all cases of networked filesystems: NFS, SSHFS, etc.


Sorry, I was sick yesterday.

It is mostly a tools problem. It should be way easier than it is to see the metadata for a given file in your shell, file browser, or whatever. Dired, for example, has a pretty darn good visual model for this that could really be taken much further, I think. The reason we don't see more metadata like extended attributes is that they are still not standardized across different file systems, so we get left with the lowest common denominator. But a reasonably designed system could just show it when it's there.

I've just always thought a tree is a very elegant way to represent categorical data. Now that I think of it, placing files in the tree is a way to preindex a search for all objects in a given category, basically the ls command. It really affects how we reason about our data. Huh.

Soft links honestly seem like a hack to me, to get around our shitty distributed file system model. And then, because oh no, what if my file is on another server, I guess everyone should just use soft links for absolutely everything. Like, why not just concatenate the host string to the file ID and have the OS figure out how to handle it? Sorta like TRAMP.


I completely agree, making money writing software is so convoluted. Like, why can't I submit my work to a publisher like all the other authors do? Where are the software publishing companies? Instead, we're expected to also come up with a business model around our software. Then, surprise, you find yourself not really writing software anymore because all of your energy is spent running this business thing you set up just so you could support yourself so you could be a soft..ware...writer. oops. It's a bummer man. Like, yeah, you probably shouldn't have used the MIT license, but like, we have to be lawyers too now? GD.


I really want my tabs to just be emacs buffers.

Emacs has sooo many different tools to organize and operate on buffers, so different types of users can compose a workflow that works for them, including plain old tabs if they really want. And of course, extensibility, the most important feature imo

And you can do this now of course, with w3m or other plaintext browsers, but it's just not that comfy for most webpages these days. And there's emacs-webkit, but it's still a bit of a pita to install.

I still use qutebrowser for most things, and with i3/sway plus the save-session function, i can get hierarchy and persistence, and that seems sufficient for now...


I find that the more you use emacs the more you get frustrated that nothing else works the way it does in emacs. You either deal with it, try and hack some emacs functionality into your favorite tools, or end up living full-time inside of emacs--relegating your desktop environment to a glorified emacs launcher.


For the record though, I'm very pleased to see people explicitly questioning the UI status quo, especially one so widely assumed as tabs. It seems browsers are just following the "don't touch anything or you'll drive away our base" method of development, and it's depressing... Can't be much fun for the developers either, I imagine. I look forward to seeing what they come up with.


No. Linux drivers are supposed to implement runtime_pm callbacks, which are called based on the usage counters of those devices. ACPI is an Intel/Microsoft standard for power-state definitions, with spotty compliance. How and when you put those devices into whatever power state, ACPI or not, is dictated by those callbacks. Bugs seem to most often occur when you turn something off while it is still being used, because someone failed to claim it or released it too soon. There are also ASIC bugs in the hardware's own definitions of its power states.
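The usage-counter idea can be sketched with a toy model (pure illustration, not the real kernel API; the actual interface is `pm_runtime_get_sync`/`pm_runtime_put` paired with the driver's `runtime_suspend`/`runtime_resume` callbacks):

```python
# Toy model of runtime PM: the device auto-suspends when its usage
# counter drops to zero, and resumes on first use.
class Device:
    def __init__(self):
        self.usage = 0
        self.state = "suspended"

    def get(self):  # roughly analogous to pm_runtime_get_sync()
        self.usage += 1
        if self.state == "suspended":
            self.state = "active"      # driver's runtime_resume callback fires

    def put(self):  # roughly analogous to pm_runtime_put()
        assert self.usage > 0, "unbalanced put: the classic PM bug"
        self.usage -= 1
        if self.usage == 0:
            self.state = "suspended"   # driver's runtime_suspend callback fires

dev = Device()
dev.get(); dev.get()          # two users claim the device
dev.put()                     # one releases; still in use
assert dev.state == "active"
dev.put()                     # last user gone: device powers down
assert dev.state == "suspended"
```

The "turned off while still in use" bug described above corresponds to calling put() too early, or never calling get() at all.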


runtime_pm is generally for runtime suspend (Intel calls it S0ix) rather than S3 suspend. You can implement S3 suspend using the same callbacks as your runtime_pm callbacks, but you don't need to.

The runtime_pm framework is for when applications are still running but the device goes to sleep. In S3 suspend first every single user space application freezes, then the drivers go to sleep. S3 is much easier to implement.


I think ideally all pm would be controlled by the runtime_pm and qos frameworks. You then don't need to define explicit power states like 'sleep' or 'suspend', you instead simply use what you need for a given performance spec and naturally use a minimum of power. I think that is the ultimate plan


It's not that simple. There's a lot of predicting the future when it comes to power management. When you put things to sleep in order to save power you add latency, because waking up the hardware and restoring its state takes a long time. It's all tradeoffs, and "use minimum power" is taking the slider and shifting it all the way to one side.
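The tradeoff can be made concrete with a back-of-the-envelope model (all numbers invented purely for illustration): sleeping only pays off when the expected idle time exceeds the break-even point where the energy saved equals the energy cost of the suspend/resume transition.

```python
# Break-even idle time for entering a low-power state.
# All figures are hypothetical, just to show the shape of the tradeoff.
P_active = 1.0       # W, power while staying awake
P_sleep = 0.1        # W, power while asleep
E_transition = 0.45  # J, energy to suspend + resume (state save/restore)

# Sleeping for t seconds costs  P_sleep * t + E_transition,
# staying awake costs           P_active * t.
# Break-even:  t* = E_transition / (P_active - P_sleep)
t_break_even = E_transition / (P_active - P_sleep)
print(t_break_even)  # 0.5 s: idle periods shorter than this waste energy
```

This is exactly the prediction problem: the hardware has to guess whether the coming idle period will be longer than t*, and guessing wrong costs either power or latency.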


I've never personally experienced anything close to the stability and performance of a lean Arch build. It does a very good job of teaching you up front how your OS works and how to configure it, without dumbing things down for your grandma. Once I understood that, bugs just kinda disappeared, because I wasn't naively misconfiguring things anymore. I find Linux pretty well organized and reasonable for most dev tasks. Your OS can be spectacularly powerful for automating things if you just understand its basic components. It updates when I want it to, it looks the way I want it to, and darn near any operation I can put a name to, I've written a script/elisp function/systemd service for. I feel limited only by my own imagination at this point. Maybe the problem is Ubuntu and other such imitation distros, which rob people of that deeper understanding in exchange for a shallow illusion.

