No need for personal snark. Not everybody lives in the US. The worst part of moving there was surviving the lack of sunlight during the winter, but that was many years back.
Explain how we keep treating essential tremor as a disease of the elderly to be managed with CNS-depressant drugs, when we have known for decades that the disease is entirely caused by a single substance (harmane), which can be mostly avoided by stopping coffee, tobacco and meat consumption?
However, a more generous interpretation of their comment would probably sound more like "some substances/products get banned on the grounds of being detrimental to people's health, but this isn't happening to coffee/meat despite their claimed dangers. This must be due to the medical industry lobbying or controlling all world government bodies, so that the industry can extract more revenue from people."
It is still a conspiracy theory full of holes, but imo that feels more representative of what the grandparent likely meant.
It has nothing to do with coffee, tobacco and meat per se, and everything to do with how coffee, tobacco and meat are prepared for human consumption. It is the pyrolytic processes that form harmane from amino acids.
The author seems to miss that relational algebra was developed for the needs of the databases of the time, i.e. in an effort to optimize reads off spinning iron. Any effort at async is defeated by blocking filesystem syscalls.
It starts with "Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation)".
It goes on to say, "(The relational model) provides a means of describing data with its natural structure only - that is, without superimposing any additional structure for machine representation purposes".
For some more backstory[1] see Sowa, who was also an IBM researcher at the time:
George Boole (1847, 1854) applied his algebra to propositions, sets, and monadic predicates. The expression p×q, for example, could represent the conjunction of two propositions, the intersection of two sets, or the conjunction of two monadic predicates. With his algebra of dyadic relations, Peirce (1870) made the first major breakthrough in extending symbolic logic to predicates with two arguments (or subjects, as he called them). With that notation, he could represent expressions such as "lovers of women with bright green complexions". That version of the relational algebra was developed further by Ted Codd (1970, 1971), who earned his PhD under Arthur Burks, the editor of volumes 7 and 8 of Peirce’s Collected Papers. At IBM, Codd promoted relational algebra as the foundation for database systems, a version of which was adopted for the query language SQL, which is used in all relational database systems today. Like Peirce’s version, Codd’s relational algebra and the SQL language leave the existential quantifier implicit and require a double negation to express universal quantification.
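To make that last point concrete: SQL has no universal quantifier, so a "for all" query (the classic relational-division problem) is phrased as a double `NOT EXISTS`. A minimal sketch, using a hypothetical supplier/part schema of my own invention, run through sqlite3:

```python
import sqlite3

# "Which suppliers supply EVERY part?" There is no FORALL in SQL, so the
# universal quantifier is rewritten as a double negation: suppliers for
# which there exists NO part that they do NOT supply.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE part(pid INTEGER PRIMARY KEY);
    CREATE TABLE supplies(sid INTEGER, pid INTEGER);
    INSERT INTO part VALUES (1), (2);
    INSERT INTO supplies VALUES (10, 1), (10, 2), (20, 1);
""")
rows = db.execute("""
    SELECT DISTINCT s.sid
    FROM supplies s
    WHERE NOT EXISTS (
        SELECT 1 FROM part p
        WHERE NOT EXISTS (
            SELECT 1 FROM supplies s2
            WHERE s2.sid = s.sid AND s2.pid = p.pid))
""").fetchall()
# Supplier 10 supplies both parts; supplier 20 is missing part 2.
print(rows)  # [(10,)]
```

The inner `NOT EXISTS` finds parts a given supplier lacks; the outer one keeps only suppliers with no such part, which is exactly the "leave the existential quantifier implicit and require a double negation" pattern the quote describes.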
This doesn't seem consistent with the history of relational algebra. It was introduced at a time when there were numerous competing storage technologies from cartridges, strips, drums, as well as disk drives all of which had different physical characteristics.
In fact, disk drives were the least common storage system: they were the fastest, but also the most expensive, and had the least capacity.
The Wikipedia entry on relational algebra does not even mention disks. Given this (together with what I recall from Codd's seminal papers on the concept), I am not inclined to believe it has anything to do with disks specifically, just on your say-so. If you have something more to say in support of your position, I will give it all due consideration.
There have been POSIX-certified Linux variants. But the open source projects you use don't bother (for obvious reasons), and commercial derivatives like Android and ChromeOS don't need it. Similarly, Windows NT was POSIX-certified way back in the day, yet its descendants aren't, even though they implement the same API set (via very different technology).
I suspect (possibly incorrectly) that earthquakes are a chaotic phenomenon resulting from a multilayered complex system, a lot like a lottery ball picker.
Essentially random outputs from deterministic systems are unfortunately not rare in nature… And I suspect that, because of the relatively higher granularity of geology vs. the semicohesive fluid dynamics of weather, geology will be many orders of magnitude more difficult to predict.
That said, it might be possible to make useful forecasts in the 1 minute to 1 hour range (under the assumption that major earthquakes often have a dynamic change in precursor events), and if accuracy was reasonable in that range, it would still be very useful for major events.
Looking at the outputs of chaotic systems like geolocated historical seismographic data might only be 4-10 orders of magnitude better than looking at previous lottery ball selections when predicting the next ones… which is to say that the predictive power might still not be useful, even though there is some pattern in the noise.
Generative AI needs a large and diverse training set to avoid overfitting problems. Something like high resolution underground electrostatic distribution might potentially be much more predictive than past outputs alone, but I don’t know of any such efforts to map geologic stress at a scale that would provide a useful training corpus.
They’re empiricists — the only ~~real~~ conclusive way to answer that question is to try it, IMO!
The old ML maxim was "don't expect models to do anything a human expert couldn't do with access to the same data", but that's clearly going the way of Moore's Law… I don't think a meteorologist could predict 11km^2 of weather 10 days out very accurately, and I know for sure that a neuroscientist couldn't recreate someone's visual field based on fMRI data!