The problem is data corruption. If a CSV is corrupted, I can at least parse part of the data; with a corrupted SQLite file, I'm done. Also, diff does not work on the binary SQLite format, so it is more difficult to trace changes.
What is the source of the corruption? It's unlikely to be disks nowadays, but people should take backups of things. It's very unlikely to be a buggy SQLite write, but again, a backup could save you. Worst case, I'm sure there are tools to recover a corrupted file.
Oh this goes way back. The first SSD drives from Hong Kong advertised double or quadruple their capacity - you didn't find out until you tried to write the N+1th block and it overwrote the 0th block. Back in the 90's?
It seems to inherit the same limitation as Robinhood: you can't do short selling. I'm not sure how they actually manage commission-free orders, so I'm not sure why short selling isn't supported, but it limits you to long-only strategies.
Interesting. I suppose it's a much more involved process where you need to locate shares to borrow. You need to maintain a list of easy to borrow and hard to borrow names. ETFs by definition are hard to borrow.
I used to work next to the stock loan desk at a bank. The equity markets are generally pretty tech driven these days but stock loan is still operating in a 1980s mentality.
Slightly off topic, but I have always wondered whether there is any single point of failure in the monetary system. Bank reserves are accounts at the central bank, which are stored on some ancient servers. If terrorists or hackers found a way to destroy those servers, would the money in the economy vaporize? Is there any paper record we could fall back on?
This is pretty much the plot of Goldeneye. A lot of effort will have gone into disaster recovery (tapes in nuclear bunkers etc) but also remember the counterparties have records too.
Bitcoin won’t save you if the attacking nation state (say, China) can simultaneously nationalize and take over a majority of miners. Power is power no matter how many hoops it needs to jump through.
So, what is the most secure option at the moment? Buy an x86 box and turn it into a router? But it consumes more power than a low-power router, and buying extra network adapters is not that cheap.
I am currently using the open source Tomato firmware. However, there is a bug/feature in the router that means I cannot flash an image that is too large, or it will not work. Also, the configuration is limited to 32 KB: if I configure too much, the configuration file becomes gibberish, some random feature in the router goes missing, and a factory reset is required to fix it. So I am stuck with an older version of Tomato, which guarantees some vulnerabilities remain unfixed.
I'm not sure what I can get in the form factor of a router. A Raspberry Pi may work, but it has too few ports, and I have heard the CPU gets hot under intense network traffic.
For something really small, the Ubiquiti EdgeRouter devices, which run their EdgeOS, are a good choice. If there's a serious security vulnerability on the WAN-facing interface, it will be patched. They run a fork of Vyatta; Ubiquiti employs most of the old Vyatta development team, who did not go to Brocade when Vyatta was acquired.
Or build a really small low power x86 system with a few Intel gigabit NICs in it and run open source VyOS.
The $48 ER-X is much faster than 99% of people's residential last-mile broadband connections; it's good for up to about 750 Mbps of NAT and default route outbound to a gateway.
I have no problems with a gigabit symmetrical line on an ERLite-3. The UniFi Security Gateway is the same hardware but with a nicer interface that works with UniFi APs and switches if you want to go that route, but you also have to host a controller. You can also upgrade to an ER-4 for a much faster CPU, but I don't think you need to.
fli4l [1] on an ALIX board [2] (for example) is an option.
ALIX boards are reasonably energy efficient. fli4l can run from read-only media. This is no panacea (see fileless malware) but at least you can be sure that after a reboot your system is clean. Security is a primary goal [3] of the fli4l project and they maintain a public Security Archive.
Find a not too old Cisco integrated services router, set it up to drop everything coming from outside, and run DHCP network(s) on the inside. Use WiFi routers in bridge/access point mode.
Drawback is they tend to be noisy, but if you have a basement/closet...
I think it's been around 7 years since a public exploit was dropped for the Apple AirPort Extreme. YMMV though, as Apple has stopped selling them, which means support is likely going to be minimal in the future if something does pop up. A lot of it is likely security through obscurity, though, as the code is obviously closed source and it uses a custom management interface rather than web access.
If you want to go the modern (better) route, enterprise equipment such as Ubiquiti or Cisco with strict rules is likely your best bet. The budget option is an OpenWrt install on one of their recommended routers.
> Buy an x86 box and turn it into a router? But it consumes more power than a low-power router, and buying extra network adapters is not that cheap.
If you want to go this route, used Intel NICs are cheap. I recently picked up a 4-port gigabit NIC (PCI-E) for £13.99. I'm running it on a machine that would be on anyway, so the power usage is negligible.
I don't think it makes any practical sense. Hong Kong uses cremation by default, unless a rich person has land on which to keep the body in a coffin, since Chinese tradition prefers keeping the whole body after death. Even if the remains are disposed of at sea, a grave still has to be built. Cremation only requires space for an urn and a gravestone, and the urn does not take up much space. If a graveyard still has to be built, it does not save much space.
The space-saving alternative is to just send the urn of ashes back and have people keep the gravestone at home. But keeping a dead body at home, even as ashes, is taboo in Chinese tradition, and nobody would accept it.
It is difficult to execute a policy that opposes local tradition, and it is not really necessary.
Terraforming Mars, or maybe just creating a habitable satellite, is easier than saving the Earth. The current economic model fosters growing businesses, with only government regulation to deal with externalities. Growth is intuitive to human activity, but restricting human growth is counter-intuitive.
I forget the link, but a lecture video using bacterial growth as a metaphor for human growth creeps me out. Suppose bacteria in a jar double every minute, and the jar will be full in one hour. When will the jar be half full? The answer is at 59 minutes. At 58 minutes, only 25% of the space is used. How many of the bacteria would think the jar, or the world, will be full 2 minutes from now?
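A quick sketch of that doubling arithmetic (assuming the jar is exactly full at minute 60):

    # If the jar is full at minute 60 and the bacteria double every minute,
    # the fraction of the jar filled at minute t is 2 ** (t - 60).
    for t in (50, 55, 58, 59, 60):
        print(t, 2 ** (t - 60))
    # 50 -> ~0.1%, 55 -> ~3%, 58 -> 25%, 59 -> 50%, 60 -> 100%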
The situation is similar for humans, and we still cannot find a way to protect the environment and have economic growth at the same time. Maybe the end of human history is two years away, but we still think everything is pretty okay and don't notice anything unusual. Then maybe finding a new jar is the second-best way to deal with it.
Holy guacamole. "easier to terraform mars or create a habitable satellite than saving earth"
This is the most ignorant thing I've ever seen written on the internet. Get back to me once you have 1/10000000th of the biodiversity of Earth on your habitable satellite.
1) change our paradigms, economic system, and other social factors. I can assure you that these are far more malleable than Martian soil composition...
our social structures evolved in the context of low population densities and plentiful resources.
our economic system is what needs to give; it doesn't even function for humans, let alone the planet and the rest of the life we share it with...
need I remind you that
1) we are only ONE species, and haven't the RIGHT to destroy our shared home, nor the other species. Some of us humans are upset about this, and we WILL take you other humans on over this issue!
2) our planet is STILL the only known place with LIFE in the entire universe. This is likely to change, at some point, but not if you get us all killed first.
3) our social systems are flexible, arbitrary, dare I say "PRETEND"... change em.
I propose a voluntary mission where all those interested in perpetual crapitalism and archaic social values voluntarily move en masse to Mars, and please try not to rob too many of Earth's resources getting there...
Bring the right wingers and life-haters with you too please...
Bacteria in a jar grow and populate uniformly. Humans don't; we tend to clump up in high-population areas. Human population growth also tends to slow down with economic stability, which is the opposite of bacterial growth.
As for Mars, what we really need is to build technology that can make inhospitable places livable without infusing additional resources. A key point is water extraction, which, with the right technology, would make a lot of desert environments much more hospitable.
The most successful decentralized communication system is email, and as it turned out, people concentrated on large free providers like Google. Decentralized servers do not protect privacy for normal users, because most people cannot handle running their own server.
The most successful decentralized service is BitTorrent, and it is decentralized at the client level. It has also enabled uncontrollable piracy, though, since it is too easy to spread any data using BitTorrent. I think a truly decentralized social network that protects privacy should be a p2p app, not server-to-server federation.
Playing devil's advocate here, but I also kind of believe this... There is no such thing as privacy in a social network. It is foolhardy to assume it is even possible. Even a real-life social network relies on trust, trust that can be broken very easily and totally outside of your control. Maybe the answer is to accept that privacy isn't a real thing and stop sharing things, even in what you assume is a protected environment, that you don't wish to be public. I don't think there is a technical solution to "people can't keep secrets".
Technical solutions can't stop people you intended to share your secrets with from breaking your trust, but they can help prevent uninvolved third parties from getting direct access that no one intended to give them.
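A minimal sketch of that distinction, assuming a library like PyNaCl for end-to-end encryption: the relay or server only ever handles ciphertext, but nothing stops the intended recipient from leaking the plaintext.

    # End-to-end encryption sketch (PyNaCl). The server forwards only
    # ciphertext, so it never sees the message; only Bob can decrypt it.
    from nacl.public import PrivateKey, Box

    alice_sk = PrivateKey.generate()   # Alice's keypair
    bob_sk = PrivateKey.generate()     # Bob's keypair

    sender_box = Box(alice_sk, bob_sk.public_key)
    ciphertext = sender_box.encrypt(b"meet at noon")   # what the server stores

    receiver_box = Box(bob_sk, alice_sk.public_key)
    plaintext = receiver_box.decrypt(ciphertext)       # only Bob can do this
    # ...but nothing prevents Bob from showing the plaintext to anyone.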
Not really. I've been using Mastodon for the last four months and I feel pretty safe. My instance doesn't know much more than I already told it. And I don't get reminders, emails telling me to check in, or ads following me. I could also run my own instance, and still be connected to the people I know.
No; the real-life equivalent would be the kid in class sitting between you and your friend opening the note and telling the class what you said.
You asked the kid in class to pass the note. He did so freely. You assumed he wouldn't open the note, but guess what... he totally opened that note. And now he wants to profit off the information.
That's an interesting point and I'm stealing it for real-life conversations, to point out the human element of secrets - but we should understand that people are bad at keeping them, while also not allowing Facebook to monitor messages to find better ways / leverage to sell us things.
I fully agree.
The problem is that it's even harder to design something that is fully p2p than something federated.
(One thing that tries to be exactly what you want is https://secushare.org/ - but it's at a very, very early stage right now. Others exist, though.)
And you have to agree that (even if most people choose the biggest provider) simply having the choice of different providers or even being your own provider is a huge improvement.
It's not really about the design. At some point you have to recognize the physical impossibilities of p2p models - primarily availability. The reason Matrix is more popular than Tox, and why we haven't seen any remotely successful p2p social network while projects like Mastodon took off, is that there is simply no way to make good UX for the scenario where you want to send a message to X, who is offline, and before they come online you go offline, so the message is never delivered.
The way Tox does it (and any network trying to work around this problem) is to locally cache messages en masse as close to the destination as you can get. But as you can imagine, that makes the bandwidth and power requirements of maintaining the network too strenuous to be competitive with a federated option that simply works when the always-on server is available, or doesn't when it's offline.
"Physical impossibilities of p2p models" - Although there might be structural limitations, I think they're not too strong.
Just because we currently do not have a major p2p network doesn't mean that it's not possible.
I think it's very possible to have something like this (even for availability). You just need to have a good design/mechanism.
But there's the problem (and why we haven't seen something like this yet): no one puts many resources into the design of p2p stuff. The competing, centralized solutions get tons of resources from big companies that try to make money with them. No company tries to build something p2p, because by giving away control, they give away the possibility of making money from it.
Peer-to-peer connections in web browsers are pretty good (assuming you have relays to get around router issues with shared IP addresses). And JavaScript is generally fast enough for encryption (although I'm not sure what the random number generator situation is).
But we lack the ability to easily guarantee file contents, which makes delivering encryption software more suspect. Additionally, data storage is still very unreliable. It is difficult to share information seamlessly between multiple browsers without a server, storage limits vary between browsers, data can get deleted for weird reasons. I've advocated for a while that users probably should be able to grant pages separate read/write access to specific files and folders on disk, but that's obviously a tricky decision to make and implement.
The Same Origin Policy obviously comes with security benefits. But it also means that if you share a third-party link, there's no way to look up metadata about the link without a proxy server to bypass the policy. Building something like an RSS reader in purely client-side JavaScript is impossible because you literally won't be able to request many of the RSS feeds.
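A minimal sketch of the proxy workaround, assuming Flask and requests (the /fetch endpoint and its url parameter are hypothetical names):

    # Hypothetical CORS proxy: the browser can't fetch a cross-origin RSS
    # feed directly, so it asks this server to fetch and relay it instead.
    import requests
    from flask import Flask, request, Response

    app = Flask(__name__)

    @app.route("/fetch")
    def fetch():
        url = request.args.get("url", "")
        upstream = requests.get(url, timeout=10)   # server-side, no SOP applies
        return Response(
            upstream.content,
            content_type=upstream.headers.get("Content-Type", "application/xml"),
            headers={"Access-Control-Allow-Origin": "*"},  # page may read the body
        )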
It can be a little surprising, when you dig into all of the theoretical stuff that's possible with client-side JavaScript, to discover exactly where the web is behind native. It's usually not the parts that get the most attention.
Maybe. But then there's a question of where does the content live? Most people don't have a desktop they leave connected all the time, and don't want to be hosting videos and photos off their mobile device.
So you're stuck with replicating that data out to all the peers, which means you've just lost control of "your" data again.
Sure but hosting content isn't free, so then you have the problem of paying for it. I could imagine a crypto-currency based solution but that is just sooo complicated.
Most of the reason for centralisation is simplification. Have you tried running your own email server lately?
Yes, cryptocurrency was one way. (I don't think it's really that complicated.)
But on the other hand, I don't think it's really expensive. As in, have a RasPi lying around at home that keeps track of everything when you're not online. That should totally suffice for your own needs. If you have bigger needs, or want to support the network (maybe even for a small compensation in whatever form), that's easily scalable.
Or think of BitTorrent: you're incentivised to contribute back what you received. That works totally without compensation in cryptocurrencies.
I'm not sure about the simplification either. Have you tried running Gmail lately? (Not as a client but as a service ^^ I think it's not quite straightforward.)
Once you have a properly working p2p network/algorithm/protocol, I can imagine it being easier to run for all parties.
> But then there's a question of where does the content live? Most people don't have a desktop they leave connected all the time
I think most people know someone who does, and we can start there. The first step is to make it really easy to host on a desktop (including addressing and NAT busting, both of which Tor provides).
Another example: Git. It's decentralized in principle, but in reality people centralize around GitHub and its alternatives, and even when self-hosted, there is usually a notion of a master repo.
That's partly a tooling issue, though. If Git had native pull requests and a decent UI around them baked in, and the UI client also had some way of discovering your peers across networks, then the need for GitLab/GitHub would be diminished.
Yeah, no - unless you work on a project with 100 other people. Even if peer discovery, NAT traversal and whatnot were solved, what am I supposed to do if both my project members are currently offline? Synchronizing progress would be a nightmare in a three-person project where everybody is located somewhere else. You could pretty much consider Git peer-to-peer already, but everybody is too lazy to open their firewall and instead talks to the always-on supernode that is GitHub.
Diminished. It's extremely common for teams to be online throughout the same business day. If you have an entirely async team where you can't coordinate time to exchange work product, then sure, you need an async third party location. That's by far the exception rather than the rule though.
You might want to check out https://snake.li - it's a cryptography-based "social network" born out of a master's degree thesis. AFAIR most of it works in the browser, while the server doesn't really know much about the data.
A nice idea that sadly didn't get enough funding, and its creators eventually moved on.
BitTorrent is decentralized in theory, but I think nowadays it's not worth much without trackers. Trackers, in turn, enable centralized groups with self-serving interests to centralize the activities and track user activity.
Check out what Gnutella or the Dat protocol have to offer as reasonable alternatives.
How could I embed knowledge/assumptions in deep learning neurons? In a high-level programming language it would be easy, but tweaking the neurons' parameters to embed that knowledge? That sounds more difficult than writing machine code.
Any deviation from a series of fully-connected layers represents some assumption being made, usually to reduce the size of the parameter space to a subset that is considered more promising.
Convolutions are one example: they assume that proximity correlates with a logical connection.
Note that this is a very useful assumption. Just shuffle the pixels in a photo and try to discern what they show to see how much we rely on that assumption. In fact I'm having trouble coming up with an obvious counterexample[0].
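A quick sketch of that shuffle experiment, assuming NumPy and an arbitrary stand-in image: the fixed permutation keeps every pixel value but destroys the locality a convolution relies on.

    # Shuffle an image's pixels with one fixed permutation: all values are
    # preserved, but the spatial neighbourhoods a convolution exploits are gone.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((32, 32, 3))      # stand-in for a real HxWxC photo

    perm = rng.permutation(32 * 32)      # one fixed pixel permutation
    shuffled = image.reshape(-1, 3)[perm].reshape(32, 32, 3)

    # A 3x3 conv sees meaningful neighbourhoods in `image`; in `shuffled`,
    # adjacent pixels are unrelated, so the locality prior no longer helps.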
So let's not fall into the trap of these armchair scientists with the big spliff, staring into the distance and intoning trivialities with the air of revelation: "Man.... you're just a slave to your assumptions. What if, like, space and time are one and the same?"
In fact, one could argue that all of AI is an endeavour to find abstract rules defining what's "trivially obvious" to us. You don't have to explain to children that objects in the distance look smaller than when they are close.
Once you succeed with that, it's possible that ML can find a sort of post-modern reality. One that we are blinded to for cultural reasons and the structure of our perception: what if God, for example, appears in the form of seemingly random "pixel errors"? You would easily miss her constant presence due to all the error correction in the pathway of your perception (and also your camera sensors).
But that's the future. Just as art often flourishes within the confines of (often arbitrary) limitations, so do we. And embracing these limitations is not done for reason of ignorance, but expedience.
Depends. Many standard layers express a form of prior knowledge. A CNN layer embeds the assumption of spatial translation invariance, an RNN does the same for temporal translation. Graph Neural Nets have permutation invariance. Assumptions can also be expressed as regularisation terms added to the loss function. One common practice is to initialise a net with the weights of another net trained on a related task - usually CNNs trained on ImageNet, and word embeddings for NLP (though lately it is possible to use deep neural nets such as BERT, ELMo, ULMFiT and OpenAI transformer pre-trained on large text corpora).
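A minimal sketch of that last practice (initialising from pretrained ImageNet weights) using torchvision, assuming a hypothetical 10-class downstream task; depending on the torchvision version, the weights argument may differ.

    # Transfer learning sketch: reuse ImageNet features as prior knowledge
    # and train only a new classification head for a 10-class task.
    import torch.nn as nn
    import torchvision.models as models

    model = models.resnet18(pretrained=True)   # prior knowledge from ImageNet

    for param in model.parameters():           # freeze the pretrained features
        param.requires_grad = False

    model.fc = nn.Linear(model.fc.in_features, 10)  # new, trainable head
    # Train as usual; only model.fc's parameters receive gradient updates.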
In this sense, I prefer a SQL dump file.