Can you elaborate on the search for a part-time gig? One of my pet theories is that the demand for full-time engineers is more a commitment signal than a practical necessity, since most people I know only actually work 2-6 hours per day while full-time. So it's rather frustrating that good part-time roles don't seem to exist.
This perspective is about delivering maximum value, which is a worldview primarily of Silicon Valley and finance folks. That's fine, and a totally legitimate play for a limited set of high-value subscribers. But if someone knows that value can be obtained by following someone, they should already be doing so on the platforms where that person is most prolific. Which means this platform isn't adding the entire value of the 100x+ gains, only the marginal value, which probably wasn't worth capturing previously. The primary value of your service then comes from convenience and discovery - but my impression is that those two features are strongest when a service has a large number of diverse subscribers.
Something I've been wanting for some time, but have been too lazy to implement myself, is a similar system targeted towards utility rather than value - that is to say, towards entertainment. Something like: your favorite guitarist has a new side project, Patrick Rothfuss finally released Doors of Stone, your favorite director is releasing a new movie. Unfortunately, most databases for this information seem to be either proprietary or inaccurate.
The case for NFTs rests on a virtuous cycle between artists and buyers. On the artist side, there are temporary benefits, such as increased prices and pandemic-safe fundraising. But the killer feature is programmability - for example, an NFT can be programmed so that the artist takes a cut of every resale on the secondary market. This means that as long as there are buyers, there will be artists issuing their work as NFTs.
The buy side is more complicated, because people buy art for different reasons. But there are benefits, such as digitization making money laundering more convenient. The main issue is legitimacy concerns, but so long as artists can benefit from NFTs, they are incentivized to assert the legitimacy of the NFT. In the question of who gets to decide legitimacy, I would say the artist has the most sway.
Of course, it's entirely possible that NFTs will fail. First, they could fail if the issuing blockchain failed or had a contentious fork. Second, the primary blockchain or issuer for NFTs could change, which could reduce the value of "obsolete" NFTs. There is also regulatory risk. Finally, people who abuse the NFT system - say, by repeatedly reissuing art or issuing pirated art - could poison the well. But all of that is entirely different from saying NFTs are doomed to fail and inherently worthless.
The right they contractually set with the buyer. Secondary-market prices exploding relative to first-sale prices isn't exactly uncommon in the art world, so it's fairly obvious why artists would be interested in this. To a degree it can also help keep first-sale prices lower, it somewhat discourages quick speculative resales (which many in the art world consider a good thing), and it lets the artist participate in the later value growth of their work - which, given that the value is due to their work, many people would say is fair, and so buyers agree to such contracts.
Currently it's done through the marketplace platform, so commissions actually aren't tracked when you sell through other platforms. EIP-2981 aims to standardize this.
Depending on your view of platforms, this could be less than ideal. It should also be possible to modify the transfer function in the EIP-721 contract to be a swap, but that would require a new standard. In any case, it could be circumvented by an escrowed transfer with the swap amount set to 0; I don't think there is a way to prevent this.
The sale is done via a smart contract: a buyer essentially sends tokens (commonly ether) to the smart contract. Under a commission scheme, the smart contract will create transactions that split the ether between the relevant parties (commonly the artist, the platform, and the seller).
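To make the mechanics concrete, here's a toy sketch of the split logic in Python (illustrative only - real marketplaces implement this in contract code, and the function name and fee percentages here are made up):

```python
def split_sale(price_wei, platform_fee_bps, royalty_bps):
    """Split a sale the way a marketplace contract typically would:
    the platform fee and artist royalty come off the top, and the
    seller gets the remainder. Fees are in basis points (1 bps = 0.01%)."""
    platform_cut = price_wei * platform_fee_bps // 10_000
    artist_cut = price_wei * royalty_bps // 10_000
    seller_cut = price_wei - platform_cut - artist_cut
    return {"platform": platform_cut, "artist": artist_cut, "seller": seller_cut}

# Hypothetical numbers: a 1 ETH sale (10**18 wei), 2.5% platform fee,
# 10% artist royalty.
print(split_sale(10**18, 250, 1_000))
```

Integer division keeps everything in whole wei, matching how on-chain arithmetic actually works.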
I'm looking at your docs, and the RUN REPEATABLE command seems really powerful. But if the state is broken after a run, like if you have some pods stuck at Terminating, how would you recover things?
Another question I have is how would you handle state that sometimes needs to update and sometimes doesn't? For example, it would be ideal to have a staging database that can keep having migrations and data added to it when new features are added, but we only want to checkpoint the changes to it from testing when the PR is actually merged.
If the state is broken, you can "rerun without snapshots" via the dropdown in the top right - and since future runs load the latest snapshot, they'd use a "clean" one.
For databases, users usually have a named S3 bucket and use a secret to authenticate. Since we take memory snapshots, the top of your Layerfile can be "start the database and populate it from this specific anonymized dump", and then you can edit the file in S3 and re-run without snapshots if you'd like to reload it.
Risk management is the correct way to go when uncertainty is high. Containment was the correct approach at the time.
When evidence starts coming in, then you can start applying evidence based approaches.
> Mortality rate: Mortality and morbidity rates are also downward biased, due to the lag between identified cases, deaths and reporting of those deaths [1]
> Among 3,711 Diamond Princess passengers and crew, 712 (19.2%) had positive test results for SARS-CoV-2 (Figure 1). Of these, 331 (46.5%) were asymptomatic at the time of testing. Among 381 symptomatic patients, 37 (9.7%) required intensive care, and nine (1.3%) died (8) . . . As of March 13, among 428 U.S. passengers and crew, 107 (25.0%) had positive test results for COVID-19; 11 U.S. passengers remain hospitalized in Japan (median age = 75 years), including seven in serious condition (median age = 76 years) [2].
Given the second source, who can still seriously argue that the naive death rate is an overestimate, when the people currently in intensive care simply haven't died yet?
Look at the deaths/recoveries in Singapore and Hong Kong for more evidence [3][4].
Whereas, if you compare fatality rates reported by Germany, SK, HK, Singapore and other high testers vs China, Italy and Spain, it's pretty clear the latter are under-diagnosing mild/asymptomatic cases, which increases their fatality rates by a factor of 10 or more.
Right, so do the math. 5% of positives required intensive care. If 197 million Americans get this, that means 10 million people go to intensive care. There are 60,000 ICU beds in the U.S. If 10 million people need the ICU, effectively none of them get a ventilator, and they all die.
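Spelling out that arithmetic (same figures as above):

```python
infected = 197_000_000   # assumed U.S. infections, from the comment above
icu_fraction = 0.05      # ~5% of Diamond Princess positives needed intensive care
icu_beds = 60_000        # approximate U.S. ICU bed count

icu_patients = infected * icu_fraction
print(f"ICU patients needed: {icu_patients / 1e6:.2f} million")
print(f"ICU beds available:  {icu_beds:,}")
```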
Now it's true that the cruise ship passengers skewed significantly older, but on the other hand, they were all ambulatory and healthy enough to be taking a cruise. There are populations that are at significantly higher risk than the cruise ship passengers.
Also, Chinese experience was that about half of the people admitted to the ICU eventually died.
>Now it's true that the cruise ship passengers skewed significantly older, but on the other hand, they were all ambulatory and healthy enough to be taking a cruise.
While technically true, I'm not sure the bar for "healthy enough" is as high as you're making it out to be. In my experience (apologies for the anecdote), significantly obese people are almost always quite capable of going on a cruise.
I'm talking about people in nursing homes, people in hospitals, people on immunosuppressants, for example, after a transplant (who wouldn't go on a cruise because of norovirus etc), people with other immune diseases, etc. There are a lot of these people out there, and these people would be very hard hit if we just let COVID run through the population.
But on the other other hand, they were all traveling, eating cruise ship food, and probably drinking, all of which could weaken their immune systems. We can add speculative adjustments all day long, but there's no way we're going to get a randomized double-blind study out of it.
Also you can't conclude much of anything based on a linear extrapolation, even if you have good data.
What do you need a randomized double blind study for? You're not sure the people on the ship died of COVID?
As for adjustment factors, if you just adjusted for age, you'd get about 50% less mortality if the ship had the same age distribution as the country. So that's 5 million dead. However there are over a million people in the U.S. that are medically compromised and would have a very high fatality rate with COVID.
I also don't see what the problem with a linear extrapolation is.
Finally, I only accounted for deaths due to lack of ventilators. There also wouldn't be enough hospital beds, and that would lead to millions more deaths.
There is simply no reasonable alternative to suppressing the disease. We're talking more deaths than the Holocaust here.
> What do you need a randomized double blind study for? You're not sure the people on the ship died of COVID?
Er, you're not trying to figure out how the ship victims already died, you're trying to predict how many other people might die of the same cause. To do that kind of thing well, you need a hypothesis, and then you need to test it properly.
> As for adjustment factors, if you just adjusted for age, you'd get about 50% less mortality if the ship had the same age distribution as the country.
You can't "just adjust for age" or "just adjust for" anything, you're going to miss something! That's why people do clinical trials.
> I also don't see what the problem with a linear extrapolation is.
Basically, an epidemic is not a linear system, so you can't model it with linear functions. Look into the "SIR model" for a standard way to do that kind of thing. I'm not trained in this field so I'd look for a medical/science forum if you have questions.
> Er, you're not trying to figure out how the ship victims already died, you're trying to predict how many other people might die of the same cause. To do that kind of thing well, you need a hypothesis, and then you need to test it properly.
What would be the randomized double blind trial that you would run, and what information would it give us?
> Basically, an epidemic is not a linear system, so you can't model it with linear functions. Look into the "SIR model" for a standard way to do that kind of thing. I'm not trained in this field so I'd look for a medical/science forum if you have questions.
I'm familiar with the SIR model. What you'll find is that if R0 > 1, new infections only stop growing once S falls to 1/R0 - that is, once 1 − 1/R0 of the population has been infected - and the epidemic then overshoots that threshold before burning out. In this epidemic, R0 is approximately 2.5, of course depending on conditions. That means at least 60% of the U.S. population gets the virus, which is the ~198 million number from above. It's actually worse than that, both because of the overshoot and because the SIR model doesn't have a "Dead" state, so more than 60% of the population has to get the virus in order for 60% of the end-state population to have recovered.
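For anyone checking the numbers, here's a quick Python sketch of the final-size calculation (330 million is an assumed U.S. population; note that 1 − 1/R0 is only a lower bound, since the classic SIR final size R∞ solves R∞ = 1 − exp(−R0·R∞) and overshoots the herd-immunity threshold):

```python
import math

def sir_final_size(r0, tol=1e-10):
    """Solve the SIR final-size equation r_inf = 1 - exp(-r0 * r_inf)
    by fixed-point iteration; r_inf is the fraction ever infected."""
    r_inf = 0.9  # start away from the trivial root at 0
    while True:
        nxt = 1.0 - math.exp(-r0 * r_inf)
        if abs(nxt - r_inf) < tol:
            return nxt
        r_inf = nxt

r0 = 2.5
herd_threshold = 1.0 - 1.0 / r0   # the 60% figure
attack_rate = sir_final_size(r0)  # roughly 89% for r0 = 2.5
us_pop = 330e6                    # assumed U.S. population

print(f"herd immunity threshold: {herd_threshold:.0%}")
print(f"final attack rate:       {attack_rate:.0%}")
print(f"ever infected:           {attack_rate * us_pop / 1e6:.0f} million")
```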
> What would be the randomized double blind trial that you would run, and what information would it give us?
I have absolutely no idea how to design or run a clinical study.
> 60% of people will get the virus.
All at the same time?? Your extrapolation comparing total critical cases with the number of ICU beds seemed to assume that. Try this interactive model, which plots infections over time and takes into account how long each patient will occupy a bed: https://neherlab.org/covid19/
No, but it doesn't matter. If 10,000,000 people need to use 60,000 beds, and each uses one for three weeks, that's 500 weeks - almost ten years. Even if you could get a ventilator for all of them, the Chinese experience is that about half of ventilated patients die.
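The bed-weeks arithmetic, spelled out:

```python
patients = 10_000_000  # projected ICU patients, from upthread
icu_beds = 60_000
weeks_per_stay = 3

bed_weeks = patients * weeks_per_stay  # total bed-weeks of demand
weeks_to_clear = bed_weeks / icu_beds  # serial time if every bed ran at 100%
print(f"{weeks_to_clear:.0f} weeks, about {weeks_to_clear / 52:.1f} years")
```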
Hopefully in a year and a half or so we'll have a vaccine. Until then we need to keep the case counts low, first by sequestering ourselves to get the numbers down, and then by other, less draconian means once the case counts are in single/double digits.
> Whereas, if you compare fatality rates reported by Germany, SK, HK, Singapore and other high testers vs China, Italy and Spain, it's pretty clear the latter are under-diagnosing mild/asymptomatic cases, which increases their fatality rates by a factor of 10 or more.
South Korea has a 1% fatality rate at the end of their epidemic; they showed 0.5% in the middle of it. Germany has a 0.2% rate, but it has crept up to 0.4%, and I suspect it will continue creeping toward 1% - and if they get overwhelmed, it could go higher. China has a less-than-1% rate outside of Wuhan, since outside that area the health care system wasn't overwhelmed [1]. The extra deaths in Wuhan could be attributed to the health care system getting overwhelmed rather than to undercounting - 10 or 20% of those infected require intensive care. You quote 10% of the infected on the Diamond Princess as requiring hospitalization. With an overwhelmed health care system, that might be the death rate.
Which is to say, we do have more evidence now - but that evidence seems to point to a desperate need for containment.