People really like to trash Freud, but you have to put him into context. Before Freud, we had a few branches of psychology: philosophical psychology in England arguing about empiricism vs. nativism, Wundtian psychology in Germany sitting around asking very specific questions that they answered using introspection, and in the States we had the very first blossoming of the behaviorism that would dominate American psychology until the '60s[0]. Some of these approaches had a concept of the subconscious, but they all viewed it as a static warehouse for previous experience, and very few people thought about it in a serious way.
Freud's major contribution to psychology was that we actually have a dynamic subconscious that profoundly affects how we live our lives. This aspect of his theory has become so ingrained in our culture that it's hard to imagine the world before Freud. Also, that aspect of his theory has held up over the years.
He also got a number of things right: many of the defense mechanisms he described have strong empirical support, for instance.
Freud was wrong in detail, but his overarching approach changed psychology for the better.
[0] Yes, I know this "history" is a vast oversimplification.
The concept of the subconscious is much older than Freud; it was a staple of the Romantics. Freud proposed a specific structural theory of the subconscious centred on the Oedipus complex. That specific theory is indefensible.
> “Uber and Lyft can survive classifying drivers as employees,” she says. “It might cost them a little more, but it’s a successful concept. It’s not going to go away because we are trying to enforce the rules.”
> And several on-demand companies, such as the house cleaning start-up MyClean and the food delivery service Munchery, already treat their workers as W-2 employees. These companies’ labor costs are higher than their 1099-dependent rivals, but they get additional benefits, such as being able to train their workers and hold them to consistent schedules.
I wonder if Uber is fighting this because it will cut into profit margins and raise overhead costs, rather than because it is an existential threat. There seem to be two different opinions presented in the article (though I'm not educated enough in this area to be able to tell which one is closer to being right).
> I wonder if Uber is fighting this because it will cut into profit margins and raise overhead costs, rather than because it is an existential threat.
Uber is a young company, and they have yet to learn what most mature companies have learned.
Big businesses like regulations. They don't mind jumping through whatever hoops regulations impose, because they can afford to drop the money on it: they can treat the cost of compliance as part of the cost of doing business and pass it on to the consumer, while their smaller competitors can't afford to pay for compliance. The competition gets priced out of the market, and big businesses sweep up all the customers who would've gone with the smaller competition, making regulation a net financial gain for them.
The only big businesses you'll see who don't like regulation are monopolists and near-monopolists, like ISPs. Look at the hissy fit the big ISPs are throwing over Net Neutrality. Why are they so upset? Because there's no real competition, no smaller players who will be priced out of the market by NN, so the established players just see it as a straight-up cut in revenue.
The only sensible conclusion is that Uber either a) thinks like a small player or b) thinks like a monopolist. Both are worrying, though for different reasons (reason A makes me doubt their competence, reason B makes me think they should be squashed sooner rather than later).
I'm not sure how easily one could leave a small town in, say, feudal Europe. Most serfs lacked travel rights, and while one could escape from their lord's land and move to a city (depending on the country), the cost was prohibitively high[0].
That looks fine to me. He's talking about reducing a matrix to reduced row echelon form, and gives a pretty clear example. If you're giving yourself 4 pages on linear algebra, there's not much better you can do.
That said, you're not going to learn linear algebra in four pages. No royal road, and all that.
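For concreteness, here's a minimal sketch of what reducing a small augmented matrix to reduced row echelon form looks like, using sympy (the system is a toy example of my own, not the one from the book):

```python
from sympy import Matrix

# Augmented matrix for the system x + 2y = 5, 3x + 4y = 6.
A = Matrix([[1, 2, 5],
            [3, 4, 6]])

rref, pivots = A.rref()
print(rref)    # Matrix([[1, 0, -4], [0, 1, 9/2]])  ->  x = -4, y = 9/2
print(pivots)  # (0, 1): both variable columns are pivot columns
```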
I'll give it a shot, but first, a list of disclaimers: Yudkowsky gets a ton of unjustified internet hate and scorn, and I disagree with a lot of it. I read a number of the Sequences, and quite enjoyed them. I also think his reaction to Roko's Basilisk was pretty reasonable: someone on your forum comes up with a way to basically guarantee eternal torture for anyone who reads it, and then posts it, thus guaranteeing eternal torture for your forum readers? Who cares that the idea won't actually guarantee any such thing; Roko _thought_ that it might. I would be pissed as hell.
Anyway, my point is, I'm only offering a gentle and hopefully reasoned disagreement with someone I regard highly. I'm not jumping on the "fuck that guy" train. Moving on.
There was a post that boiled down to the question "Would you rather 3 ||| 3 people (where | is an ascii stand-in for Knuth's up-arrow notation) get a mote of dust in their eye, or one person be horrifically tortured for 50 years?" and his conclusion was basically "you can use math to assign some incredibly small epsilon of suffering to getting a mote of dust in your eye, but eventually, if you sum enough people, it's more suffering overall than one poor person getting horrifically tortured". My problem with his post was that I think that any morality scheme that results in the person getting tortured is fundamentally flawed, and I don't care how much math you throw at me to try and "prove" that it's better.
I don't think I've ever heard of an attempt to rigorously derive morality that I agree with. Morality is too contextual, too messy, for us to perfectly capture in those sorts of models. It's especially bad when we try and model morality mathematically, and then take as gospel the result of that model, rather than say "oh, um, that's not a great result, the model must be wrong".
>But let me ask you this. Suppose you had to choose between one person being tortured for 50 years, and a googol people being tortured for 49 years, 364 days, 23 hours, 59 minutes and 59 seconds. You would choose one person being tortured for 50 years, I do presume; otherwise I give up on you.
>And similarly, if you had to choose between a googol people tortured for 49.9999999 years, and a googol-squared people being tortured for 49.9999998 years, you would pick the former.
>A googolplex is ten to the googolth power. That's a googol/100 factors of a googol. So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until we choose between a googolplex people getting a dust speck in their eye, and a googolplex/googol people getting two dust specks in their eye.
>If you find your preferences are circular here, that makes rather a mockery of moral grandstanding. If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, you may have fun driving, but you aren't going anywhere. Maybe you think it a great display of virtue to choose for a googolplex people to get dust specks rather than one person being tortured. But if you would also trade a googolplex people getting one dust speck for a googolplex/googol people getting two dust specks et cetera, you sure aren't helping anyone. Circular preferences may work for feeling noble, but not for feeding the hungry or healing the sick.
He's assuming linearity, which at the very least needs justification. He's assuming that the function that maps from the pair (number of people, type of torture) to suffering is linear in the number of people, and also linear in the type of torture. I don't believe either of those things are true.
To put it more clearly, he says:
> So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort
And I don't agree that you can. The difference between dust-mote and torture is not one of degree, but one of _kind_. It's a discontinuous function (in my opinion). I don't know where the discontinuity is, but it's there.
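To make the disagreement concrete, here's a toy sketch (my framing, not anything from the original post) of the two aggregation rules being argued over; the severities and the cutoff are arbitrary stand-ins:

```python
# severity is a number in [0, 1]; n is the number of people suffering it.

def linear_total(severity, n):
    # The rule the torture argument assumes: total suffering scales with n,
    # so enough dust specks eventually outweigh any fixed amount of torture.
    return severity * n

def threshold_total(severity, n, cutoff=0.5):
    # The kind of discontinuous rule being gestured at above: harms below the
    # cutoff rank lexicographically below any harm above it, no matter how
    # many people suffer them.
    tier = 0 if severity < cutoff else 1
    return (tier, severity * n)

speck, torture = 1e-12, 1.0
print(linear_total(speck, 10**30) > linear_total(torture, 1))        # True: specks win eventually
print(threshold_total(speck, 10**30) > threshold_total(torture, 1))  # False: never, by construction
```

Under the second rule, no multiplier ever carries specks across the cutoff, which is exactly the "difference in kind" claim; the open question is where (and whether) such a cutoff can be defended.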
There are a finite number of possible brain states, far fewer than 3|||3. The number of possible "feelings" is likewise less than that.
You'd need to divide all possible feelings of pain into two groups, and assert that the absolute worst of the first group (containing dust specks) is incommensurate with the absolute best of the second group (containing torture).
At some point you'd need to say that you'd pick specks even over 1 minute of torture, even over 1 second, at which point I think most people's intuitions would stop agreeing. Or you need to find some point between 50 years and 1 second where it becomes commensurate.
Sure, they are commensurate at some point. I'd pick one nanosecond of torture over the dust-motes, for instance. But I'm not certain that I would ever choose the 50 years of torture over the dust motes for any number of dust-moted people. That's because it's not continuous, so you can't do the sort of epsilon-delta proofs that Yudkowsky's argument depends on.
That means that for some amount of time, say X seconds, you would prefer X-1 seconds of torture for one person over specks, but prefer specks over X seconds of torture for one person.
But let's double each side; presumably you would make the same decision if asked again, right? So now you prefer X-1 seconds of torture done to each of two people over twice as many specks.
Now, unless X is very low, you should prefer X for a single person over X-1 for two people. (If you disagree with this, please give a plausible value for X that makes it false.)
So you prefer X on a single person over double!specks, but prefer single!specks over X. This seems extremely unlikely. Or even if true, we should be able to make you pick torture for 50 years just by multiplying specks another couple of orders of magnitude.
We're social animals. There's an amount of discomfort (maybe not large) I would be prepared to undergo to help someone else, and I find myself thinking that others "should" also be prepared to undergo such an amount of discomfort to assist others. Given the choice to accept a dust mote temporarily in my eye, as part of a huge crowd, in order to save another person from torture, I would gladly accept that, and I think all reasonable people would too; therefore there is no number of people for which I believe the utility of dust motes vs torture comes out in favour of torture.
> I would gladly accept that, and I think all reasonable people would too
Hello, apparently I'm unreasonable. And so is everyone else who has said that they choose torture over dust specks. If you select 3^^^3 people, you're going to find an awful lot of us. (And also some sociopaths who literally don't care if someone else gets tortured.)
Your argument seems to boil down to "specks is the correct answer, so anyone who gets it wrong doesn't count; and because we all agree that specks is the correct answer, it's okay to do specks".
On the other hand, I would totally accept the torture for myself, if it would prevent the specks. (At least I hope I would, and to the extent that I can model how I would act in that situation, it does seem plausible that I would.)
You aren't quite grokking the difference between "huge crowd" and 3|||3. How do you deal with the argument from circularity? Are your preferences circular, and if not, which part do you reject?
The size of the numbers is irrelevant. I personally would consider a world in which I had no mote of dust and someone else was tortured as a world with less utility than a world in which I had a mote of dust in my eye. I expect every single one of those 3|||3 individuals to feel the same way. Therefore, there is no number of individuals large enough to make me choose to avoid the dust mote at the cost of someone being tortured.
The answer to the argument from circularity is obvious - there is a discontinuity. I can see the discontinuity in my own thinking, and you probably can too. I would accept a dust mote to save an individual from years of torture (and I think almost all reasonable people would), but I would not accept a dust mote to save two people from a dust mote. That may be unethical of me, since I would of course prefer the world that has fewer people with dust motes, but for me to expect the greater number of people to make the sacrifice, I must be prepared to make the sacrifice myself and it must be a sacrifice that I think all people should make. Exactly how much I think people should sacrifice for others is difficult to say, but there is a clear step change at some point.
I don't see how that's relevant to answering the circularity argument. How are you avoiding the claim that your preferences are inconsistent? At some point, you need to accept a huge jump in the number of people getting hurt in return for a tiny decrease in the amount of hurt for each one, where the decrease can be pretty much arbitrarily small and the jump can be arbitrarily large.
(Also, if you click on the comment you can reply).
There's a difference between the utility of a world where one person has a mote of dust in their eye and the utility of a world where one person has a mote of dust in their eye because it saves someone else.
>At some point, you need to accept a huge jump in the number of people getting hurt in return for a tiny decrease in the amount of hurt for each one, where the decrease can be pretty much arbitrarily small and the jump can be arbitrarily large.
Well, the boundary is fuzzy, so I believe that different people will draw the line in different places, but yes, that is correct. There is a point at which I accept a huge jump in the number of people getting hurt in return for a tiny decrease in the amount of hurt for each one, and that point is the point at which I determine that the hurt falls under the threshold I would expect every person to be prepared to sacrifice for any other person.
I think the debatable range is actually quite large, so the "tiny decrease" part might not be fair (there are a lot of degrees of pain I would not demand someone suffer to save others), but at that point of expected sacrifice, the number of people jumps to infinity.
> and that point is the point at which I determine that the hurt falls under the threshold I would expect every person to be prepared to sacrifice for any other person.
That seems like a good way to put it.
The number of people doesn't really matter when the amount of hurt per person is so small that you can say with all confidence something like "I believe that literally any sane person would agree to get one speck in their eye to save a stranger from fifty years of torture".
So first of all, your preferences are still circular, unless you bite another bullet somewhere. But even your argument isn't quite accurate. It's not this one person who needs to get a speck, it's a literally unimaginable amount of people.
As it happens to be, a number of people in our tiny world have said they would choose torture, so your argument fails just considering them.
> But even your argument isn't quite accurate. It's not this one person who needs to get a speck, it's a literally unimaginable amount of people.
Nearly all of whom prefer to receive the speck than to allow the individual to be tortured.
> As it happens to be, a number of people in our tiny world have said they would choose torture, so your argument fails just considering them.
Yes, I feel somewhat uncomfortable ignoring the agency of torturers and murderers in this scenario, and that reluctance would play into a very conservative estimate of where that threshold should be, but I would be reluctant to choose a lowest common denominator measure to establish morality.
So your intuition says that there's some point where everyone is suffering pain X, where X is very large, and they would each agree to inflict pain of X minus one mote speck on a trillion times as many people, in return for reducing their own suffering by one mote speck? That strikes me as beyond regular selfishness, and not intuitive.
Can you explain more clearly why what I said implies that? Because I don't think I meant to say that.
I believe that there is an amount of suffering that it is reasonable to expect anyone to accept in order to help another person. That amount depends on the amount to be suffered, and the amount benefited by the recipient. Once the suffering falls under that threshold, I do not believe the number of people required to make the sacrifice comes into consideration, as each of them if reasonable would say "I prefer to belong to this world, where as part of a huge group I accept this small ill in order that someone else benefits". Therefore, the implied sacrifice results in greater utility for that choice.
Let me try a different tack.
Let's say that you observe a universe with some large number of people suffering dust specks in their eyes. That sounds bad. But what if every single one of those people actually suffering thinks that this universe is better than the alternatives? You don't suffer from a dust speck, but are you going to ignore all those people in their estimation of the utility of the universe? If you switched to a universe where all of those people didn't suffer from dust specks, but someone else was suffering, they would tell you that that was a worse universe.
It's pretty obvious to me that even if that isn't the exact case, it's close to being the case in reality - that's why people find the dust speck argument to be unintuitive, not the large numbers thing. It's because some measure of sacrifice for other people is part of what we expect from everyone, and most people know instinctively that if everyone asked to make a sacrifice agrees that it's right to make that sacrifice, then the world is better because of it.
This is not really a good format for continuing this discussion, because of the lack of notifications and such. Would you consider opening an account on lesswrong and posting in the open thread? Or you could PM my account there at http://lesswrong.com/user/ike/overview/. That said, here's my reply.
>Can you explain more clearly why what I said implies that? Because I don't think I meant to say that.
It's basically a reformulation of the circularity argument.
I assume there's some level of pain that you would prefer to the specks; say a single second of torture, equivalent to a smack or such. (If you think we should prefer 3|||3 specks to one smack, I could go further, so let me know.)
So counting up from that one second at a time (i.e. 2 seconds of pain, 3 seconds, etc.), eventually we reach a point where you no longer think it's better than the specks. Call this X.
So X and X-epsilon are qualitatively different; the lower amount is not bad enough to outweigh specks, but the higher amount is. You need to prefer giving X-epsilon to a large number of people rather than X to a single one, if the qualitative difference is to be upheld.
(I may not be phrasing this so well. Maybe try working through the circularity argument above, or the other phrasings I used in this thread.)
Now to respond to your line of reasoning: this proves too much.
Imagine instead of dust specks, we want everyone to donate a dollar to save the person from torture. Are you really going to say that we should be spending unbounded amounts of money (3|||3) to save anyone from torture? Have you donated all the money you could get to prevent torture? (and yes, I'm sure there are charities that are at least partially effective.)
Why doesn't your argument work for the case I just outlined as well?
> I assume there's some level of pain that you would prefer to the specks; say a single second of torture, equivalent to a smack or such.
Quite possibly I prefer a world with one person tortured for a lifetime compared to 3|||3 specks if they are evaluated out of context. It's hard to say, because pain doesn't easily sum, there are different qualities of pain, and we are talking about situations where we may be losing an entire person's contribution to humanity. I just don't think any of this is relevant to the conversation, or to the reason that so many people find your conclusion unpalatable - and it's nothing to do with not understanding large numbers.
In the case that they are evaluated in the context of a choice between one of those worlds or the other world, I would take into account what I believe to be the value that the individuals involved would place on the worlds were they to know the details and have minimal moral standards like mine.
Let me phrase it another way:
What would you say is the utility of a world where 3|||3 people have specks that they chose gladly and voluntarily in order to save someone from torture?
Let's say you tell those 3|||3 who wanted to save someone from torture by accepting a speck in their eye that they cannot, and someone must be tortured instead. You've massively increased the unhappiness in the world - not only is an individual getting tortured, but 3|||3 have ended up with a situation that's worse than they wanted. Are you going to claim that it's still got a higher utility? Now that you notice that you're making those 3|||3 unhappy by the choice, you can see that the disutility scales with the number of people - that's why the number of people becomes irrelevant.
> Imagine instead of dust specks, we want everyone to donate a dollar to save the person from torture. Are you really going to say that we should be spending unbounded amounts of money (3|||3) to save anyone from torture?
Your phrasing is unnecessarily emotive here. You seem to be saying that 3|||3 dollars is an awful lot of dollars without giving me any context about what fraction of the dollars belonging to those 3|||3 people those 3|||3 dollars are or what else they could/should be spending it on. If it's a negligible fraction that scales, and I could plausibly think that any sane person should donate that fraction of their money to save a person from torture then yes. You'll notice that when it's phrased like that it does not require that I donate all the money I could get to prevent torture. I in fact do donate a small amount of money regularly to prevent torture, but I would demand much less of the 3|||3.
>Let's say you tell those 3|||3 who wanted to save someone from torture by accepting a speck in their eye that they cannot, and someone must be tortured instead. You've massively increased the unhappiness in the world - not only is an individual getting tortured, but 3|||3 have ended up with a situation that's worse than they wanted. Are you going to claim that it's still got a higher utility? Now that you notice that you're making those 3|||3 unhappy by the choice, you can see that the disutility scales with the number of people - that's why the number of people becomes irrelevant
If the people are told of the choice, that's a whole new problem, but that's kind of avoiding the point of the original question. To use a hacking analogy, you're using a side-channel to cheat.
Nobody is told about any of this. If they were, that would itself need to be factored in, and quite possibly lead me to prefer specks.
>If it's a negligible fraction that scales, and I could plausibly think that any sane person should donate that fraction of their money to save a person from torture then yes.
Each person isn't donating to save someone from torture; they're donating to avert a 1/3|||3 share of the torture.
Let's rephrase the original question to zero in on that last point. There are 3|||3 people. You choose the number of people who donate one dollar, which can be any number between 0 and 3|||3. After you make a decision, one of those people is chosen at random, and they are tortured iff they did not donate.
If you think about it, this results in the exact same outcomes in either choice, except you have more options than all or nothing. You basically choose the probability of torture.
To be consistent with your previous view, you'd need to not pick zero donors. So for at least some people, it should be worth it for them to pay 1 dollar to avoid a 1/3|||3 probability of torture.
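Here's a quick sketch of that reformulation, with toy numbers of my own since 3|||3 doesn't fit in any machine: picking how many of the N people donate just trades dollars spent against the probability that the randomly chosen person is tortured.

```python
def outcome(n_donors, n_people):
    # If n_donors of n_people donate $1, one person is then drawn uniformly at
    # random and tortured iff they did not donate.
    p_torture = (n_people - n_donors) / n_people
    return p_torture, n_donors  # (probability of torture, dollars spent)

N = 10**9  # stand-in population; the thought experiment uses 3|||3
for k in (0, 10**6, N):
    p, cost = outcome(k, N)
    print(f"{k:>10} donors -> P(torture) = {p:.6f}, cost = ${cost}")
```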
This dissolves, as you may have noticed, into Pascal's Wager. (Or Pascal's Mugging, more precisely, which was coined by Yudkowsky.) What if I tell you that unless you give me a dollar, I'm going to torture you for 50 years? The probability of that being true is more than 1/3|||3 (and if you disagree, then you are way too overconfident for life. 3|||3 is a huge number, and there are no conclusions I can think of in which I'd place that much confidence. We don't even have anywhere near the kind of raw data to draw any conclusion that confident.) So are you willing to give up a dollar to avoid the >1/3|||3 chance that I'm telling the truth?
If not, and all or most of those people in the problem would answer the same, then your previous rationale falls apart.
(Oh, and this is not the real Pascal's Mugging; that's much harder to deal with. But let's stick with the easy stuff for now, shall we?)
> Nobody is told about any of this. If they were, that would itself need to be factored in, and quite possibly lead me to prefer specks.
Exactly. My whole contention is that the reason this question is considered unintuitive by so many people is that they're really considering a different question to the one you think you're asking.
I know that that is the formulation of the question. But since you're asking me to make the choice, you're asking me to inflict suffering on many extra people who would not have suffered otherwise. If what is gained by their suffering is large enough compared to that suffering that I believe they all (or nearly all) would have chosen voluntarily to accept the suffering for the benefit, then I am happy with the choice, and my best model for that is what suffering I would be prepared to undergo for what benefit. I think the confusion is between evaluating the utility of two possible alternatives (where obviously the world with fewer people suffering is better) vs evaluating the utility of two possible alternatives where those are the only two alternatives. If it's just two of many alternatives, the utility placed on a meaningful sacrifice doesn't come into it, but if it's a forced choice, the utility placed on a meaningful sacrifice does.
But that's assuming that unpleasantness per individual is a continuous function always distinguishable from normal human experience, rather than something like a line eventually rising from the discontinuous murky soup that contains all the constant minor annoyances of being human (itchy nose, wedgies, the feeling of the back of your tongue against the roof of your mouth, one shoe being a little tighter than the other, etc) that the brain is well-optimized to tune out and rapidly forget about after the fact.
Even if it's discontinuous, it still needs to grow incredibly slowly. The ratio of pain neurons that fire for a speck versus for torture over 50 years is nowhere near 3|||3. You need a function that grows so slowly that even a factor of 3|||3 doesn't outweigh the torture, which is damn near impossible for any plausible function.
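To give a sense of how hopeless that is, here's a small illustration with numbers of my own: each log only strips one level off a power tower, and 3|||3 (i.e. 3^^^3) is a tower of 3s about 7.6 trillion levels tall.

```python
from math import log

x = 3 ** (3 ** 3)   # a tower of height 3: 3^3^3 = 7,625,597,484,987
print(log(x))       # ~29.7: one log removes one level of the tower
print(log(log(x)))  # ~3.4:  two logs, two levels
# 3|||3 is a tower roughly 7.6 trillion levels tall, so any "slowly growing"
# discount on per-person suffering is swamped long before you get near it.
```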
I care about impact on people, not neuronal activity, and the point I'm making is that there's a minimum below which the real impact on a given person, considered in aggregate with the impact of everything else that's a part of being human, is effectively nil.
Actually my decision probably depends on the person. [cough]
But anyway. This isn't even philosophy - it's a digital remix of medieval scholasticism pretending to be philosophy.
The irony is that politics proves empirically that ideas actually can be dangerous and harmful. And some ideas - actually narratives - can be very dangerous and harmful indeed.
There's over a century of "persuasion technology" (Bernays, etc) that exploits this.
Nothing I've seen Y write deals with the problem of politics as a social exploit in an insightful - never mind a useful - way.
Meanwhile real people are being tortured in real ways. What's his proposed rational solution to that problem?
I notice you didn't actually say at which point you would prefer torturing (10^100)*X people for Y years each, over torturing X people for Y+0.0000001 years each, for X and Y at least 0.0000001.
(You may assume you don't know anything in particular about these people, other than that they are adult humans.)
What the heck? Sorry, I didn't understand any of this. Are you saying that with real moral issues, it is always trivially wrong to torture? This seems simply false.
If I capture person X and X's laptop Y, and X tells me under no duress that Y contains the location of a nuclear bomb that X has placed in a major city, and I have other strong evidence that this is true, but X refuses to give me the password to Y; then it is moral (but rightly illegal) for me to torture X for the password to Y.
Torture is not literally incommensurate with any other bad thing. Then the question arises, how do we, in full generality, determine which is the greater of two evils (or the better of two goods)? The torture vs. dust specks thing is supposed to disabuse people of the unhelpful notion that some things are somehow incomparable in terms of goodness and badness.
I think the dust speck argument would be better if first framed as a preference for yourself:
If you were going to live for 3^^^3 lifetimes, would you like to be tortured for 50 years in exchange for one fewer dust speck in your eye during each of those lifetimes?
I think I can make that position seem more reasonable if framed another way: Would you risk a 1/(3^^^3) chance at 50 years of torture, in exchange for getting rid of one dust speck out of your eye? I think most people would say yes. You couldn't even get out of bed in the morning if you weren't willing to take even incredibly small risks.
There is a much higher probability than 1/(3^^^3) that you could get in a car accident with injuries that cause 50 years of incredible pain, yet you will still probably risk driving for even trivial things.
And if you lived for 3^^^3 lifetimes and took this risk each time, you would likely suffer 50 years of torture during at least one lifetime.
Yes, I agree, I would take the risk, but those are two entirely different things. It's not the same argument framed in a different way at all.
In Yudkowsky's argument, it's the option between one person _absolutely guaranteed_ to have 50 years of torture, vs 3^^^3 people _absolutely guaranteed_ to be dust-moted. I think the guarantee changes things significantly.
If you consistently take low risks that might result in getting tortured, then you are pretty much guaranteed to eventually lose the bet and suffer. The probability of eventually losing even a small bet approaches 1 if you take it enough times.
Just think of it as if you were going to other people and deciding to take this bet for them. 1/3^^^3 chance of torture in exchange for removing a speck of dust from their eye. I think you would be ok with that because it's an obvious choice for ourselves.
And if you continue to do this for enough people, eventually one of them will get tortured. After removing dust specks from approximately 3^^^3 people's eyes. But you are right, there is a chance no one will get tortured. So do it for 3^^^^3 people then, and the probability is 0.99999...
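The arithmetic behind "take the bet enough times and you're all but guaranteed to lose at least once" is just 1 - (1 - p)^n; here's a quick sketch with a stand-in probability, since 1/3^^^3 can't be represented on a computer:

```python
from math import exp, log1p

def p_at_least_one_loss(p, n):
    # P(lose at least once in n independent bets, each with per-bet probability p).
    return 1.0 - exp(n * log1p(-p))

p = 1e-18                              # stand-in for 1/3^^^3
print(p_at_least_one_loss(p, 10**18))  # ~0.63 when n == 1/p
print(p_at_least_one_loss(p, 10**20))  # ~1.0  when n >> 1/p
```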
There aren't 3^^^3 atoms in the universe, let alone people. I think it's absurd to complain about a system not holding up when stretched far beyond the boundaries of the universe.
If you decrease the number to even the number of humans that ever existed/exist/will exist, then it returns to being negligible in relation to torture.
The problem with an if -> then is that when the 'if' is flawed, it doesn't really matter what the 'then' is. The human mind can't grasp 3^^^3 dust motes because it doesn't exist in any meaningful way.
That last point about the models actually captures a rather interesting point. Generally I agree that if the model isn't giving results that match observation, then it should be concluded that the model is incorrect rather than the observation flawed. This has worked very well as the scientific method, but I think that it hinges on the fact that these observations are objective and verifiable.
Now I happen to agree with you that this dust result feels incorrect, but morality is so undefined and subjective that I can't help but disagree with your conclusion that the model must be wrong. The entire moral question of dust vs torture doesn't have one defined answer, so we can't really conclude anything more than opinions about the model.
I quite liked his introduction! Be aware, however, that he dramatically overstates some things. For example, he claims that the many-worlds interpretation is obviously correct. This is basically wrong: it may be correct, but it certainly isn't obviously correct, and many scientists whose entire research is dedicated to quantum mechanics would disagree that many-worlds is correct at all.
There is a long list of alternative explanations[0], with very little evidence to lend support to or contradict any of them (at least, any of the ones that are still around. Some have fallen by the wayside).
I think it's fair to say that it's "obvious" when optimizing for the same things Yudkowsky does. For instance, any issue having something to say about the Born rule seems to be his dump stat, if you will. This makes sense, because in the paper where Max Born introduced the rule, he suggests that it indicates only one possible correct interpretation (hint: it was not Many Worlds).
That all being said, no one really disagrees about the predictions of the math, and at least many-worlds attracts fewer completely ridiculous misinterpretations from lay-people, so both ends of the spectrum are tolerable to me.
The linked paper does not suggest that Martians built nuclear reactors on Mars, which then melted down. The paper suggests that there were "large fireballs in the atmosphere such as Tunguska-like events, with mid-air explosions, but of much greater energy release than Tunguska".
It is worth pointing out that at least the "natural reactor" theory is not crazy: http://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor Though it is unclear to me how such a process could go critical, in much the same way that one of the biggest challenges with building a real fission bomb is not so much getting it to explode, but keeping it from exploding gently.
Whether we have the data to parse out something to this level of detail on Mars when we're still arguing about what killed the dinosaurs on this planet I'm substantially more "meh" about. But it is not, intrinsically, impossible or stupid.
The same goes for the alien hypothesis... but even taking the possibility seriously, I'd submit that A: we don't have enough evidence to eliminate the possibility that Mars could have once had an intelligent civilization but B: even moreso, let me underline that, even moreso, we have no evidence to suggest that it ever did. Such speculations would be pure science fiction right now. Right now we still know very little, full stop.
And I'd observe that the stories about these civilizations, viewed through modern technological eyes, have some really weird aspects to them, such as: why would a civilization with technology capable of destroying planets (and, in this case, really, really destroying them, Death Star-style, not merely sterilizing the planet which is literally ten+ orders of magnitude easier [1]) only settle on Earth after the disaster? We're not planning on waiting for Earth to go bad before heading to Mars... we're pretty much only blocked on the requisite tech and on cosmological scales the instant we have it we'll be there. There's little reason to believe that a technological civilization of that scale would actually be destroyed even by its home planet going up in smoke. I'd submit the most likely hypothesis is that they were indeed written by humans thousands of years ago, who ultimately had no idea what technology was going to look like. (Heck, even we futurists are still only grasping at smoke in terms of what we'll have 50-100 years from today, to say nothing of trying to guess thousands of years ago....)
The alien genocider theory doesn't pass the smell test.
Why have they not bothered to come get us yet? We have our own nukes. They waited a little too long, or so it would seem.
Now, if they wiped out Mars with a designer black hole, or something of that nature, I'd say that we're still not too big for our britches for them to come do the same... but if they're just lobbing atom bombs, then they're overdue.
It's almost certainly some sort of natural, random event. If not another Oklo, then something not so different from it.
"Why have they not bothered to come get us yet? We have our own nukes. They waited a little too long, or so it would seem."
Oh, any alien genocider that may or may not exist would still be thoroughly unimpressed by our ability to fight back. Anything that could cross the stars in any period of time since civilization started need simply ram Earth to wipe humanity out as we know it. Call it a 10000 kilogram craft travelling at one-thousandth the speed of light from Alpha Centauri, setting sail 4000 years ago or so; if that simply rammed Earth it would be ~150,000 Hiroshima bombs [1]. That's pretty conservative for a genocider's capabilities, really, too. Obviously they're not right here, so anything that can travel here in time to get us is also a weapon that can wipe us out.
(This is to say nothing of the extreme opposite end of the scale and what a mature nanotechnology ought to be able to do, even without Drexlerian extremes. I sometimes ponder "Our entire civilization was uploaded in its sleep ~3000BC and the real Solar System has long since been converted to computronium, and the simulated universe is lifeless to keep the processing simple." Pick your date of upload to suit your taste.)
To be clear, while I remain open-minded my current "top-probability" pick for resolution to the Fermi Paradox is "life is far more rare than science currently guesses". But it's still fun to discuss the alternatives and it's not like I could put that even remotely near 100%... it's just my best guess in a field where we have virtually no data, and YMWV.
They're too busy trying to keep warm on Planet X, which has an extremely eccentric orbit that keeps them well out into the far reaches of the Solar System most of the time. They're kept warm by engineering a blanket of aerosolized gold particles in their atmosphere. They came to Earth the first time and enslaved us to mine for more gold. After the rebellion, they decided it was too hard to keep us under control and withdrew to their planet, which has since retreated into the cold depths. Soon, it will return and they will need more gold...
[This is all based on the nonsense Planet X theories that were going around. There was a good serialized YA novel that mined those ideas for plot, along with other Ancient Aliens nonsense, to quite good effect.]
I don't think it says anything that no aliens have destroyed us yet. If such aliens existed and such an event took place in our solar system, it would still have been a long time ago. I don't think it's fair to assume that that alien race would just go around and kill every planet with life, like a child stamping on ants.
It would be far more likely that they would be at least as sophisticated as humans, with emotional and strategic reasons, complex governments that change every now and then, etc. By now they would probably be entirely different people from back then, just as our ethics evolve and human empires rise and fall over a few centuries.
I don't think it's a very likely theory, but I don't see a general problem with it.