The arguments are the same as today: We don't want more controls and oversight, because our enemies could use our limitations against us to neutralize our capability, and in any case strict military discipline and comprehensive personnel screening will eliminate abuse. Sound familiar?
There is one very significant wrinkle, though: The difference between ongoing practices and hypothetical, emergency actions.
In the former case--ongoing practices--it is hard to make a credible argument against oversight.
Some details may need to be withheld from the public to protect basic operational security. For example, the identities of overseas informants ought not to be disclosed, lest they be immediately imprisoned or executed by their governments. Nor should there be a Twitter feed giving up-to-the-minute location data on US special forces teams. These examples seem farcical, but they illustrate the point that at least some operational details genuinely deserve secrecy. (Unless you reject the underlying premise that the United States should conduct intelligence and military operations overseas at all. But that's a very different discussion.)
But even though specific details may need to be kept secret--for a reasonable amount of time, until they're no longer actionable by enemies--the nature of the government's ongoing practices ought to be disclosed.
The latter case--wherein lower-level officials are permitted to make judgment calls in time-sensitive emergencies--is rather different. This is the situation contemplated by the White House during the Cold War.
As the reasoning went, the American nuclear deterrent must be credible, or else the Soviets would seize the opportunity and launch an unprovoked attack on the US, Germany, or both. If a communication breakdown between the White House and the military could render the nuclear arsenal unusable, that would tend to diminish the credibility of the deterrent. On the other hand, the risk of a lower-level official making the wrong judgment call and starting a world war is appreciable.
Given these considerations, reasonable people could disagree on what the optimal policy would be. It is not unreasonable or morally repugnant to conclude that certain trusted military officers should be granted the authority to make a judgment call when a) the situation is an emergency, and b) the chain of command has broken down.
Again, the key distinction here is between ongoing practices and emergency measures in a hypothetical, worst-case scenario. That distinction must affect our moral judgment of the respective policies.
In addition to that, I think it's also important to understand exactly what the realistic worst-case scenarios actually are for any given situation.
For the nuclear deterrent, the worst-case scenario was basically the end of the world as we know it. Either the USSR takes over the world, or global civilization is entirely wrecked. I'm not sure which one would have been considered worse. Either way, hundreds of millions, perhaps billions, die.
For counter-terrorism operations, the worst-case scenario is orders of magnitude smaller. There appears to be no credible threat of any terrorists obtaining WMD, so they're pretty much limited to casualties in the thousands.
I think that much of our current woes come down to applying cold-war thinking to terrorism. The situations just aren't comparable. We were facing a true existential threat from a powerful enemy that outclassed us in many ways. Now, we're facing a minuscule threat that can, at most, kill a small number of our citizens from time to time. There were realistic scenarios that end with, "and the US was destroyed/defeated by the USSR", but there are none that end with, "and the US was destroyed/defeated by al Qaeda".
> For the nuclear deterrent, the worst-case scenario was
> basically the end of the world as we know it. Either
> the USSR takes over the world, or global civilization
> is entirely wrecked. I'm not sure which one would have
> been considered worse.
I personally don't find this a difficult question. A half-destroyed world controlled by the USSR would have been far better than a full-destroyed world.
The point of the entire US atomic weapons arsenal was a murder-suicide pact against International Communism. If you don't burn the planet after a Soviet victory you may as well have not built it in the first place.
MAD was/is all a mental game to avoid a first strike. But once that happens, the calculation is moot. In the event of a surprise attack by the USSR, the better result for the planet and humanity would have been for the US not to fire its own missiles.
I find it interesting that everyone always talks about surprise attacks by the USSR - in reality the USSR spent most of the Cold War terrified of a strike by its enemies (both the West and China). Indeed, as far as I know only one side in the Cold War had military leaders who wanted to start a nuclear war - guess which one?
NB The only possible exception to this was the Able Archer 83 incident when the Soviets thought the US was going to conduct a first strike ("Evil Empire" rhetoric etc.) and they almost pre-empted this with their own strike - but that was the terror of senile geriatrics, not a deliberate plan.
> as far as I know only one side in the Cold War had military leaders who wanted to start a nuclear war - guess which one?
I'd be interested in seeing a citation for this fact.
Edit: Curtis LeMay secretly advocated preemptive strikes if it became clear the Soviets were preparing a first strike. This is different from "wanting" nuclear war.
The reference I specifically remember (although I have read others) is in "Dark Sun" by Richard Rhodes. LeMay ordered overflights over the USSR and commented:
"Well, maybe if we do this overflight right, we can get World War III started."
I don't have my copy of Dark Sun handy, but there is a reference to the incident in this review:
LeMay's belief in a "preventative" war was oddly rational - he believed that a conflict between the US and the USSR was inevitable, and therefore that the US should strike before the USSR posed a strategic threat to the US (which it didn't really do until the mid 60s at the earliest).
Rhodes writes about the Cuban Missile Crisis:
"If John Kennedy had followed LeMay's advice, history would have forgotten the Nazis and their terrible Holocaust. Ours would have been the historic omnicide".
LeMay was very clear that the US should have struck first with nuclear weapons before the USSR had nuclear capabilities. He was a die-hard partisan of using force when you had the edge over your opponent.
A lot of people who made their bones in the Cold War are in control of the government, so "if all you have is a hammer" becomes the plinth of US foreign policy.
They also define PTSD as a state of mind oriented around preventing something that has already happened. Perhaps in this case, the end of the Cold War.
I mean, why not though? Why is it OK for the government to have any secrets? In contrast to a person, the government is an entity that is representative of and responsible to the people it governs (at least it should be, here). How can it be responsible or representative if it has secrets?
Maybe we shouldn't be sending out special forces teams or covert operatives. If there is something the government wants to keep secret, perhaps they shouldn't be doing it.
> It is not unreasonable or morally repugnant to conclude that certain trusted military officers should be granted the authority to make a judgment call when a) the situation is an emergency, and b) the chain of command has broken down.
Or perhaps systems should be built with redundancy and decentralization. It shouldn't be up to any one person to make the decision to launch a nuke, or any other attack. We have methods such as Shamir Secret Sharing that would require multiple people to agree to something before being able to provide something to authorize a use of force.
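To make the threshold idea concrete, here's a minimal sketch of Shamir Secret Sharing in Python (obviously not real launch-authorization machinery - the prime, share counts, and "launch code" are all illustrative): a secret becomes the constant term of a random polynomial, each party gets one point on it, and any `threshold` points recover the secret while fewer reveal nothing.

```python
import random

# A prime larger than any secret we'll share; all arithmetic is mod this prime.
PRIME = 2**127 - 1  # the 12th Mersenne prime

def split_secret(secret, n_shares, threshold):
    """Hide `secret` as the constant term of a random polynomial of degree
    threshold-1; each share is one point (x, f(x)) on that polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange-interpolate at x=0: with at least `threshold` points the
    constant term (the secret) is recovered exactly; with fewer, every
    possible secret is equally consistent with the shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # den is invertible mod a prime (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Split a hypothetical authorization code among 5 officers; any 3 suffice.
shares = split_secret(42, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == 42
assert reconstruct([shares[0], shares[2], shares[4]]) == 42
```

Note this sketch also shows the failure mode discussed later in the thread: if fewer than `threshold` share-holders can reach each other, reconstruction is simply impossible, which is exactly the communications-breakdown concern.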
> Again, the key distinction here is between ongoing practices and emergency measures in a hypothetical, worst-case scenario. That distinction must affect our moral judgment of the respective policies.
Unless that "hypothetical, worst-case scenario" is the government going awry and surveilling its entire populace and killing its citizens without going through any process set down by the constitution of said country. In that "hypothetical, worst-case scenario", is it really OK to be giving the government more power and leeway while decreasing the transparency?
I really don't understand the all or nothing argument.
Can you genuinely not envision a situation in which we would need secrecy, at least for a time? If for nothing else, to protect the people who are looking out for our interests overseas? Like it or not, covert operations and intelligence are important to the long-term operations of a sufficiently large entity. How else will we know what our enemies are planning?
I would argue that your view of 'no secrets at all' is just as ridiculous as the government's current approach to our data. It's incredibly naive and comes across as sophomore-level debate. Saying that we can have no secrets is giving up embedded operatives dealing with people who genuinely have bad intentions. Saying that we can have no secrets would have dramatically changed the outcome of the landings at Normandy and WWII in general.
That black and white style of argument is ridiculous; that's what I'm saying.
> If for nothing else, to protect the people who are looking out for our interests overseas? Like it or not, covert operations and intelligence are important to the long-term operations of a sufficiently large entity.
Just because that's the way things have always been done, does that make it a requirement? Why do we need to spy on others? Why do we need to launch covert attacks on others?
> Saying that we can have no secrets is giving up embedded operatives dealing with people who genuinely have bad intentions.
Yes, I understand that.
> Saying that we can have no secrets would have dramatically changed the outcome of the landings at Normandy and WWII in general.
Did it really? I mean, none of that was due to the Axis powers pretty much dropping the ball, fighting a war on two fronts, and a Führer who didn't listen to his generals (yes, partially because he believed the disinformation we fed)? I'm not saying that secrecy wasn't important to how D-Day was done, or to Operation Fortitude (the deception operation conducted before Neptune). We did, however, suffer heavy casualties in it, and the Germans couldn't move their equipment fast enough to combat us for the reasons I said earlier.
It wasn't simply that we kept it a secret and it went off without a hitch.
> That black and white style of argument is ridiculous; that's what I'm saying.
I agree, however, defining that gray is extremely tough. We have attempted to define it in a very narrow context, and look what we're dealing with now. Absolutes are ridiculous, abuse of power because grey is hard to define, though, is bad as well.
Which is worse? A government unable to take secretive (hostile) action against other states (or its own citizens), or one in which we have mass surveillance, secret (kangaroo) courts, torture, and secret executions of people not deemed by a public court to be a threat?
There was a raging argument in the German high command about the best way to combat the allied landings. The key sticking point was whether extremely forward placement of troops (and tanks) was better than rear placement, where the initial sea bombardment could be avoided. The air supremacy enjoyed by the allies made rear placement tricky, as movement of troops was limited during daylight. Neither option was great. Rommel lost out, and many troops were held in reserve back from the coast. However, he was further hamstrung by Hitler keeping key units in reserve after the initial landings, as he feared a second landing at the Pas de Calais. Had there been certainty about the landings, reserves could have been appropriately placed ahead of time and the Atlantic wall further bolstered (it was built pretty damn fast anyway). The bad weather at the time hampered landing of equipment and supplies, and a big German push could have been very effective. Secrecy and deception were key parts of the plan and contributed in a big way to keeping decent numbers of German troops away (Operation Mincemeat is fascinating - a dead man made a very significant impact on allied success: http://en.m.wikipedia.org/wiki/Operation_Mincemeat).
While having 2 fronts was a problem for the Germans, the western front wasn't the problematic part. The war was lost on the eastern front. Not the best source, but it is a short answer that is congruous with other stuff I have read. http://wiki.answers.com/Q/How_many_of_the_German_casualties_...
> Which is worse? A government unable to take secretive (hostile) action against other states (or its own citizens), or one in which we have mass surveillance, secret (kangaroo) courts, torture, and secret executions of people not deemed by a public court to be a threat?
Which is worse? A bullet in the head, or a malignant, metastatic, but slow-growing cancer? Neither is good, you understand, but if I'm forced to choose between them, I'll pick the more survivable one every time.
I guess you're saying not being able to conduct covert action against other states (and your own citizens) is like taking a bullet to the head, in that it's fast and offers no hope, while the increase of mass surveillance is like a slow-growing cancer that you can hopefully survive.
I think they are both like slow-growing cancers, and that both can be survived. Maybe overt actions are enough. I don't think the situation today is similar to WWII.
Certainly the situation today isn't similar to World War II. But that's not the only sort of situation in which covert intelligence is of value; how better, after all, to know what overt action to take?
It would certainly be very convenient if secrets were unimportant in a war, or in contests between adversarial nations generally. Everything above-board, all cards on the table, eh?
However, as other comments have mentioned, that certainly wasn't the case in either of the World Wars, especially the second one. And it probably hasn't been the case in almost any war.
There are two basic problems with your thread of argument:
1. No modern government of importance will give up dealing in secrets, and
2. No government of importance has ever tried.
So if the only hope is that governments swear off secrets, I think most would agree we're out of luck.
Fortunately, there are probably other options besides the either-or choice of "no secrets" or "our current worsening dystopia."
> Fortunately, there are probably other options besides the either-or choice of "no secrets" or "our current worsening dystopia."
I agree. We need to work to find a better choice. However, I believe the better options fall closer to absolute transparency then they do the current system.
> There are two basic problems with your thread of argument:
> 1. No modern government of importance will give up dealing in secrets, and
> 2. No government of importance has ever tried.
That's part of the problem. Simply because everyone thinks it's OK for governments to deal in secrets makes it OK. Noöne seems willing to conceive of a system where secrets are the exceptional, special case and not the norm.
And yeah, I'm not sure why I was trying to say secrets weren't important even after I said myself secrets and deception (helped/caused) conflict about where to place troops for the Germans.
We can't avoid the collection of secrets without talking about the power of secrets. Governments collect secrets because uncollected secrets can actually be very powerful.
One of the sources of power that a secret gives you is the element of surprise.
Probably the scariest instance of this, to a government tasked with providing security to its citizens, would be a unique, valuable operational capability, particularly one that has to do with technology, that one country develops in secret.
They are paranoid about this because of the lessons of the 20th century. Sonar, fighters with jet engines, radar, and so very many other things. All the way up to something as dramatic as the atom bomb. All were secret at some point, and all conferred tremendous exclusive operational advantages at one point.
So that's one extreme and obvious example of the value of secrets: secret operational capabilities can mean that when conflict comes between nations, the nation with no idea of its neighbors' secrets could be terribly unprepared.
There are solutions to that that could avoid the need for secrets, but they're impractical: stop having wars, or get all nations to tell each other what their capabilities are (and would they be believed anyway?)
Looking for and collecting lots of secrets is the only way that governments striving for security can do their job, because you don't know what you don't know. Unless you look.
That's a big problem. And changing that value proposition seems unrealistic for humans -- or at least, it seems like the kind of change that hasn't happened since they started living in groups!
Realistic solutions to the problems raised by the Summer Of Snowden probably have more to do with drawing lines about what secrets are out of bounds, and trying to establish better oversight, than in trying to get rid of Intelligence by making governments transparent.
> Simply because everyone thinks it's OK for governments to deal in secrets makes it OK. Noöne seems willing to conceive of a system where secrets are the exceptional, special case and not the norm.
This is frankly confirmation bias. Secrets are the exception in our government, and in most governments, because they are by nature public entities. It's just that you place very little value on the other information out there (and so does the government, and so do other governments, which is why they're not secret in the first place).
The topic of these conversations starts out with the presumption that the secrets are the important bits to discuss. That makes it incredibly difficult to recognize the presence of other pieces of information right in front of your eyes, because it looks like background noise.
Unlike mr_luc, I do have a solution, but it's not a politically feasible one: a world government. There's a quibble there about the need for things like wiretaps for law enforcement, but I'd be willing to bet that it's a lot less necessary to have operational security even for things like that.
> I mean, why not though? Why is it OK for the government to have any secrets?...Maybe we shouldn't be sending out special forces teams or covert operatives.
This gets at the heart of the issue. As I mentioned in my parenthetical above, the arguments in favor of operational security are only relevant if you assume operations are happening in the first place. If you philosophically reject all warfare, espionage, and law enforcement, then there is no legitimate reason for a government secret. However, it's no light matter to reject those things altogether. It implies a society whose structure is fundamentally different from our own. I'm not saying it's impossible (nor am I saying it is possible), just that we're now tremendously magnifying the scope of our claims. We're no longer talking about a particular government policy (nuclear launch decisions), but a complete rewrite of society from the ground up.
> Or perhaps systems should be built with redundancy and decentralization. It shouldn't be up to any one person to make the decision to launch a nuke, or any other attack. We have methods such as Shamir Secret Sharing that would require multiple people to agree to something before being able to provide something to authorize a use of force.
That doesn't address the problem the White House faced in the Cold War. Namely, how do you ensure the nuclear deterrent is credible even when lines of communication have broken down? If you rely on Shamir Secret Sharing, then retaliation would be impossible if the secret holders are unable to communicate with each other.
> Unless that "hypothetical, worst-case scenario" is the government going awry and surveilling its entire populace and killing its citizens without going through any process set down by the constitution of said country. In that "hypothetical, worst-case scenario", is it really OK to be giving the government more power and leeway while decreasing the transparency?
You're shifting the meaning of "worst-case scenario." I used the term to refer to a physical threat, external to the government, which the government is responsible for defending against. You're using the term to refer to a pathology of the government itself. So, the scenario you envision is not the one to which I was referring.
>If you philosophically reject all warfare, espionage, and law enforcement, then there is no legitimate reason for a government secret.
Law enforcement doesn't need to be based on secrecy. I never said you have to broadcast that you're investigating someone, just that the information about who, how, and why you're investigating someone be reasonably public (going through real courts to get warrants and such).
As for war, if we're in a defensive war we can chat about not letting the army put their immediate plans up for all to see.
Ditto with the case about the Olympics in a sibling thread. I see the usefulness in _short-term_, someone's-life-is-in-danger (not because the state put them there), secrecy.
> That doesn't address the problem the White House faced in the Cold War. Namely, how do you ensure the nuclear deterrent is credible even when lines of communication have broken down? If you rely on Shamir Secret Sharing, then retaliation would be impossible if the secret holders are unable to communicate with each other.
The basic idea would be to have redundancy in channel, location, and mode, since we're talking about a case in which the top of the chain of command has been ripped off, leaving many higher-ranking officers at the top of the new chains. An attacker wouldn't be able to take out all of the communication network (all routes for all modes all over the US) simultaneously, especially if we're not talking about superbly massive bombings, in which case there would/should be detection.
Also, to be credible, you're assuming the US is telling others about their system (so that the USSR knows/will think that it can't simply take out the president and escape retaliation). Otherwise it's all just speculation, and in terms of deterrence it doesn't matter what the actual system is.
So instead of saying "in a communications breakdown everyone does what they think is best, so someone is going to retaliate" it's "there can't be a communications breakdown because of redundancy in mode, route, and location; so in the event of a collapse of the chain of command, it can only collapse so far and hence retaliatory action is still possible."
This just seems better in practice as well. There is the Dr. Strangelove example of someone just taking power, and one can also imagine that someone innocently loses communication and assumes the worst.
> You're shifting the meaning of "worst-case scenario." I used the term to refer to a physical threat, external to the government, which the government is responsible for defending against. You're using the term to refer to a pathology of the government itself. So, the scenario you envision is not the one to which I was referring.
Why must it be external, though? Both you and I are concerned about how the government should operate in a manner that protects the populace; I'm just saying that from the populace's POV, the government is/can be an external threat. Look at Kiev right now.
Yes, there will be (exceptional) times when secrets are needed; I get that. Those should be exceptions, and not the rule, as it feels like it is now.
I just think that erring on the side of transparency, and not doing actions that would otherwise be limited by that transparency (e.g. covert operations), is a better option for dealing with external _and_ internal threats than secrecy is.
Also, an ideal would be for emergency measures to simply flow from the normal operational measures. In the nuke example, forcing some subset of commanders to agree that they've received orders to launch nukes (and, by participating, implying that they agree with those orders and have verified them) before nukes could be launched. The emergency situation is then simply that instead of confirming an order from above, they confirm that they all agree that the situation is such that they should launch.
> Law enforcement doesn't need to be based on secrecy. I never said you have to broadcast that you're investigating someone, just that the information about who, how, and why you're investigating someone be reasonably public (going through real courts to get warrants and such).
There's a difference between judicial oversight and public oversight. The court may issue a wiretap authorization, yet that will probably remain secret for as long as it's in effect. Otherwise, the wiretap will be largely useless. You can't go to your local courthouse and demand a list of currently active wiretaps. It's not public record.
> The basic idea would be to have redundancy in channel, location, and mode, since we're talking about a case in which the top of the chain of command has been ripped off, leaving many higher-ranking officers at the top of the new chains. An attacker wouldn't be able to take out all of the communication network (all routes for all modes all over the US) simultaneously, especially if we're not talking about superbly massive bombings, in which case there would/should be detection.
What you're proposing is maybe practicable today. I doubt it would have been at the time in question. Technology was very primitive then, in comparison to what we have now.
> Also, to be credible, you're assuming the US is telling others about their system (so that the USSR knows/will think that it can't simply take out the president and escape retaliation). Otherwise it's all just speculation, and in terms of deterrence it doesn't matter what the actual system is.
The assumption is that the enemy has some insight into your system. In which case it really does matter what your system is.
> Why must it be external, though? Both you and I are concerned about how the government should operate in a manner that protects the populace; I'm just saying that from the populace's POV, the government is/can be an external threat. Look at Kiev right now.
Again, you're arguing about a different issue than the one I am. I agree that a government can become an enemy of its public. But while that's a plausible scenario, it's not the one I was analyzing. Thus my comment that you're shifting the meaning of the vocabulary to refer to a different concept.
"Controversy arose in early 2003, while Rivera was traveling with the 101st Airborne Division in Iraq. During a Fox News broadcast, Rivera began to disclose an upcoming operation, even going so far as to draw a map in the sand for his audience. The military immediately issued a firm denunciation of his actions, saying it put the operation at risk; Rivera was nearly expelled from Iraq. Two days later, he announced that he would be reporting on the Iraq conflict from Kuwait."
Somewhere between the generality of Murphy's Law and the specificity of Rule 34 is an observation that states that an idiot will always arise to complete an idiotic possibility.
> Nor should there be a Twitter feed giving up-to-the-minute location data on US special forces teams. These examples seem farcical
Prime real-world example: During the Munich Olympics hostage crisis, police prepared to raid the occupied rooms with overwhelming force (a la "SWAT team"). The raid was called off at the last minute when they realized reporters were broadcasting live video of the preparations & locations, and the terrorists were watching it on TV.
There is a difference between assuming that they are expecting you, and having your enemy know your every.single.move.as.you.make.it.
EX:
If I'm going to punch you in the back of the head, I'll sneak up behind you. You might know I'm there, you might not.
Now, if I come up behind you at the same speed while yelling, "I'M COMING UP BEHIND YOU AND I'M GOING TO PUNCH YOU IN THE BACK OF THE HEAD," you'll definitely know I'm coming, and I should expect a countermeasure.
I think there are two conflated questions here. I should ideally plan my attack as if my opponent knows my moves, where possible. I should not tell my opponent my moves, where possible. Just because it's not on the news networks doesn't mean the enemy has no way of observing you.
If we're boxing, or in another adversarial match, wouldn't you become suspicious if I seemed unaware that you were behind me?
This isn't a "surprise! we're here to check up on things"; this is something the kidnappers should be expecting and planning for (what else do they think will happen?).
Boxing is kind of a great analogy here, but not in the way you think.
Boxers have this concept called "telegraphing" to call out patterns of movement that indicate what they're about to do. It's one of the easiest ways to lose a fight because an opponent who can read your tells can react to them instead of to the move that they precede.
Even though both participants know that the other is actively attempting to attack them, there's still a significant amount of strategic value in concealing the specifics of imminent activity.
The difference was that they knew exactly where the team would break in. It's the difference between "there will probably be people trying to enter the room" and "I'll shoot that guy once he breaks the window over there".
Expecting, yes. Watching your every move in real-time, in 1972, no.
Remember, this was in the early days of television - not the continuous video/metadata surveillance state we have now, with commensurate capabilities available to any idiot with a motive and a few hundred bucks. Sure, the police assumed the terrorists were expecting a raid somewhere between right now & hours/days from now, involving a lot of heavily-armored police with lots of firepower, but they wouldn't know which second, or the exact arrangement & capabilities of the forces involved. Even seeing all the TV cameras around, the police hadn't learned to make the instant & automatic connection between a video crew and the TV set on in the hotel room about to get raided. This was the first time the targets of a raid were able to see that there were, in fact, right now, 12 cops standing just feet from the windows, and 5 approaching the front door carrying machine-guns, about to bash in the door with a battering-ram in 7...6...5...4...
You should expect it and plan for it, but when you know it your actions have to change. If I know you're about to kick down my door, I have far more options than if I know "in the next hour they'll probably kick down my door". The information completely alters the dynamic. Even in games with no hidden information (such as chess or go), you don't know what my next action will be (except in a few cases where my options are severely limited); you only know the probabilities. What if we're raiding a house for some reason (like in Munich), and there are 4 doors and a dozen windows, with 5 gunmen inside? If they know which doors we're about to go through and which windows we're ignoring, they can either effect an escape or mount a more effective defense. Our raid is going to be far less effective if we announce our plan to the opponent instead of making them guess at it.
Ah, but that's about cryptosystems. And its application is really about the continued use of a thing. Kerckhoffs's principle doesn't apply so much to a one-off situation. Sticking with a house raid:
We have a plan to raid this house using these entrances, covering those exits with these 10 people. The adversary knowing our plan screws us, but learning it after the fact doesn't help our adversary.
But what if we use the same tactic every time? What if we always cut power immediately before breaching? That's a signal to the adversary. What if we always use the same sniper, and he has a known handicap? If the adversary learns these things, then they can break (or at least reduce the effectiveness of) our efforts to conduct the raid. And this is where Kerckhoffs's principle could be applied.
EDIT: It's not secrets that are the problem, it's keeping the wrong secrets. You can't have my passphrase or my PGP private key(s), but, sure, you can have the AES and RSA algorithms and my public key(s).
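To make that concrete, here's a minimal Python sketch of Kerckhoffs's principle in action - using the standard library's HMAC rather than PGP, purely for illustration. The algorithm is completely public; security rests entirely on the key, which is the one "right secret" to keep.

```python
import hmac
import hashlib
import secrets

# The algorithm (HMAC-SHA256) is public and standardized; per
# Kerckhoffs's principle, the only secret is the key itself.
key = secrets.token_bytes(32)

def sign(message: bytes) -> bytes:
    # Anyone can read this code; without `key` they can't forge a tag.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information via timing
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"raid at dawn")
print(verify(b"raid at dawn", tag))   # True
print(verify(b"raid at noon", tag))   # False: tampering detected
```

Publishing the code above costs you nothing; publishing `key` costs you everything - which is the whole distinction between wrong secrets and right ones.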
Sometimes a fragile plan is still the best one you can have. Life doesn't always give you good choices. Sometimes you're stuck choosing between a fragile plan where things can go horribly wrong if secrecy is breached, and one that's even worse.
> As the reasoning went, the American nuclear deterrent must be credible, or else the Soviets would seize the opportunity and launch an unprovoked attack on the US, Germany, or both. If a communication breakdown between the White House and the military could render the nuclear arsenal unusable, that would tend to diminish the credibility of the deterrent.
If the pre-delegation of Presidential authority was meant to increase the credibility of the deterrent, though, the Soviets had to know about it. Which makes you wonder - did they strongly deny it to the public whilst quietly informing the Soviet Union about the true situation?
The book from which this is adapted (Command & Control) is fascinating and terrifying. Everyone should read it. You'll wonder how it is that we've managed to escape truly catastrophic accidents with these weapons.
Command and Control kept me up reading well past when I should have been asleep many a night. I have a friend who grew up not far from the Damascus, AR silo, the story around which the book is structured, which made it that much more compelling.
After reading it, I'm convinced that sheer dumb luck has been the deciding factor in not having seen so very many unplanned nuclear detonations over the years.
Have you read "One Minute to Midnight: Kennedy, Khrushchev, and Castro on the Brink of Nuclear War"? I found it a lot more alarming than Command and Control (although I agree that Command and Control is an excellent book).
Another fun one is the Cuban Missile War timeline, a frightening alternate history of the Cuban Missile Crisis turned into nuclear war. I'm not an expert, but my impression is that it's pretty realistic.
The timeline diverges from reality when a Soviet submarine captain launches a nuclear torpedo at an American ship, which then gradually spirals into all-out nuclear war. This very nearly actually happened: the three main officers on the submarine had to unanimously agree to use nuclear weapons, and two of them were in favor. The third one managed to talk them down.
That reminds me of the novel Resurrection Day, which is set 10 years after a global war over Cuba and has a damaged US run as a military dictatorship, pretty much an international pariah due to the extent of the US attack on the Soviets (e.g. surviving members of SAC are regarded as war criminals).
Not a bad book - my main nitpick being that its scenario has Western Europe coming off fairly untouched by the war, which would have been extremely unlikely.
I read Command and Control in the span of a week (fast for me) over the Christmas holiday the other month, and it was enlightening (read: it's scary as hell that we mostly lucked out in not having a nuclear dead zone in the middle of the USA at this point in time).
One thing I did take away: the military kept adopting more extreme and dangerous measures - mainly, in my mind, the fully loaded bombers circling 24x7 over Europe - and tried to install fail-deadly mission objectives (i.e., circle Europe on your daily bombing run, and if you don't hear an all-clear message, assume the USA is being attacked and bomb the Soviet Union). One point that did seem to be made, though: for all of the idiocy and lack of control during the arms race, there were no actual nuclear detonations. So were the weapons of the time actually safe (even though many improved safety mechanisms were ignored by the military), or was it just luck, thanks to a small enough sample size and a short enough time span?
I ordered off Amazon these two books cited by Schlosser, just received them in the last two weeks, and am really looking forward to reading them:
The Making of the Atomic Bomb - Richard Rhodes:
A ~900-page book that is supposed to be the seminal history of the bomb; it's also apparently very detailed on the physics of the endeavor, which should be very interesting.
Dark Sun: The Making of the Hydrogen Bomb by Richard Rhodes is the follow-up to The Making of the Atomic Bomb and is even more interesting, as it tells the story of the Soviet bomb project alongside the US H-bomb project:
The story from Command and Control that sticks in my head is how, a few months after WW2 ended, the Truman Administration wanted to get an exact count of how many A-bombs the U.S. had available in their inventory. The answer came back from Los Alamos, "Hmm, we could probably put together one or two in a couple weeks."
This quickly led to a crash program in standardization of atomic (and later, nuclear) weapons and increasing their shelf-life.
Agreed. I've read pretty much every book I can get my hands on covering Cold War nuclear history and Command and Control has lots of new and terrifying details.
I don't wonder. It's simply that, while certainly no one is perfect, almost everyone is better than almost anyone imagines that anyone could possibly be.
I think you have a very, very optimistic view of humanity. It's entirely possible you and I have completely opposite experiences, because while I've known plenty of good, decent people, I've known more that fell somewhere between mediocre and horrid.
I'm also not sure how you could consume any sort of news for any period of time and still hold on to this belief. Of course we hear about the worst, and it's put as shockingly as possible, but with all the terrible things that happen every day, "almost everyone" just doesn't seem to me to work in your statement.
> I think you have a very, very optimistic view of humanity.
Indeed, very much so! Having once lost my faith in progressivism, I found myself no longer wedded to the peculiar species of pessimism, regarding humanity and human affairs, which is part and parcel of that strain of political belief, and I further found myself capable for once of considering the possibility that for someone's politics to disagree with mine, however fundamentally, did not necessarily make that person a horrible human being. This discovery came as no small relief to me, in that it brought to swift and successful conclusion what had theretofore been a decade-long and losing battle against an increasingly black and pervasive depression.
> I'm also not sure how you could consume any sort of news for any period of time and still hold on to this belief.
Granted, and that's precisely why for the most part I don't. News that's local enough or big enough to matter in my life, I will hear about soon enough, and that's the only sort of news with which I choose to concern myself; as far as I'm concerned, everything else can go and whistle. What, after all, is the point in flagellating myself with a litany of trumped-up disaster stories regarding matters over which I have no power or influence in any case? Such obsessive behavior does no one any good.
> with all the terrible things that happen every day
Oh? This seems to me a rather lopsided view. What of all the terrible things that don't?
I picked up a copy last night. Thus far, it's mainly challenging my patience and my tolerance for lousy writing styles -- it wanders all over the place. I haven't read any of Schlosser's other books; is he always like this?
Having now finished the book, I remain unimpressed, or at least not impressed in the fashion you expected I would be. If anything, I have to say that for there to have been relatively few mishaps over so many years of nuclear weapons being actively deployed - and for none of those mishaps to have had a yield measurable in kilotons - strikes me as powerful cause for optimism.
Strongly recommend The Lives Of Others. It's not absurdist like Strangelove, but it does address the psychology and motivations of those doing the surveillance.
It also gets at the culture of corruption that surveillance and power engender.
Though when you consider that the surveillance required dedicated observers and resulted in paper records, you realize how vastly more capable current US operations are.
Are there any contemporary figures as "interesting" as Curtis LeMay, Herman Kahn or Thomas Power - the latter being the man who said while head of SAC:
"Restraint? Why are you so concerned with saving their lives? The whole idea is to kill the bastards. At the end of the war if there are two Americans and one Russian left alive, we win"
Dianne Feinstein is way over on the side of the surveillance state, while the BRIC market share withers out from under her constituents' companies. You have to wonder where that is coming from.
came here to suggest people who are interested in where kubrick got his model for the character dr strangelove to study herman kahn. just read a book about him - 'the worlds of herman kahn' - and cannot recommend it enough for this subject. his book 'on thermonuclear war' was widely talked about when it came out in the 60s, one of the huge impacts that RAND had on our society.
it was an absurd time as the world changed so radically so quickly and strategists scrambled to figure out doctrine, policy, strategy, and how to use the emerging thinking of game theory.
another book on the evolution of nuclear policy is 'the dead hand', which looks at the emergence of nuclear deterrence from the first bomb through the recent years following the breakup of the USSR. the dead hand referred to in the title is the failsafe the soviets built to strike back once they had been demolished.
Ever see the SNL bit where Tina Fey just read, word for word, quotes from Sarah Palin, revealing their complete absurdity? Hilarious and very pointed commentary in one.
Our Looking Glass aircraft (EC-135C) flew 24x7 for many years with 2 launch officers aboard. They could launch the Minuteman missiles from the air and didn't require NCA approval. It could also contact the subs via LF/VLF and tell them to launch.
Their mission assumed DC and SAC HQ were already taken out.
The Cold War in general was full of all sorts of insanity no one would ever imagine to be possible. Prime example of truth being stranger than fiction.
That and all good satire has truth in it. The more truth and the better illustrated, the better the satire. Kubrick was a visionary for a reason.
I dunno - I think we were safer with dedicated airmen manning the Minuteman silo switches. Morale dropped after the PALs were installed. The crews disrespected the whole idea - they entered a launch code of 0000-0000. Still, they knew they'd become administrators (with the attendant alcoholism, spousal abuse, drug abuse, etc.) and no longer the last link in the chain preventing nuclear war. How was that an improvement?
Problems with poor morale on missile bases? The second in command at Stratcom[1] with gambling problems? Looks like we need to take humans out of the loop when it comes to nuclear decision making. We need something that will not be subject to emotions. Something that makes decisions based only on cold hard logic. . .
During the Cold War there were nuclear anti-aircraft missiles, nuclear anti-missile missiles, nuclear depth charges, nuclear artillery shells, nuclear torpedos, nuclear man-portable demolition charges, even nuclear recoilless guns:
I guess it wasn't the author's fault, but that's one of the most ridiculous things I've ever seen. Until about 1990, anti-aircraft missiles were horribly inaccurate. I read somewhere that the chance of scoring a hit with a Korean War-era air-to-air missile was about 5%. Why would anyone want to launch a nuclear missile that's pretty much guaranteed not to hit its target?
> I read somewhere that the chance of scoring a hit with a Korean War-era air-to-air missile was about 5%. Why would anyone want to launch a nuclear missile that's pretty much guaranteed not to hit its target?
Because with a nuclear air-to-air missile in its intended role, you don't need to "hit the target", you just need to detonate somewhere in the general area of the bomber formation that you are trying to whittle down.
I used to work on the AIR-2A. It was a thing of simplicity. Any airplane within a mile that survived had to land immediately. This was before modern guidance systems. The military imagined masses of bombers with fighter cover attacking, similar to WW II. Perhaps it's a bit of fighting the last war, but it was a simple, effective answer to a real threat.
It's the other way around. If you're sure to get a hit, you don't need a nuke. But if you don't have very much accuracy, you can make up for it with a big blast radius.
Well, "nuclear suitcase bombs" are usually envisioned as taking out a city. Wheel one into Times Square, push the button, and boom, millions dead. The Davy Crockett's yield was about 20 tons, so it would be like a really big truck bomb going off. Not a good thing to happen in a crowded city, but we're talking about a couple of city blocks being destroyed. For a sneak attack, you'd probably be better off buying or stealing a tractor trailer and loading it to the gills with ammonium nitrate.
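To put rough numbers on that comparison: blast radius scales roughly as the cube root of yield, so enormous yield differences translate into much smaller differences in destruction radius. A quick back-of-the-envelope sketch (the cube-root scaling law is standard; the specific yields here are illustrative):

```python
# Blast radius scales roughly as yield^(1/3) (cube-root scaling),
# so a 1000x yield advantage buys only ~10x the blast radius.
def radius_ratio(yield_a_tons: float, yield_b_tons: float) -> float:
    """How many times larger A's blast radius is compared to B's."""
    return (yield_a_tons / yield_b_tons) ** (1 / 3)

# Hiroshima-class (~15 kilotons) vs. Davy Crockett (~20 tons):
print(round(radius_ratio(15_000, 20), 1))  # ~9.1x the radius

# Davy Crockett vs. a ~5-ton ANFO truck bomb: only ~1.6x the radius
print(round(radius_ratio(20, 5), 1))
```

Which is why a 20-ton yield gets you a couple of city blocks, not a city.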
I was kind of surprised at the wording too. I didn't believe they would use a nuclear anti-aircraft weapon - it seemed like overkill - and then I read the rationale for the Genie. I'm surprised, to say the least.
Hardly overkill, when you consider these weapons were designed for use against squadrons of nuclear-armed Soviet bombers, and the abysmal performance of fighters and interceptors against bomber squadrons during World War II.
Make up for accuracy with power. It doesn't matter if your missile can only get within a mile of the target, if it destroys every aircraft within a mile when it detonates.
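A quick Monte Carlo sketch of that tradeoff - the specific numbers are made up for illustration, and guidance error is assumed to be a 2-D Gaussian around the aim point:

```python
import math
import random

def kill_probability(lethal_radius_m: float, miss_sigma_m: float,
                     trials: int = 100_000) -> float:
    """Estimate the chance a shot lands within the lethal radius,
    given Gaussian guidance error in each axis."""
    hits = 0
    for _ in range(trials):
        dx = random.gauss(0, miss_sigma_m)
        dy = random.gauss(0, miss_sigma_m)
        if math.hypot(dx, dy) <= lethal_radius_m:
            hits += 1
    return hits / trials

random.seed(0)
# A conventional warhead (say, 30 m lethal radius) with 500 m of
# guidance error almost never connects...
print(kill_probability(30, 500))     # well under 1%
# ...but a one-mile (~1600 m) lethal radius makes the same sloppy
# guidance nearly irrelevant.
print(kill_probability(1600, 500))   # ~99%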
I think it's heartening, in a sense - and this goes beyond nuclear weapons - that as close as we are to danger, dangerous events are pretty rare.
Of course, whenever I have that thought, part of me fears that I'll ironically be vaporized by an errant nuclear weapon, moments later. I reassure myself that, at least, it will be a quick end.
Not at all! Most deaths would be slow and horrific, from radiation burns (thermal or ionizing), radiation sickness, or blast effects like being crushed in a collapsed building. Only a lucky few would be vaporized instantly.
This is also why the old advice of "duck and cover" was actually very relevant and good advice. If you're not vaporized, you'd prefer to avoid the oncoming blast as much as possible (and yes, even the flimsy newspaper-over-the-head helps reflect thermal energy and could seriously mitigate your burns).
Sounds like DRM (Digital Rights Management) problem. We have these locks on things so you wouldn't do anything questionable, but of course we can't send MPAA representative everytime you try to watch a movie, so here are the keys as well..
I used to use this movie to test out prospective girlfriends. If they found humor in it, they were worth a second look. If it was WTF face all the way, there was never going to be any compatibility.