
How do drones get around the issue of communications jamming? I suppose they have some way (autonomy would be one), but it seems to me that if communications are cut off, having a pilot with human judgement to respond to changing conditions will almost always have some advantage. Then again, you could get a pretty advanced autonomous system that responded to changes by deviating from a pre-programmed attack plan.


> How do drones get around the issue of communications jamming?

AESA derivatives used as bidirectional communication devices seem like they will render jamming a lot less effective, simply by virtue of being able to pump radar levels of power into the communication link.

And current trends seem to be converging on flocks of drones, of which one or two can be specialized with uplink. Or simply babysat by stealthy HALE platforms like the RQ-180.

You'd have to blanket an area with ungodly amounts of energy to fully jam point-to-point, highly directional links, especially for close-range hops.
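
A back-of-the-envelope link budget shows why (all numbers below are made up for illustration, not any real system's parameters): the receiver's directional antenna gives the intended transmitter tens of dB of gain, while a standoff jammer only sees a sidelobe.

    import math

    def fspl_db(distance_m, freq_hz):
        # Free-space path loss in dB
        c = 3e8
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

    freq = 10e9             # 10 GHz link (illustrative)
    link_dist = 5_000       # 5 km drone-to-drone hop
    jam_dist = 50_000       # jammer standing off at 50 km
    tx = 40 + 30            # 10 W transmitter (dBm) + 30 dBi directional antenna
    rx_main = 30            # receiver gain toward the friendly transmitter
    rx_side = -10           # receiver gain toward the jammer (sidelobe)
    jam = 70 + 20           # 10 kW jammer (dBm) + 20 dBi antenna

    signal = tx + rx_main - fspl_db(link_dist, freq)
    jammer = jam + rx_side - fspl_db(jam_dist, freq)
    print(f"signal {signal:6.1f} dBm, jammer {jammer:6.1f} dBm, J/S {jammer - signal:5.1f} dB")

With those entirely hypothetical numbers the 10 kW jammer still lands roughly 40 dB below the wanted signal; closing that gap means getting much closer or radiating orders of magnitude more power over the whole area.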


Kill Decision by Daniel Suarez is a great book (great in audio book form!) about autonomous drones and comms blackouts.


Can you give us a TL;DR of what it says about comms blackouts and jamming?


The book talks about how they serve multiple purposes. They can be used to hinder enemy operations, mask one's own activities, or isolate units to force them into pre-defined roles. Increasing reliance on digital and wireless communications in warfare can thus be viewed as a double-edged sword, offering advantages but also creating new vulnerabilities.


> How do drones get around the issue of communications jamming?

For one, by making it harder to jam in the first place. Starlink, with its extremely directional antennas, is a good example - an opponent would need an equally massive fleet of satellites or high-altitude ECM planes to jam it, and the latter can easily be targeted by anti-radiation missiles.

This is why the US government has been pushing insane amounts of money into SpaceX... Starlink is the future of interconnected wars.


SpaceX is not fully under operational control of the US government, if some of the reports from Twitter about geofencing are to be believed.


I have yet to see a credible report of them denying a US government request.


Big big big big difference between "they usually do what we ask" and "we have operational authority over this system, and can court martial anyone that impedes its operation"


Isn't everyone with the root password a US citizen? Last time I checked SpaceX's job openings, a US clearance was required.


Which is partly why the DoD commissioned a second constellation specifically for military use: https://www.spacex.com/starshield/


Wow. Is there any concrete info about the build-out? Or do you think they will just provision X% of existing sats / bandwidth to military use? I recall learning years ago that modern "long lines" (telco) were all pure data lines, where a certain portion was reserved for guaranteed bandwidth required for (voice) telephone calls.


Assuming that the US ever enters any war directly, guess what their first action will be: take Musk out of the picture, deal with the legalities later on.


The legality isn't in question; the Defense Production Act very much applies.


> Starlink is the future of interconnected wars.

Maybe, but what do you figure the US military will use?


Starshield


A near peer adversary will attempt to degrade Starlink (and other military satellite constellations) as their first step in any major conflict. China is making huge investments into EW, cyber, and ASAT. The US military has to plan to fight with little or no satellite support.


The plane-sized drones are capable of some autonomous operation. It may or may not be possible to spoof that: https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incid...

The smaller drones are not usually autonomous. See the Starlink alleged incident.

Inertial guidance is popular but very expensive to do accurately with laser gyros. I'm surprised there haven't been more "terrain following" systems.

There's probably always going to be a continuum between manned and unmanned platforms, and a discussion about SEAD.
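
On the inertial guidance point above: the reason accuracy is expensive is that dead reckoning double-integrates sensor errors, so even a small constant bias grows quadratically into position error. A toy 1-D sketch (illustrative numbers only, roughly a ~1 mg accelerometer bias; real INS error models, including gyro drift, are far more involved):

    import random

    dt = 0.01            # 100 Hz IMU
    bias = 0.01          # ~1 mg constant accelerometer bias, m/s^2
    noise_sigma = 0.005  # white noise, m/s^2

    vel_err = pos_err = 0.0
    for _ in range(600 * 100):           # ten minutes of flight
        accel_err = bias + random.gauss(0, noise_sigma)
        vel_err += accel_err * dt        # first integration: velocity error
        pos_err += vel_err * dt          # second integration: position error

    print(f"position error after 10 min: {pos_err:,.0f} m")

The bias term alone contributes about 0.5 * 0.01 * 600^2 ≈ 1.8 km of drift after ten minutes, which is why good gyros and accelerometers cost what they do, and why periodic external fixes (GPS, terrain matching, star trackers) are so valuable.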


> See the Starlink alleged incident.

Ukraine says it happened and Musk does too - is ‘alleged’ needed?


[flagged]


But one says ‘I did it’ and the other says ‘he did it’.

What other agenda could be going on here? They are both hiding some Russian capabilities from us?


Depends on your definition of expensive. In comparison with other military hardware, inertial navigation systems aren't that expensive. They're also used in large numbers in civil aviation.


Can’t jam every frequency.


Hold my beer, and volunteer to pay the power bill.

You say that as if it isn't incredibly easy to do, to the point that we have entire enforcement orgs built around trying to keep people from unintentionally doing just that.


Ok, let me clarify - you can't jam every potential drone frequency without taking out your own comms and giving the operator cancer.
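
One reason that holds is spread-spectrum techniques like frequency hopping: both ends derive the same pseudorandom hop sequence from a pre-shared secret, so a jammer that doesn't know it has to blanket the entire band rather than one channel. A toy sketch (the channel plan and names are illustrative, not any real radio's scheme):

    import hashlib

    CHANNELS = list(range(2_400, 2_500))   # 100 x 1 MHz channels in the 2.4 GHz band

    def hop_channel(shared_secret: bytes, time_slot: int) -> int:
        # Both ends hash the secret with the slot number to pick the same channel
        digest = hashlib.sha256(shared_secret + time_slot.to_bytes(8, "big")).digest()
        return CHANNELS[int.from_bytes(digest[:4], "big") % len(CHANNELS)]

    secret = b"key loaded before launch"
    for slot in range(5):
        print(f"slot {slot}: transmit on {hop_channel(secret, slot)} MHz")

Barrage-jamming all 100 channels at once takes roughly 100x the power of jamming one, and it degrades the jammer's own use of the band too.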


> How do drones get around the issue of communications jamming

The same way the F-35 does, I guess. Besides, it's pretty common for AI systems to surpass human capabilities these days, so when jammed they can just carry on.

Sure, they make mistakes, but humans do too, and the advantage of not carrying 80 kg of fragile human and all the life support systems onboard is quite significant. It makes the thing much cheaper; it removes the need to come back, which doubles the range; it makes the thing smaller, thus harder to detect and destroy; and it doesn't have to limit its manoeuvres to human tolerances, which makes it much more agile.


> The same way the F-35 does I guess.

No, because that's in part "let the human pilot make decisions".

> Besides, it's pretty common for AI systems to overtake human capabilities these days...

Not in the realm of "should I shoot that thing?" sort of decisions.


> Tarnak Farm incident

Canada's first losses in a combat zone since the Korean War.

> "Let's just make sure that it's, that it's not friendlies, is all"

> Twenty-two seconds later, he reported a direct hit. Ten seconds later, the controller ordered the pilots to disengage, saying the forces on the ground were "friendlies Kandahar".


The argument is not "humans never make a mistake".

There's little evidence autonomous combat fighter AIs are better than humans at tough calls of this nature. They may be someday, but given the state of the art in self-driving, that day probably hasn't arrived.


Weren’t they in Croatia under UNPROFOR in the Medak pocket?


let the human pilot make decisions == let the machine make decisions

It's not like pilots are making political decisions. They fly, shoot predefined targets, and avoid hostile actions. AI is capable of doing this.

> Not in the realm of "should I shoot that thing?" sort of decisions.

On the contrary, AI is very capable of making that decision. There are no philosophical dilemmas or children in the skies, and even if there were, we are at the point where we can tell the device not to shoot children. There will be mistakes, but human pilots make mistakes too.


Military pilots absolutely make all sorts of decisions, like "that looks like a civilian target, maybe I have incorrect info" or "a little kid just ran into the target area" or "the controller says I just shot at friendlies".

I would not currently trust an AI to handle those very well.

What would an AI have done in this situation? What should it have done? "Russian pilot deliberately fired missiles at a Royal Air Force surveillance plane in international airspace over the Black Sea last year": https://apnews.com/article/uk-russia-fighter-jet-missile-bla...


AI can absolutely say "that looks like a civilian target, maybe I have incorrect info" or "a little kid just ran into the target area" or "the controller says I just shot at friendlies".

What makes you think that AI can't incorporate those into decision making? Pilots do this through instruments anyway.


> What makes you think that AI can't incorporate those into decision making?

The fact that state-of-the-art AI already fails at much simpler decisions.

In the case of the Black Sea incident, the potential consequences include global thermonuclear war.


The same AI that gets confused when you stick a traffic cone on the hood? Yeah, I don't want that algorithm deciding who to bomb.


Self-driving cars are a much harder problem than anything airborne.


Maybe we can use autonomous drones to shoot the traffic cones off the self driving cars then.


Wrong. Flying from point to point is easy. Following complex ROEs, using combined arms tactics, dealing with system failures, identifying valid targets, and employing weapons are all much harder problems than self-driving cars.

It's always hilarious to see the confidently incorrect comments by a bunch of ignorant software developers. The Dunning–Kruger effect is on full display here.


LOL, OK. You've listed a lot of problems that have been largely solved already, and are trying to convince us that they are harder than a problem that has eluded the brightest people in the tech industry, armed with computational tools that the aerospace community never dreamed of and backed by more-or-less infinite capital.

Nothing is harder than self-driving cars. Nothing. We'll colonize Mars before we have a solid solution to that problem. Why? Self-driving cars have to coexist with human drivers and human infrastructure.

Nobody in aviation has that problem. If they did, they'd run screaming for the hills.


You've been watching too many movies and are just making things up. Those problems haven't been solved in tactical aviation.


No, it's just that every time I drive somewhere, I try to maintain a low-priority thread in my head to work on the problem, "How would I write code to do what I just did?" Frequently the answer is, "I have no idea, and wow, I'm glad it's not my job."

That simply doesn't happen when I fly my quads. "How would I write code to dodge an attacking drone? How would I modify my drone to drop a grenade or a Molotov cocktail, or otherwise cause a large amount of grief to people below? How would I build a SLAM model that allows the drone to do this without intervention from the ground?" None of these engineering problems bug me the way driving a car would. They are all addressable with multiple degrees of freedom, both literally and figuratively.

Meanwhile, on the road:

"Hmm, the light at this intersection is out. There's a cop with an angry look on his face, flapping his arms at me like a dying chicken. What does he want me to do, exactly?"

"Huh, here I am in Seattle, and it looks like they have chosen to mark the stripes on the road with some sort of paint whose complex impedance at optical frequencies is identical to that of rainwater. I'm sure glad I'm driving, and not my lane-keep assistant, which I had to turn off because it tried to steer me into the median the last time it snowed."

"Whoa, where'd that ambulance come from. The law says I have to move right, but the only way I can get out of his way is to move left, and in any case, that's what the car ahead of me is doing. What to do, what to do."

In most of the airborne scenarios you mention, doing nothing is a fail-safe answer when confronted with a situation the hardware or software can't handle. If we approach driving that way, a few miscreants can brick an entire city, intentionally or otherwise.

I'm not surprised Karpathy tapped out at Tesla, let's put it that way. My guess is, I've thought about this a lot more than you have, and a lot less than he has.


Or the USS Liberty for that matter.


Nonsense. AI can work well enough for striking certain known targets. But it is simply not capable of following complex rules of engagement or adapting to highly dynamic situations in real time. We are at least decades away from that capability in a general sense. What you see in movies is not reality.

Sixth-generation tactical aircraft (the successors to the F-35) are likely to be optionally manned. They will be able to operate with remote pilots and/or autonomous control for high risk strike missions but most of the time will still have human crews on board.


If you haven't noticed, lately AI is pretty good at working with information it has never seen before.


It's also pretty good at hallucinating convincingly.


Apparently you haven't been paying attention and don't understand the basics of AI technology. It is terrible at handling novel situations, especially in something as complex as tactical aviation.


I'm more worried about domestic use. It's hard to get soldiers to carpet bomb wrong thinkers.


> It's hard to get soldiers to carpet bomb wrong thinkers.

I'm not sure how accurate that is historically.


I didn't say it's impossible, but it isn't sustainable. With autonomous drones, it's easy. Ask your generals if autonomous drones are right for you.


> It's not like pilots are making political decisions.

In the age of the Strategic Corporal, they absolutely are.



