Will future generations of military pilots control their aircraft from a cockpit inside the airframe or from a ground-based control centre, or will fully autonomous systems fight future air battles?

Given yesterday’s announcement from Farnborough – a future fighter jet that can deploy air-launched ‘swarming’ Unmanned Air Vehicles – will we see a combination of capabilities that marries human and machine-learning elements to create air lethality? DARPA, for example, appear keen on developing a similar idea with Gremlins, a programme that has challenged companies to investigate air-launched and air-recoverable UAS.

Back in the present, there wasn’t much separating the 477 poll participants, with 34 per cent agreeing that fully autonomous aircraft are inevitable, followed by 32 per cent who think future military aircraft will be remotely operated. Third place went to the 31 per cent who think pilots will remain in the cockpit of next-generation military aircraft, and the remaining three per cent chose ‘none of the above’.
In the comments that followed, doubts were raised about whether autonomous systems would be able to make the right decisions.
Former RAF serviceman David Anderson said: “Machines can only make decisions based on whatever ‘logic’ is programmed into them. The human pilot can apply judgement of all factors, including ‘gut instinct’, something that can never be programmed.”
This view was mirrored by Peter Thornhill, who added: “All very plausible, but AI cannot replace human intuition. Rather, I could see pilotless squadrons, but overseen by a group of manned aircraft within the group to apply their interpretation when required. Whatever the outcome, without the risk to the human life of the attacker I can see confrontations becoming more ruthless and merciless.”
Finally, but by no means least, John Patrick Ettridge said: “The problem with having fully autonomous weapons is that they rely on GPS and Wi-Fi to receive their instructions, so knock out the electronic signals and they do not work. There will always be a need for human control, even if from a ‘mother ship’ or command post close to the action, with the drones being the delivery of expendable ammunition.”
What do you think? Let us know using the comments below.
I know how this ends – someone programs a fully autonomous fighter to protect the population and the cockamamie thing connects itself to the internet, identifies heart disease as the nation’s biggest killer and starts a brutal shock-and-awe air campaign against Greggs. If that’s the future these so-called boffins want then they can flipping well think twice, and that’s swearing.
@Mickey Padgett
Or it considers that humans are the biggest threat to the planet and terminates us all.
Could make a good plot for a SciFi movie!
The worrying thing is how often movie makers predict a real event in film before it occurs – Sleep well tonight!
So what you’re saying is I should start hoarding canned foods and shotguns? Find a job you love and you’ll never work a day in your life!
What nonsense Mickey.
It’s McDonald’s first, followed by KFC and then Greggs.
A world without a soss and eg mcmfin is not one I wish to contemplate
So, yes, American fast food, the bane of all (not the bane of old). It will be ever so hard to surpass the capabilities of the F-35. A fully autonomous aircraft will make instant decisions; just hope they are the correct ones. For instance, the decision of whether to really engage (fire a missile) or not, and actually start a conflict, where the other side is just bluffing. It might be rather hard to call that one back.
It took 128 aircraft in a coordinated effort to bomb the airfield on the Falkland Islands (this included refuelling in mid-flight several times).
If we had to repel a remote invasion again, I think you would want at least one human in the air to check that everything was going to plan. You wouldn’t just program a fleet of aircraft and let them go.
Take-off and landing? Just maybe, but otherwise: where does anything other than ‘point and press’ fit?
BAE Systems seem to be totally committed to unmanned aircraft of various sorts, as they have not revealed any future plans for a piloted replacement for the Typhoon or, for that matter, the Hawk.
The future will, almost certainly and as usual, be a combination of all the above. What seems inevitable is that close combat will be carried out by aircraft with a high element of autonomy. A human being flying via a signal bounced several thousand miles around the earth via satellite links just has too much inherent latency. Whether they are fully autonomous or have a remote-pilot “supervisor” component is largely a political decision; being human, I have little expectation that a moral element will enter into the decision-making process.
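As a rough sanity check on that latency point, the speed-of-light delay alone can be estimated. The sketch below is a back-of-envelope Python calculation assuming a relay through a geostationary satellite at an illustrative slant range of roughly 38,000 km and a notional quarter-second of processing overhead; these figures are assumptions for illustration, not measured values for any real system.

```python
# Back-of-envelope estimate of the control latency for a remotely piloted
# aircraft relayed through a geostationary satellite. All figures are
# illustrative assumptions, not measured values for any real system.

C_KM_S = 299_792             # speed of light in km/s
GEO_SLANT_RANGE_KM = 38_000  # assumed ground <-> GEO satellite slant range


def one_way_hop_s(slant_range_km: float = GEO_SLANT_RANGE_KM) -> float:
    """Propagation time for a single ground <-> satellite leg."""
    return slant_range_km / C_KM_S


def control_loop_latency_s(processing_s: float = 0.25) -> float:
    """Sensor picture down to the ground pilot, then a command back up:
    four space legs in total, plus an assumed processing overhead."""
    return 4 * one_way_hop_s() + processing_s


if __name__ == "__main__":
    print(f"Single ground<->satellite leg: {one_way_hop_s() * 1000:.0f} ms")
    print(f"'See, decide, act' round trip: {control_loop_latency_s() * 1000:.0f} ms")
```

Even this physics-only floor comes out at roughly three-quarters of a second for the full ‘see, decide, act’ loop, which is an age in close combat, whereas the pilot in the cockpit pays no such penalty.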
The unfortunate reality is that a human cannot stand the forces modern fighters can produce for more than a few seconds. An autonomous fighter would not have these restrictions (i.e. an extreme probability of pilot loss) during an engagement.
Even politics will not stop this: it only takes one adversary to implement it and everyone will need similar capabilities.
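To put some illustrative numbers on the g-limit argument above: for a level, constant-speed turn the load factor n fixes both the turn radius, r = v²/(g·√(n²−1)), and the turn rate, ω = g·√(n²−1)/v. The Python sketch below applies those standard relations; the speed and the 9 g (human-limited) and 15 g (airframe-only) figures are assumptions chosen purely for illustration.

```python
# Minimal sketch of why the human g-limit matters in close combat.
# For a level, constant-speed turn the load factor n sets both the turn
# radius and the turn rate. The speed and g figures below are assumptions.

import math

G = 9.81  # gravitational acceleration, m/s^2


def turn_radius_m(speed_ms: float, load_factor: float) -> float:
    """Radius of a level turn at the given speed and load factor."""
    return speed_ms ** 2 / (G * math.sqrt(load_factor ** 2 - 1))


def turn_rate_deg_s(speed_ms: float, load_factor: float) -> float:
    """Turn rate (degrees per second) of the same level turn."""
    return math.degrees(G * math.sqrt(load_factor ** 2 - 1) / speed_ms)


if __name__ == "__main__":
    speed = 250.0  # m/s, an assumed subsonic combat speed
    for n in (9, 15):  # 9 g: rough human limit; 15 g: airframe-only (assumed)
        print(f"{n:>2} g: radius {turn_radius_m(speed, n):,.0f} m, "
              f"rate {turn_rate_deg_s(speed, n):.1f} deg/s")
```

On those assumptions the unmanned airframe’s turn radius shrinks from roughly 710 m to about 430 m and its turn rate climbs from around 20 to 34 degrees per second, which is the essence of the point above; whether a real design would be stressed to exploit that is a separate engineering question.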
Air transport, troop movement, helicopters and surveillance/command all require pilots. Remotely piloted air vehicles will play an increasing role in offensive and defensive areas. The Mark One Eyeball has very little time delay; however, a 12,000-mile-plus signal path does have a delay which could be costly. It is rumoured that the UK and Sweden are considering a joint venture for a future aircraft.
It depends on the AI’s capabilities. EITHER algorithms will just keep getting better and better, so at some point the AI pilot can ‘out-think’ the human (in effect replicating an epiphenomenalistic model of the human mind), OR AI research will hit a wall because of an inability to encode that phenomenon we call ‘consciousness’. Roger Penrose has argued that the very structure of the brain allows it to operate non-algorithmically and that these processes lead to free will, self-awareness, emotions, wants and desires, motivations, insights etc. At the very least, Penrose argues, these traits must provide some evolutionary (i.e. survival) purpose (or they wouldn’t have evolved). So, in this view, a conscious human has a subtle advantage over an unconscious, rules-driven AI, no matter how advanced.
Being ex-RAF, though never a pilot, I would support the use of pilots. Machines can only make decisions based on whatever ‘logic’ is programmed into them. The human pilot can apply judgement of all factors, including ‘gut instinct’, something that can never be programmed. The g-forces mentioned are a factor compared with what an aircraft alone can withstand, but technology can find ways to mitigate these effects. I personally would rather put my faith in a human than a machine.
Absolutely the next generation and a few thereafter will be piloted.
The real question should be – Will the next generation of military aircraft be IN AIRCRAFT piloted?
The drones we see and hear of currently operating are pretty much ALL piloted. They may be remotely piloted, but piloted they are. Autonomous weapons attacks are still not legal under the Geneva Conventions; as I have said on these forums before, any weapon release still requires a ‘Go / No-Go’ decision by a real person (often including a team of lawyers). In current operations where pilots are unsure, they can refer to the legal team as to whether they can legally fire / drop weapons (at least UK crews can).
Different nations have differing ‘Rules of Engagement’, but all are still bound by the Geneva Conventions (assuming they signed up), which protect civilians and even enemy personnel who have laid down their arms (which is where confusion on an attack ‘go / no-go’ can come in, e.g. insurgents seen by foreign forces to put the rocket launcher down and run away).
So will we see the next generation of attack aircraft without pilots on board? Absolutely – we’re already there, with the many drones etc. already in the military marketplace.
Will we see fully autonomous, pilotless attack aircraft? I very much doubt it for many years to come, if ever.
Just as a final note, many weapons and drones may have ‘loiter’ modes where they patrol until a suitable target comes along. At that point, in some cases, a pilot/operator may take over.
It makes sense to take the pilot out of the machine, as the machine can be smaller, lighter and more cost-effective; however, if we have all our pilots in a central location flying by radio control, guess where the target moves to! The radio-control bunker.
Hi Geof, how does the Geneva Convention apply in the case of a cruise missile? Yes, the human makes the launch decision, but once in flight the missile is autonomous, perhaps for several minutes before reaching the target.
While I am no expert, having been in the arena, the target has to be of military value in order to meet the aims of the objective. Thus you would never see cruise missiles used against temporary ‘militia’-type sites, as the value cannot be assured at the time of impact. Cruise missiles have a relatively short flight time, so at the time of launch the validity of the strike can be assured. Many also have ‘abort’ commands that can be sent. I’m sure many of us recall videos from cruise missile attacks; these are monitored in real time, also with the option to abort.
The same applies for less smart weapons; however, once the weapon leaves the platform (especially bombs) it’s virtually a done deal.
Based on the efforts of those who in the past made, and in the present and future might actually make, the political decision to commit these highly trained individuals to conflict (i.e. those who are morally, intellectually, professionally, socially and academically limited and suspect), I would go for an algorithm every time!
I am reminded of the efforts to find out (in a moving mass of vehicles) which one contained the tank commanders of a Warsaw Pact advance. Commanders travelled in an individual, specially sprung/damped vehicle (high-speed jolting over rough ground in a ‘tank’ was known to disorientate their thinking!), joining their individual tanks just before battle. Hit that one vehicle and… severely blunt the overall attack!
All very plausible, but AI cannot replace human intuition. Rather, I could see pilotless squadrons, but overseen by a group of manned aircraft within the group to apply their interpretation when required. Whatever the outcome, without the risk to the human life of the attacker I can see confrontations becoming more ruthless and merciless.
Try computer-based litigation and dealing with any civil? servant who hides behind his/her screen and system?
The problem with having fully autonomous weapons is that they rely on GPS and Wi-Fi to receive their instructions, so knock out the electronic signals and they do not work. There will always be a need for human control, even if from a ‘mother ship’ or command post close to the action, with the drones being the delivery of expendable ammunition.
Of course there is yet another alternative.
So, let the Generals, Admirals and Air Marshals (ours and theirs!) sit in their ‘posts’ and play the equivalent of ‘Space Invaders’ on their consoles: then they can decide who shoots at what and when. Better still, put them into the equivalent of ‘reality training’ simulators (surely the programming is sophisticated enough by now) and then they can get the full experience, including crashing (and burning) when hit!
One other little nicety to think about: you would not want an autonomous aircraft to be flying back on a homing beacon (or a manned aircraft, for that matter) when fighting an aircraft carrier battle. The last thing to do is give away your position. Think about the Battle of Midway, and our American young men who flew those missions, knowing full well it was radio silence from the fleet, and if their dead reckoning of the fleet’s last known position was mistaken, well, I reckon they were dead. It is a big old ocean, and a little bitty ship by comparison. The Wildcat pilots were more afraid of sharks than they were of the Japanese, just remember that.