Army AI-enabled robots could fire weapons to defend against attacks


What if artillery shells, swarms of explosive mini-drones, rockets, and even missiles all approach forward-positioned Army troops at the same time? Imagine that these incoming weapons are scattered, varied, and fast-approaching. How do ground commanders avoid being overwhelmed and destroyed? Could AI help solve this deadly dilemma?

Incoming attacks are fast, multifaceted, and extremely lethal. Ground commanders and armed infantry simply do not have time to assess every target simultaneously and discern which ones to intercept. Moreover, commanders may have too many competing priorities to work out which type of layered defense is best suited to counter each approaching weapon. Consider the scenarios running through a commander's head: Which countermeasure is best? A kinetic interceptor? Electronic warfare? Lasers? These questions will likely need to be addressed, analyzed, and answered in real time, possibly within seconds, to save lives.

"We are trying to merge operationally relevant data at the tactical level by connecting the sensor to the shooter. What if the shooter is a robot? Do I have to confirm each target?" Dr. Bruce Jette, Assistant Secretary of the Army for Acquisition, Logistics and Technology, told The National Interest in an interview.


Artificial intelligence systems can now instantly organize incoming sensor data, perform near-real-time analysis, and make determinations about a threat's range, speed, configuration, and approach path. Perhaps more importantly, they could determine which method of defense might be necessary. All of these variables are likely to converge so fast that a human commander simply cannot respond in time, putting an Army force, facility, or forward operating base at great risk of destruction.

U.S. Army Soldiers from Delta Company, 3rd Battalion, 187th Infantry Regiment, 3rd Brigade Combat Team, 101st Airborne Division (Air Assault), fire the TOW missile system during a live-fire exercise at Fort Campbell, Ky., October 24, 2018. – File photo.

(US Army photo by Captain Justin Wright)

Advanced algorithms, however, can synthesize and analyze radar returns, infrared sensor data, navigational details, and countermeasure options, bounce all of that information off a vast existing database, and immediately present options to human decision-makers. Perhaps the largest incoming weapons will need to be destroyed with an explosive kinetic interceptor such as a Coyote drone? Perhaps a small group of electronically powered mini-drones would be better countered by an EW weapon able to disrupt the targets and throw them off course? Or perhaps the fighting is in an urban area where explosive fragments could injure civilians, a circumstance that may call for a laser? Then again, weather or certain terrain features can impede a laser's ability to incinerate oncoming targets, requiring a different defensive solution. AI programs can now determine which methods have proven effective in the past in specific, variable scenarios and present commanders with a range of options. This type of application, now being rapidly developed by Army scientists, researchers, and weapons developers, could bring new dimensions to warfare. It is exactly what Jette envisions.
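The triage logic described above can be sketched as a simple rules-based selector. This is purely illustrative: the threat attributes, thresholds, and countermeasure names below are assumptions for the sake of the example, not any fielded Army system or real API.

```python
from dataclasses import dataclass


@dataclass
class Threat:
    """Hypothetical attributes an AI fusion layer might derive from sensors."""
    kind: str             # e.g. "artillery", "mini_drone_swarm", "rocket"
    urban_area: bool      # would intercept debris endanger civilians?
    clear_weather: bool   # lasers degrade in fog, rain, or dust


def recommend_countermeasure(threat: Threat) -> str:
    """Rank defensive options for one incoming threat.

    A toy sketch of the triage described in the article; a real system
    would score options against historical engagement data instead of
    hard-coded rules.
    """
    if threat.kind == "mini_drone_swarm":
        # Electronic warfare can disrupt many small drones at once.
        return "electronic_warfare"
    if threat.urban_area and threat.clear_weather:
        # A laser avoids explosive fragmentation over civilians.
        return "laser"
    # Large or fast threats default to a kinetic interceptor.
    return "kinetic_interceptor"
```

For example, a drone swarm would map to electronic warfare, while an artillery shell approaching an urban area in clear weather would map to a laser.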


Jette compared this process to a meaningful set of multi-service brevity terms: "weapons hold, weapons tight, weapons free." According to a 2002 multi-service brevity code manual, weapons hold means firing only in self-defense or in response to a formal order; weapons tight means firing only at targets positively identified as hostile; and weapons free means firing at any target not positively identified as friendly. AI, Jette explained, can greatly speed up this process. Perhaps some elements of it could even be done autonomously?
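The three weapons-control states above amount to a small decision rule, which can be sketched as follows. The function name and parameters are illustrative assumptions; only the three state definitions come from the brevity-code description in the text.

```python
from enum import Enum


class WeaponsStatus(Enum):
    HOLD = "weapons_hold"    # fire only in self-defense or on formal order
    TIGHT = "weapons_tight"  # fire only at targets positively ID'd as hostile
    FREE = "weapons_free"    # fire at any target not ID'd as friendly


def may_engage(status: WeaponsStatus,
               identified_hostile: bool,
               identified_friendly: bool,
               self_defense: bool = False,
               ordered: bool = False) -> bool:
    """Apply the brevity-code definitions to one engagement decision."""
    if status is WeaponsStatus.HOLD:
        return self_defense or ordered
    if status is WeaponsStatus.TIGHT:
        return identified_hostile
    # WeaponsStatus.FREE: anything not positively identified as friendly
    return not identified_friendly
```

Under weapons tight, an unidentified contact may not be engaged; under weapons free, the same contact may be, which is exactly the distinction an AI-assisted system would have to encode before any element of the process could run autonomously.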

Such defensive uses of force, particularly if applied in a non-lethal manner, could be better executed by the robots themselves, Jette explained. The priority, of course, is ensuring that humans remain in control of decisions about lethal force, although this type of defensive application could potentially save lives within seconds. The concept, as Jette seemed to describe it, is to synergize the analytical and procedural functions that AI performs best while preserving and drawing upon the unique attributes and faculties of human cognition.

"How far can we leverage computational capabilities to do that in real time, or near-real time, for a weapons system? We are looking at target identification and developing firing solutions," said Jette.

Ultimately, Jette explained, the fundamental concept underlying all of this is command and control: analyzing the precise extent to which new technologies change and inform tactical warfare.


"I think this requires a thought process that is different from what we have explored so far, because we have focused so much on the technology. We have small and medium-sized robotic systems, and we have numerous artificial intelligence efforts. The real difference will be thinking about that second and third layer: seeing how we change the way we think about command and control and taking advantage of the capabilities inherent in the system," said Jette.

Kris Osborn is the managing editor of Warrior Maven and defense editor of The National Interest.

