Using AI In Warfare

By Karen Schumacher

While Israel is being condemned for its actions against Hamas following the October 7, 2023 attack, there are some aspects of warfare being used by Israel of which most people may not be aware.

In February 2023, Israel announced the use of Artificial Intelligence (AI) to detect and classify terrorist targets, accelerating its ability to thwart enemy activity. Israel previously used this type of warfare in 2021, and it has studied technology for rapid enemy targeting since at least 2019 through its Targeting Directorate, which has increased the number of targets that can be identified. The Israel Defense Forces (IDF) states this is “not a list of operatives eligible for attack,” but is used only to aid human analysis; that is, Israel states these AI-identified targets serve only to inform the human decision of which ones to strike.

AI is capable of collecting large amounts of intelligence data, such as from information channels, social media, and phone usage, to identify targets and determine an enemy’s location. Similar work by human intelligence analysts would take days; with AI, determining the location of a target can take minutes.

While this may sound like an advancement that contributes to the elimination of an enemy, other investigations into its operation contend a different perspective.

Based on “interviews with six anonymous Israeli intelligence officers,” +972, an independent non-profit magazine, reported in April 2024 that AI use had been expanded through Israel’s AI-based Lavender program, a system that tracks people and decides whom to bomb in Gaza, and that it is being used with little human oversight. According to +972, “during the first weeks of the war, the army almost completely relied on Lavender, which clocked as many as 37,000 Palestinians as suspected militants — and their homes — for possible air strikes.”

Israel’s intelligence Unit 8200, comparable to the U.S. National Security Agency, created the Lavender program. A Unit 8200 commander explained how AI was used to identify a Hamas commander in 2021. In spite of this sophisticated technology, Israel failed to foresee the coming Hamas attack.

In response, the Israeli military reports that AI is used to “cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organizations” and “does not include automated target annihilation.”

While +972 claims Israel designated all operatives in Hamas’s military wings as human targets regardless of military importance, no corroborating information on “Operation Iron Swords,” the name given to a wide range of ground counteroffensives, supports that claim. Israel reiterated that the “army does not use an artificial intelligence system that identifies (terrorist) operatives or tries to predict whether a person is a (terrorist). Information systems are merely tools for analysts in the target identification process.”

So there are two reports. One states that AI has been used not only to identify targets but also as the basis for actual attacks, some of them carried out on targets’ homes, along with further claims that civilian casualties were of no consequence. Israeli reports state that this is not true: AI is used only to identify targets, which humans then analyze for potential strikes. How does one know which report to believe? There are no links to verify the +972 information, just anonymous sources. And +972 is no favorite of Netanyahu for its left-of-center reporting.

The use of AI technology in warfare raises ethical and legal questions that have not been addressed. The World Economic Forum (WEF) put in its two cents on the issue, generally addressing the Lethal Autonomous Weapon Systems (LAWS) aspect and treating it as just another matter for regulation.

As usual, the U.S. State Department has bloviated about non-binding guidelines for the use of AI in warfare under international law. There was even a global summit on the responsible use of AI in the military. The United States isn’t innocent in this trend either; its own military has been exploring the same uses of AI technology.

With the continued advancement of this technology, there seems to be a fine line between “semi-autonomous systems and entirely automated killing machines.” Some Israeli leaders have proclaimed their intention to make Israel an “AI superpower,” and the country “seeks to position itself as a global leader in autonomous weaponry.” Robert Work, former U.S. Deputy Secretary of Defense, states at the 1:16:36 mark that China is racing to get ahead of the U.S. in military power with AI and autonomous technology as part of a “national plan.”

But this type of AI warfare is already being used in war elsewhere, perhaps a perfect opportunity for testing the new technology. Not surprisingly, it is being advanced by Silicon Valley for its profit-making potential. Eric Schmidt, former Google CEO, “Is Building the Perfect AI War-Fighting Machine” for such purposes as global security, with Ukraine serving as a suitable AI lab for experimentation. Palmer Luckey is another contributor to this type of warfare with his defense tech company Anduril. These youngsters are assembling an arsenal of weapons as if they were to be used in a video game; maybe they should spend some real time in a war zone to see the devastation brought on by wars.

Maybe the truth is that it has always been this way: man developing ever more sophisticated weaponry with which to destroy his fellow man. The question is how heinous people who want to dominate and destroy others should be stopped or eliminated. Over time, there has never been a shortage of such individuals in the world.

The other question is the danger of AI itself; maybe it is the real enemy. Could this technology literally take over one day? Some say it is already training itself. Or will AI affect what it means to be human, possibly leading to extinction?

In 1970, the movie Colossus: The Forbin Project took up such a proposition: two computers join together and essentially take over the world, threatening destruction if not heeded. Maybe AI technology is the real threat, one that should be stopped now rather than used to make war more sophisticated. There are no easy answers to what is just one more problem mankind is creating.