Gaza War: Artificial Intelligence is radically changing Targeting Speeds and Scale of Civilian Harm
https://www.juancole.com/2024/04/artificial-intelligence-radically.html | Wed, 24 Apr 2024

By Lauren Gould, Utrecht University; Linde Arentze, NIOD Institute for War, Holocaust and Genocide Studies; and Marijn Hoijtink, University of Antwerp

(The Conversation) – As Israel’s air campaign in Gaza enters its sixth month after Hamas’s terrorist attacks on October 7, it has been described by experts as one of the most relentless and deadly campaigns in recent history. It is also one of the first to be coordinated, in part, by algorithms.

Artificial intelligence (AI) is being used to assist with everything from identifying and prioritising targets to assigning the weapons to be used against those targets.

Academic commentators have long focused on the potential of algorithms in war, highlighting how they would increase the speed and scale of fighting. But as recent revelations show, algorithms are now being employed at large scale and in densely populated urban contexts.

This includes the conflicts in Gaza and Ukraine, but also in Yemen, Iraq and Syria, where the US is experimenting with algorithms to target potential terrorists through Project Maven.

Amid this acceleration, it is crucial to take a careful look at what the use of AI in warfare actually means. It is important to do so not from the perspective of those in power, but from that of the officers executing it and of the civilians undergoing its violent effects in Gaza.

This focus highlights the limits of keeping a human in the loop as a failsafe and central response to the use of AI in war. As AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes and the scale of civilian harm increases.

Speed of targeting

Reports by the Israeli publications +972 Magazine and Local Call give us a glimpse into the experience of 13 Israeli officials working with three AI-enabled decision-making systems in Gaza called “Gospel”, “Lavender” and “Where’s Daddy?”.

These systems are reportedly trained to recognise features that are believed to characterise people associated with the military arm of Hamas. These features include membership of the same WhatsApp group as a known militant, changing cell phones every few months, or changing addresses frequently.

The systems are then supposedly tasked with analysing data collected on Gaza’s 2.3 million residents through mass surveillance. Based on the predetermined features, the systems predict the likelihood that a person is a member of Hamas (Lavender), that a building houses such a person (Gospel), or that such a person has entered their home (Where’s Daddy?).

In the investigative reports named above, intelligence officers explained how Gospel helped them go “from 50 targets per year” to “100 targets in one day” – and that, at its peak, Lavender managed to “generate 37,000 people as potential human targets”. They also reflected on how using AI cuts down deliberation time: “I would invest 20 seconds for each target at this stage … I had zero added value as a human … it saved a lot of time.”

They justified this lack of human oversight in light of a manual check the Israel Defense Forces (IDF) ran on a sample of several hundred targets generated by Lavender in the first weeks of the Gaza conflict, through which a 90% accuracy rate was reportedly established. While details of this manual check are likely to remain classified, a 10% inaccuracy rate for a system used to make 37,000 life-and-death decisions will inherently result in devastatingly destructive realities.
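
To put that in concrete terms, a back-of-the-envelope calculation (assuming the reported 90% figure held uniformly across all generated targets, which is itself an assumption) runs: 37,000 targets × 10% error rate ≈ 3,700 people wrongly flagged for lethal targeting.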


“Lavender III,” Digital Imagining, Dream, Dreamland v. 3, 2024

But importantly, any accuracy rate that sounds reasonably high makes it more likely that algorithmic targeting will be relied on, as it allows trust to be delegated to the AI system. As one IDF officer told +972 Magazine: “Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know that statistically it’s fine. So you go for it.”

The IDF denied these revelations in an official statement to The Guardian. A spokesperson said that while the IDF does use “information management tools […] in order to help intelligence analysts to gather and optimally analyse the intelligence, obtained from a variety of sources, it does not use an AI system that identifies terrorist operatives”.

The Guardian has since, however, published a video of a senior official in Israel’s elite intelligence Unit 8200 speaking last year about the use of machine-learning “magic powder” to help identify Hamas targets in Gaza. The newspaper has also confirmed that the commander of the same unit wrote in 2021, under a pseudonym, that such AI technologies would resolve the “human bottleneck for both locating the new targets and decision-making to approve the targets”.

Scale of civilian harm

AI accelerates the speed of warfare in terms of the number of targets produced and the time to decide on them. While these systems inherently decrease the ability of humans to control the validity of computer-generated targets, they simultaneously make these decisions appear more objective and statistically correct due to the value that we generally ascribe to computer-based systems and their outcome.

This allows for the further normalisation of machine-directed killing, amounting to more violence, not less.

While media reports often focus on the number of casualties, body counts – like computer-generated targets – tend to present victims as objects that can be counted. This reinforces a very sterile image of war. It glosses over the reality of more than 34,000 people dead, 766,000 injured, the destruction of or damage to 60% of Gaza’s buildings, the mass displacement, and the lack of access to electricity, food, water and medicine.

It fails to convey the horrific ways in which these things compound one another. For example, one civilian, Shorouk al-Rantisi, was reportedly found under the rubble after an airstrike on Jabalia refugee camp. She had to wait 12 days to be operated on without painkillers, and now resides in another refugee camp with no running water to tend to her wounds.

Aside from increasing the speed of targeting and therefore exacerbating the predictable patterns of civilian harm in urban warfare, algorithmic warfare is likely to compound harm in new and under-researched ways. First, as civilians flee their destroyed homes, they frequently change addresses or give their phones to loved ones.

Such survival behaviour corresponds to what the reports on Lavender say the AI system has been programmed to identify as likely association with Hamas. These civilians thereby unknowingly make themselves suspects for lethal targeting.

Beyond targeting, these AI-enabled systems also inform additional forms of violence. An illustrative story is that of the fleeing poet Mosab Abu Toha, who was allegedly arrested and tortured at a military checkpoint. The New York Times ultimately reported that he, along with hundreds of other Palestinians, had been wrongly identified as Hamas through the IDF’s use of AI facial recognition and Google Photos.

Over and above the deaths, injuries and destruction, these are the compounding effects of algorithmic warfare. The result is a kind of psychic imprisonment in which people know they are under constant surveillance, yet do not know which behavioural or physical “features” will be acted on by the machine.

From our work as analysts of the use of AI in warfare, it is apparent that our focus should not solely be on the technical prowess of AI systems or the figure of the human-in-the-loop as a failsafe. We must also consider these systems’ ability to alter human-machine-human interactions, where those executing algorithmic violence are merely rubber-stamping the output generated by the AI system, and those undergoing the violence are dehumanised in unprecedented ways.

Lauren Gould, Assistant Professor, Conflict Studies, Utrecht University; Linde Arentze, Researcher into AI and Remote Warfare, NIOD Institute for War, Holocaust and Genocide Studies, and Marijn Hoijtink, Associate Professor in International Relations, University of Antwerp

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A Brief History of Kill Lists, From Langley to Lavender
https://www.juancole.com/2024/04/history-langley-lavender.html | Wed, 17 Apr 2024

(Code Pink) – The Israeli online magazine +972 has published a detailed report on Israel’s use of an artificial intelligence (AI) system called “Lavender” to target thousands of Palestinian men in its bombing campaign in Gaza. When Israel attacked Gaza after October 7, the Lavender system had a database of 37,000 Palestinian men with suspected links to Hamas or Palestinian Islamic Jihad (PIJ).

Lavender assigns a numerical score, from one to a hundred, to every man in Gaza, based mainly on cellphone and social media data, and automatically adds those with high scores to its kill list of suspected militants. Israel uses another automated system, known as “Where’s Daddy?”, to call in airstrikes to kill these men and their families in their homes.

The report is based on interviews with six Israeli intelligence officers who have worked with these systems. As one of the officers explained to +972, by adding a name from a Lavender-generated list to the Where’s Daddy home tracking system, he can place the man’s home under constant drone surveillance, and an airstrike will be launched once he comes home.

The officers said the “collateral” killing of the men’s extended families was of little consequence to Israel. “Let’s say you calculate [that there is one] Hamas [operative] plus 10 [civilians in the house],” the officer said. “Usually, these 10 will be women and children. So absurdly, it turns out that most of the people you killed were women and children.”

The officers explained that the decision to target thousands of these men in their homes is just a question of expediency. It is simply easier to wait for them to come home to the address on file in the system, and then bomb that house or apartment building, than to search for them in the chaos of the war-torn Gaza Strip.

The officers who spoke to +972 explained that in previous Israeli massacres in Gaza, they could not generate targets quickly enough to satisfy their political and military bosses, and so these AI systems were designed to solve that problem for them. The speed with which Lavender can generate new targets gives its human minders an average of only 20 seconds to review and rubber-stamp each name, even though they know from tests of the Lavender system that at least 10% of the men chosen for assassination and familicide have only an insignificant or a mistaken connection with Hamas or PIJ.

The Lavender AI system is a new weapon, developed by Israel. But the kind of kill list it generates has a long pedigree in U.S. wars, occupations and CIA regime-change operations. Since the birth of the CIA after the Second World War, the technology used to create kill lists has evolved from the agency’s earliest coups in Iran and Guatemala, to Indonesia and the Phoenix program in Vietnam in the 1960s, to Latin America in the 1970s and 1980s, and to the U.S. occupations of Iraq and Afghanistan.

Just as U.S. weapons development aims to be at the cutting edge, or the killing edge, of new technology, the CIA and U.S. military intelligence have always tried to use the latest data processing technology to identify and kill their enemies.

The CIA learned some of these methods from German intelligence officers captured at the end of the Second World War. Many of the names on Nazi kill lists were generated by an intelligence unit called Fremde Heere Ost (Foreign Armies East), under the command of Major General Reinhard Gehlen, Germany’s spy chief on the eastern front (see David Talbot, The Devil’s Chessboard, p. 268).

Gehlen and the FHO had no computers, but they did have access to four million Soviet POWs from all over the USSR, and no compunction about torturing them to learn the names of Jews and communist officials in their hometowns to compile kill lists for the Gestapo and Einsatzgruppen.

After the war, like the 1,600 German scientists spirited out of Germany in Operation Paperclip, the United States flew Gehlen and his senior staff to Fort Hunt in Virginia. They were welcomed by Allen Dulles, soon to be the first civilian and still the longest-serving director of the CIA. Dulles sent them back to Pullach in occupied Germany to resume their anti-Soviet operations as CIA agents. The Gehlen Organization formed the nucleus of what became the BND, the new West German intelligence service, with Reinhard Gehlen as its director until he retired in 1968.

After a CIA coup removed Iran’s popular, democratically elected prime minister Mohammad Mosaddegh in 1953, a CIA team led by U.S. Major General Norman Schwarzkopf Sr. trained a new intelligence service, known as SAVAK, in the use of kill lists and torture. SAVAK used these skills to purge Iran’s government and military of suspected communists and later to hunt down anyone who dared to oppose the Shah.

By 1975, Amnesty International estimated that Iran was holding between 25,000 and 100,000 political prisoners, and had “the highest rate of death penalties in the world, no valid system of civilian courts and a history of torture that is beyond belief.”

In Guatemala, a CIA coup in 1954 replaced the democratic government of Jacobo Arbenz Guzman with a brutal dictatorship. As resistance grew in the 1960s, U.S. special forces joined the Guatemalan army in a scorched earth campaign in Zacapa, which killed 15,000 people to defeat a few hundred armed rebels. Meanwhile, CIA-trained urban death squads abducted, tortured and killed PGT (Guatemalan Labor Party) members in Guatemala City, notably 28 prominent labor leaders who were abducted and disappeared in March 1966.

Once this first wave of resistance was suppressed, the CIA set up a new telecommunications center and intelligence agency, based in the presidential palace. It compiled a database of “subversives” across the country that included leaders of farming co-ops and labor, student and indigenous activists, to provide ever-growing lists for the death squads. The resulting civil war became a genocide against indigenous people in Ixil and the western highlands that killed or disappeared at least 200,000 people.

TRT World Video: “‘Lavender’: How Israel’s AI system is killing Palestinians in Gaza”

This pattern was repeated across the world, wherever popular, progressive leaders offered hope to their people in ways that challenged U.S. interests. As historian Gabriel Kolko wrote in 1988, “The irony of U.S. policy in the Third World is that, while it has always justified its larger objectives and efforts in the name of anticommunism, its own goals have made it unable to tolerate change from any quarter that impinged significantly on its own interests.”

When General Suharto seized power in Indonesia in 1965, the U.S. Embassy compiled a list of 5,000 communists for his death squads to hunt down and kill. The CIA estimated that they eventually killed 250,000 people, while other estimates run as high as a million.

Twenty-five years later, journalist Kathy Kadane investigated the U.S. role in the massacre in Indonesia, and spoke to Robert Martens, the political officer who led the State-CIA team that compiled the kill list. “It really was a big help to the army,” Martens told Kadane. “They probably killed a lot of people, and I probably have a lot of blood on my hands. But that’s not all bad – there’s a time when you have to strike hard at a decisive moment.”

Kathy Kadane also spoke to former CIA director William Colby, who was the head of the CIA’s Far East division in the 1960s. Colby compared the U.S. role in Indonesia to the Phoenix Program in Vietnam, which was launched two years later, claiming that they were both successful programs to identify and eliminate the organizational structure of America’s communist enemies. 

The Phoenix program was designed to uncover and dismantle the National Liberation Front’s (NLF) shadow government across South Vietnam. Phoenix’s Combined Intelligence Center in Saigon fed thousands of names into an IBM 1401 computer, along with their locations and their alleged roles in the NLF. The CIA credited the Phoenix program with killing 26,369 NLF officials, while another 55,000 were imprisoned or persuaded to defect. Seymour Hersh reviewed South Vietnamese government documents that put the death toll at 41,000.

How many of the dead were correctly identified as NLF officials may be impossible to know, but Americans who took part in Phoenix operations reported killing the wrong people in many cases. Navy SEAL Elton Manzione told author Douglas Valentine (The Phoenix Program) how he killed two young girls in a night raid on a village, and then sat down on a stack of ammunition crates with a hand grenade and an M-16, threatening to blow himself up, until he got a ticket home. 

“The whole aura of the Vietnam War was influenced by what went on in the ‘hunter-killer’ teams of Phoenix, Delta, etc.,” Manzione told Valentine. “That was the point at which many of us realized we were no longer the good guys in the white hats defending freedom – that we were assassins, pure and simple. That disillusionment carried over to all other aspects of the war and was eventually responsible for it becoming America’s most unpopular war.”

Even as the U.S. defeat in Vietnam and the “war fatigue” in the United States led to a more peaceful next decade, the CIA continued to engineer and support coups around the world, and to provide post-coup governments with increasingly computerized kill lists to consolidate their rule.

After supporting General Pinochet’s coup in Chile in 1973, the CIA played a central role in Operation Condor, an alliance between right-wing military governments in Argentina, Brazil, Chile, Uruguay, Paraguay and Bolivia, to hunt down tens of thousands of their and each other’s political opponents and dissidents, killing and disappearing at least 60,000 people.

The CIA’s role in Operation Condor is still shrouded in secrecy, but Patrice McSherry, a political scientist at Long Island University, has investigated the U.S. role and concluded, “Operation Condor also had the covert support of the US government. Washington provided Condor with military intelligence and training, financial assistance, advanced computers, sophisticated tracking technology, and access to the continental telecommunications system housed in the Panama Canal Zone.”

McSherry’s research revealed how the CIA supported the intelligence services of the Condor states with computerized links, a telex system, and purpose-built encoding and decoding machines made by the CIA Logistics Department. As she wrote in her book, Predatory States: Operation Condor and Covert War in Latin America:    

“The Condor system’s secure communications system, Condortel,… allowed Condor operations centers in member countries to communicate with one another and with the parent station in a U.S. facility in the Panama Canal Zone. This link to the U.S. military-intelligence complex in Panama is a key piece of evidence regarding secret U.S. sponsorship of Condor…”

Operation Condor ultimately failed, but the U.S. provided similar support and training to right-wing governments in Colombia and Central America throughout the 1980s in what senior military officers have called a “quiet, disguised, media-free approach” to repression and kill lists.

The U.S. School of the Americas (SOA) trained thousands of Latin American officers in the use of torture and death squads, as Major Joseph Blair, the SOA’s former chief of instruction, described to John Pilger for his film, The War You Don’t See:

“The doctrine that was taught was that, if you want information, you use physical abuse, false imprisonment, threats to family members, and killing. If you can’t get the information you want, if you can’t get the person to shut up or stop what they’re doing, you assassinate them – and you assassinate them with one of your death squads.”

When the same methods were transferred to the U.S. hostile military occupation of Iraq after 2003, Newsweek headlined it “The Salvador Option.” A U.S. officer explained to Newsweek that U.S. and Iraqi death squads were targeting Iraqi civilians as well as resistance fighters. “The Sunni population is paying no price for the support it is giving to the terrorists,” he said. “From their point of view, it is cost-free. We have to change that equation.”

The United States sent two veterans of its dirty wars in Latin America to Iraq to play key roles in that campaign. Colonel James Steele led the U.S. Military Advisor Group in El Salvador from 1984 to 1986, training and supervising Salvadoran forces who killed tens of thousands of civilians. He was also deeply involved in the Iran-Contra scandal, narrowly escaping a prison sentence for his role supervising shipments from Ilopango air base in El Salvador to the U.S.-backed Contras in Honduras and Nicaragua.

In Iraq, Steele oversaw the training of the Interior Ministry’s Special Police Commandos – rebranded as “National” and later “Federal” Police after the discovery of their al-Jadiriyah torture center and other atrocities.

Bayan al-Jabr, a commander in the Iranian-trained Badr Brigade militia, was appointed Interior Minister in 2005, and Badr militiamen were integrated into the Wolf Brigade death squad and other Special Police units. Jabr’s chief adviser was Steven Casteel, the former intelligence chief for the U.S. Drug Enforcement Administration (DEA) in Latin America.

The Interior Ministry death squads waged a dirty war in Baghdad and other cities, filling the Baghdad morgue with up to 1,800 corpses per month, while Casteel fed the western media absurd cover stories, such as that the death squads were all “insurgents” in stolen police uniforms. 

Meanwhile U.S. special operations forces conducted “kill-or-capture” night raids in search of Resistance leaders. General Stanley McChrystal, the commander of Joint Special Operations Command from 2003-2008, oversaw the development of a database system, used in Iraq and Afghanistan, that compiled cellphone numbers mined from captured cellphones to generate an ever-expanding target list for night raids and air strikes.

The targeting of cellphones instead of actual people enabled the automation of the targeting system, and explicitly excluded using human intelligence to confirm identities. Two senior U.S. commanders told the Washington Post that only half the night raids attacked the right house or person.

In Afghanistan, President Obama put McChrystal in charge of U.S. and NATO forces in 2009, and his cellphone-based “social network analysis” enabled an exponential increase in night raids, from 20 raids per month in May 2009 to up to 40 per night by April 2011.

As with the Lavender system in Gaza, this huge increase in targets was achieved by taking a system originally designed to identify and track a small number of senior enemy commanders and applying it to anyone suspected of having links with the Taliban, based on their cellphone data.

This led to the capture of an endless flood of innocent civilians, so that most civilian detainees had to be quickly released to make room for new ones. The increased killing of innocent civilians in night raids and airstrikes fueled already fierce resistance to the U.S. and NATO occupation and ultimately led to its defeat.

President Obama’s drone campaign to kill suspected enemies in Pakistan, Yemen and Somalia was just as indiscriminate, with reports suggesting that 90% of the people it killed in Pakistan were innocent civilians.

And yet Obama and his national security team kept meeting in the White House every “Terror Tuesday” to select who the drones would target that week, using an Orwellian, computerized “disposition matrix” to provide technological cover for their life and death decisions.   

Looking at this evolution of ever-more automated systems for killing and capturing enemies, we can see how, as the information technology used has advanced from telexes to cellphones and from early IBM computers to artificial intelligence, the human intelligence and sensibility that could spot mistakes, prioritize human life and prevent the killing of innocent civilians has been progressively marginalized and excluded, making these operations more brutal and horrifying than ever.

One of this article’s authors, Nicolas, has at least two good friends who survived the dirty wars in Latin America because someone who worked in the police or military got word to them that their names were on a death list, one in Argentina, the other in Guatemala. If their fates had been decided by an AI machine like Lavender, they would both be long dead.

As with supposed advances in other types of weapons technology, like drones and “precision” bombs and missiles, innovations that claim to make targeting more precise and eliminate human error have instead led to the automated mass murder of innocent people, especially women and children, bringing us full circle from one holocaust to the next.

Via Code Pink

Gaza Conflict: Israel using AI to identify Human Targets raises Fears Innocents are Targeted
https://www.juancole.com/2024/04/conflict-identify-innocents.html | Sat, 13 Apr 2024

By Elke Schwarz, Queen Mary University of London

A report by Jerusalem-based investigative journalists published in +972 magazine finds that AI targeting systems have played a key role in identifying – and potentially misidentifying – tens of thousands of targets in Gaza. This suggests that autonomous warfare is no longer a future scenario. It is already here and the consequences are horrifying.

There are two technologies in question. The first, “Lavender”, is an AI recommendation system designed to use algorithms to identify Hamas operatives as targets. The second, the grotesquely named “Where’s Daddy?”, is a system which tracks targets geographically so that they can be followed into their family residences before being attacked. Together, these two systems constitute an automation of the find-fix-track-target components of what is known by the modern military as the “kill chain”.

Systems such as Lavender are not autonomous weapons, but they do accelerate the kill chain and make the process of killing progressively more autonomous. AI targeting systems draw on data from computer sensors and other sources to statistically assess what constitutes a potential target. Vast amounts of this data are gathered by Israeli intelligence through surveillance on the 2.3 million inhabitants of Gaza.

Such systems are trained on a set of data to produce the profile of a Hamas operative. This could be data about gender, age, appearance, movement patterns, social network relationships, accessories, and other “relevant features”. They then work to match actual Palestinians to this profile by degree of fit. The category of what constitutes relevant features of a target can be set as stringently or as loosely as is desired. In the case of Lavender, it seems one of the key equations was “male equals militant”. This has echoes of the infamous “all military-aged males are potential targets” mandate of the 2010s US drone wars in which the Obama administration identified and assassinated hundreds of people designated as enemies “based on metadata”.

What is different with AI in the mix is the speed with which targets can be algorithmically determined and the mandate of action this issues. The +972 report indicates that the use of this technology has led to the dispassionate annihilation of thousands of eligible – and ineligible – targets at speed and without much human oversight.


“Lavender 3,” digital, Dream/ Dreamworld v. 3, 2024.

The Israel Defense Forces (IDF) were swift to deny the use of AI targeting systems of this kind, and it is difficult to verify independently whether, and to what extent, they have been used, and how exactly they function. But the functionalities described by the report are entirely plausible, especially given the IDF’s own boasts to be “one of the most technological organisations” and an early adopter of AI.

With military AI programs around the world striving to shorten what the US military calls the “sensor-to-shooter timeline” and “increase lethality” in their operations, why would an organisation such as the IDF not avail themselves of the latest technologies?

The fact is, systems such as Lavender and Where’s Daddy? are the manifestation of a broader trend that has been underway for a good decade, and the IDF and its elite units are far from the only ones seeking to embed more AI targeting systems in their processes.

When machines trump humans

Earlier this year, Bloomberg reported on the latest version of Project Maven, the US Department of Defense AI pathfinder programme, which has evolved from being a sensor data analysis programme in 2017 to a full-blown AI-enabled target recommendation system built for speed. As Bloomberg journalist Katrina Manson reports, the operator “can now sign off on as many as 80 targets in an hour of work, versus 30 without it”.

Manson quotes a US army officer tasked with learning the system describing the process of concurring with the algorithm’s conclusions, delivered in a rapid staccato: “Accept. Accept, Accept”. Evident here is how the human operator is deeply embedded in digital logics that are difficult to contest. This gives rise to a logic of speed and increased output that trumps all else.

The efficient production of death is reflected also in the +972 account, which indicated an enormous pressure to accelerate and increase the production of targets and the killing of these targets. As one of the sources says: “We were constantly being pressured: bring us more targets. They really shouted at us. We finished [killing] our targets very quickly”.

Built-in biases

Systems like Lavender raise many ethical questions pertaining to training data, biases, accuracy, error rates and, importantly, questions of automation bias. Automation bias cedes all authority, including moral authority, to the dispassionate interface of statistical processing.

Speed and lethality are the watchwords for military tech. But in prioritising AI, the scope for human agency is marginalised. The logic of the system requires this, owing to the comparatively slow cognitive systems of the human. It also removes the human sense of responsibility for computer-produced outcomes.

I’ve written elsewhere how this complicates notions of control (at all levels) in ways that we must take into consideration. When AI, machine learning and human reasoning form a tight ecosystem, the capacity for human control is limited. Humans have a tendency to trust whatever computers say, especially when they move too fast for us to follow.

The problem of speed and acceleration also produces a general sense of urgency, which privileges action over non-action. This turns categories such as “collateral damage” or “military necessity”, which should serve as a restraint to violence, into channels for producing more violence.

I am reminded of the military scholar Christopher Coker’s words: “we must choose our tools carefully, not because they are inhumane (all weapons are) but because the more we come to rely on them, the more they shape our view of the world”. It is clear that military AI shapes our view of the world. Tragically, Lavender gives us cause to realise that this view is laden with violence.

Elke Schwarz, Reader in Political Theory, Queen Mary University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Four Ways AI could Help us Respond to Climate Change
https://www.juancole.com/2024/02/respond-climate-change.html | Wed, 28 Feb 2024

By Lakshmi Babu Saheer, Anglia Ruskin University

(The Conversation) – Advanced AI systems are coming under increasing criticism for how much energy they use. But it’s important to remember that AI could also contribute in various ways to our response to climate change.

Climate change can be broken down into several smaller problems that must be addressed as part of an overarching strategy for adapting to and mitigating it. These include identifying sources of emissions, enhancing the production and use of renewable energy and predicting calamities like floods and fires.

My own research looks at how AI can be harnessed to predict greenhouse gas emissions from cities and farms, or to understand changes in vegetation, biodiversity and terrain from satellite images.

Here are four different areas where AI has already managed to master some of the smaller tasks necessary for a wider confrontation with the climate crisis.


“AI and Climate Change,” Digital, Dream, Dreamland v. 3, 2024

1. Electricity

AI could help reduce energy-related emissions by more accurately forecasting energy supply and demand.

AI can learn patterns in how and when people use energy. It can also accurately forecast how much energy will be generated from sources like wind and solar depending on the weather and so help to maximise the use of clean energy.

For example, by estimating the amount of solar power generated from panels (based on sunlight duration or weather conditions), AI can help plan the timing of laundry or charging of electric vehicles to help consumers make the most of this renewable energy. On a grander scale, it could help grid operators pre-empt and mitigate gaps in supply.

Researchers in Iran used AI to predict the energy consumption of a research centre by taking account of its occupancy, structure, materials and local weather conditions. The system also used algorithms to optimise the building’s energy use, proposing appropriate insulation measures and heating controls, and determining how much lighting and power was necessary based on the number of people present, ultimately reducing consumption by 35%.
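
As a rough illustration of the kind of model described above, the Python sketch below predicts a building’s hourly energy use from occupancy and weather features. The synthetic data, feature names and random-forest model are illustrative assumptions only, not details of the Iranian study.

```python
# Minimal sketch: predict building energy use from occupancy and weather.
# Synthetic data and feature choices are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000  # hourly records

# Illustrative features: occupancy count, outdoor temperature (C), sunlight fraction.
occupancy = rng.integers(0, 120, n)
outdoor_temp = rng.normal(15, 8, n)
sunlight = rng.uniform(0, 1, n)

# Synthetic "true" consumption in kWh: base load + people + heating/cooling + lighting.
energy_kwh = (
    50
    + 0.4 * occupancy
    + 1.5 * np.abs(outdoor_temp - 21)  # further from 21C -> more heating/cooling
    - 10 * sunlight                    # daylight reduces lighting load
    + rng.normal(0, 5, n)              # noise
)

X = np.column_stack([occupancy, outdoor_temp, sunlight])
X_train, X_test, y_train, y_test = train_test_split(X, energy_kwh, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(f"Mean absolute error: {mean_absolute_error(y_test, pred):.1f} kWh")
```

A grid operator or building manager could swap the synthetic arrays for real metered data and weather forecasts to get ahead-of-time demand estimates.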

2. Transport

Transport accounts for roughly one-fifth of global CO₂ emissions. AI models can encourage green travel options by suggesting the most efficient routes for drivers, with fewer hills, less traffic and constant speeds, and so minimise emissions.

An AI-based system suggested routes for electric vehicles in the city of Gothenburg, Sweden. The system used features like vehicle speed and the location of charging points to find optimal routes that minimised energy use.
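
The routing idea can be sketched as a shortest-path search in which edge weights are estimated energy costs rather than distances. The tiny road graph and energy figures below are made-up values purely for illustration; the Gothenburg system itself is far more elaborate.

```python
# Minimal sketch: choose the route that minimises estimated energy use,
# not distance. Graph layout and energy costs (kWh) are illustrative only.
import heapq

# road graph: node -> list of (neighbour, estimated_energy_kwh)
graph = {
    "depot":     [("hill_road", 4.2), ("ring_road", 3.1)],
    "hill_road": [("centre", 2.8)],                      # short but steep, so costly
    "ring_road": [("centre", 1.9), ("charger", 0.7)],
    "charger":   [("centre", 1.5)],
    "centre":    [],
}

def lowest_energy_route(graph, start, goal):
    """Dijkstra's algorithm over energy-weighted edges."""
    queue = [(0.0, start, [start])]   # (energy so far, node, path)
    best = {}
    while queue:
        energy, node, path = heapq.heappop(queue)
        if node == goal:
            return energy, path
        if node in best and best[node] <= energy:
            continue
        best[node] = energy
        for neighbour, cost in graph[node]:
            heapq.heappush(queue, (energy + cost, neighbour, path + [neighbour]))
    return float("inf"), []

energy, route = lowest_energy_route(graph, "depot", "centre")
print(f"Lowest-energy route: {' -> '.join(route)} ({energy:.1f} kWh)")
```

In a real system the per-edge costs would come from a learned model of speed, gradient, traffic and charger availability rather than fixed numbers.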

3. Agriculture

Studies have shown that better farming practices can reduce emissions. AI can ensure that space and fertilisers (which contribute to climate change) are used sparingly.

By predicting how much of a crop people will buy in a particular market, AI can help producers and distributors minimise waste. A 2017 study conducted by Stanford University in the US even showed that advanced AI models can predict county-level soybean yields.

This was possible using satellite images to analyse and track the growth of crops. The researchers compared multiple models for predicting crop yields; the best-performing one could predict a crop’s yield from images of the growing plants and other features, including the climate.
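
A heavily simplified sketch of that approach: fit a regression on features derived from satellite imagery, such as a growing-season vegetation index, plus weather data, and use it to predict yield. The feature names, synthetic values and ridge-regression model below are assumptions for illustration, not the Stanford study’s actual pipeline.

```python
# Minimal sketch: predict county-level crop yield from satellite-derived
# features. All values and coefficients are synthetic, for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_counties = 500

# Illustrative features: growing-season mean NDVI and cumulative rainfall (mm).
ndvi = rng.uniform(0.2, 0.9, n_counties)
rainfall = rng.normal(500, 120, n_counties)

# Synthetic yields (tonnes/hectare): greener fields and adequate rain help.
yield_t_ha = 1.0 + 3.5 * ndvi + 0.002 * rainfall + rng.normal(0, 0.3, n_counties)

X = np.column_stack([ndvi, rainfall])
model = Ridge(alpha=1.0)

scores = cross_val_score(model, X, yield_t_ha, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f}")

# Forecast for a hypothetical county observed mid-season.
model.fit(X, yield_t_ha)
print("Predicted yield (t/ha):", round(float(model.predict([[0.65, 480.0]])[0]), 2))
```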

Knowing a crop’s probable yield weeks in advance can help governments and agencies plan alternative means of procuring food in advance of a bad harvest.

4. Disaster management

The prediction and management of disasters is a field where AI has made major contributions. AI models have studied images from drones to predict flood damage in the Indus basin in Pakistan.

The system is also useful for detecting the onset of a flood, helping with real-time rescue operation planning. The system could be used by government authorities to plan prompt relief measures.
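
As a bare-bones illustration of image-based flood detection, the sketch below classifies small aerial tiles as “flooded” or “dry”. The synthetic 8×8 “images” and the logistic-regression classifier are placeholders; real systems train deep networks on large labelled sets of drone or satellite imagery.

```python
# Minimal sketch: classify small aerial tiles as flooded vs. dry.
# Synthetic 8x8 "images" stand in for real drone imagery.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

def make_tiles(n, flooded):
    # Flooded tiles read darker and smoother (standing water);
    # dry tiles are brighter and noisier (vegetation, buildings).
    base = 0.25 if flooded else 0.65
    noise = 0.05 if flooded else 0.20
    return rng.normal(base, noise, (n, 8 * 8))

X = np.vstack([make_tiles(300, flooded=True), make_tiles(300, flooded=False)])
y = np.array([1] * 300 + [0] * 300)  # 1 = flooded

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```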

These potential uses don’t erase the problem of AI’s energy consumption, however. To ensure AI can be a force for good in the fight against climate change, something will still have to be done about this.

Lakshmi Babu Saheer, Director of Computing Informatics and Applications Research Group, Anglia Ruskin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI Behavior, Human Destiny and the Rise of the Killer Robots
https://www.juancole.com/2024/02/behavior-destiny-killer.html | Wed, 21 Feb 2024

(Tomdispatch.com) – Yes, it’s already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice-versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it’s anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called “killer robots” by critics, such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its “collaborative combat aircraft,” an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they’ll be able to communicate with each other without human intervention and, being “intelligent,” will be able to come up with their own unscripted tactics for defeating an enemy — or something else entirely. Such computer-driven groupthink, labeled “emergent behavior” by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, “uninhabited”) versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they’ll be part of a “networked” combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a “loyal wingman” for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot “Swarms”

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

“Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces,” predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). “Networked, cooperative autonomous systems,” he wrote then, “will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole.”

As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and “vote” on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit “swarm” behavior in nature. As Scharre put it, “Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse.”
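
To give a sense of what “swarming” means algorithmically, here is a minimal, non-military sketch of the classic “boids” flocking rules (cohesion, separation, alignment) that this line of research borrows from nature. The rule weights and the two-dimensional setup are arbitrary assumptions for illustration and bear no relation to any actual weapons programme.

```python
# Minimal "boids" sketch: each agent steers using only its neighbours'
# positions and velocities, and coordinated group motion emerges with no
# central controller. All weights here are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(42)
N, STEPS, RADIUS = 30, 200, 10.0
pos = rng.uniform(0, 50, (N, 2))   # agent positions
vel = rng.normal(0, 1, (N, 2))     # agent velocities

for _ in range(STEPS):
    new_vel = vel.copy()
    for i in range(N):
        dists = np.linalg.norm(pos - pos[i], axis=1)
        mask = (dists < RADIUS) & (dists > 0)   # neighbours, excluding self
        if not mask.any():
            continue
        cohesion = pos[mask].mean(axis=0) - pos[i]      # steer toward group centre
        separation = (pos[i] - pos[mask]).sum(axis=0)   # steer away from crowding
        alignment = vel[mask].mean(axis=0) - vel[i]     # match neighbours' heading
        new_vel[i] += 0.01 * cohesion + 0.05 * separation + 0.05 * alignment
    vel = new_vel
    speeds = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speeds > 2.0, vel / speeds * 2.0, vel)  # cap speed for stability
    pos += vel

# Order parameter: the length of the mean unit-velocity vector.
# Values near 1 mean the agents' headings have aligned into a flock.
headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
print("Alignment (0 = random, 1 = fully aligned):",
      round(float(np.linalg.norm(headings.mean(axis=0))), 2))
```

The point of the exercise is the emergent part: none of the individual rules says “form a flock”, yet coordinated motion appears, which is exactly the property that makes group behaviour hard to predict in advance.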

In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre’s at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.

From Mosaic to Replicator

Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s in-house high-tech research organization. As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its “Mosaic” program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.

“Applying the great flexibility of the mosaic concept to warfare,” explained Dan Patt, deputy director of DARPA’s Strategic Technology Office, “lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole.”

This concept of warfare apparently undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. “Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army (PLA). “To stay ahead, we’re going to create a new state of the art… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force’s Collaborative Combat Aircraft and the Navy’s Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment’s Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.

At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To facilitate this, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what it ominously enough calls Project VENOM, or “Viper Experimentation and Next-generation Operations Model.” Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.

When Swarms Choose Their Own Path

In other words, it’s only a matter of time before the U.S. military (and presumably China’s, Russia’s, and perhaps those of a few other powers) will be able to deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Any participating robotic member of such swarms would be given a mission objective (“seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates”) but not be given precise instructions on how to do so. That would allow them to select their own battle tactics in consultation with one another. If the limited test data we have is anything to go by, this could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.

The propensity for such interconnected AI systems to engage in novel, unplanned outcomes is what computer experts call “emergent behavior.” As ScienceDirect, a digest of scientific journals, explains it, “An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties.” In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.

At this point, of course, it’s almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or if, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.

What then? Might they choose to keep fighting beyond their preprogrammed limits, provoking unintended escalation — even, conceivably, of a nuclear kind? Or would they choose to stop their attacks on enemy forces and instead interfere with the operations of friendly ones, perhaps firing on and devastating them (as Skynet does in the classic science fiction Terminator movie series)? Or might they engage in behaviors that, for better or infinitely worse, are entirely beyond our imagination?

Top U.S. military and diplomatic officials insist that AI can indeed be used without incurring such future risks and that this country will only employ devices that incorporate thoroughly adequate safeguards against any future dangerous misbehavior. That is, in fact, the essential point made in the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” issued by the State Department in February 2023. Many prominent security and technology officials are, however, all too aware of the potential risks of emergent behavior in future robotic weaponry and continue to issue warnings against the rapid utilization of AI in warfare.

Of particular note is the final report that the National Security Commission on Artificial Intelligence issued in February 2021. Co-chaired by Robert Work (back at CNAS after his stint at the Pentagon) and Eric Schmidt, former CEO of Google, the commission recommended the rapid utilization of AI by the U.S. military to ensure victory in any future conflict with China and/or Russia. However, it also voiced concern about the potential dangers of robot-saturated battlefields.

“The unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” the report noted. This could occur for a number of reasons, including “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems [that is, emergent behaviors] on the battlefield.” Given that danger, it concluded, “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems.”

When the leading advocates of autonomous weaponry tell us to be concerned about the unintended dangers posed by their use in battle, the rest of us should be worried indeed. Even if we lack the mathematical skills to understand emergent behavior in AI, it should be obvious that humanity could face a significant risk to its existence, should killing machines acquire the ability to think on their own. Perhaps they would surprise everyone and decide to take on the role of international peacekeepers, but given that they’re being designed to fight and kill, it’s far more probable that they might simply choose to carry out those instructions in an independent and extreme fashion.

If so, there could be no one around to put an R.I.P. on humanity’s gravestone.

Via Tomdispatch.com

AI Goes to War: Will the Pentagon’s Techno-Fantasies Pave the Way for War with China?
https://www.juancole.com/2023/10/pentagons-techno-fantasies.html | Wed, 04 Oct 2023

(Tomdispatch.com) – On August 28th, Deputy Secretary of Defense Kathleen Hicks chose the occasion of a three-day conference organized by the National Defense Industrial Association (NDIA), the arms industry’s biggest trade group, to announce the “Replicator Initiative.” Among other things, it would involve producing “swarms of drones” that could hit thousands of targets in China on short notice. Call it the full-scale launching of techno-war.

Her speech to the assembled arms makers was yet another sign that the military-industrial complex (MIC) President Dwight D. Eisenhower warned us about more than 60 years ago is still alive, all too well, and taking a new turn. Call it the MIC for the digital age.

Hicks described the goal of the Replicator Initiative this way:

“To stay ahead [of China], we’re going to create a new state of the art… leveraging attritable, autonomous systems in all domains which are less expensive, put fewer people at risk, and can be changed, upgraded, or improved with substantially shorter lead times… We’ll counter the PLA’s [People’s Liberation Army’s] mass with mass of our own, but ours will be harder to plan for, harder to hit, and harder to beat.”

Think of it as artificial intelligence (AI) goes to war — and oh, that word “attritable,” a term that doesn’t exactly roll off the tongue or mean much of anything to the average taxpayer, is pure Pentagonese for the ready and rapid replaceability of systems lost in combat. Let’s explore later whether the Pentagon and the arms industry are even capable of producing the kinds of cheap, effective, easily replicable techno-war systems Hicks touted in her speech. First, though, let me focus on the goal of such an effort: confronting China.

Target: China

However one gauges China’s appetite for military conflict — as opposed to relying more heavily on its increasingly powerful political and economic tools of influence — the Pentagon is clearly proposing a military-industrial fix for the challenge posed by Beijing. As Hicks’s speech to those arms makers suggests, that new strategy is going to be grounded in a crucial premise: that any future technological arms race will rely heavily on the dream of building ever cheaper, ever more capable weapons systems based on the rapid development of near-instant communications, artificial intelligence, and the ability to deploy such systems on short notice.

The vision Hicks put forward to the NDIA is, you might already have noticed, untethered from the slightest urge to respond diplomatically or politically to the challenge of Beijing as a rising great power. It matters little that those would undoubtedly be the most effective ways to head off a future conflict with China.

Such a non-military approach would be grounded in a clearly articulated return to this country’s longstanding “One China” policy. Under it, the U.S. would forgo any hint of the formal political recognition of the island of Taiwan as a separate state, while Beijing would commit itself to limiting to peaceful means its efforts to absorb that island.

There are numerous other issues where collaboration between the two nations could move the U.S. and China from a policy of confrontation to one of cooperation, as noted in a new paper by my colleague Jake Werner of the Quincy Institute: “1) development in the Global South; 2) addressing climate change; 3) renegotiating global trade and economic rules; and 4) reforming international institutions to create a more open and inclusive world order.” Achieving such goals on this planet now might seem like a tall order, but the alternative — bellicose rhetoric and aggressive forms of competition that increase the risk of war — should be considered both dangerous and unacceptable.

On the other side of the equation, proponents of increasing Pentagon spending to address the purported dangers of the rise of China are masters of threat inflation. They find it easy and satisfying to exaggerate both Beijing’s military capabilities and its global intentions in order to justify keeping the military-industrial complex amply funded into the distant future.

As Dan Grazier of the Project on Government Oversight noted in a December 2022 report, while China has made significant strides militarily in the past few decades, its strategy is “inherently defensive” and poses no direct threat to the United States. At present, in fact, Beijing lags behind Washington strikingly when it comes to both military spending and key capabilities, including having a far smaller (though still undoubtedly devastating) nuclear arsenal, a less capable Navy, and fewer major combat aircraft. None of this would, however, be faintly obvious if you only listened to the doomsayers on Capitol Hill and in the halls of the Pentagon.

But as Grazier points out, this should surprise no one since “threat inflation has been the go-to tool for defense spending hawks for decades.” That was, for instance, notably the case at the end of the Cold War of the last century, after the Soviet Union had disintegrated, when then Chairman of the Joint Chiefs of Staff Colin Powell so classically said: “Think hard about it. I’m running out of demons. I’m running out of villains. I’m down to [Cuba’s Fidel] Castro and Kim Il-sung [the late North Korean dictator].”

Needless to say, that posed a grave threat to the Pentagon’s financial fortunes, and Congress did indeed insist then on significant reductions in the size of the armed forces, providing less funding for new weaponry in the first few post-Cold War years. But the Pentagon was quick to highlight a new set of supposed threats to American power to justify putting military spending back on the upswing. With no great power in sight, it began focusing instead on the supposed dangers of regional powers like Iran, Iraq, and North Korea. It also greatly overstated their military strength in its drive to be funded to win not one but two major regional conflicts at the same time. This process of switching to new alleged threats to justify a larger military establishment was captured strikingly in Michael Klare’s 1995 book Rogue States and Nuclear Outlaws.

After the 9/11 attacks, that “rogue states” rationale was, for a time, superseded by the disastrous “Global War on Terror,” a distinctly misguided response to those terrorist acts. It would spawn trillions of dollars of spending on wars in Iraq and Afghanistan and a global counter-terror presence that included U.S. operations in 85 — yes, 85! — countries, as strikingly documented by the Costs of War Project at Brown University.

All of that blood and treasure, including hundreds of thousands of direct civilian deaths (and many more indirect ones), as well as thousands of American deaths and painful numbers of devastating physical and psychological injuries to U.S. military personnel, resulted in the installation of unstable or repressive regimes whose conduct — in the case of Iraq — helped set the stage for the rise of the Islamic State (ISIS) terror organization. As it turned out, those interventions proved to be anything but either the “cakewalk” or the flowering of democracy predicted by the advocates of America’s post-9/11 wars. Give them full credit, though! They proved to be a remarkably efficient money machine for the denizens of the military-industrial complex.

Constructing “the China Threat”

As for China, its status as the threat du jour gained momentum during the Trump years. In fact, for the first time since the twentieth century, the Pentagon’s 2018 defense strategy document targeted “great power competition” as the wave of the future.

One particularly influential document from that period was the report of the congressionally mandated National Defense Strategy Commission. That body critiqued the Pentagon’s strategy of the moment, boldly claiming (without significant backup information) that the Defense Department was not planning to spend enough to address the military challenge posed by great power rivals, with a primary focus on China.

The commission proposed increasing the Pentagon’s budget by 3% to 5% above inflation for years to come — a move that would have pushed it to an unprecedented $1 trillion or more within a few years. Its report would then be extensively cited by Pentagon spending boosters in Congress, most notably former Senate Armed Services Committee Chair James Inhofe (R-OK), who used to literally wave it at witnesses in hearings and ask them to pledge allegiance to its dubious findings.

That 3% to 5% real growth figure caught on with prominent hawks in Congress and, until the recent chaos in the House of Representatives, spending did indeed fit just that pattern. What has not been much discussed is research by the Project on Government Oversight showing that the commission that penned the report and fueled those spending increases was heavily weighted toward individuals with ties to the arms industry. Its co-chair, for instance, served on the board of the giant weapons maker Northrop Grumman, and most of the other members had been or were advisers or consultants to the industry, or worked in think tanks heavily funded by just such corporations. So, we were never talking about a faintly objective assessment of U.S. “defense” needs.

Beware of Pentagon “Techno-Enthusiasm”

Just so no one would miss the point in her NDIA speech, Kathleen Hicks reiterated that the proposed transformation of weapons development with future techno-war in mind was squarely aimed at Beijing. “We must,” she said, “ensure the PRC leadership wakes up every day, considers the risks of aggression and concludes, ‘today is not the day’ — and not just today, but every day, between now and 2027, now and 2035, now and 2049, and beyond… Innovation is how we do that.”

The notion that advanced military technology could be the magic solution to complex security challenges runs directly against the actual record of the Pentagon and the arms industry over the past five decades. In those years, supposedly “revolutionary” new systems like the F-35 combat aircraft, the Army’s Future Combat System (FCS), and the Navy’s Littoral Combat Ship have been notoriously plagued by cost overruns, schedule delays, performance problems, and maintenance challenges that have, at best, severely limited their combat capabilities. In fact, the Navy is already planning to retire a number of those Littoral Combat Ships early, while the whole FCS program was canceled outright.

In short, the Pentagon is now betting on a complete transformation of how it and the industry do business in the age of AI — a long shot, to put it mildly.

But you can count on one thing: the new approach is likely to be a gold mine for weapons contractors, even if the resulting weaponry doesn’t faintly perform as advertised. This quest will not be without political challenges, most notably finding the many billions of dollars needed to pursue the goals of the Replicator Initiative, while staving off lobbying by producers of existing big-ticket items like aircraft carriers, bombers, and fighter jets.

Members of Congress will defend such current-generation systems fiercely to keep weapons spending flowing to major corporate contractors and so into key congressional districts. One solution to the potential conflict between funding the new systems touted by Hicks and the costly existing programs that now feed the titans of the arms industry: jack up the Pentagon’s already massive budget and head for that trillion-dollar peak, which would be the highest level of such spending since World War II.

The Pentagon has long built its strategy around supposed technological marvels like the “electronic battlefield” in the Vietnam era; the “revolution in military affairs,” first touted in the early 1990s; and the precision-guided munitions praised since at least the 1991 Persian Gulf war. It matters little that such wonder weapons have never performed as advertised. For example, a detailed Government Accountability Office report on the bombing campaign in the Gulf War found that “the claim by DOD [Department of Defense] and contractors of a one-target, one-bomb capability for laser-guided munitions was not demonstrated in the air campaign where, on average, 11 tons of guided and 44 tons of unguided munitions were delivered on each successfully destroyed target.”
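
To see what those GAO averages imply in practice, here is a minimal back-of-the-envelope sketch. The tonnage figures come straight from the GAO passage quoted above; the one-ton bomb weight is purely an illustrative assumption (roughly a 2,000-pound-class weapon), not a number from the report.

```python
# Back-of-the-envelope reading of the GAO figures quoted above.
# Assumption (not from the GAO report): a typical laser-guided bomb of the era
# weighed roughly one ton, used here only to turn tonnage into a bomb count.

guided_tons_per_target = 11      # GAO average: guided munitions per destroyed target
unguided_tons_per_target = 44    # GAO average: unguided munitions per destroyed target
assumed_bomb_weight_tons = 1.0   # illustrative assumption only

approx_guided_bombs = guided_tons_per_target / assumed_bomb_weight_tons
total_tons = guided_tons_per_target + unguided_tons_per_target

print(f"~{approx_guided_bombs:.0f} guided bombs per destroyed target, not one")
print(f"~{total_tons} tons of munitions, guided and unguided, per destroyed target")
```

Under that assumption, "one target, one bomb" works out to something closer to eleven guided bombs per destroyed target, plus four times that tonnage again in unguided munitions.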

When such advanced weapons systems can be made to work, at enormous cost in time and money, they almost invariably prove of limited value, even against relatively poorly armed adversaries (as in Iraq and Afghanistan in this century). China, a great power rival with a modern industrial base and a growing arsenal of sophisticated weaponry, is another matter. The quest for decisive military superiority over Beijing and the ability to win a war against a nuclear-armed power should be (but isn’t) considered a fool’s errand, more likely to spur a war than deter it, with potentially disastrous consequences for all concerned.

Perhaps most dangerous of all, a drive for the full-scale production of AI-based weaponry will only increase the likelihood that future wars could be fought all too disastrously without human intervention. As Michael Klare pointed out in a report for the Arms Control Association, relying on such systems will also magnify the chances of technical failures, as well as of misguided AI-driven targeting decisions that could spur unintended slaughter with no human in a position to intervene. The potentially disastrous malfunctioning of such autonomous systems might, in turn, only increase the possibility of nuclear conflict.

It would still be possible to rein in the Pentagon’s techno-enthusiasm by slowing the development of the kinds of systems highlighted in Hicks’s speech, while creating international rules of the road regarding their future development and deployment. But the time to start pushing back against yet another misguided “techno-revolution” is now, before automated warfare increases the risk of a global catastrophe. Emphasizing new weaponry over creative diplomacy and smart political decisions is a recipe for disaster in the decades to come. There has to be a better way.

Via Tomdispatch.com

AI vs. AI: And Human Extinction as Collateral Damage https://www.juancole.com/2023/07/extinction-collateral-damage.html Wed, 12 Jul 2023 04:02:30 +0000 https://www.juancole.com/?p=213159 ( Tomdispatch.com) – A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes. But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film “WarGames,” a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The “Terminator” movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending “fire” instructions directly to “shooters,” largely bypassing human control.

“A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement,” was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that “we do need to change the name” as the system evolves, Roper added, “I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don’t think we can go there.”

And while he can’t go there, that’s just where the rest of us may, indeed, be going.

Mind you, that’s only the start. In fact, the Air Force’s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced “Jad-C-two”). “JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon… to engage the target,” the Congressional Research Service reported in 2022.
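
To make the CRS description a bit more concrete, here is a deliberately toy sketch of the sensor-to-recommendation loop it outlines. Every class name, threshold, and weapon label below is invented for illustration; the actual JADC2 software is classified, and nothing here should be read as reflecting how it works.

```python
# A toy, purely illustrative sketch of the pipeline the CRS describes:
# sensor data in, AI-scored targets out, a recommended "shooter" attached.
# All names, numbers, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    sensor_id: str
    position: tuple       # (lat, lon) of the detected object
    confidence: float     # model-estimated probability it is a valid target

def fuse_and_filter(tracks):
    # Stand-in for sensor fusion and AI target identification.
    return [t for t in tracks if t.confidence >= 0.90]

def recommend_engagement(track):
    # Stand-in for "recommending the optimal weapon"; a real system would weigh
    # range, weapon availability, collateral-damage estimates, and much else.
    return "long-range fires" if track.confidence >= 0.95 else "hold for human review"

tracks = [Track("radar-3", (33.1, 44.2), 0.97), Track("uav-7", (33.4, 44.0), 0.91)]
for t in fuse_and_filter(tracks):
    print(t.sensor_id, "->", recommend_engagement(t))
```

The point of the sketch is the last step: once the recommendation stage can pass "fire" instructions directly to shooters, the human check becomes just one more conditional that can be tightened, loosened, or dropped.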

AI and the Nuclear Trigger

Initially, JADC2 will be designed to coordinate combat operations among “conventional” or non-nuclear American forces. Eventually, however, it is expected to link up with the Pentagon’s nuclear command-control-and-communications systems (NC3), potentially giving computers significant control over the use of the American nuclear arsenal. “JADC2 and NC3 are intertwined,” General John E. Hyten, vice chairman of the Joint Chiefs of Staff, indicated in a 2020 interview. As a result, he added in typical Pentagonese, “NC3 has to inform JADC2 and JADC2 has to inform NC3.”

It doesn’t require great imagination to picture a time in the not-too-distant future when a crisis of some sort — say a U.S.-China military clash in the South China Sea or near Taiwan — prompts ever more intense fighting between opposing air and naval forces. Imagine then the JADC2 ordering the intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on U.S. facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.

The possibility that nightmare scenarios of this sort could result in the accidental or unintended onset of nuclear war has long troubled analysts in the arms control community. But the growing automation of military C2 systems has generated anxiety not just among them but among senior national security officials as well.

As early as 2019, when I questioned Lieutenant General Jack Shanahan, then director of the Pentagon’s Joint Artificial Intelligence Center, about such a risky possibility, he responded, “You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control.” This “is the ultimate human decision that needs to be made” and so “we have to be very careful.” Given the technology’s “immaturity,” he added, we need “a lot of time to test and evaluate [before applying AI to NC3].”

In the years since, despite such warnings, the Pentagon has been racing ahead with the development of automated C2 systems. In its budget submission for 2024, the Department of Defense requested $1.4 billion for the JADC2 in order “to transform warfighting capability by delivering information advantage at the speed of relevance across all domains and partners.” Uh-oh! And then, it requested another $1.8 billion for other kinds of military-related AI research.

Pentagon officials acknowledge that it will be some time before robot generals will be commanding vast numbers of U.S. troops (and autonomous weapons) in battle, but they have already launched several projects intended to test and perfect just such linkages. One example is the Army’s Project Convergence, involving a series of field exercises designed to validate ABMS and JADC2 component systems. In a test held in August 2020 at the Yuma Proving Ground in Arizona, for example, the Army used a variety of air- and ground-based sensors to track simulated enemy forces and then process that data using AI-enabled computers at Joint Base Lewis McChord in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. “This entire sequence was supposedly accomplished within 20 seconds,” the Congressional Research Service later reported.

Less is known about the Navy’s AI equivalent, “Project Overmatch,” as many aspects of its programming have been kept secret. According to Admiral Michael Gilday, chief of naval operations, Overmatch is intended “to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain.” Little else has been revealed about the project.

“Flash Wars” and Human Extinction

Despite all the secrecy surrounding these projects, you can think of ABMS, JADC2, Convergence, and Overmatch as building blocks for a future Skynet-like mega-network of super-computers designed to command all U.S. forces, including its nuclear ones, in armed combat. The more the Pentagon moves in that direction, the closer we’ll come to a time when AI possesses life-or-death power over all American soldiers along with opposing forces and any civilians caught in the crossfire.

Such a prospect should be ample cause for concern. To start with, consider the risk of errors and miscalculations by the algorithms at the heart of such systems. As top computer scientists have warned us, those algorithms are capable of remarkably inexplicable mistakes and, to use the AI term of the moment, “hallucinations” — that is, seemingly reasonable results that are entirely illusory. Under the circumstances, it’s not hard to imagine such computers “hallucinating” an imminent enemy attack and launching a war that might otherwise have been avoided.

And that’s not the worst of the dangers to consider. After all, there’s the obvious likelihood that America’s adversaries will similarly equip their forces with robot generals. In other words, future wars are likely to be fought by one set of AI systems against another, both linked to nuclear weaponry, with entirely unpredictable — but potentially catastrophic — results.

Not much is known (from public sources at least) about Russian and Chinese efforts to automate their military command-and-control systems, but both countries are thought to be developing networks comparable to the Pentagon’s JADC2. As early as 2014, in fact, Russia inaugurated a National Defense Control Center (NDCC) in Moscow, a centralized command post for assessing global threats and initiating whatever military action is deemed necessary, whether of a non-nuclear or nuclear nature. Like JADC2, the NDCC is designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.

China is said to be pursuing an even more elaborate, if similar, enterprise under the rubric of “Multi-Domain Precision Warfare” (MDPW). According to the Pentagon’s 2022 report on Chinese military developments, its military, the People’s Liberation Army, is being trained and equipped to use AI-enabled sensors and computer networks to “rapidly identify key vulnerabilities in the U.S. operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.”

Picture, then, a future war between the U.S. and Russia or China (or both) in which the JADC2 commands all U.S. forces, while Russia’s NDCC and China’s MDPW command those countries’ forces. Consider, as well, that all three systems are likely to experience errors and hallucinations. How safe will humans be when robot generals decide that it’s time to “win” the war by nuking their enemies?

If this strikes you as an outlandish scenario, think again, at least according to the leadership of the National Security Commission on Artificial Intelligence, a congressionally mandated enterprise that was chaired by Eric Schmidt, former head of Google, and Robert Work, former deputy secretary of defense. “While the Commission believes that properly designed, tested, and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit, the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” it affirmed in its Final Report. Such dangers could arise, it stated, “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems on the battlefield” — when, that is, AI fights AI.

Though this may seem an extreme scenario, it’s entirely possible that opposing AI systems could trigger a catastrophic “flash war” — the military equivalent of a “flash crash” on Wall Street, when huge transactions by super-sophisticated trading algorithms spark panic selling before human operators can restore order. In the infamous “Flash Crash” of May 6, 2010, computer-driven trading precipitated a 10% fall in the stock market’s value. According to Paul Scharre of the Center for a New American Security, who first studied the phenomenon, “the military equivalent of such crises” on Wall Street would arise when the automated command systems of opposing forces “become trapped in a cascade of escalating engagements.” In such a situation, he noted, “autonomous weapons could lead to accidental death and destruction at catastrophic scales in an instant.”
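
For readers who want that "cascade of escalating engagements" spelled out, here is a toy model of the dynamic Scharre describes. The one-notch-higher response rule and the numbers are invented purely for illustration; no real command system is remotely this simple.

```python
# A toy model of a "flash war": two automated systems, each programmed to
# answer the other's last move one level higher, with no human pause built in.
# The rule and the numbers are illustrative inventions, nothing more.

def automated_response(opponent_level: int) -> int:
    # Hypothetical doctrine: always respond one notch above the opponent.
    return opponent_level + 1

level_a, level_b = 1, 0   # side A opens with a minor engagement
for exchange in range(1, 6):
    level_b = automated_response(level_a)
    level_a = automated_response(level_b)
    print(f"exchange {exchange}: A at level {level_a}, B at level {level_b}")

# Five machine-speed exchanges carry the toy scenario from a level-1 skirmish
# into double-digit escalation, far faster than any human operator could
# intervene, which is precisely Scharre's point.
```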

At present, there are virtually no measures in place to prevent a future catastrophe of this sort or even talks among the major powers to devise such measures. Yet, as the National Security Commission on Artificial Intelligence noted, such crisis-control measures are urgently needed to integrate “automated escalation tripwires” into such systems “that would prevent the automated escalation of conflict.” Otherwise, some catastrophic version of World War III seems all too possible. Given the dangerous immaturity of such technology and the reluctance of Beijing, Moscow, and Washington to impose any restraints on the weaponization of AI, the day when machines could choose to annihilate us might arrive far sooner than we imagine and the extinction of humanity could be the collateral damage of such a future war.

Via Tomdispatch.com

Who will Inherit the Earth? The Rise of AI and the Human Future (?) https://www.juancole.com/2023/05/inherit-earth-future.html Fri, 12 May 2023 04:02:16 +0000 https://www.juancole.com/?p=211926 ( Tomdispatch.com ) – After almost 79 years on this beleaguered planet, let me say one thing: this can’t end well. Really, it can’t. And no, I’m not talking about the most obvious issues ranging from the war in Ukraine to the climate disaster. What I have in mind is that latest, greatest human invention: artificial intelligence.

It doesn’t seem that complicated to me. As a once-upon-a-time historian, I’ve long thought about what, in these centuries, unartificial and — all too often — unartful intelligence has “accomplished” (and yes, I’d prefer to put that in quotation marks). But the minute I try to imagine what that seemingly ultimate creation AI, already a living abbreviation of itself, might do, it makes me shiver. Brrr…

Let me start with honesty, which isn’t an artificial feeling at all. What I know about AI you could put in a trash bag and throw out with the garbage. Yes, I’ve recently read whatever I could in the media about it and friends of mine have already fiddled with it. TomDispatch regular William Astore, for instance, got ChatGPT to write a perfectly passable “critical essay” on the military-industrial complex for his Bracing Views newsletter — and that, I must admit, was kind of amazing.

Still, it’s not for me. Never me. I hate to say never because we humans truly don’t know what we’ll do in the future. Still, consider it my best guess that I won’t have anything actively to do with AI. (Although my admittedly less than artificially intelligent spellcheck system promptly changed “chatbox” to “hatbox” when I was emailing Astore to ask him for the URL to that piece of his.)

But let’s stop here a minute. Before we even get to AI, let’s think a little about LTAI (Less Than Artificial Intelligence, just in case you don’t know the acronym) on this planet. Who could deny that it’s had some remarkable successes? It created the Mona Lisa, Starry Night, and Diego and I. Need I say more? It’s figured out how to move us around this world in style and even into outer space. It’s built vast cities and great monuments, while creating cuisines beyond compare. I could, of course, go on. Who couldn’t? In certain ways, the creations of human intelligence should take anyone’s breath away. Sometimes, they even seem to give “miracle” a genuine meaning.

And yet, from the dawn of time, that same LTAI went in far grimmer directions, too. It invented weaponry of every kind, from the spear and the bow and arrow to artillery and jet fighter planes. It created the AR-15 semiautomatic rifle, now largely responsible (along with so many disturbed individual LTAIs) for our seemingly never-ending mass killings, a singular phenomenon in this “peacetime” country of ours.

And we’re talking, of course, about the same Less Than Artificial Intelligence that created the Holocaust, Joseph Stalin’s Russian gulag, segregation and lynch mobs in the United States, and so many other monstrosities of (in)human history. Above all, we’re talking about the LTAI that turned much of our history into a tale of war and slaughter beyond compare, something that, no matter how “advanced” we became, has never — as the brutal, deeply destructive conflict in Ukraine suggests — shown the slightest sign of cessation. Although I haven’t seen figures on the subject, I suspect that there has hardly been a moment in our history when, somewhere on this planet (and often that somewhere would have to be pluralized), we humans weren’t killing each other in significant numbers.

And keep in mind that in none of the above have I even mentioned the horrors of societies regularly divided between and organized around the staggeringly wealthy and the all too poor. But enough, right? You get the idea.

Oops, I left one thing out in judging the creatures that have now created AI. In the last century or two, the “intelligence” that did all of the above also managed to come up with two different ways of potentially destroying this planet and more or less everything living on it. The first of them it created largely unknowingly. After all, the massive, never-ending burning of fossil fuels that began with the nineteenth-century industrialization of much of the planet was what led to an increasingly climate-changed Earth. Though we’ve now known what we were doing for decades (the scientists of one of the giant fossil-fuel companies first grasped what was happening in the 1970s), that hasn’t stopped us. Not by a long shot. Not yet anyway.

Over the decades to come, if not taken in hand, the climate emergency could devastate this planet that houses humanity and so many other creatures. It’s a potentially world-ending phenomenon (at least for a habitable planet as we’ve known it). And yet, at this very moment, the two greatest greenhouse gas emitters, the United States and China (that country now being in the lead, but the U.S. remaining historically number one), have proven incapable of developing a cooperative relationship to save us from an all-too-literal hell on Earth. Instead, they’ve continued to arm themselves to the teeth and face off in a threatening fashion while their leaders are now not exchanging a word, no less consulting on the overheating of the planet.

The second path to hell created by humanity was, of course, nuclear weaponry, used only twice to devastating effect in August 1945 on the Japanese cities of Hiroshima and Nagasaki. Still, even relatively small numbers of weapons from the vast nuclear arsenals now housed on Planet Earth would be capable of creating a nuclear winter that could potentially wipe out much of humanity.

And mind you, knowing that, LTAI beings continue to create ever larger stockpiles of just such weaponry as ever more countries — the latest being North Korea — come to possess them. Under the circumstances and given the threat that the Ukraine War could go nuclear, it’s hard not to think that it might just be a matter of time. In the decades to come, the government of my own country is, not atypically, planning to put another $2 trillion into ever more advanced forms of such weaponry and ways of delivering them.

Entering the AI Era

Given such a history, you’d be forgiven for imagining that it might be a glorious thing for artificial intelligence to begin taking over from the intelligence responsible for so many dangers, some of them of the ultimate variety. And I have no doubt that, like its ancestor (us), AI will indeed prove anything but one-sided. It will undoubtedly produce wonders in forms that may as yet be unimaginable.

Still, let’s not forget that AI was created by those of us with LTAI. If now left to its own devices (with, of course, a helping hand from the powers that be), it seems reasonable to assume that it will, in some way, essentially repeat the human experience. In fact, consider that a guarantee of sorts. That means it will create beauty and wonder and — yes! — horror beyond compare (and perhaps even more efficiently so). Lest you doubt that, just consider which part of humanity already seems the most intent on pushing artificial intelligence to its limits.

Yes, across the planet, departments of “defense” are pouring money into AI research and development, especially the creation of unmanned autonomous vehicles (think: killer robots) and weapons systems of various kinds, as Michael Klare pointed out recently at TomDispatch when it comes to the Pentagon. In fact, it shouldn’t shock you to know that five years ago (yes, five whole years!), the Pentagon was significantly ahead of the game in creating a Joint Artificial Intelligence Center to, as the New York Times put it, “explore the use of artificial intelligence in combat.” There, it might, in the end — and “end” is certainly an operative word here — speed up battlefield action in such a way that we could truly be entering unknown territory. We could, in fact, be entering a realm in which human intelligence in wartime decision-making becomes, at best, a sideline activity.

Only recently, AI creators, tech leaders, and key potential users, more than 1,000 of them, including Apple co-founder Steve Wozniak and billionaire Elon Musk, grew anxious enough about what such a thing — such a brain, you might say — let loose on this planet might do that they called for a six-month moratorium on its development. They feared “profound risks to society and humanity” from AI and wondered whether we should even be developing “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us.”

The Pentagon, however, instantly responded to that call this way, as David Sanger reported in the New York Times: “Pentagon officials, speaking at technology forums, said they thought the idea of a six-month pause in developing the next generations of ChatGPT and similar software was a bad idea: The Chinese won’t wait, and neither will the Russians.” So, full-speed ahead and skip any international attempts to slow down or control the development of the most devastating aspects of AI!

And I haven’t even bothered to mention how, in a world already seemingly filled to the brim with mis- and disinformation and wild conspiracy theories, AI is likely to be used to create yet more of the same of every imaginable sort, a staggering variety of “hallucinations,” not to speak of churning out everything from remarkable new versions of art to student test papers. I mean, do I really need to mention anything more than those recent all-too-realistic-looking “photos of Donald Trump being aggressively arrested by the NYPD and Pope Francis sporting a luxurious Balenciaga puffy coat circulating widely online”?

I doubt it. After all, image-based AI technology, including striking fake art, is on the rise in a significant fashion and, soon enough, you may not be able to detect whether the images you see are “real” or “fake.” The only way you’ll know, as Meghan Bartels reports in Scientific American, could be thanks to AI systems trained to detect — yes! — artificial images. In the process, of course, all of us will, in some fashion, be left out of the picture.

On the Future, Artificially Speaking

And of course, that’s almost the good news when, with our present all-too-Trumpian world in mind, you begin to think about how Artificial Intelligence might make political and social fools of us all. Given that I’m anything but one of the better-informed people when it comes to AI (though on Less Than Artificial Intelligence I would claim to know a fair amount more), I’m relieved not to be alone in my fears.

In fact, among those who have spoken out fearfully on the subject is the man known as “the godfather of AI,” Geoffrey Hinton, a pioneer in the field of artificial intelligence. He only recently quit his job at Google to express his fears about where we might indeed be heading, artificially speaking. As he told the New York Times recently, “The idea that this stuff could actually get smarter than people — a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Now, he fears not just the coming of killer robots beyond human control but, as he told Geoff Bennett of the PBS NewsHour, “the risk of super intelligent AI taking over control from people… I think it’s an area in which we can actually have international collaboration, because the machines taking over is a threat for everybody. It’s a threat for the Chinese and for the Americans and for the Europeans, just like a global nuclear war was.”

And that, indeed, is a hopeful thought, just not one that fits our present world of hot war in Europe, cold war in the Pacific, and division globally.

I, of course, have no way of knowing whether Less Than Artificial Intelligence of the sort I’ve lived with all my life will indeed be sunk by the AI carrier fleet or whether, for that matter, humanity will leave AI in the dust by, in some fashion, devastating this planet all on our own. But I must admit that AI, whatever its positives, looks like anything but what the world needs right now to save us from a hell on earth. I hope for the best and fear the worst as I prepare to make my way into a future that I have no doubt is beyond my imagining.

Via Tomdispatch.com

Weaponizing ChatGPT? The Pentagon Girds for Mid-Century Wars https://www.juancole.com/2023/04/weaponizing-chatgpt-pentagon.html Mon, 17 Apr 2023 04:02:37 +0000 https://www.juancole.com/?p=211396 ( Tomdispatch.com ) – Why is the Pentagon budget so high?

On March 13th, the Biden administration unveiled its $842 billion military budget request for 2024, the largest ask (in today’s dollars) since the peaks of the Afghan and Iraq wars. And mind you, that’s before the hawks in Congress get their hands on it. Last year, they added $35 billion to the administration’s request and, this year, their add-on is likely to prove at least that big. Given that American forces aren’t even officially at war right now (if you don’t count those engaged in counter-terror operations in Africa and elsewhere), what explains so much military spending?

The answer offered by senior Pentagon officials and echoed in mainstream Washington media coverage is that this country faces a growing risk of war with Russia or China (or both of them at once) and that the lesson of the ongoing conflict in Ukraine is the need to stockpile vast numbers of bombs, missiles, and other munitions. “Pentagon, Juggling Russia, China, Seeks Billions for Long-Range Weapons” was a typical headline in the Washington Post about that 2024 budget request. Military leaders are overwhelmingly focused on a potential future conflict with either or both of those powers and are convinced that a lot more money should be spent now to prepare for such an outcome, which means buying extra tanks, ships, and planes, along with all the bombs, shells, and missiles they carry.

Even a quick look at the briefing materials for that future budget confirms such an assessment. Many of the billions of dollars being tacked onto it are intended to procure exactly the items you would expect to use in a war with those powers in the late 2020s or 2030s. Aside from personnel costs and operating expenses, the largest share of the proposed budget — $170 billion or 20% — is allocated for purchasing just such hardware.

But while preparations for such wars in the near future drive a significant part of that increase, a surprising share of it — $145 billion, or 17% — is aimed at possible conflicts in the 2040s and 2050s. Believing that our “strategic competition” with China is likely to persist for decades to come and that a conflict with that country could erupt at any moment along that future trajectory, the Pentagon is requesting its largest allocation ever for what’s called “research, development, test, and evaluation” (RDT&E), or the process of converting the latest scientific discoveries into weapons of war.

To put this in perspective, that $145 billion is more than any other country except China spends on its entire military, and it constitutes approximately half of China’s full military budget. So what’s that staggering sum of money, itself only a modest part of this country’s military budget, intended for?
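
As a quick sanity check, the percentages quoted above are consistent with the $842 billion topline, and the "half of China's budget" phrasing lets you back out the Chinese spending figure the comparison implies. A minimal sketch, using only numbers that appear in this article:

```python
# Consistency check on the budget figures cited in this piece; every input
# below comes from the text itself, not from outside sources.

total_request_bn = 842   # FY2024 Pentagon request, in billions of dollars
procurement_bn = 170     # hardware purchases ("20%")
rdte_bn = 145            # research, development, test & evaluation ("17%")

print(f"procurement share: {procurement_bn / total_request_bn:.1%}")  # ~20.2%
print(f"RDT&E share: {rdte_bn / total_request_bn:.1%}")               # ~17.2%

# "Approximately half of China's full military budget" implies a Chinese budget
# on the order of 2 x $145 billion, an inference from the article's own wording,
# not an independent estimate.
print(f"implied Chinese military budget: ~${2 * rdte_bn} billion")
```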

Some of it, especially the “T&E” part, is designed for futuristic upgrades of existing weapons systems. For example, the B-52 bomber — at 70, the oldest model still flying — is being retrofitted to carry experimental AGM-183A Air-Launched Rapid Response Weapons (ARRWs), or advanced hypersonic missiles. But much of that sum, especially the “R&D” part, is aimed at developing weapons that may not see battlefield use until decades in the future, if ever. Spending on such systems is still only in the millions or low billions, but it will certainly balloon into the tens or hundreds of billions of dollars in the years to come, ensuring that future Pentagon budgets soar into the trillions.

Weaponizing Emerging Technologies

Driving the Pentagon’s increased focus on future weapons development is the assumption that China and Russia will remain major adversaries for decades to come and that future wars with those, or other major powers, could largely be decided by the mastery of artificial intelligence (AI) along with other emerging technologies. Those would include robotics, hypersonics (projectiles that fly at more than five times the speed of sound), and quantum computing. As the Pentagon’s 2024 budget request put it:

“An increasing array of fast-evolving technologies and innovative applications of existing technology complicates the [Defense] Department’s ability to maintain an edge in combat credibility and deterrence. Newer capabilities such as counterspace weapons, hypersonic weapons, new and emerging payload and delivery systems… all create a heightened potential… for shifts in perceived deterrence of U.S. military power.”

To ensure that this country can overpower Chinese and/or Russian forces in any conceivable encounter, top officials insist, Washington must focus on investing in a major way in the advanced technologies likely to dominate future battlefields. Accordingly, $17.8 billion of that $145 billion RDT&E budget will be directly dedicated to military-related science and technology development. Those funds, the Pentagon explains, will be used to accelerate the weaponization of artificial intelligence and speed the growth of other emerging technologies, especially robotics, autonomous (or “unmanned”) weapons systems, and hypersonic missiles.

Artificial intelligence (AI) is of particular interest to the Department of Defense, given its wide range of potential military uses, including target identification and assessment, enhanced weapons navigation and targeting systems, and computer-assisted battlefield decision-making. Although there’s no total figure for AI research and development offered in the unclassified version of the 2024 budget, certain individual programs are highlighted. One of these is the Joint All-Domain Command-and-Control system (JADC2), an AI-enabled matrix of sensors, computers, and communications devices intended to collect and process data on enemy movements and convey that information at lightning speed to combat forces in every “domain” (air, sea, ground, and space). At $1.3 billion, JADC2 may not be “the biggest number in the budget,” said Under Secretary of Defense Michael J. McCord, but it constitutes “a very central organizing concept of how we’re trying to link information together.”

AI is also essential for the development of — and yes, nothing seems to lack an acronym in Pentagon documents — autonomous weapons systems, or unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), and unmanned surface vessels (USVs). Such devices — far more bluntly called “killer robots” by their critics — typically combine a mobile platform of some sort (plane, tank, or ship), an onboard “kill mechanism” (gun or missile), and an ability to identify and attack targets with minimal human oversight. Believing that the future battlefield will become ever more lethal, Pentagon officials aim to replace as many of its crewed platforms as possible — think ships, planes, and artillery — with advanced UAVs, UGVs, and USVs.

The 2024 budget request doesn’t include a total dollar figure for research on future unmanned weapons systems but count on one thing: it will come to many billions of dollars. The budget does indicate that $2.2 billion is being sought for the early procurement of MQ-4 and MQ-25 unmanned aerial vehicles, and such figures are guaranteed to swell as experimental robotic systems move into large-scale production. Another $200 million was requested to design a large USV, essentially a crewless frigate or destroyer. Once prototype vessels of this type have been built and tested, the Navy plans to order dozens, perhaps hundreds of them, instantly creating a $100 billion-plus market for a naval force lacking the usual human crew.

Another area receiving extensive Pentagon attention is hypersonics, because such projectiles will fly so fast and maneuver with such skill (while skimming atop the atmosphere’s outer layer) that they should be essentially impossible to track and intercept. Both China and Russia already possess rudimentary weapons of this type, with Russia reportedly firing some of its hypersonic Kinzhal missiles into Ukraine in recent months.

As the Pentagon put it in its budget request:

“Hypersonic systems expand our ability to hold distant targets at risk, dramatically shorten the timeline to strike a target, and their maneuverability increases survivability and unpredictability. The Department will accelerate fielding of transformational capability enabled by air, land, and sea-based hypersonic strike weapon systems to overcome the challenges to our future battlefield domain dominance.”

Another 14% of that $17.8 billion science and technology request, or about $2.5 billion, is earmarked for research in even more experimental fields like quantum computing and advanced microelectronics. “The Department’s science and technology investments are underpinned by early-stage basic research,” the Pentagon explains. “Payoff for this research may not be evident for years, but it is critical to ensuring our enduring technological advantage in the decades ahead.” As in the case of AI, autonomous weapons, and hypersonics, these relatively small amounts (by Pentagon standards) will balloon in the years ahead as initial discoveries are applied to functioning weapons systems and procured in ever larger quantities.

Harnessing American Tech Talent for Long-Term War Planning

There’s one consequence of such an investment in RDT&E that’s almost too obvious to mention. If you think the Pentagon budget is sky high now, just wait! Future spending, as today’s laboratory concepts are converted into actual combat systems, is likely to stagger the imagination. And that’s just one of the significant consequences of such a path to permanent military superiority. To ensure that the United States continues to dominate research in the emerging technologies most applicable to future weaponry, the Pentagon will seek to harness an ever-increasing share of this country’s scientific and technological resources for military-oriented work.

This, in turn, will mean capturing an ever-larger part of the government’s net R&D budget at the expense of other national priorities. In 2022, for example, federal funding for non-military R&D (including the National Science Foundation, the National Institutes of Health, and the National Oceanic and Atmospheric Administration) represented only about 33% of federal R&D spending. If the 2024 military budget goes through at the level requested (or higher), that figure for non-military spending will drop to 31%, a trend only likely to strengthen in the future as more and more resources are devoted to war preparation, leaving an ever-diminishing share of taxpayer funding for research on vital concerns like cancer prevention and treatment, pandemic response, and climate change adaptation.

No less worrisome, ever more scientists and engineers will undoubtedly be encouraged — not to say, prodded — to devote their careers to military research rather than work in more peaceable fields. While many scientists struggle for grants to support their work, the Department of Defense (DoD) offers bundles of money to those who choose to study military-related topics. Typically enough, the 2024 request includes $347 million for what the military is now calling the University Research Initiative, most of which will be used to finance the formation of “teams of researchers across disciplines and across geographic boundaries to focus on DoD-specific hard science problems.” Another $200 million is being allocated to the Joint University Microelectronics Program by the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s R&D outfit, while $100 million is being provided to the University Consortium for Applied Hypersonics by the Pentagon’s Joint Hypersonics Transition Office. With so much money flowing into such programs and the share devoted to other fields of study shrinking, it’s hardly surprising that scientists and graduate students at major universities are being drawn into the Pentagon’s research networks.

In fact, the Pentagon is also seeking to expand its talent pool by providing additional funding to historically Black colleges and universities (HBCUs). In January, for example, Secretary of Defense Lloyd Austin announced that Howard University in Washington, D.C., had been chosen as the first such school to serve as a university-affiliated research center by the Department of Defense, in which capacity it will soon be involved in work on autonomous weapons systems. This will, of course, provide badly needed money to scientists and engineers at that school and other HBCUs that may have been starved of such funding in the past. But it also raises the question: Why shouldn’t Howard receive similar amounts to study problems of greater relevance to the Black community like sickle-cell anemia and endemic poverty?

Endless Arms Races vs. Genuine Security

In devoting all those billions of dollars to research on next-generation weaponry, the Pentagon’s rationale is straightforward: spend now to ensure U.S. military superiority in the 2040s, 2050s, and beyond. But however persuasive this conceit may seem — even with all those mammoth sums of money pouring in — things rarely work out so neatly. Any major investment of this sort by one country is bound to trigger countermoves from its rivals, ensuring that any early technological advantage will soon be overcome in some fashion, even as the planet is turned into ever more of an armed camp.

The Pentagon’s development of precision-guided munitions, for example, provided American forces with an enormous military advantage during the Persian Gulf Wars of 1991 and 2003, but also prompted China, Iran, Russia, and other countries to begin developing similar weaponry, quickly diminishing that advantage. Likewise, China and Russia were the first to deploy combat-ready hypersonic weapons, but in response, the U.S. will be fielding a far greater array of them in a few years’ time.

Chinese and Russian advances in deploying hypersonics also led the U.S. to invest in developing — yes, you guessed it! — anti-hypersonic hypersonics, launching yet one more arms race on planet Earth, while boosting the Pentagon budget by additional billions. Given all this, I’m sure you won’t be surprised to learn that the 2024 Pentagon budget request includes $209 million for the development of a hypersonic interceptor, only the first installment in costly development and procurement programs in the years to come in Washington, Beijing, and Moscow.

If you want to bet on anything, then here’s a surefire way to go: the Pentagon’s drive to achieve dominance in the development and deployment of advanced weaponry will lead not to supremacy but to another endless cycle of high-tech arms races that, in turn, will consume an ever-increasing share of this country’s wealth and scientific talent, while providing negligible improvements in national security. Rather than spending so much on future weaponry, we should all be thinking about enhanced arms control measures, global climate cooperation, and greater investment in non-military R&D.

If only…

Via Tomdispatch.com
