Informed Comment – Thoughts on the Middle East, History and Religion (https://www.juancole.com)

Gaza War: Artificial Intelligence is radically changing Targeting Speeds and Scale of Civilian Harm
https://www.juancole.com/2024/04/artificial-intelligence-radically.html
Wed, 24 Apr 2024 04:06:29 +0000

By Lauren Gould, Utrecht University; Linde Arentze, NIOD Institute for War, Holocaust and Genocide Studies; and Marijn Hoijtink, University of Antwerp | –

(The Conversation) – As Israel’s air campaign in Gaza enters its sixth month after Hamas’s terrorist attacks on October 7, experts have described it as one of the deadliest and most relentless campaigns in recent history. It is also one of the first to be coordinated, in part, by algorithms.

Artificial intelligence (AI) is being used to assist with everything from identifying and prioritising targets to assigning the weapons to be used against those targets.

Academic commentators have long focused on the potential of algorithms in war, highlighting how they will increase the speed and scale of fighting. But as recent revelations show, algorithms are now being employed at large scale in densely populated urban contexts.

This is happening not only in the conflicts in Gaza and Ukraine, but also in Yemen, Iraq and Syria, where the US has been experimenting with algorithms to target potential terrorists through Project Maven.

Amid this acceleration, it is crucial to take a careful look at what the use of AI in warfare actually means. It is important to do so not from the perspective of those in power, but from that of the officers executing it and of the civilians enduring its violent effects in Gaza.

This focus highlights the limits of keeping a human in the loop as a failsafe and central response to the use of AI in war. As AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes and the scale of civilian harm increases.

Speed of targeting

Reports by the Israeli publications +972 Magazine and Local Call give us a glimpse into the experience of 13 Israeli officials working with three AI-enabled decision-making systems in Gaza called “Gospel”, “Lavender” and “Where’s Daddy?”.

These systems are reportedly trained to recognise features that are believed to characterise people associated with the military arm of Hamas. These features include membership of the same WhatsApp group as a known militant, changing cell phones every few months, or changing addresses frequently.

The systems are then supposedly tasked with analysing data collected on Gaza’s 2.3 million residents through mass surveillance. Based on the predetermined features, the systems predict the likelihood that a person is a member of Hamas (Lavender), that a building houses such a person (Gospel), or that such a person has entered their home (Where’s Daddy?).
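To make the reported logic concrete: what is described here is, in general terms, a classifier that assigns each person a score built from weighted surveillance-derived features and flags anyone above a threshold. The sketch below is a deliberately simplified illustration of that generic pattern only; the feature names, weights and threshold are hypothetical assumptions, not details of any actual IDF system.

```python
# Illustrative sketch of generic feature-based risk scoring.
# NOT the actual Lavender model: all feature names, weights and the
# threshold below are invented for illustration.

def risk_score(person: dict, weights: dict) -> float:
    """Sum the weights of whichever features a person is recorded as exhibiting."""
    return sum(w for feature, w in weights.items() if person.get(feature, False))

# Hypothetical weights loosely echoing the features named in the reports.
weights = {
    "shared_whatsapp_group_with_militant": 0.4,
    "frequent_phone_changes": 0.3,
    "frequent_address_changes": 0.3,
}

person = {"frequent_phone_changes": True, "frequent_address_changes": True}

score = risk_score(person, weights)
flagged = score >= 0.5  # arbitrary threshold, for illustration only
print(score, flagged)
```

Note how ordinary survival behaviour in a war zone (changing phones, changing addresses) pushes the score upward in a scheme like this, which is precisely the danger the reports describe.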

In the investigative reports named above, intelligence officers explained how Gospel helped them go “from 50 targets per year” to “100 targets in one day” – and that, at its peak, Lavender managed to “generate 37,000 people as potential human targets”. They also reflected on how using AI cuts down deliberation time: “I would invest 20 seconds for each target at this stage … I had zero added value as a human … it saved a lot of time.”

They justified this lack of human oversight in light of a manual check the Israel Defense Forces (IDF) ran on a sample of several hundred targets generated by Lavender in the first weeks of the Gaza conflict, through which a 90% accuracy rate was reportedly established. While details of this manual check are likely to remain classified, a 10% inaccuracy rate for a system used to make 37,000 life-and-death decisions will inherently result in devastatingly destructive realities.
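Simple arithmetic puts that reported figure in perspective, assuming (as a simplification) that a 90% accuracy rate means one in ten flagged people is misidentified:

```python
# Back-of-the-envelope arithmetic on the reported numbers: a 90% accuracy
# rate applied to 37,000 machine-generated targets. This assumes "90%
# accurate" means 10% of flagged people are misidentified, which is
# itself a simplification of whatever the IDF's manual check measured.
targets = 37_000
error_rate = 0.10

misidentified = int(targets * error_rate)
print(misidentified)  # → 3700
```

Under that assumption, roughly 3,700 people would have been wrongly marked as potential targets.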


“Lavender III,” Digital Imagining, Dream, Dreamland v. 3, 2024

But importantly, any accuracy rate number that sounds reasonably high makes it more likely that algorithmic targeting will be relied on as it allows trust to be delegated to the AI system. As one IDF officer told +972 Magazine: “Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know that statistically it’s fine. So you go for it.”

The IDF denied these revelations in an official statement to The Guardian. A spokesperson said that while the IDF does use “information management tools […] in order to help intelligence analysts to gather and optimally analyse the intelligence, obtained from a variety of sources, it does not use an AI system that identifies terrorist operatives”.

The Guardian has since, however, published a video of a senior official of the Israeli elite intelligence Unit 8200 talking last year about the use of machine learning “magic powder” to help identify Hamas targets in Gaza. The newspaper has also confirmed that the commander of the same unit wrote in 2021, under a pseudonym, that such AI technologies would resolve the “human bottleneck for both locating the new targets and decision-making to approve the targets”.

Scale of civilian harm

AI accelerates the speed of warfare in terms of the number of targets produced and the time to decide on them. While these systems inherently decrease the ability of humans to control the validity of computer-generated targets, they simultaneously make these decisions appear more objective and statistically correct due to the value that we generally ascribe to computer-based systems and their outcome.

This allows for the further normalisation of machine-directed killing, amounting to more violence, not less.

While media reports often focus on the number of casualties, body counts – similar to computer-generated targets – have the tendency to present victims as objects that can be counted. This reinforces a very sterile image of war. It glosses over the reality of more than 34,000 people dead and some 76,000 injured, the destruction of or damage to 60% of Gaza’s buildings, the mass displacement, and the lack of access to electricity, food, water and medicine.

It fails to convey the horrific ways in which these things compound each other. For example, one civilian, Shorouk al-Rantisi, was reportedly found under the rubble after an airstrike on the Jabalia refugee camp. She had to wait 12 days to be operated on without painkillers, and now resides in another refugee camp with no running water to tend to her wounds.

Aside from increasing the speed of targeting and therefore exacerbating the predictable patterns of civilian harm in urban warfare, algorithmic warfare is likely to compound harm in new and under-researched ways. First, as civilians flee their destroyed homes, they frequently change addresses or give their phones to loved ones.

Such survival behaviour corresponds to what the reports on Lavender say the AI system has been programmed to identify as likely association with Hamas. These civilians thereby unknowingly make themselves suspects for lethal targeting.

Beyond targeting, these AI-enabled systems also inform additional forms of violence. An illustrative story is that of the fleeing poet Mosab Abu Toha, who was allegedly arrested and tortured at a military checkpoint. The New York Times ultimately reported that he, along with hundreds of other Palestinians, was wrongfully identified as Hamas through the IDF’s use of AI facial recognition and Google Photos.

Over and beyond the deaths, injuries and destruction, these are the compounding effects of algorithmic warfare. It becomes a psychic imprisonment where people know they are under constant surveillance, yet do not know which behavioural or physical “features” will be acted on by the machine.

From our work as analysts of the use of AI in warfare, it is apparent that our focus should not be solely on the technical prowess of AI systems or on the figure of the human-in-the-loop as a failsafe. We must also consider these systems’ ability to alter human-machine-human interactions, in which those executing algorithmic violence merely rubber-stamp the output generated by the AI system, and those undergoing the violence are dehumanised in unprecedented ways. The Conversation

Lauren Gould, Assistant Professor, Conflict Studies, Utrecht University; Linde Arentze, Researcher into AI and Remote Warfare, NIOD Institute for War, Holocaust and Genocide Studies, and Marijn Hoijtink, Associate Professor in International Relations, University of Antwerp

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A Brief History of Kill Lists, From Langley to Lavender
https://www.juancole.com/2024/04/history-langley-lavender.html
Wed, 17 Apr 2024 04:02:05 +0000

(Code Pink) – The Israeli online magazine +972 has published a detailed report on Israel’s use of an artificial intelligence (AI) system called “Lavender” to target thousands of Palestinian men in its bombing campaign in Gaza. When Israel attacked Gaza after October 7, the Lavender system had a database of 37,000 Palestinian men with suspected links to Hamas or Palestinian Islamic Jihad (PIJ).

Lavender assigns a numerical score, from one to a hundred, to every man in Gaza, based mainly on cellphone and social media data, and automatically adds those with high scores to its kill list of suspected militants. Israel uses another automated system, known as “Where’s Daddy?”, to call in airstrikes to kill these men and their families in their homes.

The report is based on interviews with six Israeli intelligence officers who have worked with these systems. As one of the officers explained to +972, by adding a name from a Lavender-generated list to the Where’s Daddy home tracking system, he can place the man’s home under constant drone surveillance, and an airstrike will be launched once he comes home.

The officers said the “collateral” killing of the men’s extended families was of little consequence to Israel. “Let’s say you calculate [that there is one] Hamas [operative] plus 10 [civilians in the house],” one officer said. “Usually, these 10 will be women and children. So absurdly, it turns out that most of the people you killed were women and children.”

The officers explained that the decision to target thousands of these men in their homes is just a question of expediency. It is simply easier to wait for them to come home to the address on file in the system, and then bomb that house or apartment building, than to search for them in the chaos of the war-torn Gaza Strip.

The officers who spoke to +972 explained that in previous Israeli massacres in Gaza, they could not generate targets quickly enough to satisfy their political and military bosses, and so these AI systems were designed to solve that problem for them. The speed with which Lavender can generate new targets only gives its human minders an average of 20 seconds to review and rubber-stamp each name, even though they know from tests of the Lavender system that at least 10% of the men chosen for assassination and familicide have only an insignificant or a mistaken connection with Hamas or PIJ.

The Lavender AI system is a new weapon, developed by Israel. But the kind of kill lists that it generates have a long pedigree in U.S. wars, occupations and CIA regime change operations. Since the birth of the CIA after the Second World War, the technology used to create kill lists has evolved from the CIA’s earliest coups in Iran and Guatemala, to Indonesia and the Phoenix program in Vietnam in the 1960s, to Latin America in the 1970s and 1980s and to the U.S. occupations of Iraq and Afghanistan.

Just as U.S. weapons development aims to be at the cutting edge, or the killing edge, of new technology, the CIA and U.S. military intelligence have always tried to use the latest data processing technology to identify and kill their enemies.

The CIA learned some of these methods from German intelligence officers captured at the end of the Second World War. Many of the names on Nazi kill lists were generated by an intelligence unit called Fremde Heere Ost (Foreign Armies East), under the command of Major General Reinhard Gehlen, Germany’s spy chief on the eastern front (see David Talbot, The Devil’s Chessboard, p. 268).

Gehlen and the FHO had no computers, but they did have access to four million Soviet POWs from all over the USSR, and no compunction about torturing them to learn the names of Jews and communist officials in their hometowns to compile kill lists for the Gestapo and Einsatzgruppen.

After the war, like the 1,600 German scientists spirited out of Germany in Operation Paperclip, the United States flew Gehlen and his senior staff to Fort Hunt in Virginia. They were welcomed by Allen Dulles, soon to become the first civilian, and still the longest-serving, director of the CIA. Dulles sent them back to Pullach in occupied Germany to resume their anti-Soviet operations as CIA agents. The Gehlen Organization formed the nucleus of what became the BND, the new West German intelligence service, with Reinhard Gehlen as its director until he retired in 1968.

After a CIA coup removed Iran’s popular, democratically elected prime minister Mohammad Mosaddegh in 1953, a CIA team led by U.S. Major General Norman Schwarzkopf trained a new intelligence service, known as SAVAK, in the use of kill lists and torture. SAVAK used these skills to purge Iran’s government and military of suspected communists and later to hunt down anyone who dared to oppose the Shah.

By 1975, Amnesty International estimated that Iran was holding between 25,000 and 100,000 political prisoners, and had “the highest rate of death penalties in the world, no valid system of civilian courts and a history of torture that is beyond belief.”

In Guatemala, a CIA coup in 1954 replaced the democratic government of Jacobo Arbenz Guzman with a brutal dictatorship. As resistance grew in the 1960s, U.S. special forces joined the Guatemalan army in a scorched earth campaign in Zacapa, which killed 15,000 people to defeat a few hundred armed rebels. Meanwhile, CIA-trained urban death squads abducted, tortured and killed PGT (Guatemalan Labor Party) members in Guatemala City, notably 28 prominent labor leaders who were abducted and disappeared in March 1966.

Once this first wave of resistance was suppressed, the CIA set up a new telecommunications center and intelligence agency, based in the presidential palace. It compiled a database of “subversives” across the country that included leaders of farming co-ops and labor, student and indigenous activists, to provide ever-growing lists for the death squads. The resulting civil war became a genocide against indigenous people in Ixil and the western highlands that killed or disappeared at least 200,000 people.

TRT World Video: “‘Lavender’: How Israel’s AI system is killing Palestinians in Gaza”

This pattern was repeated across the world, wherever popular, progressive leaders offered hope to their people in ways that challenged U.S. interests. As historian Gabriel Kolko wrote in 1988, “The irony of U.S. policy in the Third World is that, while it has always justified its larger objectives and efforts in the name of anticommunism, its own goals have made it unable to tolerate change from any quarter that impinged significantly on its own interests.”

When General Suharto seized power in Indonesia in 1965, the U.S. Embassy compiled a list of 5,000 communists for his death squads to hunt down and kill. The CIA estimated that they eventually killed 250,000 people, while other estimates run as high as a million.

Twenty-five years later, journalist Kathy Kadane investigated the U.S. role in the massacre in Indonesia, and spoke to Robert Martens, the political officer who led the State-CIA team that compiled the kill list. “It really was a big help to the army,” Martens told Kadane. “They probably killed a lot of people, and I probably have a lot of blood on my hands. But that’s not all bad – there’s a time when you have to strike hard at a decisive moment.”

Kathy Kadane also spoke to former CIA director William Colby, who was the head of the CIA’s Far East division in the 1960s. Colby compared the U.S. role in Indonesia to the Phoenix Program in Vietnam, which was launched two years later, claiming that they were both successful programs to identify and eliminate the organizational structure of America’s communist enemies. 

The Phoenix program was designed to uncover and dismantle the National Liberation Front’s (NLF) shadow government across South Vietnam. Phoenix’s Combined Intelligence Center in Saigon fed thousands of names into an IBM 1401 computer, along with their locations and their alleged roles in the NLF. The CIA credited the Phoenix program with killing 26,369 NLF officials, while another 55,000 were imprisoned or persuaded to defect. Seymour Hersh reviewed South Vietnamese government documents that put the death toll at 41,000.

How many of the dead were correctly identified as NLF officials may be impossible to know, but Americans who took part in Phoenix operations reported killing the wrong people in many cases. Navy SEAL Elton Manzione told author Douglas Valentine (The Phoenix Program) how he killed two young girls in a night raid on a village, and then sat down on a stack of ammunition crates with a hand grenade and an M-16, threatening to blow himself up, until he got a ticket home. 

“The whole aura of the Vietnam War was influenced by what went on in the ‘hunter-killer’ teams of Phoenix, Delta, etc.,” Manzione told Valentine. “That was the point at which many of us realized we were no longer the good guys in the white hats defending freedom – that we were assassins, pure and simple. That disillusionment carried over to all other aspects of the war and was eventually responsible for it becoming America’s most unpopular war.”

Even as the U.S. defeat in Vietnam and the “war fatigue” in the United States led to a more peaceful next decade, the CIA continued to engineer and support coups around the world, and to provide post-coup governments with increasingly computerized kill lists to consolidate their rule.

After supporting General Pinochet’s coup in Chile in 1973, the CIA played a central role in Operation Condor, an alliance between right-wing military governments in Argentina, Brazil, Chile, Uruguay, Paraguay and Bolivia, to hunt down tens of thousands of their and each other’s political opponents and dissidents, killing and disappearing at least 60,000 people.

The CIA’s role in Operation Condor is still shrouded in secrecy, but Patrice McSherry, a political scientist at Long Island University, has investigated the U.S. role and concluded, “Operation Condor also had the covert support of the US government. Washington provided Condor with military intelligence and training, financial assistance, advanced computers, sophisticated tracking technology, and access to the continental telecommunications system housed in the Panama Canal Zone.”

McSherry’s research revealed how the CIA supported the intelligence services of the Condor states with computerized links, a telex system, and purpose-built encoding and decoding machines made by the CIA Logistics Department. As she wrote in her book, Predatory States: Operation Condor and Covert War in Latin America:    

“The Condor system’s secure communications system, Condortel,… allowed Condor operations centers in member countries to communicate with one another and with the parent station in a U.S. facility in the Panama Canal Zone. This link to the U.S. military-intelligence complex in Panama is a key piece of evidence regarding secret U.S. sponsorship of Condor…”

Operation Condor ultimately failed, but the U.S. provided similar support and training to right-wing governments in Colombia and Central America throughout the 1980s in what senior military officers have called a “quiet, disguised, media-free approach” to repression and kill lists.

The U.S. School of the Americas (SOA) trained thousands of Latin American officers in the use of torture and death squads, as Major Joseph Blair, the SOA’s former chief of instruction described to John Pilger for his film, The War You Don’t See:

“The doctrine that was taught was that, if you want information, you use physical abuse, false imprisonment, threats to family members, and killing. If you can’t get the information you want, if you can’t get the person to shut up or stop what they’re doing, you assassinate them – and you assassinate them with one of your death squads.”

When the same methods were transferred to the U.S. hostile military occupation of Iraq after 2003, Newsweek headlined it “The Salvador Option.” A U.S. officer explained to Newsweek that U.S. and Iraqi death squads were targeting Iraqi civilians as well as resistance fighters. “The Sunni population is paying no price for the support it is giving to the terrorists,” he said. “From their point of view, it is cost-free. We have to change that equation.”

The United States sent two veterans of its dirty wars in Latin America to Iraq to play key roles in that campaign. Colonel James Steele led the U.S. Military Advisor Group in El Salvador from 1984 to 1986, training and supervising Salvadoran forces who killed tens of thousands of civilians. He was also deeply involved in the Iran-Contra scandal, narrowly escaping a prison sentence for his role supervising shipments from Ilopango air base in El Salvador to the U.S.-backed Contras in Honduras and Nicaragua.

In Iraq, Steele oversaw the training of the Interior Ministry’s Special Police Commandos – rebranded as “National” and later “Federal” Police after the discovery of their al-Jadiriyah torture center and other atrocities.

Bayan al-Jabr, a commander in the Iranian-trained Badr Brigade militia, was appointed Interior Minister in 2005, and Badr militiamen were integrated into the Wolf Brigade death squad and other Special Police units. Jabr’s chief adviser was Steven Casteel, the former intelligence chief for the U.S. Drug Enforcement Agency (DEA) in Latin America.

The Interior Ministry death squads waged a dirty war in Baghdad and other cities, filling the Baghdad morgue with up to 1,800 corpses per month, while Casteel fed the western media absurd cover stories, such as that the death squads were all “insurgents” in stolen police uniforms. 

Meanwhile U.S. special operations forces conducted “kill-or-capture” night raids in search of Resistance leaders. General Stanley McChrystal, the commander of Joint Special Operations Command from 2003 to 2008, oversaw the development of a database system, used in Iraq and Afghanistan, that compiled cellphone numbers mined from captured cellphones to generate an ever-expanding target list for night raids and air strikes.

The targeting of cellphones instead of actual people enabled the automation of the targeting system, and explicitly excluded using human intelligence to confirm identities. Two senior U.S. commanders told the Washington Post that only half the night raids attacked the right house or person.

In Afghanistan, President Obama put McChrystal in charge of U.S. and NATO forces in 2009, and his cellphone-based “social network analysis” enabled an exponential increase in night raids, from 20 raids per month in May 2009 to up to 40 per night by April 2011.

As with the Lavender system in Gaza, this huge increase in targets was achieved by taking a system originally designed to identify and track a small number of senior enemy commanders and applying it to anyone suspected of having links with the Taliban, based on their cellphone data.

This led to the capture of an endless flood of innocent civilians, so that most civilian detainees had to be quickly released to make room for new ones. The increased killing of innocent civilians in night raids and airstrikes fueled already fierce resistance to the U.S. and NATO occupation and ultimately led to its defeat.

President Obama’s drone campaign to kill suspected enemies in Pakistan, Yemen and Somalia was just as indiscriminate, with reports suggesting that 90% of the people it killed in Pakistan were innocent civilians.

And yet Obama and his national security team kept meeting in the White House every “Terror Tuesday” to select who the drones would target that week, using an Orwellian, computerized “disposition matrix” to provide technological cover for their life and death decisions.   

Looking at this evolution of ever-more automated systems for killing and capturing enemies, we can see how, as the information technology used has advanced from telexes to cellphones and from early IBM computers to artificial intelligence, the human intelligence and sensibility that could spot mistakes, prioritize human life and prevent the killing of innocent civilians has been progressively marginalized and excluded, making these operations more brutal and horrifying than ever.

Nicolas has at least two good friends who survived the dirty wars in Latin America because someone who worked in the police or military got word to them that their names were on a death list, one in Argentina, the other in Guatemala. If their fates had been decided by an AI machine like Lavender, they would both be long dead.

As with supposed advances in other types of weapons technology, like drones and “precision” bombs and missiles, innovations that claim to make targeting more precise and eliminate human error have instead led to the automated mass murder of innocent people, especially women and children, bringing us full circle from one holocaust to the next.

Via Code Pink

Gaza Conflict: Israel using AI to identify Human Targets raises Fears Innocents are Targeted
https://www.juancole.com/2024/04/conflict-identify-innocents.html
Sat, 13 Apr 2024 04:06:51 +0000

By Elke Schwarz, Queen Mary University of London | –

A report by Jerusalem-based investigative journalists published in +972 magazine finds that AI targeting systems have played a key role in identifying – and potentially misidentifying – tens of thousands of targets in Gaza. This suggests that autonomous warfare is no longer a future scenario. It is already here and the consequences are horrifying.

There are two technologies in question. The first, “Lavender”, is an AI recommendation system designed to use algorithms to identify Hamas operatives as targets. The second, the grotesquely named “Where’s Daddy?”, is a system which tracks targets geographically so that they can be followed into their family residences before being attacked. Together, these two systems constitute an automation of the find-fix-track-target components of what is known by the modern military as the “kill chain”.

Systems such as Lavender are not autonomous weapons, but they do accelerate the kill chain and make the process of killing progressively more autonomous. AI targeting systems draw on data from computer sensors and other sources to statistically assess what constitutes a potential target. Vast amounts of this data are gathered by Israeli intelligence through surveillance on the 2.3 million inhabitants of Gaza.

Such systems are trained on a set of data to produce the profile of a Hamas operative. This could be data about gender, age, appearance, movement patterns, social network relationships, accessories, and other “relevant features”. They then work to match actual Palestinians to this profile by degree of fit. The category of what constitutes relevant features of a target can be set as stringently or as loosely as is desired. In the case of Lavender, it seems one of the key equations was “male equals militant”. This has echoes of the infamous “all military-aged males are potential targets” mandate of the 2010s US drone wars, in which the Obama administration identified and assassinated hundreds of people designated as enemies “based on metadata”.

What is different with AI in the mix is the speed with which targets can be algorithmically determined and the mandate of action this issues. The +972 report indicates that the use of this technology has led to the dispassionate annihilation of thousands of eligible – and ineligible – targets at speed and without much human oversight.


“Lavender 3,” digital, Dream/ Dreamworld v. 3, 2024.

The Israel Defense Forces (IDF) were swift to deny the use of AI targeting systems of this kind, and it is difficult to verify independently whether, and if so to what extent, they have been used, and how exactly they function. But the functionalities described in the report are entirely plausible, especially given the IDF’s own boast to be “one of the most technological organisations” and an early adopter of AI.

With military AI programs around the world striving to shorten what the US military calls the “sensor-to-shooter timeline” and “increase lethality” in their operations, why would an organisation such as the IDF not avail themselves of the latest technologies?

The fact is, systems such as Lavender and Where’s Daddy? are the manifestation of a broader trend which has been underway for a good decade and the IDF and its elite units are far from the only ones seeking to implement more AI-targeting systems into their processes.

When machines trump humans

Earlier this year, Bloomberg reported on the latest version of Project Maven, the US Department of Defense AI pathfinder programme, which has evolved from being a sensor data analysis programme in 2017 to a full-blown AI-enabled target recommendation system built for speed. As Bloomberg journalist Katrina Manson reports, the operator “can now sign off on as many as 80 targets in an hour of work, versus 30 without it”.

Manson quotes a US army officer tasked with learning the system describing the process of concurring with the algorithm’s conclusions, delivered in a rapid staccato: “Accept. Accept, Accept”. Evident here is how the human operator is deeply embedded in digital logics that are difficult to contest. This gives rise to a logic of speed and increased output that trumps all else.

The efficient production of death is reflected also in the +972 account, which indicated an enormous pressure to accelerate and increase the production of targets and the killing of these targets. As one of the sources says: “We were constantly being pressured: bring us more targets. They really shouted at us. We finished [killing] our targets very quickly”.

Built-in biases

Systems like Lavender raise many ethical questions pertaining to training data, biases, accuracy, error rates and, importantly, questions of automation bias. Automation bias cedes all authority, including moral authority, to the dispassionate interface of statistical processing.

Speed and lethality are the watchwords for military tech. But in prioritising AI, the scope for human agency is marginalised. The logic of the system requires this, owing to the comparatively slow cognitive systems of the human. It also removes the human sense of responsibility for computer-produced outcomes.

I’ve written elsewhere how this complicates notions of control (at all levels) in ways that we must take into consideration. When AI, machine learning and human reasoning form a tight ecosystem, the capacity for human control is limited. Humans have a tendency to trust whatever computers say, especially when they move too fast for us to follow.

The problem of speed and acceleration also produces a general sense of urgency, which privileges action over non-action. This turns categories such as “collateral damage” or “military necessity”, which should serve as a restraint to violence, into channels for producing more violence.

I am reminded of the military scholar Christopher Coker’s words: “we must choose our tools carefully, not because they are inhumane (all weapons are) but because the more we come to rely on them, the more they shape our view of the world”. It is clear that military AI shapes our view of the world. Tragically, Lavender gives us cause to realise that this view is laden with violence. The Conversation

Elke Schwarz, Reader in Political Theory, Queen Mary University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
Four Ways AI could Help us Respond to Climate Change https://www.juancole.com/2024/02/respond-climate-change.html Wed, 28 Feb 2024 05:06:31 +0000 https://www.juancole.com/?p=217319 By Lakshmi Babu Saheer, Anglia Ruskin University | –

(The Conversation) – Advanced AI systems are coming under increasing criticism for how much energy they use. But it’s important to remember that AI could also contribute in various ways to our response to climate change.

Climate change can be broken down into several smaller problems that must be addressed as part of an overarching strategy for adapting to and mitigating it. These include identifying sources of emissions, enhancing the production and use of renewable energy and predicting calamities like floods and fires.

My own research looks at how AI can be harnessed to predict greenhouse gas emissions from cities and farms, or to understand changes in vegetation, biodiversity and terrain from satellite images.

Here are four different areas where AI has already managed to master some of the smaller tasks necessary for a wider confrontation with the climate crisis.


“AI and Climate Change,” Digital, Dream, Dreamland v. 3, 2024

1. Electricity

AI could help reduce energy-related emissions by more accurately forecasting energy supply and demand.

AI can learn patterns in how and when people use energy. It can also accurately forecast how much energy will be generated from sources like wind and solar depending on the weather and so help to maximise the use of clean energy.

For example, by estimating the amount of solar power generated from panels (based on sunlight duration or weather conditions), AI can help plan the timing of laundry or charging of electric vehicles to help consumers make the most of this renewable energy. On a grander scale, it could help grid operators pre-empt and mitigate gaps in supply.

Researchers in Iran used AI to predict the energy consumption of a research centre by taking account of its occupancy, structure, materials and local weather conditions. The system also used algorithms to optimise the building’s energy use by proposing appropriate insulation measures and heating controls and how much lighting and power was necessary based on the number of people present, ultimately reducing energy use by 35%.

2. Transport

Transport accounts for roughly one-fifth of global CO₂ emissions. AI models can encourage green travel options by suggesting the most efficient routes for drivers, with fewer hills, less traffic and constant speeds, and so minimise emissions.

An AI-based system suggested routes for electric vehicles in the city of Gothenburg, Sweden. The system used features like vehicle speed and the location of charging points to find optimal routes that minimised energy use.

3. Agriculture

Studies have shown that better farming practices can reduce emissions. AI can ensure that space and fertilisers (which contribute to climate change) are used sparingly.

By predicting how much of a crop people will buy in a particular market, AI can help producers and distributors minimise waste. A 2017 study conducted by Stanford University in the US even showed that advanced AI models can predict county-level soybean yields.

This was possible using images from satellites to analyse and track the growth of crops. Researchers compared multiple models to accurately predict crop yields and the best performing one could predict a crop’s yield based on images of growing plants and other features, including the climate.

Knowing a crop’s probable yield weeks in advance can help governments and agencies plan alternative means of procuring food in advance of a bad harvest.

4. Disaster management

The prediction and management of disasters is a field where AI has made major contributions. AI models have studied images from drones to predict flood damage in the Indus basin in Pakistan.

The system is also useful for detecting the onset of a flood, helping with the real-time planning of rescue operations, and could be used by government authorities to organise prompt relief measures.

These potential uses don’t erase the problem of AI’s energy consumption, however. To ensure AI can be a force for good in the fight against climate change, something will still have to be done about this.

The Conversation


Lakshmi Babu Saheer, Director of Computing Informatics and Applications Research Group, Anglia Ruskin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
AI Behavior, Human Destiny and the Rise of the Killer Robots https://www.juancole.com/2024/02/behavior-destiny-killer.html Wed, 21 Feb 2024 05:02:39 +0000 https://www.juancole.com/?p=217200 ( Tomdispatch.com ) – Yes, it’s already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice-versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it’s anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided “autonomous” weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called “killer robots” by critics, such devices include a variety of uncrewed or “unmanned” planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its “collaborative combat aircraft,” an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they’ll be able to communicate with each other without human intervention and, being “intelligent,” will be able to come up with their own unscripted tactics for defeating an enemy — or something else entirely. Such computer-driven groupthink, labeled “emergent behavior” by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, “uninhabited”) versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they’ll be part of a “networked” combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a “loyal wingman” for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot “Swarms”

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

“Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces,” predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). “Networked, cooperative autonomous systems,” he wrote then, “will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole.”

As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and “vote” on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit “swarm” behavior in nature. As Scharre put it, “Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse.”

In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre’s at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.

From Mosaic to Replicator

Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s in-house high-tech research organization. As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its “Mosaic” program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.

“Applying the great flexibility of the mosaic concept to warfare,” explained Dan Patt, deputy director of DARPA’s Strategic Technology Office, “lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole.”

This concept of warfare apparently undergirds the new “Replicator” strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. “Replicator is meant to help us overcome [China’s] biggest advantage, which is mass. More ships. More missiles. More people,” she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China’s military, the People’s Liberation Army (PLA). “To stay ahead, we’re going to create a new state of the art… We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force’s Collaborative Combat Aircraft and the Navy’s Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment’s Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.

At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To facilitate this, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what it ominously enough calls Project VENOM, or “Viper Experimentation and Next-generation Operations Model.” Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.

When Swarms Choose Their Own Path

In other words, it’s only a matter of time before the U.S. military (and presumably China’s, Russia’s, and perhaps those of a few other powers) will be able to deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Any participating robotic member of such swarms would be given a mission objective (“seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates”) but not be given precise instructions on how to do so. That would allow them to select their own battle tactics in consultation with one another. If the limited test data we have is anything to go by, this could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.

The propensity for such interconnected AI systems to engage in novel, unplanned outcomes is what computer experts call “emergent behavior.” As ScienceDirect, a digest of scientific journals, explains it, “An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties.” In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.

At this point, of course, it’s almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or if, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.

What then? Might they choose to keep fighting beyond their preprogrammed limits, provoking unintended escalation — even, conceivably, of a nuclear kind? Or would they choose to stop their attacks on enemy forces and instead interfere with the operations of friendly ones, perhaps firing on and devastating them (as Skynet does in the classic science fiction Terminator movie series)? Or might they engage in behaviors that, for better or infinitely worse, are entirely beyond our imagination?

Top U.S. military and diplomatic officials insist that AI can indeed be used without incurring such future risks and that this country will only employ devices that incorporate thoroughly adequate safeguards against any future dangerous misbehavior. That is, in fact, the essential point made in the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” issued by the State Department in February 2023. Many prominent security and technology officials are, however, all too aware of the potential risks of emergent behavior in future robotic weaponry and continue to issue warnings against the rapid utilization of AI in warfare.

Of particular note is the final report that the National Security Commission on Artificial Intelligence issued in February 2021. Co-chaired by Robert Work (back at CNAS after his stint at the Pentagon) and Eric Schmidt, former CEO of Google, the commission recommended the rapid utilization of AI by the U.S. military to ensure victory in any future conflict with China and/or Russia. However, it also voiced concern about the potential dangers of robot-saturated battlefields.

“The unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” the report noted. This could occur for a number of reasons, including “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems [that is, emergent behaviors] on the battlefield.” Given that danger, it concluded, “countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems.”

When the leading advocates of autonomous weaponry tell us to be concerned about the unintended dangers posed by their use in battle, the rest of us should be worried indeed. Even if we lack the mathematical skills to understand emergent behavior in AI, it should be obvious that humanity could face a significant risk to its existence, should killing machines acquire the ability to think on their own. Perhaps they would surprise everyone and decide to take on the role of international peacekeepers, but given that they’re being designed to fight and kill, it’s far more probable that they might simply choose to carry out those instructions in an independent and extreme fashion.

If so, there could be no one around to put an R.I.P. on humanity’s gravestone.

Via Tomdispatch.com

]]>
In our National Crisis, We need Public Voices of Optimism — not Gadflies circling a Black Hole https://www.juancole.com/2024/02/national-optimism-gadflies.html Fri, 09 Feb 2024 05:04:59 +0000 https://www.juancole.com/?p=216997 Sacramento (Special to Informed Comment) – Who is a public intellectual? What role should they play? Searching the internet yields several answers. Alan Lightman’s The Role of the Public Intellectual offers a thoughtful discussion of different visions of the public intellectual and their role and responsibilities. I have opted for a broader description, but with some important provisos. A public intellectual is a person who, by virtue of her knowledge and expertise, engages with the public to promote the public good.

An effective criticism of social and political woes by public intellectuals might get the attention of some segments of the public, especially those who might be labeled “politically aware”—individuals who regularly follow the news and crises of the day. But there is more to being a public intellectual than becoming a gadfly gnawing on the pestiferous hide of the establishment. Eloquently depicting misdirection, mismanagement, and overweening ambitions among the political class can be motivating but often proves insufficient. Worse yet, it could become a self-defeating enterprise when these criticisms lead to public despair and political alienation. It is akin to the proverbial heralding that “the emperor has no clothes,” with the added twist that no tailor can sew one either. When others pile on, we get closer to a political black hole.

Churning out critical essays and commentaries should not be the end but an inducement to search for remedies. What utility do such analyses offer if their message only intimates a rotten and entrenched status quo immune to change and improvements?

Channel 4 News Video: “Hannah Ritchie on replacing eco-anxiety with ‘cautious optimism’ & how to build a sustainable world”

The public intellectual must go beyond criticism of the unsatisfactory status quo and policies by inspiring a sense of optimism in the public’s mind about change and reform and suggesting how they might be achieved. How can this be done responsibly?

Paul Romer, a Nobel laureate in Economics (2018), distinguishes complacent optimism from contingent optimism (he calls it “conditional optimism”; I prefer contingent optimism to accentuate the difference with complacent optimism) by giving an example of each: “Complacent optimism is the feeling of a child waiting for presents. Contingent optimism is the feeling of a child who is thinking about building a treehouse. ‘If I get some wood and nails and persuade some other kids to help do the work, we can end up with something really cool.’” In the first case (complacent optimism), the child is passive, awaiting a present with earnest expectation. In the second case (contingent optimism), the child lays out a plan to make her wish a reality. The optimism of the first child is wholly dependent on the largesse of others; she makes herself the object of her expectations. The optimism of the second child is born of her agency to identify and secure the resources she needs to build her treehouse.

Contingent optimism begins by taking stock of the challenge. Once the problem is defined, you search for credible solutions to change the situation in the desired direction. In other words, contingent optimism makes the reason for developing an optimistic outlook contingent on working out a strategy of change that makes it likely to achieve the outcomes one seeks. It is the careful mapping out of a plan that justifies feeling optimistic about change. That optimism is contingent on having correctly defined the problem and potential solutions.

We should expect contingent optimism from public intellectuals, not despair. They are uniquely equipped and positioned to critically analyze our societal ills and propose remedies that can change the system to better serve the common good. The same goes for the rest of us. Deluding ourselves with passive hope is the essence of complacent optimism. Planning how to achieve our wishes justifies optimism—contingently, of course!

]]>
Tech “Visionaries” are Actually Holding back Progress with Bloated, Predatory Corporations https://www.juancole.com/2024/01/visionaries-predatory-corporations.html Wed, 03 Jan 2024 05:02:29 +0000 https://www.juancole.com/?p=216333 By Peter Bloom, University of Essex | –

Technological innovation in the last couple of decades has brought fame and huge wealth to the likes of Elon Musk, Steve Jobs, Mark Zuckerberg and Jeff Bezos. Often feted as geniuses, they are the faces behind the gadgets and media that so many of us depend upon.

Sometimes they are controversial. Sometimes the level of their influence is criticised.

But they also benefit from a common mythology which elevates their status. That myth is the belief that executive “visionaries” leading vast corporations are the engines which power essential breakthroughs too ambitious or futuristic for sluggish public institutions.

For there are many who consider the private sector to be far better equipped than the public sector to solve major challenges. We see such ideology embodied in ventures like OpenAI. This successful company was founded on the premise that while artificial intelligence is too consequential to be left to corporations alone, the public sector is simply incapable of keeping up.

The approach is linked to a political philosophy which champions the idea of pioneering entrepreneurs as figureheads who advance civilisation through sheer individual brilliance and determination.

In reality, however, most modern technological building blocks – like car batteries, space rockets, the internet, smart phones, and GPS – emerged from publicly funded research. They were not the inspired work of corporate masters of the universe.

And my work suggests a further disconnect: that the profit motive seen across Silicon Valley (and beyond) frequently impedes innovation rather than improving it.

For example, attempts to profit from the COVID vaccine had a detrimental impact on global access to the medicine. Or consider how recent ventures into space tourism seem to prioritise experiences for extremely wealthy people over less lucrative but more scientifically valuable missions.

More broadly, the thirst for profit means intellectual property restrictions tend to restrict collaboration between (and even within) companies. There is also evidence that short-term shareholder demands distort real innovation in favour of financial reward.

Allowing executives focused on profits to set technological agendas can incur public costs too. It’s expensive dealing with the hazardous low-earth orbit debris caused by space tourism, or the complex regulatory negotiations involved in protecting human rights around AI.

So there is a clear tension between the demands of profit and long-term technological progress. And this partly explains why major historical innovations emerged from public sector institutions which are relatively insulated from short-term financial pressures. Market forces alone rarely achieve transformative breakthroughs like space programs or the creation of the internet.

Excessive corporate dominance has other dimming effects. Research scientists seem to dedicate valuable time towards chasing funding influenced by business interests. They are also increasingly incentivised to go into the profitable private sector.

Here those scientists’ and engineers’ talents may be directed at helping advertisers to better keep hold of our attention. Or they may be tasked with finding ways for corporations to make more money from our personal data.

Projects which might address climate change, public health or global inequality are less likely to be the focus.

Likewise, research suggests that university laboratories are moving towards a “science for profit” model through industry partnerships.

Digital destiny

But true scientific innovation needs institutions and people guided by principles that go beyond financial incentives. And fortunately, there are places which support them.

“Open knowledge institutions” and platform cooperatives are focused on innovation for the collective good rather than individual glory. Governments could do much more to support and invest in these kinds of organisations.

If they do, the coming decades could see the development of healthier innovation ecosystems which go beyond corporations and their executive rule. They would create an environment of cooperation rather than competition, for genuine social benefit.

There will still be a place for the quirky “genius” of Musk and Zuckerberg and their fellow Silicon Valley billionaires. But relying on their bloated corporations to design and dominate technological innovation is a mistake.

For real discovery and progress cannot rely on the minds and motives of a few famous men. It involves investing in institutions which are rooted in democracy and sustainability – not just because it is more ethical, but because in the long term, it will be much more effective. The Conversation

Peter Bloom, Professor of Management, University of Essex

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
AI Goes to War: Will the Pentagon’s Techno-Fantasies Pave the Way for War with China? https://www.juancole.com/2023/10/pentagons-techno-fantasies.html Wed, 04 Oct 2023 04:04:12 +0000 https://www.juancole.com/?p=214662 ( Tomdispatch.com) – On August 28th, Deputy Secretary of Defense Kathleen Hicks chose the occasion of a three-day conference organized by the National Defense Industrial Association (NDIA), the arms industry’s biggest trade group, to announce the “Replicator Initiative.” Among other things, it would involve producing “swarms of drones” that could hit thousands of targets in China on short notice. Call it the full-scale launching of techno-war.

Her speech to the assembled arms makers was yet another sign that the military-industrial complex (MIC) President Dwight D. Eisenhower warned us about more than 60 years ago is still alive, all too well, and taking a new turn. Call it the MIC for the digital age.

Hicks described the goal of the Replicator Initiative this way:

“To stay ahead [of China], we’re going to create a new state of the art… leveraging attritable, autonomous systems in all domains which are less expensive, put fewer people at risk, and can be changed, upgraded, or improved with substantially shorter lead times… We’ll counter the PLA’s [People’s Liberation Army’s] mass with mass of our own, but ours will be harder to plan for, harder to hit, and harder to beat.”

Think of it as artificial intelligence (AI) goes to war — and oh, that word “attritable,” a term that doesn’t exactly roll off the tongue or mean much of anything to the average taxpayer, is pure Pentagonese for the ready and rapid replaceability of systems lost in combat. Let’s explore later whether the Pentagon and the arms industry are even capable of producing the kinds of cheap, effective, easily replicable techno-war systems Hicks touted in her speech. First, though, let me focus on the goal of such an effort: confronting China.

Target: China

However one gauges China’s appetite for military conflict — as opposed to relying more heavily on its increasingly powerful political and economic tools of influence — the Pentagon is clearly proposing a military-industrial fix for the challenge posed by Beijing. As Hicks’s speech to those arms makers suggests, that new strategy is going to be grounded in a crucial premise: that any future technological arms race will rely heavily on the dream of building ever cheaper, ever more capable weapons systems based on the rapid development of near-instant communications, artificial intelligence, and the ability to deploy such systems on short notice.

The vision Hicks put forward to the NDIA is, you might already have noticed, untethered from the slightest urge to respond diplomatically or politically to the challenge of Beijing as a rising great power. It matters little that those would undoubtedly be the most effective ways to head off a future conflict with China.

Such a non-military approach would be grounded in a clearly articulated return to this country’s longstanding “One China” policy. Under it, the U.S. would forgo any hint of the formal political recognition of the island of Taiwan as a separate state, while Beijing would commit itself to limiting to peaceful means its efforts to absorb that island.

There are numerous other issues where collaboration between the two nations could move the U.S. and China from a policy of confrontation to one of cooperation, as noted in a new paper by my colleague Jake Werner of the Quincy Institute: “1) development in the Global South; 2) addressing climate change; 3) renegotiating global trade and economic rules; and 4) reforming international institutions to create a more open and inclusive world order.” Achieving such goals on this planet now might seem like a tall order, but the alternative — bellicose rhetoric and aggressive forms of competition that increase the risk of war — should be considered both dangerous and unacceptable.

On the other side of the equation, proponents of increasing Pentagon spending to address the purported dangers of the rise of China are masters of threat inflation. They find it easy and satisfying to exaggerate both Beijing’s military capabilities and its global intentions in order to justify keeping the military-industrial complex amply funded into the distant future.

As Dan Grazier of the Project on Government Oversight noted in a December 2022 report, while China has made significant strides militarily in the past few decades, its strategy is “inherently defensive” and poses no direct threat to the United States. At present, in fact, Beijing lags behind Washington strikingly when it comes to both military spending and key capabilities, including having a far smaller (though still undoubtedly devastating) nuclear arsenal, a less capable Navy, and fewer major combat aircraft. None of this would, however, be faintly obvious if you only listened to the doomsayers on Capitol Hill and in the halls of the Pentagon.

But as Grazier points out, this should surprise no one since “threat inflation has been the go-to tool for defense spending hawks for decades.” That was, for instance, notably the case at the end of the Cold War of the last century, after the Soviet Union had disintegrated, when then Chairman of the Joint Chiefs of Staff Colin Powell so classically said: “Think hard about it. I’m running out of demons. I’m running out of villains. I’m down to [Cuba’s Fidel] Castro and Kim Il-sung [the late North Korean dictator].”

Needless to say, that posed a grave threat to the Pentagon’s financial fortunes, and Congress did indeed insist then on significant reductions in the size of the armed forces, leaving less money to spend on new weaponry in the first few post-Cold War years. But the Pentagon was quick to highlight a new set of supposed threats to American power to justify putting military spending back on the upswing. With no great power in sight, it began focusing instead on the supposed dangers of regional powers like Iran, Iraq, and North Korea. It also greatly overstated their military strength in its drive to be funded to win not one but two major regional conflicts at the same time. This process of switching to new alleged threats to justify a larger military establishment was captured strikingly in Michael Klare’s 1995 book Rogue States and Nuclear Outlaws.

After the 9/11 attacks, that “rogue states” rationale was, for a time, superseded by the disastrous “Global War on Terror,” a distinctly misguided response to those terrorist acts. It would spawn trillions of dollars of spending on wars in Iraq and Afghanistan and a global counter-terror presence that included U.S. operations in 85 — yes, 85! — countries, as strikingly documented by the Costs of War Project at Brown University.

All of that blood and treasure, including hundreds of thousands of direct civilian deaths (and many more indirect ones), as well as thousands of American deaths and painful numbers of devastating physical and psychological injuries to U.S. military personnel, resulted in the installation of unstable or repressive regimes whose conduct — in the case of Iraq — helped set the stage for the rise of the Islamic State (ISIS) terror organization. As it turned out, those interventions proved to be anything but either the “cakewalk” or the flowering of democracy predicted by the advocates of America’s post-9/11 wars. Give them full credit, though! They proved to be a remarkably efficient money machine for the denizens of the military-industrial complex.

Constructing “the China Threat”

As for China, its status as the threat du jour gained momentum during the Trump years. In fact, for the first time since the twentieth century, the Pentagon’s 2018 defense strategy document targeted “great power competition” as the wave of the future.

One particularly influential document from that period was the report of the congressionally mandated National Defense Strategy Commission. That body critiqued the Pentagon’s strategy of the moment, boldly claiming (without significant backup information) that the Defense Department was not planning to spend enough to address the military challenge posed by great power rivals, with a primary focus on China.

The commission proposed increasing the Pentagon’s budget by 3% to 5% above inflation for years to come — a move that would have pushed it to an unprecedented $1 trillion or more within a few years. Its report would then be extensively cited by Pentagon spending boosters in Congress, most notably former Senate Armed Services Committee Chair James Inhofe (R-OK), who used to literally wave it at witnesses in hearings and ask them to pledge allegiance to its dubious findings.

That 3% to 5% real growth figure caught on with prominent hawks in Congress and, until the recent chaos in the House of Representatives, spending did indeed fit just that pattern. What has not been much discussed is research by the Project on Government Oversight showing that the commission that penned the report and fueled those spending increases was heavily weighted toward individuals with ties to the arms industry. Its co-chair, for instance, served on the board of the giant weapons maker Northrop Grumman, and most of the other members had been or were advisers or consultants to the industry, or worked in think tanks heavily funded by just such corporations. So, we were never talking about a faintly objective assessment of U.S. “defense” needs.

Beware of Pentagon “Techno-Enthusiasm”

Just so no one would miss the point in her NDIA speech, Kathleen Hicks reiterated that the proposed transformation of weapons development with future techno-war in mind was squarely aimed at Beijing. “We must,” she said, “ensure the PRC leadership wakes up every day, considers the risks of aggression and concludes, ‘today is not the day’ — and not just today, but every day, between now and 2027, now and 2035, now and 2049, and beyond… Innovation is how we do that.”

The notion that advanced military technology could be the magic solution to complex security challenges runs directly against the actual record of the Pentagon and the arms industry over the past five decades. In those years, supposedly “revolutionary” new systems like the F-35 combat aircraft, the Army’s Future Combat System (FCS), and the Navy’s Littoral Combat Ship have been notoriously plagued by cost overruns, schedule delays, performance problems, and maintenance challenges that have, at best, severely limited their combat capabilities. In fact, the Navy is already planning to retire a number of those Littoral Combat Ships early, while the whole FCS program was canceled outright.

In short, the Pentagon is now betting on a complete transformation of how it and the industry do business in the age of AI — a long shot, to put it mildly.

But you can count on one thing: the new approach is likely to be a gold mine for weapons contractors, even if the resulting weaponry doesn’t faintly perform as advertised. This quest will not be without political challenges, most notably finding the many billions of dollars needed to pursue the goals of the Replicator Initiative, while staving off lobbying by producers of existing big-ticket items like aircraft carriers, bombers, and fighter jets.

Members of Congress will defend such current-generation systems fiercely to keep weapons spending flowing to major corporate contractors and so into key congressional districts. One solution to the potential conflict between funding the new systems touted by Hicks and the costly existing programs that now feed the titans of the arms industry: jack up the Pentagon’s already massive budget and head for that trillion-dollar peak, which would be the highest level of such spending since World War II.

The Pentagon has long built its strategy around supposed technological marvels like the “electronic battlefield” in the Vietnam era; the “revolution in military affairs,” first touted in the early 1990s; and the precision-guided munitions praised since at least the 1991 Persian Gulf war. It matters little that such wonder weapons have never performed as advertised. For example, a detailed Government Accountability Office report on the bombing campaign in the Gulf War found that “the claim by DOD [Department of Defense] and contractors of a one-target, one-bomb capability for laser-guided munitions was not demonstrated in the air campaign where, on average, 11 tons of guided and 44 tons of unguided munitions were delivered on each successfully destroyed target.”

When such advanced weapons systems can be made to work, at enormous cost in time and money, they almost invariably prove of limited value, even against relatively poorly armed adversaries (as in Iraq and Afghanistan in this century). China, a great power rival with a modern industrial base and a growing arsenal of sophisticated weaponry, is another matter. The quest for decisive military superiority over Beijing and the ability to win a war against a nuclear-armed power should be (but isn’t) considered a fool’s errand, more likely to spur a war than deter it, with potentially disastrous consequences for all concerned.

Perhaps most dangerous of all, a drive for the full-scale production of AI-based weaponry will only increase the likelihood that future wars could be fought all too disastrously without human intervention. As Michael Klare pointed out in a report for the Arms Control Association, relying on such systems will also magnify the chances of technical failures, as well as misguided AI-driven targeting decisions that could spur unintended slaughter and decision-making without human intervention. The potentially disastrous malfunctioning of such autonomous systems might, in turn, only increase the possibility of nuclear conflict.

It would still be possible to rein in the Pentagon’s techno-enthusiasm by slowing the development of the kinds of systems highlighted in Hicks’s speech, while creating international rules of the road regarding their future development and deployment. But the time to start pushing back against yet another misguided “techno-revolution” is now, before automated warfare increases the risk of a global catastrophe. Emphasizing new weaponry over creative diplomacy and smart political decisions is a recipe for disaster in the decades to come. There has to be a better way.

Tomdispatch.com

A Perfect Storm: Does China’s EV Dominance Threaten European Auto Makers on their Home Turf? https://www.juancole.com/2023/09/dominance-threaten-european.html Sun, 10 Sep 2023 04:02:18 +0000 https://www.juancole.com/?p=214285 By Sören Amelang | –

On the way from China to Europe: BYD electric car. Image by BYD

( Clean Energy Wire ) – Europe’s carmakers have little to celebrate at this year’s IAA mobility show. Chinese firms’ widening lead in the global shift to electric vehicles is on open display at the biennial fair in Munich, while climate activists lambast European firms’ reliance on dirty combustion engines and SUVs. The number of Chinese exhibitors has doubled, underlining their ambition to challenge brands like Volkswagen, BMW, Peugeot, Renault and Fiat on their home turf. Industry experts warn that these household names are facing a “perfect storm.”

China’s rapid transformation into an electric car superpower that threatens Europe’s established carmakers even on their home turf is a dominant theme at this year’s IAA motor show in Bavaria’s capital Munich. The presence of Chinese companies has doubled at this year’s show, which is open to the public from 5 to 10 September, reflecting a tectonic shift in the global automobile industry.

“A perfect storm is brewing in Munich,” said Christian Hochfeld, who heads Berlin-based clean mobility think tank Agora Verkehrswende. “Established European carmakers are facing huge challenges.”

Hochfeld pointed to China’s lead in battery technology, digitalisation, competitiveness, and the automation of production. He also noted conventional carmakers’ dependence on fragile supply chains dominated by the East Asian country, both in raw materials and products like batteries and electronics. Additionally, carmakers are dealing with a sales crisis in Europe, high energy prices, and a skilled labour shortage, Hochfeld told Clean Energy Wire.

Stefan Bratzel, the director of the Center of Automotive Management, also warned against regarding conventional carmakers’ record profits as an indication of future success. “They might still be doing fine at the moment, but trouble is ahead,” Bratzel said.

“China is like a magnifying glass of the transition. What we will see on a global scale in the future can already be observed in China today,” Bratzel said. “Electrification and digitalisation are far more advanced than in Europe, and domestic competitors are very strong in China, especially in electric mobility. Established carmakers have their work cut out for them.”

As the world races to lower emissions in the fight against climate change, a rapidly growing number of countries and cities have already set phase-out dates for combustion engines. This shift has massive repercussions for Europe’s car industry, whose success was built on this climate-damaging technology and which has been slow to switch to clean alternatives like electric vehicles.

The IAA promises a comprehensive view of mobility.

German Chancellor Olaf Scholz attended the official opening, underlining the political significance of the show hosted by Germany’s car industry association VDA. In his opening address, he praised the nation’s carmakers, and said the competitiveness of Germany as a car country was “completely beyond question”.

The IAA is one of the world’s most important auto industry events, and has a history stretching back more than 100 years. Previously known as the Frankfurt auto show, the event rebranded to “IAA Mobility” in 2021 in the face of climate protests and declining interest from both industry and the public. The main novelty this year is the complete separation of these two target audiences. A traditional trade fair will cater exclusively to business visitors, while non-business visitors can check out various locations in Munich’s city centre free of charge.

Previous IAA shows had already shifted much of their focus to zero-emission mobility and have moved the attention from solely cars to other forms of transport like bikes and buses. The IAA says it “stands for a modern and comprehensive concept of mobility like no other event.”

Sales shock made in China

Recent trends in China, the world’s largest car market, provide deeply unsettling news for Europe’s auto industry, especially German brands. Accounting for more than a third of sales, the East Asian country is the most important market for BMW, Mercedes, and Volkswagen, including its Audi and Porsche subsidiaries. But the German brands’ success in China, which was built on combustion-engine prowess, is quickly eroding as the country switches rapidly to electric vehicles (EVs).  

VW recently lost its decades-old pole position in the Chinese market to domestic rival BYD (“Build Your Dreams”), which delivered almost 20 times more electric cars to customers in China than VW at the start of this year. Not a single European brand makes it into the top ten of China’s best-selling electric models. Four out of five electric vehicles sold in China are from domestic brands – only Tesla has a sizeable electric market share.

For the first time, Chinese carmakers are likely to outsell foreign brands in the overall Chinese auto market this year, including vehicles with internal combustion engines, according to management consultancy AlixPartners, which expects the domestic share to rise to 65 percent by 2030.

Rattled European carmakers fear Chinese “invasion”

“China is on its way to becoming an automotive superpower,” AlixPartners’ Fabian Piontek told DW. European manufacturers are increasingly finding themselves defending market share at home. “The era of record profits for German automakers is coming to an end,” he concluded. This shift is being felt even on a global scale, where BYD overtook BMW in the first half of the year.

The Chinese newcomers are also starting to conquer the European market. Of new EVs sold on the continent this year, eight percent were made by Chinese brands – double the 2021 share, according to autos consultancy Inovev. Western automakers are rattled, with Carlos Tavares, the CEO of the Stellantis Group, which includes Peugeot and Fiat, warning of an “invasion” of cheap Chinese EVs in Europe.

The German car industry’s assessment is also sombre. “The Chinese car industry is massively supported by the state in terms of industrial policy, while our production costs are moving further and further out of line with international competitiveness. These are difficult conditions,” said the VDA lobby group’s head Hildegard Müller.

Volkswagen top executives also took a dim view on an internal conference call in July, people familiar with the event told the Wall Street Journal. A divisional chief told his colleagues that exploding costs, falling demand and new rivals such as Tesla and Chinese electric-car makers are making for a “perfect storm,” adding: “The roof is on fire.”

BMW CEO Oliver Zipse takes a slightly more upbeat view. Products made in China must be taken seriously but do not yet pose a significant threat to his company’s business, he told business daily Handelsblatt. “No one can enter a new market overnight,” Zipse said, arguing that “ambition does not equal success” when it comes to new potential rivals from China attempting to gain a foothold in the European car market. “It remains to be seen how well the new players meet the requirements and tastes of European customers,” the manager said, explaining that digitalisation, logistics, maintenance services and many other factors play a role in customer satisfaction.

Chinese carmakers “are coming to stay”

China’s superiority in many industry trends, as well as its ambitions, are obvious at the IAA, where BYD and other Chinese brands unveil a whole range of electric models. “BYD is truly dedicated to introducing innovative and high-tech eco-friendly cars to the European market,” said Michael Shu, managing director of the company’s European division. “The IAA in Munich provides us with the perfect platform to showcase our latest BYD models … We head to the Munich motor show with great excitement.”

In a further sign of changing times, China held its own electric mobility trade show, the World New Energy Vehicle Congress (WNEVC), as part of the IAA – the first time the event has taken place outside of China.

“It’s a strong signal clearly indicating the Chinese carmakers are coming, and they are coming to stay,” Hochfeld said. “They will probably become market leaders in certain segments in Europe. That will be the new normal.”

Nio is another Chinese newcomer marketing its cars to European clients. Image by Nio

From entry level to premium

Gartner analyst Pedro Pacheco also highlighted the significance of the WNEVC. “Having the WNEVC come to the IAA in Munich is quite symbolic because we are starting to see Chinese automakers expanding more and more outside of China,” he told Automotive News Europe. He added that entry-level EVs are an obvious target for Chinese players, given that European brands have “next to nothing” on offer in this segment.

But Hochfeld warned that Chinese brands could also take a sizeable share of the premium market, as clients’ attention shifts from mechanical precision – a trademark of European luxury brands – to connectivity and entertainment, where Chinese brands are superior. Auto industry consulting firm Berylls already forecasts a “change of guard” in China’s premium segment. In the race with traditional luxury manufacturers from Germany, the Chinese are “overtaking on the fast lane,” Berylls said.

Problem child

The IAA is also shining a spotlight on the host country’s failure to make much headway in making mobility more climate friendly. Germany likes to consider itself a green pioneer due to its landmark energy transition. But the country has been struggling to lower transport emissions, which have remained broadly stable for decades, as gains from more efficient engines have been eaten up by heavier cars. The transition to low-emission mobility is often referred to as climate policy’s “problem child,” and is a particularly sensitive topic given that more than 800,000 German manufacturing jobs directly depend on the industry.

As a result, future mobility has already become an electoral topic in Germany. The Christian Democrats (CDU), who are in opposition on a federal level, as well as the Free Democrats (FDP), who are in a federal coalition government, are attempting to lure voters with pro-car policies. For example, the city of Berlin’s new CDU-led government coalition halted cycle lane projects earlier this year to preserve parking spaces, following a successful pro-car election campaign. In comparison to a rapidly growing number of other European cities like Amsterdam, Barcelona or Paris, Germany’s capital already lags far behind when it comes to sustainable mobility.

Under intense pressure from the FDP, the German government earlier this year insisted on exceptions for synthetic fuels in the EU push for climate-friendly cars, enraging European partners. These fuels made with renewable electricity could throw a lifeline to combustion engines, but are considered an unsuitable option by most experts, because they are highly inefficient compared to electric motors. Germany is also lagging on its own targets for the rollout of electric cars. “The current state of affairs signals that the German government’s target of 15 million electric vehicles in 2030 will be missed by 50 percent,” according to the Center of Automotive Management (CAR).

An activist alliance plans a protest camp and other actions at this year's IAA

Radical climate activists reject invitation

Against this backdrop, the IAA is accompanied by anti-industry and anti-government protests. Climate activists united under the label “BlockIAA” are calling for a large demonstration on 10 September. Additionally, about 1,500 activists are expected to participate in a “Mobility Transition Camp” in a central Munich park. It offers an alternative programme to the IAA’s, featuring talks on “ways out of the fossil car trap” and “towards a mobility that benefits people instead of car industry shareholders,” according to organisers, which include the well-known activist movements Fridays for Future and Attac Germany, as well as smaller groups like “Sand im Getriebe” (roughly translated as “spanner in the works”), “No Future for IAA”, and “Smash IAA”.

“We are heading unchecked toward a climate catastrophe – and the car companies continue to press down on the gas pedal. More and more cars, ever larger and heavier, are clogging up the roads, taking away the air we breathe and heating up the climate. This has to stop now,” argue the organisers, who are also planning a bicycle demo and acts of civil disobedience. They accuse the IAA of “cluttering up our public spaces with token bicycles, greenwashed electric vehicles & braggy cars”.

In an attempt to channel climate protests, which have become something of an IAA tradition, the organisers offered an exhibition space to the relatively radical Last Generation group, which has become infamous for their road blockades in Germany. But the group, along with Greenpeace and Fridays for Future, rejected the invitation: “We are not available for such an obvious attempt to co-opt us.” The VDA has also tried to engage with other climate activists and critics of the event. For example, 21-year-old activist and UN advisor Sophia Kianni is scheduled to speak at the show on 6 September about the climate protection initiative she founded, Climate Cardinals.  

Clean Energy Wire
