Training artificial intelligence models does not typically involve coming face-to-face with an armed soldier who is pointing a gun at you and shouting at your driver to get out of the car. But the system that F. LeRon Shults and Justin Lane, cofounders of CulturePulse, are developing for the United Nations is not a typical AI model.
“I got pulled over by the [Israeli] military, by a guy holding [a military rifle] because we had a Palestinian taxi driver who drove past a line he wasn't supposed to,” Shults tells WIRED. “So that was an adventure.”
Shults and Lane were in the West Bank in September, just weeks before Hamas attacked Israel on October 7, sparking what has become one of the worst periods of violence in the region in at least 50 years.
Shults and Lane—both Americans who are now based in Europe—were on the ground as part of a contract they signed with the UN in August to develop a first-of-its-kind AI model that they hope will help analyze solutions to the Israeli-Palestinian conflict.
Shults and Lane are aware that claiming that AI could “solve the crisis” between Israelis and Palestinians is likely to result in a lot of eye-rolling, if not outright hostility, especially given the horrific scenes coming out of Gaza daily. So they are quick to dispel the notion that this is what they are trying to do.
“Quite frankly, if I were to phrase it that way, I'd roll my eyes too,” Shults says. “The key is that the model is not designed to resolve the situation; it's to understand, analyze, and get insights into implementing policies and communication strategies.”
The conflict in the region is centuries old and deeply complex, and it's made even more complicated by the current crisis. Countless efforts at finding a political solution have failed, and any eventual end to the crisis will need support not just from the two sides involved, but likely the wider international community. All of this makes it impossible for an AI system to simply spit out a fully formed solution. Instead, CulturePulse aims to pinpoint the underlying causes of the conflict.
“We know that you can't solve a problem this complex with a single AI system. That's not ever going to be feasible in my opinion,” Lane tells WIRED. “What is feasible is using an intelligent AI system—using a digital twin of a conflict—to explore the potential solutions that are there.”
The digital twin Lane is speaking of is the multi-agent AI model CulturePulse is building, which will ultimately allow the company to create a virtual version of the region. In past iterations, the model has replicated every single person virtually, each agent imbued with demographics, religious beliefs, and moral values that echo its real-world counterpart, according to Shults and Lane.
In total, CulturePulse’s models can assign more than 80 categories of traits to each “agent,” including anger, anxiety, personality, morality, family, friends, finances, inclusivity, racism, and hate speech, though not all characteristics are used in all models.
“These models are entire artificial societies, with thousands or millions of simulated adaptive artificially intelligent agents that are networked with each other, and they're designed in a way that is more psychologically realistic and more sociologically realistic,” Shults says. “Basically you have a laboratory, an artificial laboratory, that you can play with on your PC in ways that you could never do ethically, certainly, in the real world.”
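CulturePulse hasn’t published its code, so the exact structure of those agents isn’t public. As a rough illustration of what a networked, psychologically detailed agent might look like, here is a minimal Python sketch; the trait names are hypothetical, drawn only from the categories mentioned above.

```python
# Minimal sketch of a simulated "agent." CulturePulse's implementation is not
# public; the trait names here are hypothetical, drawn from the categories
# mentioned in this article.
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: int
    religion: str                      # identity / demographic attribute
    anger: float                       # psychological traits, scaled 0.0-1.0
    anxiety: float
    morality: float
    finances: float
    neighbors: list = field(default_factory=list)  # social-network ties

def build_population(n: int, religions=("A", "B")) -> list:
    """Create n agents and wire them into a small random social network."""
    agents = [
        Agent(i, random.choice(religions), random.random(), random.random(),
              random.random(), random.random())
        for i in range(n)
    ]
    for agent in agents:
        others = [a for a in agents if a is not agent]
        agent.neighbors = random.sample(others, k=min(4, len(others)))
    return agents
```

In the real system, those values would reflect real-world data, such as the interviews and lab studies described later in this article, rather than a random-number generator.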
The current project will initially model the socio-ecological aspects of the Israeli-Palestinian region that are relevant to the conflict, meaning it is smaller in scale than some of their previous projects. However, should the project be expanded in the future, a model could allow the UN to see how the virtual society would react to changes in economic prosperity, heightened security, changing political influences, and a range of other parameters. Shults and Lane claim their model’s predictions match real-world outcomes with more than 95 percent accuracy.
“It goes beyond just learning randomly and finding patterns like machine learning, and it goes beyond statistics, which gives you correlations,” Shults says. “It actually gets to a causality, because of the multi-agent AI system which grows the conflict, or the polarization, or the peaceful immigration policy from the ground up. So it shows you what you want to create before you try it out in the real world.”
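To make that “from the ground up” idea concrete, the toy continuation below (building on the Agent sketch above) runs the same simulated population under two values of a hypothetical policy lever and compares how many agents end up above a mobilization threshold. The update rule is invented for illustration and is not CulturePulse’s.

```python
# Toy scenario comparison, continuing the Agent sketch above. The update rule
# and the 'economic_support' lever are hypothetical, for illustration only.
import copy

def step(agents, economic_support: float) -> None:
    """One time step: anger rises when an agent's neighbors are angry and is
    damped by the hypothetical policy lever."""
    for agent in agents:
        if not agent.neighbors:
            continue
        neighbor_anger = sum(n.anger for n in agent.neighbors) / len(agent.neighbors)
        delta = 0.1 * (neighbor_anger - 0.4) - 0.05 * economic_support
        agent.anger = min(1.0, max(0.0, agent.anger + delta))

def run_scenario(agents, economic_support: float, steps: int = 100) -> float:
    """Return the share of agents above an arbitrary mobilization threshold."""
    for _ in range(steps):
        step(agents, economic_support)
    return sum(a.anger > 0.7 for a in agents) / len(agents)

population = build_population(1000)
baseline = run_scenario(copy.deepcopy(population), economic_support=0.0)
with_support = run_scenario(copy.deepcopy(population), economic_support=0.5)
print(f"mobilized: {baseline:.1%} baseline vs {with_support:.1%} with support")
```

Comparing the two runs before changing anything in the real world is, in miniature, the kind of counterfactual experiment Shults is describing.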
Discussions around AI and the Israel-Hamas war have so far focused on the threat of generative AI being used to push disinformation. While those threats have yet to materialize, news cycles have been clouded by disinformation and misinformation shared by all sides. Rather than trying to eliminate this disruptive element, CulturePulse’s model has in the past factored this type of information directly into its analysis.
“We actually deliberately want to make sure that those materials that are biased are being put into these models. They just need to be put into the model in a psychologically real way,” Lane says.
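What “psychologically real” might mean in code: rather than stripping out biased material, a model can let each simulated agent weigh an incoming claim by how much it trusts the source and how open it is to changing its mind. The function below is a generic illustration of that idea, not CulturePulse’s actual mechanism.

```python
# Generic illustration: belief updating weighted by source trust, so biased or
# false claims sway agents in proportion to who they trust, rather than being
# filtered out of the model. Not CulturePulse's actual mechanism.
def update_belief(current_belief: float, claim: float,
                  source_trust: float, openness: float) -> float:
    """Nudge a belief (0.0-1.0) toward a claim, scaled by trust and openness."""
    weight = max(0.0, min(1.0, source_trust * openness))
    return (1 - weight) * current_belief + weight * claim

# An agent that distrusts a partisan outlet barely moves; a trusting one moves a lot.
skeptic = update_belief(0.2, claim=1.0, source_trust=0.1, openness=0.5)   # 0.24
believer = update_belief(0.2, claim=1.0, source_trust=0.9, openness=0.5)  # 0.56
```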
The horrific massacres and humanitarian crises happening in Israel and Gaza over the past month have brought home the pressing need for a solution to the deeply rooted conflict. But before the latest outbreak of violence in the region, the UN Development Program (UNDP) was already exploring new options for finding a resolution, signing an initial five-month contract with CulturePulse in August.
The application of artificial intelligence technologies to conflict situations dates back to at least 1996, when machine learning was used to predict where conflicts might occur. The use of AI in this area has expanded in the intervening years, improving logistics, training, and other aspects of peacekeeping missions. Lane and Shults believe they can use artificial intelligence to dig deeper and find the root causes of conflicts.
Their idea for an AI program that models the belief systems that drive human behavior began when Lane moved to Northern Ireland a decade ago to study whether computational modeling of cognition could be used to understand issues around religious violence.
In Belfast, Lane figured out that by modeling aspects of identity and social cohesion, and identifying the factors that motivate people to fight and die for a particular cause, he could accurately predict what was going to happen next.
“We set out to try and come up with something that could help us better understand what it is about human nature that sometimes results in conflict, and then how can we use that tool to try and get a better handle or understanding on these deeper, more psychological issues at really large scales,” Lane says.
The result of their work was a study published in 2018 in the Journal of Artificial Societies and Social Simulation, which found that people are typically peaceful but will engage in violence when an outside group threatens the core principles of their religious identity.
A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union, which created a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.
The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDELT, a project that monitors “the world's broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research.
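CulturePulse hasn’t described its data pipeline in detail, but the distilling step is roughly the kind of thing sketched below: reducing a large corpus of news records to simple per-day signals a model can be calibrated against. The sketch assumes the articles have already been downloaded and saved as JSON files; the field names and keyword list are made up for illustration and are not GDELT’s schema.

```python
# Minimal sketch: distill a folder of downloaded news articles (JSON files)
# into per-day counts of conflict-related coverage. Field names and keywords
# are illustrative, not CulturePulse's pipeline or GDELT's schema.
import json
from collections import Counter
from pathlib import Path

CONFLICT_TERMS = {"riot", "bombing", "paramilitary", "checkpoint"}

def daily_conflict_mentions(corpus_dir: str) -> Counter:
    """Count articles per publication date that mention a conflict term."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.json"):
        article = json.loads(path.read_text(encoding="utf-8"))
        text = f"{article.get('title', '')} {article.get('body', '')}".lower()
        if any(term in text for term in CONFLICT_TERMS):
            counts[article.get("date", "unknown")] += 1
    return counts
```

Time series like these are the sort of signal a simulation’s output can later be checked against.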
Lane spent months finding and speaking to those directly involved in the violence, such as members of the Ulster Volunteer Force (UVF), a paramilitary group loyal to the British crown, and the Irish Republican Army (IRA), a paramilitary group seeking the end of British rule on the island of Ireland. The information that Lane gathered in these interviews was fed into his model in order to give a more complete understanding of the psychology behind the violence that had riven the country for three decades.
While Lane is now based in Slovakia, he maintains the links he built up while in Northern Ireland, returning at least once a year to speak to the people again and update his model with the latest information. If during these conversations Lane hears about an issue or a reason someone took a particular action that’s not yet present in the AI model, the team will check whether there is lab data to back it up before adding it to the model.
“And if the data doesn’t exist, we'll go out and we'll do our own experimentation with universities to see if there is evidence, and then we will build that into our project,” Lane says.
In recent years, Lane and Shults have worked with a number of groups and governments to apply their model to better understand situations across the globe, including the conflicts in South Sudan and the Balkans. The model has also been used in the Syrian refugee crisis, with Lane and Shults traveling to the Greek island of Lesbos to gather firsthand information to help their system integrate refugees with host families. CulturePulse has also worked with the Norwegian government to tackle the spread of Covid-19 misinformation by better understanding why people share inaccurate information.
Key to the success of all of these efforts is the collection of firsthand information about what’s happening on the ground. And so, when they signed the contract with the UNDP in August, the first thing Shults and Lane wanted to arrange was a visit to Israel and the West Bank, where they spent “about a week” gathering data. “We met with the UN and different NGOs going out to the villages, seeing firsthand what it looks like with the settler dynamics that are there,” Shults says. The pair hoped to go to Gaza but were not able to secure permission in advance. The trip to Israel also included time spent speaking with their employers at the UN to find out exactly what they are hoping to get from this project.
“We spent a whole week extracting from the UN officials we met information that's relevant, that we need to know for the model, getting a sense of their understanding of the dynamics, the data that they might have that could inform the model's calibration and final validation,” Shults says.
Shults would not discuss the detailed parameters the UN had specified be built into the model, but his team gives the UN regular updates over Zoom on the construction of the model and “the simulation experiments that are being run to test out the conditions and mechanisms that might lead to outcomes that they desire,” he says.
The UNDP has not yet responded to WIRED’s request for comment.
CulturePulse’s contract with the UNDP runs out in January, but the company is hopeful of signing a phase-two contract that would see it build out a fully functional model. This month, CulturePulse also signed a nine-month contract with the UNDP to work on a system to help resolve cultural and religious tensions that have persisted in Bosnia and Herzegovina since the end of the Bosnian War in 1995.
The reason the UN is turning to AI in the Israeli-Palestinian conflict, according to Lane, is that it simply has nowhere else to turn. “The way that the UN phrased it to us is that there's no more low-hanging fruit in that situation,” Lane says. “They needed to try something that was new and innovative, something that was really thinking outside of the box yet still really addressing the root issues of the problem.”
Updated at 12:55 pm ET, November 3, 2023, to clarify the scope and limitations of the AI model CulturePulse is currently building in relation to the Israeli-Palestinian conflict and the details of the founders' attempt to visit Gaza while in the region prior to the ongoing Israel-Hamas war.