Terrorists are, and have always been, children of their time, just like you and me. We increasingly make use of modern technology, whether by ordering groceries through an app or reading articles selected by artificial intelligence (AI). These benefits come with a downside: those with bad intentions can use these technologies as well. This contribution aims to put the threat posed by terrorist use of technology to the West, and more specifically to the Netherlands, in perspective. It reflects in particular on the signals to watch in the Global Security Pulse on ‘Terrorism in the Age of Tech’ (hereafter: the Global Security Pulse), examining the probability and evidence base of the threats that feature prominently in that publication and highlighting some threats that have not been included.

The Global Security Pulse touches upon a fundamental question: ‘How can we strike (and safeguard) a proportional balance between the realistic probability and the impact of the use of these new technologies, without then neglecting other threats?’ Putting the debate on terrorism in the age of technology in perspective is necessary. The public discourse paints a futuristic picture of a threat that could quite easily spin so far beyond our understanding that it becomes difficult to control and counter. Scenarios of weaponised AI and killer drones are widely accepted. But how probable are these scenarios? Are they based on actual evidence, or on a fear that terrorists will go as far as our imagination allows? Are incentives and motivations taken into account, or is the threat assessment mainly based on assumed technical capacities and unhindered access to new technologies? The fact is that in counter-terrorism (CT) policy, including at the international level, time and money are spent on these scenarios[1].

Even though this contribution advocates a more proportional discourse on the use of new and innovative technology in terrorism, it does not propose closing one’s eyes to the ‘unknown unknowns’. On the contrary: it is important not to prepare only for the last and known attacks. We need to be able to anticipate new forms of threat, to make sure that we do not suffer from a ‘failure of imagination’. But in doing so, we should take care not to fall victim to conscious or unconscious fear mongering, crying wolf over threats that are conceivable but born largely of speculation rather than indications of probability. We should take particular heed of this when it comes to innovative technology, a field in which it is relatively difficult for a layperson to assess whether the scenarios warned against are indeed viable or probable.

Looking back

Most terrorist attacks in Europe in the past years have been committed not with new technologies but by low-tech means, ranging from larger-scale attacks with bombs or mass shootings to small-scale attacks committed with easily attainable objects used as weapons, such as cars, vans, knives and, of course, guns[2]. Jihadist terrorist organisations like Al-Qaeda and IS have in recent years actively called upon their followers to ‘keep it simple’[3].

Nevertheless, terrorist use of (crude) unconventional weapons has been a worry for decades, with the potential capability of terrorist organisations to acquire or build nuclear weapons as the sum of all fears, even though researchers have contended that this scenario is unlikely[4]. Marianne van Leeuwen, in her 2000 study “Crying Wolf? Assessing Unconventional Terrorism”, examined the quality of the public debate on the nuclear terrorist threat, working with the premise that this debate influences political and policy choices. She concluded that ‘in the United States in particular, the debate has stimulated an atmosphere of ill-defined alarm rather than creating the right conditions for well-considered and effective counterterrorist policies. (...) Opinion-leaders have been concentrating on technical capabilities while sidestepping the equally important issue of incentives and motivations’[5].

Confronted with a growing discourse on the threat posed by ‘killer drones’ or ‘AI assassinations’, it is important to assess whether terrorists or terrorist organisations actually need these means for their ends – that is, as Van Leeuwen stated, to investigate their incentives and motivations.

Looking at the Global Security Pulse, the top two items on its list of ‘novel and important signals to watch’ that could impact the terrorist threat assessment for the Netherlands are indeed the weaponisation of AI and the threat posed by drones. How probable are these scenarios?

Use of AI by terrorist organisations

The assertion that terrorist organisations will use – and already do use – AI for their benefit is a plausible one. Just as we all do, terrorists probably benefit from machine learning and other forms of AI, for instance in the preparation of their operations and the gathering of information. Particularly in cyber-attacks, tasks automated with AI can potentially increase the scale and impact of an attack[6]. These scenarios are not only possible, but also seem to align with the motives and incentives of terrorist groups.

The scenario that seems to both frighten and fascinate people most is that of AI robots: some sort of sentient killer robot with capacities that mimic those of the human brain, but without the human inhibitions that people – thankfully – still seem to attribute to us ‘real’ people. We apparently attribute to our fellow human beings an innate sense of decency, or morality, that would keep a human from committing assassinations the way an AI-powered robot would commit them. Do terrorist organisations actually need that kind of AI operative? Do these scenarios align with their needs, incentives and motivations?

One of the most difficult things to counter when it comes to terrorist attacks is the willingness of terrorist operatives to give their own lives for their cause. They are not stopped by decency, morality or proportionality and are, in a sense, as unstoppable as a ‘killer robot’. Recent years have shown that suicide operatives are readily available and that their attacks are quite successful. There is added gain for terrorist organisations too: the use of self-sacrificing human operatives sends a strong and empowering message of fanaticism and determination that strengthens their image as victors and increases their impact.

Terrorist groups, then, do not need AI-powered operatives to carry out assassinations or attacks. Neither do they need “self-driving vehicles carrying car bombs and conducting ramming attacks”, as mentioned in the Global Security Pulse: they have experience with car bomb factories and plenty of followers who are willing to die while committing similar attacks. This reduces the plausibility of these scenarios.

States and AI

The fear of terrorists using ‘AI killer robots’ runs parallel to the discussion on their use in the military. Even though at this point no autonomous targeting systems without human decision-making are employed in the (US) military, there are widespread concerns about this possibility and the ethical questions involved[7]. But here, too, some nuance is in order: the Pentagon is aware of the risks and has put guidelines in place to guard against them, one of them being to always have ‘a human in the loop’[8]. Moreover, the usefulness of such systems in the military has been questioned. Just two weeks ago, the Pentagon issued a press release in which it tempered expectations for the use of AI in the military, stating that it “can’t show the rewards [of AI] right now on mission-critical systems”[9].

But concern about the use of AI by states is warranted. AI tools and technologies are being developed and deployed by countries that may not use them within proportional frameworks that take individual liberties and fundamental rights into account, as also signalled in the Global Security Pulse. China, for instance, uses sophisticated facial recognition software for mass surveillance. But more countries than we might generally be aware of use AI technologies for surveillance purposes: the 2019 Carnegie AI Global Surveillance Index (AIGS) reports that 75 out of 176 countries worldwide do so. These include liberal democracies, among them the Netherlands. The Carnegie report furthermore warns that ‘democracies are not taking adequate steps to monitor and control the spread of sophisticated technologies linked to a range of violations’[10].

AI technologies are also sold to and used by fragile and unstable states, according to the AIGS. Even though innovative AI technology is unlikely to be developed by terrorist organisations, and might be too expensive for them to buy, its possession by fragile states means that when these states collapse, or when conflict within them arises or intensifies, the chances increase that these technologies end up in the hands of terrorist organisations.

Killer drones

The Global Security Pulse warns not only of scenarios involving weaponised AI robots, but also of scenarios involving ‘swarms of killer drones’. A YouTube video called ‘Slaughterbots’, posted in 2017, has generated over 3 million views[11]. It presents a dystopian scenario in which an unidentified government has produced large swarms of small weaponised drones to kill critical civilians and assert thought control. Notwithstanding the fact that the video was created and posted by an activist group called ‘Stop Autonomous Weapons’ with the clearly articulated political purpose of stopping the military use of unmanned weapons, it easily made its way into the discourse on the use of technology by terrorists[12]. The scenario of ‘swarms of killer drones’ is linked to evidence of ISIS having used small, consumer-grade drones armed with grenades on the Syrian battlefield. These cases then merge into a seemingly all too plausible scenario of terror that we are potentially faced with[13].

But the technological tool presented in the video is not readily available. All of its separate components – the facial recognition, the explosives, the algorithms, the sophisticated drone technology, et cetera – have been developed, but in different contexts and mostly not in consumer goods. It is imaginable that at some point these separate technologies could be assembled as presented. But the Hollywoodesque nature of the video – echoing both a TED talk and an episode of the dystopian Netflix series Black Mirror – helps the viewer ignore its illogical premise. It presumes, for instance, that governments will mass-produce small weaponised drones to openly mass-execute civilians for thinking or behaving in unwanted ways. That is not likely. If a government indeed wanted to openly assassinate large numbers of citizens, history unfortunately proves that there are easier ways; atrocities like the killing fields of Cambodia or Rwanda have shown us this. And even when carrying out mass-scale atrocities, governments have historically tried to hide their actions.

The video also ignores the fact that countermeasures against drones are in place, and that they are increasingly sophisticated – and sometimes quite unsophisticated but effective. Swarms of drones, for instance, can be countered by simple measures like chicken wire and nets, asserts Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security and author of “Army of None: Autonomous Weapons and the Future of War”[14].

Online technologies: anonymous fora

When discussing the threat of terrorist use of technology, one needs to pay attention not only to AI and drones but also to the use of the Internet. Two prominent and well-researched areas in which terrorists use digital and Internet technology are propaganda and communications. This contribution focuses on a use of online space by terrorists that has not received the same attention: anonymous fora, and particularly the increase in attacks connected with fora like 8chan[15].

On 9 October 2019, the Jewish holiday of Yom Kippur, a 27-year-old man killed two people and injured two others when he tried to enter a synagogue in Halle, Germany[16]. This right-wing terrorist attack is of the sort we have increasingly seen in recent years. The attacker shot his victims while filming the attack with a camera attached to a helmet and live-streaming it on the internet. The man identified himself as an ‘anon’, a user of anonymous, no-rules internet fora like 8chan[17]. The general public has come to know these fora because of their attraction to people with extremist views and the fact that recent mass shooters were users of these fora, announced their attacks there, and published manifestos or (links to) livestreams of the attacks in these online spaces. When trying to put the threat posed by terrorist use of technology into perspective, the use and role of these anonymous fora is a signal to watch.

The difference between a forum like 8chan and other fora, like Reddit, is that it lacks moderation efforts to remove or block content[18]. Users of such a forum can be completely anonymous: there is no need to register or create an account, as is required on mainstream fora. This makes them an ideal place for potential terrorists to anonymously express intentions or sentiments that would otherwise be taken offline. Terrorists who use these fora seem to purposefully want to inspire and incite people to commit violence, and a snowball effect is already visible: recent perpetrators of right-wing inspired mass shootings who used these fora have openly referred to each other as inspirations.

3D printing

The Halle attack points to another potentially dangerous use of innovative technology: 3D printers used to fabricate homemade guns for terrorist attacks.

The Halle perpetrator used a homemade gun to execute his attack, built with the help of PDF manuals freely available online. Parts of the gun were allegedly produced with a 3D printer[19]; right before the attack, the shooter himself posted a message online stating “All you need is a weekend worth of time and $50 for the materials”[20]. The manifesto he posted online before the attack stated that his main purpose was to “prove the viability of improvised weapons”[21]. His gun jammed during the attack, and in the video he can be heard stating “I have certainly managed to prove how absurd improvised weapons are”. The attack has consequently been framed as a ‘failure’ in right-wing extremist circles[22]. This will presumably undermine the shooter’s efforts to advocate the use of homemade weapons in terrorist attacks and decrease the chances of copycat behaviour. However, researchers have warned that this could in turn lead to an underestimation of the risk posed by 3D printing techniques[23]. Particularly when combined with readily available manufacturing manuals, easy home manufacturing like 3D printing makes it much easier for potential terrorists to acquire guns, even in countries with strict gun control, like Germany[24]. Terrorism by mass shooting is sometimes seen as a phenomenon reserved for areas with low gun control, such as the USA. With these dynamics, that might change.

Guns made completely from currently available, commercial 3D-printed material will not last, probably no more than one round fired[25]. The 3D-printed elements in the gun used in the Halle attack were not essential to its functioning. But as the quality of 3D printing increases, and the technique becomes better known and more widely available, the risk and effectiveness of terrorists making their own guns will likely increase further.

Public-private partnerships

An important recent development in the international order, as also touched upon in the Global Security Pulse, is the intensification of public-private partnerships when it comes to countering terrorist and violent extremist online content.

Technological expertise is not necessarily the core expertise of CT or countering violent extremism (CVE) professionals. To keep up with technological developments and counter their misuse, government organisations need to make an effort to attract the ‘best and brightest’ to their teams. Working together with private technology companies can help governments tap into the expertise of the private industry’s ‘best and brightest’.

Another advantage of public-private partnerships is that private communication and social media companies often have direct access to the information and communication disseminated through their channels, and are in a position to take communication down and to identify signals that a threat is manifesting on their platforms.

Public-private cooperation in this field has particularly increased since the Christchurch shooting. The Christchurch Call to Action Summit brought together heads of state and government and leaders from the tech sector, like Facebook, Microsoft and Google, to commit to eliminating terrorist and extremist content online, taking into account the fundamental importance of freedom of expression.

Large social media and technology companies, like Facebook and Microsoft, are increasingly taking up their role and responsibility in countering terrorism[26]. A question that remains unanswered is how to involve small tech companies in these initiatives. We have seen potential terrorists move away from the mainstream Internet and social media towards more obscure outlets, like 8chan. These can be very small companies, like the anonymous sharing portal justpaste.it, which was used by IS to distribute information such as names and addresses of military personnel[27]. It seems almost impossible for such a company, run by one person, to get involved in international cooperation – the costs involved alone would likely be too high. The same goes for anonymous fora like 8chan – and it is probable that its owner would not be inclined to work with government authorities in the first place.

Conclusion

It is important not to fall into the trap of a failure of imagination when assessing new threats. In the case of the threat posed by terrorist use of technology, though, participants in the debate seem to fall into the technology trap instead, as Marianne van Leeuwen also concluded in her study on the proportionality of the debate on nuclear terrorism. The focus lies too much on what is technologically possible, and it is often assumed that what is possible will therefore happen. There are, as this contribution tries to show, arguments that counterbalance this alarmist discourse, but they are not often taken into account. The most important elements that need a place on the scales are the motives and incentives that would lead a terrorist or terrorist organisation to use complex technical means for their purpose. They do not always need to do so – they do not require AI suicide bombers because they already have willing human suicide bombers, for instance – and it is not always economical: guns, vans or knives are a lot cheaper than small weaponised drones.

Furthermore, the possibilities to counter terrorist use of technology – both measures already in place and potential countermeasures – are not often weighed by participants in the debate when assessing the threat.

The debate focuses very much on a potential far future, and in doing so risks defeating its own purpose: to create a healthy sense of awareness that enables us to find solutions today for the likely and feasible threats posed by technology that benefits terrorists. Instead, the dissemination of worst-case and unlikely scenarios could increase fear of terrorists and terrorist organisations – and even of the technology itself.

The increased involvement of private partners in countering threats related to terrorist use of technology is a positive development. However, it does make it increasingly difficult to assess whether participants in the debate are objective: there are stakes and agendas, and it is conceivable that private parties like consultants or advisors have an interest in keeping an alarmist narrative alive around the topics for which they sell their services – or in having it downplayed. Particularly on themes that require specialist technical expertise, it is difficult for non-technical participants and audiences in the debate to distinguish a biased or influenced narrative from an evidence-based and neutral one.

Innovative technology speaks to our imagination. And imagination is good: it keeps us from preparing only for threats that are already known. But in deciding which threats to prepare for, it is imperative that we assess the feasibility and proportionality of scenarios alongside their possibility. If we can strike the right balance here, we will be most effective in making this digital age a safe one for all.

Notes

1. For instance in United Nations circles: see Victor Radivinovski, “New technologies, artificial intelligence aid fight against global terrorism”, UN News, 4 Sep 2019, link; and, within the framework of the Global Counter Terrorism Forum (GCTF), the initiative to counter unmanned aerial system threats: N.a., “Countering Unmanned Aerial System Threats”, GCTF, n.d., link.
2. Parker, D. et al., “Challenges for Effective Counterterrorism Communication: Practitioner Insights and Policy Implications for Preventing Radicalization, Disrupting Attack Planning, and Mitigating Terrorist Attacks”, Studies in Conflict & Terrorism, Vol. 42, No. 3 (2019), pp. 264-291.
3. Marc Ambinder, “Al Qaeda's First English Language Magazine Is Here”, The Atlantic, June 30 2010, link; Zoie O’Brien, “‘Lone wolves rise up’ ISIS calls on jihadis to attack US and Europe in revenge for Mosul”, Jul 11 2017, link.
4. Rathore, Shahzeb Ali, “Is the threat of ISIS using CBRN real?”, Counter Terrorist Trends and Analyses, Vol. 8, No. 2 (February 2016), pp. 4-10.
5. Marianne van Leeuwen, “Crying Wolf? Assessing Unconventional Terrorism”, Clingendael Study, (September 2000), link.
6. Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, (February 2018), link.
7. Marcus Roth, “Artificial Intelligence in the Military – An Overview of Capabilities”, Emerj, Feb 22 2019, link.
8. Khari Johnson, “Defense Innovation Board unveils AI ethics principles for the Pentagon”, VentureBeat, Oct 31 2019, link.
9. Carlo Munoz, “Pentagon Task Force Chief Curtails Expectations for Near-term AI Capabilities”, Jane's International Defence Review, October 30 2019, link.
10. Steven Feldstein, “The Global Expansion of AI Surveillance”, Carnegie Endowment for International Peace, Sep 17 2019, link.
11. Stop Autonomous Weapons, “Slaughterbots”, Nov 12 2017, link.
12. See for instance: Jacob Ware, “Terrorist Groups, Artificial Intelligence and Killer Drones”, War on the Rocks, Sep 24 2019, link; used secondarily as the source for the Global Security Pulse, which mentions the threat of ‘swarms of killer drones’ in its first signal.
13. Daveed Gartenstein-Ross, “Terrorists Are Going to Use Artificial Intelligence”, Defense One, May 3 2018, link.
14. Paul Scharre, “Why You Shouldn't Fear ‘Slaughterbots’”, IEEE Spectrum, Dec 22 2017, link.
15. Recently, the 8chan forum was removed from its server, but it reappeared under the name 8kun shortly after.
16. Oliver Holmes and Philip Oltermann, “Halle synagogue was fortified before antisemitic attack”, The Guardian, Oct 11 2019, link.
17. Julia Carrie Wong, “Germany shooting suspect livestreamed attempted attack on synagogue”, The Guardian, Oct 10 2019, link.
18. Patrick Lucas Austin, “What Is 8chan, and How Is It Related to This Weekend's Shootings? Here's What to Know”, Time, Aug 05 2019, link.
19. Miriam Berger, “The attack on a German synagogue highlights the threat posed by do-it-yourself guns”, The Washington Post, Oct 11 2019, link.
20. Lizzie Dearden, “Use of 3D printed guns in German synagogue shooting must act as warning to security services, experts say”, Independent, Oct 11 2019, link.
21. Miriam Berger, “The attack on a German synagogue highlights the threat posed by do-it-yourself guns”, The Washington Post, Oct 11 2019, link.
22. Beau Jackson, “Interview with the ICSR: A 3D Printed Gun Was Not Used in the Halle Terror Attack”, 3D Printing Industry, Oct 18 2019, link.
23. Lizzie Dearden, “Use of 3D printed guns in German synagogue shooting must act as warning to security services, experts say”, Independent, Oct 11 2019, link.
24. The availability of these manuals hit the global news this week, when a federal judge blocked the Trump administration from allowing blueprints for making plastic guns on 3D printers to be posted on the internet: Mihir Zaveri, “Trump administration blocked from allowing blueprints for 3D printed guns to be published online”, Independent, Nov 14 2019, link.
25. N.a., “Die tödliche Gefahr aus dem 3D-Drucker” [“The deadly danger from the 3D printer”], Süddeutsche Zeitung, Oct 14 2019, link.
26. The Global Internet Forum to Counter Terrorism (GIFCT) was established in 2017 by Facebook, Microsoft, Twitter and YouTube and has evolved into a forum that aims to foster collaboration among large and small tech companies, governments, international organisations like the EU and UN, academics, think tanks and civil society. The GIFCT supports Tech Against Terrorism, launched by the United Nations Counter-Terrorism Committee Executive Directorate (UN CTED) in 2017, which supports the technology sector in responding to terrorist use of the internet and works to promote public-private partnerships in this area.
27. Gov. Philip D. Murphy and Lt. Gov. Sheila Y. Oliver, “ISIS: Escalating Threats Against US Military Personnel”, State of New Jersey Office of Homeland Security and Preparedness, March 22 2015, link.