Cybersecurity – Elcano Royal Institute
Cyber Terrorism. Why it exists, why it doesn’t, and why it will
2020-04-17

While the discussion on cyber terrorism research and related government policies has hit a wall in recent years, adversarial tactics to create terror in and through cyberspace are only at their beginning.



For more than two decades, the idea of cyber terrorism has survived in the absence of a concise definition and rigorous case studies that prove its actual existence. Many researchers have moved the ball forward over the years by investigating –among other topics– whether cyber terrorism is a real or imagined threat, which actors can conduct cyber terrorism, what the motivations behind an act of cyber terrorism might be, and whether the logics of terrorism in real space hold true in cyberspace.1 This article is not going to revise this existing knowledge. Instead, it seeks to explain the different government logics on the defensive end for buying into the cyber terrorism narrative, and to outline operational thinking on the offensive end that treats terror as a desired outcome.


The conversation on cyber terrorism began in the late 1990s amidst a wave of high-profile terrorist attacks in the United States, including the bombing of the World Trade Center in 1993 and the Oklahoma City bombing in 1995. By 1997, the US Department of Defense had conducted its first-ever no-notice information warfare exercise to test the cybersecurity of its own systems, and in the same year the Marsh Commission report on critical infrastructure protection put the growing cyber threat landscape on the policy map in Washington.2 Following the simultaneous bombings of the US embassies in Kenya and Tanzania in 1998 and the subsequent rise of al-Qaeda, terrorist attacks in and through cyberspace were seen as a potential future threat vector to the homeland. In October 1999, the Naval Postgraduate School prepared the first and to date most comprehensive study on ‘cyberterror’ for the US Defense Intelligence Agency.

The 1999 study included numerous definitions and statements that outlined the contours of cyber terrorism research. The authors, for example, noted that “terrorist use of information technology in their support activities does not qualify as cyberterrorism.” Similarly, they also excluded script kiddie techniques, including dictionary attacks, spoofed emails, and the bombardment of e-mail inboxes. Overall, the study narrowly defined cyber terrorism as “the unlawful destruction or disruption of digital property to intimidate or coerce governments or societies in the pursuit of goals that are political, religious or ideological.”3 For a study compiled in 1999, this was a well-rounded framework. The only problem was that, in the United States, all cases that could theoretically fit the profile are statutorily considered either acts of cybercrime under the Computer Fraud and Abuse Act (18 U.S.C. 1030), or deemed armed attacks/acts of aggression under international law that would trigger the entire toolbox of US national defense mechanisms. For the last 20 years, cyber terrorism researchers have unsuccessfully tried to carve out their own space that could stand apart from cybercrime, hacktivism, and offensive military cyber operations. It should thus not come as a surprise that, writing in 2012, Jonalan Brickey still had to explain that cyberterrorism could be defined as “the use of cyber to commit terrorism,” or characterized as the “use of cyber capabilities to conduct, enabling, disruptive, and destructive militant operations in cyberspace to create and exploit fear through violence or the threat of violence in the pursuit of political change.”4 Similarly in 2014, Daniel Cohen literally wrote a book chapter on ‘cyber terrorism: case studies’ in which every example is either a case of hacktivism, cybercrime, or a nation-state operation.5

Different government approaches

With cyber terrorism research hitting a wall very early on, some notions of cyber terrorism were nonetheless picked up by governments and agencies alike. 7,000 miles away from Washington D.C., the Japanese government embarked on its mission to combat what it termed ‘cyber terror’ in the year 2000, when a combination of cyber-linked incidents caused by Japanese left-wing extremists, Chinese nationalistic hacktivists, and the Aum Shinrikyo doomsday sect shook the public’s confidence. In December 2000, Tokyo implemented a Special Action Plan which defined cyber terror as “any attacks using information and communication networks and information systems that could have a significant impact on people's lives and socio-economic activities.”8 In practice, this included everything from DDoS attacks and the defacement of websites to the deployment of highly advanced tooling like Stuxnet. Curiously, Japan’s National Police Agency still uses these three categories to officially define cyber terror today.

For the Japanese government, the primary motivation to introduce the term cyber terror was to mobilize government resources and secure the buy-in from critical infrastructure providers to build out the nation’s cybersecurity posture. Cyber terror was thus initially not viewed as a specific form of cybercrime or a distinct area of national defense, but was more akin to a natural hazard that could negatively affect society as a whole. Over the years, the cyber terror narrative naturally crumbled as more precise definitions, distinctions, and insights degraded the terrorism aspect. Notwithstanding these developments, the term is still widely used in Japan and has practical implications for public-private cooperation. For example, the National Police Agency’s ‘Cyber Terrorism Countermeasure Councils’ facilitate public-private partnerships and outreach at the prefecture level through discussions, lectures, and demonstrations. Meanwhile, the ‘Cyber Terrorism Countermeasures Council,’ maintained by the Tokyo Metropolitan Police, serves as a coordinating hub to secure all big events in Japan –including the 2021 Tokyo Olympics and Paralympics.

In the United States, the attacks on 9/11 introduced a host of legislative measures to tackle the threat of cyber terrorism. Standing out from the crowd are the US Patriot Act of 2001 and the Terrorism Risk Insurance Act of 2002. The Patriot Act provided federal law enforcement with new tools to detect and prevent terrorism, including the “authority to intercept wire, oral, and electronic communications relating to computer fraud and abuse offenses” (Section 202), “emergency disclosure of electronic communications to protect life and limb” (Section 212), and “interception of computer trespasser communications” (Section 217).7 According to the Department of Justice’s 2005 Report, Section 202 was used on only two occasions, both of which “occurred in a computer fraud investigation that eventually broadened to include drug trafficking.”8 Meaning, it was never used to tackle a case of cyber terrorism. By contrast, Section 212 was used, according to the DoJ’s 2004 Report, to “successfully respond to a cyberterrorist threat to the South Pole Research Station.”9 While it is indeed true that in May 2003 Romanian hackers intruded into the network of the National Science Foundation's Amundsen-Scott South Pole Station and threatened to “sell the station’s data to another country and tell the world how vulnerable [the systems] are,” the DoJ’s 2004 Report falsely claims that “the hacked computer also controlled the life support systems for the South Pole Station.”10 In fact, this dramatic detail was not included in any of the FBI’s public releases, and according to internal memos, the station’s network was “purposely [less secure] to allow for our scientists at this remotest of locations to exchange data under difficult circumstances,” and had been penetrated two months prior by another hacking group.
When it comes to Section 217 of the Patriot Act, under which “victims of hacking and cyber-terrorism [could] now obtain law enforcement assistance in catching intruders on their systems,” neither the DoJ’s 2004 nor its 2005 report offers any known connection to a cyber terrorism case. Instead, the Sunsets Report merely points out that Section 217 was used in “investigations into hackers’ attempts to compromise military computer systems” and in serious criminal cases, such as “an international conspiracy to use stolen credit cards.”11 Overall, the DoJ’s own reporting shows that even a law specifically designed to combat terrorism was not utilized to investigate a single case of cyber terrorism.

In fact, the closest the DoJ has come to successfully prosecuting an act of cyber terrorism was back in 2016, when 20-year-old Ardit Ferizi –a citizen of Kosovo– was sentenced to 20 years in prison for “accessing a protected computer without authorization and obtaining information in order to provide material support to ISIL.” According to Assistant Attorney General for National Security John Carlin, “this case represents the first time we have seen the very real and dangerous national security cyber threat that results from the combination of terrorism and hacking.”12 But far from conducting an elaborate cyberattack, Ferizi only gained sysadmin-level access to a US company server that hosted the personally identifiable information of tens of thousands of US customers –including military personnel and government officials. Ferizi then proceeded to cull the data down to approximately 1,300 military and government individuals and forwarded it in June 2015 to Junaid Hussain –a former hacktivist and at the time ISIS’ most prolific English-language social media propagandist. While Ferizi was subsequently arrested in Malaysia and extradited to the United States in October 2015, a US drone strike took out Hussain at a petrol station in Raqqa, Syria, in August of that year.13 The incident marked the first publicly known case of an enemy cyber operator being specifically targeted on the kinetic battlefield.

In contrast to the Patriot Act, the Terrorism Risk Insurance Act (TRIA) is a different animal. TRIA became necessary when, following the 9/11 attacks, reinsurers began to exclude terrorist attacks from their coverage, which in turn forced insurance companies to exclude them, which in turn stalled development projects in their tracks due to the unavailability of terrorism risk coverage and uncertainty as to who would pay if another terrorist attack occurred. To ease the jitters, TRIA put in place a three-year Terrorism Insurance Program under which the US government would “share the losses on commercial property and casualty insurance should a foreign terrorist attack occur, with potential recoupment of this loss sharing after the fact.”14 The program has been reauthorized multiple times and is set to expire at the end of 2027. What makes TRIA important for the contextualization of cyber terrorism is that it neither specifically excludes cyber terrorism from coverage nor explicitly includes it. Meaning, the way terrorism is defined under TRIA would make it theoretically applicable to every cyber incident, if the Secretary of the Treasury, the Secretary of State, and the Attorney General of the United States certify the incident to be an act of terrorism or a “violent act or an act that is dangerous to (I) human life; (II) property; or (III) infrastructure” (Section 102). Complicating the matter further, in 2016 the US Department of the Treasury issued a notice clarifying that cyber liabilities in cyber insurance policies are considered “property and casualty insurance” under TRIA. Meaning, cyber terrorism coverage cannot be excluded from any cyber insurance policy. Now, given that many insurers have chosen to exclude acts of war and other items from their cyber insurance policies, cyber terrorism faces a fundamental theoretical conundrum. Let us assume for a moment that we ask our insurance company to draw up a cyber insurance policy that does not cover anything. Nothing at all. Let us also assume that a cyber incident occurs, and the US government classifies it as an act of terrorism. Would our cyber insurance –which does not cover anything– still have to cover the incident under TRIA? If so, is an act of cyber terrorism only cyber terrorism when the US government says it is? Similarly, how would an insurance company correctly price the premium? Would it sell the policy for almost nothing, since it does not cover anything, or would the probability of a cyber incident being certified by the US government as an act of terrorism form the baseline for the premium calculation?

Apart from the US and Japan, numerous other governments have sporadically incorporated the term cyber terrorism into their strategic documents for one reason or another. Austria’s 2013 Cyber Security Strategy, for example, defines cyber terrorism “as a politically motivated crime of state and/or non-state actors.”15 And South Korea’s 2012 Defense White Paper specifically calls out “various forms of cyber terrorism: Hacking, DDoS attacks, denials of service, logic bombs, Trojan horses, Worm viruses, [High-Energy Radio Frequency] guns etc.”16 Looking at the variety of cyber terrorism interpretations out there, it ought to be obvious that, from a strategic point of view, the conversation on cyber terrorism is all over the place. But what if we could introduce some sense of sanity into the discussion by operationalizing fragments of cyber terrorism to clarify what threat vectors we ought to be looking for? Let’s give it a try.

Operational thinking

From an operational point of view, we have to treat an act of cyber terrorism as a black box, similar to how first responders treat any incident affecting their network. Meaning, for our analysis it does not matter who is behind the attack. It could be a non-radicalized individual with no links to any terrorist organization. It could be a hardcore terrorist group that regularly interfaces with cybercriminals. Or it could be a nation state. Equally, because the attacker’s political, religious, and ideological motivations remain largely hidden from us, the “tie to terrorism may not reveal itself for days, weeks, or months, if ever.”17 Therefore our focus has to be on technical attribution, i.e. (a) how did the attackers do it, (b) when did they do it, and (c) did they achieve their objective? Analytically, we are thus trying to discern whether the attack was targeted, coordinated, and persistent, rather than diffuse, opportunistic, and random.
The second item we have to de-conflict is whether the attack actually terrorized the intended target. The reason for this is simple: imagine there is a blackout affecting your entire neighborhood. Then the lights come back on for a moment –everyone is relieved– and then the lights go out again. There are numerous plausible explanations as to why the blackout occurred, and in most cases those affected might never learn the underlying reason after the incident is finally fixed. Now, compare that to a blackout that only affects your apartment. You go into the kitchen and your smart light bulbs do not light up. You look at the bulbs, tinker around with your network, and search for possible fixes online, but you cannot locate the problem. Then suddenly, the lights go on… and switch off again. Which of those two blackouts would terrify you more? Analytically, the only two differences that matter to us are: (a) the distance between the attacker and the intended target –i.e. how personal is it?– and (b) the psychological resonance effect emanating from the attack –i.e. how “terroristic” is it?

Let’s briefly showcase this with the help of one real-life case.

In 2007, then 14-year-old Polish teenager Adam Dabrowski was struck by a combination of curiosity and evil ingenuity that led him to conduct nightly break-ins into the tram depot of the city of Lodz. His objective: figuring out how the tram network worked and whether he could control the trams remotely. Combining months of online research with his electronics classes at school, Adam succeeded sometime in early 2008 in converting an old TV remote into a device that exploited a flaw in Lodz’s infrared-based signaling system. Under the right circumstances –as Adam figured out– it was possible to capture the track-switching signal at one junction point and play it back at another junction point to get the same result.18 According to Miroslaw Micor, spokesman for the Lodz police, Adam subsequently treated the Lodz tram network “like any other schoolboy might a giant train set, but it was lucky nobody was killed. Four trams were derailed, and others had to make emergency stops that left passengers hurt. He clearly did not think about the consequences of his actions.”19
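The flaw Adam exploited is a classic replay vulnerability: the switching signal carried no authentication and no freshness (no nonce or timestamp), so a transmission recorded at one junction remained valid at any other. A minimal Python sketch of that logic follows; all class and variable names are illustrative and not taken from the actual Lodz system, which was infrared hardware rather than software.

```python
# Sketch of an unauthenticated signaling protocol vulnerable to replay.
# Illustrative only: names and codes are invented for this example.

class Junction:
    """A track junction that switches on receipt of a known command code."""

    def __init__(self, switch_code: bytes):
        self._switch_code = switch_code
        self.position = "straight"

    def transmit_switch_signal(self) -> bytes:
        # The legitimate signal carries no nonce or timestamp, so any
        # recording of it can be reused indefinitely, anywhere.
        return self._switch_code

    def receive(self, signal: bytes) -> None:
        # The receiver only checks that the code matches; it cannot
        # distinguish a live transmitter from a replayed recording.
        if signal == self._switch_code:
            self.position = (
                "diverging" if self.position == "straight" else "straight"
            )

# Both junctions accept the same unauthenticated code.
shared_code = b"\x2a\x4f"
junction_a = Junction(shared_code)
junction_b = Junction(shared_code)

# The attacker records the signal at junction A...
captured = junction_a.transmit_switch_signal()

# ...and replays it at junction B, flipping the points there.
junction_b.receive(captured)
print(junction_b.position)
```

The standard defense is to make each command single-use, e.g. by binding it to a challenge or monotonically increasing counter that the receiver verifies, so that a captured signal is rejected on replay.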

Now imagine that Adam had continued his spree and had never been caught. Would this qualify as a case of cyber terrorism? From a strategic point of view it ticks several boxes. First, Adam succeeded in collecting actionable intelligence on a critical infrastructure target. Second, Adam was persistent enough to build a targeted exploit over months of dedicated work. And third, Adam did cause bodily harm by disrupting tram operations with his device. But what we are strategically lacking is any information on the attacker’s motivation, his objective, and whether he is potentially connected to a terrorist cell. Meaning, Adam’s campaign only becomes terrorism in the case of self-attribution, i.e. if Adam leaves behind clues that explain his reasoning or a tape that shows him declaring his allegiance to a terrorist group.

Operationally, we do not need any of that information. Technically, Adam’s campaign fulfills the notion of targeted, coordinated, and persistent. But on the second pillar it notably falls short. In terms of how personal the attack was, our investigation would have to figure out whether there were any common targets in the trams that were derailed and whether the campaign had any major repercussions for the company that maintains the Lodz tram network. In both instances we will not find much. Similarly, given the low severity and frequency of the incidents, the fallout radius of the attack’s psychological effect will be fairly small. To turn Adam –operationally– into a terrorist, he would have to do one of two things. Either build more devices and get more people involved to increase the frequency and spread of the derailments –the downside of which is an expansion of trust and information sharing, and a decline in absolute control (i.e. outsourcing). Or walk in the opposite direction by targeting specific trams at specific times to terrorize specific individuals –which would necessitate an information-gathering operation aimed at mapping real-time target locations, habitual movement patterns, and potential insights into a target’s social interactions (i.e. espionage).

In essence, by closing the proximity to the desired target and persistently engaging it over time, the modified campaign becomes the vehicle for the subsequent creation of terror –even though the individual attacks stay the same. Thus, rather than looking at the severity of a cyberattack (e.g. physical destruction and disruption), investigating attacker motivations, or questioning the feasibility of terrorism in cyberspace altogether, analyzing adversarial campaign tactics and maneuvering behavior will provide a much richer framework to explore, replicate, and defend against the terror component. Indeed, valuable lessons have yet to be learned from issues as disparate as cyber stalking and mental recovery after a cyber incident, to hacking back in the civilian realm and converging military cyber operations with information warfare campaigns.20


In sum, cyber terrorism is probably best viewed as an operational tactic aimed at a distinct psychological outcome rather than a field of research that connects the cyber domain at the hip to terrorism in real space. Notably, while cyber terrorism research and policy have hit somewhat of a deadlock in recent years, leveraging tactical approaches to create terror in and through cyberspace is only at its beginning.

Stefan Soesanto
Senior Researcher, Cyber Defense Team, ETH Zurich | @iiyonite

1 See: Gabriel Weimann, ‘Cyberterrorism – How real is the threat’, United States Institute of Peace, Special Report 119, December 2004; Zahri Yunos and Sharifuddin Sulaman, ‘Understanding Cyber Terrorism from Motivational Perspectives’, Journal of Information Warfare, Vol. 16, No. 4 (Fall 2017), pp. 1-13; Maura Conway, ‘Reality Bytes: Cyberterrorism and Terrorist ‘Use’ of the Internet’, First Monday, Vol. 7, No. 11, 4 November 2002.

3 Defense Intelligence Agency, ‘Cyberterror. Prospects and Implications’, pp. 10 and 9, respectively.

4 Jonalan Brickey, ‘Defining Cyberterrorism: Capturing a Broad Range of Activities in Cyberspace’, CTC Sentinel, August 2012, Vol. 5, No. 8, p. 6.

5 Daniel Cohen, ‘Cyber terrorism: Case studies’, in Cyber Terrorism Investigator’s Handbook, Chapter 13.

7 Public Law 107–56, October 26, 2001.

8 Department of Justice, ‘USA Patriot Act. Sunsets Report’, April 2005, p. 6.

9 Department of Justice, ‘Report from the Field: The USA Patriot Act at Work’, July 2004, p. 33.

10 ‘Report from the Field’, p. 27.

11 ‘USA Patriot Act. Sunsets Report’, p. 48.

12 Department of Justice, ‘ISIL-Linked Kosovo Hacker Sentenced to 20 Years in Prison’, Justice News, September 26, 2016.

13 Frank Gardner, ‘UK jihadist Junaid Hussain killed in Syria drone strike says US’, BBC News, August 27, 2015.

14 Congressional Research Service, ‘Terrorism Risk Insurance. Overview and Issue Analysis’, December 27, 2019, Summary.

15 Federal Chancellery, ‘Austria Cyber Security Strategy’, 2013, p. 21.

16 Ministry of National Defense, ‘Defense White Paper’, 2012, p. 10.

17 ISE Bloggers, ‘Unpacking Cyber Terrorism’, May 31, 2016.

18 John Bull, ‘You Hacked: Cyber-Security and the Railways’, London Reconnections, May 12, 2017.

19 John Leyden, ‘Polish teen derails tram after hacking tram network’, The Register, January 11, 2008.

20 Ellen Nakashima, ‘U.S. Cybercom contemplates information warfare to counter Russian interference in 2020 election’, The Washington Post, December 25, 2019.

Bias and Misperception in Cyberspace
2020-03-17

With cyber operations serving as an instrument of foreign policy, it is fair to posit that cognitive factors that account for behavior in the physical domain are equally applicable to cyberspace.



A psychological turn

Our understanding of interstate behavior in cyberspace over the past decade rests firmly on systemic and technological attributes as determinants of strategic choices in this increasingly relevant domain. Scholars and policy specialists alike invoke established concepts such as the offense-defense balance, coercion, and signaling to account for state-associated cyber operations. Yet despite technological advancements, cyber operations continue to deliver limited strategic outcomes. This is paradoxical when accelerating investments in cyber capabilities are contrasted against their lackluster performance thus far. Consequently, one may argue that attempts to frame strategic choices as a function of material and strategic realities hinder rather than enlighten efforts to comprehend state behavior in cyberspace. This, however, is not necessarily the case.

Recent cybersecurity scholarship acknowledges the importance of micro-level attributes. Whereas emphasis is commonly placed on the balance of power, dependence, and technological expertise, it is becoming apparent that cognition plays a crucial role in the decision-making processes that influence strategic choices. This psychological “turn” is not a novel occurrence, as associated disciplines such as political science and international relations have long recognized its importance. With cyber operations serving as an instrument of foreign policy, it is fair to posit that cognitive factors that account for behavior in the physical domain are equally applicable to cyberspace. This ARI demonstrates as much by discussing recent scholarship and how it affects the stability of cyberspace. In doing so, it surfaces the importance of taking a simultaneous top-down and bottom-up approach to evaluating state behavior in this man-made domain.

Analysis: uncertainty and cyberspace

One may argue that strategic choices made by states are an attempt to settle questions of uncertainty. Whether this pertains to the credibility of an emergent threat or adversarial resolve, uncertainty is a fundamental aspect of the international system. Yet uncertainty is not a monolithic construct. As argued by Rathbun1, it emerges either due to the lack of information or the ambiguity of the available information.

Concerning the former, a lack of information is resolved either through a test of arms that favors pre-emption as a means of settling questions of power balances, or through information collected via espionage to gain an advantage over another. As Rovner2 notes, this situation manifests itself in cyberspace as either military or intelligence contests. Whereas military contests suggest the need to dominate adversaries, intelligence contests are characterized by persistent competition and restrained action. Empirically, cyber operations over the past two decades are better represented by the latter, given the muted effects and restraint exercised by capable actors. This, however, does not account for the emergence of strategies such as persistent engagement, which limits opportunities for further information collection due to its emphasis on pre-emption and the disruption of adversarial operations. This calls into question whether uncertainty in cyberspace is indeed due to a lack of information available to decision-makers.

As Dunn-Cavelty3 concisely argues, threat perception in cyberspace is due to the unknowability, vulnerability, and inevitability that characterize the domain. Given its complexity, it is difficult to guarantee the security of cyberspace and its underlying components. Furthermore, recognizing the second- to nth-order effects of disruption is equally difficult. This complexity similarly gives rise to vulnerabilities within crucial systems that are not identified before deployment and that, in turn, grant malicious actors the opportunity for exploitation. With cyberspace serving as an instrument of state power, adversarial motivations to resort to cyber operations become apparent and seem to shift the advantage in their favor. But this may not necessarily be the case.

Although low-hanging vulnerabilities are easily exploitable, they are likely to result in limited effects. Recent studies highlight the substantial material and organizational requirements necessary to develop and execute operations that adversely affect the strategic calculus of adversaries.4 Relatedly, the apparent vulnerability of critical systems may also be used for deceptive purposes, luring adversaries into a sense of success when in fact defenders are in a position to observe and develop the necessary countermeasures against future operations.5 Finally, the attribution problem often associated with cyber operations is not as insurmountable when technical evidence is analyzed alongside the strategic environment in which these operations occur.6 Consequently, the amount of information available to decision-makers is substantial and continues to grow with advances in technology and forensic processes. This is not to say, however, that the information is unambiguous.

Although states may be aware of adversarial operations on their systems, the underlying intent remains obscured. Buchanan7, for instance, argues that the discovery of malicious code does not provide a clear indicator as to whether it is part of an on-going espionage operation or a precedent for future disruptive operations. The opacity of intent is further complicated by the criticality of the affected systems. Gartzke and Lindsay8, for instance, argue that the appearance of malicious code in sensitive systems such as Nuclear Command, Control, and Communications (NC3) infrastructure increases the likelihood of escalation. Aggravating matters further, forensic analysis is limited in its ability to identify the individual or organization directing the execution of specific cyber operations.9 As seen in past cases, the line between state and non-state actors in cyberspace is easily blurred, increasing the difficulty of assigning responsibility. Consequently, uncertainty in cyberspace is more a question of ambiguity than of scarcity.

Heuristics driving strategic choices

As Rathbun argues, decision-makers overcome informational ambiguity through the use of cognitive heuristics. These serve to simplify complex environments, allowing decision-makers to achieve closure as efficiently and quickly as possible. From the perspective of operating in an ambiguous international system, heuristics trigger two cognitive processes relevant to understanding strategic choices: first, setting and influencing expectations; and second, determining the plausibility of certain propositions.10 The former is crucial in determining how state actors are expected to behave, while the latter limits the range of plausible explanations for deviant behavior and of appropriate responses to it.

Yet despite their efficiency, heuristics are constrained by the extent to which they match empirical reality. Otherwise, misperception and inappropriate strategic choices are likely to follow, as observed in events such as the Yom Kippur War, in which Israeli assessments of Egyptian capabilities and intent were based on invalid assumptions. For cyber operations, similar cognitive mechanisms appear to be at work.

Thus far, our understanding of the extent to which elite decision-making is a function of heuristics is informed by a growing body of scholarship employing experimental designs and wargames. These studies reveal the extent to which individuals resort to analogical reasoning, enemy images, and schemas when deciding on how best to utilize and respond to cyber operations.

In her longitudinal study of wargames conducted at the United States Naval War College, Schneider11 illustrates how analogical reasoning influences and constrains the use of cyber operations during periods of conflict. Specifically, the escalatory potential of cyber operations was deemed comparable to that of nuclear weapons. In effect, participants were hesitant to authorize them before the onset of militarized conflict despite having the balance of power in their favor. Even in cases where conflict had already been militarized, hesitation persisted, with participants only willing to authorize operations that are easily recalled or those that are unlikely to be attributed.

Given the novelty of cyber operations and the lack of real-world cases with significant and extended physical effects, it is unsurprising that decision-makers are drawing parallels between cyber operations and nuclear weapons. The persistence of narratives that emphasize the destabilizing potential of malicious behavior in cyberspace, in conjunction with a continued shortage of expertise in this domain, is likely to motivate individuals to draw analogies between these events and those with seemingly comparable features. For instance, on-going research at the University of Haifa12 demonstrates the equivalency between cyber operations and terrorist activities based on the comparable emotional responses generated by these events in a laboratory environment. Given our understanding that emotion precedes cognition, it is plausible that two distinct events that generate a similar emotional cue may lead to related, if not similar, strategic responses (i.e. retaliatory action employing a kinetic response) that may or may not be suitable for the given situation.

Besides analogies, enemy images are also found to be crucial in forming attributional judgments and evaluations of intent. These are cognitive constructs in which certain actors are believed to behave consistently in bad faith, built over time through repeated exposure to malicious behavior directed towards another, such as repeated aggression or threats. Given the prevalence of rivalries among the cyber powers, enemy images likely play a crucial role in attributing and evaluating intent. Experimentally, this is demonstrated in a series of survey experiments in which participants are informed of an incident involving a known adversary. Responsibility for the incident, it seems, is consistently placed on the adversary irrespective of available information.13

While this tendency to resort to beliefs is unsurprising, it signals a number of troubling possibilities. First, in attempting to avoid cognitive dissonance, elites may be susceptible to false flag operations as a means of either redirecting blame or further aggravating a situation to the benefit of a third party. As with analogical reasoning, this is compounded by a lack of domain expertise. Second, adherence to enemy images increases escalatory risk. While escalation has yet to take place in response to a cyber operation, the absence of observable cases does not guarantee its impossibility. As noted by Gartzke and Lindsay, operations that affect crucial systems in the context of an established rivalry increase the likelihood of escalatory spirals. Third, while it is possible to alter beliefs, the dissonance required to achieve this is rare in the international environment. Consequently, advancements in forensic capabilities that generate more information are, on their own, unlikely to dislodge firmly embedded beliefs of adversarial malintent.

Finally, cognitive schemas offer elites a pre-established mechanism for selecting an appropriate response to a cybersecurity incident. Schemas are cognitive constructs that “represent knowledge about a concept or a type of stimulus, including its attributes and relations among those attributes”.14 Cybersecurity incidents, given their underlying geopolitical context, may contain specific stimuli that trigger schemas initially formed in response to non-cyber interactions.

Cross-national survey experiments conducted by Valeriano and Jensen15 suggest that strategic preferences vary across groups. In their study involving participants from the United States, Russia, and Israel, responses to cybersecurity incidents appear to have unique national features. This seems to confirm earlier research noting specific “national ways” of cyber conflict.16 Relatedly, an ongoing series of cross-national wargames conducted by Gomez and Whyte17 illustrates the application in cyberspace of strategic solutions previously employed in other domains. For instance, groups diverge on the appropriate approach to evaluating the culpability of malicious actors. Participants from the United States drew parallels between a state hosting a terrorist group within its borders and its responsibility to address this security threat. In contrast, those from Taiwan and the Philippines formulated their assessments by extrapolating beyond the geographic descriptors offered by the technical evidence.

These observations highlight the challenges associated with schematically driven decision-making. While the existence of a national preference ensures a ready response to conflict, the underlying assumptions necessary for its successful use may not hold true for cyberspace. Moreover, cyber operations interact with the saliency of underlying issues. At best, actions in cyberspace are perceived as a continuation of existing rivalry or adversarial behavior. At worst, they may trigger a security dilemma due to the appearance of an until-then unused instrument.18

The likelihood of misperception also increases with the introduction of unsubstantiated assumptions. As mentioned above, schemas represent knowledge of a given stimulus (i.e. a specific threat) and its corresponding attributes. The absence of specific attributes (e.g. intent) is thought to be unproblematic for schemas, as these are simply assumed to exist to avoid cognitive dissonance on the part of a decision-maker. Real-world cases such as the cyber operation targeting the Pyeongchang Winter Olympics demonstrate this tendency. Barring additional evidence, it was initially assumed to have originated from North Korea given the underlying strategic environment on the Korean peninsula.

Conclusions: grappling with bias and misperception

While research on the effects of micro-level attributes on strategic choices in cyberspace continues, the above observations hint at the importance of moving beyond systemic and technological aspects of interstate cyber interactions. Although escalation resulting from cyber operations has not occurred, its absence ought not to invite complacency on our part. As cyberspace serves as a cornerstone of state power, its exploitation for strategic ends is likely to continue. Moreover, with the decision to exercise power ultimately resting in the hands of an individual or small groups, a better understanding of their cognitive processes is necessary if stability in cyberspace is to be achieved.

Given that the cognitive mechanisms discussed above operate below the level of consciousness, it would be difficult to explicitly restrict their use among decision-makers. Instead, increasing domain expertise among elites is an appropriate step towards mitigating biased reasoning. In the studies mentioned, domain expertise appears to move decision-making closer to the normative expectations of rational choice. That is to say, while heuristic use persists, knowledge aligns it with empirical realities. This finding is consistent with the body of research in cognitive psychology confirming that individuals with greater domain expertise are those best suited to employ heuristics effectively.

The need to better educate elites is not a novel idea.19 However, more effort is necessary to increase domain expertise among elites given the use of cyber operations as an instrument of foreign policy. Alongside ongoing efforts to develop cyber norms, taking this step is a move in the right direction towards maintaining the stability of cyberspace and warding off unnecessary disputes born of misperception and biased reasoning.

Miguel Alberto Gomez
Center for Security Studies, ETH Zurich | @mgomez85

1 Brian C. Rathbun (2007), “Uncertain about uncertainty: understanding the multiple meanings of a crucial concept in international relations theory”, International Studies Quarterly, 51(3), 533-557.

2 Joshua Rovner (2019), “Cyber War as an Intelligence Contest”, War on the Rocks, September 16.

3 Myriam Dunn Cavelty (2013), “From cyber-bombs to political fallout: Threat representations with an impact in the cyber-security discourse”, International Studies Review, 15(1), 105-122.

4 Allyson Pytlak and George E. Mitchell (2016), “Power, rivalry and cyber conflict: an empirical analysis”, in Karsten Friis and Jens Ringsmose (Eds.), Conflict in Cyber Space, Routledge, 81-98. Rebecca Slayton (2017), “What is the cyber offense-defense balance? Conceptions, causes, and assessment”, International Security, 41(3), 72-109.

5 Erik Gartzke and Jon R. Lindsay (2015), “Weaving tangled webs: offense, defense, and deception in cyberspace”, Security Studies, 24(2), 316-348.

6 Thomas Rid and Ben Buchanan (2015), “Attributing cyber-attacks”, Journal of Strategic Studies, 38(1-2), 4-37.

7 Ben Buchanan (2016), The cybersecurity dilemma: Hacking, trust, and fear between nations, Oxford University Press.

8 Erik Gartzke and Jon R. Lindsay (2017), “Thermonuclear cyberwar”, Journal of Cybersecurity, 3(1), 37-48.

9 Herbert Lin (2016), “Attribution of malicious cyber incidents: from soup to nuts”, Journal of International Affairs, 70(1), 75-137.

10 Robert Jervis (2009), “Understanding beliefs and threat inflation”, in Trevor Thrall (Ed.), American Foreign Policy and The Politics of Fear, Routledge, 34-57.

11 Jacquelyn Schneider (2017), “Cyber and crisis escalation: insights from wargaming”, USASOC Futures Forum, March.

12 Michael L. Gross, Daphna Canetti and Dana R. Vashdi (2016), “The psychological effects of cyber terrorism”, Bulletin of the Atomic Scientists, 72(5), 284-291.

13 Miguel Alberto Gomez (2019), “Sound the alarm! Updating beliefs and degradative cyber operations”, European Journal of International Security, 4(2), 190-208.

14 Deborah Welch Larson (1994), “The role of belief systems and schemas in foreign policy decision-making”, Political Psychology, 17-33.

15 Brandon Valeriano and Benjamin Jensen (2019), “What Do We Know about Cyber Escalation and Conflict? Observations from Cross-National Surveys”, Atlantic Council, November.

16 Brandon Valeriano, Benjamin Jensen and Ryan C. Maness (2018), Cyber Strategy: The Evolving Character of Power and Coercion, Oxford University Press.

17 Miguel Alberto Gomez and Christopher Whyte (forthcoming), Cyber Uncertainties: Observations from Cross-National Wargames, in Myriam Dunn-Cavelty and Andreas Wenger (Eds.), Cyber Security Politics: Dealing with Socio-Technological Uncertainty and Political Fragmentation, Routledge.

18 Ryan C. Maness and Brandon Valeriano (2016), “The impact of cyber conflict on international interactions”, Armed Forces & Society, 42(2), 301-323.

19 Lene Hansen and Helen Nissenbaum (2009), “Digital disaster, cyber security, and the Copenhagen School”, International Studies Quarterly, 53(4), 1155-1175.

<![CDATA[ The Future of Values in Cyber Security Strategies ]]> 2020-02-27T05:21:49Z

This paper addresses the challenges that have arisen from an overly technical focus on cyber security that has failed to consider the application of value sets in strategy creation.


While national cyber security strategies have proliferated worldwide in the past decade, most have been overwhelmingly focused on resilience at the expense of political values. This paper addresses the challenges that have arisen from an overly technical focus on cyber security that has failed to consider the application of value sets in strategy creation.


The efforts and public expenditures committed to the pursuit of cyber security in the past decade are no doubt vast. In numerous nations these efforts have included the creation of several iterations of national cyber security strategy to guide public efforts. Despite such investment of public monies as well as intellectual and policy capital, however, it would be difficult to claim that the state of cyber security is much improved.

Indeed, even subject experts and seasoned professionals within cyber security will quickly be overwhelmed by the available data reported by numerous outlets on the pernicious and ever-growing volume of cybercrime worldwide. In the face of real-world evidence of a problem that is barely improving, a hard question must be asked: do our national cyber security strategies carry flaws?

This author contends that this is the case, arguing that our present strategies have been overly focused on technological matters and on establishing national resilience, to the detriment of the values establishment that would aid the development of normative behaviour. Given the scale of the cyber security challenges faced by nations, and the lack of international consensus to temper the geopolitical future of cyberspace, a concerted drive in values assertion is a missing element of cyber security strategies that should be taken seriously.


There are of course dozens of national cyber security strategies to use as examples. For the purposes of this article, those from the UK and the Netherlands will suffice. The UK is currently finishing its third cyber strategy iteration, due to complete in 2021. The strategy declared as its aim a vision of a UK in 2021 ‘secure and resilient to cyber threats, prosperous and confident in the digital world.’1 This is built on with a model of Defend, Deter and Develop. The logic of this triptych is to establish resilience through Defend, Deter bad actors by becoming a hardened target, and Develop indigenous talent through numerous schemes to plug a digital skills gap. Immediately the critical reader will note that values development and normative standards are not primary concerns represented in this strategy.

Others will contest this perspective, arguing that the strategy’s chapter 8 on International Action does indeed address the need for normative development. To take this at face value would be naïve, however: UK efforts have remained overwhelmingly focused on technical resilience measures –most notably the establishment of the UK’s National Cyber Security Centre– while international efforts have stalled in the UN Group of Governmental Experts (GGE) on cyber. The UN GGE has been in a state of de facto paralysis, with no consensus being reached; any British efforts to shape values in this regard have clearly been both under-resourced and unsuccessful in achieving their aims and objectives.

The Dutch National Cyber Security Agenda, meanwhile, is much more forthright in its declared commitment to a values-driven strategy, listing ‘contributing to international peace and stability in the digital domain’ as the second of seven priority ambitions in the Dutch agenda.2 A core difference between the British and Dutch approaches is the declared centrality, in the latter, of interweaving values throughout the entire approach to cyber security. This comparison notwithstanding, it must be noted that given the broader geopolitical stalemate in norms development, even the very strong Dutch approach to a values-driven strategy has its limits without both broad allied adoption and concerted action to drive values forward.

The data

One could be forgiven for pointing to well-written national-level strategies and believing that those efforts have proven sufficient. In the case of notable incidents –such as the WannaCry and NotPetya global attacks in 2017– national resilience models did indeed prove themselves highly capable. Despite this, however, serious consideration must be paid to the fact that malevolent behaviour, whether criminally motivated or state-sponsored, continues to rise.

In establishing this, readers can easily be pointed to a bewildering array of statistical outlets to illustrate the continuing growth of cyber insecurity. This article will offer two: Accenture’s Annual Cost of Cybercrime Study and the British Government’s Cyber Security Breaches Survey. Comparing 2017 to 2018, Accenture reported an 11% increase in security breaches among their industry respondents, together with a 12% increase in the financial cost of cybercrime across those same years; in the five years from 2013 to 2018, these are cumulative –and staggering– increases of 67% and 72% respectively.3

Meanwhile, in the UK the Department for Digital, Culture, Media and Sport (DCMS) provided intriguing statistics regarding the breadth of victims across UK business. In its Cyber Security Breaches Survey, fully 43% of UK businesses –regardless of industry, size or turnover– had detected an active cyber-attack within the previous 12 months. Added to that, 75% of respondents had received malicious phishing emails.4

Such numbers are compelling in revealing a picture of cyberspace as an environment rife with criminality. Fundamentally, it is clear that for all the efforts at establishing resilience at the national level, the problem posed by cybercrime has not only not been tamed but continues to grow at alarming rates in volume and cost. All of this raises an intractable problem: national cyber resilience is a necessary but not sufficient condition for achieving a safe and secure cyberspace.

The geopolitical picture

While the heady days of post-Cold War liberal optimism are certainly over, it is necessary to revisit them briefly in order to reveal how we have reached today’s levels of insecurity. The rise of cyberspace was grounded fundamentally in the maturation of three key technologies: the personal computer, the shared protocols for information in TCP/IP and DNS, and the World Wide Web. The explosion of the internet was seen as inherently liberating, with no perceived need to establish any form of true political governance. One need only look to Barlow’s 1996 ‘declaration’ on the independence of cyberspace to see how some held a romanticised, apolitical view of the Internet’s potential.5

Instead, governance was taken forward primarily as a technical practice; it was, in effect, politically agnostic for a time. Yet this was not down to any inherent libertarian idealism, but rather the result of historical coincidence: technological maturation coincided with the end of the Cold War and the victory of the liberal political order. As this author has argued elsewhere, political governance was not established over cyberspace partly because, without any political challenger, it was simply believed not to need it.6 The passage of events, however –with our nations and the lives of citizens now intimately invested in these technologies– has since made the issue a serious security concern, one that strategies and legislation have scrambled to catch up with. This has placed the liberal order at a distinct disadvantage given the evolution of the geopolitical landscape.

Arguments about the decline of the Western liberal order have long persisted. Mearsheimer believed the liberal order’s hegemony to be ‘destined to fail’,7 with international realities now revealing ‘cracks in the liberal edifice’ that need to be tempered.8 Following in the same vein, Luce notes a fundamental ‘retreat’ of Western liberalism, arguing that the West is facing a crisis that ‘is real, structural, and likely to persist.’9 Emmott also follows suit in stating that ‘a battle of ideas’ is underway in the West10 that parallels the positions established by Mearsheimer and Luce.

The growing, indeed burgeoning, literature dissecting the fate of the liberal order, combined with the realities of challenges such as fake news, serves to illustrate a key problem that has yet to be recognised in Western policy circles: a fundamental crisis of confidence throughout the liberal West, a lack of faith in fundamental values that has implicitly undermined the ability of Western nations to drive liberal values into the governance of cyberspace. If the liberal hegemony is under strain and challenge, as Mearsheimer suggests, then others have clearly moved into the battleground in cyberspace.

Cyber sovereignty

The rise of the challenge from cyber sovereignty has created a strong need for values assertion in future cyber security strategies. With the fundamental logic of cyber sovereignty centred on each state retaining absolute authority over its cyberspace, those who call for this position are arguing against the establishment of values and normative behaviour. Chinese President Xi Jinping’s remarks at the 2015 World Internet Conference –stating that ‘We should respect the right of individual countries to independently choose their own path of cyber development…’11– serve as the clearest indication of the challenge to any Western view of a free and open Internet.

Cyber sovereignty represents, essentially, a clear and present challenge to any notion of a values-based, open cyberspace. It holds that state sovereignty should be absolute, requiring not only non-interference in a state’s internal activities in the digital space, but complete non-judgement of any state’s choices about how it manages its cyberspace. Such a position is a key reason for the lack of international consensus on how to govern cyberspace in the future, and for why current national-level cyber strategies find themselves unable to achieve their aims. So long as the international vision for the future of cyberspace itself is contested, any strategy aiming only at resilience will remain purely reactive.

A key battleground in how sovereignty is being asserted across cyberspace lies in the establishment of data localisation laws worldwide. Given the pernicious challenges of cybercrime and attribution, states rightly recognise that in order to ensure law and order, as well as the pursuit of justice following a crime, they must be able to gather digital evidence. This very reasonable position is, however, subject to highly varied interpretations of how sovereignty applies.

The Vietnamese 2018 Cybersecurity Law mandates not only that big tech firms –such as Facebook– retain their data within Vietnam, but also that they maintain a staffed office presence within the country. The British Investigatory Powers Act 2016 also interprets sovereignty based on the location of housed data, compelling all Internet Service Providers to retain records for 12 months. This is intended to enable retrospective collection and analysis under warrant if needed for an investigation.

Russia, meanwhile, takes a far more expansive view of sovereignty than most with its data localisation law, Federal Law No. 242-FZ, which requires all providers and managers of data to repatriate data on Russian nationals, anywhere in the world, to storage in Russia itself. This law in effect declares digital sovereignty not over data but over any individual Russian, regardless of where they are in the world. Disputes over the ethics of the law led to LinkedIn being banned from operating in Russia in 2016 due to its unwillingness to comply with the regulation. The Russian regulator, Roskomnadzor, glibly stated that the company’s refusal to comply confirmed ‘their disinterest in working on the Russian market.’12

Building values-driven strategies

The hope by this stage has been to establish that the vision of a free and open internet, carrying liberal values in its core DNA, is under severe political challenge. Those early post-Cold War years –when a free and open internet seemed the epitome of the liberating promise of technology– now appear utopian when set against the realities of today’s cyberspace.

The onslaught of cybercrime and cyber espionage, as well as the experimental actions of states themselves with cyber warfare methods, have all served to destabilise a space shared by all. Cyber insecurity has become the norm, and former President Barack Obama’s view that cyberspace had become a sort of ‘wild, wild west’13 remains not only true, but all available data indicates that the problems have only worsened since by volume, impact, and cost.

Not only have criminal, espionage and warfare activities destabilised cyberspace, they have also normalised insecurity in a way that poses a fundamental political challenge to the values underlying the environment. Such rife and uncontrolled conditions raise a fundamental question: ‘why should this space be trusted?’. The pervasiveness of such insecurity, combined with emerging norms of state-sponsored bad behaviour and wanton criminality, places a firm question mark against the ability of foundations built on liberal values to deliver a safe and secure space for users.

In turn, this has created the opportunity for a political challenge to emerge, which it duly has in the form of Cyber Sovereignty. This view is gaining traction less because of its inherent political appeal than because it presents a political case for how to secure the internet based on national authority. Such a case has not been built by Western liberal nations, who, as demonstrated through even just the two example national strategies above, enjoy little practical unity among allies in either rhetoric or action.

It is clear that the current construct behind national cyber security strategies carries a significant flaw, which is an over-dominance of technical concerns on resilience, and a lack of focus and action on the place of values. For any strategy to not only begin protecting the nation it serves, but also to actively contribute towards stabilising the geopolitical conditions of cyberspace itself, values assertion must be considered.

To achieve this, two principles should be followed in order to bridge the gap between rhetoric and action on values. First, the next iteration of each Western liberal nation’s cyber security strategy should explicitly acknowledge that a contrarian political position to the liberal vision of a free and open internet exists. Cyber Sovereignty should be called out as a strategic competitor, to be resisted by all who wish to preserve and nurture cyberspace in its current form. Cyber Sovereignty has, in effect, been allowed to gain political traction due to the lack of moral and political consensus among and within Western liberal nations. A first step in aligning an alliance position should begin with this explicit declaration, which will serve to focus diplomatic efforts accordingly.

Secondly, bad behaviour in cyberspace should be called out. Much as human rights organisations routinely highlight abuses and transgressions for public dialogue, behaviour that undermines desired norms, standards and values should be exposed. Joel Rogers de Waal puts this best in saying that in any kind of information war the West should seek to ‘weaponise embarrassment’ as much as possible. De Waal sagely establishes that ‘if liberal democracy holds unique vulnerabilities to the social media age, then so does autocracy in its existential fear of embarrassment.’14

De Waal’s argument can be seen as proven true in the recent Iranian reaction to the shooting down of Ukrainian International Airlines Flight PS752 in January 2020, where Iran originally claimed the incident was an accident before confessing that it had indeed shot the airliner down. This led to uproar within Iran itself, as well as significant embarrassment before, and condemnation by, the international community. Such an example serves to highlight words from an unexpected source: Sacha Baron Cohen, normally known for his comedic work, recently gave an impassioned speech about safety and freedom on social media, making a very prescient point about the difference between democracy and autocracy: ‘Democracy, which depends on shared truths, is in retreat, and autocracy, which depends on shared lies, is on the march.’15 Recent events in Iran reveal how autocracies, the driving force behind cyber sovereignty, seek to create a shared lie, and what can happen when that lie is exposed and a shared truth is established.

The same hope must be held for driving values in cyberspace: the establishment of shared truths across an environment, to prevent these technologies from becoming tools of oppression concealed behind an absolute interpretation of national sovereignty. A key measure in driving the Western vision of maintaining and improving a free and open internet will lie fundamentally in calling out bad behaviour at every possible turn.


To conclude, this article has established a flaw at the heart of national efforts to create effective national cyber security strategies. While very successful in many regards, those strategies have become overly focused on measures of technological resilience, making them inherently reactive. It has also been established that the geopolitical environment in which these strategies operate puts a reactive strategy at a severe disadvantage, particularly in cyberspace where, as voluminous data sets reveal, cyber insecurity continues to grow at alarming rates.

National cyber security strategy constructs in their present form may deliver good levels of national resilience, but they have proven unsuccessful in shaping international dynamics in ways desirable to liberal Western nations. A core omission has been the place of values: while rhetoric declaring support for values can be found in many documents, what is missing is tangible action and alignment with allies to operationalise values into a collective allied moral position on the future of cyberspace.

By calling out Cyber Sovereignty as a challenger in future strategies, and taking concerted, allied, and regular actions to call out bad behaviour, the liberal West will begin to challenge actors who wish to shape cyberspace in their image, for autocratic purposes at odds with the original vision for cyberspace. For the vision of a free and open internet to endure, all liberal nations must recognise that the articulation, inclusion, and deployment of values must lie at the heart of all future national cyber security strategies.

Danny Steed
Author, Public Speaker and Consultant | @TheSteed86

1 HM Government, United Kingdom (2016), ‘National Cyber Security Strategy 2016-2021’, p. 9.

2 National Cyber Security Centre Netherlands, CSC-NL (2018), ‘National Cyber Security Agenda: A cyber secure Netherlands’, p. 7.                

3 Accenture (2019), ‘Ninth Annual Cost of Cybercrime Study’, March 6, pp. 10-11.

4 Department for Digital, Culture, Media and Sport, DCMS, United Kingdom (2019), “Cyber Security Breaches Survey 2018”.

5 John Perry Barlow (1996), ‘A Declaration on the Independence of Cyberspace’, February 8.

6 Danny Steed (2019), The Politics and Technology of Cyberspace, Abingdon, Routledge, Ch. 1.2.

7 John J. Mearsheimer (2018), The Great Delusion: Liberal Dreams and International Realities, New Haven, Yale University Press, p. 11.

8 Ibid, ch. 4.

9 Edward Luce (2017), The Retreat of Western Liberalism, New York, Atlantic Monthly Press, p. 28.

10 Bill Emmott (2017), The Fate of the West: The Battle to Save the World's Most Successful Political Idea, New York, Public Affairs, p. 467.

12 The Federal Service for the Supervision of Communications (2017), ‘LinkedIn Refused to Eliminate Violations of Russian Law’, Information Technology, March 7.

14 Joel Rogers de Waal (2019), ‘The West Should Weaponise Embarrassment in the New Information Wars’, RUSI Commentary, April 26.

15 Sacha Baron Cohen (2019), ‘Read Sacha Baron Cohen’s Scathing Attack on Facebook in Full: ‘Greatest Propaganda Machine in History’’, The Guardian UK, November 22, Italics added.

<![CDATA[ Diplomacy in the Age of Artificial Intelligence ]]> 2019-10-11T06:42:31Z

The key question on the mind of policymakers now is whether Artificial Intelligence would be able to deliver on its promises instead of entering another season of scepticism and stagnation.


The key question on the mind of policymakers now is whether Artificial Intelligence would be able to deliver on its promises instead of entering another season of scepticism and stagnation.


The quest for Artificial Intelligence (AI) has travelled through multiple “seasons of hope and despair” since the 1950s. The introduction of neural networks and deep learning in the late 1990s generated a new wave of interest in AI and growing optimism about the possibility of applying it to a wide range of activities, including diplomacy. The key question on the minds of policymakers now is whether AI will be able to deliver on its promises instead of entering another season of scepticism and stagnation. This paper evaluates the potential of AI to provide reliable assistance in areas of diplomatic interest such as consular services, crisis management, public diplomacy and international negotiations, as well as the ratio between the costs and contributions of AI applications to diplomatic work.


The term “artificial intelligence” was first coined by the American computer scientist John McCarthy in 1956, who defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs”.1 In basic terms, AI refers to the activity by which computers process large volumes of data using highly sophisticated algorithms to simulate human reasoning and/or behaviour.2 Russell & Norvig use these two dimensions (reasoning and behaviour) to group AI definitions according to the emphasis they place on thinking vs acting humanly.3

Another approach to defining AI is by zooming in on the two constitutive components of the concept. Nils J. Nilsson, for instance, defines artificial intelligence as the “activity devoted to making machines intelligent” while “intelligence is that quality that enables an entity to function appropriately and with foresight in its environment”.4 Echoing Nilsson’s view, the European Commission’s High-Level Group on AI provides a more comprehensive understanding of the term:

“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal.”5

While the concept of artificial intelligence continues to evolve, one could argue that the ambition to push forward the frontier of machine intelligence is the main anchor that holds the concept together. As the authors of the report on “Artificial Intelligence and Life in 2030” point out, we should not expect AI to “deliver” a life-changing product, but rather to continue to generate incremental improvements in its quest to achieve and possibly surpass human standards of reasoning and behaviour. In so doing, AI also sets in motion the so-called “AI effect”: as AI brings a new technology into the common fold, people become accustomed to this technology, it stops being considered AI, and newer technology emerges.6

In the same way that cars differ in terms of their quality and performance, AI programs also vary significantly along a broad spectrum ranging from rudimentary to super-intelligent forms. In consular and diplomatic affairs, the left side of this spectrum is already visible. At the lower end of the complexity scale, chat-bots now assist with visa applications, legal aid for refugees, and consular registrations.7 More sophisticated algorithms are being developed by Ministries of Foreign Affairs (MFAs) to either advance the spread of positive narratives or inhibit online disinformation and propaganda.8 However, all these applications, regardless of their degree of technical sophistication, fall into the category of ‘narrow’ or ‘weak’ AI, as they are programmed to perform a single task. They extract and process information from a specific dataset to provide guidance on legal matters and consular services. The ‘narrow’ designation for such AI applications comes from the fact that they cannot perform tasks outside the information confines delineated by their dataset.

By contrast, general AI refers to machines that exhibit human abilities ranging from problem-solving and creativity to taking decisions under conditions of uncertainty and thinking abstractly. They are thus able to perform intellectual activities like a human being, without any external help. Most importantly, strong AI would require some form of self-awareness or consciousness in order to be able to fully operate. If so, strong AI may reach a point at which it will be able not only to mimic the human brain but to surpass the cognitive performance of humans in all domains of interest. This is what Nick Bostrom calls superintelligence: an AI system that can do everything a human intellect can do, but faster (‘speed superintelligence’), or one that aggregates a large number of smaller intelligences (‘collective superintelligence’), or one that is at least as fast as a human mind but vastly qualitatively smarter (‘quality superintelligence’).9

That being said, strong AI, let alone superintelligence, remains a merely theoretical construct at this time, as all applications developed thus far, including those that have attracted media attention such as Amazon’s Alexa or Tesla’s self-driving prototypes, fall safely into the category of narrow AI. However, this may change soon, especially if quantum computing technology makes significant progress. The results of a large survey of machine learning researchers on their beliefs about progress in AI are relatively optimistic. Researchers predict AI will outperform humans in many activities in the coming decades, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and even working as a surgeon (by 2053). Furthermore, they believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years.10

AI and Diplomacy

Riding the waves of growing interest in AI in IR and security studies,11 the debate about the role of AI in diplomacy is also gaining momentum, although academic discussions are progressing rather slowly, without a clear analytical focus. As the authors of a recent report on AI opportunities for the conduct of diplomacy point out, discussions about AI in the context of foreign policy and diplomacy often lack clarity in terminology. They suggest that a better understanding of the relationship between AI and diplomacy could come from building on the distinction between AI as a diplomatic topic, AI as a diplomatic tool, and AI as a factor that shapes the environment in which diplomacy is practised. As a topic for diplomacy, AI is relevant to a broad policy agenda ranging from the economy, business, and security all the way to democracy, human rights, and ethics. As a tool for diplomacy, the question is how AI can support the functions of diplomacy and the day-to-day tasks of diplomats. As a factor that shapes the environment in which diplomacy is practised, AI could well turn out to be the defining technology of our time, and as such it has the potential to reshape the foundation of the international order.12

Noting that developments in AI are so dynamic and their implications so wide-ranging, another report prepared by a German think tank calls on Ministries of Foreign Affairs (MFAs) to begin immediately planning strategies that can respond effectively to the influence of AI in international affairs. Economic disruption, security and autonomous weapons, and democracy and ethics are the three areas the authors identify as priorities at the intersection of AI and foreign policy. Although they believe that transformational changes to diplomatic institutions will eventually be needed to meet the challenges ahead, they favour, in the short term, an incremental approach to AI that builds on the successes (and learns from the failures) of “cyber-foreign policy”, which, in many countries, has already been internalised in the culture of the relevant institutions, including the MFAs.13 In the same vein, the authors of a report prepared for the Center for a New American Security see great potential for AI in national security-related areas, including diplomacy. For example, AI can help improve communication between governments and foreign publics by lowering language barriers between countries, enhance the security of diplomatic missions via image recognition and information-sorting technologies, and support international humanitarian operations by monitoring elections, assisting in peacekeeping operations, and ensuring through anomaly detection that financial aid disbursements are not misused.14

Consular services could be low-hanging fruit for AI integration in diplomacy, as decisions are amenable to digitisation, the analytical contribution is reasonably relevant and the technology favours collaboration between users and the machine. Consular services rely on highly structured decisions, as they largely involve recurring and routinised operations based on clear and stable procedures, which do not need to be treated as new each time a decision has to be made (except in crisis situations, which are discussed further below). From a knowledge perspective, AI-assisted consular services may embody declarative (know-what) and procedural (know-how) knowledge to automate routinised operations and scaffold human cognition by reducing cognitive effort. This can be done by using data mining and data discovery techniques to organize the data and make it possible to identify patterns and relationships that would be difficult to observe otherwise (e.g., variation in demand for services by location, time, and audience profile).

Case study #1: AI as Digital Consul Assistant
The consulate of country X has faced uneven demand for emergency passports, visa requests and business certifications over the past five years. The situation has led to a growing backlog, a significant loss of public reputation and a tense relationship between the consulate and the MFA. An AI system trained with data from the past five years uses descriptive analytics to identify patterns in the applications and concludes that August, May and December are the months most likely to witness an increase in demand across the three categories next year. The AI predictions are confirmed for August and May but not for December. The AI recalibrates its advice using updated data, and the new predictions help consular officers manage requests more effectively. As the MFA’s confidence in the AI system grows, the digital assistant is then introduced to other consulates experiencing similar problems.
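The descriptive step in the case study can be sketched in a few lines of code. Everything below is a minimal illustration of the idea, not the case study’s actual system: the request figures are invented, and a real deployment would use richer features and a proper forecasting model.

```python
from collections import defaultdict

# Hypothetical records: (year, month, category, number_of_requests).
# The figures are invented for illustration only.
history = [
    (2019, 5, "emergency_passport", 410), (2019, 8, "visa", 520),
    (2019, 12, "business_cert", 300), (2019, 2, "visa", 90),
    (2020, 5, "emergency_passport", 450), (2020, 8, "visa", 580),
    (2020, 12, "business_cert", 340), (2020, 3, "visa", 110),
]

def peak_months(records, top_n=3):
    """Descriptive analytics: aggregate demand per calendar month and
    return the months most likely to see a surge next year."""
    totals = defaultdict(int)
    for _year, month, _category, count in records:
        totals[month] += count
    return sorted(totals, key=totals.get, reverse=True)[:top_n]

print(peak_months(history))  # [8, 5, 12] for the sample data above
```

Recalibration, as described in the case study, would simply mean appending the newly observed months to the records and re-running the aggregation.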

Digital platforms could also emerge as indispensable tools for managing diplomatic crises in the digital age, and for good reason. They can help embassies and MFAs make sense of the nature and gravity of events in real time, streamline the decision-making process, manage the public’s expectations, and facilitate crisis termination. At the same time, they need to be used with great care, as factual inaccuracies, coordination gaps, mismatched disclosure levels, and poor symbolic signalling could easily derail digital efforts at crisis management.15 AI systems could provide great assistance to diplomats in times of crisis by helping them make sense of what is happening (descriptive analytics) and identify possible trends (predictive analytics). The main challenge for AI is the semi-structured nature of the decisions to be taken. While many MFAs have pre-designed plans to activate in case of a crisis, it is safe to assume that reality often defies the best-crafted plans. Given the high level of uncertainty in which crisis decision-making operates, and the inevitable scrutiny and demands for accountability should something go wrong, AI integration can work only if humans retain control over the process. As a recent SIPRI study pointed out, AI systems may fail spectacularly when confronted with tasks or environments that differ slightly from those they were trained for. Their algorithms are also opaque, which makes it difficult for humans to explain how they work and whether they include biases that could lead to problematic –if not dangerous– behaviours.16

As data turns into the “new oil”, one would expect the influence of digital technologies on public diplomacy to maximise interest in learning how to make oneself better heard, listened to and followed by the relevant audiences. As the volume of data-driven interactions continues to grow at an exponential rate, one can make oneself heard by professionally learning how to separate ‘signals’ from the background ‘noise’ and by pro-actively adjusting one’s message, in real time, to ensure maximal visibility in the online space. Making oneself listened to would require, by extension, a better understanding of the cognitive frames and emotional undertones that enable audiences to meaningfully connect with a particular message. Making oneself followed would involve micro-level connections with the audience based on individual interests and preferences.17

Case study #2: AI as Digital PD Assistant
The embassy of country X in Madrid would like to conduct a public diplomacy campaign in support of one of the following policy priorities: increasing the level of educational exchanges of Spanish students in the home country, showcasing the strength of the military relationship between country X and Spain, and boosting Spanish investments in the home country. As it has only £25,000 in the budget for the campaign, it needs to know which version can demonstrate the better return on investment. Using social media data, an AI system will first listen and determine the level of interest and reception (positive, negative, neutral) of the public on the three topics. The next step will be to use diagnostic analytics to explain the possible drivers of interest in each topic (message, format, influencers) and the likelihood of the public reacting to the embassy’s campaign. The last step will be to run simulations to evaluate which campaign will have the strongest impact, given the way in which the public positions itself on each topic and the factors that may increase or decrease public interest in them.
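The listening-and-scoring logic in the case study can be caricatured in code. This is only a sketch: the mention counts, the engagement rates and the crude impact proxy are all invented for illustration, and real sentiment classification would itself be a machine learning task rather than pre-counted labels.

```python
# Hypothetical listening data for the three candidate campaigns:
# counts of positive / negative / neutral mentions plus an engagement rate.
campaigns = {
    "educational_exchanges": {"pos": 1200, "neg": 150, "neu": 900, "engagement": 0.031},
    "military_relationship": {"pos": 400, "neg": 600, "neu": 1100, "engagement": 0.012},
    "investment": {"pos": 800, "neg": 300, "neu": 700, "engagement": 0.024},
}

def expected_impact(stats):
    """Crude proxy for return on investment: net sentiment share,
    weighted by how actively the audience engages with the topic."""
    total = stats["pos"] + stats["neg"] + stats["neu"]
    net_sentiment = (stats["pos"] - stats["neg"]) / total
    return net_sentiment * stats["engagement"]

best = max(campaigns, key=lambda name: expected_impact(campaigns[name]))
print(best)  # "educational_exchanges" for the sample figures
```

The diagnostic and simulation steps described above would refine this by modelling *why* each topic scores as it does (message, format, influencers) before committing the budget.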

At the operational level of the digital diplomat, decisions are expected to take a structured form, as meaningful communication with the audience would rely on continuously tested principles of digital outreach, with a likely focus on visual enhancement, emotional framing, and algorithm-driven engagement. AI could assist these efforts by providing reliable diagnostics of the scope conditions for impact via network, cluster and semantic analyses. Prescriptive analytics could also offer insight into the comparative value added of alternative approaches to digital engagement (e.g., which method proves more impactful in terms of making oneself heard, listened to and followed). On the downside, the knowledge so generated would likely stimulate a competitive relationship between the AI system and digital diplomats, as most of the work done by the latter could be gradually automated. However, such a development might be welcomed by budget-strapped MFAs and embassies seeking to maintain their influence and make the best of their limited resources by harnessing the power of technological innovation.

Given the growing technical complexity and resource-intensive nature of international negotiations, it is hardly surprising that AI has already started to disrupt this field. The Cognitive Trade Advisor (CTA) developed by IBM aims to assist trade negotiators dealing with rules of origin (criteria used to identify the origin/nationality of a product) by answering queries related to existing trade agreements, custom duties corresponding to different categories of rules of origin, and even the negotiating profiles of the party of interest.18 The CTA uses descriptive analytics to provide timely and reliable insight into technically complex issues that would otherwise require days or possibly weeks for an experienced team to sort out. It does not replace the negotiator in making decisions, nor does it conduct negotiations by itself –or at least not yet. It simply assists the negotiator in figuring out the best negotiating strategy by reducing critical information gaps, provided that the integrity of the AI system has not been compromised by hostile parties. The competitive advantage that such a system could offer negotiators cannot be ignored, although caveats remain for cases in which negotiations involve semi-structured decisions, such as climate negotiations or the Digital Geneva Convention to protect cyberspace. The problem in such cases lies with the lower degree of data veracity (confidence in the data) when dealing with matters that can easily become subject to interpretation and contestation –hence the need for stronger human expertise and judgement to assess the value of competing courses of action in line with the definition of national interests as agreed upon by foreign policy makers.
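At its core, the kind of query answering described here is retrieval over a structured knowledge base of treaty provisions. The toy sketch below conveys the idea only: the agreement names, origin criteria and duty rates are all invented, and it does not reflect IBM’s actual data model or interface.

```python
# Toy knowledge base of rules of origin; every entry is hypothetical.
rules_of_origin = {
    ("Agreement-A", "textiles"): {"criterion": "double transformation", "duty": "0%"},
    ("Agreement-A", "automotive"): {"criterion": "55% regional value content", "duty": "3.5%"},
    ("Agreement-B", "textiles"): {"criterion": "yarn-forward rule", "duty": "2%"},
}

def answer_query(agreement: str, product: str) -> str:
    """Retrieve the rule of origin and duty a negotiator would
    otherwise dig out of treaty annexes by hand."""
    entry = rules_of_origin.get((agreement, product))
    if entry is None:
        return f"No rule on file for {product} under {agreement}."
    return f"{agreement}/{product}: {entry['criterion']}, duty {entry['duty']}"

print(answer_query("Agreement-A", "textiles"))
# Agreement-A/textiles: double transformation, duty 0%
```

The value of a system like the CTA lies less in the lookup itself than in parsing a negotiator’s natural-language question into such a structured query and keeping the underlying treaty data current.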


As Bostrom has shown, the quest for Artificial Intelligence has travelled through multiple “seasons of hope and despair”. The early attempts in the 1950s at Dartmouth College sought to provide a proof of concept for AI by demonstrating that machines were able to perform complicated logical tests. Following a period of stagnation, another burst of innovative thinking took place in the early 1970s, which showed that logical reasoning could be integrated with perception and used to control physical activity. However, difficulties in scaling up AI findings soon led to an “AI winter” of declining funding and increased scepticism. A new springtime arrived with the launch of the Fifth Generation Computer System Project by Japan in the early 1980s, which led to the proliferation of expert systems as new tools of AI-supported decision-making. After another period of relative stagnation, the introduction of neural networks and deep learning in the late 1990s generated a new wave of interest in AI and growing optimism in the possibility of applying it to a wide range of activities, including diplomacy. The key question on the mind of policymakers now is whether AI would be able to deliver on its promises instead of entering another season of scepticism and stagnation. If AI can demonstrate value in a consistent manner, by providing reliable assistance in areas of diplomatic interest such as consular services, crisis management, public diplomacy and international negotiations, as suggested above, then the future of AI in diplomacy should look bright. If, on the other hand, the ratio between the costs and contributions of AI applications to diplomatic work stays high, then the appetite for AI integration will likely decline.

Corneliu Bjola
Head of the Oxford Digital Diplomacy Research Group, University of Oxford (#DigDiploROx)
| @CBjola

1 John McCarthy (2011): ‘What Is AI? / Basic Questions’, author’s website accessed 22 May 2019.

2 In simple terms, behaviour refers to the way in which people act in response to certain internal or external stimuli. The classical theory of human behaviour, the Belief-Desire-Intention (BDI) model, argues that individual behaviour is best explained by the way in which agents develop intentions (desires that the agent has committed to pursue) out of a broader range of desires (states of affairs they would like to bring about), which in turn are derived from a set of beliefs (information the agent has about the world). The way in which intentions are formed remains a matter of dispute between different schools of thought, with a traditional view emphasizing the role of rational reasoning (the rational dimension), while others stress the importance of internal mental processes (the cognitive dimension) or the social context in which this occurs (the social dimension). See Michael E. Bratman (1999): Intention, Plans, and Practical Reason, Cambridge: Cambridge University Press.

3 Stuart Russell and Peter Norvig (2010): Artificial Intelligence A Modern Approach, Third Ed., Pearson, p. 2.

4 Nils J Nilsson (2010): The Quest for Artificial Intelligence: A History of Ideas and Achievements, Cambridge: Cambridge University Press, p. 13.

5 High-Level Expert Group on Artificial Intelligence (2019): ‘A Definition of Artificial Intelligence: Main Capabilities and Scientific Disciplines’, European Commission, p. 6.

6 Peter Stone et al. (2016): ‘Artificial Intelligence and Life in 2030’, Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA. (September). 

7 Elena Cresci (2017): ‘Chatbot That Overturned 160,000 Parking Fines Now Helping Refugees Claim Asylum’, The Guardian, 6/III/2017.

8 Simon Cocking (2016): ‘Using Algorithms to Achieve Digital Diplomacy’, Irish Tech News, 19/IX/2016.

9 Nick Bostrom (2014): Superintelligence: Paths, Dangers, Strategies, First Ed., Oxford: Oxford University Press, pp. 63–69 and 6–11.

10 Katja Grace et al. (2018): ‘Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts’, Journal of Artificial Intelligence Research 62, July 31, pp. 729–54.

11 Daniel W. Drezner (2019): ‘Technological Change and International Relations’, International Relations, 20/III/2019; Stoney Trent and Scott Lathrop (2019): ‘A Primer on Artificial Intelligence for Military Leaders’, Small Wars Journal, 2019; Edward Geist and Andrew Lohn (2018): How Might Artificial Intelligence Affect the Risk of Nuclear War?, RAND Corporation; Greg Allen and Taniel Chan (2017): ‘Artificial Intelligence and National Security’, Belfer Center; Miles Brundage et al. (2018): ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’, 20/II/2018.

13 Ben Scott, Stefan Heumann, and Philippe Lorenz (2018): ‘Artificial Intelligence and Foreign Policy’, Stiftung Neue Verantwortung.

14 Michael C Horowitz et al. (2018): ‘Artificial Intelligence and International Security’, Center for New American Security (CNAS).

15 Corneliu Bjola (2017): ‘How Should Governments Respond to Disasters in the Digital Age?’, The Royal United Services Institute (RUSI).

16 Vincent Boulanin (2019): ‘The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk’, SIPRI.

17 Bjola, Cassidy, and Manor (2019): ‘Public Diplomacy in the Digital Age’, The Hague Journal of Diplomacy 14, April, p. 87.

18 Maximiliano Ribeiro Aquino Santos (2018), ‘Cognitive Trading Using Watson’, IBM blog, 12/XII/2018.

Cyber Security: How GDPR is already impacting the public-private relationship (11/X/2019)

For anybody wanting to participate in a public-private relationship within the General Data Protection Regulation (GDPR), the calculus must weigh the well-known “carrot” gained from collaborative exchange against the new regulatory “stick” being wielded in Europe.




The public-private relationship has long been heralded as a key arena in cyber security, where collaborative arrangements can ensure greater resilience to cyberattacks and swifter responses to cyber incidents. This article argues, however, that the relationship will change greatly in a post-GDPR world. The relationship should now be considered as one where collaboration is augmented with coercive measures in order to change private sector behavior in cyberspace.


The arenas, industries, and actors that cyber security affects are today almost too vast to cover fully. Cyberspace has become ubiquitous throughout our economies, infrastructures, and individual lives, and cyber security is therefore ubiquitous too. Whether one is addressing cybercrime amongst civil society, organized cybercrime against corporate enterprise, or state-level cyberattacks and espionage between nations, cyber security has become a central concern in both industry and government.

Within this concern, it has long been believed that a key remedy for ensuring better cyber security throughout society is to cultivate an enduring public-private relationship between the state and the core components of its economy. The motivating logic is that, with a culture of information sharing at its heart, alongside contributions to policy discussions and greater awareness of each other’s positions, greater resilience could be established and behavior changed to reduce the impact of cybercrime throughout nations’ respective economies.

While there have been measures of success, other factors have also shaped consideration of how else cyber security should be managed, which in the EU led to the implementation of the General Data Protection Regulation (GDPR) in 2018. Following scandals such as Cambridge Analytica and Facebook in recent years, as well as a veritable catalogue of increasingly severe data breaches, concerns about the use and misuse of data have evolved into what Wired has imaginatively –although certainly accurately– termed the “Great Privacy Awakening”.1 It has since become clear that the maturing approach to addressing cyber security lies as much with coercive regulatory measures as it does with collaborative public-private relationships. Simply put, big tech cannot be trusted to regulate itself.

What this article argues, however, is that measures such as GDPR are sure to impact the fruitfulness of any public-private relationships moving forward, with very notable cases already illustrating the changed dynamic that is certain to become the norm. Any consideration of the future of the public-private relationship within and among EU nations must now strongly factor in the impact of GDPR, particularly as a deterrent to private sector organizations giving more to national-level cyber security resilience. At its core, for anybody wanting to participate in a public-private relationship within GDPR’s jurisdiction, the calculus must weigh the well-known “carrot” gained from collaborative exchange against the new regulatory “stick” being wielded in Europe.

The public-private relationship: What and Why?

Public-private relationships have experienced many manifestations across numerous nations, of course predating cyber security concerns as well. Definitions can prove troublesome due to the catalogue of varied approaches and recommendations, and are perhaps not the best place to begin in understanding what these relationships are. Instead, it is better to understand why they are necessary in the first place. Christensen and Peterson were correct when they highlighted then-President Obama’s view of the cyber challenge: “This is a shared mission” that must include private sector input every bit as much as the public sector’s.2

The key need for such a relationship lies in the recognition that certain national and societal challenges extend far beyond the capability of the state alone to tackle, that private sector bodies are key stakeholders, and that a sense of shared responsibility is recognized. Cyber security certainly qualifies for this measure, with the three key aspects of a public-private relationship –risk sharing, innovation, and longevity– a clear necessity in finding long-term solutions to cyber security issues.

Despite this, what the public-private relationship for cyber security actually is has, according to Madeline Carr, “always been unclear”.3 This is because, fundamentally, there remain tensions between what the public sector and the private sector seek to achieve through the relationship. The public sector approaches public-private relationships with national security objectives in mind, whereas the private sector engages in order to pursue market-based objectives aimed at minimizing its liabilities in the market. It is on this basis that readers must always recognize that achieving and maintaining any public-private relationship is a fragile affair, with numerous actors pursuing a variety of different, if not always opposing, objectives.

Key cases in GDPR to date

The implementation of the General Data Protection Regulation across Europe since May 2018 has already had an immediate impact on the cyber security landscape, not only in Europe but worldwide. A clear example of the concerns raised could be seen from numerous American publications –particularly the publishing houses of major news outlets–4 which replaced their homepages for European readers with a landing page announcing, in effect, a denial of service until the ramifications of publishing into the jurisdiction of GDPR had been clarified.

The private sector immediately became concerned about walking into regulatory fines on a scale not seen before. This is understandable, with fines now reaching up to €20 million or 4% of total worldwide annual turnover, whichever is the higher of the two. To put this into context, although domestic fines varied across the continent under previous legislation, under the UK’s previous Data Protection Act, for example, the maximum fine organizations faced was £500,000. This maximum fine was only ever issued once, against Facebook in October 2018 following the Cambridge Analytica scandal.5
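The “whichever is higher” rule can be made concrete in a few lines. This sketch covers only the GDPR’s top fining tier described above; the turnover figures in the comments are illustrative, not drawn from any real case.

```python
def gdpr_fine_cap(annual_turnover_eur: int) -> int:
    """Upper bound under the GDPR's top fining tier: the higher of
    EUR 20 million or 4% of total worldwide annual turnover.
    Integer euros are used to avoid floating-point rounding."""
    return max(annual_turnover_eur * 4 // 100, 20_000_000)

# A firm with EUR 100m turnover: 4% is only 4m, so the 20m alternative binds.
print(gdpr_fine_cap(100_000_000))     # 20000000
# A firm with EUR 10bn turnover: 4% (400m) dwarfs the 20m alternative.
print(gdpr_fine_cap(10_000_000_000))  # 400000000
```

The asymmetry is the point: for small firms the €20 million figure dominates, while for large multinationals the 4% turnover term does, which is why the new regime alarmed big tech in particular.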

Following the implementation of GDPR, it has been a question of waiting to see who would become the first victims of the new regulatory “stick” on the European continent. In 2019 we now have two key cases through which to explore the impact of GDPR on the public-private relationship: British Airways and the Marriott International hotel group.

Anatomy of a hack: British Airways and Marriott International

In September 2018, the personal and financial information of an estimated 380,000 British Airways passengers was compromised. Between 22:58 (British Summer Time, BST) on 21 August and 21:45 (BST) on 5 September 2018, a malicious piece of software –22 lines of code linked to the British Airways baggage claim information page– harvested the payment details of passengers purchasing tickets from the British Airways portal, including card verification value (CVV) numbers. The script ran with the intent to capture payment details at the point of payment, thereby not breaching British Airways servers but instead stealing payment information at the point of interaction between payment and airline databases.6 This form of attack, commonly known as formjacking, avoids the difficulty of breaching databases and servers directly, focusing instead on targeting the exchange of monies.

Meanwhile, Marriott International suffered a data breach in 2018, following the discovery and tracking of a Remote Access Trojan (RAT) active on Marriott’s Starwood IT network from September through to clear knowledge of an active breach in November 2018. Once the key discovery was made in November that the RAT had been present on the network since 2014 –prior to Marriott’s acquisition of Starwood– it was clear that there was indeed a serious problem.

Marriott issued a public notification of a data breach on 30 November, with an estimated 500,000,000 customers affected. These enormous numbers included 5.25 million unencrypted passport numbers and 385,000 payment card numbers that were still valid at the time of the breach.7 Unlike the British Airways hack targeting financial exchange, Marriott’s experience highlights the risks associated with due diligence when carrying out a corporate acquisition.

Under the authority of GDPR, the British Information Commissioner’s Office (ICO) chose to impose a fine of £183 million on British Airways. This was levied at 1.5% of the airline’s 2017 turnover,8 marking a significant watermark beyond previous maximum regulatory fines while still remaining well below GDPR’s 4% cap. Continuing to send a loud message globally, the ICO was quick to follow up its BA fine with a fine of £99 million levied against Marriott, the filing specifically citing that Marriott “failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems”.9

The impact on the public-private relationship

At this stage, one might question the impact of GDPR on the public-private relationship and consider them to be two separate issues entirely. This is certainly not the case: the argument presented here is that the new practice of fines being levied against the private sector represents the first of several difficulties that will affect the public-private relationship moving forward. This impact will evolve in four ways: a divergence of interests, increased uncertainty in trust, greater fear of regulatory punishment, and increased difficulty in enticing private sector collaboration.

In the first instance, an already fragile balance of interests will continue to diverge. Now that the public-private relationship is no longer born purely of a rewarding “carrot” but also of a punitive “stick”, the private sector knows that the public sector will seek to punish misdeeds against the data it bears the responsibility to safeguard. This dynamic can only increase the distance between interests that were always difficult to align and maintain in the first place: the pursuit of national security interests on the part of the public sector, and the pursuit of market stability and competitiveness on the part of the private sector. A careful balance must be achieved if any functioning public-private relationship –especially in the realm of information sharing– is to endure.

This brings us to the second dynamic: greater uncertainty in the bedrock of the relationship, trust. Trust between the actors involved is the key dynamic that underwrites the public-private relationship. With regard to information sharing, trust that disclosures will not be made outside of the trusted circle is paramount. An increased punitive framework decreases the incentive to share information as fully as could otherwise be expected.

The third impact is that a greater fear of regulatory punishment logically increases the wariness of any private sector body in the relationship, only exacerbating the issue of trust placed in the public sector. Any business facing this will ask: how can a private sector body trust public bodies to help in its hour of need when the firm may well be opening itself up to a punitive regulatory measure?

Finally, if these first three dynamics intensify, there will be increased difficulty in enticing existing outsiders to join a public-private relationship for cyber security. Why would new private sector bodies join a relationship when the dynamic with existing members has visibly shifted from collaborative exchange to one of potentially large regulatory punishments? Given the reaction of many American news outlets, which blocked their webpages to European visitors in 2018, it is entirely plausible that American businesses would be hesitant to participate in European public-private relationships with such a question hanging over their decision-making.

These dynamics are clear and present dangers to the maintenance of a healthy public-private relationship. Such a relationship geared towards cyber security is fundamentally built around information sharing as its core manifestation, with initiatives like the FS-ISAC grouping for the financial sector10 in general and the UK Government’s Cyber Security Information Sharing Partnership (CiSP)11 platform in particular being exemplars of long-term relationships in practice.

How to maintain public-private relationships post-GDPR

So far, readers might infer that the GDPR has a purely negative effect on the public-private relationship. While the argument here is certainly that the dynamic has changed considerably, it remains fully within the ability of relationship managers to balance effectively. Public-private relationships focused on cyber security certainly have a healthy future in a GDPR world. When manifested in information sharing practices, they are overwhelmingly focused on achieving two objectives: the prevention of cyber security incidents through building increased resilience, and the mitigation of actual incidents through the sharing of essential information to reduce harm.

Those core objectives are not incompatible with the GDPR, but they do require careful management from the public sector: private sector bodies must not come to see the relationship as no longer worth their investment because the regulatory punishment outweighs the collaborative gains. Two guiding principles can assist this relationship moving forward.

Establish clear boundaries

There are numerous factors that serve to build public-private relationships, with plenty of views on offer from many researchers. Almost all, however, begin with a clear recognition of the foundational factor: trust.12 In any public-private relationship, the place of trust is paramount in bringing and keeping stakeholders at the table; no practice of information sharing to prevent and mitigate cyber security incidents can take place if the participants do not trust that sensitive information is handled discreetly.

The establishment of clear boundaries is therefore essential –in particular, ensuring that the forum for the exchange of information is protected from regulatory involvement. There can be no faithful exchange of sensitive information if the relationship carries the fear that regulators are also present. In so far as it is possible within each European nation’s domestic legal environment, the public-private relationship for cyber security should be separated from the activities of the regulatory bodies charged with pursuing GDPR cases.

Should there be any doubt about the logic of this position, the justification can be made immediately apparent. Even in a “carrot and stick” relationship, the presence of the stick does not remove the “carrot” outright: the place for incentives remains even in the presence of punishment. So too with the GDPR; the presence of regulatory fines does not remove the clear and present need for an incentivizing relationship that seeks to prevent harmful cyber security incidents in the first place. Indeed, to claim that the GDPR alone could solve cyber security would be to accept an increase in incidents and simply punish the victims after the fact. Such a position is not only illogical, it is plainly against the public good, even when the victims are private actors.

Advise, don’t mandate

Building on the first principle of establishing clear boundaries is a second principle for the public sector: advise, do not mandate, the actions of private actors within the public-private relationship. The central dynamic of such a relationship is the mutual exchange of information for mutual benefit. If the public-sector hosts of the relationship believe their role is to mandate, the relationship will take on the character of a hierarchical one.

This is particularly acute when it comes to the submission of live incident information. Public actors are frequently sought out for guidance on incident response and mitigation measures, as well as advice on what steps to take from a law enforcement angle. The boundary must be set that advice is precisely that –advice– and not an order to be followed. There must be space in a public-private relationship to hand a decision back to the private actor, the owner of the data or incident, to deliberate and reach its own judgement on what actions to take according to its legal and regulatory responsibilities.

An attempt to impose such a responsibility on those managing a public-private relationship from the public sector compromises both impartiality and the foundation of trust in the relationship. It is a careful, and increasingly fragile, balance to be struck in the management of such a relationship in the face of regulation such as the GDPR, but one that experience in the field of public-private relationships suggests can be achieved.


To conclude, this author believes that the first year of the GDPR has introduced a difficult element into the pursuit of public-private relationships for cyber security, but it is one that is within the realms of existing experience to manage. Previously, one would argue that public-private relationships were based purely on shared incentives: open collaboration and information exchange between actors to prevent and/or mitigate cyber security incidents. To this “carrot” has now been added the “stick” of the GDPR, a recognition that public-private relationships by themselves have not been enough to condition the cyber security landscape and the behavior of actors within it as desired.

While it cannot be denied that increased regulatory punishment is needed to match the scale of misdeeds that have evolved in the handling of data and incidents, it must also be recognized that the GDPR introduces an element of doubt in the minds of those who may be hesitant to contribute to public-private relationships. Those charged with maintaining and growing these relationships in the pursuit of greater resilience to cyber threats need to work to ensure that the new regulatory “stick” is not seen to overshadow the many benefits that come from the “carrot” of mutual information exchange. A world in which cyber security is managed only by the issuing of punitive fines would be a sad place indeed.

Danny Steed
Head of Strategy, ReSolve Cyber | @TheSteed86

1 Issie Lapowski (2019): ‘How Cambridge Analytica Sparked the Great Privacy Awakening’, Wired, 17/IX/2019.

2 President Obama quoted in Kristoffer Kjærgaard Christensen and Karen Lund Peterson (2017): ‘Public-private relationships on cyber security: a practice of loyalty’, International Affairs, 93:6, pp. 1437 and 1439.

3 Madeline Carr (2016): ‘Public-private relationships in national cyber-security strategies’, International Affairs, 92:1, p. 61.

4 Renae Reints (2018), ‘These Majors U.S. News Sites are blocked in the EU’, Fortune, 9/VIII/2018.

5 Information Commissioner’s Office (2019): ‘ICO issues maximum £500,000 fine to Facebook for failing to protect users’ personal information’, ICO, United Kingdom

6 Jordan Bishop (2018): ‘This is how 380,000 British Airways Passengers Got Hacked’, Forbes (11/IX/2018); Lily Hay Newman (2018): ‘How Hackers Slipped by British Airways Defenses’, Wired, 11/IX/2018.

7 Catalin Cimpanu (2019): ‘Marriott’s CEO shares post-mortem on last year’s hack’, ZDNet, 3/VIII/2019.

8 C.R. (2019): ‘British Airways faces a £183m fine over a data breach’, The Economist, 7/VIII/2019.

9 ICO quoted in Zack Whittaker (2019): ‘Marriott to face $123 million fine by UK authorities over data breach’, TechCrunch, 7/IX/2019.

10 The Financial Services Information Sharing and Analysis Centre, although a private initiative, shares common public-private relationship goals and is a well-known example of an information sharing initiative.

11 The Cyber Security Information Sharing Partnership began in 2014 under CERT-UK before being transferred to the National Cyber Security Centre (NCSC) upon its formation in 2016.

12 Many views are on offer, but the following will endow readers with the core case behind trust.
Max Manley (2015): ‘Cyberspace’s Dynamic Duo: Forging a Cybersecurity Public-Private Relationship’, Journal of Strategic Security, Vol. 8, No. 3, p. 98; and Eric A. Kaijankowski (2015): Cybersecurity Information Sharing Between Public-Private Sector Agencies, Naval Postgraduate School Thesis, Part V: Conclusion.

In virality we trust! The quest for authenticity in digital diplomacy (2019-07-08)

Virality in digital diplomacy is the new black. For Ministers of Foreign Affairs and diplomats, being on social media is no longer only about presence and networking, but about standing out through the virality of their messages.



For Ministers of Foreign Affairs (MFA) and embassies, being on social media is no longer only about presence and networking, but about standing out through the virality of their messages. Virality allows digital diplomats to step out of their immediate ‘bubble’ and reach out to unfamiliar audiences, showcase their position on important policy issues or normative claims, and enhance their social authority in front of their peers or the online public. The challenge for digital diplomacy lies in achieving the proper know-how and technical capacity to make their messages ‘go viral’. This ARI provides some clues and rules to improve virality in digital diplomacy.

The Studium and the Punctum

Virality in digital diplomacy is the new black, and rightly so, one may add! For Ministers of Foreign Affairs (MFA) and embassies, being on social media is no longer only about presence and networking, but about standing out through the virality of their messages. Creating content that is shared exponentially on social media, in a very short timeframe, with multiple levels of reactions from a mosaic of audiences is, to put it simply, ‘pure gold’ from a communicational perspective.

Virality allows digital diplomats to step out of their immediate ‘bubble’ and reach out to unfamiliar audiences, showcase their position on important policy issues or normative claims, and enhance their social authority in front of their peers or the online public. In the attention-deficit space of the digital medium, virality promises to inject a high dose of authenticity and engagement, even though the outcome often has a short lifespan and generates transient effects. The challenge lies, of course, in the fact that viral content is not that easy to create, especially by MFAs and embassies, which generally lack the human resources, know-how and technical capacity to make their messages ‘go viral’.

As a first step towards addressing this challenge, we need to develop a good theoretical understanding of how virality works in the context of digital diplomacy. Roland Barthes’ reflections on the study of photography may assist us with this task, as they provide some useful clues about how to think analytically about the issue of virality.1 More specifically, Barthes argues that the way in which we make sense of the meaning of a photograph depends greatly on the distinction we draw between the studium and the punctum aspects of the image. The studium represents the contextual reading of the image, that is, the historical, social or cultural details that the picture makes available to the viewer. The punctum, on the other hand, is the ‘out of place’ aspect of the photo that punctuates the studium and ‘pierces’ the viewer with an unexpected arrow of acuity. Put differently, while the studium tells the viewer what the image is about in a manner that can be similarly understood by many, the punctum disrupts the studium and establishes a personal connection between the viewer and the image.

For example, Barthes finds that the picture taken by a Dutch reporter of a military unit patrolling a street in an unnamed Nicaraguan city during the uprising in 1978-79 (see Image 1) illustrates well the duality and co-presence of the studium and punctum. The studium informs us about the gravity of the political situation, the desolation and destruction produced by the insurrection, the casual display of military force, and the bleakness of the future. The punctum, on the other hand, reveals, at least for Barthes, an unexpected contrast between two elements that do not usually belong together, the nuns and the soldiers, which seems to invite the reader to reflect on questions about war and death, violence and religion, destruction and reconstruction. The studium makes available to viewers a particular narrative about a historical situation, with the goal of stimulating their interest and making them take notice of the human tragedy unfolding in Nicaragua at that time. However, it is the punctum that makes the photo transcend its state of general interest and connect more intimately with the viewer, by reaching out to Barthes’ subjectivity and rendering the image personally meaningful to him.

Image 1: Koen Wessing, Nicaragua, 1978

Barthes’ reflections on the art of photography carry good analytical value for the study of online virality as well. They offer us a framework for deconstructing viral content into tangible components by which to capture the interplay between the general and the specific, the common and the personal, the informative and the emotional, the inconsequential and the meaningful. Understanding the studium of viral content can give us a better sense of the themes, compositions, formats and approaches that make certain messages highly popular. Understanding the punctum can reveal to us the “out of place” profile of viral messages and their propensity for personalisation and micro-engagement. Drawing on relevant case studies of viral digital diplomacy, the next two sections will integrate the concepts of the studium and the punctum into the discussion of two important aspects of online virality: the contextual dimensions of viral dissemination (external vs internal) and the rules of operation (information, emotions and personalisation).

External vs Internal Virality

A tweet by the former UK Ambassador to Egypt, John Casson, showing him strolling in Cairo shortly before ending his tenure in September 2018, gathered 1.4k Reactions, 1.7k ReTweets and 11k Likes. By contrast, the tweet of the High Commissioner of Cyprus to the UK, Euripides Evriviades, showing him posing in front of the residence of the British Prime Minister as a memento before his departure from the post, garnered only 28 Reactions, 25 ReTweets and 294 Likes (see Image 2). The question this example invites us to address is threefold: (a) to what extent is the virality of the two tweets comparable?; (b) how well does each tweet perform relative to other messages produced by the same author?; and (c) what characteristics of the two tweets explain their virality? The first part of the question concerns the issue of ‘external virality’ (cross-source comparison), the second the issue of ‘internal virality’ (same-source comparison) and the third the application of the studium/punctum framework.

Image 2. Tweets with asymmetrical viral content

ماينفعش امشي من غير ماسلم عليكم

مفيش مع السلامة إنما إلى القاء #منورة_باهلها
🙏🏼👋❤️ 🇪🇬 🇬🇧

[Translation from Arabic: “I couldn’t leave without saying goodbye to you. There is no farewell, only ‘until we meet again’ #LitUpByItsPeople”]

— John Casson (@JohnCassonUK) 31 August 2018

No. I am not in the race to become the next PM of #UK. Just had my pic taken as #Cyprus HC, in front of one of the most famous doors in the world. A keepsake of my tenure in 🇬🇧 & in this mega metropolis called London.

— Euripides Evriviades 🇨🇾🇪🇺 (@eevriviades) 12 June 2019

Many would probably be tempted to consider Amb. Casson’s tweet decisively more viral than that of HC Evriviades, given its sizeable lead in quantitative metrics. However, closer scrutiny reveals that the two tweets are somewhat similar in terms of online influence once the number of followers is considered (see Table 1). More specifically, Amb. Casson’s tweet has only a small lead in terms of RTs, but a stronger presence in terms of Likes and Reactions. This is because the number of followers distorts the quality of the virality metrics by amplifying the randomness of the reactions. Put differently, it is clearly impressive when an account with 100 followers generates 1,000 RTs, but arguably less so when the same number of RTs comes from an account with 1 million followers. Therefore, RTs/Likes/Reactions per follower provide a better basis for comparing the ‘external’ virality of competing accounts.

Table 1. External virality adjusted by the number of followers

                                   Amb. John Casson   HC Euripides Evriviades
Number of followers                1.26M              16.6K
Number of RT per follower          741                664
Number of Likes per follower       114                56
Number of Reactions per follower   900                592
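The per-follower normalisation argued for above can be sketched in a few lines. Since the scaling used in Table 1 is not stated, the snippet below (with an illustrative function name and a per-million scaling of my own choosing) uses the text’s hypothetical 100-follower example instead:

```python
def engagement_per_million(count: int, followers: int) -> float:
    """Normalise a raw engagement count by audience size.

    Expressing engagement per million followers makes accounts of very
    different sizes comparable, as the text argues.
    """
    return count * 1_000_000 / followers

# The text's own example: 1,000 RTs mean very different things for a
# 100-follower account and a 1-million-follower account.
small = engagement_per_million(1_000, 100)        # small account
large = engagement_per_million(1_000, 1_000_000)  # large account
print(small / large)  # the small account is 10,000x more 'viral' per follower
```

The design choice is simply to divide out audience size before comparing accounts; any fixed scaling factor (per follower, per thousand, per million) leaves the comparison unchanged.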

At the same time, it is important to observe the ‘internal’ dimension of virality, that is, the extent to which a tweet aligns with or diverges from the average reach of other messages generated by the same source. For illustration purposes, relative to the average number of RTs, Likes and Reactions in a sample of the ten most recent tweets produced by Amb. Casson (28 August – 5 September 2018) and HC Evriviades (21-23 June 2019), the two viral tweets fall well outside the normal distribution: between 2 and 3 standard deviations above the mean in the case of Amb. Casson and even further in the case of HC Evriviades. In other words, while both tweets performed extremely well relative to other tweets posted by the same author, the one posted by HC Evriviades is a clear outlier, especially in terms of Likes and Reactions.

Table 2. Internal virality relative to the average tweet reach in each account

            Amb. John Casson          HC Euripides Evriviades
            Average    Std. dev.      Average    Std. dev.
RT          370        473            6          6
Likes       3993       3466           24         27
Reactions   420        404            2          1
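The ‘internal virality’ measure used here is, in effect, a z-score: the number of standard deviations a tweet sits above its account’s average. A minimal sketch, using the averages and standard deviations from Table 2 together with the raw counts quoted earlier, reproduces the ‘2-3 standard deviations’ reading for Amb. Casson and the outlier status of HC Evriviades’s Likes:

```python
def z_score(value: float, mean: float, std: float) -> float:
    """Number of standard deviations `value` lies above the account mean."""
    return (value - mean) / std

# Amb. Casson's viral tweet: 1.7k RTs against an account average of 370 (sd 473)
print(round(z_score(1_700, 370, 473), 2))  # ~2.81: between 2 and 3 sd
# HC Evriviades's Likes: 294 against an account average of 24 (sd 27)
print(round(z_score(294, 24, 27), 2))      # 10.0: a clear outlier
```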

This brings us back to Barthes’ distinction between the studium and the punctum and its role as an analytical tool for explaining the performance of these two tweets. From a studium perspective, both tweets speak to traditional themes about what it means to be a diplomat: engaging with people and cultivating issues of common interest in the case of Amb. Casson, and building political relationships in the case of HC Evriviades. However, it is not the conventionalism of the stories that makes the difference in terms of reception by the audience, but the punctum by which the viewer is invited to interpret the message. The casualness and naturalness with which Amb. Casson mingles with ordinary Egyptian citizens stands in clear contrast with his official position, while the note of subtle humour that HC Evriviades drops into his message acts as a relaxing counterpoint to the solemnity that the 10 Downing Street door conveys as the centre of political power in the UK.

The studium/punctum framework also adds an interesting reflexive angle to the discussion of external vs internal virality. As the audience gradually becomes familiar with the style of the author, internal virality can sustain itself only if the punctum constantly refreshes itself. For example, the casualness shown by Amb. Casson in his public interactions can retain its viral value if it continues to surprise the viewer, by engaging, for instance, with unexpected guests, or changing the dynamic of the interaction with the public. In the same way, the light/solemn punctum adopted by HC Evriviades will require creatively updated formats of expression to maintain the attention of the audience. From the perspective of external virality, the studium can offer interesting insights into how certain themes of diplomatic reflection travel across space and time. For example, does the idea of direct engagement with the public resonate better in places where the local relationship between citizens and policy-makers is more hierarchical? Similarly, would humour be able to drive viral content in the same way in places where the reputation of power holders is negative?

Virality Rules

As mentioned elsewhere,2 diplomatic communication has been traditionally embedded in a text-oriented culture that has favoured ‘constructive ambiguity’ over precision, politeness over frankness, reason over passion, and confidentiality over transparency. The arrival of digital technologies has infused the public sphere in which diplomacy operates with a set of new elements that have completely rearranged the ‘grammar rules’ of online engagement. Data and algorithms are now the syntactic units of the new ‘digital language’, to which various combinations of visuals, emotions and cognitive frames are attached to create semantic meaning. This also means that digital content on social media platforms must tailor itself closely to these rules in order to be able to go viral. If so, what exactly are these rules, and how can the studium/punctum framework help us unpack their scope of application?

Rule 1. Information overload and limited attention contribute to the degradation of the quality of information that goes viral

As shown by Weng et al., the combination of social network structure (the denser, the better) and competition for finite user attention provides a sufficient condition for the emergence of a broad diversity of viral content.3 However, out of the ‘soup’ of contending viral messages, those that come out on top are more likely to carry low-quality information, as both the information load and the limited attention lead to low discriminative power. As Qiu et al. point out, viral diversity can coexist with network discriminative power when we have plenty of attention and are not overloaded with information,4 conditions that are increasingly difficult to meet in the digital medium. In other words, the network structure of social media platforms favours the formation of viral content, but the attention deficit of the users acts as a filter on the quality of the viral content.
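The mechanism described by Weng et al. can be illustrated with a toy simulation. The code below is a loose, illustrative sketch, not the authors’ actual model (the parameters, the random follower-sampling scheme and all names are my own assumptions): agents with finite memory either invent or re-share memes, and a broad spread of meme popularity emerges from limited attention alone.

```python
import random
from collections import Counter, deque

def simulate(n_agents=200, attention=5, p_new=0.1, steps=20_000, seed=1):
    """Toy limited-attention meme-competition model (illustrative only).

    Each agent remembers only `attention` recent memes. At every step a
    random agent either invents a new meme (probability p_new) or
    re-shares one it remembers; the meme then enters the memories of a
    few random 'followers', displacing their oldest memory item.
    """
    random.seed(seed)
    memory = [deque(maxlen=attention) for _ in range(n_agents)]
    shares = Counter()  # meme id -> total number of times shared
    next_meme = 0
    for _ in range(steps):
        agent = random.randrange(n_agents)
        if not memory[agent] or random.random() < p_new:
            meme = next_meme  # invent a brand-new meme
            next_meme += 1
        else:
            meme = random.choice(memory[agent])  # re-share a remembered meme
        shares[meme] += 1
        for follower in random.sample(range(n_agents), 5):
            memory[follower].append(meme)  # oldest meme falls out of memory
    return shares

shares = simulate()
counts = sorted(shares.values(), reverse=True)
# With finite attention, a handful of memes accumulate most of the shares
# while the majority are shared a few times and vanish.
print(counts[:5], counts[len(counts) // 2])
```

Note that nothing in the sketch rewards quality: which memes dominate is decided purely by the interaction of network exposure and crowded memories, which is the intuition behind Rule 1.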

Image 3. Tweets with asymmetrical quality of information

"Normally I say 'I'll be back', but now I say: 'I'll be there'" -- Arnold @Schwarzenegger accepts invitation from @antonioguterres to the UN #ClimateAction Summit in September.

— United Nations (@UN) 22 June 2019

With European prosperity and Asian peace and security closely connected, the EU has decided to strengthen its security cooperation in and with Asia. Check out the factsheet

— European External Action Service - EEAS 🇪🇺 (@eu_eeas) 31 May 2019

As suggested in Image 3, Rule 1 carries empirical relevance. The tweet posted by the UN account showing the UN Secretary-General, Antonio Guterres, inviting Arnold Schwarzenegger to attend the Climate Action Summit in September 2019 quickly went viral (by internal standards). It swiftly reached roughly three times the average number of Likes and RTs received by the UN account, despite the scarcity of the information provided, save for a brief reference to the actor’s famous ‘I’ll be back’ line. By contrast, the information-rich tweet posted by the European External Action Service (EEAS) outlining EU-Asia security priorities, an important topic in the current geopolitical context, was hardly noticed by the online public. One important implication of Rule 1 is that the punctum needs to really stand out (via emotional framing or the use of a dynamic format) if the quality of the information reflected by the studium is to stay high and still make a significant difference to the audience.

Rule 2. Emotions rule! Content that evokes intense emotions is more likely to go viral

One important school of thought on the psychology of emotions links Paul Ekman’s and Robert Plutchik’s influential theories of basic emotions5 to the pleasure, arousal and dominance model of environmental perception developed by Mehrabian and Russell6 (see Image 4). It is thus argued that emotions are associated with different degrees of positive (joy, surprise) or negative feelings (anger, disgust, fear, sadness), that they come with different levels of high (joy, anger, fear) or low energy (sadness, disgust), and that they are connected to feelings of control (anger, joy) or inadequacy (fear, sadness). Building on this model, Rule 2 states that messages reflecting high levels of valence, arousal and dominance, such as joy and anger, are more likely to go viral.

Image 4: Affective space spanned by the Valence-Arousal-Dominance model, together with the position of six Basic Emotions.7

Rule 2 has received empirical support from several studies. Fan et al. found, for instance, that anger could spread more quickly and broadly on social media than joy.8 Stieglitz & Dang-Xuan also found that emotionally charged Twitter messages tend to be retweeted more often and more quickly than neutral ones.9 In a controversial experiment conducted on Facebook, Kramer et al. demonstrated that emotional states can actually be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.10
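Rule 2 can be made concrete with a small scoring sketch over the valence-arousal-dominance space. The coordinates and the additive scoring heuristic below are illustrative assumptions for this sketch, not values taken from Ekman, Plutchik or Mehrabian and Russell:

```python
# Illustrative VAD coordinates in [-1, 1] for six basic emotions.
# These placements are assumptions made for the sketch, not published figures.
VAD = {
    "joy":      ( 0.9,  0.7,  0.6),
    "surprise": ( 0.4,  0.8,  0.0),
    "anger":    (-0.6,  0.8,  0.5),
    "fear":     (-0.7,  0.7, -0.6),
    "disgust":  (-0.6, -0.2,  0.2),
    "sadness":  (-0.7, -0.4, -0.5),
}

def virality_propensity(emotion: str) -> float:
    """Rule 2 as a heuristic: high-arousal, high-dominance emotions
    (joy, anger) should outrank low-energy ones (sadness, disgust)."""
    _valence, arousal, dominance = VAD[emotion]
    return arousal + dominance  # simple additive heuristic

ranked = sorted(VAD, key=virality_propensity, reverse=True)
print(ranked)  # joy and anger rank first, sadness last
```

Under these toy coordinates, joy and anger (both high energy, high dominance) come out on top, matching the article’s claim, while sadness, low on both dimensions, ranks last.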

Image 5. Tweets with asymmetrical emotional valence

Goaded by #B_Team, @realdonaldTrump hopes to achieve what Alexander, Genghis & other aggressors failed to do. Iranians have stood tall for millennia while aggressors all gone. #EconomicTerrorism & genocidal taunts won't "end Iran". #NeverThreatenAnIranian. Try respect—it works!

— Javad Zarif (@JZarif) 20 May 2019

Mi silla 😉 [Spanish: ‘My chair’]

— Mark Kent (@KentArgentina) 21 June 2019

What is interesting in the case of Rule 2 is that the punctum is not necessarily defined by a particular feature of the message, but by the emotion that transpires from it. The two tweets in Image 5, posted by the Iranian Foreign Minister, Javad Zarif (left), and the UK Ambassador to Argentina, Mark Kent (right), illustrate this point well. FM Zarif’s tweet conveys a pugnacious expression of angry defiance, while Amb. Kent relies on the positive emotion of surprise to ‘pierce’ and establish a connection with the viewer. Both emotions enjoy high levels of energy and dominance, which explains the excellent reception by the audience (several times the average of RTs and Likes normally received by the two diplomats). An interesting implication of Rule 2 is the potential constitutive effect of emotion-driven virality on the formation of online audiences: do emotional punctums provide the anchor around which audiences coalesce and, if so, at what stage does the studium become irrelevant to how messages are received by the emotionally primed audience?

Rule 3. Content that can be easily personalised is more likely to go viral

In a seminal article, later expanded into a book, Bennett and Segerberg argue that, unlike the top-down mechanisms of content distribution favoured by hierarchical organisations, social networking involves co-production and co-distribution based on personalised expression. According to this connective logic, taking public action becomes less an issue of demonstrating support for some generic goals, as noble as they may be, and more an act of personal expression and self-validation achieved by sharing ideas online, negotiating meanings and structuring trusted relationships.11 For example, the personalised action frame ‘we are the 99 per cent’ that emerged from the US Occupy protests in 2011, or the more recent ‘#MeToo’ movement, quickly turned viral and travelled the world via personal stories, images and videos shared on social networks such as Twitter, Facebook and Instagram. In short, as Rule 3 states, the easier it is to personalise a message, the lower the barriers to individual identification with social or political goals, the more opportunities for horizontal engagement and, by extension, the more likely such content is to be absorbed, reflected upon and disseminated through social networks.

Image 6: Tweets with asymmetrical degree of personalization

Twelve Allies founded #NATO in 1949. Today we are 29.
Join us in celebrating the 70th Anniversary of our Alliance.#WeAreNATO

— NATO (@NATO) 1 April 2019

Traute Lafrenz is the last survivor of the White Rose resistance group. She is one of the few people who had the courage to stand up to the Nazis’ crimes. Consul General Heike Fuller presented the Order of Merit of the Federal Republic of Germany to Traute Lafrenz today

— GermanForeignOffice (@GermanyDiplo) 3 May 2019

For MFAs and embassies, personalisation is not necessarily an easy task, as their online activities are often primarily about projecting and emphasising their own set of policy priorities, approaches and strategies for addressing various issues on the global agenda. Personalisation implies exactly the opposite: removing oneself from the “digital spotlight” and identifying themes that can connect with as many individuals as possible. The examples in Image 6 aim to achieve this in slightly different ways.

The #WeAreNATO videoclip produced by NATO for its 70th anniversary in April places the member states at the forefront of the story about the historical evolution of the organisation. Personalisation takes place, in this case, via state representatives who come together to share their commitment to the values of freedom and security projected by the organisation. The viral tweet of the German Ministry of Foreign Affairs takes a different approach. It invites viewers to recall the suffering of those persecuted for fighting for justice and freedom and to identify themselves with the courage demonstrated by one of the last survivors of the resistance movement to the Nazi regime.  

In contrast to Rule 2, personalisation does not primarily focus on emotions but rather on recognition and self-validation. The studium moves back to the centre stage as the repertoire of themes it proposes for discussion needs to offer points of connection by which individuals can express themselves in their own voice through the sharing of similar stories, images and actions. In the case of Rule 3, the punctum emerges not as an anchor by which the viewer is drawn to absorb the message of the studium via subtle contradictions or surprises, but as an invitation to engage as a co-participant in the production of stories connected to the studium that maximise perceptions of self-worth and social recognition.


To conclude, the dynamic environment in which digital diplomacy operates has increased the pressure on MFAs and embassies to become more conscious of the need to better understand how their messages could excel in terms of engagement. Barthes offers us good analytical tools (the studium and the punctum) for unpacking the contextual dimensions of viral dissemination (external vs internal) as well as the role of information, emotions and personalisation in informing the rules of operation of viral engagement.

Corneliu Bjola
Head of the Oxford Digital Diplomacy Research Group, University of Oxford (#DigDiploROx) | @CBjola

1 Roland Barthes, Camera Lucida: Reflections on Photography (Hill and Wang, 1981).

2 Corneliu Bjola, Jennifer Cassidy, and Ilan Manor, “Public Diplomacy in the Digital Age”, The Hague Journal of Diplomacy 14, no. 1–2 (April 22, 2019): 86.

3 Weng et al., “Competition among Memes in a World with Limited Attention”, Scientific Reports 2, no. 1 (December 29, 2012): 335.

4 Xiaoyan Qiu et al., “Limited Individual Attention and Online Virality of Low-Quality Information”, Nature Human Behaviour 1, no. 7 (July 26, 2017): 5.

5 According to Ekman, human emotions can be grouped into six families (anger, disgust, fear, happiness, sadness and surprise), while Plutchik proposed eight, grouped into four pairs of polar opposites (joy-sadness, anger-fear, trust-disgust, surprise-anticipation). Paul Ekman, “An Argument for Basic Emotions”, Cognition and Emotion 6, no. 3–4 (1992): 169–200; Robert Plutchik, “The Nature of Emotions”, American Scientist 89, no. 4 (2001): 344–50.

6 A. Mehrabian and J.A. Russell, An Approach to Environmental Psychology (Cambridge, Mass.: M.I.T. Press, 1974).

7 Graph adapted from Sven Buechel and Udo Hahn, “Word Emotion Induction for Multiple Languages as a Deep Multi-Task Learning Problem”, 2018, 1908.

8 Rui Fan et al., “Anger Is More Influential Than Joy: Sentiment Correlation in Weibo”, accessed June 25, 2019.

9 Stefan Stieglitz and Linh Dang-Xuan, “Emotions and Information Diffusion in Social Media—Sentiment of Microblogs and Sharing Behavior”, Journal of Management Information Systems 29, no. 4 (April 8, 2013): 217–48.

10 Adam D I Kramer, Jamie E Guillory, and Jeffrey T Hancock, “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks”, Proceedings of the National Academy of Sciences of the United States of America 111, no. 24 (June 17, 2014): 8788–90.

11 W. Lance Bennett and Alexandra Segerberg, “The Logic of Connective Action”, Information, Communication & Society 15, no. 5 (June 2012): 752–54.

The ‘dark side’ of digital diplomacy: countering disinformation and propaganda (2019-01-15)

For diplomatic institutions, protecting themselves against disinformation and propaganda by governments and non-state actors remains a problem. This paper outlines five tactics that, if applied with a strategic compass in mind, could be helpful for MFAs and embassies.




The ‘dark side’ of digital diplomacy, that is, the strategic use of digital technologies as tools of disinformation and propaganda by governments and non-state actors, has exploded in recent years, putting the global order at risk. Governments are stepping up their law enforcement efforts and digital counter-strategies to protect themselves against disinformation, but for resource-strapped diplomatic institutions this remains a major problem. This paper outlines five tactics that, if applied consistently and with a strategic compass in mind, could help MFAs and embassies cope with disinformation.


Like many other technologies, digital platforms come with a dual-use challenge: they can be used for peace or war, for good or evil, for offence or defence. The same tools that allow Ministries of Foreign Affairs (MFAs) and embassies to reach out to millions of people and build ‘digital’ bridges with online publics in order to enhance international collaboration, improve diaspora engagement, stimulate trade relations or manage international crises can also be used as a form of “sharp power” to “pierce, penetrate or perforate the political and information environments in the targeted countries”, and in so doing to undermine the political and social fabric of those countries.1 The ‘dark side’ of digital diplomacy, by which I refer to the strategic use of digital technologies as tools of disinformation and propaganda by governments and non-state actors in pursuit of strategic interests, has expanded in recent years to the point that it has started to have serious implications for the global order.2

For example, more than 150 million Americans were exposed to the Russian disinformation campaign prior to the 2016 presidential election, almost eight times the number of people who watched the evening news broadcasts of the ABC, CBS, NBC and Fox stations in 2016. A recent report prepared for the US Senate found that Russia's disinformation campaign around the 2016 election used every major social media platform to deliver words, images and videos tailored to voters' interests in order to help elect President Trump, and allegedly worked even harder to support him once in office.3 Russian disinformation campaigns have also been highly active in Europe4, primarily seeking to amplify social tensions in various countries, especially in situations of intense political polarisation, such as during the Brexit referendum, the Catalonian separatist vote5 or the more recent ‘gilets jaunes’ protests in France.6

Worryingly, the Russian strategy and tactics of influencing politics in Western countries by unleashing a “firehose of falsehoods” of online disinformation, fake news, trolling and conspiracy theories have started to be imitated by other (semi-)authoritarian countries, such as Iran, Saudi Arabia, the Philippines, North Korea and China, a development which is likely to drive more and more governments to step up their law enforcement efforts and digital counter-strategies to protect themselves against the ‘dark side’ of digital diplomacy.7 For resource-strapped governmental institutions, especially embassies, this is clearly a major problem, as, with a few exceptions, many simply do not have the necessary capabilities to react to, let alone anticipate and pre-emptively contain, a disinformation campaign before it reaches them. To help embassies cope with this problem, this contribution reviews five tactics that digital diplomats could use separately or in combination to counter digital disinformation and discusses the limitations these tactics may face in practice.

Five counter-disinformation tactics for diplomats

Tactic #1: Ignoring

Ignoring trolling and disinformation is often the default option for digital diplomats working in embassies, and for good reason. The tactic can keep the discussion focused on the key message, it may prevent escalation by denying trolls the attention they crave, it can deprive controversial issues of the ‘oxygen of publicity’, and it may serve to psychologically protect digital diplomats from verbal abuse or emotional distress. The digital team of the current US Ambassador in Russia seems to favour this tactic, as they systematically steer away from engaging with their online critics (see Fig 1, left column). This approach stands in contrast with the efforts of the former Ambassador, Michael McFaul, who often tried to engage online with his followers and to explain his country's position on various political issues to Russian audiences, only to be harshly rebutted by the Russian Ministry of Foreign Affairs (MFA) or by online users (see Fig 1, right column).

Fig 1: To ignore or not to ignore: US Ambassadors’ communication tactics in Russia

Current U.S. Ambassador in Russia, Jon H. Huntsman pays tribute to the Soviet dissident and human rights activist, Lyudmila Alekseeva

[Automatic translation: “Three days ago, the world lost one of the most dedicated human rights defenders. Today many colleagues and associates of Lyudmila Alekseeva paid tribute to the memory of a woman who devoted more than 50 years of her life to the protection of human rights”]

Former U.S. Ambassador in Russia, Michael McFaul engaging on Twitter with the Russian MFA as well as with one of his followers

Tweet by Michael McFaul: “@MFA_Russia My HSE talk highlighted over 20 positive results of "reset," that our governments worked together to achieve.”

Tweet by Michael McFaul: “@Varfolomeev thank you for this information. Still learning the craft of speaking more diplomatically.”

Source: @USEmbRu, @MFA_Russia, @McFaul (12) and @Varfolomeev on Twitter (captures of the tweets by @MFA_Russia and @Varfolomeev from author’s archive).

At the same time, one should be mindful that the ignore tactic may come at the price of letting misleading statements go unchallenged, of indirectly encouraging more trolling through the perceived display of passivity, and of missing the opportunity to confront a particularly damaging story in its nascent phase, before it grows into a full-scale viral phenomenon with potentially serious diplomatic ramifications.

Tactic #2: Debunking

In the post-truth era, fact-checking is “the new black”, as the manager of the American Press Institute's accountability and fact-checking program neatly described it.8 Faced with an avalanche of misleading statements, mistruths and ‘fake news’, often disseminated by people in positions of authority, diplomats, journalists and the general public require access to accurate information in order to make reliable decisions. It thus makes sense for embassies and MFAs to seek to correct false or misleading statements and to use factual evidence to protect themselves and the policies they support from deliberate and toxic distortions. The #EuropeUnited campaign launched by the German MFA in June 2018 in response to the rise of nationalism, populism and chauvinism is supposed to do exactly that: to correct misperceptions and falsehoods spread online about Europe by presenting verifiable information about what European citizens have accomplished together as members of the European Union.9

Fig 2: #EuropeUnited campaign by the German MFA

The key question, however, is whether fact-checking actually works and, if so, under what conditions. Research shows that misperceptions are widespread, that elites and the media play a key role in promoting these false and unsupported beliefs10, and that false information actually outperforms true information.11 Providing people with sources that share their point of view, introducing facts via well-crafted visuals, and offering an alternative narrative rather than a simple refutation may help dilute the effect of disinformation, though not eliminate it completely. While real-time fact-checks can reduce the potential for falsehoods to ‘stick’ to the public agenda and go viral, direct factual contradictions may actually strengthen ideologically grounded beliefs, as those exposed to disinformation may extract certain emotional benefits from it.12 This is why using emotions in addition to facts may prove a more effective solution for countering online disinformation, although the right format of fact-based emotional framing arguably varies with the context of the case and the profile of the audience.

Tactic #3: Turning the tables

The jiu-jitsu principle of turning the opponent's strength into a weakness may also work well when applied to counter-disinformation strategies. The use of humour in general, and of sarcasm in particular, can be reasonably effective for enhancing the reach of the message, deflecting challenges to one's narrative without alienating the audience, avoiding emotional escalation, and undermining the credibility of the source.13 The case of the Israeli embassy in the US using a “Mean Girls” meme in June 2018 to confront Ayatollah Ali Khamenei's hateful tweet about Israel being a “malignant cancerous tumour” that “has to be removed and eradicated” is instructive: it was widely shared and praised on social media and proved effective in calling attention to Israel's plea for a harsher international stance towards Iran. On a slightly different note, the sarcastic tweet of the joint delegation of Canada to NATO in August 2014, poking fun at the Russian government's statements about its troops entering Crimea by “mistake”, showcased Canada's commitment to European security and the NATO alliance and further undermined the credibility of the Kremlin in the eyes of Western public opinion.

Fig 3: Using humour to discredit opponents and their policies

While memetic engagement is attracting growing attention as a possible tool for countering state and non-state actors in the online information environment, one should also bear in mind the potential risks and limitations associated with this tactic.14 It is important, for instance, to understand the audience well, not only to increase the effectiveness of the memetic campaign, but more critically to avoid embarrassing situations in which the appeal to humour falls flat or even backfires, thus undermining one's own narrative and standing. The overuse of memes and humour may also work against public expectations of diplomatic conduct, which generally revolve around requirements of decorum, sobriety and gravitas. Most importantly, memetic engagement should not be conducted loosely, merely to entertain the audience, but with clear objectives in mind about how to enhance the visibility of one's positions or policies and/or undermine those of the opponent.

Tactic #4: Discrediting

A stronger version of the jiu-jitsu principle mentioned above is the tactic of discrediting the opponent. The purpose in this case is not to undermine the credibility of the message, but that of the messenger itself, so that the audience comes to realise that whatever messages come from a particular source, they cannot be trusted. This tactic should be considered very carefully and used only in special circumstances, as it would most likely lead to an escalation of the online information dispute and would probably trigger a harsh counter-reaction from the opponent. The way this tactic may work is by turning the opponent's communication style against itself: amplifying contradictions and inconsistencies in the message, exposing the pattern of falsehoods disseminated through the opponent's channels of communication, and maximising the impact of the counter-narrative via the opponent's ‘network of networks’.

Fig 4: FCO campaign to discredit Russian MFA as a credible source

Following the failed assassination attempt on Sergei Skripal and his daughter in March 2018, pro-Kremlin accounts on Twitter and Telegram started to promote a series of conspiracies and competing narratives, attached to various hashtags and social media campaigns, with the goal, as one observer noted, of confusing people, polarising them, and pushing them further and further away from reality.15 In response, the FCO launched a vigorous campaign that took advantage of the Russian attempt to generate confusion about the incident, forcefully making the point that the 20+ different explanations offered by the Kremlin and Russian sources, including the story that the assassination might have been connected to Mr Skripal's mother-in-law, made absolutely no sense, and that therefore whatever claim Russian sources might make, they could not be trusted. While the campaign proved effective in further undermining the credibility of the Kremlin as a trustworthy source and in convincing partners to back up the UK's position in international fora, it should nevertheless be noted that the bar set by the Russian authorities after the invasion of Crimea and the shooting down of MH17 was already low. In addition, while the tactic of discrediting the opponent may work well to contain its influence online, it may do little to deter it from engaging in further disinformation as long as the incentives, and especially the costs, of pursuing this strategy remain unaltered.

Tactic #5: Disrupting

One way in which the costs of engaging in disinformation can be increased is by disrupting the network the opponent uses to disseminate disinformation online. This implies mapping the opponent's network of followers, tracing the particular patterns by which disinformation propagates through the network, and identifying the gatekeepers in the network who can facilitate or obstruct the dissemination of disinformation (e.g., nodes 4 and 5 in the network described in Fig 5). Once this is accomplished, the disinformation network can be disrupted by targeting gatekeepers with factual information about the case, encouraging them not to inadvertently promote ‘fake news’ and falsehoods, and, in extreme situations, by working with representatives of digital platforms to isolate gatekeepers who promote hate and violence.

Fig 5: Mapping the disinformation network

Sample node map

The Israeli foreign ministry has been one of the MFAs applying this tactic, in its case to stop the spread of anti-Semitic content. The ministry starts by identifying gatekeepers and ranking them by their level of online influence.16 It then begins approaching and engaging with them online, with the purpose of making them aware that they sit at an important junction of hate speech. The ministry then attempts to cultivate relationships with these gatekeepers so that they may refrain from sharing hateful content online. In so doing, the ministry can effectively contain or quarantine online hate networks and prevent their malicious content from reaching a broader audience.
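The gatekeeper-identification step described above can be sketched in a few lines of Python. The toy network, node labels and scores below are illustrative assumptions (loosely echoing the bridging role of nodes 4 and 5 in Fig 5), not data from any ministry's actual tooling; the score is a crude betweenness-style count of how often an account sits on shortest paths between other accounts.

```python
from collections import deque

# Toy directed "disinformation network": a source account (node 0)
# whose content can reach audience accounts only via a few intermediaries.
graph = {
    0: [1, 2, 3],            # source account
    1: [4], 2: [4], 3: [5],  # first-ring amplifiers
    4: [6, 7, 8],            # bridge to one audience cluster
    5: [8, 9, 10, 11],       # bridge to another audience cluster
    6: [], 7: [], 8: [], 9: [], 10: [], 11: [],
}

def shortest_path(g, start, goal):
    """BFS shortest path from start to goal; None if unreachable."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in g[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

def gatekeeper_scores(g):
    """Count how often each node sits strictly between two other nodes
    on a shortest path (a simplified betweenness measure)."""
    scores = {n: 0 for n in g}
    for s in g:
        for t in g:
            if s == t:
                continue
            path = shortest_path(g, s, t)
            if path:
                for mid in path[1:-1]:  # exclude the endpoints
                    scores[mid] += 1
    return scores

scores = gatekeeper_scores(graph)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:2])  # → [4, 5]: the two bridge accounts
```

Accounts with the highest scores are the natural targets for the engagement or quarantine efforts described above; a real analysis would use a dedicated network library and observed follower or retweet data rather than a hand-built adjacency list.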

If properly implemented, this tactic could indeed significantly increase the costs of disseminating disinformation, as opponents need to constantly protect and, where necessary, rebuild their network of gatekeepers. They may also have to frequently re-configure the patterns by which they disseminate disinformation to their target audiences. At the same time, this tactic requires specialised skills for successful design and implementation, which might not be available to many embassies or even MFAs. The process of engineering the disruption of a disinformation network also prompts important ethical questions about how to ensure this tactic is not abused to stifle legitimate criticism of the ministry or the embassy.


As argued elsewhere, digital disinformation against Western societies works by focusing on exploiting differences between EU media systems (strategic asymmetry), targeting disenfranchised or vulnerable audiences (tactical flexibility), and deliberately masking the sources of disinformation (plausible deniability). The five tactics outlined in this paper may help MFAs and embassies better cope with these challenges if applied consistently and with a strategic compass in mind. Most importantly, they need to be carefully adapted to the context of the case in order to avoid unnecessary escalation. Here are ten questions that may help guide reflection about how to decide what tactic is appropriate to use and in what context:

  • What type of counter-reaction would reflexively serve to maximise the strategic objectives of the opponent?
  • What are the risks of ignoring a trolling attack or disinformation campaign? 
  • What type of disinformation has the largest potential to have a negative political impact for the embassy or the MFA? 
  • To what extent will giving the “oxygen of publicity” to a story make the counter-reaction more difficult to sustain?
  • What audiences are most open to persuasion via factual information? What audiences are less open to being convinced by facts?
  • What type of emotions resonate with the audience in specific contexts and how to invoke them appropriately as a way of introducing factual information?
  • What type of humour works best with the target audience and how should one react when humour is used against you?
  • How best to leverage the contradictions and inconsistencies in the opponent’s message without losing the moral ground?
  • Who are the gatekeepers in the opponent’s network of followers and to what extent can they be convinced to refrain from sharing disinformation online?
  • Under what conditions is it reasonable to escalate from low-scale counter-reactions (ignoring, debunking, ‘turning the tables’) to more intense forms of tactical engagement (discrediting, disrupting)?

Corneliu Bjola
Head of the Oxford Digital Diplomacy Research Group (#DigDiploROx)
 | @CBjola

1 Christopher Walker and Jessica Ludwig, “The Meaning of Sharp Power”, Foreign Affairs, November 26, 2017.

2 “The dark side of digital diplomacy”, in Corneliu Bjola and James Pamment (Eds.), Countering Online Propaganda and Extremism, Routledge (2018).

3 Craig Timberg and Tony Romm, “New Report on Russian Disinformation, Prepared for the Senate”, The Washington Post, December 17, 2018.

4 Corneliu Bjola and James Pamment, “Digital containment: Revisiting containment strategy in the digital age”, Global Affairs, Volume 2, 2016.

5 Robin Emmott, “Spain sees Russian interference in Catalonia”, Reuters, November 13, 2017.

6 Carol Matlack and Robert Williams, “France Probes Possible Russian Influence on Yellow Vest Riots”, Bloomberg, December 8, 2018.

7 Daniel Funke, “A guide to anti-misinformation actions around the world”, Poynter, October 31, 2018.

8 Jane Elizabeth, “Finally, fact-checking is the new black”, American Press Institute, September 29, 2016.

9 Foreign Minister Heiko Maas, “Courage to Stand Up for Europe”, Federal Foreign Office, June 23, 2018.

10 D.J. Flynn, Brendan Nyhan and Jason Reifler, “The Nature and Origins of Misperceptions”, Dartmouth College, October 31, 2016.

11 Soroush Vosoughi, Deb Roy and Sinan Aral, “The spread of true and false news online”, Science, Volume 359, March 9, 2018.

12 Jess Zimmerman, “It’s Time to Give Up on Facts”, Slate, February 8, 2018.

14 Vera Zakem, Megan K. McBride and Kate Hammerberg, “Exploring the Utility of Memes for U.S. Government Influence Campaigns”, Center for Naval Analyses, April 2018.

15 Joel Gunter and Olga Robinson, “Sergei Skripal and the Russian disinformation game”, BBC News, September 9, 2018.

16 Ilan Manor, “Using the Logic of Networks in Public Diplomacy”, USC Center on Public Diplomacy Blog, January 31, 2018.

Diplomacy in the Digital Age (2018-10-11)

Diplomacy in the Digital Age depends on how diplomats understand and transform online influence into tangible offline diplomatic influence.




The core mission of diplomacy in the Digital Age is still to find the middle ground among the broadest possible audience, but doing so has several prerequisites. This ARI analyses three case studies showing that successful digital diplomacy requires a keen understanding of the online space in which the digital diplomat operates, a competent strategy for building and managing a well-designed ‘network of networks’ of followers and influencers, and a pro-active approach to connecting digital diplomatic outputs to tangible foreign policy outcomes, so that online influence can be converted into offline diplomatic influence (actions and policies).


Commenting on the challenges that the Digital Age has created for the craft of diplomacy, the former US Secretary of State, John Kerry, provocatively remarked that “the term digital diplomacy is almost redundant – it's just diplomacy, period”. For Kerry, digital technologies in general, and social media in particular, do help advance states' foreign policy objectives, bridge gaps between people across the globe, and engage with people around the world, but ultimately they fulfil the same core diplomatic function, that is, to create dialogue and find common ground among the broadest possible audience. After all, he claimed, “that's what diplomacy’s all about”.1

Interestingly, Kerry made this remark in 2013, before the ‘dark side’ of digital technologies had had the chance to reveal itself in various forms of digital disinformation, propaganda and information warfare. Five years later, it is worth asking whether Kerry's statement still resonates: is digital diplomacy still capable of finding common ground and, if so, how exactly? Three issues need to be unpacked to address this question. First, what are the main features of the process of digital transformation and why should we take them seriously? Second, how have these features influenced the practice of diplomacy, both for better and for worse? And third, what lessons can we draw from existing cases of good practice in digital diplomacy, and to what extent can these lessons be generalised to the digital activities of other embassies and Ministries of Foreign Affairs (MFAs)?

Going Digital

The rise of digital diplomacy in the past decade cannot be separated from the technological context in which it has developed. Three features of the process of digital transformation stand out, among others, for understanding the evolution of digital diplomacy and the challenges it continues to face under the influence of the changing technological landscape. Speed is the first, and refers to the fast rate at which new digital technologies enter the market and the swiftness with which they are adopted by individuals, companies and institutions. For example, it took the telephone 75 years to reach 100 million users worldwide, but the mobile phone took only 16 years, and its most popular app, Facebook, only four and a half years, to pass the same milestone.2 It is worth recalling that the mass adoption of smartphones and the spread of mobile internet were made possible by the launch of the third generation of wireless mobile telecommunications technology (3G) in the early 2000s. With the arrival of 5G technology in the next few years, a fresh stream of digital technologies (mixed reality, artificial intelligence, blockchain, digital twinning) is expected to become widely available and to accelerate the pace of information exchange, social interaction, digital innovation and public entrepreneurship.

The second important feature is the cognitive impact of the process of digital transformation. More specifically, the way we use digital technologies to interact with others is not limited to an instrumental, means-ends mode of engagement; it also reshapes the cognitive settings that we rely on for defining our own social identities and even for making sense of social reality. In fact, the digital medium represents a new language in which the semantic work of traditional nouns, adjectives and verbs is now done by the type of data we share, by the growing role of emotions and visuals (and, in the near future, of Augmented Reality/Virtual Reality (AR/VR) simulations) in framing the messages we communicate, and by the (opaque) patterns by which algorithms structure our interactions with online audiences. By intimately influencing the way social relations are conducted online, the digital medium thus has a potentially transformative impact on the offline interests and values of social actors and, in extreme situations, on their epistemological understandings of social reality, as evidenced by the unsettling ascent of ‘post-truth’ politics in recent years.

Third, Big Data, the ‘bloodstream’ of the digital revolution, has become the most valuable commodity of our age due to its capacity to capture, predict and potentially shape behavioural patterns. It is expected, for instance, that by 2025 the global data sphere will grow to 163 zettabytes (a zettabyte is a trillion gigabytes), ten times the 16.1 ZB of data generated in 2016.3 To put things into perspective, every two days we create as much information, the former Google CEO Eric Schmidt once claimed, as we had done from the dawn of civilisation up until 2003: roughly five exabytes of data (or 0.005 ZB).4 Big data analytics can provide a better understanding of the main issues of concern for online audiences, of the cognitive frames and emotional undertones that enable audiences to connect with a particular message, and of the dominant patterns of formation and evolution of online communities. At the same time, this massive process of data generation increases the competition for attention in the online space and stimulates demand for the new skills and algorithmic tools necessary for filtering, processing and interpreting relevant data.
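As a quick back-of-envelope check (my arithmetic, not a calculation from the cited reports), the figures quoted above are mutually consistent:

```python
# Projected growth of the global data sphere, in zettabytes (ZB).
ZB_2016 = 16.1
ZB_2025 = 163.0
print(round(ZB_2025 / ZB_2016, 1))  # → 10.1, i.e. roughly tenfold

# Schmidt's "five exabytes up to 2003", expressed in zettabytes.
EB_PER_ZB = 1000
print(5 / EB_PER_ZB)  # → 0.005
```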

Institutional adaptation

Driven by the opportunities that the digital revolution has created for engaging with millions of people, in real time and at minimal cost, foreign ministries, embassies and diplomats have developed a constellation of new tools and methods in support of their activities. These range from the use of dedicated platforms for engaging with foreign publics and diaspora communities, to communication with nationals in times of international crisis, to the development of consular applications for smartphones.5 Intriguingly, but not entirely unpredictably, the features that have enabled the ‘digital turn’ in diplomacy have also generated several challenges for its practice. The costs of access to the public space have been dramatically lowered by the arrival of digital platforms, to the extent that MFAs now need to compete for the public's attention with a wide range of state and non-state actors, not all of them friendly. Digital tools facilitate engagement between MFAs and embassies and foreign publics, but, at the same time, their adoption and use without a strategic compass runs the risk of digital public diplomacy becoming decoupled from foreign policy. Digital platforms also create conditions for more rigorous assessment of the online impact of digital strategies, but such metrics may prove misleading for understanding the broader implications and levels of success of foreign policy.

It is also important to recognise that digital platforms do not simply add value to pre-designed communication strategies; they subtly inform and re-shape the norms of communication, engagement and decision-making on which diplomats base their work. Transparency, decentralisation, informality, interactivity and real-time management are critical norms for ensuring the effectiveness of digital activity, but they do not always sit well with MFAs' institutionally entrenched preferences for confidentiality, hierarchy, instrumentality and top-down decision-making. In addition, while diplomatic communication has traditionally been embedded in a text-oriented culture that has favoured ‘constructive ambiguity’ over precision, politeness over frankness, reason over passion, and confidentiality over transparency, the arrival of digital technologies has infused the public sphere in which diplomacy operates with a set of new features (e.g., direct and concise language, visual storytelling, emotional framing, algorithmic navigation) that challenge the way diplomatic engagement is expected to take place.

Like many other technologies, social media platforms come with a dual-use challenge: they can be used for peace or war, for offense or defense, for good or evil. By decentralizing and diffusing power away from traditional stakeholders (states and governments), digital technologies can empower the powerless, as happened during the Arab Spring, or they can be deliberately weaponized to undermine the social fabric of modern societies, as in the cases of foreign electoral subversion or the hate speech of extremist groups. The algorithmic dissemination of content and the circumvention of traditional media filters and opinion-formation gatekeepers make disinformation spread faster, reach deeper, carry more emotional charge and, most importantly, prove more resilient, owing to the confirmation bias that online echo-chambers enable and reinforce.6 To contain the ‘dark side’ of digital technologies and create a normative environment conducive to reconciliation, MFAs and embassies need to collaborate with tech companies to support media literacy and source criticism, encourage institutional resilience, and promote clear and coherent strategic narratives capable of containing the corrosive effect of disinformation and post-truth politics.

From theory to practice

To better understand the influence of the digital medium on diplomatic communication, let us compare and examine the activity of three prominent digital diplomats: Dave Sharma, the Australian Ambassador to Israel between 2013 and 2017; Euripides L. Evriviades, the High Commissioner for the Republic of Cyprus to the United Kingdom of Great Britain and Northern Ireland since 2013; and Jorge Heine, the Chilean Ambassador to the People’s Republic of China between 2014 and 2017. All three have used social media platforms, especially Twitter, quite extensively in their work and with considerable success, as illustrated, for instance, by their large number of followers and the intensity of digital interaction (number of likes, retweets and responses). What makes their case particularly interesting is that all three diplomats represent medium-size countries, which means they need to do extra work to receive a level of attention from the online public similar to that enjoyed by their American, Russian, British, French or Chinese colleagues, who organically benefit from the long diplomatic shadow and global influence of their countries. It is therefore important to investigate how the three diplomats have used digital platforms in their work and how well they have coped with the competitive pressure of the digital environment. For reasons of space, the following discussion focuses only on their Twitter activity. It is also worth mentioning that all three diplomats have personally managed their Twitter accounts, a fact that highlights the importance they attach to this channel of communication.7

The first observation to note is the consistency of their digital agenda (see figure 1), which mainly covers diplomatic, economic and cultural issues. This is exactly what diplomats are supposed to talk about when posted abroad, so the finding is not particularly surprising. The more interesting aspect is the different weight the three diplomats assign to these topics, which gives an indication of their specific priorities. High Commissioner Evriviades is more interested, for instance, in political and diplomatic affairs, which makes good sense in the context of the ongoing Brexit negotiations and Cyprus’s current regional security concerns. Ambassadors Heine and Sharma take a more balanced approach and comment on additional issues (tourism, environment, science, technology) alongside political and economic aspects, as a basis for developing the diplomatic partnership with the host country.

Figure 1. Digital Agenda. Source: the author.

A key component of digital influence is the ‘network of networks’ that digital diplomats are expected to build and manage online so that they can firmly establish and enhance their online presence. The ‘network of networks’ may include policy-makers, journalists, academics, diplomats, business people and diaspora leaders who take an active interest in the positions and policies of the country represented by the embassy. The more diverse, the larger and the more connected these networks are, the stronger their ability to extend themselves in multiple configurations and, by extension, the greater the influence of digital diplomats.8 From a network perspective, all three diplomats enjoy a rather diverse group of followers, but they engage rather differently with their audiences (see figure 2). Ambassador Sharma pays primary attention to the media, High Commissioner Evriviades engages preferentially with fellow diplomats, while Ambassador Heine seems to enjoy the online company of academics. These approaches reflect the preferred strategy of each individual for developing his broader network of contacts and influencers by relying on a personal strength: communication skills in the case of Ambassador Sharma, networking abilities in the case of High Commissioner Evriviades, and a well-respected academic profile in the case of Ambassador Heine.

Figure 2. Digital Networks. Source: the author.
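The ‘network of networks’ logic above can be illustrated with a small sketch. The follower samples and category labels below are purely hypothetical stand-ins (not the study’s actual data); the diversity measure is a standard Shannon entropy over the audience categories named in the text.

```python
from collections import Counter
from math import log2

# Hypothetical follower samples (NOT the study's data) for three accounts,
# using the audience categories discussed in the text.
followers = {
    "sharma":     ["media"] * 40 + ["diplomat"] * 20 + ["academic"] * 15 + ["business"] * 25,
    "evriviades": ["diplomat"] * 45 + ["media"] * 25 + ["policy"] * 20 + ["diaspora"] * 10,
    "heine":      ["academic"] * 50 + ["media"] * 20 + ["diplomat"] * 15 + ["business"] * 15,
}

def diversity(sample):
    """Shannon entropy over follower categories: higher means a more
    diverse (and, per the argument above, more influential) network."""
    counts = Counter(sample)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

for account, sample in followers.items():
    dominant = Counter(sample).most_common(1)[0][0]
    print(f"{account}: dominant group = {dominant}, diversity = {diversity(sample):.2f} bits")
```

On this toy data the dominant follower group matches each diplomat’s preferred engagement strategy, while the entropy score captures the ‘more diverse, more influential’ intuition in a single number.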

The digital style of the three diplomats is also worth examining, as it may provide useful clues about the conditions for success or failure in adapting diplomatic communication to the characteristics of the digital medium discussed above. All three diplomats have clearly understood the importance of visuals in digital communication, relying on images to emphasise their points 60-80% of the time (see figure 3). To a lesser degree, they have also grasped the role of emotions, as illustrated by their moderate use of positive and occasionally uplifting language. Ambassador Sharma stands out for the use of humour and original tweets, an approach that resonates well with his audience. High Commissioner Evriviades, interestingly, favours tweets with sophisticated intellectual content, which appear to serve the function of sending indirect signals to target audiences on controversial topics. Finally, Ambassador Heine is the only one who tweets bilingually, in English and Spanish, to ensure that domestic audiences back home stay well informed about his diplomatic activity and continue to support his mandate in the Chinese capital.

Figure 3. Digital Style. Source: the author.


Echoing Secretary Kerry’s observation, we can conclude that the core mission of diplomacy in the Digital Age is still to find the middle ground. What has changed is the context in which this mission is to be accomplished: new digital technologies significantly broaden the spectrum of actors that can take part in and influence the diplomatic conversation, reshape the “grammar rules” and institutional norms that guide online diplomatic engagement, and open the door to the use of digital tools for disrupting the middle ground via disinformation and propaganda. As the three case studies have shown, successful digital diplomacy requires a keen understanding of the online space in which the digital diplomat operates, a competent strategy for building and managing a well-designed ‘network of networks’ of followers and influencers, and a pro-active approach to connecting digital diplomatic outputs to tangible foreign policy outcomes, so that online influence can be converted into offline diplomatic influence (actions and policies).

Dr. Corneliu Bjola
Head of the Oxford Digital Diplomacy Research Group (#DigDiploROx)
| @CBjola

1 Kerry J. (2013), ‘Digital Diplomacy: Adapting Our Diplomatic Engagement’, DipNote U.S Department of State Official Blog, 6/V/2013,

2 Dreischmeier, R., Close K., and Trichet, P. (2015), ‘The Digital Imperative’,  Boston Consulting Group, 2/III/2015,

3 Reinsel, D., Gantz J. and Rydning J. (2017), ‘Data Age 2025: The Evolution of Data to Life-Critical’, International Data Corporation, April 2017,

4 Siegler, MG. (2010), ‘Eric Schmidt: Every 2 Days We Create as Much Information as We Did up to 2003’, TechCrunch,

5 Bjola, C. (2017), ‘Digital diplomacy 2.0 pushes the boundary’, Global Times, 5/XI/2017,

6 Bjola, C. (2018), ‘Propaganda in the digital age’, Global Affairs, no. 3 (3): 189.

7 The data for this study was collected in March 2017. Subsequent interviews with the three diplomats were conducted between April 2017-Sept 2018.

8 Bjola, C. (2018), ‘Digital Diplomacy From tactics to strategy’, The Berlin Journal, nr 32, Fall 2018, p. 78-81,

<![CDATA[ Coercion and Cyberspace ]]> 2018-09-11T02:06:46Z

Cyberspace is a new domain for coercive operations in support of foreign policy and security with advantages for offensive actions and hindrances to its success.


This ARI provides an overview of factors crucial to our understanding of coercive cyber operations as the exercise of power through cyberspace in order to coerce an adversary into a particular course of action. It is focused on the compellent actions of state actors, though they, and non-state actors, may carry out deterrent actions as well. The first section presents the fundamentals of coercion. The second frames coercion in the context of cyberspace and surfaces the characteristics of the domain that enable it. Finally, the third establishes the causes behind coercive failure and, inversely, success.


Over the past decade, cyber operations have been increasingly employed as coercive instruments of foreign policy. From the Bronze Soldier incident between Russia and Estonia in 2007 to the long-standing dispute on the Korean peninsula, cyber operations are exercised in the hope of altering an adversary’s behavior. Yet despite such optimism, less than 5% of these operations have achieved their intended objectives.1 Paradoxically, states continue to engage in coercive behavior in cyberspace despite its seeming inefficacy. This raises two important questions. First, how are cyber operations instruments of coercion? Second, what accounts for their limited outcomes?

Coercive cyber operations are not exempt from the principles that enable coercive interstate behavior. Coercion is commonly understood as “the threat of damage, or of more damage to come, that can make someone yield or comply” (Schelling, 1966). Unfortunately, the concept is muddled by the lack of a clear operational definition. Typically, the characterizations proposed by either Schelling or George (1991) are adopted.2 And while most agree that deterrence refers to the use of threats to dissuade an adversary from engaging in an undesired action, the debate centers on whether the threat or limited use of force to alter an adversary’s behavior ought to be referred to as compellence or coercive diplomacy.

Schelling treats compellence as “a threat intended to make an adversary do something” and does not distinguish between a reactive or proactive use of force in order to influence an adversary’s behavior. He assumes the presence of a unitary rational actor behaving in a manner that maximizes gains while minimizing losses. George, in contrast, frames coercive diplomacy as a narrower and reactive response to an adversary’s actions. Whereas Schelling offers a parsimonious account grounded in rational choice, George offers a more nuanced and context-dependent explanation of the phenomenon. In recent years, a growing number of studies have started to use the term (military) coercion in place of either compellence or coercive diplomacy.3

With respect to coercive cyber operations, the umbrella term of coercion suits the phenomenon for three reasons. First, the proactive or reactive nature of compellence fits the image of cyber tools being preemptively deployed on an adversary’s system; fear of such deployment may convince an adversary to reconsider its actions. Second, coercive cyber operations often take place during on-going regional disputes.4 Their employment as one of a handful of instruments (i.e., military threats, economic sanctions, etc.) highlights the primacy of strategy in their use and, consequently, the importance of context as suggested by George. Finally, the restraint with which cyber capabilities are exercised reflects a degree of rationality on the part of coercers.

Yet despite its conceptual simplicity, coercive success is difficult to achieve. The outcome of coercion is contingent on the clear communication of a threat, suitable cost-benefit calculations, the credibility of the coercer, and reassurances from the coercer upon compliance. Although George identifies a host of other factors that contribute to the outcome of coercion, these may be consolidated into the above.

Unambiguous communication is the cornerstone of successful coercion. Adversaries must know what behavior needs to be modified, the time frame in which this needs to occur, and the costs/threats associated with compliance or resistance. Yet reality poses difficulties in clearly communicating threats. Systemically, the anarchic nature of the international system can result in misperception between states. Fearon (1995) posits that fragmentary information encourages misrepresentation and excess confidence during periods of conflict, which increases the possibility of war and, consequently, of coercive failure.5 Complementing this, cognitive biases may also encourage a breakdown in communication. Research demonstrates the use of pre-existing schemas in forming decisions about the behavior of other states.6 And while this heuristic serves to mitigate cognitive limitations, it increases the possibility of bias, resulting in misperception and sub-optimal judgements.

Successful coercion assumes the presence of a rational actor capable of evaluating the costs and benefits associated with resisting or conceding to a coercer. Although the importance of costs and benefits in determining the outcome of coercion is straightforward, a number of factors can cause this process to break down. Systemically, two complementary factors that result in such a failure are conspicuous compliance and the possibility that it invites further demands.7

First advanced by Schelling, conspicuous compliance is rooted in the argument that “the very act of compliance – of doing what is demanded – is more conspicuously compliant, more recognizable as submission under duress, than when an act is merely withheld in the face of a deterrent threat. Compliance is likely to be less casual, less capable of being rationalized as something that one was going to do anyhow.” Phrased simply, the act of conceding signals the weakness of an actor. Within an anarchic system in which each state must ensure its own survival, such a reading is not beyond reason, and it leads to the second point: complying with a previous demand can invite additional demands in the future.

As Schelling argues, “compellent threats tend to communicate only the general direction of compliance, and are less likely to be self-limiting, less likely to communicate in the very design of the threat just what, or how much, is demanded.... The assurances that accompany a compellent action— move back a mile and I won’t shoot (otherwise I shall) and I won’t then try again for a second mile—are harder to demonstrate in advance [than with deterrence], unless it be through a long past record of abiding by one’s own verbal assurances.” Although this statement highlights key differences between compellence and deterrence, its core argument remains that compliance with earlier threats does not guarantee the absence of future threats. Other actors may perceive previous concessions as an opportunity to improve their current international standing.

Apart from systemic factors that impinge on cost-benefit considerations, individual cognitive processes similarly affect the outcome of coercive threats. Prospect theory, which posits that losses are valued more heavily than gains, causes decision-makers to resist rather than comply even if the cost of the former is much higher than that of the latter.8 Additionally, coercion may also fail when the coercing actor incorrectly reads an adversary’s values and thus fails to impose a credible threat that triggers the required cost-benefit calculations.
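This mechanism can be made concrete with a toy calculation. The sketch below uses the Kahneman–Tversky value function with their commonly cited parameters (alpha ≈ 0.88, loss-aversion coefficient lambda ≈ 2.25); the payoff numbers are invented for illustration and are not drawn from any case discussed in this article.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theoretic value function: concave for gains, convex for
    losses, with losses weighted by the loss-aversion coefficient lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Complying is framed as a certain loss of standing; resisting is a gamble
# on whether the coercer follows through with a larger punishment.
comply = value(-11.5)                       # certain loss of 11.5
resist = 0.5 * value(-25) + 0.5 * value(0)  # expected loss of 12.5, worse on paper

# Expected value favors compliance, yet the convexity of the value
# function over losses makes the gamble of resisting feel less painful.
print(f"comply: {comply:.2f}, resist: {resist:.2f}")
```

Even though resisting carries the worse expected value, the target prefers the gamble: decision-makers are risk-seeking in the loss domain, which is precisely why coercive threats so often provoke resistance rather than concession.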

Besides clear communication and the imposition of costs, the outcome of coercion is further determined by the capability and resolve of the coercer to follow through. Talk is cheap and coercers must be able to demonstrate their ability to carry out threats should their demands not be met. While both capability and resolve are difficult to assess, the latter is particularly challenging. A coercer may fail to follow through with a threat for a number of reasons. These include, but are not limited to, grandstanding, lack of domestic support, or past failures to carry out threats. To demonstrate resolve, coercers resort to costly signaling that binds them to follow through with their intended actions.

Costly signals can be sent in one of two ways. First, states may choose to tie their hands and force themselves into a specific course of action should their demands not be met. Second, states can incur sunk costs, for example by forward-deploying armed forces to the border or severing diplomatic relations with their adversaries. Neither method, however, is without risk. Costly signaling increases the possibility of armed conflict by forcing states into an inflexible course of action.9 The idea is that the adversary realizes this possible outcome and would, in a timely manner, concede. This is predicated, however, on how well these signals are interpreted and on the outcome of the cost-benefit analysis.

Lastly, the coercer must be able to reassure an adversary that compliance results in the threat being rescinded. Relatedly, coercers must be able to offer an adversary a means of complying that minimizes damage to its reputation. Great powers, however, find this last requirement challenging given their inherent capabilities, which paradoxically reduce their credibility in the eyes of weaker adversaries. Power imbalances in favor of the coercer may be interpreted as a justification for further demands despite previous concessions. Thus, an adversary may find that resistance is a better course of action in the face of coercive threats.

Cyber Coercion: An Overview

If coercion is the exertion of pressure on an adversary by threatening something of value, then cyberspace is an ideal medium given its growing strategic value.10 Over the past decade, (broadband) connectivity has nearly tripled globally, and Information and Communication Technology (ICT) usage has grown rapidly (ITU, 2016). Although greater awareness, education and improvements in development processes have mitigated certain vulnerabilities, these continue to persist within critical systems. Fortunately, contextual factors such as the unique implementation of cyber infrastructure across states and the resources required to inflict persistent damage temper such concerns. Yet regardless of such reassurances, the fundamental structure of cyberspace assists, if not enables, coercive behavior.

Cyberspace is commonly treated as consisting of three key layers: the physical, the syntactic and the semantic.11 The physical layer consists of the hardware components that store, process and transmit electrical, optical or radio signals. Within this layer, vulnerabilities are subject to physical and environmental constraints, such as susceptibility to theft or to noise within the electromagnetic spectrum. A step above is the syntactic layer, through which the representation, processing, storage and transmission of data is governed by pre-defined rules or protocols. These provide the desired functionality and ensure interoperability between manufacturers. Vulnerabilities exist through flaws in the implementation of these protocols that may lead to unplanned and undesired outcomes. Finally, the semantic layer presents data in a form that is interpretable and useful to users. At this layer, conceptualizations of cyberspace vary greatly.

From what has been termed the “western consensus”, cyberspace ceases where information serves defined strategic goals such as economic growth. Other actors, in contrast, extend cyberspace to include the mental processes of individuals, such that both perception and behavior are influenced by available information, thus introducing another source of vulnerability.12 Yet regardless of this variation, it is important to note that each layer depends on the others for cyberspace to function. Consequently, this interdependence enables the exploitation of cyberspace to meet strategic objectives.

For advocates of coercive cyber operations, arguments are often grounded in the offensive advantage offered by the domain. An offensive advantage is an instance in which new technologies skew the relative difficulty of conducting offense and defense in favor of the former; specifically, new technologies are thought to increase the mobility and damage potential of offensive weapons vis-à-vis defensive ones. The machine gun and the combat aircraft, for instance, are thought to have provided aggressors with such an advantage. The interconnectivity between the components of cyberspace conceptually grants these advantages: the linkage between the physical, syntactic and semantic layers means that the disruption of a lower layer adversely affects those above it. Cutting an undersea cable, for instance, prevents the transmission, processing and receipt of information at the higher layers. Similarly, the corruption of data at the syntactic layer prevents its proper use at the semantic level.

In parallel to this cascading effect, the consequences are also magnified from layer to layer. The loss of communication from a cut cable immediately disrupts communication at the first two layers; but at the semantic layer, the loss of information may adversely affect specific strategic objectives, with a severity that increases over time. Consequently, the coercive potential of cyber operations is contingent on (1) their ability to cascade damage across layers, (2) the magnification of consequences, and (3) the persistence of the threat. And while offensive tools are accessible, those meeting these criteria require organizational maturity and significant economic resources.
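The cascading logic described above can be sketched as a toy dependency chain. The model below is a deliberate simplification, assuming each layer depends only on the layer directly beneath it:

```python
# Lower layers come first; each layer depends on the one below it.
LAYERS = ["physical", "syntactic", "semantic"]

def affected(disrupted_layer):
    """Return every layer knocked out by disrupting the given layer:
    the disruption propagates upward through the dependency chain."""
    start = LAYERS.index(disrupted_layer)
    return LAYERS[start:]

print(affected("physical"))   # a cut cable disables all three layers
print(affected("syntactic"))  # corrupted protocols spare the hardware
print(affected("semantic"))   # only the interpretation of data is touched
```

The asymmetry is the point: attacking lower layers buys a wider cascade, which is why degradative operations aimed at physical or syntactic targets carry the greatest coercive potential.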

While a standardized taxonomy of cyber operations remains elusive, actions in cyberspace may be categorized by intent as disruptive, espionage or degradative. As implied by the name, disruptive cyber operations aim to disturb the routine functions of their target; examples include website defacement and (Distributed) Denial-of-Service attacks. These operations do not require a significant amount of expertise or resources, as the necessary tools are readily available. This ease of use, however, comes at the cost of reduced severity and lack of persistence, as such threats are easily identified and contained. In contrast, espionage operations are meant to be persistent so as to allow the exfiltration of privileged information. As with their real-world namesake, they provide aggressors with an informational asymmetry over adversaries that may yield a strategic advantage in times of conflict. The use of this information to threaten an adversary, however, has a relatively long time horizon that limits its coercive value. Finally, degradative cyber operations are intended to damage or destroy an adversary’s cyber infrastructure in order to inhibit its strategic interests. These operations exploit the growing importance of cyberspace in sectors such as the military, the economy and other public services, and are designed to cause cascading effects with both technical and strategic consequences. Consequently, degradative cyber operations are ideal for coercion. The case of Stuxnet proves this point.
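The taxonomy can be compactly summarised as a small lookup table. The attribute ratings below are qualitative paraphrases of the paragraph above, not a standardized scale:

```python
# Qualitative summary of the three categories of cyber operations
# described in the text (author's paraphrase, not a formal taxonomy).
OPERATIONS = {
    "disruptive":  {"examples": ["website defacement", "DDoS"],
                    "expertise": "low",  "persistence": "low",  "coercive_value": "low"},
    "espionage":   {"examples": ["exfiltration of privileged information"],
                    "expertise": "high", "persistence": "high", "coercive_value": "limited"},
    "degradative": {"examples": ["Stuxnet"],
                    "expertise": "high", "persistence": "high", "coercive_value": "high"},
}

# Only degradative operations combine persistence with high coercive value.
suited = [name for name, attrs in OPERATIONS.items()
          if attrs["coercive_value"] == "high"]
print(suited)
```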

The features of Stuxnet allowed it to meet the three criteria established above. While it operated within the syntactic layer of the systems controlling Iranian nuclear centrifuges, it managed to affect both the physical and semantic layers as well. By manipulating the rate at which the centrifuges spun, it inflicted physical damage on the hardware. Similarly, by manipulating the protocols within the system, it sent false information (semantic) to operators suggesting that all was well, thus allowing it to persist. Strategically, the physical damage inflicted on the centrifuges limited the amount of weapons-grade fissile material produced, which in turn affected the Iranian regime’s nuclear weapons program. These features make Stuxnet, and operations like it, viable coercive tools – at least in theory.

In reality, however, Stuxnet and similar operations have resulted in coercive failure despite meeting the aforementioned criteria. Despite growing technical sophistication alongside a vulnerable cyberspace, coercive cyber operations are far less successful than expected. Yet their dismal performance may have less to do with technological constraints than with the organizational and strategic considerations associated with their execution.

Coercive Failure in Cyberspace and Its Future

To better understand the root causes of coercive failure of cyber operations, the attributes for successful coercion need to be revisited. In summary these are: clear communication of a threat, suitable cost-benefit calculations, the credibility of the coercer, and reassurances from the coercer upon compliance. While technological advancements allow aggressors to meet the contingent technological requirements for success, the above requirements are either infeasible or poorly understood in the context of cyberspace.

In order for coercion to be successful, an aggressor needs to be able to clearly communicate the threat. In cyberspace, this is easier said than done. Unlike conventional means, cyber operations do not come with a return address. The attribution problem associated with cyberspace limits the ability of targets to assess the source of an operation. While cyber operations are most frequently observed in the context of on-going disputes, uncertainty as to the identity of the aggressor muddies the message. What action should be stopped on the part of the target? Are we even certain that X is the source of the operation? Questions such as these hinder the communicative exchange between the coercer and the target and, in turn, limit the efficacy of coercion as a whole. And while experience now allows targets to move beyond the question of “who was behind it?” to “what do we do about it?”, the consequences of conceding or resisting remain a pressing issue. That is to say, knowing the identity of the coercer does not alleviate the other considerations involved in coercion.

The decision to comply or resist depends on the costs and benefits associated with either course of action. In the context of cyberspace, this decision is predicated on (1) how a target perceives the domain and (2) the larger strategic picture. As previously mentioned, there is no unified definition of what cyberspace is. Available research suggests that the value attributed to cyberspace depends on existing worldviews.13 Liberal regimes treat cyberspace as an enabler of economic growth and democratic values; illiberal regimes, in contrast, perceive it as a threat to their legitimacy. Consequently, the outcome of coercive cyber operations is contingent on the recognition and exploitation of these variations. At one end of the spectrum, cyber operations that threaten the banking sector of a target interested primarily in controlling online content will not generate sufficient cost to induce compliance. At the other end, threatening physical/bodily harm in order to limit freedom of speech in a society that values it would incur significant resistance. Over the past decade, the majority of coercive cyber operations appear to have fallen into one of these extremes, thus resulting in failure.

Assuming that threats are clearly communicated and correctly aligned, coercers must still demonstrate their resolve. In the physical domain this is easily done via clearly worded threats or demonstrations of force. Within cyberspace, demonstrating such capabilities affords targets the opportunity to develop the necessary countermeasures. Although Smeets and Lin (2018) argue that signaling resolve in this manner is unnecessary and that past actions should serve as demonstrations of capability, this is not sufficient for coercers that have only begun to use the domain for this purpose.14 Apart from burning cyber capabilities through demonstrations, other resources may be imperiled as well. Stuxnet, for instance, required not only advanced engineering skills but also an existing espionage network capable of delivering the malware over an air-gapped network. Its discovery and analysis would certainly have tipped off the Iranian regime to the presence of this network.

Finally, the success of coercion hinges on the ability of the coercer to provide guarantees that compliance will result in the cessation of threats. While a coercer may indeed stop coercive operations in exchange for compliance, this does not necessarily mean that other, non-coercive operations will cease. In light of the growing importance of cyberspace, cyber espionage appears to have become a routine occurrence between states. Yet while such activity is routinely accepted as normal interstate behavior, the tools and techniques required for espionage and for coercion (degradative cyber operations) are quite similar. Consequently, the discovery of such tools can lead the target to believe it is the object of a new coercive campaign despite its previous concessions. The inability to discern intent from the mere presence of these tools, combined with previous coercive behavior, fosters the target’s perception of malicious intent and reduces the chances of coercive success in the future.


Despite advances in capabilities and their growing frequency, the success of coercive cyber operations is not a foregone conclusion. Although states are increasingly dependent on the domain to achieve their strategic objectives, the exercise of coercion remains subject to long-standing strategic considerations. While certain scholars and pundits continue to espouse their revolutionary potential, cyber operations are fast becoming perceived as an adjunctive foreign policy instrument. Rather than being exercised independently, in the coming years cyber operations will serve as one of many means by which states pursue their strategic objectives coercively.

Miguel Alberto Gomez
Senior researcher at the Center for Security Studies, ETH Zurich
 | @mgomez85

1 Iasiello, E. (2013). Cyber attack: A dull tool to shape foreign policy. In K. Podins, J. Stinissen & M. Maybaum (Eds.), 2013 5th International Conference on Cyber Conflict (CyCon), IEEE, 451-470. Jensen, B., Maness, R. C., & Valeriano, B. (2016). Cyber Victory: The Efficacy of Cyber Coercion. Annual Meeting of the International Studies Association. Valeriano, B., & Maness, R. C. (2014). The dynamics of cyber conflict between rival antagonists, 2001-11. Journal of Peace Research, 51(3), 347-360.

2 Schelling, T. C. (1966). Arms and Influence. New Haven, CT: Yale University Press. George, A. L. (1991). Forceful persuasion: Coercive diplomacy as an alternative to war. US Institute of Peace Press.

3 Jakobsen, P. V. (2006). Coercive Diplomacy. In Collins, A. Contemporary Security Studies. Oxford: Oxford University Press, 225-247.

4 Valeriano, B., & Maness, R. C. (2015). Cyber war versus cyber realities: cyber conflict in the international system. Oxford; New York: Oxford University Press.

5 Fearon, J. D. (1995). Rationalist Explanations for War. International Organization, 49(3), 379-414.

6 Herrmann, R. K., Voss, J. F., Schooler, T. Y. E., & Ciarrochi, J. (1997). Images in international relations: An experimental test of cognitive schemata. International Studies Quarterly, 41(3), 403-433.

7 Schaub, G. (2004). Deterrence, compellence, and prospect theory. Political Psychology, 25(3), 389-411.

8 Kahneman, D. (2011). Thinking, fast and slow (1st ed.). New York: Farrar, Straus and Giroux.

9 Fearon, J. D. (1997). Signaling foreign policy interests: Tying hands versus sinking costs. Journal of Conflict Resolution, 41(1), 68-90.

10 Kuehl, D. T. (2009). From Cyberspace to Cyberpower: Defining the Problem. In F. D. S. Kramer, Stuart H.; Wentz, Larry (Ed.), Cyberpower and National Security. Dulles: Potomac Books, 24-42.

11 Libicki, M. C. (2009). Cyberdeterrence and cyberwar. Santa Monica: Rand Corporation.

12 Giles, K., & Hagestad, W. (2013). Divided by a Common Language: Cyber Definitions in Chinese, Russian and English. 2013 5th International Conference on Cyber Conflict (CyCon).

13 Hare, F. (2010). The Cyber Threat to National Security: Why Can't We Agree? Conference on Cyber Conflict, Proceedings 2010, 211-225. Rivera, J. (2015). Achieving Cyberdeterrence and the Ability of Small States to Hold Large States at Risk. 2015 7th International Conference on Cyber Conflict - Architectures in Cyberspace (CyCon), 7-24.

14 Smeets, M., & Lin, H. S. (2018). Offensive cyber capabilities: To what ends? 2018 10th International Conference on Cyber Conflict (CyCon), 65-71.

<![CDATA[ Cyber cells: a tool for national cyber security and cyber defence ]]> 2013-09-17T11:53:38Z


Theme[1]: Cyber cells are effective tools that enable countries to operate, defend themselves or go on the offensive in a specific area of cyberspace, and they are destined to complement existing cyber security and cyber defence capabilities.

Summary: Except for pioneering countries in cyber security and cyber defence such as the US, China and Israel, most nations are currently developing basic cybernetic capabilities: information and communications technologies and the organisations and procedures that will make them work once they reach maturity. When this happens, it will be necessary to devise the organisations and operational procedures –cyber cells– that allow countries to operate using those previously established capabilities. This paper describes the concept of cyber cells, their functions, tasks and areas of operation, as well as the enablers that will allow them to work. Although this is a next-generation capability that will complement those now being set up, the authors argue that Spain should consider what kind of cyber cells would best complement the cyber defence and cyber security capabilities being established for use by the military and the national security forces.

Analysis: After several decades shaped by spectacular technological development, a significant lack of attention from politicians and overconfidence among the general public about the power, impact, penetration and political, social and economic influence of information and communications technologies (ICT), most governments have begun to take note of both the possibilities and the risks that cyberspace entails. Cyber defence and cyber security strategies and organisations abound, and there are many recent studies on them.[2]

Cyber space was initially considered a global common good for all of humanity, but it is actually far from neutral, free and independent. In fact, cyber space has been rife with conflict from its very outset, and countries such as China, the US, Russia, Israel and Iran are investing vast human, technical and financial resources in developing cyber forces, with a dual goal: to ensure the security and defence of their specific patches of cyber space while wielding power and influence over their citizens, allies and potential adversaries.

At the same time, since international regulation of the Internet has proved impossible –and it is not subject to global governance either–, cyber space has seen an increase in the risks to the security of advanced countries: a relentless rise in cyber crime, the use of cyber space by terrorist groups for financing, intelligence gathering, propaganda and recruiting, large-scale cyber espionage between States and/or companies, and a spike in crimes against the privacy of Internet users are just some of the challenges that the forces tasked with cyber security must confront.

In the same way, and with regard to national defence, the armed forces rely on information and communications technologies to communicate with each other, exercise command and control of operations, obtain and distribute information and intelligence, carry out surveillance and reconnaissance, and acquire targets and coordinate fire. These technologies thus serve as force multipliers: they optimise the conception, planning and execution of operations and can shape how a conflict evolves and who wins it. Possessing a robust, secure and resilient ICT infrastructure, systematising the dimensions that make up cyber space and integrating them into operational planning, and developing the capability to act in this realm are therefore among the issues to which the armed forces are paying most attention.

Risk in cyber space
The state of risk in cyberspace is not homogeneous, both because threat levels differ across specific national cyberspaces and because the cyber security and cyber defence systems and capabilities of different countries are far from uniform. Countries can be broken down into four major groups, depending on the level of implementation and functionality of their national systems of cyber security and cyber defence:

  • Group 1, made up of countries with an operational national system of cyber security and cyber defence, formally defined as such and constantly being evaluated, revised and upgraded. Countries in this category would include the US, China and Israel.
  • Group 2, made up of countries which are in the formal process of building national systems of cyber security and cyber defence. It would include nations such as Australia, France and Iran.
  • Group 3, made up of countries that are in the process –formal or informal– of defining their national cyber security systems. The vast majority of countries would fall into this category, including Spain.
  • Group 4, comprising countries which have not yet undertaken a process of defining, be it formally or informally, their national cyber security system.

The US government recently acknowledged that the exponential increase in the resources that its adversaries –particularly China– are devoting to their cyber forces, and the growing technical sophistication of the attacks those forces carry out, are making it tremendously difficult to analyse and investigate the attacks, and therefore to maintain an efficient and effective national defence in cyber space.

Regardless of the origin and nature of the threat it faces, the cyber force of a country should be based on a set of capabilities that allow it to reach a known and controlled state of risk. This state of risk can be attained only by states whose specific cyber spaces feature levels of maturity, resilience and security which, over the short term, are able to withstand TIER I and TIER II level attacks and recover from assaults at the TIER III and IV levels. This is outlined in Figure 1.

Figure 1. Levels of cybernetic threat

Traditional capabilities –grouped under the concepts of information security and information assurance– are necessary but not sufficient in themselves to guarantee national cyber security and national cyber defence. The world’s major powers and international organisations such as NATO and Europol are therefore working actively to redefine these capabilities and develop new ones, both defensive and offensive.

The increase in the state of risk in cyber space means governments must develop specific capabilities to enhance security and defence in it. One of these is the cyber cell. This is an advanced capability which can complement traditional cyber security and cyber defence capabilities and be used both in a defensive way and to carry out offensive operations in cyber space. Cyber cells are prepared to resolve those operational problems which existing cybernetic means cannot address with sufficient flexibility or effectiveness, and they can be integrated into both police and military forces. With these elements in mind, we will now present the concept of cyber cells and detail how they might be organised and work and what their responsibilities might be.

The cyber cell concept
A cyber cell could be defined as a capability of high functional specialisation and of a dual nature –both defensive and offensive–. Its function is to carry out a task with the goal of guaranteeing the security and defence of a specific area of cyber space. Depending on the operational needs and on the area in which it operates, a cyber cell might be assigned three major functions:

  • To carry out specific cybernetic operations or ones in conjunction with other operational dimensions (land, sea, air and space).
  • To support the evaluation and improvement of the level of maturity, resilience and security of national, allied and multinational cybernetic capabilities.
  • To contribute to experimenting with new operational concepts and cybernetic capabilities.

In the same way, and depending on the function it is carrying out at any given time, a cyber cell can have one of the following four tasks assigned to it: (1) assurance; (2) experimentation; (3) exercises; and (4) operation. In the first three cases the cyber cell will assume the role of a ‘red team’ under which it will simulate the behaviour of a potential adversary so as to try and exploit the vulnerabilities of the area being evaluated. However, when a cyber cell is in operational mode, it will be able to carry out both defensive and offensive cybernetic activity.

  1. Assurance: allows analysis of the state of maturity, resilience and security of the area in which the cyber cell is operating.
  2. Experimentation: here the cyber cell might do a wide variety of things, such as study new operational concepts or evaluate the maturity, resilience and security of new cybernetic capabilities that complement existing ones.
  3. Exercises: during exercises the cyber cell must test what it can do. These exercises will be designed and planned with the goal of simulating situations as close as possible to those found in the real world.
  4. Operation: when operational needs require it, the cyber cell must engage in defensive or offensive actions, or ones to exploit a given area.
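The task-role relationship described above (red team for assurance, experimentation and exercises; live defensive or offensive activity only in operation mode) can be sketched as a small mapping. This is a minimal illustrative model: the enum values and return strings are assumptions chosen for the sketch, not part of any national doctrine.

```python
from enum import Enum


class Task(Enum):
    """The four tasks a cyber cell may be assigned (illustrative model)."""
    ASSURANCE = "assurance"
    EXPERIMENTATION = "experimentation"
    EXERCISES = "exercises"
    OPERATION = "operation"


def cell_role(task: Task) -> str:
    """Return the role the cell assumes for a given task.

    For assurance, experimentation and exercises the cell plays a
    'red team', simulating a potential adversary; only in operation
    mode may it conduct actual defensive or offensive activity.
    """
    if task is Task.OPERATION:
        return "defensive/offensive operations"
    return "red team (adversary simulation)"
```

The point of the sketch is simply that the role follows from the task: three of the four tasks are adversary simulation by definition, and only the fourth authorises real activity.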

Each of the four tasks assigned to a cyber cell will be executed in a given area of the five outlined as follows:

  1. Local, limited to a local ICT system.
  2. National, limited to a local realm or a set of local areas, the command and control of which is exercised by a national body.
  3. Allied, limited to a local area or set of local areas, the command and control of which is exercised by an agency of NATO or Europol or bodies belonging to one of their member states.
  4. Possible adversaries, limited to a local area or set of local areas, the command and control of which is exercised by organisations belonging to possible adversaries. The nature of possible adversaries is heterogeneous: they can be States or non-State actors, such as terrorist groups, cyber gangs or so-called hacktivist groups.
  5. Multinational, defined by a local area or set of local areas, the command and control of which is exercised by a multinational organisation or by a State that belongs to the multinational organisation.

Figure 2. Areas of activity of a cyber cell

Enablers of cyber cells
Before countries create cyber cells, they must have the right enablers in place. By this we mean those defensive and offensive cybernetic means which have a sufficient level of maturity and are already established in the country, at the disposal of both the security forces and the military. Their existence under the terms described here will make it possible for cyber cells to carry out the tasks assigned to them with a reasonable likelihood of success.

These enablers are the following: command and control, organisation, a legislative framework, methodology, knowledge of the cyber situation, risk analysis and management, the sharing of information, technology, staff and constant training. Command and control of cyber cells should be exercised at the strategic, operational and tactical levels, each of which will have assigned to it a set of responsibilities and activities so that the cyber cells can do their work with guarantees:

  • At the strategic level, the high-level goals, priorities and achievements that the cyber cell must attain in its assigned task will be defined. From this level, moreover, the viability and evolution of the cell must be guaranteed, with all necessary human, financial and technological resources provided.
  • At the operational level, all activities related to the assigned task will be authorised and directed. Each activity will be controlled by an operational team (OT), so that, as the task is undertaken, there will be as many operational teams as there are activities comprising it. The make-up of these teams will be determined by the nature of the task.
  • At the tactical level, those in charge of each operational team will define the tactical plans for the activities, outlining in the greatest possible detail each of the actions that make up an activity, with input from those in charge of the tactical teams assigned to each action (each operational team will be supported by as many tactical teams as there are actions making up the activity).
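The one-operational-team-per-activity, one-tactical-team-per-action structure described above can be modelled as a simple data structure. The class and field names below are hypothetical, chosen only to illustrate the hierarchy, not taken from any real organisation:

```python
from dataclasses import dataclass, field


@dataclass
class TacticalTeam:
    """Executes exactly one action within an activity."""
    action: str


@dataclass
class OperationalTeam:
    """Controls one activity; supported by one tactical team per action."""
    activity: str
    tactical_teams: list = field(default_factory=list)

    def add_action(self, action: str) -> None:
        self.tactical_teams.append(TacticalTeam(action))


@dataclass
class CyberCellTask:
    """A task assigned to the cell; one operational team per activity."""
    name: str
    operational_teams: list = field(default_factory=list)

    def add_activity(self, activity: str, actions: list) -> None:
        team = OperationalTeam(activity)
        for action in actions:
            team.add_action(action)
        self.operational_teams.append(team)


# Example: an assurance task with one activity comprising two actions.
task = CyberCellTask("assurance")
task.add_activity("maturity assessment", ["network scan", "report findings"])
```

By construction the sketch preserves the invariant of the text: a task has as many operational teams as activities, and each operational team has as many tactical teams as its activity has actions.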

Figure 3. External and internal contexts of cyber cells

Despite the difficulty of identifying those directly responsible for an act of aggression in cyber space, and despite the ubiquity, high inter-connectivity and cross-border nature of cyber space, the tasks, activities and actions of cyber cells must remain within the bounds of national and international law. For the legal framework to serve as an enabler, it must be up to date in its regulation of the main elements of cyber warfare and cyber crime, the regulatory frameworks surrounding them and how they are defined as crimes. The legal framework must also regulate the procedural aspects of electronic evidence, criminal justice and international cooperation. Finally, it must be integrated into national and international legislation associated with the prevention of armed conflicts and the exercise of self-defence of sovereignty over national cyber space.

Cyber cells must have a working methodology that features a common language, homogeneous theoretical and technological foundations, and procedures that standardise their functioning at the strategic, operational and tactical levels.

They must also be provided with immediate knowledge of the country’s own cyber space, allied cyber space, multinational cyber space and that of potential adversaries or any other group of interest, as well as knowledge of the status and availability of the operational capabilities needed to plan, lead and manage the activities of the assigned cybernetic mission. Knowledge of the cybernetic situation will be obtained by combining intelligence and operational activities in cyber space with those carried out in electromagnetic space and in the other dimensions of the operational environment (land, sea, air and space). Integrating the cybernetic situation with the rest of the capabilities is therefore essential to achieving the goals of the assigned task, and the processes, procedures and capabilities associated with knowing the cyber situation must be developed –always in line with the working methodology in place– so that those in charge of the cyber cell attain complete knowledge of the overall cyber situation and can work towards the goals established in the assigned tasks.

Furthermore, knowledge of the cyber situation must give the operational leader of the cyber cell real-time visibility of local and national networks, systems and services, of the actions of the potential adversary on the opposing networks, systems and services, and of the possible impact of these actions on the achievement of operational goals. Such knowledge will also help cyber cells to make decisions with the best available information and intelligence, and to act knowing the operational effect of their decisions on the mission as a whole.

Each task assigned to a cyber cell carries with it a set of risks that depend on the nature of the task and the realm in which the cell is acting. A continuous process of dynamic risk assessment and management must therefore be applied across all phases of the task, during which all available information is collected, analysed and appropriately distributed to the other actors involved. It will thus be necessary to devise mechanisms for distributing information so as to maintain reliable and up-to-date knowledge of the cybernetic situation, optimise results, improve the maturity, resilience and security of national cyberspace and manage cybernetic crises.

Technology is the central component of cyberspace, so cyber cells must be equipped with state-of-the-art technological capabilities. They must also be made up of highly qualified and specialised professionals who cover all the areas of knowledge involved in the activities and actions that make up the assigned tasks. A continuous and highly specialised training plan will likewise be needed, tailored to each member’s specific role in the cyber cell and keeping pace with the constant technological transformation of cyberspace and its changing state of risk. Training will therefore be one of the key elements determining the success or failure of cyber cells.

Organising a cyber cell
Figure 4 shows the organisation of a cyber cell as derived from the command and control structure described in the section on enablers. The leader of the cyber cell’s area is responsible for translating the strategic goals, planning and overseeing the execution of the tasks assigned to the cyber cells, providing knowledge of the cyber situation at all times, directing those in charge of the operational aspects of the mission, planning training, assessing results, managing risks and securing the necessary technical and human resources. Reporting to this person are the operational leaders, who inform the area leader of the operational and tactical evolution of the assigned tasks and carry responsibilities similar to those of the area leader, but at a lower level.

Figure 4. Structure of a cyber cell

Each operational team leader will be in charge of one of the cyber cell’s activities. This includes reporting to the operational leader of the cyber cell on the progress of the assigned activity, dividing the activity into actions, breaking down in as much detail as possible the actions to be assigned to the tactical teams, planning and overseeing the work of those teams, carrying out a continuous process of analysis and management of the assigned activities and producing the relevant reports on each activity. Finally, each tactical team leader will be in charge of one or more actions: they will carry out the actions assigned by the operational team leader, report to that leader on how each action is progressing, conduct the continuous process of analysis and management of the assigned actions and produce the relevant reports on each action.

Conclusions: A cyber cell can be an efficient tool for security forces and the military to improve the security and defence of a given area of cyber space. Cyber cells are composed of operational and tactical teams acting under the control of a strategic cybernetic command and require that from the outset there be a set of mature, traditional cyber security and cyber defence capabilities: a modern ICT infrastructure, a set of cybernetic capabilities and staff that is experienced and used to operating in this kind of setting.

From there on, cyber cells could carry out cybernetic operations of both a defensive and an offensive nature, support the assessment and improvement of national, multinational or allied capabilities, allow experimentation with new operational concepts and train the people assigned to the cell. Implementing these cells can significantly improve a country’s cybernetic defence and offence capability, thus contributing to control of cyberspace and to the creation of a modern and effective national cyber force that is fully interoperable with allied cyber forces. In the specific case of Spain, as with its allies, efforts must be concentrated over the short and medium term on increasing the maturity of the cybernetic capabilities of the security forces and the military, as a step towards the effective establishment of advanced capabilities such as cyber cells. Nevertheless, as its allies are already doing, Spain should consider establishing them now, so that the capabilities currently under development can become operational as soon as possible.


[1] The authors are part of the ‘cyber cell’ working group led by THIBER, The Cybersecurity Think Tank, which in turn is part of the Institute of Forensic and Security Sciences at the Autonomous University of Madrid. In alphabetical order, they are: Guillem Colom Piella, who holds a PhD in international security; José Ramón Coz Fernández, PhD in Computer Sciences and BSc in physical sciences; Enrique Fojón Chamorro, computer sciences engineer and member of  ISMS Forum Spain; and Adolfo Hernández Lorente, computer sciences engineer and managing director for security at Ecix Group.

[2] Applegate, Scott D. (2012), Leveraging Cyber Militias as a Force Multiplier in Cyber Operations, Center for Secure Information Systems, George Mason University, Fairfax, Virginia.

Berman, Ilan (2012), The Iranian Cyber Threat to the US Homeland, appearance before  the Homeland Security Committee of the House of Representatives, Washington, D.C., 26/IV/2012.

Cabinet Office (2012), The UK Cyber Security Strategy Protecting and Promoting the UK in a Digital World, HMSO, London.

Defence Science Board (2013), Task Force Report: Resilient Military Systems and the Advanced Cyber Threat, US Department of Defense, Washington DC.

Department of Defense (2013), Defense Budget Priorities and Choices – Fiscal Year 2014, US Government Printing Office, Washington DC.

Dev Gupta, Keshav, & Jitendra Josh (2012), ‘Methodological and Operational Deliberations in Cyber-attack and Cyber-exploitation’, International Journal of Advanced Research in Computer Science and Software Engineering, vol. 2, nr 11, p. 385-389.

Liles, Samuel, & Marcus Rogers (2012), ‘Applying traditional military principles to cyber warfare’, Cyber Conflict (CYCON), NATO CCD CoE Publications, Tallinn, p. 1-12.

Office of Public Affairs (2010), US Cyber Command Fact Sheet, Department of Defense, Washington, DC.

Office of the Secretary of Defense (2013), Military and Security Developments Involving the People’s Republic of China 2013, US Government Printing Office, Washington DC.