In virality we trust! The quest for authenticity in digital diplomacy (Elcano Royal Institute, 8 July 2019)

Virality in digital diplomacy is the new black. For Ministers of Foreign Affairs and diplomats, being on social media is no longer only about presence and networking, but about standing out through the virality of their messages.



For Ministers of Foreign Affairs (MFA) and embassies, being on social media is no longer only about presence and networking, but about standing out through the virality of their messages. Virality allows digital diplomats to step out of their immediate ‘bubble’ and reach out to unfamiliar audiences, showcase their position on important policy issues or normative claims, and enhance their social authority in front of their peers or the online public. The challenge for digital diplomacy lies in acquiring the know-how and technical capacity to make messages ‘go viral’. This ARI provides some clues and rules for improving virality in digital diplomacy.

The Studium and the Punctum

Virality in digital diplomacy is the new black, and rightly so, one may add! For Ministers of Foreign Affairs (MFA) and embassies, being on social media is no longer only about presence and networking, but about standing out through the virality of their messages. Creating content that is shared exponentially on social media, in a very short timeframe, with multiple levels of reactions from a mosaic of audiences is, to put it simply, ‘pure gold’ from a communicational perspective.

Virality allows digital diplomats to step out of their immediate ‘bubble’ and reach out to unfamiliar audiences, showcase their position on important policy issues or normative claims, and enhance their social authority in front of their peers or the online public. In the attention-deficit space of the digital medium, virality promises to inject a high dose of authenticity and engagement, even though the outcome often has a short lifespan and generates transient effects. The challenge lies, of course, in the fact that viral content is not that easy to create, especially for MFAs and embassies, which generally lack the human resources, know-how and technical capacity to make their messages ‘go viral’.

As a first step towards addressing this challenge, we need to develop a good theoretical understanding of how virality works in the context of digital diplomacy. Roland Barthes’ reflections on the study of photography may assist us with this task, as they provide some useful clues about how to think analytically about the issue of virality.1 More specifically, Barthes argues that the way in which we make sense of the meaning of a photograph depends much on the distinction we draw between the studium and the punctum aspects of the image. The studium represents the contextual reading of the image, that is, the historical, social or cultural details that the picture makes available to the viewer. The punctum, on the other hand, is the ‘out of place’ aspect of the photo that punctuates the studium and ‘pierces’ the viewer with an unexpected arrow of acuity. Put differently, while the studium tells the viewer what the image is about in a manner that can be similarly understood by many, the punctum disrupts the studium and establishes a personal connection between the viewer and the image.

For example, Barthes finds that the picture taken by a Dutch reporter of a military unit patrolling a street in an unnamed Nicaraguan city during the 1978-79 uprising (see Image 1) illustrates well the duality and co-presence of the studium and the punctum. The studium informs us about the gravity of the political situation, the desolation and destruction produced by the insurrection, the casual display of military force, and the bleakness of the future. The punctum, on the other hand, reveals, at least for Barthes, an unexpected contrast between two elements that do not usually belong together, the nuns and the soldiers, which seems to invite the viewer to reflect on questions about war and death, violence and religion, destruction and reconstruction. The studium makes available to viewers a particular narrative about a historical situation, with the goal of stimulating their interest in the human tragedy unfolding in Nicaragua at the time. However, it is the punctum that makes the photo transcend its state of general interest and connects it more intimately with the viewer, by reaching out to Barthes’ subjectivity and rendering the image personally meaningful to him.

Image 1: Koen Wessing, Nicaragua, 1978

Barthes’ reflections on the art of photography carry good analytical value for the study of online virality as well. They offer us a framework for deconstructing viral content into tangible components by which to capture the interplay between the general and the specific, the common and the personal, the informative and the emotional, the inconsequential and the meaningful. Understanding the studium of viral content can give us a better sense of the themes, compositions, formats and approaches that make certain messages highly popular. Understanding the punctum can reveal to us the ‘out of place’ profile of viral messages and their propensity for personalisation and micro-engagement. Drawing on relevant case studies of viral digital diplomacy, the next two sections integrate the concepts of the studium and the punctum into the discussion of two important aspects of online virality: the contextual dimensions of viral dissemination (external vs internal) and the rules of operation (information, emotions and personalisation).

External vs Internal Virality

A tweet by the former UK Ambassador to Egypt, John Casson, showing him strolling in Cairo shortly before ending his tenure in September 2018, gathered 1.4k Reactions, 1.7k ReTweets and 11k Likes. By contrast, the tweet of the High Commissioner of Cyprus to the UK, Euripides Evriviades, showing him posing in front of the residence of the British Prime Minister as a memento before his departure from the post, garnered only 28 Reactions, 25 ReTweets and 294 Likes (see Image 2). The question this example invites us to address is threefold: a) to what extent is the virality of the two tweets comparable?; b) how well does each tweet perform relative to other messages produced by the same author?; and c) what characteristics of the two tweets explain their virality? The first part of the question concerns the issue of ‘external virality’ (cross-source comparison), the second the issue of ‘internal virality’ (same-source comparison) and the third the application of the studium/punctum framework.

Image 2. Tweets with asymmetrical viral content

ماينفعش امشي من غير ماسلم عليكم

مفيش مع السلامة إنما إلى القاء #منورة_باهلها

[Translation: “I couldn’t leave without saying goodbye to you. There is no ‘farewell’, only ‘until we meet again’ #منورة_باهلها”]
🙏🏼👋❤️ 🇪🇬 🇬🇧

— John Casson (@JohnCassonUK) 31 August 2018

No. I am not in the race to become the next PM of #UK. Just had my pic taken as #Cyprus HC, in front of one of the most famous doors in the world. A keepsake of my tenure in 🇬🇧 & in this mega metropolis called London.

— Euripides Evriviades 🇨🇾🇪🇺 (@eevriviades) 12 June 2019

Many would probably be tempted to consider Amb. Casson’s tweet decisively more viral than that of HC Evriviades, given its sizeable lead in quantitative metrics. However, closer scrutiny reveals that the two tweets are somewhat similar in terms of online influence once the number of followers is considered (see Table 1). More specifically, Amb. Casson’s tweet has only a small lead in terms of RTs, but a stronger presence in terms of Likes and Reactions. This is because the number of followers distorts the quality of raw virality metrics by amplifying the randomness of the reactions. Put differently, it is clearly impressive when an account with 100 followers generates 1,000 RTs, but arguably less so when the same number of RTs comes from an account with one million followers. Therefore, RTs/Likes/Reactions per follower provide a better basis for comparing the ‘external’ virality of competing accounts.

Table 1. External virality adjusted by the number of followers
  Amb. John Casson HC Euripides Evriviades
Number of followers 1.26M 16.6K
Number of RT per follower 741 664
Number of Likes per follower 114 56
Number of Reactions per follower 900 592
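The per-follower adjustment behind Table 1 is simple arithmetic and can be sketched as follows. The function name and the per-million scaling are illustrative assumptions (the table does not state its exact units), and the example figures are hypothetical rather than the tweet data above.

```python
def per_million(engagements: int, followers: int) -> float:
    """Normalise a raw engagement count (RTs, Likes or Reactions)
    to a rate per million followers, so that accounts of very
    different sizes can be compared on the same scale."""
    return engagements / followers * 1_000_000

# Hypothetical example: the same 1,000 RTs is far more impressive
# coming from a small account than from a large one.
print(per_million(1_000, 100_000))    # small account
print(per_million(1_000, 1_000_000))  # large account
```

On these hypothetical figures, the small account scores 10,000 per million followers against 1,000 for the large one, which is the intuition behind adjusting the raw counts by audience size.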

At the same time, it is important to observe the ‘internal’ dimension of virality, that is, the extent to which a tweet aligns with or diverges from the average reach of other messages generated by the same source. For illustration purposes, relative to the average number of RTs, Likes and Reactions in a sample of the ten most recent tweets produced by Amb. Casson (28 August – 5 September 2018) and HC Evriviades (21-23 June 2019), both viral tweets fall well outside the normal distribution: between two and three standard deviations above the mean in the case of Amb. Casson, and even further out in the case of HC Evriviades. In other words, while both tweets performed extremely well relative to other tweets posted by the same author, the one posted by HC Evriviades is a clear outlier, especially in terms of Likes and Reactions.

Table 2. Internal virality relative to the average tweet reach in each account
  Amb. John Casson HC Euripides Evriviades
  Average Standard Deviation Average Standard Deviation
RT 370 473 6 6
Likes 3993 3466 24 27
Reactions 420 404 2 1
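The ‘internal virality’ check described above amounts to computing a z-score: how many standard deviations a tweet’s reach sits above the account’s own recent average. A minimal sketch, using a hypothetical ten-tweet history rather than the actual data behind Table 2:

```python
from statistics import mean, stdev

def z_score(value: float, sample: list[float]) -> float:
    """Number of (sample) standard deviations `value` lies
    above the mean of `sample`."""
    return (value - mean(sample)) / stdev(sample)

# Hypothetical history: Likes on an account's ten most recent tweets.
recent_likes = [20, 35, 15, 40, 25, 10, 30, 22, 18, 28]

# A tweet far outside the 2-3 standard-deviation band is an outlier
# by the 'internal virality' standard discussed above.
print(round(z_score(294, recent_likes), 1))
```

A tweet scoring two to three standard deviations above the account average performed exceptionally; anything far beyond that, as in the HC Evriviades case, is a clear outlier.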

This brings us back to Barthes’ distinction between the studium and the punctum and its role as an analytical tool for explaining the performance of these two tweets. From a studium perspective, both tweets speak to traditional themes about what it means to be a diplomat: engaging with people and cultivating issues of common interest in the case of Amb. Casson, and building political relationships in the case of HC Evriviades. However, it is not the conventionalism of the stories that makes the difference in terms of reception by the audience, but the punctum by which the viewer is invited to interpret the message. The casualness and naturalness with which Amb. Casson mingles with regular Egyptian citizens stand in clear contrast with his official position, while the note of subtle humour that HC Evriviades drops in his message acts as a relaxing counterpoint to the solemnity that the 10 Downing Street door conveys as the centre of political power in the UK.

The studium/punctum framework also adds an interesting reflexive angle to the discussion regarding the influence of external vs internal virality. As the audience gradually becomes familiar with the style of the author, internal virality can sustain itself only if the punctum constantly refreshes itself. For example, the casualness shown by Amb. Casson in his public interactions can retain its viral value if it continues to surprise the viewer, by engaging, for instance, with unexpected guests, or by changing the dynamic of the interaction with the public. In the same way, the light/solemn punctum adopted by HC Evriviades will require creatively updated formats of expression to maintain the attention of the audience. From the perspective of external virality, the studium can offer interesting insights into how certain themes of diplomatic reflection travel across space and time. For example, does the idea of direct engagement with the public resonate better in places where the local relationship between citizens and policy-makers is more hierarchical? Similarly, would humour be able to drive viral content in the same way in places where the reputation of power holders is negative?

Virality Rules

As mentioned elsewhere,2 diplomatic communication has traditionally been embedded in a text-oriented culture that has favoured ‘constructive ambiguity’ over precision, politeness over frankness, reason over passion, and confidentiality over transparency. The arrival of digital technologies has infused the public sphere in which diplomacy operates with a set of new elements that have completely rearranged the ‘grammar rules’ of online engagement. Data and algorithms are the syntactic units of the new ‘digital language’, to which various combinations of visuals, emotions and cognitive frames are attached to create semantic meaning. This also means that digital content on social media platforms must tailor itself closely to these rules in order to go viral. If so, what exactly are these rules, and how can the studium/punctum framework help us unpack their scope of application?

Rule 1. Information overload and limited attention contribute to the degradation of the quality of information that goes viral

As shown by Weng et al., the combination of social network structure (the denser, the better) and competition for finite user attention provides a sufficient condition for the emergence of a broad diversity of viral content.3 However, out of this ‘soup’ of contending viral messages, those that come out on top are more likely to carry low-quality information, as both the information load and the limited attention lead to low discriminative power. As Qiu et al. point out, viral diversity can coexist with network discriminative power when we have plenty of attention and are not overloaded with information,4 conditions that are increasingly difficult to meet in the digital medium. In other words, the network structure of social media platforms favours the formation of viral content, but the attention deficit of the users acts as a filter for the quality of that content.
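The mechanism Weng et al. describe can be caricatured in a few lines of simulation. The sketch below is a deliberately crude simplification of my own (random mixing instead of a real follower graph, all memes of identical quality), not their actual model: agents with a fixed-length memory either invent a new meme or repost one they remember, and the bounded memory stands in for limited attention.

```python
import random
from collections import Counter

random.seed(42)                           # reproducible toy run
N_AGENTS, MEMORY, STEPS, P_NEW = 50, 5, 2000, 0.1

memories = [[] for _ in range(N_AGENTS)]  # each agent's limited attention
shares = Counter()                        # how often each meme was posted
next_meme = 0                             # id generator for new memes

for _ in range(STEPS):
    agent = random.randrange(N_AGENTS)
    if random.random() < P_NEW or not memories[agent]:
        meme = next_meme                  # invent a brand-new meme
        next_meme += 1
    else:
        meme = random.choice(memories[agent])  # repost a remembered meme
    shares[meme] += 1
    follower = random.randrange(N_AGENTS)      # the meme reaches a follower
    memories[follower].append(meme)
    if len(memories[follower]) > MEMORY:       # limited attention:
        memories[follower].pop(0)              # the oldest meme is forgotten

print("memes created:", next_meme)
print("most shared:", shares.most_common(3))
```

Even with all memes identical in quality, the resulting share distribution is heavily skewed: which meme ‘wins’ is largely an accident of timing and attention, echoing the point that virality under information overload need not track quality.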

Image 3. Tweets with asymmetrical quality of information

“Normally I say 'I'll be back', but now I say: 'I'll be there'" -- Arnold @Schwarzenegger accepts invitation from @antonioguterres to the UN #ClimateAction Summit in September.

— United Nations (@UN) 22 June 2019

With European prosperity and Asian peace and security closely connected, the EU has decided to strengthen its security cooperation in and with Asia. Check out the factsheet

— European External Action Service - EEAS 🇪🇺 (@eu_eeas) 31 May 2019

As suggested in Image 3, Rule 1 carries empirical relevance. The tweet posted by the UN account showing the UN Secretary-General, Antonio Guterres, inviting Arnold Schwarzenegger to attend the Climate Action Summit in September 2019 quickly went viral (by internal standards). It swiftly reached roughly three times the average number of Likes and RTs received by the UN account, despite the scarcity of the information provided, except for a brief reference to the actor’s famous ‘I’ll be back’ line. By contrast, the information-rich tweet posted by the European External Action Service (EEAS) outlining EU-Asia security priorities, an important topic in the current geopolitical context, was hardly noticed by the online public. One important implication of Rule 1 is that the punctum needs to really stand out (via emotional framing or the use of a dynamic format) if the quality of the information reflected by the studium is to stay high and still make a significant difference for the audience.

Rule 2. Emotions rule! Content that evokes intense emotions is more likely to go viral

One important school of thought on the psychology of emotions links Paul Ekman’s and Robert Plutchik’s influential theories of basic emotions5 to the pleasure, arousal and dominance model of environmental perception developed by Mehrabian and Russell6 (see Image 4). It is thus argued that emotions are associated with different degrees of positive (joy, surprise) or negative (anger, disgust, fear, sadness) feelings, that they come with high (joy, anger, fear) or low (sadness, disgust) levels of energy, and that they are connected to feelings of control (anger, joy) or inadequacy (fear, sadness). Building on this model, Rule 2 states that messages reflecting high levels of valence, arousal and dominance, such as joy and anger, are more likely to go viral.

Image 4: Affective space spanned by the Valence-Arousal-Dominance model, together with the position of six Basic Emotions.7

Rule 2 has received empirical support from a number of studies. Fan et al. have found, for instance, that anger spreads more quickly and broadly on social media than joy.8 Stieglitz & Dang-Xuan have also found that emotionally charged Twitter messages tend to be retweeted more often and more quickly than neutral ones.9 In a controversial experiment conducted on Facebook, Kramer et al. demonstrated that emotional states can actually be transferred to others via emotional contagion, leading people to experience the same emotions without being aware of it.10
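The valence-arousal-dominance reading behind Rule 2 can be turned into a toy scoring heuristic. The coordinates below are rough illustrative placements of the six basic emotions in VAD space (my assumption, not values taken from the cited studies), and the propensity formula simply rewards intensity and dominance regardless of the sign of valence:

```python
# Illustrative (valence, arousal, dominance) coordinates in [-1, 1];
# the placements are assumptions for this sketch, not measured values.
VAD = {
    "joy":      ( 0.8,  0.6,  0.5),
    "surprise": ( 0.4,  0.7,  0.0),
    "anger":    (-0.6,  0.8,  0.4),
    "fear":     (-0.7,  0.7, -0.6),
    "sadness":  (-0.7, -0.4, -0.5),
    "disgust":  (-0.6, -0.2,  0.2),
}

def viral_propensity(emotion: str) -> float:
    """Crude Rule 2 heuristic: intense feelings of either sign,
    high energy and a sense of control favour spreading."""
    valence, arousal, dominance = VAD[emotion]
    return abs(valence) + arousal + dominance

ranking = sorted(VAD, key=viral_propensity, reverse=True)
print(ranking)  # joy and anger come out on top, as Rule 2 predicts
```

On these illustrative coordinates, joy and anger, both high-energy and high-dominance, rank first, matching the claim that such emotions are the most likely to drive virality.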

Image 5. Tweets with asymmetrical emotional valence

Goaded by #B_Team, @realdonaldTrump hopes to achieve what Alexander, Genghis & other aggressors failed to do. Iranians have stood tall for millennia while aggressors all gone. #EconomicTerrorism & genocidal taunts won't "end Iran". #NeverThreatenAnIranian. Try respect—it works!

— Javad Zarif (@JZarif) 20 May 2019

Mi silla 😉 [Translation: “My chair” 😉]

— Mark Kent (@KentArgentina) 21 June 2019

What is interesting in the case of Rule 2 is that the punctum is not necessarily defined by a particular feature of the message, but by the emotion that transpires from it. The two tweets in Image 5, posted by the Iranian Foreign Minister, Javad Zarif (left), and the UK Ambassador to Argentina, Mark Kent (right), illustrate this point well. FM Zarif’s tweet conveys a pugnacious expression of angry defiance, while Amb. Kent relies on the positive emotion of surprise to ‘pierce’ and establish a connection with the viewer. Both emotions carry high levels of energy and dominance, which explains the excellent reception by the audience (several times the average number of RTs and Likes normally received by the two diplomats). An interesting implication of Rule 2 is the potential constitutive effect of emotion-driven virality on the formation of online audiences: do emotional punctums provide the anchor around which audiences coalesce and, if so, at what stage does the studium become irrelevant to how messages are received by the emotionally primed audience?

Rule 3. Content that can be easily personalised is more likely to go viral

In a seminal article, later expanded into a book, Bennett and Segerberg make the argument that, unlike the top-down mechanisms of content distribution favoured by hierarchical organisations, social networking involves co-production and co-distribution based on personalised expression. According to this connective logic, taking public action becomes less an issue of demonstrating support for some generic goals, as noble as they may be, and more an act of personal expression and self-validation achieved by sharing ideas online, negotiating meanings and structuring trusted relationships.11 For example, the personalised action frame ‘we are the 99 per cent’ that emerged from the US Occupy protests in 2011, or the more recent ‘MeToo’ movement, quickly turned viral and travelled the world via personal stories, images and videos shared on social networks such as Twitter, Facebook and Instagram. In short, as Rule 3 states, the easier a message is to personalise, the lower the barriers to individual identification with social or political goals, the more opportunities for horizontal engagement and, by extension, the more likely such content is to be absorbed, reflected upon and disseminated through social networks.

Image 6: Tweets with asymmetrical degree of personalization

Twelve Allies founded #NATO in 1949. Today we are 29.
Join us in celebrating the 70th Anniversary of our Alliance.#WeAreNATO

— NATO (@NATO) 1 April 2019

Traute Lafrenz is the last survivor of the White Rose resistance group. She is one of the few people who had the courage to stand up to the Nazis’ crimes. Consul General Heike Fuller presented the Order of Merit of the Federal Republic of Germany to Traute Lafrenz today

— GermanForeignOffice (@GermanyDiplo) 3 May 2019

For MFAs and embassies, personalisation is not necessarily an easy task, as their online activities are often primarily about projecting and emphasising their own set of policy priorities, approaches and strategies for addressing various issues on the global agenda. Personalisation implies exactly the opposite: removing themselves from the ‘digital spotlight’ and identifying themes that can connect with as many individuals as possible. The examples in Image 6 aim to achieve this in slightly different ways.

The #WeAreNATO videoclip produced by NATO for its 70th anniversary in April places the member states at the forefront of the story about the historical evolution of the organisation. Personalisation takes place, in this case, via state representatives who come together to share their commitment to the values of freedom and security projected by the organisation. The viral tweet of the German Ministry of Foreign Affairs takes a different approach. It invites viewers to recall the suffering of those persecuted for fighting for justice and freedom and to identify themselves with the courage demonstrated by one of the last survivors of the resistance movement to the Nazi regime.  

In contrast to Rule 2, personalisation does not primarily focus on emotions but rather on recognition and self-validation. The studium moves back to the centre stage as the repertoire of themes it proposes for discussion needs to offer points of connection by which individuals can express themselves in their own voice through the sharing of similar stories, images and actions. In the case of Rule 3, the punctum emerges not as an anchor by which the viewer is drawn to absorb the message of the studium via subtle contradictions or surprises, but as an invitation to engage as a co-participant in the production of stories connected to the studium that maximise perceptions of self-worth and social recognition.


To conclude, the dynamic environment in which digital diplomacy operates has increased the pressure on MFAs and embassies to understand how their messages can excel in terms of engagement. Barthes offers us good analytical tools (the studium and the punctum) for unpacking the contextual dimensions of viral dissemination (external vs internal) as well as the role of information, emotions and personalisation in shaping the rules of viral engagement.

Corneliu Bjola
Head of the Oxford Digital Diplomacy Research Group, University of Oxford (#DigDiploROx) | @CBjola

1 Roland Barthes, Camera Lucida: Reflections on Photography (Hill and Wang, 1981).

2 Corneliu Bjola, Jennifer Cassidy, and Ilan Manor, “Public Diplomacy in the Digital Age”, The Hague Journal of Diplomacy 14, no. 1–2 (April 22, 2019): 86.

3 Lilian Weng et al., “Competition among Memes in a World with Limited Attention”, Scientific Reports 2, no. 1 (December 29, 2012): 335.

4 Xiaoyan Qiu et al., “Limited Individual Attention and Online Virality of Low-Quality Information”, Nature Human Behaviour 1, no. 7 (July 26, 2017): 5.

5 According to Ekman, human emotions can be grouped into six families (anger, disgust, fear, happiness, sadness and surprise), while Plutchik identifies eight, which he groups into four pairs of polar opposites (joy-sadness, anger-fear, trust-distrust, surprise-anticipation). Paul Ekman, “An Argument for Basic Emotions”, Cognition and Emotion 6, no. 3–4 (1992): 169–200; Robert Plutchik, “The Nature of Emotions”, American Scientist 89, no. 4 (2001): 344–50.

6 A. Mehrabian and J.A. Russell, An Approach to Environmental Psychology (Cambridge, Mass.: M.I.T. Press, 1974).

7 Graph adapted from Sven Buechel and Udo Hahn, “Word Emotion Induction for Multiple Languages as a Deep Multi-Task Learning Problem”, 2018, 1908.

8 Rui Fan et al., “Anger Is More Influential Than Joy: Sentiment Correlation in Weibo”, accessed June 25, 2019.

9 Stefan Stieglitz and Linh Dang-Xuan, “Emotions and Information Diffusion in Social Media—Sentiment of Microblogs and Sharing Behavior”, Journal of Management Information Systems 29, no. 4 (April 8, 2013): 217–48.

10 Adam D I Kramer, Jamie E Guillory, and Jeffrey T Hancock, “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks”, Proceedings of the National Academy of Sciences of the United States of America 111, no. 24 (June 17, 2014): 8788–90.

11 W. Lance Bennett and Alexandra Segerberg, “The Logic of Connective Action”, Information, Communication & Society 15, no. 5 (June 2012): 752–54.

The ‘dark side’ of digital diplomacy: countering disinformation and propaganda (Elcano Royal Institute, 15 January 2019)

For diplomatic institutions, protecting themselves against disinformation and propaganda by governments and non-state actors remains a problem. This paper outlines five tactics that, if applied with a strategic compass in mind, could be helpful for MFAs and embassies.



The ‘dark side’ of digital diplomacy, that is, the strategic use of digital technologies as tools of disinformation and propaganda by governments and non-state actors, has expanded in recent years, putting the global order at risk. Governments are stepping up their law enforcement efforts and digital counter-strategies to protect themselves against disinformation, but for resource-strapped diplomatic institutions this remains a major problem. This paper outlines five tactics that, if applied consistently and with a strategic compass in mind, could help MFAs and embassies cope with disinformation.


Like many other technologies, digital platforms come with a dual-use challenge, that is, they can be used for peace or war, for good or evil, for offence or defence. The same tools that allow Ministries of Foreign Affairs and embassies to reach out to millions of people and build ‘digital’ bridges with online publics, with the purpose of enhancing international collaboration, improving diaspora engagement, stimulating trade relations or managing international crises, can also be used as a form of ‘sharp power’ to “pierce, penetrate or perforate the political and information environments in the targeted countries”, and in so doing to undermine the political and social fabric of those countries.1 The ‘dark side’ of digital diplomacy, by which I refer to the strategic use of digital technologies as tools of disinformation and propaganda by governments and non-state actors in pursuit of strategic interests, has expanded in recent years to the point that it has started to have serious implications for the global order.2

For example, more than 150 million Americans were exposed to the Russian disinformation campaign prior to the 2016 presidential election, almost eight times the number of people who watched the evening news broadcasts of the ABC, CBS, NBC and Fox stations in 2016. A recent report prepared for the U.S. Senate found that Russia’s disinformation campaign around the 2016 election used every major social media platform to deliver words, images and videos tailored to voters’ interests in order to help elect President Trump, and allegedly worked even harder to support him once in office.3 Russian disinformation campaigns have also been highly active in Europe,4 primarily by seeking to amplify social tensions in various countries, especially in situations of intense political polarisation, such as during the Brexit referendum, the Catalonian separatist vote5 or the more recent ‘gilets jaunes’ protests in France.6

Worryingly, the Russian strategy and tactics of influencing politics in Western countries by unleashing a ‘firehose of falsehoods’ of online disinformation, fake news, trolling and conspiracy theories have started to be imitated by other (semi-)authoritarian countries, such as Iran, Saudi Arabia, the Philippines, North Korea and China, a development that is likely to drive more and more governments to step up their law enforcement efforts and digital counter-strategies to protect themselves against the ‘dark side’ of digital diplomacy.7 For resource-strapped governmental institutions, especially embassies, this is clearly a major problem, as, with few exceptions, many simply do not have the necessary capabilities to react to, let alone anticipate and pre-emptively contain, a disinformation campaign before it reaches them. To help embassies cope with this problem, this contribution reviews five tactics that digital diplomats can use, separately or in combination, to counter digital disinformation, and discusses the limitations these tactics may face in practice.

Five counter-disinformation tactics for diplomats

Tactic #1: Ignoring

Ignoring trolling and disinformation is often the default option for digital diplomats working in embassies, and for good reason. The tactic can keep the discussion focused on the key message, it may prevent escalation by denying trolls the attention they crave, it can deprive controversial issues of the ‘oxygen of publicity’, and it may serve to psychologically protect digital diplomats from verbal abuse or emotional distress. The digital team of the current U.S. Ambassador in Russia seems to favour this tactic, as they systematically steer away from engaging with their online critics (see Fig 1, the left column). This approach stands in contrast with the efforts of the former Ambassador, Michael McFaul, who often tried to engage online with his followers and to explain his country’s position on various political issues to Russian audiences, only to be harshly refuted by the Russian Ministry of Foreign Affairs (MFA) or by online users (see Fig 1, the right column).

Fig 1: To ignore or not to ignore: US Ambassadors’ communication tactics in Russia

Current U.S. Ambassador in Russia, Jon H. Huntsman pays tribute to the Soviet dissident and human rights activist, Lyudmila Alekseeva

[Automatic translation: “Three days ago, the world lost one of the most dedicated human rights defenders. Today many colleagues and associates of Lyudmila Alekseeva paid tribute to the memory of a woman who devoted more than 50 years of her life to the protection of human rights”]

Former U.S. Ambassador in Russia, Michael McFaul engaging on Twitter with the Russian MFA as well as with one of his followers

Tweet by Michael McFaul: “@MFA_Russia My HSE talk highlighted over 20 positive results of ‘reset’, that our governments worked together to achieve.”

Tweet by Michael McFaul: “@Varfolomeev thank you for this information. Still learning the craft of speaking more diplomatically.”

Source: @USEmbRu, @MFA_Russia, @McFaul (12) and @Varfolomeev on Twitter (captures of the tweets by @MFA_Russia and @Varfolomeev from author’s archive).

At the same time, one should be mindful that the ignore tactic may come at the price of letting misleading statements go unchallenged, of indirectly encouraging more trolling through the perceived display of passivity, and of missing the opportunity to confront a damaging story in its nascent phase, before it grows into a full-scale viral phenomenon with potentially serious diplomatic ramifications.

Tactic #2: Debunking

In the post-truth era, fact-checking is ‘the new black’, as the manager of the American Press Institute’s accountability and fact-checking program neatly described it.8 Faced with an avalanche of misleading statements, mistruths and ‘fake news’ often disseminated by people in positions of authority, diplomats, journalists and the general public require access to accurate information in order to be able to take reliable decisions. It thus makes sense for embassies and MFAs to seek to correct false or misleading statements and to use factual evidence to protect themselves and the policies they support from deliberate and toxic distortions. The #EuropeUnited campaign launched by the German MFA in June 2018 in response to the rise of nationalism, populism and chauvinism is supposed to do exactly that: to correct misperceptions and falsehoods spread online about Europe by presenting verifiable information about what European citizens have accomplished together as members of the European Union.9

Fig 2: #EuropeUnited campaign by the German MFA

The key question, however, is whether fact-checking actually works and, if so, under what conditions. Research shows that misperceptions are widespread, that elites and the media play a key role in promoting these false and unsupported beliefs10, and that false information actually outperforms true information.11 Providing people with sources that share their point of view, introducing facts via well-crafted visuals, and offering an alternative narrative rather than a simple refutation may help dilute the effect of disinformation, though not eliminate it completely. While real-time fact checks can reduce the potential for falsehoods to ‘stick’ to the public agenda and go viral, direct factual contradictions may actually strengthen ideologically grounded beliefs, as those exposed to disinformation may extract certain emotional benefits from it.12 This is why using emotions in addition to facts may prove a more effective solution for countering online disinformation, although the right format of fact-based emotional framing arguably varies with the context of the case and the profile of the audience.

Tactic #3: Turning the tables

The jiu-jitsu principle of turning the opponent’s strength into a weakness may also work well when applied to counter-disinformation strategies. The use of humour in general, and of sarcasm in particular, can be reasonably effective for enhancing the reach of the message, deflecting challenges to one’s narrative without alienating the audience, avoiding emotional escalation, and undermining the credibility of the source.13 The case of the Israeli embassy in the US using a “Mean Girls” meme in June 2018 to confront Ayatollah Ali Khamenei’s hateful tweet about Israel being a “malignant cancerous tumour” that “has to be removed and eradicated” is instructive: it was widely shared and praised on social media and proved effective in calling attention to Israel’s plea for a harsher international stance towards Iran. On a slightly different note, the sarcastic tweet of the joint delegation of Canada at NATO in August 2014, poking fun at the statements of the Russian government about its troops entering Crimea by “mistake”, showcased Canada’s commitment to European security and the NATO alliance and further undermined the credibility of the Kremlin in the eyes of Western public opinion.

Fig 3: Using humour to discredit opponents and their policies

While memetic engagement is attracting growing attention as a possible tool for countering state and non-state actors in the online information environment, one should also bear in mind the potential risks and limitations associated with this tactic.14 It is important, for instance, to understand the audience well, not only to increase the effectiveness of the memetic campaign, but more critically to avoid embarrassing situations in which the appeal to humour falls flat or even backfires, thus undermining one’s own narrative and standing. The overuse of memes and humour may also work against public expectations of diplomatic conduct, which generally revolve around requirements of decorum, sobriety and gravitas. Most importantly, memetic engagement should not be conducted loosely, merely to entertain the audience, but with clear objectives in mind about how to enhance the visibility of one’s positions or policies and/or undermine those of the opponent.

Tactic #4: Discrediting

A stronger version of the jiu-jitsu principle mentioned above is the tactic of discrediting the opponent. The purpose in this case is not to undermine the credibility of the message, but that of the messenger itself, so that the audience comes to realise that whatever messages come from a particular source, they cannot be trusted. This tactic should be considered very carefully and used only in special circumstances, as it would most likely lead to an escalation of the online information dispute and would probably trigger a harsh counter-reaction from the opponent. The way this tactic may work is by turning the opponent’s communication style against itself: amplifying the contradictions and inconsistencies in his/her message, exposing the pattern of falsehoods disseminated through his/her channels of communication, and maximising the impact of the counter-narrative via the opponent’s ‘network of networks’.

Fig 4: FCO campaign to discredit Russian MFA as a credible source

Following the failed assassination attempt on Sergei Skripal and his daughter in March 2018, pro-Kremlin accounts on Twitter and Telegram started to promote a series of different conspiracies and competing narratives, attached to various hashtags and social media campaigns, with the goal, as one observer noted, of confusing people, polarising them, and pushing them further and further away from reality.15 In response, the FCO launched a vigorous campaign that took advantage of the Russian attempt to generate confusion about the incident by forcefully making the point that the 20+ different explanations offered by the Kremlin and Russian sources, including the story that the assassination might have been connected to Mr Skripal’s mother-in-law, made absolutely no sense, and that therefore whatever claim Russian sources might make, it could not be trusted. While the campaign proved effective in further undermining the credibility of the Kremlin as a trustworthy source and in convincing partners to back the UK’s position in international fora, it should nevertheless be noted that the bar set by the Russian authorities after the invasion of Crimea and the shooting down of MH17 was already low. In addition, while the tactic of discrediting the opponent may work well to contain its influence online, it may do little to deter it from engaging in further disinformation as long as the incentives, and especially the costs, of pursuing this strategy remain unaltered.

Tactic #5: Disrupting

One way in which the costs of engaging in disinformation could be increased is by disrupting the network the opponent uses for disseminating disinformation online. This implies mapping the opponent’s network of followers, tracing the particular patterns by which disinformation is propagated throughout the network, and identifying the gatekeepers in the network who can facilitate or obstruct the dissemination of disinformation (e.g., nodes 4 and 5 in the network described in Fig 5). Once this is accomplished, the disinformation network can be disrupted by targeting gatekeepers with factual information about the case, encouraging them not to inadvertently promote ‘fake news’ and falsehoods, and, in extreme situations, working with representatives of digital platforms to isolate gatekeepers who promote hate and violence.
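The gatekeeper-identification step described above can be sketched computationally. The following is a minimal, hypothetical illustration: the edge list and node labels are invented for this example (loosely echoing the kind of map in Fig 5), and in practice analysts would use dedicated network-analysis tooling and real follower data, typically considering betweenness as well as simple degree centrality.

```python
from collections import defaultdict

# Hypothetical amplification network: an edge (a, b) means account a's
# content is passed on by account b. Labels are invented for illustration.
edges = [
    (1, 4), (2, 4), (3, 4),         # fringe accounts feed gatekeeper 4
    (4, 5),                         # gatekeeper 4 passes content to gatekeeper 5
    (5, 6), (5, 7), (5, 8), (5, 9), # gatekeeper 5 reaches the broad audience
]

def gatekeepers(edges, top_n=2):
    """Rank nodes by degree (number of connections): high-degree nodes
    are the junctions through which most content must pass."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sorted(degree, key=degree.get, reverse=True)[:top_n]

print(gatekeepers(edges))  # -> [5, 4]: the two busiest junctions
```

Approaching the two highest-degree nodes, rather than the many peripheral accounts, is what makes the containment strategy economical: quarantining a handful of junctions can cut off most of the network's reach.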

Fig 5: Mapping the disinformation network

Sample node map

The Israeli foreign ministry has been one of the MFAs applying this tactic, in its case to stop the spread of anti-Semitic content. The ministry starts by identifying gatekeepers and ranking them by their level of online influence.16 It then approaches and engages with them online, with the purpose of making them aware of the fact that they sit at an important junction of hate speech. The ministry then attempts to cultivate relationships with these gatekeepers so that they may refrain from sharing hateful content online. In so doing, the ministry can effectively contain or quarantine online hate networks and prevent their malicious content from reaching a broader audience.

If properly implemented, this tactic could indeed significantly increase the costs of disseminating disinformation, as opponents need to constantly protect and, where necessary, rebuild their network of gatekeepers. They may also have to frequently re-configure the patterns by which they disseminate disinformation to their target audiences. At the same time, this tactic requires specialised skills for successful design and implementation, which might not be available to many embassies or even MFAs. The process of engineering the disruption of a disinformation network also prompts important ethical questions about how to ensure this tactic is not abused to stifle legitimate criticism of the ministry or the embassy.


As argued elsewhere, digital disinformation against Western societies works by exploiting differences between EU media systems (strategic asymmetry), targeting disenfranchised or vulnerable audiences (tactical flexibility), and deliberately masking the sources of disinformation (plausible deniability). The five tactics outlined in this paper may help MFAs and embassies better cope with these challenges if applied consistently and with a strategic compass in mind. Most importantly, they need to be carefully adapted to the context of the case in order to avoid unnecessary escalation. Here are ten questions that may help guide reflection on which tactic is appropriate to use in which context:

  • What type of counter-reaction would reflexively serve to maximise the strategic objectives of the opponent?
  • What are the risks of ignoring a trolling attack or disinformation campaign? 
  • What type of disinformation has the largest potential to have a negative political impact for the embassy or the MFA? 
  • To what extent will giving the “oxygen of publicity” to a story make the counter-reaction more difficult to sustain?
  • Which audiences are most open to persuasion via factual information? Which audiences are less open to being convinced by facts?
  • What type of emotions resonate with the audience in specific contexts and how to invoke them appropriately as a way of introducing factual information?
  • What type of humour works best with the target audience, and how to react to situations when humour is used against you?
  • How best to leverage the contradictions and inconsistencies in the opponent’s message without losing the moral ground?
  • Who are the gatekeepers in the opponent’s network of followers and to what extent can they be convinced to refrain from sharing disinformation online?
  • Under what conditions is it reasonable to escalate from low-scale counter-reactions (ignoring, debunking, ‘turning the tables’) to more intense forms of tactical engagement (discrediting, disrupting)?

Corneliu Bjola
Head of the Oxford Digital Diplomacy Research Group (#DigDiploROx)
@CBjola

1 Christopher Walker and Jessica Ludwig, ‘The Meaning of Sharp Power’, Foreign Affairs, November 26, 2017.

2 ‘The dark side of digital diplomacy’, in Corneliu Bjola and James Pamment (Eds.), Countering Online Propaganda and Extremism, Routledge (2018).

3 Craig Timberg and Tony Romm, ‘New Report on Russian Disinformation, prepared for the Senate’, The Washington Post, December 17, 2018.

4 Corneliu Bjola and James Pamment, ‘Digital containment: Revisiting containment strategy in the digital age’, Global Affairs, Volume 2, 2016.

5 Robin Emmott, ‘Spain sees Russian interference in Catalonia’, Reuters, November 13, 2017.

6 Carol Matlack and Robert Williams, ‘France Probes Possible Russian Influence on Yellow Vest Riots’, Bloomberg, December 8, 2018.

7 Daniel Funke, ‘A guide to anti-misinformation actions around the world’, Poynter, October 31, 2018.

8 Jane Elizabeth, ‘Finally, fact-checking is the new black’, American Press Institute, September 29, 2016.

9 Foreign Minister Heiko Maas, ‘Courage to Stand Up for Europe’, Federal Foreign Office, June 23, 2018.

10 D.J. Flynn, Brendan Nyhan and Jason Reifler, ‘The Nature and Origins of Misperceptions’, Dartmouth College, October 31, 2016.

11 Soroush Vosoughi, Deb Roy and Sinan Aral, ‘The spread of true and false news online’, Science, Volume 359, March 9, 2018.

12 Jess Zimmerman, ‘It’s Time to Give Up on Facts’, Slate, February 8, 2018.

14 Vera Zakem, Megan K. McBride and Kate Hammerberg, ‘Exploring the Utility of Memes for U.S. Government Influence Campaigns’, Center for Naval Analyses, April 2018.

15 Joel Gunter and Olga Robinson, ‘Sergei Skripal and the Russian disinformation game’, BBC News, September 9, 2018.

16 Ilan Manor, ‘Using the Logic of Networks in Public Diplomacy’, Centre on Public Diplomacy Blog, January 31, 2018.

Diplomacy in the Digital Age (October 11, 2018)

Diplomacy in the Digital Age depends on how diplomats understand and transform online influence into tangible offline diplomatic influence.


The core mission of diplomacy in the Digital Age is still about finding the middle ground among the broadest possible audience, but fulfilling it requires several prerequisites. This ARI analyses three case studies which show that successful digital diplomacy requires a keen understanding of the online space in which the digital diplomat operates, a competent strategy for building and managing a well-designed ‘network of networks’ of followers and influencers, and a pro-active approach to connecting digital diplomatic outputs to tangible foreign policy outcomes so that online influence can be successfully converted into offline diplomatic influence (actions and policies).


Commenting on the challenges that the Digital Age has generated for the craft of diplomacy, the former U.S. Secretary of State, John Kerry, provocatively remarked that “the term digital diplomacy is almost redundant – it’s just diplomacy, period.” For Kerry, digital technologies in general, and social media in particular, do help advance states’ foreign policy objectives, bridge gaps between people across the globe, and engage with audiences around the world, but ultimately they fulfil the same core diplomatic function, that is, to create dialogue and find common ground among the broadest possible audience. After all, he claimed, “that’s what diplomacy’s all about”.1

Interestingly, Kerry made this remark in 2013, before the ‘dark side’ of digital technologies had the chance to disclose itself in various forms of digital disinformation, propaganda and information warfare. Five years later, it is worth asking whether Kerry’s statement still resonates: is digital diplomacy still capable of finding the common ground and, if so, how exactly? Three issues need to be unpacked to address this question. First, what are the main features of the process of digital transformation and why should we take them seriously? Second, how have these features influenced the practice of diplomacy, both for better and for worse? And third, what lessons can we draw from existing cases of good practice in digital diplomacy, and to what extent can these lessons be generalised to the digital activities of other embassies and Ministries of Foreign Affairs (MFA)?

Going Digital

The rise of digital diplomacy in the past decade cannot be separated from the technological context in which it has developed. Three features of the process of digital transformation stand out, among others, for understanding the evolution of digital diplomacy and the challenges it continues to face under the influence of the changing technological landscape. Speed is the first one and refers to the fast rate at which new digital technologies enter the market and the swiftness with which they are adopted by individuals, companies and institutions. For example, it took the telephone 75 years to reach 100 million users worldwide, but only 16 years for the mobile phone and 4½ years for its most popular app, Facebook, to pass the same milestone.2 It is worth recalling that the mass adoption of smartphones and the spread of mobile internet were made possible by the launch of the third generation of wireless mobile telecommunications technology (3G) in the early 2000s. With the arrival of 5G technology in the next few years, a fresh stream of digital technologies (mixed reality, artificial intelligence, blockchain, digital twinning) is expected to become widely available and to accelerate the pace of information exchange, social interaction, digital innovation and public entrepreneurship.

The second important feature refers to the cognitive impact of the process of digital transformation. More specifically, the way we use digital technologies to interact with others is not limited to an instrumental, means-ends mode of engagement; it also reshapes the cognitive settings we rely on for defining our own social identities and even for making sense of social reality. In fact, the digital medium amounts to a new language, in which the semantic function of traditional nouns, adjectives and verbs is now played by the type of data we share, by the growing role of emotions and visuals (and, in the near future, Augmented Reality/Virtual Reality (AR/VR) simulations) in framing the messages we communicate, and by the (opaque) patterns through which algorithms structure our interactions with online audiences. By intimately influencing how social relations are conducted online, the digital medium thus has a potentially transformative impact on the offline interests and values of social actors and, in extreme situations, on their epistemological understandings of social reality, as evidenced by the unsettling ascent of ‘post-truth’ politics in recent years.

Third, Big Data, the ‘bloodstream’ of the digital revolution, has become the most valuable commodity of our age due to its capacity to capture, predict and potentially shape behavioural patterns. It is expected, for instance, that by 2025 the global data sphere will grow to 163 zettabytes (a zettabyte being a trillion gigabytes), ten times the 16.1 ZB of data generated in 2016.3 To put things into perspective, the former Google CEO Eric Schmidt once claimed that every two days we create as much information as we had done from the dawn of civilisation up until 2003, roughly five exabytes of data (or 0.005 ZB).4 Big data analytics can provide a better understanding of the main issues of concern for online audiences, of the cognitive frames and emotional undertones that enable audiences to connect with a particular message, and of the dominant patterns of formation and evolution of online communities. At the same time, this massive process of data generation intensifies the competition for attention in the online space and stimulates demand for the new skills and algorithmic tools necessary for filtering, processing and interpreting relevant data.
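The figures quoted above are easy to sanity-check with a quick back-of-the-envelope calculation, using the decimal convention that 1 zettabyte (ZB) equals 1,000 exabytes (EB):

```python
# Sanity check of the data-growth figures cited above (1 ZB = 1,000 EB).
zb_2016, zb_2025 = 16.1, 163.0   # IDC estimates quoted in the text
print(round(zb_2025 / zb_2016, 1))  # -> 10.1, i.e. roughly "ten times"

schmidt_claim_eb = 5             # "five exabytes up to 2003"
print(schmidt_claim_eb / 1000)   # -> 0.005, matching the 0.005 ZB quoted
```

The two quoted figures are therefore internally consistent with each other and with the "ten times" growth claim.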

Institutional adaptation

Driven by the opportunities that the digital revolution has created for engaging with millions of people, in real time and at minimal cost, foreign ministries, embassies and diplomats have developed a constellation of new tools and methods in support of their activities. These range from the use of dedicated platforms for engaging with foreign publics and diaspora communities, to communication with nationals in times of international crises, to the development of consular applications for smartphones.5 Intriguingly, but not entirely unpredictably, the features that have enabled the ‘digital turn’ in diplomacy have also generated several challenges for its practice. The costs of access to the public space have been dramatically lowered by the arrival of digital platforms, to the extent that MFAs now need to compete for the public’s attention with a wide range of state and non-state actors, not all of them friendly. Digital tools facilitate engagement between MFAs and embassies and foreign publics but, at the same time, their adoption and use without a strategic compass runs the risk of digital public diplomacy becoming decoupled from foreign policy. Digital platforms also create conditions for more rigorous assessment of the online impact of digital strategies, but such metrics may prove misleading for understanding the broader implications and levels of success of foreign policy.

It is also important to recognize that digital platforms do not simply add value to pre-designed communication strategies; they subtly inform and re-shape the norms of communication, engagement and decision-making on which diplomats base their work. Transparency, decentralisation, informality, interactivity and real-time management are critical norms for ensuring the effectiveness of digital activity, but they do not always sit well with MFAs’ institutionally entrenched preferences for confidentiality, hierarchy, instrumentality and top-down decision-making. In addition, while diplomatic communication has traditionally been embedded in a text-oriented culture that has favoured ‘constructive ambiguity’ over precision, politeness over frankness, reason over passion and confidentiality over transparency, the arrival of digital technologies has infused the public sphere in which diplomacy operates with a set of new features (e.g., direct and concise language, visual storytelling, emotional framing, algorithmic navigation) that challenge the way in which diplomatic engagement is expected to take place.

Like many other technologies, social media platforms also come with a dual-use challenge: they can be used for peace or war, for offense or defence, for good or evil. By allowing for the decentralization and diffusion of power away from traditional stakeholders (states and governments), digital technologies can serve to empower the powerless, as happened during the Arab Spring, or they can be deliberately weaponized to undermine the social fabric of modern societies, as in the cases of foreign electoral subversion or the hate speech of extremist groups. The algorithmic dissemination of content and the circumvention of traditional media filters and opinion-formation gatekeepers make disinformation spread faster, reach deeper, become more emotionally charged and, most importantly, prove more resilient, due to the confirmation bias that online echo-chambers enable and reinforce.6 To contain the ‘dark side’ of digital technologies and create a normative environment conducive to reconciliation, MFAs and embassies need to collaborate with tech companies to support media literacy and source criticism, encourage institutional resilience, and promote clear and coherent strategic narratives capable of containing the corrosive effect of disinformation and post-truth politics.

From theory to practice

To better understand the influence of the digital medium on diplomatic communication, let us compare and examine the activity of three prominent digital diplomats: Dave Sharma, the Australian Ambassador to Israel between 2013 and 2017; Euripides L. Evriviades, the High Commissioner for the Republic of Cyprus to the United Kingdom of Great Britain and Northern Ireland since 2013; and Jorge Heine, the Chilean Ambassador to the People’s Republic of China between 2014 and 2017. All three diplomats have used social media platforms, especially Twitter, quite extensively in their work and with considerable success, as illustrated, for instance, by their large number of followers and the intensity of digital interaction (number of likes, retweets and responses). What makes their case particularly interesting is that all three represent medium-size countries, which means they need to do extra work to receive from the online public a level of attention similar to that enjoyed by their American, Russian, British, French or Chinese colleagues, who organically benefit from the long diplomatic shadow and global influence of their countries. It is therefore important to investigate how the three diplomats have used digital platforms in their work and how well they have managed to cope with the competitive pressure of the digital environment. For reasons of space, the following discussion focuses only on the Twitter activity of the three diplomats. It is also worth mentioning that the three diplomats have personally managed their Twitter accounts, a fact that highlights the importance they have attached to this channel of communication.7

The first observation to note is the consistency of their digital agenda (see figure 1), which mainly covers diplomatic, economic and cultural issues. This is exactly what diplomats are supposed to talk about when they are posted abroad, so this finding is not particularly surprising. The interesting aspect is, perhaps, the different weight the three diplomats assign to these topics, which gives an indication of the specific priorities they face. The High Commissioner Evriviades is more interested, for instance, in political and diplomatic affairs, which makes good sense in the context of the ongoing Brexit negotiations and the current regional security concerns for Cyprus. Ambassadors Heine and Sharma take a more balanced approach and comment on additional issues (tourism, environment, science, technology), alongside political and economic aspects, as a basis for developing the diplomatic partnership with the host country.

Figure 1. Digital Agenda. Source: the author.

A key component of digital influence is the ‘network of networks’ that digital diplomats are expected to build and manage online so that they can firmly establish and enhance their online presence. The ‘network of networks’ may include policy makers, journalists, academics, diplomats, business people and diaspora leaders who take an active interest in the positions and policies of the country represented by the embassy. The more diverse, the larger and the more connected these networks are, the stronger their ability to extend themselves in multiple configurations and, by extension, the greater the influence of digital diplomats.8 From a network perspective, all three diplomats enjoy a rather diverse group of followers, but they seem to engage rather differently with their audiences (see figure 2). Ambassador Sharma pays primary attention to the media, the High Commissioner Evriviades engages preferentially with fellow diplomats, while Ambassador Heine seems to enjoy the online company of academics. These approaches reflect the preferred strategy of each individual for developing his broader network of contacts and influencers by relying on a personal strength: communication skills in the case of Ambassador Sharma, networking abilities in the case of the High Commissioner Evriviades and a well-respected academic profile in the case of Ambassador Heine.

Figure 2. Digital Networks. Source: the author.

The digital style of the three diplomats is also important to examine, as it may provide useful clues about the conditions for success or failure in adapting diplomatic communication to the characteristics of the digital medium discussed above. All three diplomats have clearly understood the importance of visuals in digital communication, relying on images to emphasise their points 60%-80% of the time (see figure 3). To a lesser degree, they have also grasped the role of emotions, as illustrated by their moderate use of positive and occasionally uplifting language in their messages. Ambassador Sharma stands out for the use of humour and original tweets in his communication, an approach that resonates well with his audience. The High Commissioner Evriviades interestingly favours tweets with sophisticated intellectual content, which appear to serve the function of sending indirect signals to target audiences on controversial topics. Finally, Ambassador Heine is the only one who tweets bilingually, in English and Spanish, with the purpose of ensuring that the domestic audience back home stays well informed about his diplomatic activity so that it continues to support his mandate in the Chinese capital.

Figure 3. Digital Style. Source: the author.


Echoing Secretary Kerry’s observation, we can conclude that the core mission of diplomacy in the Digital Age is still about finding the middle ground. What has changed is the context in which this mission is supposed to be accomplished, as new digital technologies significantly broaden the spectrum of actors that can take part in and influence the diplomatic conversation, reshape the “grammar rules” and institutional norms that guide online diplomatic engagement, and open the door to the use of digital tools for disrupting the middle ground via disinformation and propaganda. As the three case studies have shown, successful digital diplomacy requires a keen understanding of the online space in which the digital diplomat operates, a competent strategy for building and managing a well-designed ‘network of networks’ of followers and influencers, and a pro-active approach to connecting digital diplomatic outputs to tangible foreign policy outcomes so that online influence can be successfully converted into offline diplomatic influence (actions and policies).

Dr. Corneliu Bjola
Head of the Oxford Digital Diplomacy Research Group (#DigDiploROx)
@CBjola

1 Kerry, J. (2013), ‘Digital Diplomacy: Adapting Our Diplomatic Engagement’, DipNote U.S. Department of State Official Blog, 6/V/2013.

2 Dreischmeier, R., Close, K. and Trichet, P. (2015), ‘The Digital Imperative’, Boston Consulting Group, 2/III/2015.

3 Reinsel, D., Gantz, J. and Rydning, J. (2017), ‘Data Age 2025: The Evolution of Data to Life-Critical’, International Data Corporation, April 2017.

4 Siegler, M.G. (2010), ‘Eric Schmidt: Every 2 Days We Create as Much Information as We Did up to 2003’, TechCrunch.

5 Bjola, C. (2017), ‘Digital diplomacy 2.0 pushes the boundary’, Global Times, 5/XI/2017.

6 Bjola, C. (2018), ‘Propaganda in the digital age’, Global Affairs, no. 3 (3): 189.

7 The data for this study was collected in March 2017. Subsequent interviews with the three diplomats were conducted between April 2017 and September 2018.

8 Bjola, C. (2018), ‘Digital Diplomacy: From tactics to strategy’, The Berlin Journal, no. 32, Fall 2018, p. 78-81.

Coercion and Cyberspace (September 11, 2018)

Cyberspace is a new domain for coercive operations in support of foreign policy and security with advantages for offensive actions and hindrances to its success.


This ARI provides an overview of the factors crucial to our understanding of coercive cyber operations as the exercise of power through cyberspace in order to coerce an adversary into a particular course of action. It is focused on the compellent actions of state actors, though they, and non-state actors, may carry out deterrent actions as well. The first section presents the fundamentals of coercion. The second frames coercion in the context of cyberspace and surfaces the characteristics of the domain that enable it. Finally, the third establishes the causes behind coercive failure and, inversely, success.


Over the past decade, cyber operations have increasingly been employed as coercive instruments of foreign policy. From the Bronze Soldier incident between Russia and Estonia in 2007 to the long-standing dispute on the Korean peninsula, cyber operations have been exercised in the hope of altering an adversary’s behavior. Yet despite such optimism, less than 5% of these operations have achieved their intended objectives.1 Paradoxically, states continue to engage in coercive behavior in cyberspace despite its seeming inefficacy. This raises two important questions. First, how are cyber operations instruments of coercion? Second, what accounts for their limited outcomes?

Coercive cyber operations are not exempt from the principles that enable coercive interstate behavior. Coercion is commonly understood as “the threat of damage, or of more damage to come, that can make someone yield or comply” (Schelling, 1966). Unfortunately, the concept is muddled by the lack of a clear operational definition; typically, the characterizations proposed by either Schelling or George (1991) are adopted.2 And while most agree that deterrence refers to the use of threats to dissuade an adversary from engaging in an undesired action, the debate centers on whether the threat or limited use of force to alter an adversary’s behavior ought to be referred to as compellence or coercive diplomacy.

Schelling treats compellence as “a threat intended to make an adversary do something” and does not distinguish between a reactive or proactive use of force in order to influence an adversary’s behavior. He assumes the presence of a unitary rational actor behaving in a manner that maximizes gains while minimizing losses. George, in contrast, frames coercive diplomacy as a narrower and reactive response to an adversary’s actions. Whereas Schelling offers a parsimonious account grounded in rational choice, George offers a more nuanced and context-dependent explanation of the phenomenon. In recent years, a growing number of studies have started to use the term (military) coercion in place of either compellence or coercive diplomacy.3

With respect to coercive cyber operations, the umbrella term of coercion suits the phenomenon for three reasons. First, the proactive or reactive nature of compellence fits the image of cyber tools being preemptively deployed on an adversary’s system; fear of such deployment may convince an adversary to reconsider its actions. Second, coercive cyber operations often take place during on-going regional disputes.4 Their employment as one among a handful of instruments (i.e. military threats, economic sanctions, etc.) highlights the primacy of strategy in their use and, consequently, the importance of context as suggested by George. Finally, the restraint with which cyber capabilities are exercised reflects a degree of rationality on the part of coercers.

Yet despite its conceptual simplicity, coercive success is difficult to achieve. The outcome of coercion is contingent on the clear communication of a threat, suitable cost-benefit calculations, the credibility of the coercer, and reassurances from the coercer upon compliance. Although George identifies a host of other factors that contribute to the outcome of coercion, these may be consolidated into the above.

Unambiguous communication is the cornerstone of successful coercion. Adversaries must know what behavior needs to be modified, the timeframe in which these changes need to occur, and the costs associated with compliance or resistance. Yet reality poses difficulties in clearly communicating threats. Systemically, the anarchic nature of the international system can result in misperception between states. Fearon (1995) posits that fragmentary information encourages misrepresentation and overconfidence during periods of conflict, which increases the possibility of war and, consequently, of coercive failure.5 Complementing this, cognitive biases may also encourage a breakdown in communication. Research demonstrates the use of pre-existing schemas in the formation of decisions regarding the behavior of other states.6 And while this mental shortcut serves to mitigate cognitive limitations, it increases the possibility of bias, resulting in misperception and sub-optimal judgements.

Successful coercion assumes the presence of a rational actor capable of evaluating the costs and benefits associated with resisting or conceding to a coercer. Although the importance of costs and benefits in determining the outcome of coercion is straightforward, a number of factors can cause this process to break down. Systemically, two complementary factors that result in such a failure are conspicuous compliance and the possibility that compliance invites further demands.7

Initially forwarded by Schelling, conspicuous compliance is rooted in the argument that “the very act of compliance – of doing what is demanded – is more conspicuously compliant, more recognizable as submission under duress, than when an act is merely withheld in the face of a deterrent threat. Compliance is likely to be less casual, less capable of being rationalized as something that one was going to do anyhow.” Phrased simply, the act of conceding signals the weakness of an actor. Within an anarchic system in which each state is poised to ensure its own survival, such a situation is not beyond reason and leads to the second point – complying with a previous demand can invite additional demands in the future.

As Schelling argues, “compellent threats tend to communicate only the general direction of compliance, and are less likely to be self-limiting, less likely to communicate in the very design of the threat just what, or how much, is demanded.... The assurances that accompany a compellent action—move back a mile and I won’t shoot (otherwise I shall) and I won’t then try again for a second mile—are harder to demonstrate in advance [than with deterrence], unless it be through a long past record of abiding by one’s own verbal assurances.” Although this statement highlights key differences between compellence and deterrence, its core argument remains that compliance with earlier threats does not guarantee the absence of future ones. Other actors may perceive previous concessions as an opportunity to improve their standing in the international system.
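The calculus sketched above can be made explicit in a stylized decision rule (an illustration of our own, not drawn from the source): a target concedes only when the full cost of conceding, including the reputational cost of conspicuous compliance and the expected cost of invited future demands, falls below the expected cost of resisting.

```latex
\text{comply} \iff
\underbrace{C_{c} + R + \mathbb{E}\left[D_{\text{future}}\right]}_{\text{total cost of conceding}}
\;<\;
\underbrace{p \cdot C_{t}}_{\text{expected cost of resisting}}
```

Here $C_c$ is the direct cost of compliance, $R$ the reputational cost of visible submission, $\mathbb{E}[D_{\text{future}}]$ the expected cost of further demands invited by conceding, $p$ the perceived probability that the threat will be carried out, and $C_t$ the damage if it is. Conspicuous compliance raises $R$, while cheap talk lowers $p$, and either shift can tip a target toward resistance.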

Apart from systemic factors that impinge on cost-benefit considerations, individual cognitive processes similarly affect the outcome of coercive threats. Prospect theory, which posits that losses are valued more heavily than gains, causes decision-makers to resist rather than comply even if the cost of the former is much higher than that of the latter.8 Additionally, coercion may also fail when the coercing actor incorrectly identifies an adversary’s values and thus fails to impose a credible threat that produces the required cost-benefit calculations.
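This asymmetry can be illustrated with the standard Tversky–Kahneman value function; the functional form and the parameter estimates below come from the behavioral economics literature, not from this text:

```latex
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \quad \text{(gains)}\\
-\lambda(-x)^{\beta} & \text{if } x < 0 \quad \text{(losses)}
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25
```

With a loss-aversion coefficient $\lambda > 1$, a prospective loss weighs roughly twice as much as an equal gain, so a target that frames compliance as a loss may resist even when resistance is objectively the costlier option.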

Besides clear communication and the imposition of costs, the outcome of coercion is further determined by the capability and resolve of the coercer to follow through. Talk is cheap and coercers must be able to demonstrate their ability to carry out threats should their demands not be met. While both capability and resolve are difficult to assess, the latter is particularly challenging. A coercer may fail to follow through with a threat for a number of reasons. These include, but are not limited to, grandstanding, lack of domestic support, or past failures to carry out threats. To demonstrate resolve, coercers resort to costly signaling that binds them to follow through with their intended actions.

Costly signals can be sent in one of two ways. First, states may choose to tie their hands and force themselves into a specific course of action should their demands not be met. Second, states can incur sunk costs, for example by forward-deploying armed forces to the border or severing diplomatic relations with their adversaries. Either method, however, is not without risk. Costly signaling increases the possibility of armed conflict by forcing states into an inflexible course of action.9 The idea is that the adversary realizes this possible outcome and would, in a timely manner, concede. This is predicated, however, on how well these signals are interpreted and on the outcome of the cost-benefit analysis.

Lastly, the coercer must be able to reassure an adversary that compliance results in the threat being rescinded. Relatedly, coercers must be able to provide an adversary with a means of complying that minimizes damage to its reputation. Great powers, however, find this last requirement challenging given their inherent capabilities, as these, paradoxically, reduce their credibility in the eyes of weaker adversaries. Power imbalances in favor of the coercer may be interpreted as a justification for further demands despite previous concessions. Thus, an adversary may find that resistance is a better course of action in the face of coercive threats.

Cyber Coercion: An Overview

If coercion is the exertion of pressure on an adversary by threatening something of value, then cyberspace is an ideal medium given its growing strategic value.10 Over the past decade, (broadband) connectivity has nearly tripled globally. Similarly, Information and Communications Technology (ICT) usage has grown rapidly over the same period (ITU, 2016). Although greater awareness, education, and improvements in development processes have mitigated certain vulnerabilities, these persist within critical systems. Fortunately, contextual factors such as the unique implementation of cyber infrastructure across states and the resources required to inflict persistent damage temper such concerns. Yet regardless of such reassurances, the fundamental structure of cyberspace assists, if not enables, coercive behavior.

Cyberspace is treated as consisting of three key layers: the physical, the syntactic, and the semantic.11 The physical layer consists of hardware components that store, process, and transmit electrical, optical or radio signals. Within this layer, vulnerabilities are subject to physical and environmental constraints, such as susceptibility to theft or to noise within the electromagnetic spectrum. A step above this is the syntactic layer, through which the representation, processing, storage, and transmission of data is governed by pre-defined rules or protocols. These serve to provide the desired functionality and to ensure interoperability between manufacturers. Vulnerabilities exist through flaws in the implementation of these protocols that may lead to unplanned and undesired outcomes. Finally, the semantic layer presents data in a form that is interpretable and useful to users. At this layer, the conceptualization of cyberspace varies greatly.

From what has been termed the “western consensus”, cyberspace ends where information serves defined strategic goals such as economic growth. Other actors, by contrast, extend cyberspace to include the mental processes of individuals, such that both perception and behavior are influenced by available information, thus introducing another source of vulnerability.12 Yet regardless of this variation, it is important to note that each layer depends on the others for cyberspace to function. Consequently, this interdependence enables the exploitation of cyberspace to meet strategic objectives.

For advocates of coercive cyber operations, arguments are often grounded in the offensive advantage offered by the domain. An offensive advantage exists when new technologies shift the balance of difficulty between conducting offense and defense in favor of the former. Specifically, new technologies are thought to increase the mobility and damage potential of offensive weapons vis-à-vis defensive ones; the machine gun and combat aircraft, for instance, are thought to have provided aggressors with such an advantage. The interconnectivity between the components of cyberspace conceptually grants these advantages. The linkage between the physical, syntactic, and semantic layers means that the disruption of a lower layer adversely affects those above it. Cutting an undersea cable, for instance, prevents the transmission, processing, and receipt of information at the higher layers. Similarly, the corruption of data at the syntactic layer prevents its proper use at the semantic level.

In parallel to this cascading effect, the consequences are also magnified from layer to layer. The loss of communication from a cut cable would immediately result in the disruption of communication at the first two layers; at the semantic layer, the loss of information may adversely affect specific strategic objectives, the severity of which increases over time. Consequently, the coercive potential of cyber operations is contingent upon the ability to (1) cascade damage across layers, (2) magnify consequences, and (3) sustain a persistent threat. And while offensive tools are accessible, those meeting these criteria require organizational maturity and significant economic resources.
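The layer dependency described above can be sketched in a few lines of code. This is a toy illustration of our own, not an artifact from the article; the layer names follow the three-layer model cited in the text.

```python
# A toy sketch of the three-layer model of cyberspace: each layer
# depends on the one beneath it, so disruption cascades upward.

LAYERS = ["physical", "syntactic", "semantic"]  # ordered bottom-up

def cascade(disrupted_layer: str) -> list[str]:
    """Return every layer affected when `disrupted_layer` fails.

    Because each layer depends on the layer below, a failure at any
    level also degrades all layers above it.
    """
    start = LAYERS.index(disrupted_layer)
    return LAYERS[start:]

# Cutting an undersea cable (physical) degrades everything above it:
print(cascade("physical"))   # ['physical', 'syntactic', 'semantic']
# Corrupting protocols (syntactic) affects only the two upper layers:
print(cascade("syntactic"))  # ['syntactic', 'semantic']
```

The asymmetry is the point: attacks lower in the stack propagate further, which is why degradative operations that reach the physical layer carry the most coercive potential.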

While a standardized taxonomy of cyber operations remains elusive, actions in cyberspace may be categorized by intent: disruptive, espionage, and degradative. As implied by the name, disruptive cyber operations aim to disturb the routine functions of their target; examples include website defacement and (Distributed) Denial-of-Service. These operations do not require significant expertise or resources to execute, as the necessary tools are readily available. This ease of use, however, comes at the cost of reduced severity and a lack of persistence, as these threats are easily identified and contained. In contrast, espionage operations are meant to be persistent so as to allow the exfiltration of privileged information. As with their real-world namesake, they provide aggressors with an informational asymmetry over adversaries that may yield a strategic advantage in times of conflict. The use of this information to threaten an adversary, however, has a relatively long time horizon that limits its coercive value. Finally, degradative cyber operations are intended to damage or destroy an adversary’s cyber infrastructure in order to inhibit its strategic interests. These operations exploit the growing importance of cyberspace in sectors such as the military, the economy, and other public services. Actions within this category are designed to cause cascading effects with both technical and strategic consequences. Consequently, degradative cyber operations are ideal for coercion. The case of Stuxnet proves this point.

The features of Stuxnet allowed it to meet the three criteria previously established. While it operated within the syntactic layer of the systems controlling Iranian nuclear centrifuges, it managed to affect both the physical and semantic layers as well. By manipulating the rate at which these devices spun, it was able to inflict physical damage on the hardware. Similarly, by manipulating the protocols within the system it was able to send false information (semantic) to operators suggesting that all was well, thus allowing it to persist. Strategically, the physical damage inflicted on the centrifuges limited the amount of weapons-grade fissile material produced, which, in turn, affected the Iranian regime’s nuclear weapons program. These features make Stuxnet, and related operations, viable coercive tools – at least in theory.

In reality, however, Stuxnet and other similar operations have resulted in coercive failure despite meeting the aforementioned criteria. Despite growing technical sophistication alongside a vulnerable cyberspace, coercive cyber operations are far less successful than expected. Yet their dismal performance may have less to do with technological constraints than with the organizational and strategic considerations associated with their execution.

Coercive Failure in Cyberspace and Its Future

To better understand the root causes of the coercive failure of cyber operations, the attributes of successful coercion need to be revisited. In summary, these are: clear communication of a threat, suitable cost-benefit calculations, the credibility of the coercer, and reassurances from the coercer upon compliance. While technological advancements allow aggressors to meet the contingent technological requirements for success, these attributes are either infeasible or poorly understood in the context of cyberspace.

For coercion to succeed, an aggressor needs to be able to clearly communicate its threat. In cyberspace, this is easier said than done. Unlike conventional means, cyber operations do not come with a return address. The attribution problem limits the ability of targets to identify the source of an operation. While cyber operations are most frequently observed in the context of an on-going dispute, uncertainty as to the identity of the aggressor muddies the message. What action should be stopped on the part of the target? Are we even certain that X is the source of the operation? Questions such as these hinder the communicative exchange between the coercer and the target, which, in turn, limits the efficacy of coercion as a whole. And while experience now allows targets to move beyond the question of “who was behind it?” to “what do we do about it?”, the consequences of conceding or resisting remain a pressing issue. That is to say, knowing the identity of the coercer does not alleviate the other considerations bearing on coercion.

The decision to comply or resist depends on the costs and benefits associated with either course of action. In the context of cyberspace, this decision is predicated on (1) how a target perceives the domain and (2) the larger strategic picture. As previously mentioned, there is no unified definition of what cyberspace is. Available research suggests that the value of cyberspace is based on existing worldviews.13 Liberal regimes treat cyberspace as an enabler of economic growth and democratic values; illiberal regimes, in contrast, perceive it as a threat to their legitimacy. Consequently, the outcome of coercive cyber operations is contingent on the recognition and exploitation of these variations. At one end of the spectrum, cyber operations that threaten the banking sector of a target interested primarily in controlling online content will not generate sufficient cost to compel compliance. At the other end, threatening physical harm in order to limit freedom of speech in a society that values it would incur significant resistance. Over the past decade, the majority of coercive cyber operations appear to have fallen into one of these extremes, thus resulting in failure.

Assuming that threats are clearly communicated and correctly aligned, coercers are still required to demonstrate their resolve. In the physical domain this is easily done via clearly worded threats or demonstrations of force. Within cyberspace, demonstrating these capabilities affords targets the opportunity to develop countermeasures. Although Smeets and Lin (2018) argue that signaling resolve in this manner is unnecessary and that past actions should serve as demonstrations of capability, this is not sufficient for coercers that have only just begun to use the domain for this purpose.14 Apart from burning cyber capabilities through demonstrations, other resources may be imperiled as well. Stuxnet, for instance, required not only advanced engineering skills but also an existing espionage network capable of delivering the malware over an air-gapped network. Its discovery and analysis would certainly have tipped off the Iranian regime to the presence of this network.

Finally, the success of coercion hinges on the ability of the coercer to guarantee that compliance results in the cessation of threats. While a coercer may indeed halt coercive operations in exchange for compliance, this does not necessarily mean that other, non-coercive operations will cease. In light of the growing importance of cyberspace, cyber espionage appears to have become a routine occurrence between states. Yet while the activity is routinely accepted as normal interstate behavior, the tools and techniques required for espionage and for coercion (degradative cyber operations) are quite similar. Consequently, the discovery of such tools can lead a target to believe it is the object of a new coercive campaign despite previous concessions. The inability to discern intent from the mere presence of these tools, combined with previous coercive behavior, fosters a belief in malicious intent on the part of the target and reduces the chances of coercive success in the future.


Despite advances in capabilities and their growing frequency, the success of coercive cyber operations is not a foregone conclusion. Although states are increasingly dependent on the domain to achieve their strategic objectives, the exercise of coercion remains subject to long-standing strategic considerations. While certain scholars and pundits continue to espouse its revolutionary potential, cyber operations are fast becoming perceived as an adjunctive foreign policy instrument. Rather than being exercised independently, in the coming years cyberspace will be one of many means by which states pursue their stated strategic objectives through coercion.

Miguel Alberto Gomez
Senior researcher at the Center for Security Studies, ETH Zurich
@mgomez85

1 Iasiello, E. (2013). Cyber attack: A dull tool to shape foreign policy. In K. Podins, J. Stinissen & M. Maybaum (Eds.), 2013 5th International Conference on Cyber Conflict (CyCon), 451-470. IEEE; Jensen, B., Maness, R. C., & Valeriano, B. (2016). Cyber Victory: The Efficacy of Cyber Coercion. Annual Meeting of the International Studies Association; Valeriano, B., & Maness, R. C. (2014). The dynamics of cyber conflict between rival antagonists, 2001-11. Journal of Peace Research, 51(3), 347-360.

2 Schelling, T. C. (1966). Arms and Influence. New Haven, CT: Yale University Press. George, A. L. (1991). Forceful persuasion: Coercive diplomacy as an alternative to war. US Institute of Peace Press.

3 Jakobsen, P. V. (2006). Coercive Diplomacy. In Collins, A. Contemporary Security Studies. Oxford: Oxford University Press, 225-247.

4 Valeriano, B., & Maness, R. C. (2015). Cyber war versus cyber realities: cyber conflict in the international system. Oxford; New York: Oxford University Press.

5 Fearon, J. D. (1995). Rationalist Explanations for War. International Organization, 49(3), 379-414.

6 Herrmann, R. K., Voss, J. F., Schooler, T. Y. E., & Ciarrochi, J. (1997). Images in international relations: An experimental test of cognitive schemata. International Studies Quarterly, 41(3), 403-433.

7 Schaub, G. (2004). Deterrence, compellence, and prospect theory. Political Psychology, 25(3), 389-411.

8 Kahneman, D. (2011). Thinking, fast and slow (1st ed.). New York: Farrar, Straus and Giroux.

9 Fearon, J. D. (1997). Signaling foreign policy interests: Tying hands versus sinking costs. Journal of Conflict Resolution, 41(1), 68-90.

10 Kuehl, D. T. (2009). From Cyberspace to Cyberpower: Defining the Problem. In F. D. S. Kramer, Stuart H.; Wentz, Larry (Ed.), Cyberpower and National Security. Dulles: Potomac Books, 24-42.

11 Libicki, M. C. (2009). Cyberdeterrence and cyberwar. Santa Monica: Rand Corporation.

12 Giles, K., & Hagestad, W. (2013). Divided by a Common Language: Cyber Definitions in Chinese, Russian and English. 2013 5th International Conference on Cyber Conflict (CyCon).

13 Hare, F. (2010). The Cyber Threat to National Security: Why Can't We Agree? Conference on Cyber Conflict, Proceedings 2010, 211-225. Rivera, J. (2015). Achieving Cyberdeterrence and the Ability of Small States to Hold Large States at Risk. 2015 7th International Conference on Cyber Conflict - Architectures in Cyberspace (Cycon), 7-24.

14 Smeets, M., & Lin, H. S. (2018). Offensive cyber capabilities: To what ends? 2018 10th International Conference on Cyber Conflict (CyCon), 65-71.

Cyber cells: a tool for national cyber security and cyber defence (2013-09-17)


Theme[1]: Cyber cells are effective tools that enable countries to operate, defend themselves or go on the offensive in a specific area of cyberspace, and they are destined to complement existing cyber security and cyber defence capabilities.

Summary: Except for countries that are pioneers in cyber security and cyber defence such as the US, China and Israel, these days most nations are developing basic cybernetic capabilities, such as information and communications technologies and the organisations and procedures that will make them work when they reach maturity. When this happens it will be necessary to devise the organisations and operational procedures –cyber cells– that allow countries to operate using those previously established capabilities. This paper describes the concept of cyber cells, their functions, tasks and areas of operation, as well as the enablers that will allow them to work. Although this is a next-generation capability that will complement those now being set up, the authors argue that Spain should consider what kind of cyber cells would best complement the cyber defence and cyber security capabilities being established for use by the military and the national security forces.

Analysis: After several decades shaped by spectacular technological development, a significant lack of attention from politicians and overconfidence among the general public about the power, impact, penetration and political, social and economic influence of information and communications technologies (ICT), most governments have begun to take note of both the possibilities and the risks that cyberspace entails. Cyber defence and cyber security strategies and organisations abound, and there are many recent studies on them.[2]

Cyber space was initially considered a global common good for all of humanity, but it is actually far from being neutral, free and independent. In fact, cyber space has been rife with conflict from its very outset and countries such as China, the US, Russia, Israel and Iran are spending huge amounts in terms of human, technical and financial resources to develop cyber forces, with a dual goal: to ensure the security and defence of their specific patches of cyber space while wielding power and influence among their citizens, allies and potential adversaries.

At the same time, as international regulation of the Internet is impossible –and neither is it subject to global governance–, cyber space has seen an increase in the risks associated with the security of advanced countries: a relentless rise in cyber crime, the use of cyber space by terrorist groups for activities involving financing, intelligence gathering, propaganda and recruiting, large-scale cyber espionage between States and/or companies and a spike in crimes against the privacy of Internet users are just some of the challenges that security forces tasked with cyber security must confront.

In the same way and with regard to national defence, the armed forces rely on information and communications technologies to communicate with each other, exercise command and control of operations, obtain and distribute information and intelligence, carry out surveillance and reconnaissance tasks or acquire targets and coordinate fire. So these technologies serve as force multipliers. They optimise the conception, planning and execution of operations and can shape how a conflict evolves and who wins. Therefore, possessing a robust, secure and resilient ICT infrastructure, systematising the dimensions that make up cyber space and integrating them into operational planning or the capability to act in this realm are some of the issues to which the armed forces are paying most attention.

Risk in cyber space
The state of risk in cyberspace is not homogeneous. This is the case both because threat levels differ across specific national cyberspaces and because the cyber security and cyber defence systems and capabilities of different countries are far from homogeneous. Countries can be broken down into four major groups, depending on the level of implementation and functionality of their national systems of cyber security and cyber defence:

  • Group 1, made up of countries with an operational national system of cyber security and cyber defence, formally defined as such and constantly being evaluated, revised and upgraded. Countries in this category would include the US, China and Israel.
  • Group 2, made up of countries which are in the formal process of building national systems of cyber security and cyber defence. It would include nations such as Australia, France and Iran.
  • Group 3, made up of countries that are in the process –formal or informal– of defining their national cyber security systems. The vast majority of countries would fall into this category, including Spain.
  • Group 4, comprising countries which have not yet undertaken a process of defining, be it formally or informally, their national cyber security system.

The US government recently acknowledged that an exponential increase in the volume of resources that its adversaries –particularly China– are earmarking for their cyber forces and the growing technical sophistication of the attacks that these forces carry out are making it tremendously difficult to analyse and research the attacks and therefore to maintain an efficient and effective national defence in cyber space.

Regardless of the origin and nature of the threat it faces, the cyber force of a country should be based on a set of capabilities that allow it to reach a known and controlled state of risk. This state of risk can be attained only by states whose specific cyber spaces feature levels of maturity, resilience and security which, over the short term, are able to withstand TIER I and TIER II level attacks and recover from assaults at the TIER III and IV levels. This is outlined in Figure 1.

Figure 1. Levels of cybernetic threat

Traditional capabilities –grouped within the concepts of information security and information assurance– are necessary but not enough in and of themselves to guarantee national cyber security and national cyber defence. So the world’s major powers and international organisations such as NATO and EUROPOL are working actively to redefine these capabilities and develop new ones, both to defend and attack.

The increase in the state of risk in cyber space means governments must develop specific capabilities to enhance security and defence in it. One of these is the cyber cell. This is an advanced capability which can complement traditional cyber security and cyber defence capabilities and be used both in a defensive way and to carry out offensive operations in cyber space. Cyber cells are prepared to resolve those operational problems which existing cybernetic means cannot address with sufficient flexibility or effectiveness, and they can be integrated into both police and military forces. With these elements in mind, we will now present the concept of cyber cells and detail how they might be organised and work and what their responsibilities might be.

The cyber cell concept
A cyber cell could be defined as a capability of high functional specialisation and of a dual nature –both defensive and offensive–. Its function is to carry out a task with the goal of guaranteeing the security and defence of a specific area of cyber space. Depending on the operational needs and on the area in which it operates, a cyber cell might be assigned three major functions:

  • To carry out specific cybernetic operations or ones in conjunction with other operational dimensions (land, sea, air and space).
  • To support the evaluation and improvement of the level of maturity, resilience and security of national, allied and multinational cybernetic capabilities.
  • To contribute to experimenting with new operational concepts and cybernetic capabilities.

In the same way, and depending on the function it is carrying out at any given time, a cyber cell can have one of the following four tasks assigned to it: (1) assurance; (2) experimentation; (3) exercises; and (4) operation. In the first three cases the cyber cell will assume the role of a ‘red team’ under which it will simulate the behaviour of a potential adversary so as to try and exploit the vulnerabilities of the area being evaluated. However, when a cyber cell is in operational mode, it will be able to carry out both defensive and offensive cybernetic activity.

  1. Assurance: this task allows the cyber cell to analyse the state of maturity, resilience and security of the area in which it operates.
  2. Experimentation: here the cyber cell might do a wide variety of things, such as study new operational concepts or evaluate the maturity, resilience and security of new cybernetic capabilities that complement existing ones.
  3. Exercises: during exercises the cyber cell must test what it can do. These exercises will be designed and planned with the goal of simulating situations as close as possible to those found in the real world.
  4. Operation: when operational needs require it, the cyber cell must engage in defensive or offensive actions, or ones to exploit a given area.

Each of the four tasks assigned to a cyber cell will be executed in one of the following five areas:

  1. Local, limited to a local ICT system.
  2. National, limited to a local area or a set of local areas, the command and control of which is exercised by a national body.
  3. Allied, limited to a local area or set of local areas, the command and control of which is exercised by an agency of NATO or Europol or by bodies belonging to one of their member states.
  4. Possible adversaries, limited to a local area or set of local areas, the command and control of which is exercised by organisations belonging to possible adversaries. These adversaries are heterogeneous in nature; they can be States or non-State actors, such as terrorist groups, cyber gangs or so-called hacktivist groups.
  5. Multinational, limited to a local area or set of local areas, the command and control of which is exercised by a multinational organisation or by a State belonging to it.

Figure 2. Areas of activity of a cyber cell
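The taxonomy above (four tasks, five areas of activity, and the red-team versus operational role) can be outlined as a simple data model. The sketch below is purely illustrative; the class and member names are our own and are not drawn from any official doctrine.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Task(Enum):
    ASSURANCE = auto()        # evaluate maturity, resilience and security
    EXPERIMENTATION = auto()  # study new operational concepts and capabilities
    EXERCISES = auto()        # simulate realistic scenarios
    OPERATION = auto()        # real defensive or offensive activity

class Area(Enum):
    LOCAL = auto()
    NATIONAL = auto()
    ALLIED = auto()
    ADVERSARY = auto()
    MULTINATIONAL = auto()

@dataclass
class Assignment:
    """A cyber cell's current task, executed in one of the five areas."""
    task: Task
    area: Area

    @property
    def role(self) -> str:
        # In assurance, experimentation and exercises the cell acts as a
        # 'red team'; only in operational mode does it act defensively and
        # offensively for real.
        return "operational" if self.task is Task.OPERATION else "red team"

assignment = Assignment(Task.EXERCISES, Area.NATIONAL)
print(assignment.role)  # red team
```

A model of this kind makes the constraint explicit: the red-team role follows from the task, not from the area in which the cell happens to be acting.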

Enablers of cyber cells
Before countries create cyber cells, they must have the right enablers in place. By this we mean those defensive and offensive cybernetic means that have a sufficient level of maturity, are already established in the country and are at the disposal of both the security forces and the military. Their existence under the terms described here will make it possible for cyber cells to carry out their assigned tasks with a reasonable likelihood of success.

These enablers are the following: command and control, organisation, a legislative framework, methodology, knowledge of the cyber situation, risk analysis and management, the sharing of information, technology, staff and constant training. Command and control of cyber cells should be exercised at the strategic, operational and tactical levels, each of which has a set of responsibilities and activities assigned to it so that the cyber cells can operate reliably:

  • At the strategic level, the high-level goals, priorities and achievements that the cyber cell must attain in its assigned task are defined. This level must also guarantee the viability and evolution of the cell, providing all necessary human, financial and technological resources.
  • At the operational level, all activities related to the assigned task are authorised and directed. Each activity is controlled by an operational team (OT), so that there are as many operational teams as there are activities comprising the task; the make-up of these teams is determined by the nature of the task.
  • At the tactical level, those in charge of each operational team define the tactical plans related to the activities, outlining in the greatest possible detail each of the actions that make up an activity, with input from those in charge of the tactical teams assigned to each action (each operational team is supported by as many tactical teams as there are actions making up the activity).
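The resulting command and control structure is a small tree: a cell executes a task, each activity of the task gets one operational team, and each action within an activity gets one tactical team. The sketch below illustrates that invariant; all names are hypothetical examples, not taken from any real organisation.

```python
from dataclasses import dataclass, field

@dataclass
class TacticalTeam:
    action: str  # one tactical team per action

@dataclass
class OperationalTeam:
    activity: str
    tactical_teams: list[TacticalTeam] = field(default_factory=list)

    def add_action(self, action: str) -> None:
        # each action within the activity gets its own tactical team
        self.tactical_teams.append(TacticalTeam(action))

@dataclass
class CyberCell:
    task: str
    operational_teams: list[OperationalTeam] = field(default_factory=list)

    def add_activity(self, activity: str) -> OperationalTeam:
        # as many operational teams as there are activities in the task
        team = OperationalTeam(activity)
        self.operational_teams.append(team)
        return team

# Hypothetical assurance task with one activity broken into two actions.
cell = CyberCell(task="assurance")
ot = cell.add_activity("perimeter assessment")
ot.add_action("port scan")
ot.add_action("vulnerability analysis")
print(len(cell.operational_teams), len(ot.tactical_teams))  # 1 2
```

Reporting then simply follows the tree upwards: tactical team leaders report to their operational team leader, who reports to the cell's operational leader.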

Figure 3. External and internal contexts of cyber cells

Despite the difficulty inherent in identifying those directly responsible for an act of aggression in cyberspace, and despite the ubiquity, high level of inter-connectivity and cross-border nature of cyberspace, the tasks, activities and actions of cyber cells must remain within the bounds of national and international law. For the legal framework to serve as an enabler, it must be up to date in its regulation of the main elements of cyber warfare and cyber crime, the regulatory frameworks that surround them and their definition as criminal offences. It must also regulate the procedural aspects of electronic evidence, criminal justice and international cooperation. Finally, it must be integrated into national and international legislation on the prevention of armed conflicts and on the exercise of self-defence of sovereignty over national cyberspace.

Cyber cells must have a working methodology that features a common language, homogeneous theoretical and technological foundations and procedures that standardise their functioning at the strategic, operational and tactical levels. They must also be provided with immediate knowledge of the country’s own cyberspace, of allied and multinational cyberspace and of that of potential adversaries or any other group of interest, as well as of the status and availability of the operational capabilities needed to plan, lead and manage the activities of the assigned cybernetic mission.

Knowledge of the cybernetic situation is obtained by combining intelligence and operational activities in cyberspace with those carried out in electromagnetic space and in the other dimensions of the operational environment (land, sea, air and space). Integrating the cybernetic situation with the rest of the capabilities is therefore essential to achieving the goals of the assigned task. The processes, procedures and capabilities associated with knowing the cyber situation must be developed, always in line with the working methodology in place, so that those in charge of the cyber cell attain complete knowledge of the overall cyber situation. This knowledge must give the operational leader of the cyber cell real-time visibility of local and national networks, systems and services, of the actions of the potential adversary on the opposing networks, systems and services, and of the possible impact of those actions on the achievement of operational goals. It will also help cyber cells to make decisions with the best available information and intelligence and to act knowing the operational effect of their decisions on the mission as a whole.

Each task assigned to a cyber cell carries with it a set of risks that will depend on the nature of the task and the area in which the cell is acting. Therefore, a continuous process of dynamic risk assessment and management must be maintained throughout all phases of the task. In each phase, all available information will be collected, analysed and distributed appropriately to the other actors involved. It will thus be necessary to devise mechanisms for distributing information so as to have reliable and up-to-date knowledge of the cybernetic situation, optimise results, improve the maturity, resilience and security of national cyberspace and manage cybernetic crises.
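Continuous risk assessment of this kind is commonly implemented as the periodic re-scoring of a risk register (likelihood times impact against an acceptance threshold). The sketch below is a generic illustration of that pattern under assumed 1-to-5 scales; the example risks and the threshold value are invented, not drawn from any national methodology.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (critical)

    @property
    def level(self) -> int:
        # simple likelihood-times-impact score
        return self.likelihood * self.impact

def reassess(register: list[Risk], threshold: int = 12) -> list[Risk]:
    """Return the risks that currently exceed the acceptance threshold,
    most severe first, so they can be treated and reported in order."""
    exceeded = [r for r in register if r.level >= threshold]
    return sorted(exceeded, key=lambda r: r.level, reverse=True)

# Hypothetical register for one phase of an assigned task; in a dynamic
# process the scores are updated and reassess() re-run at every phase.
register = [
    Risk("attribution error", likelihood=2, impact=5),
    Risk("tool detection by adversary", likelihood=4, impact=4),
    Risk("collateral service outage", likelihood=2, impact=3),
]
for risk in reassess(register):
    print(risk.name, risk.level)  # tool detection by adversary 16
```

Re-running the assessment as scores change, and distributing its output to the other actors involved, is what makes the process dynamic rather than a one-off analysis.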

Technology is the central component of cyberspace, so cyber cells must be equipped with state-of-the-art technological capabilities. They must also be staffed by highly qualified and specialised professionals covering every area of knowledge required by the activities and actions that make up the assigned tasks. A continuous and highly specialised training plan will also be needed, tailored to each member’s role in the cyber cell and keeping pace with the constant technological transformation of cyberspace and its changing state of risk. Training will thus be one of the key elements determining the success or failure of cyber cells.

Organising a cyber cell
Figure 4 shows the organisation of a cyber cell as deduced from the command and control structure described in the section on enablers. The person in charge of the cyber cell’s area is responsible for translating the strategic goals, planning and overseeing the execution of the tasks assigned to the cyber cells, providing knowledge of the cyber situation at all times, directing those in charge of the operational aspects of the mission, planning training, assessing results, managing risks and providing the necessary technical and human resources. Reporting to this person are the operational leaders, who inform the area leader of the operational and tactical evolution of the assigned tasks and have similar responsibilities at a lower level.

Figure 4. Structure of a cyber cell

Each operational team leader will be in charge of carrying out one of the cyber cell’s activities. This includes reporting to the operational leader of the cyber cell on the progress of the assigned activity, dividing the activity into actions, breaking those actions down in as much detail as possible for the tactical teams, planning and overseeing the work of those teams, carrying out a continuous process of analysis and management of the assigned activities and producing the relevant reports on each activity. Finally, each tactical team leader will be in charge of one or more actions: carrying out the actions assigned by the operational team leader, reporting to that leader on their progress, conducting the continuous process of analysis and management of the assigned actions and producing the relevant reports on each action.

Conclusions: A cyber cell can be an efficient tool for security forces and the military to improve the security and defence of a given area of cyber space. Cyber cells are composed of operational and tactical teams acting under the control of a strategic cybernetic command and require that from the outset there be a set of mature, traditional cyber security and cyber defence capabilities: a modern ICT infrastructure, a set of cybernetic capabilities and staff that is experienced and used to operating in this kind of setting.

From there on, cyber cells could carry out cybernetic operations of both a defensive and an offensive nature, support the assessment and improvement of national, multinational or allied capabilities, enable experimentation with new operational concepts and train the people assigned to the cell. Implementing these cells can significantly improve a country’s cybernetic defence and offence capability, contributing to control of cyberspace and to the creation of a modern, effective national cyber force that is fully interoperable with allied cyber forces. In the specific case of Spain, as with its allies, efforts must be concentrated on increasing the maturity of the cybernetic capabilities of the security forces and the military over the short and medium term as a step towards the effective establishment of advanced capabilities like cyber cells. However, as its allies already do, Spain should consider establishing them so that capabilities under development can become operational as soon as possible.


[1] The authors are part of the ‘cyber cell’ working group led by THIBER, The Cybersecurity Think Tank, which in turn is part of the Institute of Forensic and Security Sciences at the Autonomous University of Madrid. In alphabetical order, they are: Guillem Colom Piella, who holds a PhD in international security; José Ramón Coz Fernández, PhD in computer sciences and BSc in physical sciences; Enrique Fojón Chamorro, computer sciences engineer and member of ISMS Forum Spain; and Adolfo Hernández Lorente, computer sciences engineer and managing director for security at Ecix Group.

[2] Applegate, Scott D. (2012), Leveraging Cyber Militias as a Force Multiplier in Cyber Operations, Center for Secure Information Systems, George Mason University, Fairfax, Virginia.

Berman, Ilan (2012), The Iranian Cyber Threat to the US Homeland, appearance before the Homeland Security Committee of the House of Representatives, Washington, D.C., 26/IV/2012.

Cabinet Office (2012), The UK Cyber Security Strategy: Protecting and Promoting the UK in a Digital World, HMSO, London.

Defence Science Board (2013), Task Force Report: Resilient Military Systems and the Advanced Cyber Threat, US Department of Defense, Washington DC.

Department of Defense (2013), Defense Budget Priorities and Choices – Fiscal Year 2014, US Government Printing Office, Washington DC.

Dev Gupta, Keshav, & Jitendra Josh (2012), ‘Methodological and Operational Deliberations in Cyber-attack and Cyber-exploitation’, International Journal of Advanced Research in Computer Science and Software Engineering, vol. 2, nr 11, pp. 385-389.

Liles, Samuel, & Marcus Rogers (2012), ‘Applying traditional military principles to cyber warfare’, Cyber Conflict (CYCON), NATO CCD CoE Publications, Tallinn, pp. 1-12.

Office of Public Affairs (2010), US Cyber Command Fact Sheet, Department of Defense, Washington, DC.

Office of the Secretary of Defense (2013), Military and Security Developments Involving the People’s Republic of China 2013, US Government Printing Office, Washington DC.

Cyber Security in Spain: A Proposal for its Management (ARI)
2010-07-29

Theme: Social, economic and cultural relations are increasingly dependent on information and communication technologies and infrastructures (cyberspace), making it necessary to devise a national security system (cyber security) that can manage the risks that threaten their adequate operation.

Summary: Information and Communication Technologies (ICTs) have contributed to the wellbeing and progress of societies to such an extent that countless public and private dealings depend on these technologies. Through the years, the advances in ICTs have given rise to threats that make it necessary to manage the security of these technologies. Early on, cyber security was conceived as a reactive system to protect information (Information Security); however, it later evolved to assume a proactive position that identifies and manages the threats to cyberspace (Information Assurance). This ARI explores the concepts of cyberspace and cyber security, the known risks and threats, their current management in Spain and the need to develop a national cyber security system that fosters the integration of all of the public and private agents and resources, to take advantage of the opportunities offered by the new technologies and to face the challenges that they present.


Introduction to the Concepts of Cyberspace and Cyber Security
The terms cyberspace and cyber security have now become widely used by broad sectors of society. Nevertheless, before analysing the state of affairs of cyber security in Spain and proposing an approach to its management, it is essential to define the concept of cyberspace in such a manner that everyone affected by it is aware of its social, economic and cultural implications. Once the concept of cyberspace has been defined, the concept and the need for cyber security will be more easily understood.

Cyberspace is a concept that is used within the ICT community to refer to the whole of the physical and logical media that comprise the information and communication system infrastructures. To attain a definition of cyberspace that enables an understanding of the implications mentioned above, it is helpful to consider the concept of service, which is conceived as something that a user or consumer receives from a provider.

Provider-consumer relations can emerge not only between companies and domestic users, but also between and among companies, public administrations and citizens, and of course individuals. These relations existed long before the advent of ICTs in the mid-19th century with the invention of the telegraph, and naturally before their revolution with the discovery and application of the properties of semi-conductive materials that enabled the birth of the ‘digital era’. However, it was precisely from that moment that ICTs became the catalysers of the traditional services that companies provided to their customers, spurring both their extension and their economic efficiency, while at the same time enabling the emergence of new services.

Therefore, cyberspace can be defined as the whole of ICT-based media and procedures that are configured for the provision of services. This definition immediately explains why cyberspace is now an essential part of our societies and economies and how it could even become a determining factor of the evolution, or perhaps the convergence, of cultures. Hence the importance of protecting cyberspace. In the past, cyber security focused on information security, as it solely set out to protect information from unauthorised access, use, disclosure, interruptions, modifications or destruction. Today this approach is evolving into a model of cyberspace risk management (known as ‘information assurance’). Thus, cyber security now also entails the application of a risk analysis and management process regarding the use, processing, storage and transmission of information or data and the systems and processes used, based on internationally accepted standards.

One of the reasons behind this new approach to security is that cyberspace can be characterised in terms of a given unit, such as an ICT system that provides a service, in such a manner that the security of the system is attained when the threats to it are known and controlled. In fact, these two approaches, information security and information assurance, are different yet complementary, and they are often erroneously used interchangeably. In short, cyber security must be formulated proactively, as a continuous process of analysis and management of the risks associated with cyberspace.

State of Risk of Cyberspace
The fear of the catastrophic consequences of a hypothetical ‘cyber-Katrina’ or a ‘cyber-9-11’ has led countries such as the US, France, the UK, Israel and South Korea, as well as international organisations including the UN and NATO, among others, to become aware of the importance and the need for a secure cyberspace. For this reason, they have developed or are developing regulatory frameworks and specific plans and strategies for the protection of cyberspace. In a word, they have taken the decision to systematically manage the security of the cyberspace for which they are responsible.

On the other hand, China, Iran, North Korea, Russia and Pakistan have acknowledged their strategic interest in cyberspace as a vehicle to attain positions of economic and political leadership in their geographical scopes of influence. As a result, they are defining policies and making great economic investments that target ICT resources and human resource training, in order to establish ‘a belligerent defence’ of their cyberspace. These countries, or at least their territories, have been identified as the sources of most of the aggressive actions that have taken place in cyberspace in recent years. The constant and accelerated evolution of ICTs has made for increasingly sophisticated attacks, giving rise to an ever more hostile cyberspace, thus forcing cyber security managers to develop state-of-the-art technical and human resources to confront the threats and their possible impacts.

Once the assets to be protected have been identified and assessed, the next step is to detect the possible threats, which can be highly innovative and diverse. The threats to cyberspace are embodied by cyber attacks, which, depending on their origin and impact, can be classified in the following categories:

  • State-sponsored attacks. The conflicts of the physical or real world spill over into the virtual world of cyberspace. In recent years, cyber attacks have been launched on the critical infrastructures of countries and on very specific, though equally strategic objectives. Some such examples, which are well known to the general public, include the attack on part of Estonia’s cyberspace in 2007, which temporarily rendered much of the Baltic country’s critical infrastructures useless, as well as the cyber attacks on the classified networks of the US government by hackers based in China.

  • Terrorism and political and ideological extremism. Terrorist and extremist groups use cyberspace to plan their actions, to publicise them and to recruit followers to carry them out. These groups have already acknowledged the strategic and tactical importance of cyberspace for their interests.

  • Attacks by organised crime. Organised crime groups (cyber gangs) have begun to move their action to cyberspace, exploiting the anonymity potential offered by this medium. These types of groups aim to obtain sensitive information for later fraudulent use and as a means to procure large economic profits. According to FBI data,[1] in 2009 the cybercrimes committed by organised crime groups generated losses of over US$560 million among US companies and individuals alike.

  • Low-profile attacks. These types of attacks are usually perpetrated by people with enough ICT expertise to launch cyber attacks of very diverse natures, for essentially personal reasons.

A quick glance at the types of threats and impacts on cyberspace assets and dependent services shows that whilst ICTs make possible more and better services in many areas of our societies, they also increase the risk of attacks on such services. This is further aggravated by the extension and popularisation of ICTs, which weaken the lines of defence of the goods to be protected. It is just as easy for one person to access cyberspace to manage his/her bank accounts from home as it is for another to find online information on how to break the security of that service, steal that person’s private codes and usurp his/her identity.

Cyber Security Management in Spain
Once the overall scope of cyberspace and its threats are defined, it is easy to understand the difficulty involved in ensuring its security in a given part of the whole. If we are to speak of cyber security in a given nation, we must consider at least two dimensions: the protection of the goods, assets, services, rights and freedoms that depend on State jurisdiction, and the shared responsibility for cyber security with other States, either bilaterally or through supra-national bodies.

In other words, the challenge resides in ensuring that the combination of partial solutions applied by States, albeit with a certain degree of coordination, will resolve the overall problems created by technologies that break down borders. Cyberspace is in constant growth and it is evolving so fast and reaching such a point of proliferation that it essentially sustains the social, economic and cultural relations and structures that are fundamental for a country’s growth and development.

With reference to the first dimension of the problem, it is necessary to identify the assets in Spain that depend on cyberspace, the existing regulations, the governing bodies responsible for this area and the specific participants. Though the protection of cyberspace encompasses every asset and agent imaginable, it must essentially focus on the protection of critical infrastructures, the business sector and individual rights and freedoms.

Spain’s critical infrastructures are grouped into the following 12 sectors: administration, food, energy, space, the financial and tax system, water, the nuclear industry, the chemical industry, research facilities, health, transport and information and communication technologies. In all of these sectors, the degree of cyberspace penetration for both the internal management and the provision of services reached its critical level some time ago. Any contingencies that affect any of the assets belonging to any of the 12 strategic sectors could potentially jeopardise Spain’s national security.

As regards the Spanish business sector, the vast majority of the large corporations have a sufficiently developed internal organisation that enables them to implement activities and measures that fall within information security and information assurance practices. In the case of the small- and medium-sized enterprises and self-employed workers (99% of the total),[2] the lack of economic and human resources is an obstacle for the implementation of cyber security, although their activities are fundamentally sustained by ICTs. The government is currently promoting access to the ICTs and good cyber-security practices among Spanish companies and self-employed workers through the funding lines of the Plan Avanza.[3]

As regards the country’s citizens, the penetration index of the information society services (electronic mail, social networks and electronic commerce) is now high enough[4] for any of the types of threats described above to gravely affect individual rights and freedoms.

The Current Situation of Cyber Security in Spain
Unlike other countries around it, Spain has not yet defined specific and comprehensive legislation for cyber security. Whilst legislation does exist in different ministerial areas, it has not been developed on the basis of a common policy that reflects the national and strategic scope of cyber security.

Royal Decree 3/2010, of 8 January, which regulates the National Security Scheme in the area of Electronic Administration,[5] is a good place to start; however, as its very name suggests, this law solely covers the public administration sector, leaving out other important sectors for cyber security management, such as other critical infrastructures, companies and citizens. In addition to the Royal Decree mentioned above, there are national, European and international laws that address the issue of cyber security. These include the Organic Law on Data Protection, the General Telecommunications Law and the Information Society and Electronic Commerce Law.

Despite the existence of this regulatory framework, in some cases its degree of compliance is distressingly low, which implies an increase in threats to cyberspace. The competences associated with cyber security management are distributed among a group of bodies and institutions that report to different government ministries. Among others, these include:

  • The National Cryptology Centre (Centro Criptológico Nacional, CCN), which reports to the National Intelligence Centre (Centro Nacional de Inteligencia, CNI). This centre aims to manage cyberspace security at all three levels of public administration: state, regional and local. The CCN-CERT (Capacidad de Respuesta ante Incidentes de Seguridad, Capacity to Respond to Information Security-Related Incidents) is a national alert centre that works with all of the public administrations to respond quickly to security-related incidents in its area of cyberspace. Moreover, this agency is the highest body responsible for classified national information security.

  • The National Institute of Communication Technologies (Instituto Nacional de Tecnologías de la Comunicación, INTECO), which answers to the Ministry of Industry, Tourism and Trade, handles cyberspace protection for Spain’s small and medium-sized enterprises and for private citizens, through its CERT (Computer Emergency Response Team).

  • The National Centre for Critical Infrastructure Protection (Centro Nacional para la Protección de las Infraestructuras Críticas, CNPIC), which reports to the Spanish Ministry of the Interior, promotes cyber security relating to these infrastructures.

  • The Civil Guard’s Telematic Crime Group (Grupo de Delitos Telemáticos de la Guardia Civil) and the National Police Information Technologies Crime Investigation Unit (Unidad de Investigación de la Delincuencia en Tecnologías de la Información de la Policía Nacional), both of which report to the Ministry of the Interior, work to combat crime in cyberspace.

  • The Spanish Data Protection Agency (Agencia Española de Protección de Datos, AGPD), which reports to the Ministry of Justice, enforces compliance with personal data protection regulations.

Moreover, the regional autonomous governments have centres equivalent to those of the state. These include the Valencian Community’s CSIRT-CV and the Data Protection Agencies of both the Community of Madrid and the Catalan Regional Government, which are similarly responsible for cyber security management within their respective autonomous regions. In short, despite the existence of bodies with clearly defined responsibilities in different areas of the public administrations, Spain lacks a single body at the highest tier of government to assume the strategic value of cyber security and exercise the necessary leadership, so that all of the other bodies can operate in accordance with a single national policy.

Spanish industry in connection with cyber security is in the middle of a growth and development process. This is reflected in INTECO’s most recent ‘Catalogue of Companies and Security Solutions’,[6] which counts over 1,000 Spanish cyber security companies. In 2009, the main companies in the sector came together in the National Cyber-Security Advisory Council (Consejo Nacional Consultor sobre Ciber-Seguridad, CNCCS), which aims to foster the protection of cyberspace, making itself available to government agencies and private organisations to offer guidance on cyber security issues and to strengthen the resultant technological innovation and economic growth.

The companies soon acknowledged the strategic value of cyberspace, both their own and the globally conceived cyberspace, thus giving rise to security departments in their organisations and groups such as the CNCCS. Nevertheless, there are virtually no governmental initiatives that foster state-industry cooperation. This relationship ought to be a two-way system: companies need to create value around the cyber security business, and the State needs technology that will provide it with a reliable and state-of-the-art capacity for cyber security.

Citizen Participation
In 2009, Spain reached an Internet penetration rate of 71.8%,[7] which translates into over 30 million potential cyber users. Even subtracting preschoolers and senior citizens over the age of 75, more than 70% of the population has access to cyberspace services, so it can be surmised that practically the entire Spanish population accesses these services. The current Spanish legislation on cyber security places special emphasis on the need for education and public awareness in this area, as well as on the responsible use of cyberspace. All the same, these principles have scarcely been applied to date, due primarily to generalised lack of awareness of the legislation. INTECO and the CCN, within the scope of their competences, run interesting awareness and educational campaigns on ICT security, although the impact of these initiatives has left much to be desired. The Spanish cyber security industry has similarly launched private campaigns to raise awareness and educate certain sectors of society, including schoolchildren, retirees and unemployed workers.

International Cooperation
Spain is a member of international organisations that promote the protection of cyberspace. Examples include the country’s participation in NATO’s Cooperative Cyber Defence Centre of Excellence and in bodies such as ENISA (the European Network and Information Security Agency),[8] the APWG (Anti-Phishing Working Group),[9] and the Article 29 Data Protection Working Party.[10] Spain’s participation in and cooperation with international bodies not only enables the country to share experiences and knowledge of the risks and solutions; it also confirms that no national cyberspace can be managed efficiently if the other portions of the global cyberspace do not share the same level of risk. One of the unwritten principles of ICT security asserts that a chain always breaks at its weakest link. It is of little or no use to a nation to implement a highly advanced cyber security plan if all or some of the other countries that take part in cyberspace do not enjoy a similar level of protection.


Proposals for Spain’s Management of Cyber Security
Despite the efforts that have been made, Spain continues to lack a solid system that will enable it to direct and manage its cyber security effectively and efficiently. To define and develop such a system, the following principles would need to be applied:

  • The Spanish government must identify the security of its cyberspace as a strategic objective of its National Security, as the materialisation of a threat to its cyberspace could have very grave effects on the country’s social, economic and cultural development.
  • A National Cyber Security Strategy needs to be developed as the foundation for a specific regulatory framework that governs cyberspace and its security. The recent publication of Royal Decree 3/2010, which governs the National Security Scheme in the area of Electronic Administration, is a good starting point. All the same, it will be necessary to adjust and enforce the current applicable legislation.
  • The management of cyber security must be undertaken from a centralised perspective. As a corollary of the principle above, the State must create a body that aims to direct national cyber security, coordinating the public and private institutions involved.
  • The government needs to foster and reinforce international cooperation in cyber security. Multilateral and bilateral alliances for cyber security are crucial. In the case of Spain, there is an opportunity to assume a responsible leadership role with Latin American countries. Moreover, it is advisable to enter into agreements with countries that, although outside Spain’s immediate geopolitical sphere, are very important in controlling threats to its cyberspace.
  • The State administrations ought to promote a culture of cyber responsibility, based on awareness and ongoing cyber security training. To do so, the study plans of the primary, secondary and university school systems should include subjects pertaining to responsible cyberspace use in their syllabi.
  • The State needs to promote and invest in research, development and innovation (R & D & I) in the cyber security sector, to provide top-quality ICT solutions and qualified employment.

Thus, the government must assume a leadership role in cyber security to make the public aware of the need to protect the cyberspace upon which Spain’s basic services, critical infrastructures, economy and progress as a society depend. ICTs are not the problem; rather, they form part of the solution. Moreover, their protection and secure use are not just the responsibility of the central government; they are also the responsibility of the autonomous regional and local administrations, along with the private, business and domestic sectors. Everyone shares part of this joint responsibility, yet it is the central government that must assume the leadership and undertake the national management of cyber security. These responsibilities cannot be delegated, and they must translate into providing Spain with the momentum, the ideas and the direction that are needed.

Enrique Fojón Chamorro
Computer Systems Engineer

Ángel F. Sanz Villalba
Telecommunications Engineer

Geopolitics 2.0 (ARI)
2009-10-14

Theme: An entirely new form of virtual weaponry is transforming the dynamics of geopolitics.

Summary: The threat of cyber warfare is not new. The Internet was a product of the Cold War, built in the 1960s by US military scientists to protect American communications infrastructure against a Soviet nuclear strike. Nearly half a century later, those threats remain. Today, however, cyber weapons are not only in the hands of enemy and rogue states, but are also being exploited by isolated individuals ranging from bored teenagers to wild-eyed terrorists. Today the impact of Web 2.0 goes beyond political mobilisation inside countries and digital diplomacy between states. It now includes virtual weaponry that has brought an entirely new form of warfare which is transforming the dynamics of geopolitics. We call this new global reality Geopolitics 2.0, which is –broadly speaking– characterised by three significant shifts: (1) states to individuals; (2) real-world to virtual mobilisation and power; and (3) old media to new media. Forced to react to the impact of these three Geopolitics 2.0 shifts, states are alternately censoring or deploying Web platforms to achieve their goals and assert their influence –and in some cases, they are doing both–.

Analysis: In the aftermath of Iran’s massive street protests in June, no one was surprised when that country’s authoritarian regime blamed the unrest on Western intelligence agencies and big media organisations like the BBC and Voice of America. This time, however, the ruling mullahs’ litany of accusations included a new list of Western enemies: Twitter, Google, YouTube and Facebook.

Web 2.0 social networks had indeed played a powerful role during the uprising –not only in mobilising action inside Iran, but also in influencing global opinion–. The global media described the turbulent events in Iran as a ‘Twitter Revolution’ due to the widespread use of ‘tweets’ to organise spontaneous protests and disseminate information about what was happening in the country. A young Iranian protestor called Neda also became a tragic icon of the Iranian protests when, after she was shot during the bloody repression, video images of her bleeding to death in the streets of Tehran were posted on YouTube, provoking horror and outrage throughout the world.

While the Iranian regime was not toppled in the summer of 2009, the ‘Twitter Revolution’ marked a turning point in global politics. Whereas in the past states were acutely conscious of the power of traditional media like CNN and the BBC in shaping world opinion, the sudden explosion of Web 2.0 networks was imposing a new lexicon on the emerging geopolitical realities of digital diplomacy. The so-called ‘CNN Effect’ was now the ‘YouTube Effect’.

The powerful significance of this shift was not lost on Barack Obama as he moved into the White House in early 2009. In fact, President Obama owed his electoral victory in part to the mobilising power of Web 2.0 networks. As a candidate, Obama –constantly pictured thumbing his BlackBerry– had run a campaign that shrewdly leveraged not only Facebook, Twitter and YouTube, but also MySpace, Flickr, Digg, BlackPlanet, LinkedIn and many other social networks. Obama’s masterful use of Web 2.0 platforms marked a major e-ruption in electoral politics –in America and elsewhere–. Since the US presidential elections of 2008, political campaigning has been shifting from the old system of top-down political machines towards Web-based mobilisation that gives a powerful role to the bottom-up dynamics of online social networks.

Obama also learned first-hand during the 2008 campaign how the Web can be used as an offensive weapon in political warfare. Hackers had broken into his election team’s computer system and stolen sensitive information about campaign travel plans and Obama policy positions. After being sworn in as President, Obama offered this reflection on that experience: ‘It was a powerful reminder, in this information age, one of your greatest strengths –in our case, our ability to communicate to a wide range of supporters through the Internet– could also be one of your greatest vulnerabilities’.

Not surprisingly, President Obama quickly grasped the strategic importance –and potential threat– of Web-based networks for America’s role as a global superpower. The US and other Western powers possessed reliable intelligence that numerous states –in particular Russia, China and North Korea– were engaged in cyber warfare in various forms: espionage, black propaganda, Web vandalism, data theft, cyber attacks on critical infrastructure and denial-of-service attacks. Facing these threats, one of the first measures President Obama announced after taking office was a White House programme to bolster America’s defences against cyber attacks. Declaring that cyber warfare was ‘one of the most serious economic and national security challenges’ facing America, President Obama earmarked US$335 million for securing US Internet infrastructure and appointed a White House ‘cyber czar’. The Pentagon meanwhile was spending more than US$100 million to repair and strengthen its computer networks. In the US Congress, four Senators were introducing a new bill called the Cybersecurity Act.

At the same time, the Pentagon signed off on the creation of a US ‘Cyber Command’, headed by Lt.-Gen. Keith Alexander, that was expected to be operational by late 2010. General Alexander declared that, in his new role, his mission was to ‘defend vital networks and project power in cyberspace’. While the Cyber Command’s work remains top secret, it is believed that its cyber-security efforts include blocking thousands of foreign electronic attacks on US network systems that occur every year.

‘Our increasing dependency on cyberspace, alongside a growing array of cyber threats and vulnerabilities, adds a new element of risk to our national security’, noted Defense Secretary Robert Gates in an internal Pentagon memo. ‘To address this risk effectively and to secure freedom of action in cyberspace, the Department of Defense requires a command that possesses the required technical capability and remains focused on the integration of cyberspace operations’. Gates had good reason to be on high alert about a cyber threat. In 2008, Chinese military hackers were believed to have broken into an unclassified e-mail system in his own Pentagon office, creating embarrassment at the highest levels of the US government and triggering an immediate review of Pentagon IT procedures. And yet only a year later, Chinese and Russian cyber hackers were believed to have infiltrated the US electrical grid, leaving behind software programmes to disrupt the entire system.

The threat of cyber warfare is not new. In fact, the Internet itself –a product of the Cold War– was built in the 1960s by US military scientists to protect American communications infrastructure against a Soviet nuclear strike. Nearly half a century later, those threats remain. Today, however, cyber weapons are not only in the hands of enemy and rogue states, but are also being exploited by isolated individuals ranging from bored teenagers to wild-eyed terrorists. Today the impact of Web 2.0 goes beyond political mobilisation inside countries and digital diplomacy between states. It now includes virtual weaponry that has brought an entirely new form of warfare which is transforming the dynamics of geopolitics. We call this new global reality Geopolitics 2.0.

Geopolitics 2.0 is, broadly speaking, characterised by three significant shifts: (1) states to individuals; (2) real-world to virtual mobilisation and power; and (3) old media to new media.

States to Individuals
The first shift is from a state-centric approach in international relations towards a new dynamic involving a widely disparate number of non-state actors, even individuals, who can use Web platforms to exert influence, threaten states and inflict violence.

This shift has been occurring for some time, as states lose their monopoly as the exclusive actors on the global stage, but it is now accelerating due to the impact of Web 2.0 networks. Geopolitics 2.0 does not eliminate state-to-state conflict. Make no mistake: states are using Web 2.0 instruments against other states. Communist North Korea is widely suspected, for example, of being at the origin of cyber attacks against neighbouring South Korea and other countries. Another example occurred in April 2007, when the normally tranquil nation of Estonia came under a cyber attack –targeting government, banks and media– following the relocation of a Soviet war memorial in that country. The Estonian government blamed the Kremlin for the sudden and unexpected cyber attack. While the Kremlin denied any direct involvement, the incident prompted the NATO military alliance to step up its readiness for cyber warfare.

What is unique about Geopolitics 2.0, however, is that Web networks like Google and YouTube empower not only states and non-state organisations, but also isolated individuals who can, due to low entry barriers, act upon global events –both constructively and destructively–. The Web 2.0 revolution has allowed individuals with virtually no resources to act and exert influence on the same playing field as powerful states that control massive economic and military resources. Today a lone hacker or influential blogger can play cyber David against Goliath states. This was powerfully demonstrated in 2009 when the Russian government allegedly inflicted a denial-of-service attack on Twitter in order to neutralise a single blogger in Georgia. Twitter users world-wide faced a paralysing brown-out because the Kremlin had launched a cyber attack against one individual.

The Georgian blogger turned out to be a 34-year-old economics professor in Tbilisi who –known only as Cyxymu– had previously been unknown on the international stage. The identities of many individuals using Web 2.0 platforms in cyber war activities are, in like manner, either unknown or difficult to discover. This marks a major shift from previous models of geopolitics, where the main actors have been either states or other easily identifiable non-state actors, including terrorist groups like al-Qaeda. In Geopolitics 2.0, the identity of individual actors in the global system is frequently not apparent, and sometimes a baffling mystery. When hackers and cyberspies attack, governments may accuse China or Russia, but the attacks’ origins and perpetrators are never verified with total certainty. In short, it is possible to be a significant actor in the global system, and to inflict major damage on traditional states, without ever becoming known, let alone apprehended and punished.

Real-world to Virtual Mobilisation and Power
The second shift is from ‘real-world’ to ‘virtual’ forms of mobilisation, action and aggression.

The use of Twitter in Iran provided a powerful example of how Web 2.0 networks diffuse power to the periphery. In Iran, an authoritarian regime was so destabilised at first by the ‘Twitter Revolution’ that it was forced to physically repress its own population to prevent its own overthrow. In liberal democracies, Web 2.0 platforms like Facebook, YouTube and Twitter are now indispensable tools of electoral mobilisation and civic organisation. All governments are now acutely aware that their citizens can use these tools to voice their views, organise action and even challenge their authority.

In terms of coercive power, we are witnessing the same shift from the vertical centre to the horizontal periphery –or, expressed differently, from military ‘hard power’ to ‘virtual power’ forms of aggression in cyberspace–. Virtual power is different from ‘soft power’ in one important aspect: whereas the latter conveys values through culture, consumer behaviour and lifestyle (from Mickey Mouse to McDonald’s), virtual power is located exclusively in cyberspace. America is a soft-power superpower, but is more vulnerable in the sphere of virtual power. This explains why the US is scrambling to invest massively in programmes that strengthen its arsenal of cyber weaponry –both offensive and defensive–. Lt.-Gen. William Shelton, the US Air Force’s chief of warfighting integration, has said that in the past the Pentagon relied too heavily on industry efforts to respond to cyber threats. This industry-led approach, he added, failed to keep pace with the threat from cyberspace.

‘Threats in cyberspace move at the speed of light, and we are literally under attack every day as our networks are constantly probed and our adversaries seek to exploit vulnerabilities’, General Shelton told the House Armed Services Committee in May 2009. A US National Security Council report concluded meanwhile that the American government’s policies on waging cyber warfare have been ill-formed. While these statements may be motivated by a desire to obtain more substantial budget allocations, it cannot be doubted that they reveal how states –with their traditional institutional bias in favour of ‘hard power’– have been slow to understand the velocity and significance of the cyber war threat.

Today, the so-called ‘military-industrial complex’ may need to rely less on giant arms manufacturers and four-star generals and more on computer geeks with formidable skills on videogames like World of Warcraft. That assertion may seem flippant, but it is actually a fact. The US Army is now using Web 2.0 platforms like Facebook and YouTube as recruitment tools and, what’s more, is looking specifically for certain skill sets that include familiarity with virtual worlds and online videogames. The example is being set at the highest level of command: the US Joint Chiefs of Staff is on Twitter and has a Facebook ‘fan’ page. The British army, for its part, actively encourages its soldiers to use Twitter and Facebook. The CIA meanwhile has its own internal wiki, called Intellipedia, which is used as an information-sharing network that replaces old bureaucratic silos with a transparent collaboration system to gather intelligence on potential threats. As the new generation of so-called ‘millennials’ move into positions of responsibility in government and the military, they will bring with them powerful cyber skills that will be instrumentally useful in espionage and warfare.

Old Media to New Media
The third shift is from old media (like CNN, BBC and Al-Jazeera) to new media like Google, YouTube, Twitter and Facebook as effective platforms of global diplomacy, communication and opinion shaping.

In the past, governments have used mass media to wage information warfare. Prominent statesmen, including Presidents and Prime Ministers, have been willing to appear on CNN and the BBC to be interviewed about their positions and policies, and state and non-state actors have exploited the global media to stage events –and pull off stunts– to attract attention to their causes. Old media have been the privileged forum of global diplomacy. That era of old-media dominance is coming to an end. We are witnessing a definite shift in favour of new media, not only with the emergence of Web-based forms of journalism, but more importantly through the explosion of platforms like YouTube, Google, Facebook and Twitter as instruments of information and propaganda. Web 2.0 platforms are powerfully effective tools for mobilisation –or ‘digital activism’–.

The Gaza crisis in 2008 provides an excellent example of the shift towards new media. Shortly after Israel launched its military operation, a Jewish American citizen called Joel Leyden created a Facebook group called ‘I Support the Israel Defense Forces in Preventing Terror Attacks from Gaza’. At the same time, an Arab called Hamzeh Abu-Abed created a Facebook group called ‘Let’s Collect 500,000 Signatures to Support the Palestinians in Gaza’. Intrigued by the leveraging of Web 2.0 networks on both sides of the crisis, Time magazine published a story under the headline ‘Facebook users go to war over Gaza’. Most of these Facebook initiatives were the work of individuals. But states also joined the Web 2.0 propaganda campaign to get out their message. The Israeli Army, for example, launched its own YouTube video channel in an effort to win the global PR battle by uploading videos showing carefully pinpointed strikes against terrorist targets.

Forced to react to the impact of these three Geopolitics 2.0 shifts, states are alternately censoring or deploying Web platforms to achieve their goals and assert their influence –and in some cases, they are doing both–.

Authoritarian states routinely imprison so-called ‘cyber-dissidents’. In the Middle East, for example, Syria has jailed bloggers and blocks websites (including Facebook and YouTube) deemed a security threat. In Egypt, an Arab country that enjoys open diplomatic relations with the West, the government has punished online criticism of the state. Beyond the Middle East, the Chinese regime has imprisoned cyber-dissidents and shut down websites including YouTube, particularly over sensitive issues such as Tibet. Indonesia has banned both YouTube and MySpace. Other states that have banned websites or imprisoned cyber-dissidents include Iran, Saudi Arabia, Libya, Belarus, Burma, North Korea, Tunisia, Turkmenistan, Uzbekistan and Vietnam.

Liberal democracies, while undoubtedly developing their cyber war capabilities, are particularly focused on the potential danger of Web 2.0 forms of terrorism. It is believed that terrorists are using Web platforms like Google Earth to locate potential targets, especially in countries like Israel. This may explain why Google has pixelated sensitive zones in Israel and elsewhere in the world that could come under terrorist attack. The findings of a ‘Dark Web’ research project at the University of Arizona tracked Jihadist extremist groups using Web 2.0 media. The study, published in 2008, came across an alarming number of Jihadist blogs, including one posting news updates about so-called ‘occupied Islamic countries’. Jihadist bloggers were also active on YouTube, uploading videos featuring explosives, attacks, bombings and hostage-taking. On Second Life, meanwhile, a group calling itself ‘Terrorist of SL’ attracted 228 members and another group called ‘Liberation Front’ counted 65 followers. The ‘Dark Web’ study concluded: ‘Many of the Web 2.0 content providers may only act as Jihadist sympathisers or information dissemination agents for radical extremist materials. Most of them may not be the original content creators, i.e., the groups who performed the violent acts. However, their role and importance as online information dissemination agents or resource hubs cannot be underestimated’.

Some contend that Web 2.0 social networks can be anti-democratic even in liberal democracies. They warn against an ever-present danger that states will succumb to ‘Big Brother’ temptations and use Web 2.0 networks to spy on their own citizens. The CIA admits openly that it uses Facebook for recruitment purposes, but it would be naïve to believe that states and their intelligence agencies around the world are not using Web 2.0 networks to collect information. Facebook’s privacy policy, for example, states that it does not share personal information with third-party companies –but adds that, in order to comply with the law, it may give personal information to ‘government agencies’–.

Conclusion: What has radically changed with Geopolitics 2.0 is that old-fashioned state surveillance is now a two-way mirror. Individuals operating in cyberspace can now spy on, and even threaten, their own governments and other states. The shift from states to individuals, from hard to virtual power, and from old to new media has changed the dynamics of global politics forever.

Matthew Fraser
Senior Fellow at INSEAD and Adjunct Professor at the American University of Paris and the Institut d’Etudes Politiques de Paris


References

Agence France-Presse (2009), ‘Obama Launches “YouTube Diplomacy”’, 20/III/2009.

Axe, David (2008), ‘Internet Connects Future Army Leaders with Virtual Front Porch’, World Politics Review, 6/V/2008.

Business Week (2009), ‘Iran’s Twitter Revolution? Maybe Not Yet’, 17/VI/2009.

Christian Science Monitor (2009), ‘Obama’s Strategy to Counter Cyber Attacks’, 29/V/2009.

IDG News (2009), ‘Georgian Cyber Attacks Linked to Russian Organized Crime’, 17/VIII/2009.

Morozov, Evgeny (2009), ‘Foreign Policy: Iran’s Terrifying Facebook Police’, Foreign Policy/NPR, 13/VII/2009.

Naím, Moisés (2007), ‘The YouTube Effect’, Foreign Policy, January/February.

Time (2009), ‘Tehran’s Trials: Blaming the West, Google and Twitter’, 8/VIII/2009.

Wall Street Journal (2009), ‘US Cyber Infrastructure Vulnerable to Attacks’, 6/V/2009.