Risk without borders: the malicious use of AI and the EU AI Act’s global reach

Headquarters of the European Parliament in Brussels. Photo: Steven Lek (Wikimedia Commons / CC BY-SA 4.0).

Key messages[1]

  • The EU’s Artificial Intelligence Act (AI Act) is one of the first binding AI regulations worldwide. EU policymakers intended it to serve as a blueprint for global AI governance, relying on the so-called Brussels Effect.
  • In a fast-moving and transformative domain such as AI, regulatory quality is a prerequisite for influence as a model. In this case, quality includes providing adequate coverage of the most critical risks associated with AI usage, deployment and adoption.
  • Amongst those risks is malicious use, which arises from the intentional use of AI capabilities to cause harm. This analysis stress-tests the AI Act’s provisions against malicious use risks.
  • The results reveal that the AI Act’s coverage of malicious use risks is uneven: while some risks are addressed head on, others are addressed only indirectly through other EU or national regulations, or through international initiatives. By leaving significant gaps, the AI Act undermines its own value as a global model.
  • The reliance on domestic and sectoral regulation to address these gaps is coherent from an internal perspective, in that it helps avoid overlaps and overregulation. However, it assumes that comparable principles are widely shared or will be replicated internationally, an assumption that may not hold.
  • Thus, EU policymakers should use the AI Act’s periodic revisions to strengthen and complete its regulatory coverage. By contrast, recent initiatives such as the Digital Omnibus signal a narrowing of scope, risking reputational damage. In parallel, the EU should engage internationally with a renewed narrative that acknowledges the AI Act’s limited exportability in its current form.

Analysis

AI safety efforts in the face of competitive pressures

In today’s geopolitical landscape, the race for artificial intelligence (AI) dominance among states and corporations focuses on technological leadership and capabilities, not on safety and risk management. This is demonstrated by the policies, investments and scientific breakthroughs of the key geopolitical players.

The US released America’s AI Action Plan in summer 2025, aiming to position American AI as the global standard and to ensure that allies rely on US technology. This ambition is pursued through a largely hands-off regulatory approach that has included revoking Joe Biden’s Executive Order on safe and responsible AI and efforts to block state-level AI regulation. This strategy has benefited the US private sector, which hosts many of the world’s leading AI firms and led global private AI investment in 2024 with nearly US$110 billion, more than five times Europe’s total.

China is likewise pursuing global AI leadership by 2030, targeting advances in AI theories, technologies and applications. It is doing so through coordinated industrial policy across the AI value chain, including energy, talent, data, algorithms, hardware and applications, with AI positioned as a tool to address economic, social and security challenges. Goldman Sachs estimates that Chinese AI providers will invest US$70 billion in data centres in 2026, alongside extensive state support for domestic semiconductor capacity and pooled computing infrastructure.

The EU has also acknowledged this competitive landscape. In April 2025 it launched the AI Continent Action Plan, aiming to mobilise computing infrastructure, data, talent, algorithms and regulation. The EU has announced 19 AI Factories, 13 AI Factory Antennas and, in cooperation with the European Investment Bank, five AI Gigafactories. Further initiatives on AI are expected to be discussed or passed in early 2026, with the proposed Cloud and AI Development Act aiming to stimulate EU cloud capacity, and the Digital Omnibus simplifying –and reducing– some of the elements of the EU AI Act as part of the EU’s recent deregulation drive.

These developments have captured the attention of the public and policymakers worldwide in recent months. They reflect an environment of fierce competition and rapid advancement in AI. Such a setting often overshadows, and even casts aside, efforts and concerns over AI safety and risk management. This is exemplified by the scant attention that many safety-oriented initiatives have received, despite taking place almost in parallel with the AI-promotion efforts mentioned above. In the EU, international AI safety engagement is spearheaded by the AI Office and includes the International Network of AI Safety Institutes, the work on AI safety, mutual recognition and standards within the EU’s Digital Partnerships with Canada, the Republic of Korea and Singapore, and the Trade and Technology Council with India.

The dynamics of extreme competitive pressure give AI regulatory frameworks a more critical role than ever in embedding guardrails that can prevent the materialisation of catastrophic risks associated with AI capabilities and speedy deployment. While decision-makers are turning their attention towards rivalry and, particularly in the EU’s case, competitiveness, the AI academic and civil society community remains focused on trust, safety and risks.

In this context, the EU’s Artificial Intelligence Act (AI Act) stands out as one of the first binding AI regulations worldwide. While other governments and international bodies have issued documents on AI safety, these remain broad, non-binding principles with weak mechanisms for enforcement or monitoring. By contrast, the AI Act is highly specific: it regulates concrete use cases according to their anticipated risk, rather than targeting the technology itself.

Beyond its legal innovation, the AI Act carries global significance. EU policymakers intended it to serve as a blueprint for AI governance. They assumed that the EU’s regulatory first-mover advantage and market size –the so-called Brussels Effect– would act as pull factors strong enough for other jurisdictions to base their AI regulations on the EU AI Act. Even though influence was not the primary motivator, it was indeed part of the push to approve the legislation, with the Council of the EU hailing the AI Act as possibly setting ‘a global standard for AI regulation’.

Reactions from the international community to the AI Act have been mixed. In the US, opinion polls suggest public support for the EU AI Act and for similar legislation in the US itself, a stance that contrasts sharply with that of the Trump Administration. China, for its part, has developed its own AI safety framework, has expressed support for ‘comprehensive legal and ethical frameworks’ and has put forward proposals for global AI governance. At the same time, other major digital actors in the Global South are shaping their positions on the AI Act and on AI governance more broadly. India, with a strong digital and AI ecosystem, looks to the AI Act as a source of inspiration, particularly with regard to its risk-based approach, human oversight requirements, data protection standards and the streamlining of AI governance through a centralised authority. This perspective is not universally shared in the Global South. In South Africa, for example, some civil society organisations have expressed concern about uncritically following European approaches to digital regulation.

Yet for the AI Act to succeed as a model of AI risk management, regulatory quality is a precondition. In the context of AI, regulatory quality entails the capacity to adequately address, over time, the most critical risks posed by AI systems, that is, those associated with the most severe potential harms and highest negative impact. In other words, this analysis argues that, for foreign policymakers sensitive to AI safety considerations to view the AI Act as a reference model, the relevant risks should be addressed and acknowledged in the regulation. Otherwise, the AI Act will not achieve any global influence and will be superseded by alternative frameworks that better address known AI risks.

Given this context, this analysis focuses on risks arising from the malicious use of AI: intentional practices that use AI to cause harm. These risks are a subset of the most severe and foreseeable pathways to what the AI safety community labels ‘catastrophic’ outcomes, from large-scale disinformation and fraud to cyber offence and bioweapons development. The rationale that guides this analysis is that, if the EU AI Act is to fulfil its aspiration of becoming a global blueprint for AI risk management, it must adequately account for malicious use risks. Where it fails to do so in its current form, regulatory adjustment will be essential.

Therefore, this analysis is structured in four parts: first, it lays out the framework of malicious use that will be used throughout the paper; secondly, it assesses whether the AI Act recognises malicious use sub-risks in its provisions and if gaps remain; thirdly, it explores the reasons behind the AI Act’s design and limitations; and, finally, the paper concludes by linking the problems posed by the findings to the AI Act’s intended Brussels Effect.

Discrepancies in AI risk frameworks: the AI Act versus malicious use risks

The AI Act imposes obligations (mainly) on the providers and deployers of AI systems based on the intensity of the risks of their potential use cases. According to the legal text, AI systems fall into the following risk categories: those whose use poses unacceptable risks and is thus prohibited; those with high-risk use cases, which face transparency, cybersecurity and risk management obligations; and those whose risk arises from the lack of transparency of AI systems and their deployment, which are subject to disclosure obligations. This approach, however, varies for general-purpose AI (GPAI) models, which are singled out as a specific type of AI technology with distinct obligations. In their case, the text imposes more extensive transparency obligations on providers of GPAI models, such as providing information to downstream developers of AI systems and disclosing the data used for training. The requirements are broader still for GPAI models with systemic risks, which face stronger risk management and cybersecurity obligations.

Nevertheless, the AI Act’s risk conceptualisation is only one of the many developed and applied by industry, governments, international organisations, academia and civil society. While the AI Act’s ‘risk-intensity’ focus may be suitable for swift policymaking and enforcement, it is less well suited to evaluating the coverage of AI’s most critical risks. In this context, and given the extraordinary opportunities that AI offers malicious actors to cause harm, this analysis turns to malicious use risks. These are intentional practices that employ AI capabilities to compromise the security of individuals, groups or society. The defining element is the ‘intent to cause harm’, which differentiates malicious use from accidental misuse or other unintended consequences of AI. Malicious use is also distinct from what may be termed malicious abuse, which exploits vulnerabilities within AI systems themselves rather than weaponising the systems’ capabilities.

Malicious use risks can be further subdivided into sub-risks. In the spirit of comprehensiveness, this analysis examines the AI Act’s coverage of nine identified sub-risks. The categorisation aims to be exhaustive and is based on the malicious use risks cited and highlighted by international AI safety organisations, policy reports, academia and reported incidents. The resulting nine sub-risks are:

  • Bioweapons and chemical threats: the use of AI to design novel pathogens or toxins (bioterrorism), to conduct a biological attack or to provide instructions for reproducing existing biological and chemical weapons. It includes dual-use risks where AI drug discovery or medical AIs can be repurposed for malicious ends.
  • Intentional rogue AIs: the creation and unleashing of autonomous systems with destructive goals (eg, ChaosGPT). These systems may be deployed and/or pursue harmful objectives, potentially adapting without human oversight.
  • Disinformation and persuasive AIs: AI used to generate false or misleading content at scale or for personalised persuasion (including personalised disinformation) by exploiting cognitive vulnerabilities. These uses undermine public trust and democracy and may include covert foreign influence operations.
  • Fake and abusive content: generative AI used to create content that harms individuals. This includes non-consensual intimate imagery (NCII), AI-generated child sexual-abuse materials (CSAM), voice impersonation fraud, blackmail, extortion, reputational damage and psychological abuse.
  • Fraud, scams and social engineering: AI systems (eg, WormGPT or FraudGPT) used to produce convincing phishing, impersonations and scam chatbots that enhance the effective deception of victims.
  • Cyber offence: AI used to support and automate malware generation, vulnerability discovery and multilingual phishing, creating offence-defence asymmetries and lowering entry barriers for attackers.
  • Autonomous weapons and military use: the deployment of AI-enabled drones and weapon systems that can target and attack without human oversight, raising risks of escalation.
  • Concentration of power: governments or corporations may misuse AI to entrench authority, suppress dissent and monopolise AI capabilities.
  • State surveillance and oppression: AI enables governmental mass surveillance, predictive policing, censorship and the repression of minorities.

As argued, comprehensive coverage of malicious use risks is required for any regulation to exert global regulatory influence. Thus, the AI Act’s provisions will be stress-tested against this framework of malicious use risks.

The AI Act’s coverage of malicious use risks is uneven

As Figure 1 shows, the AI Act’s coverage of malicious use sub-risks is highly uneven. Four sub-risks are almost unregulated, four are only partially or indirectly addressed and just one is subject to extensive prohibitions and safeguards. This unevenness not only weakens the AI Act’s internal coherence but also undermines its potential as a global regulatory model.

No direct coverage or incidental overlap

Four sub-risks receive no direct coverage: bioweapons and chemical threats; intentional rogue AIs; autonomous weapons; and the concentration of power. They are only incidentally covered through general GPAI systemic risk provisions or remain entirely outside the Act’s scope.

For bioweapons and chemical threats, only generic provisions on risk management and incident reporting for GPAI models with systemic risks apply. International conventions prohibiting biological and chemical weapons remain the main safeguard.

Intentional rogue AIs face a similar regulatory vacuum. Even though GPAI models, especially open-source models, can be used to develop autonomous AI agents with harmful objectives, mitigation is limited to the risk management and incident reporting obligations for GPAI models with systemic risks, with an additional layer of protection stemming from the human oversight obligations for high-risk AI systems. These could limit the autonomous nature of intentional rogue AIs and, consequently, their risk.

Meanwhile, autonomous weapons and military use are explicitly excluded from the AI Act’s scope because defence and national security policy are Member State competences. Only dual-use AI systems (those with both military and civilian use cases) fall under the AI Act, leaving significant gaps in one of the most catastrophic risk areas.

Finally, major sources of concentration of power risks are largely neglected: while the use of AI for state power is limited via restrictions on predictive policing, corporate concentration of power is entirely unaddressed. This is particularly relevant in a context where the Digital Markets Act (DMA) does not address some of the concentration dynamics of AI technologies. These include the massive data advantages held by very large digital companies and the infrastructure advantage of cloud providers, which concentrate vast amounts of computing power, both critical for the development and use of large AI models.

Partial or indirect coverage

Four other malicious use sub-risks are only partially addressed: disinformation and persuasive AIs; fake and abusive content; fraud, scams and social engineering; and cyber offence. In all cases obligations exist (transparency, limited prohibitions and cybersecurity requirements), but they fail to address all aspects and sources of the risk. Often, complementary legislation is needed and relied on to round out risk coverage.

For disinformation and persuasive AIs, the AI Act prohibits manipulative and deceptive techniques that distort behaviour, and requires the disclosure of deepfakes, synthetic content and human-to-machine interactions. Despite such obligations, it does not prevent personalised persuasion, for example via AI chatbots, leaving an important gap only partially filled by the Digital Services Act (DSA).

Fake and abusive content is touched upon indirectly through prohibitions on exploiting people’s vulnerabilities to influence their behaviour (eg, the use of AI systems for blackmail and extortion). However, key and serious forms of fake and abusive content, such as non-consensual intimate imagery (NCII) and AI-generated child sexual-abuse materials (CSAM) –whose targets are mostly women and children, respectively– are absent. In both cases, the labelling obligations for deepfakes and synthetic content provide only weak risk mitigation and victim protection, especially given that such labelling can easily be circumvented by technical means.

Fraud, scams and social engineering are not explicitly regulated either. Transparency and disclosure requirements may reduce the effectiveness of impersonation or phishing, but they do not prevent these practices outright.

Similarly, cyber offence is addressed mainly through previous EU legislation criminalising cyberattacks, regardless of the means used. The AI Act’s provisions focus more on malicious ‘abuse’ –such as protecting high-risk systems against adversarial attacks– than on malicious use, meaning that AI-enabled cyberattacks are left largely outside its scope.

Relatively extensive coverage

By contrast, only one sub-risk –state surveillance and oppression– is extensively covered. The AI Act bans social scoring, predictive policing and certain types of biometric identification and biometric categorisation, among other practices. This reflects the political salience of the issue in EU debates, which might be due to the novelty of this risk relative to others, as well as the precedent of authoritarian regimes, notably China, using the technology. Advances in, and the wide availability of, connectivity, CCTV, data and better-performing AI models have turned this risk into a key concern for policymakers and society at large.

In sum, the AI Act provides imbalanced coverage of malicious use risks. As discussed in the following section, this might make sense from an intra-European perspective; nevertheless, it has negative implications for the Act’s potential to become a global regulatory model.

The AI Act’s coverage of malicious use risks is limited by design

The reason for the AI Act’s imbalanced risk coverage lies in its design and is partly intentional. It has to do with avoiding regulatory redundancy, since the AI Act was conceived within a larger corpus of legal acts and aims to address AI-related risks that are not covered by previous legislation. For example, the development and use of bioweapons, and the conduct of scams and cyberattacks, were already criminalised before the emergence of AI. Thus, since AI-enabled crimes should not be treated differently from their traditional counterparts, and protections should not be redundant or excessively onerous with respect to existing legislation, there was no point in covering them extensively in the AI Act.

The consequence is that, as mentioned in the previous section and shown in the last column of Figure 1, many of the malicious use sub-risks are complemented by other EU laws. For instance, persuasive AIs are partly covered by the DSA; some types of fake and abusive content by the Non-Consensual Intimate Image directive; cyber offence protections by the Cyber Resilience Act; and corporate power by the DMA. This makes sense from a domestic perspective, as it prevents overregulation and simplifies compliance. What regulators therefore strove to address is a singular aspect of AI: the technology increases both the accessibility and the impact of malicious use activities. Consequently, legislators sought to embed safeguards that increase friction and diminish the incentives to pursue illegal activities via AI.

Besides the conscious design choices that make the AI Act patchy, there are additional limitations to its scope. These lie not in the coverage of each specific sub-risk but in features that cut across the nature of malicious use risks: the treatment of personal use and the definition of ‘reasonably foreseeable misuse’. The AI Act places personal use of AI in a grey zone, since personal, non-professional uses of AI systems are not covered. The text relies on the obligations of developers and providers to limit the risk of malicious use downstream, so malicious individuals fall through the gaps unless caught later by criminal law. Once again, this choice is understandable through the lens of not duplicating (re-criminalising) activities that are already illegal under EU and Member State law; moreover, monitoring individuals’ personal use for compliance with the AI Act would simply be impossible without a large-scale surveillance effort. However, the trade-off leaves an important and especially problematic gap in malicious use coverage, because AI amplifies both the incentives and the ease of malicious activity, making the prospect of criminal prosecution a weak deterrent.

Additionally, the AI Act establishes risk management obligations based on the intended use and the ‘reasonably foreseeable misuse’ of AI systems, both for high-risk AI systems and for GPAI models with systemic risk. The issue is that ‘reasonably foreseeable’ misuse may bear many different meanings and interpretations, weakening enforcement consistency and regulatory certainty. Some companies will certainly cling to such vagueness, as OpenAI has demonstrated in court by arguing that the use of ChatGPT for self-harm is not its responsibility, as it is part of the personal ‘misuse, unauthorised use, unintended use, unforeseeable use and/or improper use’ of its product.

Yet from an influence and regulatory expansion perspective, these design choices and limitations are a problem: by leaving many malicious use risks outside the scope of the AI Act, its value as a model decreases. EU regulators may have assumed that the obligations and principles left out are shared by other countries’ domestic laws, or that the regulatory influence of those domestic laws would spread as well. However, third countries will most likely not want to import the EU’s entire regulatory ecosystem. While these choices are appropriate for the protection of EU users and companies, they run counter to promoting influence.

Lastly, it is important to note that, in some cases, the safeguards proposed by the AI Act are very weak relative to the threat. The case of personalised persuasion is telling, since it sits awkwardly between the AI Act and the DSA without being properly regulated. For instance, persuasive chatbots require labelling under the AI Act, signalling the artificial nature of human-to-machine interactions. Nevertheless, cases of suicide and even murder in recent months have demonstrated that the persuasive potential of AI is not effectively mitigated by explicitly labelling AI chatbots as artificial, nor by communicating the terms of use of AI tools to users.

All in all, the outlined limitations create enforcement gaps that may allow malicious use to flourish at the margins of the Act, undermining both its protective function within the EU and its credibility as a global regulatory model.

Conclusions

The EU AI Act has received ample attention in the last few years. Its novelty as the first comprehensive regulatory attempt regarding AI and the assumed incentives to comply made policymakers confident that the EU’s approach would become the global norm.

Much of the debate on the AI Act after its adoption and entry into force has focused on the topics of overregulation, innovation or implementation hurdles. However, little attention has been paid to the adequacy of the Act’s risk coverage and its ability to protect society from malicious uses of AI. This paper has sought to fill that gap.

The analysis reveals that coverage of malicious use risks is uneven. While state surveillance and oppression risks are extensively accounted for, other critical risks –such as bioweapons, rogue AIs or the corporate concentration of power– remain largely unaddressed. In some cases, such as autonomous weapons, the international community is trying to bridge the gap; in others, the expectation is that other sectoral and horizontal regulations will mitigate malicious use.

Such an imbalance in malicious use risk coverage has a negative impact on the AI Act’s global influence. Its reliance on other domestic and EU regulations, its limitation to preventing ‘reasonably foreseeable misuse’ and its exclusion of personal, non-professional uses of AI systems pose further challenges to the Brussels Effect in AI. These design choices weaken the AI Act’s prospects as a regulatory model by limiting its exportability. The problem stems from disregarding how AI transforms the cost-benefit analysis of malicious use: it lowers barriers to access, amplifies incentives and reduces deterrents. Therefore, reliance on domestic and sectoral laws, the exclusion of personal use and ambiguity of interpretation leave ample space for malicious users to inflict harm.

The insights derived from this analysis offer EU policymakers seeking global imitation of the AI Act three complementary policy options. The first is the re-examination of the AI Act through the lens of other risk conceptualisations. This analysis has provided only one example of how adjusting the framework allows the identification of gaps and loopholes. Putting the text under the scrutiny of other risks (eg, proxy gaming by unintentional rogue AIs or selfish AI behaviour under corporate AI race dynamics) could enrich our understanding of AI risk coverage and provide policymakers with options for the AI Act’s improvement.

The second policy option is the amendment of the AI Act, in light of these findings, through the periodic revisions envisaged in Article 112. In particular, the list of high-risk AI systems in Annex III could be modified through Delegated Acts, which require less time and fewer resources and avoid politically costly processes. The goal would be to cover unattended gaps, such as protection against persuasive AIs.

Lastly, the third policy option has to do with international dialogue. EU policymakers must honestly acknowledge to partners that the AI Act cannot be exported in its current form. The EU’s international AI engagement and discourse should therefore refrain from portraying the AI Act as a plug-and-play global blueprint and instead present it as the foundation for a conversation. Its risk-based approach is compelling, but it rests on risk perceptions and risk tolerance, which may differ across cultures and societies. It is an incomplete framework that relies on prior legislation, which in turn builds on subjective understandings of power concentration and freedom of speech, among other issues. Still, the EU can turn this into an opportunity for self-improvement. The AI field is constantly evolving. Hence, the EU must adopt an open, learning approach in its dialogue with international actors, use its own AI regulation as the initial basis for discussions and identify room for improvement based on other approaches.

As mentioned above, this analysis aims to provide a complementary perspective on the debate around the AI Act and the Brussels Effect, from an angle that has not garnered extensive attention. Assessing whether the regulation can serve as a model for AI risk management, particularly in addressing catastrophic risks that societies worldwide seek to mitigate, offers a specific way to evaluate the AI Act’s attractiveness. This approach has its own caveats and limitations, which need to be openly acknowledged, the most important being that third countries will look at other cues of success that fall outside the scope of this article. Of those cues, two stand out.

The first is whether the model actually works in the prevention of risks. For that, the model needs to be tested, which can only happen once all the AI Act’s provisions come into force. Therefore, only time will tell.

But there are already some discouraging signs in this respect. The Digital Omnibus proposed by the European Commission in November 2025 includes a series of measures that, on the one hand, delay the implementation of safeguards against malicious use risks and, on the other, reduce coverage. For example, the Omnibus delays the entry into force of some obligations applicable to high-risk AI systems by up to a year or a year and a half and introduces a transitory period for GPAI watermarking. In terms of reduced coverage, it exempts providers of high-risk AI systems conducting narrow tasks from registering in the EU high-risk database and allows for a broadening of the types of data used in model development and training, potentially increasing the incidence and effectiveness of disinformation and persuasive AI systems, fake and abusive content or social engineering uses.

These changes were introduced a little under two years after the European Commission, the Council and the Parliament reached an agreement in trilogues, and they have created great legal uncertainty amongst enterprises and outrage amongst human and digital rights organisations. The amendments need to be negotiated and approved by the Council and Parliament before August 2026, when the obligations would start to apply under the current legal framework. Nevertheless, neither co-legislator has shown signs of haste, while industry does not know which obligations it should prepare for. This situation delays the AI Act’s litmus test as a model and undermines the perception of its effectiveness.

Another aspect of success for foreign policymakers is whether the AI Act favours or hinders the development of an AI industry in Europe. Unfortunately, the timing for an objective evaluation in this respect is very inconvenient. The Draghi report unleashed an unintended wave of critique in European political discourse of the impact of European regulation on the innovation capacity and competitiveness of EU industries. The above-mentioned Omnibus is testament to this simplification frenzy. Therefore, assessments of the AI Act’s impact on AI innovation and development in the EU are bound to be biased by the current political climate. In any case, this analysis does not evaluate whether the AI Act will promote or hinder a vibrant AI ecosystem in Europe, even though third countries will surely pay attention to such factors when considering regulatory imitation.

In sum, the AI Act is an important step for AI governance, but its reach and global influence will be constrained not only by implementation hurdles but also by its design. EU policymakers, providers and the AI risk community should recognise its important limitations in risk coverage. Malicious use risks are only one dimension of the broader set of catastrophic risks posed by AI. Europeans must not let their guard down and pretend that replicating the AI Act abroad will suffice to avert all damage. Should they do so, Europe’s international efforts for AI governance will be misplaced.


[1] The author thanks Judith Arnal, Darío García de Viedma, Amin Hass, Raquel Jorge and Miguel Otero-Iglesias for their comments on this analysis, which have enriched and improved the text.