In January 2020, Microsoft announced the creation of a permanent delegation to the United Nations in New York City. Its goal is to advance the UN Sustainable Development Goals on key technology, humanitarian, environmental, development, and security issues. The same company also appointed, for the first time, a career diplomat (a Dane) as its new vice president of European government affairs in Brussels. Meanwhile, IBM has recently set up a technology policy lab whose members participate in some of the most important policy decision-making meetings.
One of the main pillars of technology companies’ success has long been the existence of strong pressure groups addressing political and regulatory decision-makers. However, the nature of those goals and the means to pursue them have changed in recent years. Whereas in the past these companies were external interest groups aiming to maximize their own corporate interests, they are now becoming actual policy-makers. They are no longer the group knocking at the door; instead, they are increasingly recognized as relevant voices inside the decision-making rooms. The picture of them as the “Others” has blurred, and they are strengthening their position as part of the political “Us”.
At this year’s Munich Security Conference, representatives of big tech companies participated as speakers in sessions on transatlantic relations and conflict-resolution measures. At the same conference, Mark Zuckerberg, Facebook’s CEO, raised the need for an improved Internet governance system, one founded on the principles of trust, oversight, and accountability. According to Zuckerberg, global technology platforms must answer for their mistakes and be held accountable for their effects on potentially affected people and groups. To do so, it is important to establish standards governing the implementation of emerging technologies (especially Artificial Intelligence, quantum computing, and new domains of cybersecurity) through a responsible and responsive use of data that minimizes unwanted effects and maximizes positive impacts. However, how feasible is this proposal?
Dilemmas and limitations
There are several dilemmas and regulatory, sociological, and security limitations that remain unresolved. First, there are no binding international norms on the matter. But is the lack of universal rules a problem? The issue is not simply whether norms exist; beyond that, the question is whether a global regulatory package would be sufficient, or even necessary. With regard to digital and technology policy, the first step is State-driven regulation, not regulation imposed from the global level. National experiences, together with heuristic (trial-and-error) learning from the applications of these technologies, will suggest the best paths towards new international norms, if actors end up agreeing on any at all. If we want to be preventive and get ahead of future harms, companies and States, as well as States among themselves, should work jointly from now on by undertaking tests and simulations. One proposal is the deployment of Artificial Intelligence systems in controlled environments so that unknown uses or better response protocols can be discovered. This would also allow both companies and States to properly address legal problems related to the extraterritoriality principle, or the definition of “collateral damage” when using Artificial Intelligence in defence or intelligence missions.
Second, the role of values and sociological factors in digital policy regulation is a strategic asset that technology companies need (and wish) to take into account. In the United States, values are “contributing factors” to national strategies. By contrast, the recently released European Strategy for Data shows that, in the European Union, values not only influence but are the very foundation of its strategic guidelines. Indeed, the European single market aspires to the reuse of data and their interoperability within and among key sectors. This preference shows that the EU places data protection before competition. Hence, technology companies must shift their strategies in this direction. The United States stands apart from this policy approach, and there is much to be done if companies wish to match governments on accountability. This is especially due to the difficulty of modifying Section 230 of the Communications Decency Act, under which “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. As a result, except for cases involving child sex trafficking, hate speech, or terrorism-related content, companies enjoy a high level of immunity within the United States. It would thus be difficult to hold them accountable for harmful or unexpected effects of data use or of emerging technologies.
Third, it is still up to States to carry out the paramount task of protecting strategic interests, including through digital means. Concretely: critical infrastructures, the competitiveness of small and medium-sized technology companies, and the integration of advanced technologies into the Security, Defence, and Intelligence architecture at the state, regional, and local levels. Through their product innovation, technology companies are key actors in refining each State’s capacity to promote, respect, protect, and guarantee fundamental rights. However, the virtues of these technologies also carry high risks that companies must reduce and mitigate, and it is the State that remains in charge of crisis management in the face of significant digital blockades (rather than the traditional physical blockades at sea, in the air, and in other domains) that may affect people’s liberties.
In conclusion, Big Tech companies are playing an increasingly strategic, decision-level, and geopolitical role. Their tactical and advocacy efforts are still present, but their willingness to have an actual impact matters more than ever. And this position is clearly understood: the world incorporates their new technologies, and these companies aspire to be at the forefront of what has not yet been done and what they want to offer. Hence, three main questions remain open: where the limit of their participation in the policy-making process lies, what level and depth of public-private partnership should be attained, and to what extent these actors, which are everywhere and nowhere at once, can be held accountable.
Upon differential risks, differential policy options