Cybersecurity Futures 2025




Project Preface
Cybersecurity Futures 2025 is a collaboration between the University of California, Berkeley Center for Long-Term Cybersecurity (CLTC) and CNA’s Institute for Public Research, conducted in partnership with the World Economic Forum’s Global Future Council on Cybersecurity (2016-2018) and the Forum’s Centre for Cybersecurity.
This report includes a short description of the process and evolution of the project, along with summary insights from workshops conducted around the world in 2018. It also includes the four scenario narratives that were the foundation for the workshops.
The project website features a set of short videos that narrate the key elements of the four scenarios, along with an introductory video that situates these stories and explains how to use them. The site also includes a tool that invites personal interaction with the scenarios and provides a heuristic to inform strategic decision-making.
We hope that readers of this report and the accompanying multimedia content will uncover insights to help drive their organizations to be more anticipatory, more proactive, and ultimately more successful in addressing a wide range of emerging cybersecurity challenges. We welcome your feedback as you interact with these ideas.
We would like to thank the World Economic Forum and its Centre for Cybersecurity for their collaboration throughout this process. We would also like to thank the organizations that helped support this work, including HP, Inc.; Symantec; Qualcomm; and CyberCube, as well as the entities that hosted our workshops. Most importantly, we would like to thank the many colleagues who contributed ideas and critiques to the process of creating the scenarios, as well as the community of experts from industry, government, and civil society who participated on the Global Future Council on Cybersecurity, attended and contributed to our workshops, and helped derive and synthesize the insights from this process.
Steven Weber Professor, UC Berkeley School of Information Director, Center for Long-Term Cybersecurity
David Kaufman Vice President and Director Dawn Thomas Associate Director CNA Institute for Public Research
Alan Cohn Partner, Steptoe & Johnson LLP Adjunct Professor, Georgetown University Law Center Co-Chair, World Economic Forum Global Future Council on Cybersecurity (2016-2018)

Contents

Project Description and Preliminary Insights
Scenario Summaries
Scenario 1—Quantum Leap
Scenario 2—The New Wiggle Room
Scenario 3—Barlow’s Revenge
Scenario 4—Trust Us

Project Description and Summary Insights
One observation consistently made about the digital era is that when people and technology mix, the results are surprisingly hard to anticipate. This kind of uncertainty puts cybersecurity professionals at a structural disadvantage because it favors attackers over defenders and protectors. Looking to the future, at the intersection of people and digital technology, there is a gulf between the operational security on the agenda today and the range of cybersecurity issues and challenges that will emerge in a decision-relevant future time frame.
To address this gap, we developed a set of future-looking cybersecurity scenarios that are intended to spur a much-needed discussion about the cybersecurity challenges that government, industry, and civil society will face in the future, beyond the immediate horizon.
Cybersecurity Futures 2025 rests on the foundational idea that if we can anticipate how cybersecurity challenges will evolve and understand how governments, firms, and societies in different parts of the world think about those challenges, we can better position decision-makers to reduce detrimental frictions and seize opportunities for cooperation. By tapping into a broadly felt sense that current policy and strategy frameworks in cybersecurity are inadequate and becoming more so, Cybersecurity Futures 2025 seeks to provide a roadmap for new high-level concepts and strategies that drive operational and tactical adaptation in the future.
In the first phase of the project, we developed a set of scenarios that portray a possibility space of “cybersecurity futures” looking forward to roughly 2025. These four scenarios were designed to stress trade-offs in goals and values that will appear in the near future. The scenarios focus on what is relevant and plausible, while also challenging existing beliefs. They were specifically designed to elicit meaningfully different points of view from different parts of the world.
The Cybersecurity Futures 2025 scenarios (like all scenarios) are not predictions. They are logical narratives that tell stories about how forces of change from a variety of sources—technology, economics, human behavior, corporate strategy, government policy, social and ethical dimensions, and more—may overlap and combine to create a set of cybersecurity problems in 2025 that are different from those encountered today. This future problem set involves a broader set of actors, has greater stakes, sits on different technological foundations, and engages core human values in a novel way. The four scenarios are attached as an appendix to this summary note.

Between May and October 2018, we took these scenarios to seven international locations: Palo Alto, Munich, Singapore, Hong Kong, Moscow, Geneva, and Washington, DC. In each location, we organized a workshop with a mix of participants from government, business, civil society, academia, and other domains. We ran similar workshop processes in order to extract reactions and insights that would be roughly comparable. These comparisons are the most important immediate product of the workshops. Though none of the four scenarios will “come true” in 2025, it is very likely that cybersecurity in 2025 will encompass many of the issues and challenges that these scenarios portray. Anticipating reactions in different parts of the world contributes to a forward-looking research and policy agenda that should be more robust, intellectually and practically—and more broadly applicable across countries and regions.
A set of summary insights took shape from the results of the seven workshops. These insights come with obvious caveats, the most important of which is the use of aggregate geographical categories as placeholders. Ascribing the outcomes of a workshop in Munich to “Europe,” for example (despite broad representation from a number of European countries, institutions, and sectors), is not the same as holding workshops across Europe, or dividing perspectives among the various countries and regions of Europe. The geographic labels are best thought of as imperfect proxies and conceptual “clouds” with fuzzy edges. Another caveat is recency bias; our workshop participants are people, and people read future scenarios in the context of what is most important and urgent in their minds at that moment. We designed our workshop process to minimize these kinds of biases, but it is impossible to fully eliminate them.
Caveats notwithstanding, we believe that the early insights we report below are at least directionally correct and, thus, deserve focused attention in strategic planning and future decision-making. We offer three overarching observations, and propose five new landscape elements that reframe the decision-making environment.
Overarching Observations
1. It is notable that the discourse about digital technology and security is now deeply “nationalized” and has become even more so in the context of our scenarios. As recently as three years ago, a “free and open internet” narrative that placed governments squarely in the background of the digital environment was still robust. That ideology, which in some respects was naive, appears to be largely gone. “Data nationalism” of some kind is now a given. The new narrative centers on technology firmly yoked to the goals of national power. While this is more historically familiar, it is also a significant discontinuity for the internet and the digital economy.
2. There is a strong sense of disillusionment with vague discussions about “cybernorms.” Workshop participants around the world were hard-pressed to attach concrete meaning to norms, or to articulate how discussions about norms would lead—as opposed to follow—emergent behaviors.
3. Some of the most profound upside expectations about what digital technology could do to improve the human experience risk becoming buried in the emerging landscape. The first generations of digital technology came with (possibly outsized) idealism—for wealth creation, safety, efficiency, peace, happiness and more. It was inevitable that those expectations would be adjusted over time. But if the pendulum swings too fast and too far towards the pole of risk and threat—as now appears possible—societies risk losing sight of the massive good these technologies can do if properly managed and secured.
New Landscapes
1. The “golden mean” of light-touch regulation and permission-less innovation that governments and business have carved out together as a foundation for the digital economy over the past 20 years is not necessarily enduring. In our workshops, participants did not try to rescue some version of this formula—by which companies have the freedom to develop and deploy new technologies unless it is shown definitively that those technologies are dangerous—because it was not visible to them how it could become an effective route to improved digital security. The idea that this formula is broken, even as an aspiration, is a significant change in the political-economic environment, and we should expect diverging experiments in new regulatory regimes around the world. While those experiments will share a greater role for governments overall, the global landscape will become increasingly variegated.
This provokes a simple question: Who should lead the charge to course-correct if (perhaps when) things go wrong? In Palo Alto, the answer was “It will have to be the large firms since that is where the capability lies.” In Munich, it was “Europe lacks the firms, and we do not trust governments to respond, so we need a citizen social movement.” In Singapore, the reaction was more muted: “It probably will not go that wrong, but if it does, the government is the fixer-of-last-resort.” Those are very different trajectories that would grate against each other in important ways.
2. Digital geopolitics is no longer a layer superimposed on conventional geopolitics; digital is creating new alignments among new actors, and not only states. At present, many retain the belief that “no one really goes to war over a cyber-attack and, if they do, it is not really about the cyber-attack per se.” Our workshops suggest this belief will not endure. Alliances are being reshuffled: arguments about cyber-attack attribution in Europe, for example, focus as much on the US NSA as they do on groups such as APT-28. Parastatal and criminal organizations are becoming equal-status players alongside large firms and governments: to refer to them as “non-state actors,” implying second-tier geopolitical status, is mistaken. Likewise, “large firms and governments” are now widely seen as nearly co-equal participants in the political process; countries such as Denmark have already created a formal ambassador to the technology sector, and more will follow. The emergence of new technologies that could drastically reshuffle geopolitical power (possibly quantum computing, for example) will accelerate the reformulation of alliances relating to digital interests, and it is possible that firms will be as significant as states in the new alignments. New definitions will also emerge of what constitutes criminal activity, and of who or what is a “criminal.” As those definitions diverge across geographies, the opportunities for digital criminals to arbitrage within the global marketplace will increase.
3. Digital-induced job displacement and inequality will become more than a stressor; these dynamics are set to bring fundamental breakdowns and failures in both labor markets and politics. Social capital and broader societal resilience will be critical assets in navigating the transition towards any new automation and robotics-enabled labor market equilibrium.
Countries and regions are positioned very differently on this dimension. Asians, for example, seem to hold a higher level of confidence that societies can endure through these changes, built on the belief that many Asian societies have proven resilient and cohesive in the face of comparable challenges. However, there is also a looming recognition that economic growth and development trajectories for most countries are increasingly uncertain. Populist movements in the US and Europe demonstrate in part the strains resulting from a loss of confidence that a mix of conventional markets and politics will ensure the benefits of digital technology reach those seemingly being left behind. The success story of the late industrial-era developing country (low-wage manufacturing evolving towards higher value-add along with capital accumulation) is now largely obsolete, and the path for late developers to succeed in a global economy dominated by data flows and machine learning has not been defined. Transnational movements—either of distressed and displaced labor, or perhaps of the (massively empowered) technology elite labor force—are nascent in some parts of the world (particularly the US) and aspirational in Europe; their possible emergence would become an important new part of the security landscape.
4. The largest intermediation platform firms are now seen everywhere as a truly distinct category of player, whose relationships with governments, consumers, and societies need special assessment, attention, and, possibly, oversight. A striking observation is that while many of the platforms are global, or becoming so, conversations about their societal and economic consequences remain national or regional at best. Market power and oligopoly are now assumed in most of the world; Europeans emphasize the negative implications most strongly. In Asia, the emphasis falls on speech, and on how the act of trying to assess “truth” in platform-structured discourse affects social capital and cohesion. America struggles with the consumer-welfare focus of US competition policy; there is little visibility into (and relatively little concern about) how US-based platform firms affect societies and economies outside the US.

These contemporary observations remained largely robust in the context of the 2025 scenarios, though changes in computing architecture were seen as destabilizing. What is clear is that competition policy and cybersecurity policy are converging in many respects, and this trend brings national differences in approaches to competition policy into the security landscape as well.
5. The cybersecurity challenge of protecting networks and datasets from sovereign and criminal thieves is morphing into a challenge of protection from devious manipulation. Brute force attacks remain on the agenda, but there is a broad assumption that the sophistication of attacks is set to rise through some of these more insidious channels, such as adversarial machine learning, subtle deep fakes, or small changes in training set data that intentionally bias algorithms. This will accelerate the trend of cybersecurity becoming a much more scientifically interesting area, but it will also pile even more demand on a workforce that is already under massive stress. Broad societal resilience programs are one response that is talked about more in Asia than elsewhere; in the US, consumers and users are still seen as mostly passive, and the concept that there is an ability to educate them to be savvier consumers of information is still nascent. Turning more of the burden over to automated systems such as artificial intelligence-driven platforms may be another credible response—with substantial differences in what roles and controls should be maintained for human decision-making.
As a result of these observations, we believe that senior decision-makers developing cybersecurity strategies in government and the private sector must now engage with each of the following questions, individually and collectively, on an ongoing basis over the next few years. These are obviously not operational-level questions specific to a particular industry sector or country. However, the answers to and hypotheses on these questions should inform operational plans that are more robust in a fast-evolving environment.
• Where are the new deviant digital black markets evolving? And what is being traded in those markets?
• What is the definition of a criminal? And what are the arbitrage-ready differences among those definitions?
• What new geopolitical alliances are forming and emerging? And how could we better understand the granular nuances of interest cleavage within nations and societies that influence the direction those alliances might take?
• How much digitally exacerbated and/or induced inequality can different societies absorb? And at what rate?
• Where are first-mover advantages to be found—in technologies, of course, but also in policies?

• What characteristics make a society resistant and resilient to digital manipulation? If employees, consumers and citizens need to be reoriented as less-passive players in the cybersecurity landscape of 2025, what new capabilities do they need to attain and how can they attain them?
Grappling with these questions should be a defining focus in 2019 for the C-suite, boards, and government agencies in essentially every country around the world.

Scenario Summaries
Scenario 1—Quantum Leap
The year is 2025, and the first countries to achieve practical quantum computing capabilities have spent the past several years trying to construct a non-proliferation regime that would preserve the economic, strategic and military advantages the technology has begun to generate. But other countries—and even large cities—that are behind in the race have resisted the offer to access watered-down quantum services from the few elite providers in return for restraint in development. Instead, many attempt to pursue “quantum autonomy”. Technology development accelerates almost to the exclusion of ethical, economic and other sociopolitical concerns as quantum leaks into the “deviant globalization” sphere of drug cartels and other worldwide criminal networks. Ultimately, the carrots of a restrictive non-proliferation bargain aimed at governments have not been enticing enough (and the sticks not fearsome enough) to hold a regime together, and the model that more or less worked to contain the spread of nuclear weapons in a previous era fails with quantum. In 2025, the Americans and the Chinese in particular are starting to wonder if their next best move is to reverse course and speed up the dissemination of quantum computing to their respective friends and allies, while the deviant sector is racing ahead.
Scenario 2—The New Wiggle Room
This is a world in which the promise of secure digital technology, the Internet of Things (IoT) and large-scale machine learning (ML)—to transform a range of previously messy human phenomena into precise metrics and predictive algorithms—turns out to be in many respects a poisoned chalice. The fundamental reason is the loss of “wiggle room” in human and social life. In the 2020s, societies confront a problem opposite to the one with which they have grappled for centuries: now, instead of not knowing enough and struggling with imprecision about the world, we know too much, and we know it too accurately. Security has improved to the point where many important digital systems can operate with extremely high confidence, and this creates a new set of dilemmas as precision knowledge takes away the valuable lubricants that made social and economic life manageable. As the costs mount of not being able to look the other way from uncomfortable truths, or make constructively ambiguous agreements, or agree to disagree about “facts” without having to say so, people find themselves seeking a new source of wiggle room. They find it in the manipulation of identity—or multiple and fluid identities. This effort to subtly reintroduce constructive uncertainty and recreate wiggle room overlaps with the emergence of new security concerns and changing competitive dynamics among countries.
Scenario 3—Barlow’s Revenge
As digital security deteriorates dramatically at the end of the 2010s, a broad coalition of firms and people around the world come to a shared recognition that the patchwork quilt of governments, firms, engineering standards bodies and others that had evolved to try to regulate digital society during the previous decade was no longer tenable. But while there was consensus that partial measures, piecemeal reforms and marginal modifications were not a viable path forward, there was also radical disagreement