CrowdStrike’s Corporate Citizenship: Making Mapping Matter

By Ravi Nayyar

A Techno-Legal Update
Aug 9, 2024

Welcome to Article 3 in this series on the CrowdStrike incident-non-incident.

Here’s Article 1.

Here’s Article 2.

Now, as I wrote in Article 1, the way I label the event is inspired by the choice of wording from the then-Commonwealth Minister for Home Affairs.

As I also flagged in that piece, this incident-non-incident is the case study which my thesis was designed for.

Look at the below graphic, inspired by my 2023 Cybercon preso.

My PhD in a graphic. (Source: Me)

3 regulatory circles pertaining to:

  1. the cyber resilience of critical infrastructure (‘CNI’) assets;
  2. software supply chain risk management (‘SCRM’) by CNI operators; and
  3. critical software risk to CNI assets (my thesis).

With this in mind, the CrowdStrike incident-non-incident has the ‘Thesis Trifecta’ for me:

  1. a critical software vendor messing up; which jeopardises
  2. the cyber resilience of the operators of critical infrastructure (‘CNI’) assets; and thus captures
  3. the incredible risks to national security from software supply chains.

To nut out the Trifecta, I’ve written three pieces: one on each Limb.

Article 1 set the scene with Limb 1 — the anatomy of the stuff-up from a critical software vendor, CrowdStrike.

Article 2 continued with Limb 2 — how CNI asset operators were caught up in this saga.

Article 3 brings it home with Limb 3 — all this illustrates how software supply chain risks are national security risks.

Once more unto the breach, my friends.

Everywhere you look around.

Should’ve Woken Up a While Back

First things first, the definition of a software supply chain (emphasis added):

A software supply chain is the entire sequence of events that impacts software from the point of origin where it is designed and developed, to the point of end-use. Each sequence and element in this chain affects the software … The supply chain includes the software code itself as well as the systems and tools used by developers, proprietary and open-source software repositories, signing keys, compilers, and download portals. The entities that comprise the software supply chain can include multiples of developers and technology providers … It is also unusual to find a single company responsible for the entirety of a software code base.

In a word, ‘Sprawling’.

And that’s just one software supply chain.

Software supply chains — especially those for critical software products (here’s the definition of critical software) — have been an increasing source of risk for everyone downstream.

If you want to understand how much of a source of risk, a study by firmware security company NetRise of 100 networking devices (routers, switches, firewalls, VPN gateways and Wireless Access Points — code running on them includes critical software) found:

  • an average of 1,267 software dependencies per device;
  • an average of 1,120 known vulnerabilities per device (a third of those bugs being more than five years old); and
  • 45 Linux kernel versions deployed across the sample, 27 of them end-of-life.

And, given that these grinning leviathans are growing, they’re domains for a whole host of actors — bad, frustrated or plain incompetent.

We have seen the movie many times:

  • left-pad — the maintainer of those sweet eleven lines of JavaScript unpublished them amid a trademark tiff, breaking tons and tons of JavaScript builds and websites;
  • WannaCry — you know this one;
  • NotPetya — you know this one;
  • SolarWinds — you know this one;
  • Microsoft Exchange (2021) — the feds had to get a search and seizure warrant to remove the tons of web shells that Chinese actors (Hafnium) had left on people’s vulnerable Exchange boxes after the campaign was outed;
  • Kaseya — you know this one;
  • Log4j — you know this one;
  • xz utils — you know this one;
  • #CitrixBleed — this one got, eg, DP World Australia pwned;
  • 3CX — #DoubleCrunch;
  • TeamCity — wonder why the SVR targeted CI/CD boxes.

We have had the CISA Director and Office of the National Cyber Director (‘ONCD’) write and/or give speeches specifically on the risks to our very societies and economies from the technologies that are, as ONCD put it, ‘the foundation on which our future lives — lives we cannot yet imagine — are currently being built’.

As part of this, the Director and ONCD have highlighted how risks can be transmitted through broader technology supply chains and cascade across sectors to cause tons and tons of disruption and/or damage.

How there’s that horrible mismatch between:

  • vendors prioritising their financial interests in speed-to-market and functionality (because corporate law in the absence of specific regulation — see Limb 1); versus
  • the criticality (for all of us) of their products being secure-by-design and -default.

A mismatch which jeopardises national (cyber) resilience.

There’s also the usual criticism about the state not having enough visibility into the problem. About industry needing to be far more transparent regarding, eg, breaches (ASD has certainly noticed that here) and vulnerabilities to enable greater accountability and more effective incident response. (I should note that, eg, the USG has done good work itself to share more data with industry, be it via CISA, NSA’s Cybersecurity Collaboration Center and the Cyber National Mission Force’s Under Advisement program.)

And yet again and again, here we are.

Scrambling as societies, economies and governments to clean up after a critical software vendor’s mess.

More recently, an unforced error which bricked millions of Windows machines and disrupted the cyber resilience of the operators of CNI assets around the world (see Limb 2).

Yawning > ‘Shocked!’

The fallout from this incident-non-incident was hardly a surprise.

My initial response (like most in Infosec land) to this incident-non-incident was, ‘Ah, here we go again’.

For the CrowdStrike Falcon Fiasco borders on technological and cyber policy cliché.

Look again at the outputs from the USG.

First, ONCD’s Strategic Intent Statement:

Critical services for millions can be imperiled because of a single person’s failure to recognize a phishing attempt.

Second, an op-ed co-authored by then-NCD Chris Inglis (emphasis added):

… the security challenges in cyberspace are daunting because the scope and scale of any one security incident can be so vast. In a world where clicking the wrong link or neglecting a single software patch can result in a geopolitical incident, responders often focus on an attack’s perpetrator at the expense of addressing the perverse incentives that create these circumstances in the first place.

Third, the Cybersecurity Posture of the United States, put out in May 2024 by ONCD (emphasis added):

Adversaries are increasingly taking advantage of complex and interconnected relationships between organizations and their suppliers, customers, vendors, and service providers, compromising single nodes that grant surreptitious access to victims in the United States and around the world.

[The 3CX hack showed] how a single initial compromise can quickly spread through interlinked technology supply chains and third-party relationships.

The incident-non-incident is nothing new. The USG said that such a thing — one event or person being the source of a lot of trouble — could happen way back in October 2021.

That the operational resilience of society itself can be disrupted due to one malicious/erroneous thing happening somewhere in the software supply chains making us tick.

(And I’ve been writing about this sort of thing since 2022.)

Fast-forward to July 2024 and the CISA Director’s write-up on the incident-non-incident (emphasis added):

And this is due, in large part, to a fragile software ecosystem that has historically deprioritized security in favor of features and speed to market.

… our highly digitized, highly interdependent, highly connected, and highly vulnerable critical infrastructure ecosystem.

Hardly different to the policy statements from 2021 onwards, eh?

Perhaps that’s why an incident from 2021 provided the template for a great meme on the offending Channel File 291, the immediate trigger for the CrowdStrike Falcon Fiasco.

Software supply chains are critical supply chains. (Source: X)

The Sonatype CTO was on the money when describing the incident-non-incident as ‘the perfect tabletop’ for an actually malicious event.

The Elephant Is the Lessor of the Room

Folks, to reiterate the bleeding obvious, software supply chains are domains for more and more baddies, and home to (unnoticed) vendors and SaaS providers for major companies/entire economic sectors.

The malevolence or stupidity of these groups can have terrible downstream consequences, such as the 2024 Snowflake customer-data saga.

(For me, the term ‘snowflake’ has had a rather different meaning since that drama.)

Prominent insurer, Howden, referred to the targeting of software supply chains, through attacks like SolarWinds, Microsoft Exchange, Kaseya, Log4j and MOVEit, as designed ‘to maximise the fallout across multiple organisations’. It used Change Healthcare as an example of the ‘inherent risk of aggregation’ and thus concentration of potential (losses) in sectors ‘due to reliance on industry specific software and payments / administration platforms’.

The insurer provides some great stats too:

Recent disclosures show that the MOVEit file transfer breach, which began in June 2023, affected approximately 2,800 organisations and 96 million people.

The user base for the Change Healthcare payments and claims platform is made up of 900,000 doctors, 33,000 pharmacies and 5,500 hospitals in the United States. The CEO of parent company UnitedHealth has indicated that up to one-third of the U.S. population has had sensitive data leaked.

In general, note the findings of a 2024 SecurityScorecard study (done using SecurityScorecard Automatic Vendor Detection ‘to identify the most frequently and extensively used companies of approximately 12 million public and private sector organizations’):

  • 150 companies account for 85% of customer relationships and 90% of product detections;
  • 15 companies account for 62% of products and services; and
  • 41% of those companies had at least one compromised device on their networks.

Folks who aren’t in threatintel shops, or in economic sectors reliant on such popular vendors/SaaS shops, most likely don’t even know these vendors/service providers exist until something bad happens, monoculture or not. Perhaps that lack of awareness applies to national security officials as well.

All this makes an incident-non-incident like CrowdStrike’s stuff-up look rather quaint, eh?

Though, if we momentarily return to the CrowdStrike incident-non-incident (or ‘The CrowdStrike disaster’, as Delta Air Lines’ counsel called it), here are some numbers, courtesy of Delta’s counsel (emphasis added):

At Delta, it shut down more than 37,000 computers and disrupted the travel plans of more than 1.3 million Delta customers.

Approximately 60 percent of Delta’s mission-critical applications and their associated data — including Delta’s redundant backup systems — depend on the Microsoft Windows operating system and CrowdStrike.

But then again, these are all examples of vendors messing up or being compromised.

They don’t even include OSS.

Don’t Forget OSS

Most estimates will tell you that almost all code is OSS.

And it’s growing. Sonatype observed year-on-year growth in package requests served in 2023 of 32% for npm, 25% for Maven, 31% for PyPI and 43% for .NET; collectively, trillions of package requests.

This growth encourages gargantuan levels of software supply chain risk for modern computing.

Some examples:

  • A 2023 analysis of 1,067 commercial codebases (from seventeen sectors) by Synopsys found that: the mean of OSS dependencies/codebase was 526; around 65% of codebases had OSS bugs that had been actively exploited/had POCs/were RCE bugs; and around 43% had OSS dependencies without development activity within the previous two years.
  • ReversingLabs observed an over 1300% increase in ‘threats circulating via [OSS repositories]’ from 2020–23, including over 7,200 malicious Python (PyPI) packages from January-September 2023.
  • 25% of all OSS projects have a single maintainer, and 94% have fewer than 10 maintainers ‘actively contributing code’.
I couldn’t not include this cartoon. (Source: xkcd)
  • A 2022 analysis of 1,883 OSS packages by Endor Labs found 254 Java (Maven) packages with an average of fourteen dependencies per package (well below the reported average of 77 for JavaScript (npm) packages). Six of said 254 packages had over 100 dependencies each. Endor Labs’ broader analysis discovered that around 95% of applications’ vulnerable dependencies were transitive (definition below).
Direct (P1) v Transitive (P2, P3) Software Dependencies of an Application. (Source: Endor Labs)
  • A threat actor leveraged fake Python infrastructure to distribute a malicious version of the hugely popular Colorama package in March 2024.
  • Hacktivists poisoned OSS with ‘protestware’ to (indiscriminately) disrupt the cyber resilience of entities served by those software supply chains (eg 1, 2, 3, 4, 5).
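The direct-versus-transitive distinction in the Endor Labs figure can be made concrete with a toy dependency resolver (all package names below are hypothetical, chosen purely for illustration):

```python
from collections import deque

def resolve_dependencies(graph: dict[str, list[str]], app: str) -> tuple[set[str], set[str]]:
    """Walk a dependency graph breadth-first, splitting the app's
    dependencies into direct (P1) and transitive (P2, P3, ...)."""
    direct = set(graph.get(app, []))
    seen, queue = set(direct), deque(direct)
    while queue:
        pkg = queue.popleft()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return direct, seen - direct

# Hypothetical graph: the app declares one dependency, which drags in four more.
graph = {
    "app": ["web-framework"],
    "web-framework": ["http-lib", "templating"],
    "http-lib": ["socket-utils"],
    "templating": ["string-pad"],  # a left-pad-style micro-dependency
}
direct, transitive = resolve_dependencies(graph, "app")
```

Note the ratio: one direct dependency pulls in four transitive ones the developer never chose, which is exactly why ~95% of vulnerable dependencies turn up in the transitive tail.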

TL;DR

Looking only at vendors/other for-profit developers when formulating cyber→national security policy is incredibly narrow.

Bonus points if you can geolocate this photo. (Source: Me)

Policy is meant to be a numbers game:

  • OSS dependencies of your enterprise code;
  • those OSS dependencies’ own OSS dependencies;
  • most OSS packages having inadequate maintenance; and
  • baddies looking to exploit all of the above.

(Oh, and if you’re interested, Sonatype maintains a timeline of software supply chain attacks, including those involving OSS, going back to 2017.)

As ONCD put it (emphasis added):

Computers, defined by the hardware and the software that runs them, are now so complex that it can be difficult to fully appreciate the sum of their constituent parts — a sum which now extends beyond any given physical location. This problem compounds exponentially as computers have been networked together into the vast and increasingly complex digital systems that define our modern lives, economies, and societies. These sprawling arrays of daunting complexity are easy for malign actors to hide in and exploit, and, to date, too challenging for industry or government alone to defend or protect.

So, you need to survey everything.

Which was implied by the European Systemic Risk Board (‘ESRB’) in relation to the financial sector when it decried (emphasis added):

a lack of comprehensive and timely data on operational linkages, such as common third-party providers, common exposures to hardware and software packages, and common exposures to clients, retail partners and counterparties.

Every. Single. Time.

Given the scale of the problem, one wonders if it was the futility of the whole thing which played a role in President Biden’s extension (by a year) of the ‘National Emergency With Respect to Securing the Information and Communications Technology and Services Supply Chain’ on 8 May 2024.

The original emergency having been declared by President Trump on 15 May 2019.

Sigh.

But lo, we have a potential solution!

The Journey of Self-Discovery

You have to understand where your pressure points are, your desires, needs, issues, etc, so that you can mitigate or leverage them, as appropriate, to make yourself stronger than the sum of your parts. (I’ll be doing book signings in the foyer.)

The same logic must apply to national security risk management.

We, our allies and our partners need to map our jurisdictions’ software supply chains.

It’s like mapping, with the help of allies and partners, supply chains for critical goods like vaccines, refined oil, semiconductors, critical minerals and clean energy tech.

But for OSS, enterprise codebases, SaaS/IaaS/PaaS relationships, as well as the flow of data among and from the operators of the plumbing of the Internet like code repositories, ISPs, CDNs, CAs, DNS providers and domain registrars.

For reference, here’s a highly simplified software supply chain.

A simplified software supply chain. (Source: NTIA)

Now, create a series of these for our societies and economies.
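One way to sketch such a map is as a directed graph of ‘who supplies whom’, from which the systemically important nodes fall out as those with the most downstream dependents. A minimal sketch (all entity names are invented for illustration):

```python
from collections import defaultdict

def dependents_per_supplier(edges: list[tuple[str, str]]) -> dict[str, int]:
    """Given (supplier, customer) edges, count each supplier's direct and
    indirect downstream customers -- a crude 'systemic node' signal."""
    children = defaultdict(set)
    for supplier, customer in edges:
        children[supplier].add(customer)

    def reach(node: str, seen: set[str]) -> set[str]:
        out = set()
        for c in children.get(node, ()):  # walk each direct customer once
            if c not in seen:
                seen.add(c)
                out |= {c} | reach(c, seen)
        return out

    return {s: len(reach(s, set())) for s in children}

# Hypothetical chain: one EDR vendor sits under two CNI operators,
# one of which itself supplies a downstream payments firm.
edges = [
    ("edr-vendor", "airline"),
    ("edr-vendor", "hospital"),
    ("airline", "payments-firm"),
    ("os-vendor", "airline"),
]
scores = dependents_per_supplier(edges)
```

Here the EDR vendor touches three downstream entities despite having only two direct customers — the indirect reach is the bit no single stakeholder can see on its own, hence the case for the state aggregating the maps.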

Assumptions

Of course, I am assuming that, eg:

  • firms and OSS developers/maintainers have robust, dynamically-updated asset and software inventories — fundamental to such a mapping exercise;

NOTE: This assumption is also crucial because the map is one of technological relationships and thus must mirror the dynamism of the technology landscape to be useful.

That said, this assumption is problematic because of the horrific amount of technology debt floating (unnoticed) within and among organisations, which significantly multiplies our collective attack surface. Especially as that debt is (unknowingly) carried by all stakeholders, it is incredibly difficult for vendors and other developers to even quantify the costs of their code causing massive problems for users, though it is a good idea for vendors and other developers to do such threat models and scenario planning.

I also note that SBOMs aren’t regarded as mature, and thus useful, enough at this stage.

  • bureaucrats are literate in software and other computing terminology, and resourced to collect, process and analyse relevant data to ultimately generate the maps; and

NOTE: The mapping exercise could be shepherded by the jurisdiction’s Siginters/cyber security agency/prudential regulator folk to ensure it is governed by those who are literate.

Resourcing is a key question, given the ESRB warning in 2022 that such an exercise for the financial sector alone would be arduous due to ‘current challenges’ in collating data on financial institutions and their vendors.

  • stakeholders will be willing to: voluntarily share their asset and software inventories, as well as their registers of suppliers and customers, with the state; and be happy to more generally help the state perform said mapping exercise.

NOTE: Will officials require new information-gathering powers (available on application to federal judges)?

Existing powers include:

  • section 215 of the USA PATRIOT Act (codified at 50 USC § 1862), especially as the goal is to understand what the entity is running, and who’s in their client and supplier books;
  • European/American/Australian financial regulators’ powers of oversight over institutions’ third party service providers and/or institutions’ obligations to maintain a register/notify financial regulators of contracts for (critical) third party arrangements; and
  • Security of Critical Infrastructure Act 2018 (Cth) s 37 (‘SOCI Act’): the broad, general power of Dep-Sec Home Affairs to, roughly speaking, obtain documents or information from the owner/operator of a CNI asset if said stuff could aid the operation of the SOCI Act in relation to the asset.
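The inventory assumption above would, in practice, lean on machine-readable inventories such as SBOMs. A minimal sketch of pulling component names and versions out of a CycloneDX-style JSON document, the raw material for any inventory roll-up (the toy SBOM here is invented):

```python
import json

def list_components(sbom_json: str) -> list[tuple[str, str]]:
    """Extract (name, version) pairs from a CycloneDX-style SBOM."""
    bom = json.loads(sbom_json)
    return sorted(
        (c.get("name", "?"), c.get("version", "?"))
        for c in bom.get("components", [])
    )

# A toy SBOM for illustration only.
sbom = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1"},
        {"type": "library", "name": "openssl", "version": "3.0.7"},
    ],
})
inventory = list_components(sbom)
```

Trivial on one document; the hard (and currently immature) part is doing this accurately and continuously across an entire economy’s codebases.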

What Should Be Mapped?

For the sake of the argument, let’s make the aforementioned assumptions and commission the software supply chain mapping exercise.

What are the sorts of persons, services and code that we are wanting to identify and track?

The Reserve Bank of India provides a good rule of thumb through what it requires non-bank Payment System Operators to do:

A complete process flow diagram of network resources, inter-connections and dependencies, and data flows with other information assets, including any other third-party systems, shall be created and maintained.

Let’s flesh out some of the specifics of what should be mapped by the state in partnership with other stakeholders.

The sorts of vendors/service providers that such a mapping exercise is especially designed to capture would have the characteristics, roughly speaking, of folks like:

  • Systemically Important Financial Market Utilities — see 12 USC § 5463(2);
  • critical ICT third-party service providers — article 31 of DORA; or
  • gatekeepers — see article 3(8) of the Digital Markets Act.

And they would have these characteristics generally or in relation to the specific economic sector(s) that they serve.

The sorts of enterprise software that such a mapping exercise is especially designed to capture (if the vendors for this software haven’t already been identified) would be what I’ve called ‘Systemically Critical Software’ (taking inspiration from the OECD and the EU).

That concept revolves around two criteria:

  • scale — the size of the user base; and/or
  • scope — dependency factors like: the criticality of the product to certain social/economic functions; whether it’s a dependency for hugely popular stuff; and whether exploitation of bugs in it would cause a massive problem for everyone.

(One way to understand the scale and scope criteria is by looking at the criteria listed in article 6(2) of the European Commission’s original proposal for the Cyber Resilience Act.)
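The scale and scope criteria could be operationalised as a crude triage score; a sketch under entirely invented weights and thresholds (this is not the OECD’s, EU’s or anyone else’s actual methodology):

```python
import math

def criticality_score(users: int, dependency_of_popular_stuff: bool,
                      serves_critical_function: bool,
                      exploitation_is_systemic: bool) -> float:
    """Toy 'Systemically Critical Software' triage: scale from the size of
    the user base, scope from dependency/criticality flags. All weights
    are invented for illustration."""
    scale = math.log10(max(users, 1)) / 9  # ~1.0 at a billion users
    scope = sum([dependency_of_popular_stuff,
                 serves_critical_function,
                 exploitation_is_systemic]) / 3
    return round(0.5 * scale + 0.5 * scope, 2)

# A widely deployed security agent versus a niche internal tool.
edr = criticality_score(10**8, True, True, True)
niche = criticality_score(5_000, False, False, False)
```

Even a toy like this makes the policy point: a modest user base with heavy scope factors can outrank a huge user base with none, which is why scale alone is a bad proxy for criticality.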

The sorts of OSS packages that this mapping exercise is especially designed to capture are those identified as critical to ecosystem resilience by major OSS community initiatives.

The USG certainly has the right idea by launching (and committing $11 million to) the Open-Source Software Prevalence Initiative, which seeks to map the use of OSS in US CNI assets, ‘allowing the Federal Government and partners in the open-source community to strengthen the security of the open-source software ecosystem’.

Case Study: Financial Sector

Helpfully, financial regulators and agencies have been working on the mapping sorta thing more generally, given their increased focus on risks from (critical) third party service providers to prudentially-regulated institutions and systemic stability.

See, eg, the aforementioned mapping requirement from the Reserve Bank of India.

The ESRB has even developed a conceptual model for tracking systemic cyber risk to the financial sector and defined a ‘cyber map’ as tracking the:

identification of systemic nodes in the [financial] system by monitoring and analysing the main technologies, services and connections between financial sector institutions, service providers and in-house or third-party systems.

Such maps, or those inventories voluntarily created by entities, can then be aggregated by the state, as well as allies and partners in their jurisdictions, to generate nationwide maps of software supply chains.

An expanded version of what the International Monetary Fund has hypothesised for the financial sector.

(Source: International Monetary Fund)

Honourable mention to the Cambridge Centre for Risk Studies for their map of the ‘world’s largest commercial companies and their trading relationships, showing the systemic linkages through major software providers’. From 2014.

How times have changed since, eh? (Source: Cambridge Centre for Risk Studies)

Obviously, the preceding two graphics are simplifications and only feature (tech) vendors (not, eg, critical OSS packages) but their objective is kinda sorta along the lines of mine.

That is, understanding the flow of code/data/services and (trusted) relationships among different stakeholders that, taken together, touch on (inter)national security.

Identifying also those relationships or stakeholders that can amplify incidents at any point into something with systemic implications.

(Source: European Systemic Risk Board)

Which is especially vital when there is no transparency about network topologies (especially concentrations and monocultures), as well as sheer uncertainty if many screens suddenly go blue, eg, thanks to a very popular third party vendor for folks in a CNI sector.

What is also necessary is to integrate the desired technical software supply chain maps with those of ownership and other non-cyber relationships binding the same stakeholders. This will provide us, including allies and partners, an holistic understanding of how (inter)national security risk flows through and from relevant layers of cyberspace (like software supply chains), as well as other economic and social domains.

Take the below visualisation of the ‘financial sector as a multi-layered network of complex systems’, courtesy of the ESRB.

(Source: European Systemic Risk Board)

And a visualisation of how the different layers fit together.

(Source: European Systemic Risk Board)

As this case study highlights, the state is no stranger to such mapping exercises. Most of the above graphics are from ESRB literature on analysing systemic cyber risk to the European financial system from 2020 onwards. In it, the ESRB has referred to mapping efforts by the Dutch and Norwegian central banks. It has also called for prudential regulators to focus on herding the various arms of the state and industry stakeholders to ensure mapping exercises have meaningful outputs.

By the way, mapping is hardly an alien concept in national security policymaking, right, folks?

We’ve mapped cables for yonks.
And Satcom.

Exercises

A comprehensive map of, essentially, a society’s digital economy is a national security enabler. Especially when integrated with maps of non-cyber stuff connecting the same stakeholders (as flagged above), as well as equivalent outputs from allies and partners.

Software supply chain mapping is an enabler of more realistic scenario planning, tabletop exercises and stress testing by: our national security policymakers and regulators; allied and partner counterparts; and other stakeholders, especially industry (which owns and operates most CNI assets, and markets most software) and OSS folk.

This mapping makes (regional and international) crisis response or stress testing frameworks like the EU systemic cyber incident coordination framework (EU-SCICF) or cyber resilience scenario testing (CyRST), respectively, meaningful.

For unless we know how our societies and economies’ plumbing is organised, we’re left with the below scenario.

We can always do better.

Greater realism in planning would particularly stem from our, allies’ and partners’ newfound awareness of the weakest links in our software supply chains (akin to knowing whom we especially depend on for critical medical inputs like active pharmaceutical ingredients).

Said awareness translates into a more informed estimation of what adversaries would seek to target to cripple our national/collective cyber/economic resilience (eg the Volt Typhoon scenario) and how that targeting would interact with the non-cyber layers of our societies and economies.

After all, adversaries would especially seek to terminate the cybery stuff that we/allies/partners rely on but they don’t (to the same degree), as Professor JD Work highlights below.

Baddies will especially target stuff they don’t rely on themselves.

Mapping thus enables realistic threat modelling and helps ensure we aren’t caught flat-footed the next time something like the CrowdStrike incident-non-incident happens. Or at least something malicious.

Tangentially, we better keep said software supply chain maps far away from prying eyes, the infrastructural Rosetta Stones that they are for us, allies and partners.

By the way, another honourable mention to the Cambridge Centre for Risk Studies. A decade ago, they war-gamed the fallout from an insider attack on a hypothetical ‘Systemically Important Technology Enterprise’ (logic bomb in the Sybil Corporation’s ‘database product used throughout the corporate world’; emphasis added):

The resulting global macro-economic impact portends an economic downturn driven by a reduced trust in IT by business leaders, investors and consumers, which we call an ‘information malaise’.

The damage caused by the more extreme variants of Sybil Logic Bomb is almost as severe as the Great Financial Crisis of 2007–2012.

‘Nuff said.

Concentrations and Monocultures

It is precisely the software supply chain mapping discussed above which enables us to spot concentrations and monocultures of the sorts warned about in the aftermath of the CrowdStrike incident-non-incident.

It helps counter the problem which ONCD defined as follows (emphasis added):

Complex and interconnected supply chains for software and other information technology and services, combined with growing reliance on common third-party service providers, create opportunities for sophisticated adversaries to access victims at scale and complicate the efforts of defenders to identify and manage cybersecurity risks.

To return to the financial sector case study, the ESRB has seen the concentration issue coming. When it suggested a toolkit in 2022 for tackling systemic cyber risk, it called for financial regulators to track, eg (emphasis added):

systemic nodes by size, complexity, substitutability and interconnectedness of institutions and third-party ICT providers, for example through market concentration of external IT service providers (in %), average number of cloud service providers and number of external IT service providers.

The ESRB suggested in 2023 that a tabletop scenario for the financial sector could be developed around ‘a severe disruption at a critical third-party ICT service provider’.

Earlier this year, it referred to the downstream effects of the ION Trading ransomware attack as (emphasis added):

a striking example of how an incident at a relatively little-known third-party provider (albeit one of great significance) can cause major disruption, if that institution provides vital central services through the financial industry’s supply chain.

This is something even governments have to look at for their own cyber resilience.

Indeed, it is software supply chain mapping which enables the state to zero in on concentrated vendors like CrowdStrike, in relation to which it can task the collection of (further) economic intelligence.

In CrowdStrike’s case, intelligence on the industry dynamic of ‘platformisation’.

As Chris Hughes pointed out, the push for ‘platformisation’ by cyber resilience vendors — whereby a vendor seeks to offer, on a single platform, the services provided by multiple other firms — encourages such concentration. CrowdStrike itself refers to its Falcon platform as ‘Driving Consolidation’.

(Source: CrowdStrike)

Palo Alto Networks has also dedicated a section of its website to the C-Word, publishing literature on why it’s a great idea and why you should task Palo with delivering a consolidated solution for your organisation.

Again, just look at the above screenshot from CrowdStrike. How does this industry trend of bringing multiple solutions under the one vendor’s roof not engender concentrations, if not monocultures, at least by heightening switching costs for customers?

That said, we should be careful when using a word like ‘monoculture’.

After all, between July 2021 and June 2022, CrowdStrike’s market share for EDR was 17.7%.

Yes, 8.5 million Windows boxes were estimated to have been bricked in the incident-non-incident, but that’s <1% of all Windows boxes. As pointed out by Rob Graham, ‘1% is not a monoculture’.

Ditto re CrowdStrike’s market share.

So, to reiterate my earlier word choice, we have a concentration on our hands with CrowdStrike, not a monoculture. The M-word is very much a Microsoft skillset.

Gold.

(Andrew Plato and Chris Rohlf provide some interesting arguments around the software monoculture question, by the way.)
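The concentration-versus-monoculture distinction can be put on a rough numeric footing with the Herfindahl-Hirschman Index over market shares. CrowdStrike’s 17.7% EDR share is from the article; every other share below is invented so the market sums to 100%:

```python
def hhi(shares_pct: list[float]) -> float:
    """Herfindahl-Hirschman Index: sum of squared market shares (in %).
    Under the US DOJ/FTC convention, above 2,500 is 'highly concentrated'."""
    return round(sum(s * s for s in shares_pct), 1)

# Hypothetical EDR market around CrowdStrike's 17.7% share. Lumping the
# long tail into one 37.3% entry overstates concentration; a real
# analysis would split it across many small firms.
edr_market = [17.7, 15.0, 12.0, 10.0, 8.0, 37.3]
monoculture = [95.0, 5.0]  # the Microsoft-desktop-OS sort of picture

edr_hhi = hhi(edr_market)
mono_hhi = hhi(monoculture)
```

On these toy numbers the EDR market lands well short of a monoculture’s HHI, which is the quantitative version of ‘1% is not a monoculture’ — concentrated, yes; monoculture, no.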

Nomenclature notwithstanding, the issue at hand is the sheer number of (essential) services disrupted by CrowdStrike’s bad Rapid Response Content update. Indeed, of those polled by Aon at a webinar held a week after the incident-non-incident, 32% said they were ‘indirectly impacted’, just more than the 30% who said they were ‘directly impacted’.

The bad update from CrowdStrike did quite the dipsy-doodle around economies, eh?

It is precisely the aforementioned software supply chain mapping which enables us to identify the bits (or larger chunks) of our economies and societies that any concentration in software/services actually implicates.

Look at what and where boxes were bricked, not how many boxes were bricked.

Even though CrowdStrike’s client book is nowhere near as large as Microsoft’s, it is because (as discussed in Limb 2) a fair deal of it is in CNI that we need to perform said mapping and ensure that, as the ESRB declared for the financial system:

Systemic nodes … should operate with elevated levels of cyber resilience.

As Kevin Beaumont put it:

… we have a small number of cyber companies effectively operating as God Mode on the world’s economy now.

Oh, and Dan Geer especially has been warning about this sort of stuff (be it concentrations/monocultures/product security failures) for years.

As Dan put it in the wake of the CrowdStrike Falcon Fiasco, ‘It Is Time to Act’.

And, in explaining why this incident-non-incident has the 'Thesis Trifecta' for me, I have attempted to provide workable law and policy reform recommendations on that front.

Fair bit happening there.

Wrapping Up

This saga, this incident-non-incident, this ‘CrowdStrike disaster’ (as Delta Air Lines’ counsel put it), this great CrowdStrike Falcon Fiasco, was the SolarWinds moment for CrowdStrike — popping into the zeitgeist for all the wrong reasons.

And that too, not because of the merchandise.

The most bizarre stocking stuffers. (Source: CrowdStrike)

Or whatever this is.

Doesn’t this trivialise the very real human and societal costs of cybercrime?

Indeed, software supply chain-driven events like the incident-non-incident make me furrow my brow.

We are quite likely to threaten our collective existence because someone at a major vendor — which most ordinary folk haven’t heard of but is vital to key commercial/military supply chains — sends a bad update (malicious or not) to their customers that happen to be CNI asset operators/other systemically important businesses/entities.

An update which then bricks, en masse, the boxes that most or all of those customers are using, because the boxes run software sourced from the one company and are thus uniformly prone to being bricked by said bad update.

A cascading, systemically significant, outage (malicious or not) which traverses software supply chains and jeopardises the continuity of essential services worldwide.

All because of (a slew of) concentration(s) or monocultures in software supply chains that we have allowed (yes, allowed) to fester.

An outage like NotPetya (there’s a reason the GRU targeted Linkos Group’s update servers) but on an even larger scale.

Of course, as seen with the CrowdStrike incident-non-incident, jurisdictions have mitigating controls in place (like grounding planes out of caution or medics switching to pen and paper). And yes, we are resilient as a species.

But it would be good if we stopped testing that resilience beyond realistic (red team) exercises and coordinated stress tests.

We should instead be leveraging this golden opportunity to comprehensively recalibrate our policy settings. (And thanking CrowdStrike for bringing this imperative into ever sharper relief.)

For the solution is far larger than a vendor paying off overworked/burnt out IT staff — at the very customers that that vendor bricked — with minuscule food delivery vouchers. (That’s just grubby as heck, CrowdStrike.)

To bring this series home, here are two pithy quotes.

First, from the X user Pinboard:

Quite the way to find out.

Second, from software engineer Ellen Ullman:

The computer was suddenly revealed as palimpsest. The machine that is everywhere hailed as the very incarnation of the new had revealed itself to be not so new after all, but a series of skins, layer on layer, winding around the messy, evolving idea of the computing machine ... And down under all those piles of stuff, the secret was written: We build our computers the way we build our cities — over time, without a plan, on top of ruins.

Bah, come on, let’s end on an optimistic, if not wistful, note.

The vision which ONCD imagined in 2021:

No single component is a source of catastrophic risk, no one Achilles heel is waiting to unleash cascading, systemic failure, and no minor slip-up is capable of producing a massive breach in privacy.

A Techno-Legal Update

Written by A Techno-Legal Update

Vignettes from the intersection of law and technology, and a word or two about sport. Composed by Ravi Nayyar.