In 2025, the world has come to be defined by computing and the technologies which are enabled by it. We live in an age where both organisations and individuals relentlessly purchase technology to maximise the efficient realisation of their own missions, securing the greatest available force multipliers which contemporary technology can deliver and which the market is able to sell.
For many individuals, their homes and cars are defined by a relentless advance in information processing technology: cellphones that provide compute power which would have been unprecedented for a supercomputer just a generation ago; smart speakers harnessing advanced voice recognition and Large Language Model (LLM)-based AI.
However, this rapid and blind adoption of technologies, as they are sold onto the market by commercial organisations, is often done without considering where sovereignty and control of that technology truly lies. This is understandable: until now, people have lived in a world where inanimate objects are essentially inert and do not embody particular motives or interests. Someone buying a smart TV may not expect it to be any more an active actor in their home than a set of dinnerware.
In reality, software, and the technologies that embed it, is fundamentally transformative. A computerised device can act as an actor in its own right, and with this capability comes the ability for devices to advance the interests of parties other than the device owner. When purchasing a smart TV, or even a tractor, the shrewd purchaser must ask: in whose interests did its creator design it to act?
Increasingly often, the answer is not “you, the buyer”. A smart TV may be designed to report information about the owner’s viewing habits to the original manufacturer without their knowledge or consent, for the benefit of the manufacturer and their content partners. The TV may be deliberately designed to prevent skipping advertisements as part of an agreement between the manufacturer and a content network: an example of conflicted loyalty.
A lack of technological sovereignty: the risks to organizations
Often, these kinds of data and information grabs in the individual’s domain are executed because there is no meaningful resistive force opposing them. An individual may lack the awareness to ask the relevant questions when purchasing a product. Surprisingly, however, organisations appear to fall prey to such risks with equal frequency.
A mundane yet physically massive example is found in a robotic parking garage in Hoboken, NJ, which in 2006 was rendered inoperable when the software provided by the garage’s vendor ceased to operate. The owner of the garage had come into dispute with the vendor over the licensing of the software, without which the garage could not run and no car could be inserted or retrieved. As a result of this dispute, the vendor shut off the system via a software licensing mechanism, essentially taking all cars inside the garage hostage and forcing the nominal “owner” of the garage to acquiesce to the vendor’s terms.
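The mechanism behind such a shutdown is typically mundane. As a minimal sketch, assuming a simple licence-expiry check (all names and messages here are illustrative, not the actual garage software):

```python
from datetime import date

def licence_valid(paid_through: date, today: date) -> bool:
    # The vendor's software refuses to run once the licence lapses.
    return today <= paid_through

def operate_garage(paid_through: date, today: date) -> str:
    if not licence_valid(paid_through, today):
        # Every car inside the garage is now effectively held hostage.
        return "INOPERABLE: licence lapsed, contact vendor"
    return "OK: retrieving vehicle"
```

A check this trivial is enough to turn a multi-storey garage into a bargaining chip; the kill-switch need not be sophisticated to be effective.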
This is not an isolated example. Increasingly, software and hardware developers have realised that their technologies can be programmed to embody their own interests over and above those of the device owner. Moreover, since both individual and organisational customers are largely unaware that this is being done, the resultant power grabs often occur without the customer’s knowledge and without meaningful opposition or contest.
While such hazards are a concern at the individual level, at the level of military and national defence they become an existential threat. Awareness that devices can embody the agency of their manufacturer, and of the implications of that fact for national sovereignty, is only slowly building amongst lawmakers and decision makers.
In the defence space, the principle of technological sovereignty is critical. It must be possible to audit software or electronic technology for undesired functionality, or, in effect, residual loyalty to a manufacturer — or a foreign power — over and above the interests of those in the field.
Such conflicted loyalty can be found both domestically and internationally. Internationally, if a silicon component sold by a Chinese firm ends up unwittingly incorporated into a piece of US defence hardware, the foreign power has a natural interest in ensuring that the component can be remotely deactivated. A foreign power may not act on that interest in every case, but it is never safe to assume that it has not. Domestically, vendors may instead be motivated by their own bottom line, for example by designing a software product to shut down automatically if a fee is not paid.
Even bulldozers today are commonly fitted with remotely operable kill-switches, nominally to render them inoperable if a lease payment lapses. Even if that principle is accepted, in practice it means that a vendor’s entire fleet of construction equipment could be rendered inoperable if a foreign actor were able to remotely compromise the vendor’s control centre. This demonstrates that the lack of sovereign control is as much a threat to national security in the civilian sector as it is to military hardware.
The path to technological sovereignty
One answer to these conundrums is to develop a procurement framework for ensuring a principle of technological sovereignty is embodied in all hardware, software, devices and equipment procured for military or national security purposes, or for mission-critical civilian infrastructure. For any device with an electronic circuit, the question, “who truly controls this hardware after it has been sold?” must be asked relentlessly; the answers will often be surprising.
The principle of technological sovereignty is essentially as follows:
1. A piece of technology must be designed to secure the interests of its owner, over and above those of its creator, a foreign power, or any other entity.
2. A piece of technology must be auditable, so that its owner can ascertain that criterion (1) is met.
Awareness of the risks posed by the potential for mixed loyalty in purchased equipment, and of the concept of technological sovereignty, must be raised if this risk is to be effectively mitigated. Criterion (2) is most effectively addressed in the software space by maximising the adoption of open source software. Since open source software can be freely audited, it can be verified that no functionality is hidden that would be detrimental to the operator. Moreover, since open source software can be modified in the field as needed, even if detrimental functionality were added by a third party in the supply chain, it could readily be removed, making attempts to add such functionality largely pointless. The use of open source software also ensures more generally that the software can be adapted to meet the changing or unforeseen needs of a defence organisation, even if the original vendor is uninterested, unwilling or defunct, ensuring that an organisation can control its own destiny.
The risks posed by a lack of technological sovereignty also arise in the hardware space, even before any software is considered. All modern computer hardware contains firmware: software placed on the device by the hardware vendor, deeply embedded and essential to its operation. Just as software can pose a threat to technological sovereignty if it embodies the interests of its vendor over those of its operator, vendors have realised that firmware can be used to control the usage of their hardware after it is sold.
Moreover, features nominally marketed as providing a security benefit may actually undermine technological sovereignty and reinforce a vendor’s control over a platform after it is sold. Many vendors now proudly advertise that their hardware supports “secure boot” functionality, ensuring that firmware and software are not tampered with by a malicious attacker. However, such functionality is often implemented by “keyfusing”: a chip is designed so that firmware can only be used if digitally signed by the original hardware vendor. The digital fingerprint of the vendor’s signing key is burnt into the chip at manufacturing time, permanently bonding the chip to that key identity.
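The effect of keyfusing can be sketched as follows. This is an illustrative toy model only, assuming a boot ROM that compares the SHA-256 fingerprint of a presented public key against a digest fused at manufacture; real implementations would additionally verify the firmware image’s signature under that key.

```python
import hashlib

# Digest of the vendor's public key, burnt into the chip at manufacture.
# Once fused, this value can never be changed.
FUSED_KEY_DIGEST = hashlib.sha256(b"vendor-public-key").digest()

def boot_rom_accepts(presented_key: bytes) -> bool:
    # Firmware is only considered if it is endorsed by a key whose
    # fingerprint matches the fused digest, i.e. the vendor's key.
    return hashlib.sha256(presented_key).digest() == FUSED_KEY_DIGEST
```

Because the fused digest is immutable, the device owner can never substitute their own key; only the original vendor can ever endorse new firmware.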
This approach is antithetical to technological sovereignty: while it prevents malicious changes to the firmware or software by an attacker, it also prevents the owner of a device from modifying the firmware or software on the device according to their needs. Even if that firmware or software is open source, this is moot if changes required by circumstance cannot be deployed to the actual hardware.
Moreover, it is at odds with contemporary best practices for cryptographic key management, as there is no provision for key agility; the signing key used by a vendor to endorse firmware images cannot be replaced regularly, as the key identity is permanently and physically fused into deployed silicon. Any adversary who can obtain the corresponding private key from the vendor’s own IT infrastructure can compromise this scheme. Given the high level of sophistication of contemporary nation-state adversaries in the cyber domain, a targeted attack by a foreign adversary to obtain such key material is likely to be successful eventually.
Ultimately, the essential principle is that an organisation, especially a military organisation, should and must be able to control the cryptographic roots of trust of any product it procures, if it so chooses. While an organisation might choose to allow a vendor to sign firmware or software images, and thereby to trust the vendor’s cryptographic signing identity, it must have the option of independence: a technologically sovereign organisation must be able to maintain its own Public Key Infrastructure (PKI) for endorsing software and firmware. In the absence of this prerogative, hardware remains effectively under the control of its original vendor, even after it is sold, as only the original vendor can mint new firmware releases.
As such, the criteria (1) and (2) for technological sovereignty are extended as follows:
3. A piece of technology must not “recognise its maker”; its manufacturer must have no greater operational control over it, after it is sold, than a “man on the street”.
4. Any cryptographic public key identity (such as a Root of Trust (RoT) for secure boot) embedded in hardware must be changeable by the hardware owner as they require, essentially ensuring that “you can change the locks”.
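The alternative to keyfusing, an owner-changeable root of trust, can be sketched as a toy model, assuming the trusted key fingerprint is held in rewritable rather than fused storage (all names here are illustrative):

```python
import hashlib

class OwnerControlledRoT:
    """A root of trust whose trusted key fingerprint the owner can rotate."""

    def __init__(self, initial_key: bytes):
        # Typically the vendor's key as shipped; rewritable, not fused.
        self._digest = hashlib.sha256(initial_key).digest()

    def accepts(self, presented_key: bytes) -> bool:
        return hashlib.sha256(presented_key).digest() == self._digest

    def change_locks(self, new_key: bytes) -> None:
        # The owner rotates the trusted key identity; the old (vendor)
        # key no longer passes, "changing the locks" on the device.
        self._digest = hashlib.sha256(new_key).digest()
```

After `change_locks`, the vendor’s key no longer endorses firmware and the owner’s own PKI takes over, which also restores key agility: a compromised signing key can simply be rotated out.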
Conclusions
A lack of technological sovereignty, amplified by a historical lack of understanding of the depth and extent of the problem among stakeholders, poses a critical national security threat to the United States. Without technological sovereignty, there is no basis on which to ascertain that defence technologies, equipment, and critical civilian infrastructure embody the interests of the United States, free of conflicting loyalty to any vendor or any foreign power. The United States cannot be defended without confidence in the total and singular loyalty of the information technology, both hardware and software, which enables its military and civilian infrastructures to function. Ultimately, all procurement must be viewed through a lens of technological sovereignty, alert to the hazards in the cyber domain of failing to ascertain whether equipment can be wholly and fully controlled by its purchaser. The risks posed by a lack of technological sovereignty have moved from the realm of the science fiction film, in which all instances of a technology are suddenly remotely disabled by their creator, to contemporary reality, as the lack of stakeholder awareness of the issue has led to a free-for-all in which interested parties seek to create “interested equipment”.
Software goes unnoticed by many because it is invisible to the human eye. It is impossible to tell whether your smart TV is spying on you simply by casting your eye over it and examining the casing. It does not emit a telltale ‘bleep’ like a 1960s Hollywood caricature of a computer when it sends to its manufacturer a report of how long you spent watching 24 yesterday. When technology you own works against you, you are unlikely to know about it — until it is too late, and your Jeep mysteriously won’t start.
It is inevitable that, 100 years from now, the concept of technological sovereignty will have permeated far more deeply into the thinking of decision-makers and stakeholders. The first question that will cross the mind of a lawmaker considering the purchase of a fighter jet or drone will not be “what can it do?” but “who controls it, and who can turn it off?”
The question is not whether this shift in thinking will occur, but whether it will be the product of lessons learned the hard way, or as a result of proactive foresight into the major risks involved and what is at stake.