by Varun Matlani
Sovereignty: the power that a country has to control its own government; in the context of this article, the power that a firm has to control its knowledge, enterprise value and data.
For the better part of the last decade, “Data Sovereignty” was a matter of geography. It was a concept anchored in the physical world: Where do the servers live? Under whose jurisdiction do the hard drives spin? For a German manufacturer or an Indian bank, sovereignty meant ensuring that customer records were localized, i.e. remained in Frankfurt or Mumbai, protected by local laws.
However, the rise of Generative AI has rendered this definition dangerously obsolete.
Recently, in discussions surrounding the future of cloud and AI at Davos 2026 (most notably Satya Nadella’s address on “Model Sovereignty”), a new reality has emerged. The existential risk to the modern firm is no longer just about where data is stored, but about who owns the intelligence derived from it. As firms rush to achieve data operability—integrating their proprietary archives with massive Large Language Models (LLMs)—they face a paradox: they are gaining efficiency, but they may be losing their soul. The firms implicitly train the models, and then must rent the more perfected model back from Big Tech.
The Law Firm Paradox
To understand this shift, consider the case of a legacy law firm.
This firm’s primary asset is not its posh property or a building, but its tacit knowledge. It is the fifty years of M&A contracts, the specific phrasing of winning litigation arguments, even the formatting and structuring of appeals, and the nuanced deal structures, thought flows and argument tones that senior partners have perfected.
This is the firm’s moat.
In the pursuit of efficiency, the firm uploads this vast repository of documents into a secure enterprise instance of a foundational model like ChatGPT Enterprise. The promise: instant retrieval, automated drafting, hyper-efficiency—and, not to forget, a complete legal and contractual guarantee of data privacy. But note what is actually assured: the fact that X Ltd acquired A Ltd at a particular price will not be surfaced by the model. For the model, however, X Ltd, A Ltd and the price are just placeholders; the tacit knowledge is the way the agreement is structured.
Here lies the sovereignty trap.
When the firm uploads this data, they are confident in their “Data Privacy.” They know the model provider (the hyperscaler) will not leak client names or confidential secrets to the public. But they are failing to account for “Pattern Sovereignty.”
By training or heavily prompting the model with their archives, the firm is effectively teaching the model how to think like them. They are transferring their unique logic, their negotiation strategies, and their distinctive rhetorical style into the neural network of a third-party vendor.
The data remains “confidential,” but the capability has been extracted. The model provider now possesses the mathematical representation of the law firm’s expertise.
Weight Sovereignty: The New Asset Class
This brings us to the core of Satya Nadella’s recent arguments regarding operability. In the AI era, the most valuable asset is no longer the row in the database (the Data); it is the Model Weights (the Intelligence).
Model weights are the numerical parameters that determine how an AI processes information. When a firm uses a generic model, it is renting intelligence. When it feeds its data into that model without retaining ownership of the resulting fine-tuned weights, it is engaging in a one-way transfer of value.
We must distinguish between Explicit Knowledge and Tacit Knowledge:
- Explicit Knowledge is the text in the file. Sovereignty over this is easy to check (access controls, encryption).
- Tacit Knowledge is the wisdom required to write that text.
Historically, tacit knowledge lived in the brains of employees. Today, it is moving into the weights of the model. If a firm does not own the weights where its tacit knowledge now resides, it is no longer sovereign. It has become a “Hollow Firm”—a thin wrapper around an intelligence engine owned by a tech giant.
The Risk of Renting Your Own Brain
The ultimate danger of ignoring this shift is commoditization.
If every law firm, consultancy, and creative agency uploads their data to the same few foundational models, the models will eventually regress to the mean. They will learn the aggregate best practices of the entire industry.
The law firm in our example risks a future where they are essentially “renting their own brain” back from the model provider. The vendor charges them a subscription fee to access the very expertise the firm provided in the first place.
Is the transfer of knowledge to competitors stoppable anyway?
As a child, I often read this quote by Dhirubhai Ambani: “Ideas are no one’s monopoly.” Extending the argument, thought processes are not monopolies either. When a BigLaw firm hires an associate who has spent four or five years at another firm, it is effectively buying out a knowledge commodity of that firm—a transfer of knowledge, to a competitor no less, that happens anyway. The associate also carries the firm’s know-how with him, so one could argue that the vehicle of knowledge transfer has simply changed from associate to model. The difference, however, is that you would need to rent back the model you trained.
Some solutions and the prisoner’s dilemma
Some law firms might think: yes, model sovereignty is real, so we can simply stop or block the usage of AI outright. But here they would face the prisoner’s dilemma.
Whilst such a firm would stop the transfer of its tacit knowledge, it would also lose out on productivity; slowly, as the other firms race ahead, the efficiency they achieve would come to outweigh the value of the abstaining firm’s preserved proprietary knowledge.
Further, the decision would be inconsequential, since such a firm is not the only one: the other law firms would continue to train the model implicitly, and new-generation full-stack AI law firms would benefit from that training as well. The decision of a single law firm therefore has no economic impact on the others.
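The dilemma described above can be sketched as a simple payoff table. The numbers below are hypothetical, chosen only to illustrate why “adopt” ends up as the dominant strategy even though collective abstention would preserve every firm’s moat:

```python
# Illustrative prisoner's-dilemma payoffs for AI adoption by competing firms.
# The payoff values are assumptions made for illustration, not data.

# payoffs[(my_choice, rivals_choice)] = my payoff
# "adopt"   = use a shared foundational model (efficiency gain, knowledge leaks)
# "abstain" = block AI (moat preserved, efficiency lost)
payoffs = {
    ("adopt",   "adopt"):    1,   # everyone efficient, everyone's moat erodes
    ("adopt",   "abstain"):  3,   # efficient AND rivals lag behind
    ("abstain", "adopt"):   -2,   # moat intact, but outcompeted on cost and speed
    ("abstain", "abstain"):  2,   # collectively best: all moats preserved
}

def best_response(rivals_choice):
    """Return the choice that maximizes my payoff, given what rivals do."""
    return max(("adopt", "abstain"), key=lambda c: payoffs[(c, rivals_choice)])

# "adopt" is the best response whatever rivals do -> a dominant strategy,
# even though (abstain, abstain) beats (adopt, adopt) for every firm.
for rivals in ("adopt", "abstain"):
    print(rivals, "->", best_response(rivals))
# → adopt -> adopt
# → abstain -> adopt
```

Whatever the rivals choose, adopting yields the higher individual payoff, which is exactly why a single firm’s abstention cannot hold the line.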
However, to balance the equation more broadly, law firms would need to “own” (not merely control) the AI model. The only way out as of now is to rent a GPU (from a service provider like Runpod) and run an open-source model—one that may reach perhaps 70% of the quality of Opus 4.5, but where the fine-tuned, distilled model is owned outright by the firm.
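As a minimal sketch of what “owning the model” looks like in practice: many self-hosting servers (vLLM, for instance) expose an OpenAI-compatible `/v1/chat/completions` endpoint, so the firm’s tooling talks to its own rented GPU rather than a vendor. The endpoint URL, model name, and prompt below are illustrative placeholders, not a reference to any real deployment:

```python
import json

# Hypothetical endpoint on a GPU box the firm rents and controls.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3-70b-instruct") -> dict:
    """Assemble the OpenAI-style JSON body; the weights never leave
    infrastructure the firm controls."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # conservative sampling for drafting work
    }

body = build_request("Draft a confidentiality clause for an M&A term sheet.")
print(json.dumps(body, indent=2))
# Sending this is a single HTTP POST to ENDPOINT (e.g. via urllib or requests);
# crucially, neither the prompts nor any fine-tuning data leave the firm's instance.
```

The point of the sketch is architectural, not the specific library: every prompt, and every gradient of fine-tuning, stays on hardware the firm rents, so the resulting weights belong to it.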
Conclusion: Redefining Sovereignty
True sovereignty for the modern firm requires a strategy beyond mere data residency. It requires Model Sovereignty.
This means firms must stop viewing AI adoption as merely “buying a software tool” and start viewing it as “building a proprietary mind.” It requires negotiating for the ownership of fine-tuned model weights. It requires Private Cloud environments where the “learning” that happens on the data stays with the firm, not the vendor.
As Nadella hinted, data is no longer just something you “check” in a compliance audit. It is fuel. And if you pour your unique, high-octane fuel into an engine you do not own, you have no control over where the vehicle goes.