Sovereign AI and the New Data Divide
- Kanika Radhakrishnan

- Nov 12

I will be attending the World Economic Forum in Davos this January, where Firstboard.io is convening conversations at the intersection of governance, innovation, and inclusive leadership. As part of this article series leading up to the Annual Meeting, I’m exploring the questions and tensions boards must navigate as artificial intelligence reshapes the global economy.
This week: Sovereign AI and the emerging data divide.
AI Is Borderless—But the Future of Data Isn’t
Artificial intelligence runs on global data, but countries are drawing lines.
From the EU’s AI Act to China’s cross-border data restrictions, governments are asserting their stake in how AI develops. “Sovereign AI” is no longer a speculative phrase—it’s a policy direction. And the implications for businesses and boards are profound.
Who owns the data that fuels generative tools? How do national AI frameworks affect international partnerships, investment, and innovation? What does “trust” mean when models are trained on global inputs but regulated by local values?
Boards that operate across jurisdictions must prepare for a future where compliance, competitiveness, and conscience may not always align.
The New Divide: Not Just Data, But Direction
This isn’t just about data sovereignty—it’s about digital ideology.
Some governments see AI as a public good. Others see it as a competitive weapon. Some promote open-source innovation. Others guard models like state secrets. These diverging philosophies are shaping everything from trade agreements to research access, from cloud strategy to IP law.
The result? A new kind of digital divide—one rooted in governance, not infrastructure. And companies navigating this terrain need legal leadership that understands both global policy and product risk.
General Counsels and board members must begin asking: Where is our data coming from? Who governs the models we rely on? What are our obligations in AI development—and to whom?
Why Boards Must Lead, Not Follow
Regulators are moving fast. But businesses can’t afford to wait for the rules to settle.
Board-level leadership is essential to anticipate AI-related risk, guide responsible innovation, and ensure that governance keeps pace with transformation. The goal isn’t to slow down innovation—it’s to scale it sustainably, ethically, and in alignment with local laws and global expectations.
That means asking tough questions about data access, model transparency, algorithmic bias, and enforcement authority. It also means pushing for international collaboration and cross-sector dialogue—because no single company or country can solve this alone.
As a legal strategist working across borders, I’ve seen firsthand how inconsistent AI frameworks are already complicating operations, partnerships, and product launches. But I’ve also seen how forward-thinking boards can turn this moment into an advantage—by shaping policy through participation rather than just compliance.
Firstboard.io Members Attending Davos 2026
Rita Scroggin, Subha Tatavarti, Laura Langdon, Avital Arora, Kanika Radhakrishnan, Devi Jarschel, Kshama Swamy, Paramita Bhattacharya, Prathiba, David Sullivan, Rohinee (Ro) Mohindroo, Ekta Sahasi, Shuchi Rana, Malina Johnson, Betsabe Botaitis, Mandy Dhaliwal, Tami Rosen, Joe Sullivan, Richard Slaby
Join us at Davos, where Firstboard.io is hosting a private gathering of global leaders at the intersection of board governance, emerging tech, and inclusive innovation.
A Final Thought for the Boardroom
The AI revolution will not wait for regulatory alignment. But the right board leadership can help bridge the gap. Now is the moment to rethink what responsible innovation looks like—across borders, sectors, and ideologies.
Done right, board governance can do more than manage AI risk—it can shape an AI future worth trusting.


