Sandboxes and AI innovation in Europe


Europe’s AI innovation pipeline is a topic of much discussion. Sandboxes are emerging as a tool both to test and iterate new technologies and to unpack complex emerging regulatory environments. This blog explores how AI sandboxes are being used across Europe and identifies some open questions as their role in the EU AI Act takes shape.

Sandboxes hold potential to support AI governance 

Since its inception in 2022, the Datasphere Initiative has been working on sandboxes, which it defines as safe spaces to test new technologies and practices against regulatory frameworks, or to experiment with innovative uses and means of governing data. Initially used in FinTech, sandboxes are now being applied to broader economic sectors—from transportation to telecommunications to AI. They can be operational, regulatory, or hybrid.

See a five-minute intro to sandboxes here

On the occasion of France’s AI Action Summit in February 2025, the Datasphere Initiative released a report, Sandboxes for AI: Tools for a new frontier, mapping AI and data sandboxes around the world and presenting a guide to when and how to set up an AI sandbox. The report highlights the significant potential sandboxes hold for AI governance. Traditional regulatory approaches often fail to keep pace with AI advancements, leading to either regulatory gaps or overly restrictive policies that hinder innovation. Sandboxes offer an alternative: controlled environments for iterative learning, stakeholder engagement, and real-time regulatory experimentation.

Sandboxes are becoming a continent-wide tool for AI innovation and testing

One of the ways that the EU’s Artificial Intelligence Act envisages fostering AI innovation across the EU is through setting up coordinated AI ‘regulatory sandboxes’. Article 57 of the EU AI Act details how these sandboxes are intended to provide a controlled testing environment in which innovators and regulators will work together to identify risks and ensure compliance with the EU AI Act and potentially other EU regulation.

While some countries, like France and Sweden, have expertise and experience in setting up sandboxes in the data protection field, other member states have never designed or participated in one. The European Commission is funding EUSAiR, an initiative set up to support EU member states and start-ups in their sandbox journey. The organization intends to foster multi-disciplinary engagement on AI sandboxes, support independent analysis of sandboxes, and involve European digital infrastructures (such as those in AI Factories, EDIHs, and TEFs).

Hopping across the Channel, the United Kingdom’s Financial Conduct Authority designed one of the first regulatory sandboxes in 2015. Since then, numerous sandbox examples have emerged from the UK, in the financial sector and beyond. For example, the MHRA AI Airlock is a regulatory sandbox for AI as a Medical Device (AIaMD).

It is therefore no surprise that the 2025 UK AI Opportunities Action Plan pledges to fund regulators to enhance their capabilities, allowing them to better manage and support AI growth in priority sectors and to establish pro-innovation initiatives, including regulatory sandboxes. The plan highlights that these sandboxes will help companies safely test AI-driven products in real-world environments.

Countries in the European Economic Area also have experience and interest in sandbox initiatives. The Norwegian Data Protection Authority, Datatilsynet, established a regulatory sandbox in 2020 under the National Strategy for Artificial Intelligence. Its initial goal was to stimulate “privacy-friendly” innovation in AI. Now in its fifth iteration, the sandbox supports projects addressing not only AI but also regulatory uncertainties in complex data sharing, the EU General Data Protection Regulation’s provisions on automated decision-making, and secondary data uses, among other topics. In Switzerland, local sandboxes are addressing community-specific needs and contexts, like Zurich’s AI sandbox, which assesses and implements AI projects while granting participants access to new data sources.

But AI sandboxes come with some challenges

Sandboxes vary by country and regulatory authority, and they are very diverse in scope and purpose. They have typically been designed to address specific, narrowly defined problems, limiting their ability to handle cross-sectoral issues or adapt to rapidly evolving AI technologies that span multiple regulatory domains. The experimental nature of sandboxes also means that they are typically not suited for large-scale deployment without significant modifications. These challenges underscore the need for careful planning, robust design, and clear objectives when designing and implementing sandboxes.

The use of sandboxes in the EU AI Act is moving from theory to practice

Countries in the European Union have until August 2026 to start a sandbox in the context of the EU AI Act, which promotes the use of sandboxes as key to its compliance mechanism. Some countries, like the Netherlands, have already started to communicate their plans. As the EU’s approach to AI sandboxes evolves, below are some considerations and questions:

  1. Cross-border sandboxes

While detailed guidance and work on EU AI sandboxes is in progress, some level of cross-border collaboration is expected to stem from the implementation of the sandbox provision within the EU AI Act. To support AI excellence from the lab to the market, the European Union has set up Testing and Experimentation Facilities (TEFs) for AI. Arguably somewhat similar to “operational sandboxes,” TEFs are expected to contribute to the implementation of the AI Act by supporting regulatory sandboxes, in cooperation with competent national authorities, for supervised testing and experimentation. In the context of TEFs and regulatory sandboxes, some have asked: How should testing of AI innovations across multiple member states be handled? How can lessons learned be transferred from one national testing site to another? How can tests and experiments be easily replicated in different European countries? As for cross-border sandbox efforts, it remains unclear how many countries will be included and what sectors or issues these sandboxes will cover.

  2. Coherence across EU member states

The inherently global nature of AI, characterized by technology inputs and outputs frequently crossing national borders, complicates regulatory oversight by any single national sandbox operator. Additionally, AI cannot be regulated from a single perspective, given the multiplicity of involved regulators, its broad pool of training data, and its interconnected applications. In the case of the EU AI Act, cross-country coordination among sandbox initiatives will be important, as will coordination across sectors like health, finance, and energy. Other questions are emerging. From a business perspective, when a company passes through one sandbox, how will its learnings be transposed to activities in other EU countries? How can the different sandbox processes be harmonized in scope and procedure across the EU? Will AI regulatory sandboxes include scope for testing compliance with other EU legislation, such as the GDPR and, as some have explored, the Cyber Resilience Act? How are national and local sandboxes going to share learnings and coordinate, and which regulators will be most involved?

  3. Timing and scope of sandboxes

Sandboxes can operate at different stages of the AI innovation pipeline. The EU AI Continent Action Plan lists various layers and goals, including computing infrastructure, high-quality data, and the development of AI algorithms and leveraging their adoption in the EU’s strategic sectors. The ways in which sandboxes could support these goals are worth considering. As research by the Datasphere Initiative shows, sandboxes are not only a tool for fostering regulatory compliance; they can also serve operational and hybrid functions. What is more, given the speed of innovation, how will sandboxes be able to test technologies that are already on the market but evolving rapidly, such as AI-connected wearables? What is the capacity of a regulatory sandbox to test large language models? Are sandboxes going to be ongoing and as iterative as the AI products and services themselves?

Looking ahead

Sandboxes are mostly government-to-government or government-to-business affairs, and they often lack input from civil society. While efforts exist to facilitate dialogue among countries on establishing their own sandboxes, there remains a notable lack of cross-country collaboration within sandbox initiatives themselves worldwide.

Research from the Datasphere Initiative has pointed to the lack of information about sandbox experiences, as well as the lack of transparency regarding sandboxes’ results, impact, challenges, and best practices. Many regulators will also need training and new capacities to support their work in the AI era. This underscores the urgent need to promote cross-regional, cross-sectoral, and cross-jurisdictional knowledge sharing to ensure that lessons learned truly enrich sandbox evolution. As European countries look to leverage sandboxes in the AI era, we need to be ready to learn together.


The Datasphere Initiative is a global network of stakeholders fostering a holistic and innovative approach to data governance to build agile frameworks to responsibly unlock the value of data for all.
