As artificial intelligence reshapes industries and economies, the race to regulate it intensifies. Digital policy debates increasingly center on key questions about AI: How should it be regulated? How can it be made responsible? This month, the EU added another layer to the conversation: how to regulate AI while staying competitive.
The challenge with many of these policy questions is that there are no settled answers and no one-size-fits-all approach. Depending on the sector, the region, the country, the community, and (as we have shared previously) the data itself, a different answer may emerge.
At the Datasphere Initiative, we believe that sandboxes could help regulators craft effective regulatory frameworks to harness AI's opportunities and address its risks. This is not because they are a magic tool that can work wonders on any policy question or fast-emerging technology: sandboxes are complex to set up, resource-intensive, and require regulatory skills and capacity to design and participate in. Yet, while there are barriers to overcome, sandboxes offer safe, experimental, and collaborative methodologies for meaningful and intentional regulatory innovation, fit for the policymaking complexities of AI.
In this blog post, we share some reflections on the potential of sandboxes for AI and how the Global Sandboxes Forum plans to explore this further in the months to come:
AI development is speeding up while regulatory frameworks lag behind. Sandboxes can bridge the gap between rapid technological innovation and the slower pace of regulatory evolution, facilitating innovation while promoting safety and compliance. They can build trust among regulators within a country and across borders. They can also build public trust: consumers gain greater assurance that novel practices emerging from a sandbox have been subjected to regulatory scrutiny, and regulators can identify practices that are not compliant and intervene before they proliferate in the marketplace.
The integration of AI into public services, law enforcement, and the workforce raises numerous social issues and concerns. The use of AI in law enforcement, for instance, can enhance efficiency but also risks exacerbating biases and undermining civil liberties if not properly managed. Similarly, AI's role in public services could lead to disparities in access and quality, particularly for marginalized communities. Sandboxes allow stakeholders to explore innovative data practices and uses that do not fit neatly within traditional regulatory frameworks but could help address the pressing societal issues emerging from AI use. Operational sandboxes enable stakeholders to access pooled data resources and explore new uses of data, while regulatory sandboxes can help clarify regulatory parameters and improve regulators' ability to respond to sectoral needs. This flexibility is crucial in a landscape where data already flows across borders and data-driven technologies evolve rapidly; it helps ensure that data governance keeps pace with technological advancements and societal needs. Sandboxes can also support AI development by facilitating rights-respecting data sharing and access, for example by incubating Privacy-Enhancing Technologies (PETs).
Ethical concerns about bias and fairness in AI systems persist. By enabling developers and regulators to collaboratively test new systems and datasets, sandboxes help ensure AI technologies are safe and aligned with societal values before widespread deployment. This collaborative approach is essential: no single regulatory body, perspective, or set of cultural values can effectively govern a technology that transcends borders and sectors.
The EU's Artificial Intelligence Act envisages coordinated AI 'regulatory sandboxes' to foster innovation in AI across the EU. Norway and Singapore are among the countries testing sandboxes to tackle critical AI regulatory challenges. The Norwegian government established its sandbox as a proactive measure to address the significant challenges around AI's use of personal data, providing a controlled environment in which to develop compliant and ethical AI solutions. Singapore's Generative AI Evaluation Sandbox, spearheaded by the Infocomm Media Development Authority (IMDA) and established in 2023 in partnership with the AI Verify Foundation, sought to forge a common standard for GenAI evaluations that not only mitigates risks but also fosters safe adoption, enhancing assessment capabilities across the AI ecosystem.
But while sandboxes hold promise for AI, governments and participating organizations such as SMEs need the skills and resources to design and take part in them. Sandboxes often require significant investment of time and expertise to manage complex technologies and to ensure compliance under regulatory regimes whose application often lacks clarity (hence the technology's inclusion in the sandbox in the first place). Another challenge is that sandboxes are typically developed in silos: one regulatory body may craft a framework from scratch without leveraging lessons learned from implementations in other jurisdictions.
Addressing these challenges effectively requires a collective, coordinated effort that transcends individual organizations and jurisdictions to collect evidence, build capacity, and connect experiences and expertise. The Global Sandboxes Forum aims to facilitate this collaborative approach, offering a platform for stakeholders worldwide to share experiences, strategies, and regulatory practices. Such a forum can foster global dialogue and capacity building, promote the harmonization of sandbox frameworks, and mitigate resource burdens by pooling expertise and effort. Ultimately, we are working to enhance the scalability and adaptability of sandboxes, making them more effective tools for managing the complexities of AI regulation across sectors and regions.
To build this effort, the Datasphere Initiative is embarking on some foundational outputs in the next few months:
- Setting up an experts group on sandboxes and AI
- Collecting case studies and examples of sandboxes for AI
- Releasing a policy brief on sandboxes and AI
We welcome interested individuals and organizations to join us on this journey to think outside the box!