The Datasphere Initiative teamed up with the Data for Development Network (D4D.net) on a webinar series unpacking the complex relationship between data governance and Artificial Intelligence (AI). In the first discussion, Carolina Rossini, Director of Research and Partnerships at the Datasphere Initiative, joined Rachel Adams, Principal Investigator of the Global Index on Responsible AI; Emma Ruttkamp-Bloem, Professor and AI Ethics Researcher, University of Pretoria; and Aubra Anthony, Senior Fellow, Technology & International Affairs, Carnegie Endowment for International Peace, to discuss the upcoming Global Index on Responsible AI and the crucial role data plays in AI development and use. This blog captures the discussion’s key takeaways and what can be learned moving forward.
There are multiple understandings of “Responsible AI”
Responsible AI means different things in different parts of the world, as this mapping by the Berkman Klein Center shows. The Global Index on Responsible AI is exploring these interpretations and measuring progress in the areas of human rights, responsible governance, national capacities, policy, and technical environments. The tool not only provides an outlook on the current status of efforts towards responsible AI; its findings can also lay out potential policy roadmaps for mitigating risks and leveraging opportunities.
“Responsible AI means different things in different parts of the world; broadly speaking, we are talking about the governance and implementation of AI,” said Rachel Adams, Principal Investigator of the Global Index on Responsible AI.
Countries are at different levels of readiness to realize responsible governance
While the Global Index on Responsible AI fills an important gap in understanding how progress is being made towards responsible AI, it must account for countries at different stages of readiness to realize responsible AI governance. The index is useful for considering the ecosystem in which AI is governed and where national or regional approaches interact.
“It is important to interpret what the Global Index on Responsible AI tells us about the ecosystem within which AI is designed, developed, and governed,” said Emma Ruttkamp-Bloem, Professor and AI Ethics Researcher, University of Pretoria.
While important, principles may not be enough to address all challenges
Research on the need to address potential AI risks and harms has led to the development of principles such as the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence and the Organisation for Economic Co-operation and Development (OECD) AI Principles. While these are important steps toward building common understandings of AI and its attendant responsibilities, principles alone may not be sufficient to address potential harms if they are not translated into meaningful action.
“There is a need to be cautious about AI’s ability to infringe on human rights. Principles are really important to protect rights,” said Aubra Anthony, Senior Fellow, Technology & International Affairs, Carnegie Endowment for International Peace.
Good data governance policies are an essential foundation and enabler
Discussions of AI’s benefits often rest on the assumption that data is plentiful, when in fact many countries face data poverty. In the case of AI, effective data governance goes beyond privacy policies, since data availability (in both quantity and quality) is a core factor in ensuring the datasets used are representative. Data-sharing frameworks are therefore important in supporting the equitable access and distribution of data. The D4D Global Data Barometer is a helpful tool for tracking the availability of data across countries.
“We have an increasing understanding of how data poverty affects populations around the world, especially in the global south. At the same time, many vulnerable populations are putting forward principles on how their data should be accessed and used. A concerted effort to make the invisible visible is crucial to responsibly unlock the value of data for all, and ensure the outputs of AI promote equity,” said Carolina Rossini, Director of Research and Partnerships, Datasphere Initiative.
Structural issues impact who can participate in the AI economy
Data access plays a role in global fairness and equity in AI. The biggest datasets are produced on the internet, which is problematic when internet use is deeply divided: not only is access to digital infrastructure and smartphones far from universal, but specific communities are disproportionately affected by structural issues such as the gender divide. Equity, access, and structural challenges are often embedded in the data available to AI systems, and data scarcity can lead to underrepresented or entirely unrepresented datasets.
“There is a need for data-sharing frameworks, standardization, and categorization to address structural issues related to AI,” said Rachel Adams, Principal Investigator of the Global Index on Responsible AI.
Addressing data bias and discrimination starts with representation
The representativeness of data, and the inclusion of concerned groups in efforts to address data bias, is crucial to ensure AI technologies and their outputs reflect the needs of the groups they are intended to serve. Conscious efforts are needed to understand how the unintended consequences of some policies can impact responsible AI. For example, data minimization tools intended to preserve privacy or limit the exploitation of vulnerable groups’ data can nonetheless lead to limited datasets and profiling.
To identify and address the data governance challenges that arise within AI, the Data for Development Network (D4D.net) and the Datasphere Initiative are hosting a year-long dialogue and consultation via a webinar series that seeks to unpack regional experiences and shine a light on the trends, challenges, and opportunities coming out of the Global South, strengthening exchanges between the Global South and the Global North. The webinar series will bring together actors from governments, the private sector, civil society, the technical community, and academia to advance thinking and discourse on the need for effective and holistic data governance to ensure the responsible development of AI and the use of data for the public good. The next webinar will be announced in the coming weeks.
Are you working on data governance and responsible AI challenges?