How should government and society come together to address the challenge of regulating artificial intelligence? What approaches and tools will promote innovation, protect society from harm and build public trust in AI?
Artificial intelligence (AI) is a key driver of the Fourth Industrial Revolution. Algorithms are already being applied to improve predictions, optimise systems and drive productivity, but early experience shows that AI can create some challenges.
In 2019, the New Zealand government sent a Fellow to the World Economic Forum’s Centre for the Fourth Industrial Revolution in San Francisco to work with a community of experts on ways that governments might think about regulating AI. The project, “Reimagining Regulation for the Age of AI”, is co-sponsored by New Zealand and has brought together people from throughout society to co‑design innovative frameworks for governing AI. Underpinning the project is the view that trust is necessary for AI’s potential to be fully realised. Having appropriate safeguards in place will increase consumer and citizen confidence and provide opportunities for global mobility.
This emphasis on trust and ethical principles is a key component of the AI Strategy for Aotearoa New Zealand, currently being developed by the New Zealand government in partnership with the AI Forum as part of the Digital Technologies Industry Transformation Plan. One of the Strategy’s six cornerstones is Human-Centred and Trustworthy AI, which looks at the ethical and regulatory frameworks and standards needed to ensure that AI development and adoption is safe and that appropriate safeguards are in place to mitigate risks.
The “Reimagining Regulation for the Age of AI” project recognises that to accelerate the benefits of AI and mitigate its risks, governments need to design suitable regulatory frameworks, while acknowledging that this is a complex endeavour involving diverse views. The key takeaway from the project work so far is that any form of regulation should be planned in a collaborative and open way, encouraging innovation, minimising risks and building trust. Regulation needs to be reimagined as a co‑designed and flexible system of levers, tools and incentives. This fits well with the AI Strategy for Aotearoa New Zealand, which has committed to collaborative partnership with our communities, helping develop their understanding of AI and ensuring Māori values, governance and tikanga are part of our AI ecosystem.
As part of the WEF project, members of the project community have produced a series of blog posts on two specific questions, with the aim of helping governments that are grappling with ways to introduce AI standards, oversight or regulation. The blogs are organised around two topics:
- How can governments ensure the responsible use of AI?
- How can we ensure that AI systems currently deployed effectively comply with existing regulation?
The blogs bring together perspectives from industry, NGOs, academia and government, providing avenues for discussion and further thought.
We are happy to host these blogs as part of our work with the World Economic Forum and as part of our engagement process for the AI Strategy of Aotearoa New Zealand. We hope that they will provide a forum for discussion on how we can make our AI environment in New Zealand safer for society and our businesses, as well as being valuable resources to governments from all parts of the world.