
Taking a Responsible Approach to AI

In the past few months, ChatGPT has exploded into the public consciousness, driving the AI hype cycle into overdrive. As a range of tech titans variously applaud and bemoan the impacts of generative AI, it seems that we are now at a critical inflection point for AI.

As Microsoft’s Vice Chair and President, Brad Smith, said recently:

“AI represents the most consequential technology advance of our lifetime…Like no technology before it, these AI advances augment humanity’s ability to think, reason, learn and express ourselves. In effect, the industrial revolution is now coming to knowledge work. And knowledge work is fundamental to everything.”

But does all this hype and hysteria obscure genuine concerns around non-generative AI? After all, there’s a lot more to AI than just ChatGPT and its LLM cousins. The issues raised by AI more generally deserve our attention too.

What are the risks?

The development and deployment of AI presents numerous risks, including bias and discrimination; lack of transparency, explainability and accountability; over-reliance by humans; job displacement; misinformation; and privacy concerns. Some examples:

·   Algorithms used in criminal justice systems across the United States to predict the likelihood of reoffending were found to be biased against black people.

·   An AI-powered recruitment tool was ditched by Amazon after it was found to penalise female job candidates because it was trained on data from successful applicants over the past 10 years – most of whom were male.

·   An algorithm used by the Dutch tax authorities to help identify childcare benefit fraud led to more than 20,000 families being wrongly accused, several victims taking their own lives and more than a thousand children being taken into foster care. The Dutch government resigned as a result.

What is the regulatory position?

Existing privacy, human rights and discrimination laws all apply to AI, but there are growing demands globally for regulation that addresses its specific risks.

The EU’s “AI Act” – expected to come into force next year – takes a risk-based approach: it will prohibit AI systems posing an “unacceptable risk” and impose obligations on the development and use of “high-risk” AI systems. Just like the EU’s General Data Protection Regulation (GDPR), the AI Act will have extra-territorial effect. But it is projected to top the GDPR’s already massive sanctions regime, with those found in breach facing fines of up to the greater of EUR 30 million or 6% of global annual turnover.

In a similar vein, China recently announced draft AI regulations to encourage the adoption of safe and trusted software, tools and data resources. These include requiring content produced using generative AI to “reflect social values” and not “advocate country separation [sorry, Tibet!], racial hatred, terrorism, violence and pornography”. The regulator has powers to impose a wide range of sanctions, from financial fines and suspension of generative AI services through to criminal liability.

Various federal bills aiming to regulate AI are in play in the US, but it remains to be seen whether any of them will be enacted. Given the years-long struggle to enact federal privacy laws and the extensive lobbying by Big Tech, federal AI regulation seems unlikely any time soon.

Meanwhile, European privacy regulators have not been resting on their laurels. Italy’s privacy regulator recently imposed a temporary ban on ChatGPT over alleged GDPR violations, while the European Data Protection Board announced it will convene a taskforce “to foster cooperation and to exchange information on possible enforcement actions.”

No doubt spurred on by the Future of Life Institute’s controversial open letter urging a six-month “pause” on the development of advanced AI, the European Parliament has called for international collaboration and political action by world leaders to identify methods of controlling “very powerful” forms of AI such as generative AI.

Meanwhile in New Zealand, the Office of the Privacy Commissioner is currently exploring how best to regulate the use of biometrics, including facial recognition.

In short, the days of unregulated “wild west” AI may soon be over.

The solution – Responsible AI

Irrespective of whether AI regulation is in place, organisations developing and using AI tools need to focus on maintaining customer and public trust if they are to take full advantage of the benefits of AI.

A “Responsible AI” approach endeavours to identify and mitigate potential AI harms to help maintain that trust. Alternatively referred to as “AI ethics”, Responsible AI focuses on taking responsibility for AI outputs rather than more subjective and amorphous notions of what is “ethical”.

Every organisation is different, so each Responsible AI programme must be tailored to its specific business – and potentially regulatory – requirements. A typical Responsible AI programme involves:

·   Executive engagement and support – senior level accountability and “tone from the top” will drive your organisation’s AI culture

·   A clear strategy on how AI will be used and why

·   Appropriate AI governance, including clearly defined roles, responsibilities and risk tolerances, as well as risk management structures, policies and processes

·   A tailored set of ethical or Responsible AI principles – the AI Forum’s own Trustworthy AI in Aotearoa principles are a great starting point

·   Robust ways to operationalise those principles, such as the use of Algorithmic Impact Assessments

·   A transparent approach to the use of data and AI, including explanations of AI model outcomes

·   Training and awareness raising

·   Good data governance and privacy practices as a critical underpinning.

More tools are emerging to help organisations develop suitable AI governance and risk management approaches. For example, the US National Institute of Standards and Technology (NIST) recently released a well-regarded AI Risk Management Framework, and Microsoft has shared its Responsible AI Standard v2.

As generative AI takes the world by storm and regulators start to sharpen their pencils, now is the time for New Zealand organisations exploring AI to take an “ethics by design” approach to anticipate and address potential risks. Building Responsible AI into the design phase is the best way to manage risks further along the AI lifecycle.

Ultimately, Responsible AI is not just a “nice to have” – increasingly it’s a core foundation of organisations’ social licence. Whether your organisation is only just starting to think about using AI, or is already forging ahead, it’s never too late to adopt a responsible, realistic and safe approach to this powerful technology.


Frith Tweedie is a Principal Consultant at Simply Privacy and has a deep interest in AI and its impacts on society. She has served on the Executive Committee of the AI Forum since 2019, was part of the governance group for the New Zealand Algorithm Hub and is currently on the advisory panel for the Women in AI Awards NZ & Pacific 2023 and the International Association of Privacy Professionals. Her AI-related experience includes drafting the AI Forum’s Trustworthy AI in Aotearoa AI principles, contributing to the national AI ethics framework for the Government of Malta and developing Auror’s Responsible AI Framework. She is currently helping Stats NZ operationalise the Algorithm Charter.

The AI Forum brings together New Zealand’s artificial intelligence community, working to harness the power of AI technologies to enable a prosperous, inclusive and thriving future New Zealand.