While evangelists predict human-level intelligence by 2029, doomsayers warn that AI will bring about the end of the human race. The truth likely lies somewhere between these extremes. However things pan out, AI is likely to have a big impact on our world.
AI is advancing at a rapid pace, and current achievements include agile robots, autonomous vehicles, open-domain question answering, domain-general action learning, visual object recognition, scene description and machine translation. With public awareness of AI growing, economists, lawyers and politicians are also engaged in the dialogue. So, if AI technologies have the potential to dramatically change our society, how should we respond?
Earlier this month, as part of the Royal Society Te Apārangi 150th Anniversary Regional Lecture series, University of Otago’s Associate Professor Alistair Knott presented in Wanaka. An audience of nearly 200 attended, eager to explore how society can prepare for advances in AI.
In assessing the key impact AI will have in the immediate future, Alistair focussed on employment. Citing Frey and Osborne’s influential 2013 study on the future of employment, he discussed the potential for automation. The main non-automatable skills involve perception, creative intelligence and social intelligence. Conversely, jobs with a high probability of automation include telemarketers, accountants, retail workers, technical writers, real estate agents, word processors, machinists and commercial pilots. There is general agreement that most jobs have some tasks that are easy to automate. Ultimately, this may lead to restructuring jobs and allocating some tasks to computers.
In discussing aspects of AI that may require regulation, Alistair highlighted employment, machine bias, transparency, accountability and ethics. Questions include: Are there certain jobs we don’t want computers to do? How do we review social security for those displaced by AI? Should we impose a tax on AI systems? Should we consider a universal basic income? Considering machine bias, additional questions emerge: How can we test AI systems for bias? Can we legislate against bias in AI systems? Can we use AI systems to reduce or eliminate bias?
AI systems use machine learning techniques to make decisions, drawing on large databases and complex computations. With regard to transparency, should some AI systems be required to explain their decisions? Accountability is also an issue. For example, if an autonomous vehicle has an accident, who is to blame: the passenger, the car company or the AI system itself? And given that AI systems are designed to behave flexibly in a wide range of circumstances, how can a company guarantee their performance? Ethics remains an obvious key consideration. To ensure an AI system doesn’t do something inappropriate, what principles do we provide to regulate its behaviour? Should AI be taught in the same way that we teach children? How do we instil human values in AI?
For a coordinated response in preparation for the arrival of AI, Alistair said interdisciplinary structures are required, bringing together AI researchers, AI companies, economists, lawyers, social scientists and ethicists. Alistair also advocates providing the right skills for the next generation. For example, computer science students need an understanding of the social consequences of AI and how technologies affect communities, he said. Ethics and social sciences should be required coursework. Likewise, he suggests political and legal training courses should provide opportunities for students to learn about AI, so that future administrators gain a comprehensive understanding of it and can make informed regulation decisions.
Alistair Knott is an Associate Professor in the Computer Science department at the University of Otago. He is an expert on human language modelling and has featured at TEDx Auckland and Athens. Throughout his career, Alistair has been interested in the ethical and social implications of AI. Last year, he co-founded the AI and Society discussion group at the University of Otago, together with colleagues Colin Gavaghan from the Faculty of Law and James Maclaurin from the Department of Philosophy. With these colleagues he is also coordinating the AI and Law in New Zealand project. Learn more about organisations studying AI and ethics/society and further reading here.
Want to know more? Sign up for the AIFNZ monthly update; it’s free and will take less than a minute!