FAIC AI Regulatory Round Table Event Summary
Cognitivo Team
FAIC AI Regulation and Governance Event Wrap-up
In July 2024, we gathered approximately 50 financial services industry (FSI) leaders and researchers to discuss the opportunities and challenges of upcoming AI regulation and governance in Australia.
The event opened with Toby Walsh, Chief Scientist at UNSW.ai, reflecting that "we are in the biggest gold rush in human history." At OpenAI, a $100m investment in training AI produced a company worth $100b. ChatGPT has democratised AI and made it far more accessible, but while the technology is moving fast, institutions are adapting slowly.
The opening set the tone for the opportunities and challenges the financial services industry will face with AI adoption and management in Australia. Toby commented that with the passing of the EU AI Act, it became clear that AI can in fact be regulated.
Raymond Sun, Lawyer at Herbert Smith Freehills, then gave a summary of global AI regulation and Australia's position at a crossroads in deciding which regulatory model to adopt. The spectrum runs from countries such as the US and China, which have adopted narrow AI laws governing specific applications of AI, to the EU, which has adopted a broad risk-based approach. These approaches differ in whether a jurisdiction chooses to regulate the technology of AI or its impact.

Dr. Ali Akbari, Director of AI Practice at the Gradient Institute, then gave an overview of resources available on AI management best practices, the growing body of work from the International Organization for Standardization (ISO), and the contribution towards the voluntary AI safety standard (expected to be released shortly by the National AI Centre). To date, the national framework for the assurance of AI published in June 2024, which defines mandatory guardrails, applies only to government and public sector organisations.
Dr Akbari also spoke of the challenges lawyers face in bridging towards technology, and technologists towards the law, and the support that the 31 ISO standards published since 2018 can provide. Beyond ISO, there are also the IEEE 7000 series and various other standards published by OECD.ai that can support AI governance.
Pre-break presentations were wrapped up by Stuart Banyard, Senior Product Lead in Artificial Intelligence at CSIRO's Data61, who spoke on "Building Trust in the fast-evolving world of GenAI." Stuart started with a quote from The Atlas of AI by Kate Crawford: "AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labour, infrastructures, logistics, histories, and classifications." Stuart discussed the importance of building trust into the design of AI systems and the wide range of techniques and patterns designers are confronted with. Amongst the various challenges highlighted, one in particular is the engineering gap between current principles, standards and frameworks affecting AI and the models themselves: today, principles and standards do not translate directly into engineering practices, so novel Software Engineering for Responsible AI techniques are required.
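To make that gap concrete, the minimal sketch below shows one way a high-level fairness principle might be encoded as an automated check in a model release pipeline. The demographic-parity metric, the 0.10 threshold and the toy data are illustrative assumptions only, not a guardrail proposed at the event.

```python
# Hypothetical sketch: turning the principle "outcomes should not differ
# materially across demographic groups" into an automated engineering check.
# The 0.10 threshold and the toy data below are illustrative assumptions.

def demographic_parity_difference(outcomes, groups):
    """Absolute difference in positive-outcome rates between groups."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

def test_model_outputs_meet_fairness_guardrail():
    # Hypothetical model decisions (1 = approved) and group labels.
    outcomes = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    gap = demographic_parity_difference(outcomes, groups)
    assert gap <= 0.10, f"Fairness guardrail breached: parity gap {gap:.2f}"

if __name__ == "__main__":
    test_model_outputs_meet_fairness_guardrail()
    print("Fairness guardrail check passed.")
```

A check like this could sit alongside ordinary unit tests in a release pipeline, which is one way principles could start to look like engineering practice.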

The presentations were followed by an open discussion amongst participants covering topics relating to AI explainability, transparency, accountability and liability.
Participants discussed explainability at length and questioned whether existing approaches satisfy these requirements. It was pointed out that standards such as "ISO/IEC CD TS 6254 Information technology - Artificial intelligence - Objectives and approaches for explainability and interpretability of ML models" provide some guidance; however, such standards are not routinely known or used by most organisations. This discussion further affirms the need to bridge the increasingly numerous standards into AI software development practices.
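As one concrete illustration of the kind of technique such guidance surveys, the sketch below computes permutation feature importance for a toy classifier. The synthetic data, model choice and parameters are assumptions for illustration; this is only one of many explainability approaches and is not an approach endorsed at the event.

```python
# Minimal sketch of a common post-hoc explainability technique:
# permutation feature importance. All data and parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for, e.g., a credit-decision dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Importance = drop in score when a feature's values are shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```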
Further discussion covered end users' access to remedies, for example whether a user has access to evidence that a particular AI-related breach caused a particular loss. There are additional complexities relating to the exposure of IP and trade secrets embodied within AI systems.
The event rounded off with a number of interviews with FAIC stakeholders; here are the highlights from the event:

https://www.youtube.com/watch?v=fqynbSPxiFk
Through the event, the FAIC and participants identified a number of gaps in industry's ability to engineer safe and responsible AI systems, particularly in the early stages of requirements and design.
As a result, further actions will be taken with industry partners to define what is needed to bridge these gaps. These actions will include researching software engineering patterns that encompass Responsible AI (RAI), and translating this research into industry-relevant solutions and training.
The FAIC will be approaching industry partners in the coming months to seek participation in various initiatives.
If interested, contact Fethi or Alan.
