July 18
Fintech AI Innovation Consortium (FAIC) Roundtable Discussion

Cognitivo is part of the Fintech AI Innovation Consortium (FAIC). FAIC aims to create effective AI solutions in the FinTech sector through a multidisciplinary approach, leveraging the expertise of professionals from various fields.

Link to the Fintech AI Innovation Consortium (FAIC)

On Thursday 18 July, Cognitivo hosts the FAIC Roundtable Discussion. The discussion will focus on the various responses from the Financial Services industry to the Australian government's consultation paper, ‘Safe and responsible AI in Australia’.

In June 2023, the Australian government released the discussion paper ‘Safe and responsible AI in Australia’. The paper requested feedback on several questions related to governance mechanisms to ensure AI is used safely and responsibly; these mechanisms could include regulations, standards, tools, frameworks, principles and business practices. The feedback is intended to inform how the Australian Government considers regulatory and governance responses to mitigate the potential risks of AI and automated decision making (ADM), and to increase public trust and confidence in their development and use.

Feedback summary

On the definition of AI, some feedback expanded on the relationship between AI and automated decision making [ADM] and stated that regulation must be clear about which one is targeted. Another submission distinguished between the concepts of AI, AI system and machine learning [HSF]. Some consider the proposed definition of AI too narrow, arguing that an element of influence needs to be added that looks at intended use [FinAus].

In general, feedback showed diverging opinions on the scope of regulation. At one extreme, some argued for more best practice guidance and less regulation, with any regulation drafted in an AI-agnostic way [ABA]. Some advocate a sector-specific approach [APN], and some state that regulation should focus only on high-risk use cases and on areas not currently covered, such as cybercrime and fraud [ABA]. One view is that regulation should consider not only the impact of AI systems but also the impact of increasing automation, digital assets and a growing reliance on automated systems [HSF]. The main consensus seems to be that a combined approach is needed, updating existing regulations as well as introducing AI-specific regulation.

If legislation is introduced, it should avoid creating conflicts between the many regulators in the financial services sector: ASIC, APRA (which coordinates with the financial services sector), AFCA, ACCC and the OAIC (Privacy Act) [ABA]. The ONDC (Data Availability and Transparency Act) is also mentioned [FinAus].

There was also considerable debate regarding which areas legislation should cover. Some suggest that legislation should cover only novel, AI-specific issues such as increased risk, transparency and the registration of AI systems [HSF]. Another area of importance for legislation is explainability: it is suggested that there should be no automated decision making without explanation [HRComm]. One possibility is disclosures. However, using disclosures to achieve transparency is of limited use to consumers because of notice fatigue [ABA]. In addition, revealing a model can be dangerous because it enables fraud, so it is better to update existing disclosures [ABA].

Some issues seem to have been overlooked in the discussion; for example, the environmental impacts of AI are not addressed [ADM][HRComm]. Other areas that raise important privacy concerns are also missing from the debate, such as the impact of neural technology, facial recognition, chatbots and the metaverse [HRComm], though these seem less relevant to today's financial services. In addition, the consultation document makes few references to data quality.

On the subject of adopting a risk-based approach, it was pointed out that banks already have risk governance frameworks that consider technical and operational risks in both AI and non-AI contexts [ABA]. Therefore, there is support for a risk-based approach that is consistent with current regulations, together with the establishment of frequent assessments [FinAus]. Within a risk-based approach, one important question raised is what rules, rights and obligations are required for each risk category [HSF]. [ABA] advocates an approach consistent with NIST's AI Risk Management Framework. As existing financial services regulations reflect specific decisions about how a particular risk should be managed, they need to be adjusted if they have unintended limiting impacts on AI [ABA]. Risk assessment must be technology neutral, because the same technology may pose different risk levels depending on its purpose, deployment and outcomes [Visa]. Other comments included that risk assessment should be continuous rather than performed at a single point in time [HSF], and that data quality is missing from the risk requirements [ADM]. In general, there were few suggestions addressing the technical risks associated with the development lifecycle and the sharing of data.

Recommendations

Suggestions include:

  • A detailed gap analysis of which AI use cases are already covered by existing domestic regulatory frameworks [HSF]. This in-depth gap analysis is needed before any changes to Australian conformity infrastructure are made [FinAus], and will help avoid unnecessary duplication or cumulative regulation [Visa]. Regulation should focus only on high-risk use cases [Visa]. The financial industry generally supports a combination of updating existing laws, creating sector-specific guidelines and encouraging voluntary industry standards. Some feedback [FinAus, Visa] states that it is important to consolidate existing regulatory regimes with technology-neutral regulation.
  • Due to the cross-border nature of technology, acting in the global sphere is important [APN]. Australia should therefore be encouraged to participate in the adoption of AI regulation worldwide, such as the standards being developed by ISO/IEC JTC 1/SC 42, for example ISO/IEC 23053:2022 (UK White Paper) [APN,HSF,ADM]. Sector-agnostic risk classification frameworks, such as the OECD classification of AI systems and the NIST risk management framework, can be used for risk assessment [HSF]. [Visa] advocates creating something in Australia similar to the EU AI Act. On 5 April 2024, the sixth EU-US Trade and Technology Council meeting announced that the EU AI Office and the US AI Safety Institute will work together on tools to evaluate artificial intelligence models. In the USA, seven big technology companies made voluntary commitments on AI [HSF].
  • Establishing a body such as an AI Safety Commissioner that engages with the UN Guiding Principles on Business and Human Rights [HRComm], and, in consultation with that commissioner, issuing guidelines on human rights considerations [HRComm]. The interim report lists many government initiatives, but they seem to be focused mainly on cyber security. [HRComm] also sees AI regulation from the cyber security angle. In general, government action seems fragmented. The government has invested in a National AI Centre, but its role in AI regulation remains unclear. There are currently 16 government initiatives on AI running in parallel or under consultation, but there should be a dedicated task force for AI regulation [HSF].
  • Observation and evaluation of an AI system are important aspects of transparency and require new methods [ADM]. The importance of testing is emphasised, and in particular the example of Singapore's AI Verify self-testing tool is praised [FinAus]. The interim report mentions the idea of implementing regulatory sandboxes, which is also supported by [FinAus]. A model in which users can donate data from their browsers could be explored [ADM]. Organisations building AI should perform a human rights impact assessment (HRIA) for each automated decision-making tool [HRComm]. The introduction of AI ratings and an AI practitioner registry is supported by [FinAus]. Government should promote standards and certifications for AI [HSF]. Encouraging initiatives for the sharing of data will also contribute to better testing: [APN] advocates a network view of payment data with public-private collaborative analysis, rather than siloed analysis.
  • The rapid deployment of AI technologies, including new models and methods, is not being addressed [ADM]. According to the interim report, the lack of ex ante laws covering technical development prevents interventions early in the lifecycle. Action is also needed to support the development of AI systems, as it is important for Australia to develop technology, not just use it [ADM]. So far, the emphasis on responsible AI and cyber security is only making AI systems more complex to develop. Alignment with technical standards such as the Australian Government Architecture (AGA) is important, as is enabling collaboration between multidisciplinary experts with different backgrounds early in the requirements phase [ADM]. In the interim response, the ARC Centre of Excellence in ADM has done work on representing the AI development lifecycle, but many good practices for developing quality, robust software systems (e.g. DevOps and MLOps) seem to be missing from the debate.
  • The lack of skills and the inadequacy of IT infrastructure are stressed. In the interim response, technical risks such as inaccuracies in system design are mentioned. There is a lack of personnel with AI skills in Australia [HSF]. The interim report advocates building AI capability through education. Creating something similar to the EU's AI Watch, which disseminates knowledge to the public to monitor AI, is encouraged [FinAus].

Research Challenges

Several challenges related to AI regulation will remain and deserve research attention:

  • ‘Frontier AI’ is problematic: it is hard to anticipate all potential use cases, so there is always the possibility that unanticipated applications, so-called ‘frontier AI’, will create new situations to which no regulation applies.
  • There are conflicting requirements between explainability and accuracy in AI systems [APN].
  • Human-in-the-loop is proposed as a solution, but in some cases it may increase problems in an AI system [ADM].
  • AI supply chains are complex: responsibility cannot sit with the final actor in the chain alone [ADM].
  • Privacy protection should not put the onus on individuals alone [HRComm].
  • Regulation should not stifle innovation ([ABA] is particularly concerned that constraining AI would limit its use in fraud detection).
  • Safety by design is mentioned in the discussion paper, but striking the right trade-offs between different considerations, including compliance, early in the development lifecycle is hard.
