
APRA abstains from extra regulations to mitigate AI risks


The regulator has revealed it has “no such plans” to introduce new regulations to combat risks presented by AI.

The Australian Prudential Regulation Authority’s (APRA) executive board member Therese McCarthy Hockey has revealed that the regulator would not be introducing new regulations to address the risks that artificial intelligence (AI) brings to the financial services industry.

McCarthy Hockey delivered an address at the Australian Finance Industry Association (AFIA) Risk Summit 2024 on Wednesday (22 May) and spoke on the benefits and risks that AI poses to the finance industry.

She said that while AI can be used to increase productivity and improve customer experiences, it can also be used to scam businesses and customers, commit crimes, and “undermine financial stability”.


Despite the risks that AI poses, McCarthy Hockey added that APRA had “no such plans” to introduce new regulations to mitigate the risks of AI.

McCarthy Hockey said the main reason behind APRA’s decision not to introduce new regulations was its belief that its existing framework “already has adequate regulations in place to deal with generative AI for the time being”.

She said: “Our prudential standards may not specifically refer to AI but nor do they need to at the moment. They have intentionally been designed to be high-level, principles-based and technology-neutral.”

APRA already has controls in place to manage AI-related threats and protect data from misuse, according to the board member.

She also said that APRA is introducing regulations next year that will require businesses to consider AI risks introduced by third parties.

McCarthy Hockey continued: “So, while we are watching closely, we are confident - for now - that we have the tools to act, including formal enforcement powers, should it be necessary to intervene to preserve financial safety and protect the community.”

The board member also said that APRA had decided against implementing new regulatory measures because the federal government announced earlier this year that it would be taking action to ensure AI is used responsibly across Australia. (The Minister for Industry and Science, Ed Husic, announced in January that the government is considering regulatory measures including product testing, transparency of model design and data, and accountability of AI developers.)

McCarthy Hockey said that APRA expects to have input on the government regulations, noting that the measures would cover “vastly wider terrain than the banking, insurance and superannuation industries [it regulates]”.

Businesses without strong risk frameworks should not use AI

In her address, McCarthy Hockey suggested that the prudential regulator supports its regulated entities incorporating AI into their business operations, given the potential benefits for businesses and their customers.

However, she cautioned that not all banks, superannuation trustees, and insurers were “equally capable” of introducing AI into their businesses.

McCarthy Hockey said: “Having monitored developments in this area over several years, our advice now is that entities with robust technology platforms and a strong track record of risk management are good candidates to experiment with AI and should feel confident proceeding.

“Entities that are weak in these areas should proceed with caution and care.”

The executive board member also said that it was important for businesses to understand which category they fall into before integrating AI into their operations, and that APRA would provide an individual assessment upon request.

She said: “Importantly, entities need to know in which category they sit. One example of what ‘good’ looks like is having open and proactive discussions with APRA, so if entities are unsure where they sit, APRA will happily provide its own assessment upon request.”

McCarthy Hockey echoed Husic’s accountability measure, saying that “companies cannot delegate full responsibility to an AI program” given that generative AI involves automated decision making. She said that entities should have “an actual person who is accountable for ensuring it operates as intended”.

The executive board member said: “While we are not adding to our rule book at the moment, we will be using our strong supervision approach to stay close to entities as they innovate and consider management of AI risks.”

The announcement comes as a major brokerage launched a pilot program that uses generative AI to reduce the administrative burden on brokers.

[Related: Major brokerage begins trialling generative AI]
