Strengthening AI in ASIC
By Tarah Barzanji and Aydin Hibbert, Quantium
As the Royal Commission has uncovered over the past year, Australian regulators are struggling to keep up with some of the troubling conduct in our financial sector.
A key source of the problem is the sheer volume of information that needs to be reviewed if regulators are to effectively police the sector. Imagine that 1 in 100 individuals who received financial advice each year were given unacceptable advice – advice that was inappropriate for the customer’s financial situation, biased by incentives, or that simply involved a fee for non-service. This would result in more than 23,000 client files for regulatory staff to review, comprising hundreds of thousands of pages of documents. And we know from ASIC’s review of sample advice files that it had “significant concerns about the impact of non-compliant advice on a customer’s financial situation” in a staggering 1 in 10 sample files.
Reviewing so many documents and so much data poses an overwhelming task for regulatory teams – but huge volumes of data are exactly why artificial intelligence (AI) has gained traction in the private sector. And, promisingly, governments are now starting to take notice.
Artificial intelligence, especially recent developments in natural language processing, allows automated systems to capture key information from documents, synthesise it into a usable format, and make fast recommendations about problematic content.
In fact, the financial planning sector has recently harnessed the power of AI in providing robo-advice and otherwise managing the financial planning process.
Happily, regulators too can harness this capability, with some simple applications that would dramatically improve their monitoring.
AI can help regulators to monitor the sales process. For example, AI can classify advertisements and product promotions to flag when there are indicators of non-compliance. The same can be done for insurance sales calls, by converting voice to text and building algorithms that score the likelihood of non-compliance in the calls.
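To make the call-scoring idea concrete, here is a toy sketch in Python. The risk phrases and weights are purely illustrative assumptions, not a real regulatory rule set – a production system would use a trained classifier over transcripts, not a keyword list:

```python
# Toy sketch: score a sales-call transcript for non-compliance risk.
# The phrases and weights below are hypothetical illustrations only.
RISK_TERMS = {
    "guaranteed returns": 3,   # promises of performance are a red flag
    "limited time": 2,         # pressure-selling language
    "no need to read": 3,      # discouraging review of disclosures
    "commission": 1,           # possible undisclosed incentive
}

def risk_score(transcript: str) -> int:
    """Sum the weights of risk phrases found in the transcript."""
    text = transcript.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in text)

def flag_for_review(transcript: str, threshold: int = 3) -> bool:
    """Flag calls whose risk score meets the review threshold."""
    return risk_score(transcript) >= threshold
```

The design point is triage, not judgment: a score above the threshold routes the call to a human investigator rather than declaring non-compliance outright.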
Further, AI can rapidly flag the most concerning client files for review by regulatory staff, allowing them to focus on the highest risk instances of advice and respond in near real-time.
The AI could be trained to check whether a document exists when a fee is charged. If the customer didn’t receive advice, then there shouldn’t be a fee.
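That fee-for-no-service rule is simple enough to sketch directly; the record fields below are hypothetical, chosen only for illustration:

```python
# Sketch of the fee-for-no-service rule: if a fee was charged,
# an advice document should exist. Field names are hypothetical.
def fee_without_advice(record: dict) -> bool:
    """True when a fee was charged but no advice document is on file."""
    return record.get("fee_charged", 0) > 0 and not record.get("advice_document")

clients = [
    {"client_id": 1, "fee_charged": 250, "advice_document": "SOA-001.pdf"},
    {"client_id": 2, "fee_charged": 250, "advice_document": None},
    {"client_id": 3, "fee_charged": 0, "advice_document": None},
]

# Only client 2 is flagged: a fee with no document on file.
flagged = [c["client_id"] for c in clients if fee_without_advice(c)]
```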
Then we direct the AI towards whether advice is suitable. We can draw a picture of the customer’s income and expenses and the total cost of the product. If the customer can’t afford it, then it is not good financial advice.
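The affordability picture can likewise be reduced to a first-pass rule – a deliberately simplified sketch, assuming monthly figures are already extracted from the file:

```python
def is_affordable(monthly_income: float, monthly_expenses: float,
                  monthly_product_cost: float) -> bool:
    """First-pass affordability rule: the product's ongoing cost must
    fit within the customer's free cash flow (income less expenses).
    A real assessment would also consider buffers and existing debts."""
    surplus = monthly_income - monthly_expenses
    return monthly_product_cost <= surplus
```

A file failing this check would not automatically be deemed bad advice; it would be queued for human review ahead of files that pass.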
Banks could likewise use AI to identify instances of mis-selling of lending products by bankers and third-party brokers at the time the application is lodged – helping to ensure loan applications are complete and accurate and the loan is affordable. Some banks have begun building this type of capability.
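A completeness check at lodgement time is the most mechanical piece of that picture. The required-field list below is a hypothetical example, not a lending standard:

```python
# Sketch of an application completeness check at lodgement time.
# The required fields are illustrative assumptions only.
REQUIRED_FIELDS = ["income", "expenses", "loan_amount", "employment_status"]

def missing_fields(application: dict) -> list:
    """List required fields that are absent or empty in the application,
    in a fixed order, so incomplete applications can be bounced back."""
    return [f for f in REQUIRED_FIELDS
            if application.get(f) in (None, "", 0)]
```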
Once regulators have applied the simpler rules, the real power of AI can be harnessed to identify more concerning situations. AI can be used to identify advisers who are selling products at a rate that is out of step with other advisers, which may indicate a hidden incentive. Likewise, AI could help regulators find advisers whose customers have high rates of bankruptcy following financial advice.
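One simple way to operationalise the peer comparison is a z-score over adviser sales rates – a hedged sketch, where the two-standard-deviation threshold and the data shape are illustrative assumptions:

```python
# Sketch of peer-comparison outlier detection: flag advisers whose
# product sales rate sits well above the peer average.
from statistics import mean, stdev

def outlier_advisers(sales_rates: dict, z_threshold: float = 2.0) -> list:
    """Return adviser IDs whose rate exceeds the peer mean by more
    than z_threshold sample standard deviations."""
    values = list(sales_rates.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # all advisers identical: nothing stands out
        return []
    return [adviser for adviser, rate in sales_rates.items()
            if (rate - mu) / sigma > z_threshold]
```

In practice a regulator would control for product mix and client base before comparing rates, so that legitimately busy advisers are not swept up with the genuinely anomalous.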
These types of processes enhance organisational capability by allowing investigators to focus on the highest-risk planners and organisations, and by prompting proactive responses to new risks as industry trends emerge.
Regulators should embrace the potential of AI to help them with their ever-growing task, which will only become more complex as the volume of data continues to multiply.
Tarah Barzanji is Executive Manager of Government at Quantium, Australia’s largest data analytics company. Aydin Hibbert is Lead Analyst at Quantium, with a Masters of Research (Statistics) in machine learning and computational statistics.