With AI being flagged as a key focus for many in the advice industry, an industry specialist lawyer has warned that ASIC will likely sharpen its focus on the emerging technology in 2026.
The last few years have seen the emergence of fast-evolving AI technologies as technology providers seek to meet the need for greater efficiency within the advice industry amid overwhelming demand for service.
Speaking with Money Management last month, WT Financial founder and managing director Keith Cullen said technology, particularly AI services, is being rapidly adopted within the industry as a means of increasing advisers’ capacity to serve “without compromising quality or risk controls”.
“At WTL, we’ve seen advisers increasingly embrace AI and corporatised operating models to build practices that are more scalable, more defensible, and more client-centric than ever before,” Cullen said.
Expecting this trend to continue through to 2026, he suggested that this could signal the “next era of advice delivery in Australia”.
Even so, Cullen noted the importance of balancing efficiency benefits with potential risk exposure as the industry works to address Australia’s critical advice gap.
Looking to deliver on this need, Iress chief executive, wealth APAC, Kelli Willmer told Money Management that expanding its use of AI within its advice solutions will be a key focus for 2026.
Namely, she said the technology will be used to automate and streamline advice workflows from onboarding through to implementation, helping advisers scale and spend more time with clients.
Although spurred by growing demand from advisers, Willmer said trust is a key factor when it comes to the technology they use.
She added: “They want technology that helps them build confidence with clients at key moments, like accurate, personalised emails after the first meeting. Overall, advisers want wealth-tech that reinforces trust at every step of the client journey.”
Legal and regulatory qualms
While much of the industry’s AI usage seems confined to the back office and reducing the administrative burden on advisers at this stage, the primary concern for the advice industry is the highly sensitive nature of the client information advisers have access to.
Because AI is still relatively new, regulation is struggling to keep pace with its rapid evolution, making it challenging for the regulator and businesses alike to ensure adequate safeguards are in place for client data, particularly where third-party operators are involved.
While last year saw a number of legislative changes to address AI concerns, Holley Nethercote lawyer Tali Borowick explained in an analysis that challenges related to this technology are often complex and linked to other key considerations such as data governance, cybersecurity, privacy and ethics practices.
Noting events from last year, Borowick pointed to March 2025 which saw ASIC bring action against FIIG Securities Limited, alleging the licensee failed to take appropriate steps to protect itself and clients from cybersecurity risks over a four-year period.
The regulator alleged at the time that this failure enabled a hacker to enter the FIIG network, resulting in the theft of confidential information belonging to 18,000 clients.
Borowick said this and similar cases shed light on what the regulator expects of business owners when it comes to protecting their firms and clients from data breaches.
She added: “The lessons from these cases can and should be extended to the use of AI technologies by licensees.”
While ASIC is yet to target licensees specifically over AI, Borowick said it is “only a matter of time” before the regulator takes action against a licensee for data breaches or governance failures associated with AI usage.
As the regulatory landscape pushes to keep up with this evolving technology, she said the industry should expect to see a wave of new and strengthened legal frameworks in 2026, specifically aimed at governing how AI is deployed, monitored and controlled across the advice sector.
“From stricter compliance obligations under emerging global standards to enhanced accountability for algorithmic decision-making, regulators will demand greater transparency, fairness and risk management,” Borowick said.
“For financial institutions, this means moving beyond experimentation and into a phase of robust governance, embedding ethical AI principles, implementing rigorous audit trails and ensuring data integrity at every stage.”
Despite legislation’s struggle to keep up with these technological advances, she argued that the industry should be putting safety frameworks in place now.
“Those who act now to align with these evolving requirements will not only mitigate risk but also gain a competitive edge in an increasingly regulated environment.
“The message is clear: 2026 will not just be another year of innovation, it will be the year of accountability.”