A Safer Way To Approach Client Data

12 July 2019
The Hayne Royal Commission rightfully and very publicly raised a question mark over the quality of customer data held by financial institutions. It also highlighted that data remediation – the cleansing, organisation and migration of data after costly and often lengthy investigation – warrants greater focus.
 
However, before companies can make tangible changes to the way data is managed and protected, it is important to understand why data held by financial institutions is prone to error. 
 
Generally speaking, the more data there is, the greater the margin for error. Data is also inherently ‘dirty’: as long as new data is entering a system, remediation activities will not disappear; nor should they. Customer data must be managed with ongoing and systematic data quality processes. 
 
A typical financial services institution will have multiple business units managing constantly changing customer data from multiple channels, across multiple technology platforms and in line with ever-changing business rules and regulatory requirements. Keeping data clean is understandably a challenging – but not insurmountable – task. 
 
HOW CAN DATA QUALITY BE MADE SIMPLER?
 
Data quality is made much simpler with the right people, process and technology. Depending on the size and scale of the business, a dedicated internal data quality team is a good move, along with the implementation of data quality management technology which monitors, remediates and reports on data quality effectively and economically.
 
Using Word documents, SQL scripts or Excel spreadsheets to check data is archaic and high risk and should no longer be considered acceptable business practice.
 
The scenarios in which data can go wrong are infinite, but critically, errors must be detected and corrected early. If this doesn’t happen, there is a tendency for the error to spread and ‘contaminate’ other data, across other systems. Monitoring of data would ideally be carried out in real-time or as close to real-time as possible across all related platforms simultaneously.
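
To make this concrete, the minimal Python sketch below shows one way such checks can be expressed as reusable rules that run every time a record arrives, rather than living in a spreadsheet or an ad-hoc SQL script. The record fields and rules are hypothetical examples only, not a prescribed standard.

```python
# Minimal sketch of a continuous data quality check, run as records arrive.
# The record fields and rules below are hypothetical examples only.

from dataclasses import dataclass
from datetime import date
from typing import Callable, List, Optional


@dataclass
class CustomerRecord:
    customer_id: str
    date_of_birth: date
    account_balance: float
    email: str


# Each rule returns an error message, or None if the record passes.
Rule = Callable[[CustomerRecord], Optional[str]]

RULES: List[Rule] = [
    lambda r: "balance is negative" if r.account_balance < 0 else None,
    lambda r: "date of birth is in the future" if r.date_of_birth > date.today() else None,
    lambda r: "email is missing" if not r.email.strip() else None,
]


def check_record(record: CustomerRecord) -> List[str]:
    """Run every rule against a single record and collect any failures."""
    return [msg for rule in RULES if (msg := rule(record)) is not None]


if __name__ == "__main__":
    suspect = CustomerRecord("C-1001", date(2031, 5, 4), -250.00, "")
    for issue in check_record(suspect):
        # In practice these failures would feed a monitoring dashboard or work
        # queue so the error is corrected before it spreads to other systems.
        print(f"{suspect.customer_id}: {issue}")
```

Because each rule is defined once and applied to every record, the same checks can be run across related platforms and rerun as often as needed, which is what brings monitoring close to real time.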
 
This is particularly important for exiting customers; once monies have been paid out, remediation becomes more difficult politically, reputationally and practically as the organisation typically no longer has the funds. This became glaringly apparent throughout the Hayne Royal Commission.
 
DATA ERROR TRIGGERS
 
The types of data remediation projects in financial services vary significantly in size and complexity, but all are due to one or more defects. These defects could have been introduced by anything from an administrative mistake to a system issue. How the defects that trigger the need for data remediation are exposed falls into one of two categories: 
 
1) A reactive trigger is the ad-hoc or accidental identification of a wider issue, for instance an issue identified by a customer or regulatory body. The resulting remediation programs tend to run with tight deadlines and budgets. Teams are usually stretched, the likelihood of errors being introduced during the remediation increases, and the quality of the final implementation can suffer. 
2) A focused trigger results from cyclic and ongoing data quality assessment, instigated by controlled data quality processes and forming part of a wider data management system.
 
Focused triggers are more likely to be well structured, scoped and budgeted. This approach also drives towards a root cause analysis where the underlying problem will be addressed.
 
Focused remediations require a mature data management system and, unfortunately, many financial services organisations are still working towards this level of sophistication in their systems; therefore, most remediations are still very much reactive.
 
MOST COMMON CAUSES OF DATA ERROR IN FINANCIAL INSTITUTIONS
 
Customers expect financial institutions to correctly calculate their financial position and to know exactly who they are. A miscalculation, an administrative mistake, lack of insurance coverage, or other errors, can cause customers to feel wronged, robbed, not cared about or even marginalised.
 
Achieving error-free data is unrealistic, but effective measures can be put in place to reduce the incidence and severity of data errors by identifying issues early. 
 
The most common causes of data errors in financial institutions include: 
 
 
1) Fee calculation issues typically arise from misinterpretation of various controlling documents such as product disclosure statements, deeds and administrative contracts. This misinterpretation can occur across several departments; for instance, the legal or risk and compliance department may take a different view from the business itself.
2) Interest crediting errors, and delays in the crediting or calculation of interest to customer accounts, occur quite often. Delays may also be caused by processing issues; for instance, any delay in processing a customer investment switch request can have a large positive or negative impact on customer accounts.
3) Eligibility requirements around certain benefits, particularly those related to insurance or credit requirements, can have huge impacts on both customers and the institution. Insurance issues are usually highly emotive because they involve someone who is hurt or has died, and typically involve large sums of benefit payments.
4) Lack of internal controls, that is, a lack of adherence to, or inadequate controls around, various calculators used for financial decision making, is another source of error. For example, the Royal Commission noted that a lack of controls around overdraft facilities led to clients being granted access to funds that they otherwise would not have received.
5) Lack of critical information, such as missing or lost data, can lead to misinterpretation of business rules. For instance, if income protection benefits are calculated based on salary, but some employers submitting electronic data for members do not provide salary with their contribution data, then alternative calculations may need to be derived. These calculations may be based on incorrect or invalid data and assumptions; a simple check for this kind of gap is sketched after this list.
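
As a minimal illustration of the last point above, the sketch below (with a hypothetical file layout and field names) holds back contribution records that arrive without a salary so they can be followed up with the employer, rather than silently feeding assumed values into a benefit calculation.

```python
# Illustrative sketch only: flagging contribution records that arrive without
# salary before income protection benefits are calculated from them. The file
# layout and field names are hypothetical.

import csv
import io

SAMPLE_FEED = """member_id,employer_id,contribution,salary
M001,E10,450.00,86000
M002,E10,390.00,
M003,E11,512.50,101500
"""


def split_contribution_feed(feed: str):
    """Separate records with a usable salary from those needing follow-up."""
    complete, missing_salary = [], []
    for row in csv.DictReader(io.StringIO(feed)):
        salary = row.get("salary", "").strip()
        if salary:
            row["salary"] = float(salary)
            complete.append(row)
        else:
            # Do not silently derive a benefit from assumed data; queue the
            # record so the employer can be asked to supply the salary.
            missing_salary.append(row)
    return complete, missing_salary


if __name__ == "__main__":
    ok, follow_up = split_contribution_feed(SAMPLE_FEED)
    print(f"{len(ok)} records ready for benefit calculation")
    print(f"{len(follow_up)} records held pending salary from the employer")
```
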
Data remediation programs are typically initiated after an issue has been raised by a customer or group of customers; upon investigation, this often uncovers a whole host of other issues that may have been affecting thousands of customers over several years.
 
Remediation requires exact knowledge of business rules. It often requires collaboration across several departments, including legal, risk and compliance, actuarial, project management, investments, marketing, call centre staff, administration, and IT.
 
Often in data remediation work, further investigation turns up other, previously undiscovered issues. The investigation is complicated when issues extend across years, because customer status may have changed. For example, in the case of superannuation, members may have moved from the accumulation phase to the retirement phase, where adjustments are more difficult; switches may have been carried out; members may have exited; and accounts may have been merged. 
 
Clear communication, and keeping all relevant parties (including regulatory bodies) informed throughout data remediation work, is key to ensuring the same mistakes are not repeated. Companies need to ensure that the appropriate assessments are completed to determine whether the data errors have caused a breach. If that breach is material, it is reportable to the relevant regulator: either the Australian Prudential Regulation Authority (APRA) or the Australian Securities and Investments Commission (ASIC).
 
Everyone involved must clearly understand procedures, such as the prioritisation of issues, particularly in a large program of work. It is especially important for significant data remediation events to call in the best possible team; otherwise, scope creep and program costs may blow out and data problems could be compounded.

WHAT CONTROLS ARE NEEDED TO PREVENT DATA ERROR?
 
When carried out correctly, data remediation contributes to the important cycle of continuous improvement and raises the value of the data within any financial services organisation. Remediation activities themselves are not necessarily an indication of poor controls, but what gave rise to them can be. If most remediations are instigated by ad-hoc triggers and external parties, then there is a clear lack of a reliable data quality process.
 
To improve data quality and avoid spending millions on data remediation, there are a few key controls to have in place:
 
1) Develop a series of data quality metrics as, put simply, you cannot manage what you do not measure. A metrics-based approach will provide a factual basis on which to justify, focus and monitor efforts while acting as a leading risk indicator (a minimal sketch of such metrics follows this list).
2) Determine ownership as, like all strategic initiatives, data quality will not succeed without the oversight, collaboration and accountability of all key stakeholders.
3) Embed data quality into the culture through continuous training and prominent visibility of the importance of customer data to all relevant staff, from the call centre to senior management. This is essential practice. 
4) Invest in a data quality system solution as, to break the cycle of Word documents, spreadsheets and SQL scripts, an integrated data quality platform is required. This will deliver the real cost efficiencies and risk management benefits that true data quality can provide.
5) Measure return on investment (ROI), remembering that the key to measuring ROI is choosing the metrics that matter most to the business – those that can be measured and offer the biggest potential for improvement.
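
As a minimal illustration of the first control, the sketch below computes two simple metrics, completeness and validity, over a hypothetical member dataset. The fields and records are examples only, but scores like these, tracked over time against agreed thresholds, are what a metrics-based approach would report on.

```python
# Minimal sketch of metric computation for a data quality dashboard.
# The dataset and fields are hypothetical examples only.

from datetime import date

MEMBERS = [
    {"member_id": "M001", "email": "a@example.com", "date_of_birth": date(1980, 3, 1), "salary": 86000},
    {"member_id": "M002", "email": "", "date_of_birth": date(1975, 7, 9), "salary": None},
    {"member_id": "M003", "email": "c@example.com", "date_of_birth": date(2030, 1, 1), "salary": 101500},
]


def completeness(records, field):
    """Share of records where the field is present and non-empty."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)


def validity_dob(records):
    """Share of records whose date of birth is not in the future."""
    valid = sum(1 for r in records if r["date_of_birth"] <= date.today())
    return valid / len(records)


if __name__ == "__main__":
    metrics = {
        "email completeness": completeness(MEMBERS, "email"),
        "salary completeness": completeness(MEMBERS, "salary"),
        "date-of-birth validity": validity_dob(MEMBERS),
    }
    for name, score in metrics.items():
        # Scores below an agreed threshold would act as a leading risk
        # indicator and trigger a focused, rather than reactive, remediation.
        print(f"{name}: {score:.0%}")
```
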
 
Addressing the points above should help transition from a reactive approach heavily focused on correction to one that finds the appropriate balance between prevention, detection and correction controls. While prevention is better than cure and remains the optimum solution, early detection is also a critical tool that can significantly reduce the cost and impact of data errors. 
 
Stephen Mahoney is the executive director of superannuation consultancy, QMV.  