Vivek Anand, Director of Advanced Analytics at a Fortune 500 Retailer, speaks with Derek Strauss, Chairman of Gavroshe and CDO Magazine Editorial Board Member, about the barriers to applying AI in financial services, data science use cases, the transition into B2B, building usable models, and the importance of explainability for better AI adoption.
Anand begins by pointing out that the number one barrier to applying data science and AI in the financial services sector is regulation. He adds that the industry was still recovering from the 2008 financial crisis, and the regulatory infrastructure limited the scope of what could be done.
However, Anand maintains that a good deal of data science was still used to solve financial services problems. It became critical to assess the creditworthiness of a counterparty based on its attributes, and the guidelines at the time recommended using averages.
That need to assess creditworthiness led to building a robust model around credit rating, says Anand. The model treats creditworthiness as a monotone function of the rating, so the credit risk associated with adjacent rating grades should not overlap.
With the aggregation approach, however, a couple of outliers could make a triple-A-rated counterparty appear less creditworthy than a triple-B one, which was a fundamental issue. This paved the way for Anand's first data science use case: a robust regression that solved the challenge and was less prone to outlier effects. He shares that the solution was adopted across the industry.
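As a hedged illustration of the approach described, a robust regression fit of credit risk against rating grade might look like the following Python sketch. The estimator, features, and data here are assumptions for illustration; the interview does not specify which robust technique was used.

    import numpy as np
    from sklearn.linear_model import HuberRegressor, LinearRegression

    # Hypothetical data: numeric rating grade (1 = AAA, 2 = AA, ...) versus
    # observed credit risk, with a few outliers in the top-rated bucket.
    rng = np.random.default_rng(0)
    grade = np.arange(1, 11, dtype=float).repeat(20)
    risk = 0.5 * grade + rng.normal(0.0, 0.2, grade.size)
    risk[:3] += 5.0  # outliers that make a few AAA names look riskier than BBB

    X = grade.reshape(-1, 1)

    # Ordinary least squares is pulled toward the outliers, while the Huber
    # loss caps their influence and preserves the monotone risk-by-rating view.
    ols = LinearRegression().fit(X, risk)
    huber = HuberRegressor().fit(X, risk)

    print("OLS slope:  ", ols.coef_[0])
    print("Huber slope:", huber.coef_[0])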
Further, Anand discusses another potential data science use case in financial services around pricing bonds. However, since it was not required by regulation, and given the fiduciary nature of the sector, it could not be pursued.
The black-and-white nature of regulation keeps the financial sector from becoming agile, says Anand. Given the sector's pace of change, it is challenging to scale or to leverage data science to solve business problems.
Nevertheless, some regulatory relaxations have created opportunities for increased adoption of data science, says Anand. While guardrails are in place to ensure data science and AI do not go haywire, adoption continues.
When asked about his transition and how he learned the importance of explainability, Anand mentions that while working as a VP in a financial services organization, he felt constrained by the regulatory environment. He then pivoted away from financial services into traditional retail and B2B, and saw the guardrails vanish in some ways.
Elaborating further, Anand shares that he could see the visible impact of his actions on the company's top and bottom lines. Therefore, it became critical for him to come up with solutions that are usable and reliable; otherwise, it would show in the stock price.
After transitioning out of financial services, he moved into B2B pricing as it became a priority and found an opportunity to leverage data science use cases to do it better.
Highlighting the B2B landscape, Anand states that deals are bilateral transactions with long sales cycles, and no two sales get the same price. This creates a scalability issue if one wants to price thousands of customers one at a time.
Consequently, segmentation comes into play as a way to price at scale while also doing it intelligently, asserts Anand. He refers to finding commonality within a set of a million transactions and grouping them by similar selling situations, which is where the organization started using machine learning techniques.
Adding on, Anand mentions using clustering and regression techniques to identify the factors that best explain the variability in the data. As an example, he mentions using unsupervised clustering techniques to identify how many distinct buying patterns exist in a massive transaction set.
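As a sketch of the kind of unsupervised step described, k-means combined with a silhouette score can suggest how many distinct buying patterns a transaction set contains. The algorithm choice, features, and data below are assumptions for illustration rather than details from the interview.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    # Hypothetical transaction features: order size, discount, sales-cycle days.
    rng = np.random.default_rng(1)
    transactions = np.vstack([
        rng.normal([10, 0.05, 30], [2, 0.01, 5], (500, 3)),      # small, quick deals
        rng.normal([200, 0.20, 120], [30, 0.03, 20], (500, 3)),  # large, negotiated deals
    ])
    X = StandardScaler().fit_transform(transactions)

    # Try a range of cluster counts and keep the best-separated grouping,
    # i.e., the most plausible number of distinct "selling situations".
    best_k, best_score = None, -1.0
    for k in range(2, 8):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_score = k, score

    print(f"Distinct buying patterns suggested: {best_k} (silhouette = {best_score:.2f})")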
Explaining further, Anand states that where the machine's grouping differed from what the human eye would suggest, the differences in individual clusters were assessed. This helped the organization define similar selling situations, and he affirms that robust segmentation models were built as a result.
Moving forward, Anand considers this a paradigm shift in which he could build reliable models. Emphasizing explainability, he maintains that since the end users of data science solutions are not trained in data science, explainability is critical.
Explainability is a barrier to AI adoption at scale across all enterprise levels, affirms Anand. Therefore, the organization incorporated a business intelligence framework into its solutions to increase explainability around inputs and outputs.
Building on the data science capabilities already available, Anand added explanations such as Shapley analysis for every prediction.
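As a minimal sketch of what per-prediction Shapley explanations can look like in practice, the snippet below uses the shap library's TreeExplainer on a hypothetical tree-based pricing model; the model, features, and data are assumptions, not details from the interview.

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical pricing features: deal volume, discount tier, customer tenure.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 3))
    price = 100 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(0.0, 1.0, 1000)

    model = GradientBoostingRegressor(random_state=0).fit(X, price)

    # TreeExplainer computes Shapley values for tree ensembles; each row shows
    # how much each feature pushed that prediction above or below the baseline.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:5])
    baseline = float(np.ravel(explainer.expected_value)[0])

    for i, contribution in enumerate(shap_values):
        print(f"Prediction {i}: baseline {baseline:.1f}, "
              f"contributions {np.round(contribution, 2)}")

Surfacing these per-feature contributions in a dashboard is what lets a non-data-scientist see why a given price was recommended.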
In conclusion, Anand shares how the increased explainability in the BI dashboard changed conversations around pricing recommendations. Eventually, the dashboard was made self-serve, and people could see what drove each transactional price recommendation, which led to increased adoption.
CDO Magazine appreciates Vivek Anand for sharing his insights with our global community.