Data Observability Serves as a Precursor to Implementing AI at Scale — Acceldata CEO and Founder

(US & Canada) Rohit Choudhary, CEO and Founder of Acceldata, speaks with Mark Johnson, Chief Growth Officer at CoStrategix, in a video interview about how AI is transforming data operations, automation in data operations, workload challenges, future trends of AI and data observability, external and internal perspectives on bringing new capabilities, and the company’s future with data observability.

Choudhary begins by describing how AI is transforming data operations. He states that AI will continue to have a significant impact, chiefly in the form of greater automation and efficiency.

In one case, a major consumer goods company cut 10% of its cloud data warehouse costs simply by monitoring duplicate queries, says Choudhary. By analyzing how the company's users interacted with the database, Acceldata discovered that redundant datasets were being created, which enabled the company to take swift action and optimize those processes.
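
Acceldata's exact method is not detailed in the interview, but the underlying idea can be sketched roughly: fingerprint each logged query so that textually different copies of the same workload group together, then flag repeated patterns along with their cumulative cost. The log format and function names below are illustrative assumptions, not the product's API.

```python
import hashlib
import re
from collections import Counter

def fingerprint(sql: str) -> str:
    """Normalize a SQL query so copies of the same workload hash identically."""
    text = re.sub(r"\s+", " ", sql.strip().lower())   # collapse whitespace, ignore case
    text = re.sub(r"'[^']*'", "?", text)              # mask string literals
    text = re.sub(r"\b\d+\b", "?", text)              # mask numeric literals
    return hashlib.sha256(text.encode()).hexdigest()

def find_duplicate_queries(query_log):
    """Group logged queries by fingerprint; return repeated patterns with run count and cost."""
    counts, costs, samples = Counter(), Counter(), {}
    for entry in query_log:                           # entry: {"sql": ..., "credits": ...}
        fp = fingerprint(entry["sql"])
        counts[fp] += 1
        costs[fp] += entry.get("credits", 0.0)
        samples.setdefault(fp, entry["sql"])
    return [(samples[fp], n, costs[fp]) for fp, n in counts.items() if n > 1]

# Two runs of the same query template (different literals) collapse to one fingerprint.
log = [
    {"sql": "SELECT * FROM sales WHERE region = 'EU'", "credits": 1.2},
    {"sql": "select *  from sales where region = 'US'", "credits": 1.1},
]
print(find_duplicate_queries(log))
```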

Similarly, Choudhary states that companies often allocate more resources than necessary out of caution, which can become expensive in cloud settings. By optimizing container usage, it is possible to save 25% of memory and 50% of overall compute power.
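
Choudhary does not describe a specific mechanism, so the following is only a minimal sketch of the rightsizing idea, with hypothetical names and utilization numbers: compare what a container reserves against what it actually peaks at, and propose a smaller reservation with a safety headroom.

```python
from dataclasses import dataclass

@dataclass
class ContainerStats:
    name: str
    requested_mem_gb: float   # memory reserved for the container
    peak_mem_gb: float        # observed peak memory usage
    requested_cpu: float      # vCPUs reserved
    peak_cpu: float           # observed peak vCPU usage

def rightsize(stats: ContainerStats, headroom: float = 0.2) -> dict:
    """Suggest new requests sized to observed peaks plus a safety headroom."""
    new_mem = min(round(stats.peak_mem_gb * (1 + headroom), 2), stats.requested_mem_gb)
    new_cpu = min(round(stats.peak_cpu * (1 + headroom), 2), stats.requested_cpu)
    return {
        "container": stats.name,
        "mem_request_gb": new_mem,
        "cpu_request": new_cpu,
        "mem_savings_pct": round(100 * (1 - new_mem / stats.requested_mem_gb), 1),
        "cpu_savings_pct": round(100 * (1 - new_cpu / stats.requested_cpu), 1),
    }

# Hypothetical over-provisioned worker: 16 GB and 8 vCPUs reserved, far less actually used.
print(rightsize(ContainerStats("etl-worker", requested_mem_gb=16, peak_mem_gb=9.5,
                               requested_cpu=8, peak_cpu=3.0)))
```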

Adding on, Choudhary states that the shift toward automation in data operations is fueled by the fact that data infrastructure is now much larger than application infrastructure. With the expanding data landscape, manually monitoring every new process, report, or deployed model becomes impossible, he adds.

The required system must be able to adapt to changes in seasonality, user activity, and data volume, says Choudhary, and beyond cost savings, it must support core business objectives. With these requirements in mind, it is evident that monitoring at this scale needs to be driven by AI, and that is where data observability becomes imperative.

Furthermore, Choudhary cites alert fatigue as a challenge: when there are too many alerts, data teams struggle to determine which ones deserve attention. It is therefore crucial to prioritize alerts effectively and reduce operational noise.

However, alerts have evolved, says Choudhary. Instead of simply pointing teams toward a rabbit hole, they now come with root cause analysis (RCA) and context, outlining the most likely causes and outcomes.
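
What such prioritized, context-rich alerting might look like in code is sketched below; the structure and scoring are illustrative assumptions, not Acceldata's actual alerting model. Each alert carries candidate root causes and a score combining business impact with anomaly severity, so only the alerts worth a person's attention surface first.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    signal: str                      # e.g. "row_count_drop on analytics.orders"
    business_impact: float           # 0..1: how critical the affected asset is
    anomaly_score: float             # 0..1: how far the metric deviated from normal
    likely_causes: list = field(default_factory=list)   # RCA candidates attached to the alert

def prioritize(alerts, top_n=5):
    """Rank alerts by impact x severity so low-value noise sinks to the bottom."""
    return sorted(alerts, key=lambda a: a.business_impact * a.anomaly_score, reverse=True)[:top_n]

alerts = [
    Alert("row_count_drop on analytics.orders", 0.9, 0.8,
          likely_causes=["upstream ingestion job failed", "schema change dropped a partition"]),
    Alert("latency spike on a staging view", 0.2, 0.6,
          likely_causes=["warehouse queue contention"]),
]
for a in prioritize(alerts):
    print(a.signal, "->", a.likely_causes)
```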

Speaking of workload challenges, Choudhary says that critical talent, the people with the ability to drive innovation and create new solutions, should not spend 50% or more of their time just troubleshooting day-to-day issues. He adds that teams should already have preventive and detective measures in place for problems such as model decay.

By receiving information ahead of time, they can intervene early and avoid poor outcomes, ultimately saving time, says Choudhary. He states that data scientists, engineers, and other key players should invest their time in building new machine learning models, supporting new business initiatives, and creating revenue-generating use cases, and not on the operational side of things.

When asked about the future trends of AI and data observability, Choudhary states that AI will be deeply integrated across industries. He notes that eventually, all enterprises will deploy valuable AI use cases as part of mainstream operations.

According to Choudhary, there will be a clear divide between companies that successfully implement AI and those that do not. Further, capital markets, consumers, and supply chains will favor those who get it right.

This makes data observability a crucial element for companies to incorporate into their data ecosystems as soon as possible. Choudhary believes that data observability serves as a foundational step before scaling AI, ensuring that current challenges in data analytics, such as improving data quality, are addressed, setting the stage for a well-run AI program.

Moving forward, Choudhary discusses the roles of the engineering and R&D teams in bringing new product features and capabilities to market. He states that historically, there has been a major gap when it comes to addressing data quality.

Sharing an external perspective, he points out the existing disconnect between the data governance office and the platform/data engineering teams when it comes to data quality. While the data governance office focuses primarily on compliance and auditability, the data engineering team is more involved in operational issues and in responding to governance expectations.

Continuing, Choudhary affirms that these operational problems, such as the lack of data lineage and impact analysis, magnify the challenges for governance officers, especially in multi-technology and cloud environments. He maintains that data observability bridges this gap by providing tools that allow both sides to see the broader operational landscape.

From the internal standpoint, Choudhary states that Acceldata’s engineering team is developing a comprehensive metadata repository to unite the data engineering and data governance groups. This repository includes everything from SQL queries and models to data assets and the cost implications of running queries or models.

Furthermore, this metadata can help identify key assets that need attention, automatically generate data quality rules through generative AI capabilities, and provide alerts with context so that teams can quickly reach a root cause analysis (RCA).
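
The interview does not specify the repository's schema, so the sketch below is only an assumed shape with hypothetical names: each data asset record links the queries and models that touch it, the cost of running them, and any auto-generated quality rules, giving governance and engineering teams one shared view.

```python
from dataclasses import dataclass, field

@dataclass
class QualityRule:
    column: str
    check: str            # e.g. "not_null", "unique", "range(0, 100)"
    generated_by: str     # "llm" if proposed by a generative model, "manual" otherwise

@dataclass
class AssetMetadata:
    asset: str                                            # table, view, or model name
    upstream_queries: list = field(default_factory=list)  # SQL that reads or writes it
    models: list = field(default_factory=list)            # ML models consuming it
    monthly_cost_usd: float = 0.0                          # cost of the queries/models run on it
    quality_rules: list = field(default_factory=list)

# Governance and engineering consult the same record: lineage, cost, and rules in one place.
orders = AssetMetadata(
    asset="analytics.orders",
    upstream_queries=["INSERT INTO analytics.orders SELECT ... FROM raw.orders"],
    models=["demand_forecast_v3"],
    monthly_cost_usd=420.0,
    quality_rules=[QualityRule("order_id", "not_null", "llm")],
)
print(orders.asset, "costs", orders.monthly_cost_usd, "USD/month")
```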

Reflecting on the company's future, Choudhary says that its accomplishments lie in the variety of real-world use cases that create an impact. For example, when milk products are stocked on shelves at retailers around the world, data observability helps ensure the right number of boxes is accurately delivered. When people use ink in their home printers, the company determines the optimal time to send them more ink to keep them printing.

Additionally, in financial services, it plays a role in validating credit scores for mortgages or short-term loans. Choudhary states that data leaders will see that this technology can be used in areas such as risk analytics in banking, electronic medical record (EMR) verification in healthcare, verifying payer information, or reconciling financial data between CRM systems, operational data stores, and analytics platforms.

In conclusion, Choudhary states that businesses will have better outcomes by putting observability across financial data supply chains and consumer data supply chains.

CDO Magazine appreciates Rohit Choudhary for sharing his insights with our global community.
