A Candid Conversation on Architecting Intelligent Compliance Systems With P. S. L. Narasimharao Davuluri
An interview with P. S. L. Narasimharao Davuluri on AI-driven compliance systems, cloud platforms, streaming data, and financial technology.
P. S. L. Narasimharao Davuluri is a data engineering expert working at the intersection of finance, compliance, and modern technology. With more than 16 years of experience, he has spent his career building systems that help large financial institutions manage data more quickly, securely, and accurately. His work covers real-time sanctions screening, cloud modernization, streaming platforms, and AI-driven compliance solutions.
Currently serving as an Associate Principal Data Engineer at LTIMindtree, Narasimharao has led projects involving AWS, Snowflake, Apache Kafka, Spark, and enterprise ETL systems. His experience includes designing platforms that process large volumes of financial transactions while improving performance and reducing delays. Over the years, he has also focused on improving data governance, reducing false positives in compliance systems, and creating stronger reporting frameworks for regulated industries. In this interview, he shares insights from his journey, his current work, and the changing direction of AI-powered compliance systems.
1. Narasimharao, we’re glad to have you here with us. You’ve spent over 16 years working across data engineering and financial systems. Could you walk us through your current role as Associate Principal Data Engineer, with an emphasis on the kind of problems you find yourself solving today, compared to earlier in your career?
Narasimharao Davuluri: Thank you, it’s a pleasure to be here.
Over the past 16 years, my journey has been one of progressive complexity. I started as a Data Engineering Specialist at LTIMindtree, working with Citigroup’s banking and compliance infrastructure. In those early years, the challenges were largely foundational: building reliable batch pipelines, ensuring data quality, and migrating legacy systems to more structured environments. The work was important, but the feedback loops were slow. You’d process a day’s worth of transactions overnight and surface issues the next morning.
In my current role as Associate Principal Data Engineer, the nature of the problems I solve has fundamentally changed. Today, I’m working at the intersection of real-time data engineering, financial crime compliance, and cloud-native architecture. My core focus is on sanctions screening and anti-money laundering platforms, where the cost of a missed signal or a false positive isn’t just operational; it can mean regulatory penalties or reputational damage for a global bank.
What’s also changed is the scope of ownership. Earlier in my career, I was primarily an executor of well-defined technical tasks. Today, I’m designing architectures, mentoring teams, driving cloud modernization on AWS and Snowflake, and increasingly integrating AI into compliance pipelines. Since 2023, my work has evolved into building AI-augmented frameworks that don’t just process data; they reason over it, flag anomalies intelligently, and help compliance teams stay audit-ready with far less manual effort.
In short, early in my career, I was building the pipes. Today, I’m designing the entire water system and making sure it responds in real time to any pressure changes.
2. Your research highlights a shift from traditional batch processing to hybrid batch-and-streaming architectures, particularly in sanctions screening systems. In one of your recent contributions, you emphasize optimizing Kafka streams for low-latency anomaly detection. When you bring those two approaches together, what changes in how teams respond to risk on a day-to-day basis?
Narasimharao Davuluri: The shift from pure batch to a hybrid batch-and-streaming model is not just a technical upgrade; it’s a fundamental change in how compliance teams experience and act on risk.
In a traditional batch environment, sanctions screening occurred in scheduled windows, typically nightly or hourly. By the time a suspicious transaction was flagged, it had often already been settled. Teams were essentially doing retrospective compliance, reacting to risk after the fact. That lag creates real exposure.
When we introduced hybrid architectures using Apache Kafka Streams alongside our existing batch layers, the first change was the response window. Kafka enabled us to screen transactions in near real-time, within milliseconds of initiation, while the batch layer continued to handle high-volume reconciliation, historical analysis, and regulatory reporting that didn’t require immediate action.
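To make the hybrid pattern concrete, here is a minimal sketch in Python: a plain consume-screen-produce loop standing in for the Kafka Streams topology, written against the kafka-python client. The topic names, watchlist, and matching rule are illustrative placeholders, not the production system.

```python
# A plain consume-screen-produce loop standing in for a Kafka Streams
# topology. Topic names, the watchlist, and the matching rule are
# illustrative placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "payments.initiated",                       # assumed input topic
    bootstrap_servers="localhost:9092",
    group_id="sanctions-screening",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Stand-in for a cached watchlist index; a real system uses pre-computed
# indices and fuzzy matching rather than naive exact lookup.
WATCHLIST = {"ACME TRADING LLC", "EXAMPLE HOLDINGS SA"}

def screen(txn: dict) -> dict | None:
    """Return an alert payload if the counterparty matches the watchlist."""
    name = txn.get("counterparty", "").strip().upper()
    if name in WATCHLIST:
        return {"txn_id": txn.get("txn_id"), "matched_name": name}
    return None

for message in consumer:
    alert = screen(message.value)
    if alert:
        # High-risk signals surface immediately on an alerts topic that
        # feeds live dashboards; the batch layer consumes the raw topic
        # separately for reconciliation and reporting.
        producer.send("sanctions.alerts", alert)
```

Because both paths read from the same raw topic, the streaming and batch layers stay consistent with each other: one optimizes for immediacy, the other for completeness.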
On a day-to-day basis, this changes everything for compliance teams. Analysts who used to come in each morning to a queue of yesterday’s flagged transactions now work with live dashboards. Alerts surface as events happen. This means investigations are opened faster, potentially before funds are moved, and that is enormously valuable in sanctions environments where even a brief settlement window can be irreversible.
It also changes the operational rhythm. Teams shift from a reactive, batch-review culture to a continuous-monitoring mindset. Runbooks, escalation protocols, and even staffing models have to evolve to support 24/7 alert responsiveness rather than shift-based batch reviews.
The hybrid model gives you the best of both worlds: the immediacy of streaming for high-risk signals, and the depth and completeness of batch for audit trails and regulatory submissions.
3. In your work around sanctions screening, you’ve focused on reducing false positives using entity resolution techniques. This area remains a persistent challenge in financial crime detection. Could you elaborate on how these techniques refine decision-making accuracy without slowing down high-volume transaction processing?
Narasimharao Davuluri: False positives are one of the most persistent and costly challenges in financial crime compliance. In sanctions screening, a false positive means a legitimate transaction is flagged, triggering manual review, operational overhead, and customer friction, all for something that wasn’t actually a risk. At high transaction volumes, even a small false positive rate translates into thousands of wasted analyst hours.
Entity resolution is the practice of determining whether two or more references, such as a transaction counterparty name and a name on a sanctions watchlist, actually refer to the same real-world entity. The challenge is that names appear in inconsistent formats. Transliterations vary, abbreviations differ, and data entry errors creep in. A naive string-match approach generates too many false positives and, paradoxically, can also miss true matches if the name format differs significantly.
My work focused on building multi-layered entity resolution pipelines that combine fuzzy matching, phonetic algorithms, and contextual enrichment factors like geography, entity type, and transaction patterns to score potential matches with much greater precision. Instead of a binary flag, the system outputs a confidence score, and thresholds are calibrated based on risk appetite.
The key to doing this without sacrificing throughput was architectural. Resolution logic was designed as lightweight, parallelizable microservices that could run within the Kafka streaming pipeline. We used pre-computed entity graphs and cached watchlist indices, so lookups didn’t require full-table scans at runtime. This allowed us to apply sophisticated matching logic at streaming speeds, maintaining sub-second latency even under peak transaction loads.
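A simplified, self-contained sketch of that multi-signal scoring appears below. The Soundex routine, weights, and context features are assumptions chosen for illustration, not the production algorithm, and in practice phonetic comparison is usually applied per name token rather than to the whole string.

```python
# Illustrative multi-signal match scoring: fuzzy string similarity plus a
# phonetic check plus contextual enrichment, blended into one confidence
# score. Weights and features are illustrative assumptions.
from difflib import SequenceMatcher

def soundex(name: str) -> str:
    """Classic Soundex: first letter plus three digits for consonant groups."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    letters = "".join(c for c in name.upper() if c.isalpha())
    if not letters:
        return ""
    encoded, prev = letters[0], codes.get(letters[0], "")
    for ch in letters[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            encoded += code
        if ch not in "HW":          # H and W do not reset the previous code
            prev = code
    return (encoded + "000")[:4]

def match_confidence(txn_name: str, list_name: str,
                     same_country: bool, same_entity_type: bool) -> float:
    """Blend string, phonetic, and context signals into a 0-1 confidence."""
    fuzzy = SequenceMatcher(None, txn_name.upper(), list_name.upper()).ratio()
    phonetic = 1.0 if soundex(txn_name) == soundex(list_name) else 0.0
    context = 0.5 * same_country + 0.5 * same_entity_type
    return 0.55 * fuzzy + 0.25 * phonetic + 0.20 * context

score = match_confidence("Jon Smyth Trading", "John Smith Trading",
                         same_country=True, same_entity_type=True)
print(f"confidence: {score:.2f}")
```

The score then routes the hit: above one threshold it becomes an immediate alert, within a middle band it goes to analyst review, and below a floor it is auto-cleared. The specific cut-offs are calibrated to the institution’s risk appetite rather than fixed in code.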
The outcome was a meaningful reduction in false positives, which freed compliance analysts to focus on genuine risk signals rather than noise, and that is ultimately what makes a screening system truly effective.
4. Your transition into AI-augmented compliance platforms since 2023 marks a significant evolution in your research. When designing intelligent systems that integrate AI with real-time data pipelines, what new possibilities emerge for regulatory reporting and audit readiness that were previously difficult to achieve?
Narasimharao Davuluri: The integration of AI into real-time data pipelines has opened up possibilities in regulatory reporting and audit readiness that were genuinely out of reach with traditional architectures.
Before AI augmentation, regulatory reporting was largely a structured, scheduled exercise. You’d extract data at defined intervals, run predefined aggregations, and produce reports that reflected a snapshot in time. Audit readiness meant maintaining documentation and hoping your records were consistent. Both processes were labor-intensive and reactive.
When AI is woven into the data pipeline itself, several things become possible. First, continuous regulatory reporting. Instead of periodic batch reports, AI models can monitor data streams and generate near-real-time compliance summaries. Regulators increasingly expect this level of timeliness, and AI makes it architecturally feasible.
Second, intelligent data lineage and governance. AI can automatically tag, classify, and trace data as it flows through the pipeline, understanding where it originated, how it was transformed, and where it was used. This makes audit trails not just complete but interpretable. When a regulator asks why a particular transaction was screened a certain way, the system can reconstruct the decision path with full traceability.
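As a sketch of what record-level lineage tagging can look like, the small envelope below appends an entry for every transformation so the decision path can be replayed later. The field names, steps, and values are invented for illustration.

```python
# Minimal record-level lineage envelope: each pipeline step appends an
# entry describing what it did, so the decision path can be reconstructed.
# Field names and step details are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    payload: dict
    lineage: list = field(default_factory=list)

    def tag(self, step: str, detail: str) -> "LineageRecord":
        self.lineage.append({
            "step": step,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return self

record = LineageRecord(payload={"txn_id": "T-1001", "counterparty": "acme trading llc"})
record.tag("ingest", "source=payments.initiated, format=json")

record.payload["counterparty"] = record.payload["counterparty"].upper()
record.tag("normalize", "counterparty uppercased for screening")

record.tag("screen", "watchlist v2024-06 matched, confidence=0.91")

# Replaying the trail answers "why was this transaction screened this way?"
for entry in record.lineage:
    print(entry["at"], entry["step"], "-", entry["detail"])
```

In a production pipeline this metadata typically travels in message headers or a governance catalog rather than inside the record itself, but the principle is the same.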
Third, AI enables anomaly detection in the reporting process itself, flagging inconsistencies in regulatory submissions before they’re filed, which was nearly impossible to do systematically in manual environments.
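A trivial stand-in for that idea is a statistical sanity check on a submission metric before filing. A real system would apply learned models across many metrics, but the shape is the same, and the numbers below are invented for the example.

```python
# Pre-filing sanity check: flag a report metric that deviates sharply from
# its recent history. A z-score test stands in for the AI-based detection
# described above; all values are invented for the example.
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float,
                 threshold: float = 3.0) -> bool:
    """Flag `current` if it lies more than `threshold` std devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Daily count of screened transactions reported in prior submissions.
prior_counts = [98_400, 101_250, 99_800, 100_900, 99_100]
todays_count = 61_200   # e.g., a dropped upstream feed causing an under-count

if is_anomalous(prior_counts, todays_count):
    print("Hold submission: reported volume inconsistent with recent history.")
```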
Since 2023, my research has focused specifically on designing these AI-augmented compliance frameworks for Citigroup’s environment. The most significant shift is that audit readiness moves from being a periodic scramble to a continuous, always-on state. The system is always prepared for scrutiny, and that fundamentally changes the relationship between compliance teams and regulatory oversight.
5. Having led cloud modernization initiatives with AWS and Snowflake, you’ve achieved notable outcomes, including 99.9% uptime and substantial cost optimization. How do cloud-native architectures reshape the long-term scalability and sustainability of compliance systems? And what tends to shift most for teams once the transition is complete, especially in regulated environments?
Narasimharao Davuluri: Cloud-native architecture fundamentally changes the economics and operational model of compliance systems, and the effects compound over time.
Traditional on-premises compliance infrastructure was sized for peak load. You built for the worst day of the year and paid for that capacity every day of the year. Scaling meant procurement cycles, hardware lead times, and significant capital expenditure. In a compliance context, where transaction volumes fluctuate dramatically (think quarter-end surges or geopolitical events triggering sanctions activity), that rigidity was a real constraint.
With AWS and Snowflake as the foundation, we shifted to an elastic, consumption-based model. The infrastructure scales up automatically during peak screening windows and scales back down in quieter periods. The 99.9% uptime we achieved wasn’t the result of over-provisioning; it came from architecting for resilience with multi-region redundancy, automated failover, and managed services that removed single points of failure.
The cost optimization dimension is equally significant. By applying FinOps principles, right-sizing workloads, leveraging reserved and spot capacity intelligently, and building data lifecycle policies, we achieved substantial reductions in infrastructure spend while improving performance. Sustainability here means both financial sustainability and operational longevity.
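One concrete form a data lifecycle policy can take is shown below as an S3 lifecycle configuration set through boto3. The bucket name, prefix, storage tiers, and retention period are assumptions for illustration, not a statement of any regulation or of the actual policy.

```python
# Illustrative data lifecycle policy: tier aging screening archives to
# cheaper storage and expire them after a retention period. The bucket,
# prefix, tiers, and day counts are assumptions, not real policy values.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="compliance-screening-archive",      # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-screening-results",
                "Filter": {"Prefix": "screening-results/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 2555},   # ~7 years, an assumed period
            }
        ]
    },
)
```

Small policies like this compound: cold data stops occupying premium storage automatically, which is part of how the FinOps discipline turns into sustained cost reduction rather than a one-off cleanup.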
As for what shifts most for teams after the transition, the answer is mindset and the ownership model. Teams stop thinking in terms of servers and start thinking in terms of services and data products. The operational burden of infrastructure maintenance largely disappears, and engineers redirect their energy toward higher-value work: building better pipelines, improving data quality, and innovating on the compliance logic itself. In regulated environments specifically, cloud-native also simplifies compliance with data residency and security standards, because the major cloud providers have already built certified frameworks that teams can inherit.
The transition is not without friction, but the teams that complete it rarely want to go back.
6. In the end, please shed light on the implications you see for the future role of data engineers and compliance professionals, given the growing self-sustainability of autonomous compliance systems that combine AI, streaming technologies, and FinOps principles.
Narasimharao Davuluri: This is a question I find genuinely exciting, because the trajectory I see is not one of displacement but one of elevation.
Autonomous compliance systems, those that combine AI, real-time streaming, and FinOps-driven infrastructure, are increasingly capable of handling the repetitive, rule-based work that once consumed most of a compliance team’s day. Routine transaction screening, standard alert triage, and scheduled report generation… these are becoming automated workflows. And that’s a good thing.
For data engineers, the implication is a shift in value creation. The role is moving away from pipeline plumbing toward designing intelligent data systems that learn, adapt, and self-optimize. Engineers who understand both the technical substrate and the compliance domain will be exceptionally valuable. The future data engineer in this space is part architect, part AI integrator, and part domain expert. My own research trajectory, from batch pipelines in 2019 to autonomous compliance frameworks in 2025-2026, reflects exactly this evolution.
For compliance professionals, the shift is equally profound. With autonomous systems handling monitoring and initial triage, the human role moves toward judgment, exception handling, and regulatory interpretation. Compliance officers will spend less time reviewing routine alerts and more time on complex investigations, policy design, and engaging with regulators on emerging risk typologies. That is a far more intellectually demanding and impactful role.
The organizations that will lead in this space are those that invest in people who can converse fluently across AI, data engineering, and financial regulation. Siloed expertise becomes a liability. Hybrid competency becomes the competitive advantage.
What I find most compelling about the autonomous compliance direction is that it doesn’t just make systems faster or cheaper; it makes the entire financial system more resilient. And that has implications far beyond any single institution.
Conclusion
P. S. L. Narasimharao Davuluri’s efforts, from working on ETL systems in the early stages of his career to leading large-scale compliance and cloud modernization projects, have remained focused on solving practical problems inside complex financial environments. His work shows how data systems are becoming more intelligent, faster, and more closely tied to real-time decision-making. In this interview, he offered valuable thoughts on the future of compliance technology, the role of AI in financial systems, and the growing importance of strong data governance. He is an example of how technical depth, research, and practical execution can come together to create systems that support both innovation and accountability in global finance.