talk-data.com

Topic: React

Tags: javascript_library, front_end, web_development

4 tagged activities

Activity Trend: peak of 9 activities per quarter (2020-Q1 to 2026-Q1)

Activities

Showing filtered results

Filtering by: Data + AI Summit 2025

Transforming Customer Processes and Gaining Productivity With Lakeflow Declarative Pipelines

Bradesco Bank is one of the largest private banks in Latin America, with over 75 million customers and more than 80 years of presence in FSI. In the digital business, the velocity with which you react to customer interactions is crucial to success. In the legacy landscape, acquiring data points on interactions across digital and marketing channels was complex, costly and lacked integrity due to the typical fragmentation of tools. With the new in-house Customer Data Platform powered by the Databricks Intelligent Platform, it was possible to completely transform the data strategy around customer data. Using key components such as Uniform and Lakeflow Declarative Pipelines, it was possible to increase data integrity, reduce latency and processing time and, most importantly, boost personal productivity and business agility. Months of reprocessing, weeks of human labor, and cumbersome, complex data integrations were dramatically simplified, achieving significant operational efficiency.
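
The abstract gives no implementation details, but the building blocks it names (Lakeflow Declarative Pipelines and Uniform-enabled Delta tables with quality checks) suggest a pipeline shaped roughly like the minimal sketch below. All table names, the landing path, the UniForm-related table properties, and the expectation rules are illustrative assumptions, not Bradesco's actual code.

    # Minimal Lakeflow Declarative Pipelines sketch (illustrative only):
    # a bronze ingest table plus a silver table guarded by data-quality
    # expectations. `spark` is provided by the pipeline runtime.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(
        comment="Raw customer interaction events from digital channels (hypothetical).",
        table_properties={
            # Assumed UniForm-style settings so other engines can read the Delta table as Iceberg.
            "delta.enableIcebergCompatV2": "true",
            "delta.universalFormat.enabledFormats": "iceberg",
        },
    )
    def interactions_bronze():
        return (
            spark.readStream.format("cloudFiles")      # incremental file ingestion (Auto Loader)
            .option("cloudFiles.format", "json")
            .load("/Volumes/cdp/raw/interactions/")    # hypothetical landing path
        )

    @dlt.table(comment="Validated interactions for downstream personalization (hypothetical).")
    @dlt.expect_or_drop("valid_customer", "customer_id IS NOT NULL")
    @dlt.expect_or_drop("valid_timestamp", "event_ts IS NOT NULL")
    def interactions_silver():
        return dlt.read_stream("interactions_bronze").withColumn("event_date", F.to_date("event_ts"))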

Metadata-Driven Streaming Ingestion Using Lakeflow Declarative Pipelines, Azure Event Hubs and a Schema Registry

At Plexure, we ingest hundreds of millions of customer activities and transactions into our data platform every day, fuelling our personalisation engine and providing insights into the effectiveness of marketing campaigns. We're on a journey to transition from infrequent batch ingestion to near real-time streaming using Azure Event Hubs and Lakeflow Declarative Pipelines. This transformation will allow us to react to customer behaviour as it happens, rather than hours or even days later. It also enables us to move faster in other ways. By leveraging a Schema Registry, we've created a metadata-driven framework that allows data producers to evolve schemas with confidence, ensuring downstream processes continue running smoothly, and to seamlessly publish new datasets into the data platform without requiring Data Engineering assistance. Join us to learn more about our journey and see how we're implementing this with Lakeflow Declarative Pipelines meta-programming, including a live demo of the end-to-end process!
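
The "meta-programming" the talk mentions is typically done by generating pipeline tables in a loop over a metadata list, so a producer can publish a new dataset by adding a config entry rather than writing pipeline code. The sketch below illustrates that pattern under stated assumptions: the DATASETS list, the Event Hub names, and the lookup_schema helper are hypothetical stand-ins for Plexure's framework and schema registry.

    # Hypothetical metadata-driven ingestion: one streaming bronze table per
    # dataset declared in a config list; schemas come from a registry lookup.
    import dlt
    from pyspark.sql import functions as F

    DATASETS = [  # illustrative metadata; in practice this could live in a control table or file
        {"name": "customer_activity", "eventhub": "activity-events"},
        {"name": "transactions", "eventhub": "transaction-events"},
    ]

    def lookup_schema(dataset_name: str) -> str:
        # Stand-in for a schema-registry call; returns a DDL schema string.
        return "customer_id STRING, event_ts TIMESTAMP, payload STRING"

    def make_table(meta: dict) -> None:
        @dlt.table(name=f"{meta['name']}_bronze",
                   comment=f"Streaming ingest of {meta['eventhub']} (illustrative).")
        def _table():
            schema = lookup_schema(meta["name"])
            return (
                spark.readStream.format("kafka")  # Event Hubs exposes a Kafka-compatible endpoint
                .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
                .option("subscribe", meta["eventhub"])  # auth options omitted for brevity
                .load()
                .select(F.from_json(F.col("value").cast("string"), schema).alias("payload"))
                .select("payload.*")
            )

    # Tables are defined at pipeline graph-construction time, one per metadata entry.
    for meta in DATASETS:
        make_table(meta)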

Moody's AI Screening Agent: Automating Compliance Decisions

The AI Screening Agent automates the Level 1 (L1) screening process, which is essential for Know Your Customer (KYC) and compliance due diligence during customer onboarding. The system aims to minimize false positives, significantly reducing human review time and costs. Going beyond typical Retrieval-Augmented Generation (RAG) applications such as summarization and chat-with-your-data (CWYD), the AI Screening Agent employs a ReAct architecture with intelligent tools, enabling it to perform complex compliance decision-making with human-like accuracy and greater consistency. In this talk, I will explore the screening agent architecture and demonstrate its ability to meet evolving client policies. I will cover evaluation and configuration management using MLflow LLM-as-judge and Unity Catalog, and discuss challenges such as data fidelity and customization. This session underscores the transformative potential of AI agents in compliance workflows, emphasizing their adaptability, accuracy, and consistency.
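
For readers unfamiliar with the term, a ReAct-style agent alternates between a reasoning step, in which the model picks a tool, and an acting step, in which that tool is executed and its observation is fed back into the next model call, until a final decision is produced. The loop below is a generic sketch of that control flow, not Moody's implementation; the tool functions and the call_llm placeholder are assumptions.

    # Generic ReAct-style screening loop (illustrative only).
    import json
    from typing import Callable

    def search_watchlists(name: str) -> str:
        # Hypothetical tool: check a name against sanctions/watchlist data.
        return json.dumps({"matches": []})

    def fetch_adverse_media(name: str) -> str:
        # Hypothetical tool: retrieve adverse-media snippets for a name.
        return json.dumps({"articles": []})

    TOOLS: dict[str, Callable[[str], str]] = {
        "search_watchlists": search_watchlists,
        "fetch_adverse_media": fetch_adverse_media,
    }

    def call_llm(messages: list) -> dict:
        # Placeholder for the model call; expected to return either
        # {"tool": <name>, "input": <arg>} or {"final": <decision dict>}.
        raise NotImplementedError

    def screen(candidate_name: str, max_steps: int = 5) -> dict:
        messages = [{"role": "user", "content": f"Screen customer: {candidate_name}"}]
        for _ in range(max_steps):
            step = call_llm(messages)                           # reason: pick a tool or finish
            if "final" in step:
                return step["final"]                            # e.g. clear, or escalate with rationale
            observation = TOOLS[step["tool"]](step["input"])    # act: run the chosen tool
            messages.append({"role": "assistant", "content": json.dumps(step)})
            messages.append({"role": "tool", "content": observation})  # observe
        return {"decision": "escalate_to_human", "reason": "step budget exhausted"}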

Lakeflow Connect: The Game-Changer for Complex Event-Driven Architectures

In 2020, Delaware implemented a state-of-the-art, event-driven architecture for EFSA, enabling a highly decoupled system landscape, which was presented at the Data&AI Summit 2021. By centrally brokering events in near real time, consumer applications react instantly to events from producer applications as they occur, and event producers are decoupled from consumers via a publisher/subscriber mechanism. Over the past years, however, we noticed some drawbacks: the processing of these custom events, primarily aimed at process integration, did not cover all edge cases; data quality was not always optimal due to missing events; and we needed complex logic to build SCD2 tables. Lakeflow Connect allows us to extract the data directly from the source without the complex architecture in between, avoiding data loss and the resulting data quality issues, and with some simple adjustments an SCD2 table is created automatically. Lakeflow Connect lets us provision data more efficiently and intelligently.
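
The claim that an SCD2 table "is created automatically" maps onto the declarative change-data-capture support in Lakeflow: over the change feed ingested by Lakeflow Connect, a single apply_changes flow with stored_as_scd_type=2 maintains the history table. The sketch below shows that shape; the source, key, and sequencing column names are assumptions, not Delaware's actual objects.

    # Illustrative SCD Type 2 flow over a CDC feed ingested by Lakeflow Connect.
    import dlt

    dlt.create_streaming_table("customers_scd2")

    dlt.apply_changes(
        target="customers_scd2",
        source="lakeflow_connect_customers_changes",  # assumed name of the ingested change feed
        keys=["customer_id"],                         # business key (assumption)
        sequence_by="change_ts",                      # ordering column (assumption)
        stored_as_scd_type=2,                         # keep full history (__START_AT / __END_AT)
    )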