As Europe's top B2B used-goods auction platform, TBAuctions is entering the AI era. Roberto Bonilla, Lead Data Engineer, shows how Databricks, Azure, Terraform, MLflow, and LangGraph come together to simplify complex AI workflows. Bas Lucieer, Head of Data, details the strategy and change management needed to bring a sales-driven organization along, ensuring adoption and lasting value. Together they show how technology and strategy combine into a marketplace edge.
Topic: Azure (Microsoft Azure)
Top Events
Many organizations invest heavily in modern data platforms, yet see adoption lag behind. In this session you will learn how, using our 7-step model for data-driven working (step 2, "Make a plan", and step 6, "Share the knowledge"), you can ensure platform choices that are backed by the business. Including tips for making Azure, Databricks, or Fabric land not only technically but also organizationally.
Elliot Foreman and Andrew DeLave from ProsperOps joined Yuliia and Dumky to discuss automated cloud cost optimization through commitment management. As Google go-to-market director and senior FinOps specialist, they explain how their platform manages over $4 billion in cloud spend by automating reserved instances, committed use discounts, and savings plans across AWS, Azure, and Google Cloud. The conversation covers the psychology behind commitment hesitation, break-even point mathematics for cloud discounts, workload volatility optimization, and why they avoid AI in favor of deterministic algorithms for financial decisions. They share insights on managing complex multi-cloud environments, the human vs automation debate in FinOps, and practical strategies for reducing cloud costs while mitigating commitment risks.
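To make the break-even idea concrete, here is a small illustrative calculation with made-up numbers (not ProsperOps figures): with a commitment discount d, a commitment only pays off once actual utilization of the committed capacity exceeds 1 - d.

```python
# Illustrative break-even math for a commitment discount (made-up numbers).
on_demand_rate = 1.00          # $ per instance-hour, on demand
discount = 0.35                # hypothetical 35% commitment discount
committed_rate = on_demand_rate * (1 - discount)

# Committed hours are paid for whether used or not, so the break-even
# utilization is the fraction of committed capacity you must actually use
# for the commitment to cost no more than on demand: exactly 1 - discount.
break_even_utilization = committed_rate / on_demand_rate
print(f"Break-even utilization: {break_even_utilization:.0%}")   # 65%

# Example: commit to 1,000 hours but only use 700 of them.
committed_hours, used_hours = 1_000, 700
cost_with_commitment = committed_hours * committed_rate    # 650.0
cost_on_demand_only = used_hours * on_demand_rate          # 700.0
print(cost_with_commitment, cost_on_demand_only)           # commitment still wins
```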
A session on re-architecting data workflows to significantly improve performance in Azure Data Factory: an insider's look at different architectural designs, with practical insights for faster, less complex data processing.
A practical deep-dive into Azure DevOps pipelines, the Azure CLI, and how to combine pipeline, Bicep, and Python templates to build a fully automated web app deployment system. Deploying a new proof-of-concept app in a real enterprise environment has never been faster.
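As a rough illustration of the kind of automation the session describes (not its actual templates), a small Python helper could shell out to the Azure CLI to deploy a Bicep template; the resource group, file path, and parameter names below are placeholders.

```python
# Hypothetical helper: deploy a Bicep template through the Azure CLI.
# Resource group, template path, and parameter names are placeholders.
import subprocess

def deploy_bicep(resource_group: str, template_file: str, app_name: str) -> None:
    """Run 'az deployment group create' and fail loudly if the deployment fails."""
    subprocess.run(
        [
            "az", "deployment", "group", "create",
            "--resource-group", resource_group,
            "--template-file", template_file,
            "--parameters", f"appName={app_name}",
        ],
        check=True,
    )

if __name__ == "__main__":
    deploy_bicep("rg-poc-webapp", "infra/main.bicep", "poc-webapp-001")
```

In an Azure DevOps pipeline, the same call would typically sit behind an Azure CLI task so the service connection handles authentication.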
Join Jay Parikh, Microsoft EVP of Core AI, as he opens MCP DevDays with an exciting look at how the Model Context Protocol is revolutionizing AI application development. Discover why Microsoft is all-in on MCP and how it's accelerating developer productivity across VS Code, GitHub Copilot, Azure AI Foundry, and Windows. This keynote features lightning demos showcasing real-world MCP implementations. Whether you're a developer, tool builder, or AI enthusiast, this session sets the stage for two days of hands-on learning about the protocol that's defining the next generation of intelligent applications.
Prepare for Microsoft Exam DP-300 and demonstrate your real-world foundational knowledge of Azure database administration, using a variety of methods and tools to perform and automate day-to-day operations, including Transact-SQL (T-SQL) and other tools for administrative management purposes. Designed for database administrators, solution architects, data scientists, and other data professionals, this Exam Ref focuses on the critical-thinking and decision-making acumen needed for success at the Microsoft Certified: Azure Database Administrator Associate level. Focus on the expertise measured by these objectives:
- Plan and implement data platform resources
- Implement a secure environment
- Monitor, configure, and optimize database resources
- Configure and manage automation of tasks
- Plan and configure a high availability and disaster recovery (HA/DR) environment
This Microsoft Exam Ref:
- Organizes its coverage by the Skills Measured list published for the exam
- Features strategic, what-if scenarios to challenge you
- Assumes you have subject matter expertise in building database solutions designed to support multiple workloads built with SQL Server on-premises and Azure SQL
About the Exam: Exam DP-300 focuses on core knowledge for implementing and managing the operational aspects of cloud-native and hybrid data platform solutions built on SQL Server and Azure SQL services, using a variety of methods and tools to perform and automate day-to-day operations, including Transact-SQL (T-SQL) and other tools for administrative management purposes.
About Microsoft Certification: Passing this exam fulfills your requirements for the Microsoft Certified: Azure Database Administrator Associate certification, demonstrating your ability to administer a SQL Server database infrastructure for cloud, on-premises, and hybrid relational databases using the Microsoft PaaS relational database offerings. See full details at: microsoft.com/learn.
Fabric is also ideal for small businesses. At Casa Vicens Gaudí, Esbrina extracted all bookings into the Lakehouse and built a semantic model in import mode. Since it was the only workload, an Azure logic app was used to turn the capacity on and off, optimizing costs.
Microsoft Fabric is transforming how organizations build unified data platforms for analytics, data science, and business intelligence. Until recently, deploying and managing Fabric resources required manual effort or ad hoc automation. That changed with the release of the Terraform provider for Microsoft Fabric last year, enabling teams to manage Fabric infrastructure as code. In this session, you'll learn how to get started using Terraform to provision and manage Microsoft Fabric components — including workspaces, pipelines, dataflows, and more — in a repeatable and scalable way. Aimed at data engineers, cloud architects, and DevOps professionals, we'll cover core Terraform concepts, walk through practical examples, and share best practices for integrating with Azure and CI/CD workflows. By the end of the session, you'll be equipped to bring automation, consistency, and governance to your Microsoft Fabric environments using Terraform.
This talk explores EDB’s journey from siloed reporting to a unified data platform, powered by Airflow. We’ll delve into the architectural evolution, showcasing how Airflow orchestrates a diverse range of use cases, from Analytics Engineering to complex MLOps pipelines. Learn how EDB leverages Airflow and Cosmos to integrate dbt for robust data transformations, ensuring data quality and consistency. We’ll provide a detailed case study of our MLOps implementation, demonstrating how Airflow manages training, inference, and model monitoring pipelines for Azure Machine Learning models. Discover the design considerations driven by our internal data governance framework and gain insights into our future plans for AIOps integration with Airflow.
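For orientation, a stripped-down Airflow DAG along these lines might trigger a dbt build as a single task; the session's actual setup uses Cosmos, which expands the dbt project into individual Airflow tasks, and the project path here is a placeholder.

```python
# Simplified Airflow DAG: run a dbt project as one task (placeholder path).
# EDB's setup uses Cosmos to map dbt models to individual Airflow tasks;
# this sketch shows only the plain-Airflow equivalent.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_dbt = BashOperator(
        task_id="dbt_build",
        bash_command="cd /opt/airflow/dbt/analytics && dbt build",
    )
```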
Many SRE teams still rely on manual intervention for incident handling; automation can improve response times and reduce toil. We will cover:
- Setting up comprehensive observability: Cloud Logging, Cloud Monitoring, and OpenTelemetry (a minimal tracing sketch follows this description)
- Incident automation strategies: Runbooks, Auto-Healing, and ChatOps
- Lessons from AWS CloudWatch and Azure Monitor applied to GCP
- Case study: reducing MTTR (Mean Time to Resolution) through automated detection and remediation
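As a flavor of the observability setup, a minimal OpenTelemetry tracing snippet in Python might look like the following; the console exporter stands in for a Cloud Trace/Cloud Monitoring exporter and the service name is a placeholder.

```python
# Minimal OpenTelemetry tracing setup (console exporter as a stand-in for a
# GCP exporter; service name and span attributes are placeholders).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout-api"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("handle_request") as span:
    span.set_attribute("http.route", "/checkout")
    # ... the application work you want traced ...
```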
This comprehensive guide is designed to address the most frequent and challenging issues faced by users of Power Query, a powerful data transformation tool integrated into Excel, Power BI, and Microsoft Azure. By tackling 96 real-world problems with practical, step-by-step solutions, this book is an essential resource for data analysts, Excel enthusiasts, and Power BI professionals. It aims to enhance your data transformation skills and improve efficiency in handling complex data sets. Structured into 12 chapters, the book covers specific areas of Power Query such as data extraction, referencing, column splitting and merging, sorting and filtering, and pivoting and unpivoting tables. You will learn to combine data from Excel files with varying column names, handle multi-row headers, perform advanced filtering, and manage missing values using techniques such as linear interpolation and K-nearest neighbors (K-NN) imputation. The book also dives into advanced Power Query functions such as Table.Group, List.Accumulate, and List.Generate, explored through practical examples such as calculating running totals and implementing complex grouping and iterative processes (a rough Python analogue of the running-total fold appears after this description). Additionally, it covers crucial topics such as error-handling strategies, custom function creation, and the integration of Python and R with Power Query. In addition to providing explanations on the use of functions and the M language for solving real-world challenges, this book discusses optimization techniques for data cleaning processes and improving computational speed. It also compares the execution time of functions across different patterns and proposes the optimal approach based on these comparisons. In today's data-driven world, mastering Power Query is crucial for accurate and efficient data processing. But as data complexity grows, so do the challenges and pitfalls that users face. This book serves as your guide through the noise and your key to unlocking the full potential of Power Query. You'll quickly learn to navigate and resolve common issues, enabling you to transform raw data into actionable insights with confidence and precision.
What You Will Learn:
- Master data extraction and transformation techniques for various Excel file structures
- Apply advanced filtering, sorting, and grouping methods to organize and analyze data
- Leverage powerful functions such as Table.Group, List.Accumulate, and List.Generate for complex transformations
- Optimize queries to execute faster
- Create and utilize custom functions to handle iterative processes and advanced list transformations
- Implement effective error-handling strategies, including removing erroneous rows and extracting error reasons
- Customize Power Query solutions to meet specific business needs and share custom functions across files
Who This Book Is For: Aspiring and developing data professionals using Power Query in Excel or Power BI who seek practical solutions to enhance their skills and streamline complex data transformation workflows.
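Since the book's examples are written in Power Query's M language, here is only a rough Python analogue of the running-total fold it builds with List.Accumulate, included for illustration.

```python
# Rough Python analogue of a running total expressed as a fold, mirroring the
# List.Accumulate pattern described in the book (illustrative only).
from functools import reduce

values = [120, 80, 200, 50]

def step(acc, x):
    # acc holds the running totals computed so far
    return acc + [(acc[-1] if acc else 0) + x]

running_totals = reduce(step, values, [])
print(running_totals)   # [120, 200, 400, 450]
```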
Erste Group's transition to Azure Databricks marked a significant upgrade from a legacy system to a secure, scalable and cost-effective cloud platform. The initial architecture, characterized by a complex hub-spoke design and stringent compliance regulations, was replaced with a more efficient solution. The phased migration addressed high network costs and operational inefficiencies, resulting in a 60% reduction in networking costs and a 30% reduction in compute costs for the central team. This transformation, completed over a year, now supports real-time analytics, advanced machine learning and GenAI while ensuring compliance with European regulations. The new platform features Unity Catalog, separate data catalogs and dedicated workspaces, demonstrating a successful shift to a cloud-based machine learning environment with significant improvements in cost, performance and security.
How do you transform a data pipeline from sluggish 10-hour batch processing into a real-time powerhouse that delivers insights in just 10 minutes? This was the challenge we tackled at one of France's largest manufacturing companies, where data integration and analytics were mission-critical for supply chain optimization. Power BI dashboards needed to refresh every 15 minutes, but our team struggled with legacy Azure Data Factory batch pipelines. These outdated processes couldn't keep up, delaying insights and generating up to three daily incident tickets. We identified Lakeflow Declarative Pipelines and Databricks SQL as the game-changing solution to modernize our workflow, implement quality checks, and reduce processing times. In this session, we'll dive into the key factors behind our success:
- Pipeline modernization with Lakeflow Declarative Pipelines: improving scalability (a minimal pipeline sketch follows this description)
- Data quality enforcement: clean, reliable datasets
- Seamless BI integration: using Databricks SQL to power fast, efficient queries in Power BI
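A minimal sketch of what a Lakeflow Declarative Pipelines (Delta Live Tables) table with a quality expectation can look like in Python; the source path, table names, and columns are placeholders rather than the pipeline from the talk.

```python
# Minimal Lakeflow Declarative Pipelines / Delta Live Tables sketch.
# Paths, table names, and columns are placeholders; `spark` is provided
# by the pipeline runtime.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw supply-chain events ingested with Auto Loader")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/supply_chain/landing/")
    )

@dlt.table(comment="Validated events served to Power BI via Databricks SQL")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def clean_events():
    return dlt.read_stream("raw_events").withColumn("ingested_at", F.current_timestamp())
```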
SMBC, a major Japanese multinational financial services institution, has embarked on an initiative to build a GenAI-powered, modern and well-governed cloud data platform on Azure/Databricks. This initiative aims to build an enterprise data foundation encompassing loans, deposits, securities, derivatives, and other data domains. Its primary goals are:
- To decommission legacy data platforms and reduce data sprawl by migrating 20+ core banking systems to a multi-tenant Azure Databricks architecture
- To leverage Databricks' Delta Sharing capabilities to address SMBC's unique global footprint and data sharing needs
- To govern data by design using Unity Catalog
- To achieve global adoption of the frameworks, accelerators, architecture and tool stack to support similar implementations across EMEA
Deloitte and SMBC leveraged the Brickbuilder asset "Data as a Service for Banking" to accelerate this highly strategic transformation.
Deploying AI models efficiently and consistently is a challenge many organizations face. This session will explore how Vizient built a standardized MLOps stack using Databricks and Azure DevOps to streamline model development, deployment and monitoring. Attendees will gain insights into how Databricks Asset Bundles were leveraged to create reproducible, scalable pipelines and how Infrastructure-as-Code principles accelerated onboarding for new AI projects. The talk will cover:
- End-to-end MLOps stack setup, ensuring efficiency and governance
- CI/CD pipeline architecture, automating model versioning and deployment (a brief MLflow sketch follows this description)
- Standardizing AI model repositories, reducing development and deployment time
- Lessons learned, including challenges and best practices
By the end of this session, participants will have a roadmap for implementing a scalable, reusable MLOps framework that enhances operational efficiency across AI initiatives.
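For context on the model-versioning step, a bare-bones MLflow snippet that logs and registers a model version might look like this; the experiment and registered-model names are placeholders, and Vizient's actual asset bundles and Azure DevOps pipelines are not shown.

```python
# Bare-bones MLflow sketch: train, log, and register a model version.
# Experiment and registered-model names are placeholders.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_experiment("/Shared/demo-forecasting")

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = RandomForestClassifier(n_estimators=50, random_state=42).fit(X, y)

with mlflow.start_run():
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo_forecasting_model",   # creates a new version
    )
```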
In this session you will learn how to leverage a wide set of GenAI models in Databricks, including external connections to cloud vendors and other model providers. We will cover establishing connections to externally served models via Mosaic AI Gateway, showcasing connections to Azure, AWS, and Google Cloud models, as well as model vendors like Anthropic, Cohere, AI21 Labs and more. You will also discover best practices for model comparison, governance, and cost control on those model deployments.
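As an illustration, querying a Databricks serving endpoint that fronts an external provider can be done with the MLflow deployments client; the endpoint name and prompt below are placeholders.

```python
# Query a Databricks model serving endpoint (e.g. one routed to an external
# provider through Mosaic AI Gateway). Endpoint name and prompt are placeholders.
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

response = client.predict(
    endpoint="my-external-chat-endpoint",
    inputs={
        "messages": [{"role": "user", "content": "Summarize our Q2 sales trends."}],
        "max_tokens": 200,
    },
)
print(response)
```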
Adobe’s Real-Time Customer Data Platform relies on the identity graph to connect over 70 billion identities and deliver personalized experiences. This session will showcase how the platform leverages Databricks, Spark Streaming and Delta Lake, along with 25+ Databricks deployments across multiple regions and clouds — Azure & AWS — to process terabytes of data daily and handle over a million records per second. The talk will highlight the platform’s ability to scale, demonstrating a 10x increase in ingestion pipeline capacity to accommodate peak traffic during events like the Super Bowl. Attendees will learn about the technical strategies employed, including migrating from Flink to Spark Streaming, optimizing data deduplication, and implementing robust monitoring and anomaly detection. Discover how these optimizations enable Adobe to deliver real-time identity resolution at scale while ensuring compliance and privacy.
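To illustrate the deduplication piece, a stripped-down Structured Streaming job that drops duplicate identity events within a watermark window might look like this; table and column names are placeholders, and Adobe's actual pipelines are far more involved.

```python
# Stripped-down Structured Streaming dedup sketch (placeholder tables/columns).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("identity-dedup-sketch").getOrCreate()

deduped = (
    spark.readStream.table("raw.identity_events")        # placeholder source
    .withWatermark("event_time", "10 minutes")            # bound dedup state
    .dropDuplicates(["identity_id", "event_id"])           # drop repeated records
)

query = (
    deduped.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/identity_dedup")
    .toTable("clean.identity_events")                      # placeholder target
)
```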
At Plexure, we ingest hundreds of millions of customer activities and transactions into our data platform every day, fuelling our personalisation engine and providing insights into the effectiveness of marketing campaigns. We're on a journey to transition from infrequent batch ingestion to near real-time streaming using Azure Event Hubs and Lakeflow Declarative Pipelines. This transformation will allow us to react to customer behaviour as it happens, rather than hours or even days later. It also enables us to move faster in other ways. By leveraging a Schema Registry, we've created a metadata-driven framework that allows data producers to:
- Evolve schemas with confidence, ensuring downstream processes continue running smoothly.
- Seamlessly publish new datasets into the data platform without requiring Data Engineering assistance.
Join us to learn more about our journey and see how we're implementing this with Lakeflow Declarative Pipelines meta-programming - including a live demo of the end-to-end process!
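To give a sense of the meta-programming pattern, a simplified sketch that generates one bronze table per entry in a metadata dictionary might look like this; dataset names and paths are placeholders, not Plexure's framework.

```python
# Simplified Lakeflow Declarative Pipelines meta-programming sketch:
# generate one table definition per metadata entry (placeholder names/paths);
# `spark` is provided by the pipeline runtime.
import dlt

datasets = {
    "customer_activities": "/Volumes/main/ingest/customer_activities/",
    "transactions": "/Volumes/main/ingest/transactions/",
}

def make_table(name: str, path: str):
    @dlt.table(name=f"bronze_{name}", comment=f"Auto-generated bronze table for {name}")
    def bronze():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "avro")   # schema handled via the registry
            .load(path)
        )

for dataset_name, dataset_path in datasets.items():
    make_table(dataset_name, dataset_path)
```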
In this presentation, we'll show how we achieved a unified development experience for teams working on Mercedes-Benz Data Platforms in AWS and Azure. We will demonstrate how we implemented Azure to AWS and AWS to Azure data product sharing (using Delta Sharing and Cloud Tokens), integration with AWS Glue Iceberg tables through UniForm and automation to drive everything using Azure DevOps Pipelines and DABs. We will also show how to monitor and track cloud egress costs and how we present a consolidated view of all the data products and relevant cost information. The end goal is to show how customers can offer the same user experience to their engineers and not have to worry about which cloud or region the Data Product lives in. Instead, they can enroll in the data product through self-service and have it available to them in minutes, regardless of where it originates.
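For a flavor of the cross-cloud consumption side, the open-source delta-sharing Python client can load a shared table directly; the profile file and share/schema/table names below are placeholders.

```python
# Read a Delta Sharing table with the open-source delta-sharing client.
# Profile file and share/schema/table names are placeholders.
import delta_sharing

profile = "/dbfs/FileStore/shares/mb_share.share"        # credentials from the provider
table_url = f"{profile}#sales_share.supply_chain.orders"

# Load into pandas for a quick look; use load_as_spark on a cluster instead.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```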