talk-data.com

Topic: AWS Lambda

Tags: serverless, faas, aws · 9 activities tagged

Activity Trend: peak of 5 activities per quarter, 2020-Q1 to 2026-Q1

Activities (9) · Newest first

Paul Andrew: An Evolution of Data Architectures - Lambda, Kappa, Delta, Mesh & Fabric

🌟 Session Overview 🌟

Session Name: An Evolution of Data Architectures - Lambda, Kappa, Delta, Mesh & Fabric
Speaker: Paul Andrew
Session Description: How have advancements in highly scalable cloud technology influenced the design principles we apply when building data platform solutions? Are we designing solely for speed and batch layers, or do we want more from our platforms? Who says these patterns must be delivered exclusively?

Let’s disrupt the theory and consider the practical application of everything Microsoft now has to offer, where concepts and patterns meet technology. Can we now utilize cloud technology to build architectures that cater to Lambda, Kappa, and Delta Lake concepts in a complete stack of services? Should we be considering a solution that offers all these principles in a nirvana of data insight perfection? How does the concept of Data Fabric align with Microsoft Fabric as a product?

In this session, we’ll explore the answers to these questions and more in a thought-provoking, argument-generating examination of the challenges every data platform engineer/architect faces.

🚀 About Big Data and RPA 2024 🚀

Unlock the future of innovation and automation at Big Data & RPA Conference Europe 2024! 🌟 This unique event brings together the brightest minds in big data, machine learning, AI, and robotic process automation to explore cutting-edge solutions and trends shaping the tech landscape. Perfect for data engineers, analysts, RPA developers, and business leaders, the conference offers dual insights into the power of data-driven strategies and intelligent automation. 🚀 Gain practical knowledge on topics like hyperautomation, AI integration, advanced analytics, and workflow optimization while networking with global experts. Don’t miss this exclusive opportunity to expand your expertise and revolutionize your processes—all from the comfort of your home! 📊🤖✨

📅 Yearly Conferences: Curious about the evolution of QA? Check out our archive of past Big Data & RPA sessions. Watch the strategies and technologies evolve in our videos! 🚀
🔗 Find Other Years' Videos:
2023 Big Data Conference Europe: https://www.youtube.com/playlist?list=PLqYhGsQ9iSEpb_oyAsg67PhpbrkCC59_g
2022 Big Data Conference Europe Online: https://www.youtube.com/playlist?list=PLqYhGsQ9iSEryAOjmvdiaXTfjCg5j3HhT
2021 Big Data Conference Europe Online: https://www.youtube.com/playlist?list=PLqYhGsQ9iSEqHwbQoWEXEJALFLKVDRXiP

💡 Stay Connected & Updated 💡

Don’t miss out on any updates or upcoming event information from Big Data & RPA Conference Europe. Follow us on our social media channels and visit our website to stay in the loop!

🌐 Website: https://bigdataconference.eu/, https://rpaconference.eu/
👤 Facebook: https://www.facebook.com/bigdataconf, https://www.facebook.com/rpaeurope/
🐦 Twitter: @BigDataConfEU, @europe_rpa
🔗 LinkedIn: https://www.linkedin.com/company/73234449/admin/dashboard/, https://www.linkedin.com/company/75464753/admin/dashboard/
🎥 YouTube: http://www.youtube.com/@DATAMINERLT

AWS re:Invent 2024 - Build Amazon Q apps to scale and drive community engagement (DEV201)

In this session, discover how AI services, including Amazon Q Business, can help scale and improve community engagement, streamline event planning, and handle everyday tasks as an event organizer. Dive into technical insights with a demo as we share practical ideas and offer guidance on getting started. Learn how to use AI and services like Amazon S3 and AWS Lambda. Building and growing communities is crucial. Whether you’re organizing monthly meetups or full-scale events, learn how to use these tools to work smarter and more efficiently.

Learn more: AWS re:Invent: https://go.aws/reinvent. More AWS events: https://go.aws/3kss9CP

Subscribe: More AWS videos: http://bit.ly/2O3zS75 More AWS events videos: http://bit.ly/316g9t4

About AWS: Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#AWSreInvent #AWSreInvent2024

AWS re:Inforce 2024 - Accelerating auditing and compliance for generative AI on AWS (GRC302)

Generative AI brings exciting new innovations, but it also presents challenges regarding responsible usage and compliance with governance requirements. This session guides you through the journey of a generative AI application and how AWS can help you ensure that your use of Amazon Bedrock and other related services, such as Amazon S3, AWS Lambda, and Amazon VPC, follows best practices for compliance and governance. Explore compliance services that AWS offers, like AWS Audit Manager and AWS CloudTrail, that can assist you in continuously auditing your generative AI infrastructure. Learn how these services automate audit evidence collection and provide audit-ready reports to meet your compliance and audit needs.
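As an illustration of the continuous-auditing idea above, here is a minimal boto3 sketch (an assumed approach, not material from the session) that lists recent CloudTrail management events emitted by Amazon Bedrock; the region, time window, and filter values are placeholders.

```python
# Hedged sketch: query AWS CloudTrail for recent Amazon Bedrock API activity,
# one building block of automated audit-evidence collection.
# Region, time window, and the event-source filter are illustrative assumptions.
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# Look up management events emitted by the Bedrock service over the last week.
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "bedrock.amazonaws.com"}
    ],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```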

Learn more about AWS re:Inforce at https://go.aws/reinforce.

Subscribe: More AWS videos: http://bit.ly/2O3zS75 More AWS events videos: http://bit.ly/316g9t4

ABOUT AWS Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts.

AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#reInforce2024 #CloudSecurity #AWS #AmazonWebServices #CloudComputing

AWS re:Inforce 2024 - Harnessing conversational AI for streamlined security operations (COM222)

Tired of chasing security threats by looking in many different places? Imagine a chatbot that understands security findings, prioritizes risks, and suggests solutions all through natural language. This session unveils how to create a conversational AI to get faster answers about your security posture. Learn how to build this interactive ChatSecOps tool using Amazon Q, AWS Lambda, Amazon S3, and AWS Security Hub.
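To make the architecture concrete, here is a hedged sketch of the Lambda piece of such a ChatSecOps flow: a handler that pulls active, high-severity Security Hub findings so a conversational layer (for example, Amazon Q) can summarize them. The filter values and the shape of the returned summary are illustrative assumptions, not the session's implementation.

```python
# Hedged sketch: a Lambda handler that fetches active, high-severity Security Hub
# findings and returns a compact summary for a chat layer to turn into natural language.
import boto3

securityhub = boto3.client("securityhub")

def handler(event, context):
    response = securityhub.get_findings(
        Filters={
            "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
        MaxResults=20,
    )
    # Keep only the fields a conversational front end needs for a quick answer.
    return [
        {
            "title": f["Title"],
            "severity": f["Severity"]["Label"],
            "resource": f["Resources"][0]["Id"] if f["Resources"] else None,
        }
        for f in response["Findings"]
    ]
```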

Learn more about AWS re:Inforce at https://go.aws/reinforce.

Subscribe: More AWS videos: http://bit.ly/2O3zS75 More AWS events videos: http://bit.ly/316g9t4

ABOUT AWS Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts.

AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#reInforce2024 #CloudSecurity #AWS #AmazonWebServices #CloudComputing

Central application for all your dbt packages - Coalesce 2023

dbt packages are libraries for dbt. Packages can produce information about best practices for your dbt project (e.g., dbt project evaluator) and cloud warehouse cost overviews. Unfortunately, all these KPIs are stored in your data warehouse, and it can be painful and expensive to create data visualization dashboards. This application automatically builds dashboards from the dbt packages you are using. You just need to configure your dbt Cloud API key - that's it! In this session, you'll learn how.
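For orientation, here is a minimal sketch of the kind of dbt Cloud API call such a dashboard application might make, assuming the v2 runs endpoint and a personal API token; the account ID and query parameters are placeholders, not the app's actual code.

```python
# Hedged sketch: fetch recent job runs from the dbt Cloud API with a personal API key.
# Account ID, host, and parameters are illustrative assumptions.
import requests

DBT_CLOUD_API_KEY = "..."   # placeholder - configure your own key
ACCOUNT_ID = 12345          # placeholder account id

response = requests.get(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/runs/",
    headers={"Authorization": f"Token {DBT_CLOUD_API_KEY}"},
    params={"limit": 10, "order_by": "-finished_at"},
)
response.raise_for_status()

# Print run id, human-readable status, and finish time for the latest runs.
for run in response.json()["data"]:
    print(run["id"], run["status_humanized"], run["finished_at"])
```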

Speaker: Adrien Boutreau, Head of Analytics Engineers, Infinite Lambda

Register for Coalesce at https://coalesce.getdbt.com

The End of History? Convergence of Batch and Realtime Data Technologies | Ternary Data

ABOUT THE TALK: Hybridization approaches such as the Lambda and Kappa architectures are powerful tools for combining the most useful characteristics of batch and real-time systems. Implementation and management of these architectures is not for the faint of heart, but the last several years have seen a wave of SaaS platforms and managed services that deliver hybrid capabilities with a greatly reduced operational burden. This talk details the anticipated impact of these hybrid technologies on future data stacks, and touches on the mythical “one database to rule them all.”

ABOUT THE SPEAKER: Matt Housley holds a Ph.D. in mathematics and is co-author of the bestselling O’Reilly book Fundamentals of Data Engineering.

ABOUT DATA COUNCIL: Data Council (https://www.datacouncil.ai/) is a community and conference series that provides data professionals with the learning and networking opportunities they need to grow their careers.

Make sure to subscribe to our channel for the most up-to-date talks from technical professionals on data-related topics including data infrastructure, data engineering, ML systems, analytics, and AI from top startups and tech companies.

FOLLOW DATA COUNCIL: Twitter: https://twitter.com/DataCouncilAI LinkedIn: https://www.linkedin.com/company/datacouncil-ai/

Backfill Streaming Data Pipelines in Kappa Architecture

Streaming data pipelines can fail due to various reasons. Since the source data, such as Kafka topics, often have limited retention, prolonged job failures can lead to data loss. Thus, streaming jobs need to be backfillable at all times to prevent data loss in case of failures. One solution is to increase the source's retention so that backfilling is simply replaying source streams, but extending Kafka retention is very costly for Netflix's data sizes. Another solution is to utilize source data stored in DWH, commonly known as the Lambda architecture. However, this method introduces significant code duplication, as it requires engineers to maintain a separate equivalent batch job. At Netflix, we have created the Iceberg Source Connector to provide backfilling capabilities to Flink streaming applications. It allows Flink to stream data stored in Apache Iceberg while mirroring Kafka's ordering semantics, enabling us to backfill large-scale stateful Flink pipelines at low retention cost.
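For readers who want to experiment, below is a minimal PyFlink sketch using the open-source Apache Iceberg Flink connector's streaming read, which is analogous to (but not the same as) the internal Netflix connector described above; the catalog settings, table name, and snapshot id are illustrative assumptions.

```python
# Hedged sketch: stream an Apache Iceberg table into Flink to backfill a pipeline
# without extending Kafka retention. Assumes the iceberg-flink-runtime jar is on the
# classpath; endpoints, table name, and snapshot id are placeholders.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Register an Iceberg catalog backed by a Hive metastore (placeholder endpoints).
t_env.execute_sql("""
    CREATE CATALOG iceberg WITH (
        'type' = 'iceberg',
        'catalog-type' = 'hive',
        'uri' = 'thrift://metastore-host:9083',
        'warehouse' = 's3://my-warehouse/'
    )
""")

# Incrementally stream the table from a chosen snapshot - the "backfill by replaying
# warehouse data" idea, mirroring the order of the original ingestion.
t_env.execute_sql("""
    SELECT *
    FROM iceberg.db.events /*+ OPTIONS(
        'streaming' = 'true',
        'monitor-interval' = '30s',
        'start-snapshot-id' = '1234567890'
    ) */
""").print()
```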

Connect with us: Website: https://databricks.com Facebook: https://www.facebook.com/databricksinc Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/data... Instagram: https://www.instagram.com/databricksinc/

Auditing Your Data and Answering the Lifelong Question—Is It the End of the Day Yet?

Huge volumes of data flow through a robust Kafka architecture into several ETLs that receive, transform, and store the data. We clearly understood our ETLs’ workflow and our data architecture, from source to destination.

But how much did we know about the way our data makes its way through our systems? And what about the lifelong question: is it the end of the day yet?

In this talk I’m going to present the design process behind our data auditing system, Life Line: from tracking and producing to analyzing and storing auditing information, using technologies such as Kafka, Avro, Spark, Lambda functions, and complex SQL queries. We’re going to cover:
* Avro audit header (a minimal sketch follows this list)
* Auditing heartbeat - designing your metadata
* Designing and optimizing your auditing table - what does this data look like anyway?
* Creating an alert-based monitoring system
* Answering the most important question of all - is it the end of the day yet?
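As a taste of the first bullet, here is a minimal sketch of what an Avro audit header embedded in every record might look like; the field names are illustrative assumptions, not the schema presented in the talk.

```python
# Hedged sketch of an Avro "audit header" carried by every record so the auditing ETL
# can track completeness per source and batch. Field names are illustrative assumptions.
import json
import fastavro  # pip install fastavro

audit_header_schema = {
    "type": "record",
    "name": "AuditHeader",
    "fields": [
        {"name": "record_id", "type": "string"},       # unique id for the event
        {"name": "source_system", "type": "string"},   # where the record entered
        {"name": "batch_id", "type": "string"},        # groups records per load
        {"name": "event_time",
         "type": {"type": "long", "logicalType": "timestamp-millis"}},
        {"name": "hop_count", "type": "int", "default": 0},  # incremented per ETL stage
    ],
}

# Parse the schema once so producers and the auditing ETL agree on the same contract.
parsed_schema = fastavro.parse_schema(audit_header_schema)
print(json.dumps(audit_header_schema, indent=2))
```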

Connect with us: Website: https://databricks.com Facebook: https://www.facebook.com/databricksinc Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/data... Instagram: https://www.instagram.com/databricksinc/

Self-Serve, Automated and Robust CDC pipeline using AWS DMS, DynamoDB Streams and Databricks Delta

Many companies are trying to solve the challenges of ingesting transactional data into a data lake and dealing with late-arriving updates and deletes.

To address this at Swiggy, we have built a CDC (Change Data Capture) system, an incremental processing framework that powers all business-critical data pipelines at low latency and high efficiency.

It offers:
* Freshness: it operates in near real time with configurable latency requirements.
* Performance: optimized read and write performance with tuned compaction parameters, partitioning, and Delta table optimization.
* Consistency: it supports reconciliation based on transaction types, applying inserts, updates, and deletes to existing data.

To implement this system, AWS DMS helped us with initial bootstrapping and CDC replication for MySQL sources. AWS Lambda and DynamoDB Streams helped us solve bootstrapping and CDC replication for DynamoDB sources.
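For the DynamoDB leg, here is a minimal sketch of a Lambda handler that lands each stream change record in S3 as JSON for downstream merging; the bucket, key layout, and selected fields are illustrative assumptions, not Swiggy's actual implementation.

```python
# Hedged sketch: a Lambda handler triggered by DynamoDB Streams that writes each
# batch of change records to S3 as JSON for later reconciliation.
import json
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "cdc-landing-zone"  # placeholder bucket

def handler(event, context):
    changes = []
    for record in event["Records"]:
        changes.append({
            "event_name": record["eventName"],   # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"]["Keys"],
            "new_image": record["dynamodb"].get("NewImage"),
            "approx_ts": record["dynamodb"]["ApproximateCreationDateTime"],
            "sequence_number": record["dynamodb"]["SequenceNumber"],
        })
    # One object per invocation keeps the example simple; real pipelines batch and partition.
    key = f"dynamodb-cdc/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(changes, default=str))
    return {"written": len(changes), "key": key}
```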

After setting up the bootstrap and CDC replication process, we used Databricks Delta merge to reconcile the data based on the transaction types.

To support the merge, we have implemented supporting features (a merge sketch follows this list):
* Deduplicating multiple mutations of the same record using log offset and timestamp.
* Adding optimal partitioning of the data set.
* Inferring the schema and applying proper schema evolution (backward-compatible schema).
* Extending the Delta table snapshot generation technique to create a consistent partition for partitioned Delta tables.
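Here is the merge sketch referenced in the list above: deduplicate mutations per key by log offset and timestamp, then apply insert/update/delete with a Databricks Delta merge. The table names, key column, and mutation-column names are illustrative assumptions.

```python
# Hedged sketch: reconcile landed CDC records into a Delta table.
# Paths, table names, and column names ("id", "sequence_number", "approx_ts",
# "event_name") are illustrative assumptions.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

cdc = spark.read.json("s3://cdc-landing-zone/dynamodb-cdc/")  # placeholder path

# Keep only the latest mutation per primary key (dedup by log offset, then timestamp).
latest = (
    cdc.withColumn(
        "rn",
        F.row_number().over(
            Window.partitionBy("id").orderBy(
                F.col("sequence_number").desc(), F.col("approx_ts").desc()
            )
        ),
    )
    .filter("rn = 1")
    .drop("rn")
)

target = DeltaTable.forName(spark, "analytics.orders")  # placeholder Delta table

# Apply deletes, updates, and inserts according to the transaction type.
(
    target.alias("t")
    .merge(latest.alias("s"), "t.id = s.id")
    .whenMatchedDelete(condition="s.event_name = 'REMOVE'")
    .whenMatchedUpdateAll(condition="s.event_name != 'REMOVE'")
    .whenNotMatchedInsertAll(condition="s.event_name != 'REMOVE'")
    .execute()
)
```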

Finally, to read the data we are using Spark SQL with the Hive metastore and Snowflake. Delta tables read with Spark SQL have implicit support for the Hive metastore. We have built our own implementation of the Snowflake sync process to create external tables, internal tables, and materialized views on Snowflake.

Stats: 500M CDC logs/day · 600+ tables

Connect with us: Website: https://databricks.com Facebook: https://www.facebook.com/databricksinc Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/data... Instagram: https://www.instagram.com/databricksinc/