talk-data.com

Topic: Apache Beam

Tags: data_processing, batch_processing, stream_processing

Activity trend: 14 tagged activities, peaking at 2 per quarter, 2020-Q1 to 2026-Q1

Activities

14 activities · Newest first

Many scientists rely on NumPy for its simplicity and strong CPU performance, but scaling beyond a single node is challenging. The researchers at SLAC need to process massive datasets under tight beam time constraints, often needing to modify code on the fly. This is where cuPyNumeric comes in—a drop-in replacement for NumPy that distributes work across CPUs and GPUs. With its familiar NumPy interface, cuPyNumeric makes it easy to scale computations without rewriting code, helping scientists focus on their research instead of debugging. It’s a great example of how the SciPy ecosystem enables cutting-edge science.
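To make the "drop-in replacement" claim concrete, here is a minimal sketch of the pattern described above. The array sizes and the smoothing stencil are illustrative assumptions, not the actual SLAC workload; the only code change from a plain NumPy script is the import line.

```python
# A minimal sketch of the cuPyNumeric drop-in pattern: the import line is
# the only change from a plain NumPy script, and the work is distributed
# across available CPUs and GPUs. The workload below is made up for
# illustration.
import cupynumeric as np  # was: import numpy as np

# Build a large 2-D array and apply a simple smoothing stencil, written
# exactly as one would write it against NumPy.
grid = np.random.rand(4096, 4096)
smoothed = 0.25 * (
    grid[:-2, 1:-1] + grid[2:, 1:-1] + grid[1:-1, :-2] + grid[1:-1, 2:]
)
print(smoothed.mean())
```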

David Regalado: A Gentle Introduction to Apache Beam

Explore Apache Beam with David Regalado, a seasoned data practitioner and Google Cloud Champion Innovator. Join his session for a gentle introduction to this powerful tool! 🌐📊 #ApacheBeam #datainnovationsummit

✨ H I G H L I G H T S ✨

🙌 A huge shoutout to all the incredible participants who made Big Data Conference Europe 2023 in Vilnius, Lithuania, from November 21-24, an absolute triumph! 🎉 Your attendance and active participation were instrumental in making this event so special. 🌍

Don't forget to check out the session recordings from the conference to relive the valuable insights and knowledge shared! 📽️

Once again, THANK YOU for playing a pivotal role in the success of Big Data Conference Europe 2023. 🚀 See you next year for another unforgettable conference! 📅 #BigDataConference #SeeYouNextYear

This session shows how we leverage Airflow in a federated way across all our business units to deliver a cost-effective platform that accommodates different patterns of data integration, replication, and ML tasks. The platform supports DevOps tuning of DAGs across environments and integrates with our open-source observability strategy, giving our SREs consistent metrics, monitoring, and alerting for data tasks. We will share our opinionated way of setting up DAGs, including naming and folder-structure conventions and coding expectations such as dedicated XCom entries to report processed elements, state support for DAGs that require it, and configurable task capabilities such as the runner type for Apache Beam tasks (see the sketch below). We will also cover the "DevOps DAGs" we deploy in every environment to handle platform maintenance and support.
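The conventions named in this abstract can be illustrated with a short Airflow sketch. Everything below is an assumption made for illustration: the DAG and task names, the `processed_elements` XCom key, and the use of a DAG-level param to select the Beam runner are hypothetical stand-ins for the speakers' actual conventions.

```python
# Hypothetical illustration of the DAG conventions described in the
# abstract: a dedicated XCom entry reporting processed elements, and a
# per-environment-configurable runner for an Apache Beam task. All names,
# keys, and helper functions here are made up for this sketch.
from datetime import datetime

from airflow.decorators import dag, task
from airflow.operators.python import get_current_context


def extract_and_load() -> list:
    """Stub standing in for the actual integration/replication logic."""
    return []


def launch_beam_pipeline(runner: str) -> None:
    """Stub; a real task might submit a Beam pipeline with --runner=<runner>."""


@dag(
    dag_id="bu_sales__orders_replication",  # hypothetical "<unit>__<purpose>" naming convention
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    params={"beam_runner": "DirectRunner"},  # tuned per environment, e.g. "DataflowRunner"
)
def orders_replication():
    @task
    def replicate_orders() -> None:
        ctx = get_current_context()
        rows = extract_and_load()
        # Convention: every task reports its element count under a
        # well-known XCom key so observability tooling can scrape it.
        ctx["ti"].xcom_push(key="processed_elements", value=len(rows))

    @task
    def run_beam_job() -> None:
        ctx = get_current_context()
        launch_beam_pipeline(runner=ctx["params"]["beam_runner"])

    replicate_orders() >> run_beam_job()


orders_replication()
```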

In this episode, Conor interviews speakers & attendees live from Lambda Days 2023!

Link to Episode 133 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)

Twitter: ADSP: The Podcast · Conor Hoekstra · Bryce Adelstein Lelbach

Guests Interviewed: Tobias Pfeiffer · Kim Huizing · Alexis King · José Valim · Andor Pénzes · Peer Stritzinger

Show Notes

Date Recorded: 2023-06-06
Date Released: 2023-06-09

Lambda Days 2023 · Elixir Language · Elixir Benchee Benchmarking Library · Ruby Berlin User Group Meeting · Lambda Days 2023: Stories In Open Source - Tobias Pfeiffer · Lambda Days 2018 - Tobias Pfeiffer - Stop Guessing and Start Measuring - Benchmarking in Practice · Devon Estes - Digging through the garbage - Code BEAM Lite Berlin 18 · Scala Language · AWK Language · Alexis King's Blog · Lambda Days 2023: Delimited Continuations Demystified - Alexis King · ZuriHac 2023: Delimited Continuations Demystified - Alexis King · Racket Language · Slideshow Language · Lambda Days 2023: Meta-programmable functional notebooks with Livebook - José Valim · Thinking Elixir Podcast · Lambda Days 2021: Introducing Nx - José Valim · Elixir Nx · Elixir LiveBook · Elixir Phoenix · Elixir Membrane · Keynote: Celebrating the 10 Years of Elixir | José Valim | ElixirConf EU 2022 · Elixir Forum · Sponsor: Evolution Engineering · Lambda Days 2023: Examples of easy dependently typed programming (in Idris) - Andor Pénzes · Type Driven Development by Edwin Brady · Functional Females · GRiSP · Erlang Language · WhatsApp acquired for $19B · GRiSP on Twitter

Intro Song Info: Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic · Creative Commons — Attribution 3.0 Unported — CC BY 3.0 · Free Download / Stream: http://bit.ly/l-miss-you

Summary

Real-time capabilities have quickly become an expectation for consumers. The complexity of providing those capabilities is still high, however, making it more difficult for small teams to compete. Meroxa was created to enable teams of all sizes to deliver real-time data applications. In this episode DeVaris Brown discusses the types of applications that are possible when teams don't have to manage the complex infrastructure necessary to support continuous data flows.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enables you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudderstack

Your host is Tobias Macey and today I'm interviewing DeVaris Brown about the impact of real-time data on business opportunities and risk profiles.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what Meroxa is and the story behind it?

How have the focus and goals of the platform and company evolved over the past 2 years?

Who are the target customers for Meroxa?

What problems are they trying to solve when they come to your platform?

Applications powered by real-time data were the exclusive domain of large and/or sophisticated tech companies for several years due to the inherent complexities involved. What are the shifts that have made them more accessible to a wider variety of teams?

What are some of the remaining blockers for teams who want to start using real-time data?

With the democratization of real-time data, what are the new categories of products and applications that are being unlocked?

How are organizations thinking about the potential value that those types of apps/services can provide?

With data flowing constantly, there are new challenges around oversight and accuracy. How does real-time data change the risk profile for applications that are consuming it?

What are some of the technical controls that are available for organizations that are risk-averse?

What skills do developers need to be able to effectively design, develop, and deploy real-time data applications?

How does this differ when talking about internal vs. consumer/end-user facing applications?

What are the most interesting, innovative, or unexpected ways that you have seen Meroxa used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Meroxa?
When is Meroxa the wrong choice?
What do you have planned for the future of Meroxa?

Contact Info

LinkedIn
@devarispbrown on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Meroxa

Podcast Episode

Kafka
Kafka Connect
Conduit - Golang Kafka Connect replacement
Pulsar
Redpanda
Flink
Beam
ClickHouse
Druid
Pinot

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC

At Credit Karma, we enable financial progress for more than 100 million members by recommending personalized financial products when they interact with our application. In this talk we introduce our machine learning platform for building interactive, production model-building workflows that serve relevant financial products to Credit Karma users. Vega, Credit Karma's machine learning platform, has three major components:

1) QueryProcessor for feature and training-data generation, backed by Google BigQuery;
2) PipelineProcessor for feature transformations, offline scoring, and model analysis, backed by Apache Beam;
3) ModelProcessor for running TensorFlow and scikit-learn models, backed by Google AI Platform, giving data scientists the flexibility to explore different kinds of machine learning and deep learning models, ranging from gradient-boosted trees to neural networks with complex structures.

Vega exposes a unified Python API for feature generation, modeling ETL, model training, and model analysis. It supports interactive notebooks and Python scripts that run these components in local mode with sampled data and in cloud mode for large-scale distributed computing. Vega lets data scientists chain the processors through Python code to define the entire workflow (a hypothetical sketch follows below), then automatically generates the execution plan for deploying the workflow on Apache Airflow for offline model experiments and refreshes. Overall, with the unified Python API and automated Airflow DAG generation, Vega has improved the efficiency of ML engineering. Using Airflow we deploy more than 20K features and 100 models daily.
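Vega's API is internal to Credit Karma, so the following is only a hedged sketch of what chaining the three processors described above might look like. Every class name, method, and parameter here is hypothetical and invented for illustration; only the shape (chain processors, then compile the chain into Airflow tasks) comes from the abstract.

```python
# Hypothetical sketch of a Vega-style workflow: chain a query step, a
# transformation step, and a training step, then walk the chain to emit
# tasks in dependency order, standing in for Vega's automatic Airflow DAG
# generation. None of these names come from Credit Karma's actual API.
from dataclasses import dataclass, field


@dataclass
class Processor:
    name: str
    upstream: list = field(default_factory=list)

    def then(self, nxt: "Processor") -> "Processor":
        """Chain processors; the downstream step records its dependency."""
        nxt.upstream.append(self)
        return nxt


def to_airflow_tasks(terminal: Processor) -> list:
    """Depth-first walk of the chain, emitting task names in the order an
    Airflow DAG would need to run them."""
    order: list = []

    def visit(p: Processor) -> None:
        for up in p.upstream:
            visit(up)
        if p.name not in order:
            order.append(p.name)

    visit(terminal)
    return order


# Assemble the workflow described in the talk: BigQuery features ->
# Beam transforms -> model training on AI Platform.
query = Processor("query_processor_bigquery")
pipeline = Processor("pipeline_processor_beam")
model = Processor("model_processor_ai_platform")

workflow = query.then(pipeline).then(model)
print(to_airflow_tasks(workflow))
# ['query_processor_bigquery', 'pipeline_processor_beam', 'model_processor_ai_platform']
```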

Building Big Data Pipelines with Apache Beam

Building Big Data Pipelines with Apache Beam is the essential guide to mastering data processing with Apache Beam. This book covers both the basics and advanced concepts, from implementing pipelines to extending functionality with custom I/O connectors. By the end, you'll be equipped to build scalable and reusable big data solutions.

What this book will help me do:
Understand the core principles of Apache Beam and its architecture.
Learn how to create efficient data processing pipelines for diverse scenarios.
Master the use of stateful processing for real-time data handling.
Gain skills in using Beam's portability features for various languages.
Explore advanced functionalities like creating custom I/O connectors.

Author(s): Jan Lukavský is a seasoned data engineer with extensive experience in big data technologies and Apache Beam. Having worked on innovative data solutions across industries, he brings hands-on insights and practical expertise to this book, and his approach to teaching ensures readers can directly apply concepts to real-world scenarios.

Who is it for? This book is designed for professionals involved in big data, such as data engineers, analysts, and scientists. It is particularly suited to those with an intermediate understanding of Java who aim to expand their skill set to include advanced data pipeline construction. Whether you're stepping into Apache Beam for the first time or looking to deepen your expertise, this book offers valuable, actionable insights.
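The book targets Java, but the Beam programming model it teaches is language-portable. As a taste of that model, here is a minimal word-count-style pipeline using Beam's Python SDK; the input strings are made up for the example.

```python
# A minimal Apache Beam pipeline using the Python SDK: read an in-memory
# collection, transform it, and aggregate per key - the same
# Pipeline/PCollection/PTransform model the book develops in Java.
import apache_beam as beam

with beam.Pipeline() as p:  # DirectRunner by default; runners are swappable
    (
        p
        | "Create" >> beam.Create(["beam unifies batch", "and stream processing"])
        | "Split" >> beam.FlatMap(str.split)          # one element per word
        | "PairWithOne" >> beam.Map(lambda w: (w, 1))  # (word, 1) pairs
        | "CountPerWord" >> beam.CombinePerKey(sum)    # sum counts per word
        | "Print" >> beam.Map(print)
    )
```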


Abstract: Hosted by Al Martin, VP, IBM Expert Services Delivery, Making Data Simple provides the latest thinking on big data, A.I., and the implications for the enterprise from a range of experts. This week on Making Data Simple, we have Davor Bonaci, co-founder and CEO of the Seattle-based startup Kaskada. Previously, Davor served as chair of the Apache Beam PMC and as a software engineer on Google Cloud since its early days and the inception of Cloud Dataflow.

Show Notes
2:39 – Are you focused on the platform or the models around event-based data?
6:52 – Does your company provide know-how or is it tooling?
9:58 – What's your secret sauce?
11:19 – How did you end up here?
15:40 – Who's your biggest competitor?
17:13 – Can you talk to some of the common use cases?
20:30 – Are you an IDE, how does it work?
21:18 – Are you a subscription service?
22:18 – What's your 5-year plan?
26:44 – How feature stores save time and money
29:19 – Describe the company in three bullets
31:35 – What are the top skills?

Davor Bonaci - LinkedIn
Kaskada

Connect with the Team:
Producer Kate Brown - LinkedIn.
Producer Steve Templeton - LinkedIn.
Host Al Martin - LinkedIn and Twitter.

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Summary

Yelp needs to be able to consume and process all of the user interactions that happen in their platform in as close to real-time as possible. To achieve that goal they embarked on a journey to refactor their monolithic architecture to be more modular and modern, and then they open sourced it! In this episode Justin Cunningham joins me to discuss the decisions they made and the lessons they learned in the process, including what worked, what didn’t, and what he would do differently if he was starting over today.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure.
When you're ready to launch your next project you'll need somewhere to deploy it. Check out Linode at www.dataengineeringpodcast.com/linode?utm_source=rss&utm_medium=rss and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes, or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I'm interviewing Justin Cunningham about Yelp's data pipeline.

Interview with Justin Cunningham

Introduction
How did you get involved in the area of data engineering?
Can you start by giving an overview of your pipeline and the type of workload that you are optimizing for?
What are some of the dead ends that you experienced while designing and implementing your pipeline?
As you were picking the components for your pipeline, how did you prioritize the build vs buy decisions and what are the pieces that you ended up building in-house?
What are some of the failure modes that you have experienced in the various parts of your pipeline and how have you engineered around them?
What are you using to automate deployment and maintenance of your various components and how do you monitor them for availability and accuracy?
While you were re-architecting your monolithic application into a service oriented architecture and defining the flows of data, how were you able to make the switch while verifying that you were not introducing unintended mutations into the data being produced?
Did you plan to open-source the work that you were doing from the start, or was that decision made after the project was completed?
What were some of the challenges associated with making sure that it was properly structured to be amenable to making it public?
What advice would you give to anyone who is starting a brand new project and how would that advice differ for someone who is trying to retrofit a data management architecture onto an existing project?

Keep in touch

Yelp Engineering Blog
Email

Links

Kafka
Redshift
ETL
Business Intelligence
Change Data Capture
LinkedIn Data Bus
Apache Storm
Apache Flink
Confluent
Apache Avro
Game Days
Chaos Monkey
Simian Army
PaaSta
Apache Mesos
Marathon
SignalFX
Sensu
Thrift
Protocol Buffers
JSON Schema
Debezium
Kafka Connect
Apache Beam

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Support Data Engineering Podcast

Summary

What exactly is data engineering? How has it evolved in recent years and where is it going? How do you get started in the field? In this episode, Maxime Beauchemin joins me to discuss these questions and more.

Transcript provided by CastSource

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes, or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I'm interviewing Maxime Beauchemin.

Questions

Introduction
How did you get involved in the field of data engineering?
How do you define data engineering and how has that changed in recent years?
Do you think that the DevOps movement over the past few years has had any impact on the discipline of data engineering? If so, what kinds of cross-over have you seen?
For someone who wants to get started in the field of data engineering what are some of the necessary skills?
What do you see as the biggest challenges facing data engineers currently?
At what scale does it become necessary to differentiate between someone who does data engineering vs data infrastructure and what are the differences in terms of skill set and problem domain?
How much analytical knowledge is necessary for a typical data engineer?
What are some of the most important considerations when establishing new data sources to ensure that the resulting information is of sufficient quality?
You have commented on the fact that data engineering borrows a number of elements from software engineering. Where does the concept of unit testing fit in data management and what are some of the most effective patterns for implementing that practice?
How has the work done by data engineers and managers of data infrastructure bled back into mainstream software and systems engineering in terms of tools and best practices?
How do you see the role of data engineers evolving in the next few years?

Keep In Touch

@mistercrunch on Twitter
mistercrunch on GitHub
Medium

Links

Datadog
Airflow
The Rise of the Data Engineer
Druid.io
Luigi
Apache Beam
Samza
Hive
Data Modeling

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Support Data Engineering Podcast

Fundamentals of Structural Stability

The ability of a structural assembly to carry loads and forces determines how stable it will be over time. Viewing structural assemblages as comprising columns, beams, arches, rings, and plates, this book introduces the student to both a classical and an advanced understanding of the mechanical behavior of such structural systems under load, and shows how modeling the resulting strains can predict the overall future performance—the stability—of that structure. While covering traditional beam theory, the book focuses on elastica theory in keeping with modern approaches. This text is an expanded and updated version of a similar, previously published book, with pedagogical improvements and updated analytical methods. It provides a focused treatment of how structures behave and perform under stress loading, including plastic deformation and buckling. All advanced engineering students studying engineering mechanics, structural analysis and design, fatigue and failure, and related subjects need this knowledge, and this book provides it in a thorough and coherent fashion. Written by two of the world's leading engineering professors in this subject area, the pedagogy has been classroom-tested over many years and should find a receptive readership among both students and instructors.

* An understandable introduction to the theory of structural stability, useful for a wide variety of engineering disciplines, including mechanical, civil, and aerospace engineering
* Covers both static and dynamic loads, for both conservative and nonconservative systems
* Emphasizes elastic behavior under loads, including vertical buckling, torsional buckling, and nonlinear effects of structural system buckling and stability
* Case examples to illustrate real-world applications of stability theory
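As a small taste of the classical buckling theory the book covers, the elastic buckling of an ideal column is governed by Euler's critical load, a standard result stated here for reference (the specific end-condition values given in the comments are the textbook ones, not taken from this book):

```latex
% Euler's critical buckling load for an ideal elastic column
% (classical result; K is the effective-length factor, e.g. K = 1 for
% pinned-pinned and K = 0.5 for fixed-fixed end conditions).
\[
  P_{cr} = \frac{\pi^2 E I}{(K L)^2}
\]
% where E is Young's modulus, I is the minimum area moment of inertia
% of the cross-section, and L is the unsupported length of the column.
```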