talk-data.com

Showing 5 results

Activities & events

Title & Speakers | Event

Free Passes to DeveloperWeek 2025!

For your free pass, register here: https://www.devnetwork.com/registration

Important: You must register at the link above (and not just indicate that you are attending here on Meetup).

DeveloperWeek 2025 (Feb 11-13, Santa Clara, CA) + (Feb 18-20, Live Online) is the world’s largest independent engineering conference & expo where thousands of developers, engineers, software architects, managers, and executives converge to discover the latest engineering innovations.

Learn from leaders at Microsoft, Salesforce, Intuit, AWS, Oracle, Adobe, LinkedIn, Dropbox, Dell, U.S. Bank, and many more!

Choose from sessions across 8 conferences:

  1. AI DevWorld: Breakthroughs in AI technologies.
  2. ProductWorld: Best practices in product and team management.
  3. CloudNative World: Serverless deployment, microservice management, edge environments, and orchestration best practices.
  4. Frontend World: Designing and delivering successful user experiences.
  5. DevExec World: Hiring, nurturing, and retaining technical experts, plus best practices for growing your own technical skill set.
  6. OpsWorld: Developer portals, automated CI/CD, provisioning, deployment, and infrastructure – all defined through code.
  7. Dev Security World: Best practices in keeping your data and infrastructure secure.
  8. Dev Innovation World: Sifting big trends and paradigm shifts from the hype.

The DeveloperWeek team has offered our group 25 free OPEN Passes so our members can attend for free.

Register now to get your free OPEN Pass ($195 value): https://www.devnetwork.com/registration

Wil van der Aalst – Professor at RWTH Aachen University; Chief Scientist at Celonis; part-time affiliated with Fraunhofer FIT; Member of the Board of Governors of Tilburg University, Richie – host @ DataCamp, Cong Yu – Leads the CeloAI group at Celonis; former Principal (Research) Scientist / Research Director at Google Research NYC

Regardless of profession, the work we do leaves behind a trace of actions that help us achieve our goals. This is especially true for those who work with data. In large enterprises, where seemingly countless processes are running at any one time, keeping track of those processes is crucial: at that scale, one small efficiency gain can lead to a staggering amount of time and money saved. Process mining is a data-driven approach to process analysis that uses event logs to extract process-related information. It can separate inferred facts from exact truths and uncover what really happens across a variety of operations.

Wil van der Aalst is a full professor at RWTH Aachen University, leading the Process and Data Science (PADS) group. He is also the Chief Scientist at Celonis, part-time affiliated with the Fraunhofer FIT, and a member of the Board of Governors of Tilburg University. His research interests include process mining, Petri nets, business process management, workflow management, process modeling, and process analysis. He has published over 275 journal papers, 35 books (as author or editor), 630 refereed conference/workshop publications, and 85 book chapters.

Cong Yu leads the CeloAI group at Celonis, focusing on bringing advanced AI technologies to EMS products, building up capabilities for their knowledge platform, and ultimately helping enterprises reduce process inefficiencies and achieve operational excellence. Previously, Cong was a Principal (Research) Scientist / Research Director at Google Research NYC from September 2010 to July 2022, where he led the NYSD/Beacon Research Group, and also taught at the NYU Courant Institute of Mathematical Sciences.

In the episode, Wil, Cong, and Richie explore process mining and its development over the past 25 years, the differences between process mining and ML, AI, and data mining, popular use cases of process mining, adoption by large enterprises like BMW, HP, and Dell, the requirements for an effective process mining system, the role of predictive analytics and data engineering in process mining, how to scale process mining systems, prospects within the field, and much more.

Links Mentioned in the Show: Celonis, Gartner's Magic Quadrant for Process Mining, PM4Py, Process Query Language (PQL), [Course] Business Process Analytics in R
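The core mechanic behind event-log-based process discovery can be shown with a small, self-contained sketch: group an event log by case, order each case's events by timestamp, and count how often one activity directly follows another. The snippet below is a hypothetical toy example in plain Python, not Celonis's implementation or the PM4Py API; the event data and helper name are invented for illustration. Real tools enrich this directly-follows relation with frequencies, durations, and conformance checks against a reference model.

```python
from collections import Counter, defaultdict

# Toy event log as (case_id, activity, timestamp) records -- invented data,
# purely to illustrate the idea discussed in the episode.
event_log = [
    ("order-1", "create order", 1), ("order-1", "check credit", 2),
    ("order-1", "ship goods", 3),   ("order-1", "send invoice", 4),
    ("order-2", "create order", 1), ("order-2", "check credit", 2),
    ("order-2", "reject order", 3),
]

def directly_follows(log):
    """Count how often activity A is directly followed by activity B within
    the same case: the directly-follows relation that many process-discovery
    algorithms take as their starting point."""
    traces = defaultdict(list)
    for case_id, activity, _ts in sorted(log, key=lambda e: (e[0], e[2])):
        traces[case_id].append(activity)
    dfg = Counter()
    for activities in traces.values():
        for a, b in zip(activities, activities[1:]):
            dfg[(a, b)] += 1
    return dfg

for (a, b), count in directly_follows(event_log).most_common():
    print(f"{a} -> {b}: {count}")
```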

AI/ML Analytics Data Engineering Data Science Process Mining
DataFramed
Jayson Gehri – Marketing Director / Directs the marketing team for Hybrid Data Management @ IBM, Al Martin – WW VP Technical Sales @ IBM

Jayson Gehri directs the marketing team for Hybrid Data Management at IBM, following roles as marketing director for Dell and Quest Software. In this special episode, he lets us know what to watch for as IBM kicks off its annual THINK Conference, happening this year in the heart of downtown San Francisco from February 12th to the 15th.


Shownotes
00:00 – Check us out on YouTube and SoundCloud!
00:05 – Be sure to check out other MDS episodes here!
00:10 – Connect with Producer Steve Moore on LinkedIn & Twitter
00:15 – Connect with Producer Liam Seston on LinkedIn & Twitter
00:20 – Connect with Producer Rachit Sharma on LinkedIn
00:25 – Connect with Host Al Martin on LinkedIn & Twitter
00:40 – Connect with Jayson Gehri on LinkedIn & Twitter
00:55 – Get more info on THINK
01:30 – Pier 39
02:30 – Rob Thomas
02:35 – Arvind Krishna

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Data Management IBM Marketing Modern Data Stack
Making Data Simple
Tom Kaitchuck – guest @ Dell EMC, Tobias Macey – host

Summary

As more companies and organizations work to gain a real-time view of their business, they are increasingly turning to stream processing technologies to fulfill that need. However, the storage requirements for continuous, unbounded streams of data are markedly different from those of batch-oriented workloads. To address this shortcoming, the team at Dell EMC created the open source Pravega project. In this episode Tom Kaitchuck explains how Pravega simplifies storage and processing of data streams, how it integrates with processing engines such as Flink, and the unique capabilities that it provides in the area of exactly-once processing and transactions. And if you listen at approximately the half-way mark, you can hear the host's mind being blown by the possibilities of treating everything, including schema information, as a stream.
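As a rough intuition for the exactly-once and transaction capabilities mentioned above: a writer buffers events against a transaction, and the stream exposes them to readers only if the transaction commits atomically, so a failed writer can abort and retry the whole batch without readers ever seeing duplicates or partial results. The sketch below is a hypothetical, in-memory toy model in Python, not the actual Pravega client API; the class and method names are invented for illustration.

```python
import uuid

class ToyTransactionalStream:
    """A drastically simplified, in-memory model of transactional appends to a
    stream: events written inside a transaction become readable only when the
    transaction commits, and the commit is all-or-nothing."""

    def __init__(self):
        self._committed = []   # events visible to readers
        self._open_txns = {}   # txn_id -> buffered (routing_key, event) pairs

    def begin_txn(self):
        txn_id = uuid.uuid4()
        self._open_txns[txn_id] = []
        return txn_id

    def write(self, txn_id, routing_key, event):
        # Buffered only; readers cannot see this yet.
        self._open_txns[txn_id].append((routing_key, event))

    def commit(self, txn_id):
        # Publish the whole buffer atomically, then discard the transaction.
        self._committed.extend(self._open_txns.pop(txn_id))

    def abort(self, txn_id):
        # Drop everything written in this transaction.
        self._open_txns.pop(txn_id, None)

    def read_all(self):
        return list(self._committed)


stream = ToyTransactionalStream()
txn = stream.begin_txn()
stream.write(txn, routing_key="sensor-42", event={"temp": 21.5})
stream.write(txn, routing_key="sensor-42", event={"temp": 21.7})
stream.commit(txn)   # a writer that crashes before this line leaves no partial output
print(stream.read_all())
```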

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you've got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they've got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show, please leave a review on iTunes or Google Play Music, tell your friends and co-workers, and share it on social media.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I'm interviewing Tom Kaitchuck about Pravega, an open source data storage platform optimized for persistent streams.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what Pravega is and the story behind it?
What are the use cases for Pravega and how does it fit into the data ecosystem?

How does it compare with systems such as Kafka and Pulsar for ingesting and persisting unbounded data?

How do you represent a stream on-disk?

What are the benefits of using this format for persisted streams?

One of the compelling aspects of Pravega is the automatic sharding and resource allocation for variations in data patterns. Can you describe how that operates and the benefits that it provides?

I am also intrigued by the automatic tiering of the persisted storage. How does that work and what options exist for managing the lifecycle of the data in the cluster?

For someone who wants to build an application on top of Pravega, what interfaces does it provide and what architectural patterns does it lend itself toward?

What are some of the unique system design patterns that are made possible by Pravega?

How is Pravega architected internally?

What is involved in integrating engines such as Spark, Flink, or Storm with Pravega?

A common challenge for streaming systems is exactly-once semantics. How does Pravega approach that problem?

Does it have any special capabilities for simplifying processing of out-of-order events?

For someone planning a deployment of Pravega, what is involved in building and scaling a cluster?

What are some of the operational edge cases that users should be aware of?

What are some of the most interesting, useful, or challenging experiences that you have had while building Pravega?

What are some cases where you would recommend against using Pravega?

What is in store for the future of Pravega?

Contact Info

tkaitchuk on GitHub
LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Flink Data Engineering Data Management GitHub Kafka Spark Data Streaming
Data Engineering Podcast

“An excellent summary of the state of supply chain management going into the twenty-first century. Explains the essential concepts clearly and offers practical, down-to-earth advice for making supply chains more efficient and adaptive. Truly a survival guide for executives as they struggle to cope with the increasing competition between supply chains.” — Christian Knoll, Vice President of Global Supply Chain Management, SAP AG

“Through real-world case studies and graphic illustrations, David Taylor clearly demonstrates the bottom-line benefits of managing the supply chain effectively. Although the book is written for managers, I recommend it for everyone from the executive suite to the shipping floor because they all have to work together to master the supply chain. But beware—you can expect many passionate employees demanding improvements in your company’s supply chain after reading this book!” — David Myers, President, WinfoSoft Inc., Former Board Member of Supply Chain Council

“A comprehensive, thoroughly researched, and well-designed book that gives managers the information they need in a highly readable form. I am already starting to use the techniques in this book to improve our international distribution system.” — Jim Muller, Vice President of Produce Sales, SoFresh Produce

“Supply chain management is a deceptively deep subject. Simple business practices combine to form complex systems that seem to defy rational analysis: Companies that form trading partnerships continue to compete despite their best efforts to cooperate; small variations in consumer buying create devastating swings in upstream demand, and so on. In his trademark fashion, Taylor clearly reveals the hidden logic at work in your supply chain and gives you the practical tools you need to make better management decisions. A must-read for every manager who affects a supply chain, and in today's marketplace there are few managers who are exempt from this requirement.” — Adrian J. Bowles, Ph.D., President, CoSource.net

“David Taylor has done it again. With his new book, David makes supply chain management easy to grasp for the working manager, just as he did with his earlier guides to business technology. If you work for a company that is part of a supply chain, you need this book.” — Dirk Riehle, Ph.D.

“David Taylor has done a masterful job of defining the core issues in supply chain management without getting trapped in the quicksand of jargon. This concise book is well written, highly informative, and easy to read.” — Marcia Robinson, President, E-Business Strategies, author of Services Blueprint: Roadmap

“Taylor has done a tremendous job of giving readers an intuitive grasp of a complicated subject. If you’re new to supply chains, this book will give you an invaluable map of the territory. If you're already among the initiated, it will crystallize your insights and help you make better decisions. In either case, you can only come out ahead by reading this book.” — Kevin Dick, Founder of Kevin Dick Associates, author of XML: A Manager’s Guide

“My motto for compressing data is ‘squeeze it til it gags.’ In the current business climate, that’s what you have to do to costs, and Taylor shows you many ways to squeeze costs out of your supply chain. He also writes with the same economy: This book contains exactly what you need to manage your supply chain effectively. Nothing is missing, and nothing is extra.” — Charles Ashbacher, President, Charles Ashbacher Technologies

Today's fiercest business battles are taking place between competitors' supply chains, with victory dependent on finding a way to deliver products to customers more quickly and efficiently than the competition. For proof, just look to Dell and Amazon.com, both of which revolutionized their industries by changing how companies produce, distribute, and sell physical goods. But they're hardly alone. By revamping their supply chains, Siemens CT improved lead time from six months to two weeks, Gillette slashed $400 million of inventory, and Chrysler saved $1.7 billion a year. It's a high-stakes game, and you don't have a lot of choice about playing: If your company touches a physical product, it's part of a supply chain--and your success ultimately hangs on the weakest link in that chain.

In Supply Chains: A Manager's Guide, best-selling author David Taylor explains how to assemble a killer supply chain using the knowledge, technology, and tools employed in supply-chain success stories. Using his signature fast-track summaries and informative graphics, Taylor offers a clear roadmap to understanding and solving the complex problems of supply-chain management. Modern manufacturing has driven down the time and cost of the production process, leaving supply chains as the final frontier for cost reduction and competitive advantage. Supply Chains: A Manager's Guide will quickly give managers the foundation they need to contribute effectively to their company's supply-chain success.

data data-science analytics-platforms qlik-sense C#/.NET SAP XML
O'Reilly Data Science Books