talk-data.com


Activities & events

PyDataMCR April 2025-04-10 · 17:30

PyDataMCR April

Continuing with some more great talks, we are hosted this month by Auto Trader!

THE TALKS

Automating open source LLM fine-tuning, monitoring and cloud deployment - Michail Vamvakaris, PhD (he/him) and Kekura Musa (he/him)

The advent of Gen AI Large Language Models such as ChatGPT, Claude, Sonnet and others is a massive leap forward for humanity, bringing forth a new industrial revolution. Following the success of ChatGPT and other closed source models, open source models have emerged as a powerful paradigm for custom use cases. However, fine-tuning and deploying these models for production systems requires a significant amount of effort and know-how.

ConstellaXion is an open source tool aiming to bring the ease of closed-source prompting to open-source building, enabling engineers in organizations large or small to seamlessly fine-tune, deploy and monitor language models in any private cloud environment.
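The talk abstract contains no code, but the parameter-efficient fine-tuning methods that tools in this space commonly build on (such as LoRA) can be sketched in a few lines. The matrices and sizes below are illustrative toy values, not part of ConstellaXion:

```python
# Toy sketch of low-rank adaptation (LoRA), a common parameter-efficient
# fine-tuning technique: keep the large base weight matrix W frozen and
# learn only a small update A @ B of rank r << n.
# Pure Python with tiny matrices for clarity; no ML framework required.

def matmul(a, b):
    """Multiply two matrices represented as lists of rows."""
    inner, cols = len(b), len(b[0])
    return [[sum(row[k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for row in a]

def add(a, b):
    """Element-wise sum of two same-shaped matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

n, r = 4, 1                                   # model width vs. adapter rank
W = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # frozen
A = [[0.1] for _ in range(n)]                 # n x r trainable adapter
B = [[0.2, 0.0, 0.0, 0.0]]                    # r x n trainable adapter

# Effective weights used at inference time: W + A @ B.
W_eff = add(W, matmul(A, B))

# Only 2 * n * r = 8 adapter parameters are trained instead of n * n = 16.
print(W_eff[0])
```

At real model scale this is what libraries such as Hugging Face's peft implement; only the small A and B matrices need to be stored and shipped per fine-tune.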

Attending this talk is your opportunity to influence the trajectory of the project as we are actively looking for collaborators and contributors to build with. It is also an opportunity to learn about our new open source venture and discover how you can benefit.

Kekura is the lead contributor of the ConstellaXion project and the person behind this idea. He is currently working as an AI/ML Computational Science Manager at Accenture, leading large-scale LLM Ops projects. His vision is to build a democratized AI future where everyone has an equal opportunity to craft the trajectory of this amazing technology.

Michail is the second core contributor of the project. He is a Senior ML Engineer at Bloomberg, where he built a real-time machine learning bond pricing system: a fairly easy problem from an academic point of view that turned out to become the worst nightmare in a corporate environment, where constraints pile up and the demand for accurate solutions is high. Prior to joining Bloomberg, he spent a fair amount of time working for other investment/fintech firms as an ML Research Engineer. His professional experience also spans ML for the energy and water industries, cybersecurity, robotics and algo trading.

The Power of Self-Learning (or, How I Built a Website With Zero Knowledge) - Charlotte Wright (She/Her)

At the start of last year, Charlotte decided to challenge herself and learn a new skill stack adjacent to her role. Working with a wide range of people and teams in her day-to-day, she often gets looped into conversations around things such as JavaScript, React, UX design, and the like - skills not necessarily befitting an analyst - and so really wanted to build on her knowledge of these topics through a project agnostic to her day job. In this talk, she covers how this thinking manifested itself into building a website from scratch, learning how to design, create, and deploy an application, as well as the learnings and surprises she had along the way. Self-learning can be a very valuable thing, and this talk highlights some of the reasons why.

Charlotte has been working at Auto Trader for the past 4 years since graduating from the University of Liverpool with a degree in maths. She does lots of things within the company, but her day-to-day as a Digital Analyst typically involves homing in on AT’s consumers, analysing their behaviour, and ensuring the right tools and tracking are in place to make their digital experience the best it can possibly be. When she’s not chasing developers to fix broken tracking, you can often find her working on AT’s engineering blog as an editor or getting involved in their AT-languages community, which she co-founded a couple of years ago.

Event Details

The event will start at 18:30 with time for socialising until 19:00, when the talks begin. The talks will end at approximately 20:00.

After the talks we'll all head somewhere local for some post-event socialising.

Location: We'll be at Auto Trader, who are kindly supplying the venue and catering. Capacity is limited to 80.

EVENT GUIDELINES

PyDataMCR is a strictly professional event; as such, professional behaviour is expected.

PyDataMCR is a chapter of PyData, an educational program of NumFOCUS, and thus abides by the NumFOCUS Code of Conduct:

https://pydata.org/code-of-conduct.html

Please take a moment to familiarise yourself with its contents.

ACCESSIBILITY

There is a quiet room available if needed. Toilets and venue are accessible.

SPONSORS

Thank you to NumFOCUS for sponsoring our Meetup and for their further support.

Thank you to Auto Trader for their sponsorship and for the awesome venue and catering!

Thank you to Krakenflex for sponsoring PyDataMCR.


Links:

Book: https://www.manning.com/books/machine-learning-system-design?utm_source=AGMLBookcamp&utm_medium=affiliate&utm_campaign=book_babushkin_machine_4_25_23&utm_content=twitter

Discount: poddatatalks21 (35% off)

Evidently: https://www.evidentlyai.com/

Article: https://medium.com/people-ai-engineering/design-documents-for-ml-models-bbcd30402ff7

Free MLOps course: https://github.com/DataTalksClub/mlops-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

DataTalks.Club

Common mistakes in Machine Learning system development - Valerii Babushkin

Outline:

  • Machine Learning system design
  • Common pitfalls in the design process
  • Doing the design right

About the speaker:

Valerii works at Blockchain.com as the Head of Data Science. Before that he worked at Facebook as the WhatsApp User Data Privacy Tech Lead, at Alibaba Russia as the VP of Machine Learning, and at X5 Retail Group as a Senior Director of Data Science. Valerii is also a Kaggle Competitions Grandmaster, ranked globally in the top 30.

DataTalks.Club is the place to talk about data. Join our Slack community!

Why Machine Learning Design is Broken
Wes McKinney – guest, Tobias Macey – host

Summary

The data ecosystem has been growing rapidly, with new communities joining and bringing their preferred programming languages to the mix. This has led to inefficiencies in how data is stored, accessed, and shared across process and system boundaries. The Arrow project is designed to eliminate wasted effort in translating between languages, and Voltron Data was created to help grow and support its technology and community. In this episode Wes McKinney shares the ways that Arrow and its related projects are improving the efficiency of data systems and driving their next stage of evolution.
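The zero-copy, columnar-buffer idea at the heart of Arrow can be illustrated with the Python standard library alone. This is a conceptual sketch, not the pyarrow API: a column's values sit in one contiguous buffer, and consumers view that buffer instead of re-serializing rows.

```python
# Conceptual sketch of Arrow's columnar, zero-copy idea using only the
# Python standard library (not the pyarrow API): a column's values sit in
# one contiguous buffer, and consumers view that buffer without copying.
from array import array

# A "column" of float64 prices stored contiguously, like an Arrow buffer.
prices = array("d", [101.5, 99.25, 100.0, 102.75])

# Zero-copy view: the memoryview shares the column's underlying memory.
view = memoryview(prices)
assert view.obj is prices  # same buffer, no bytes copied

# A consumer can reinterpret the raw buffer directly, which is how
# Arrow-aware systems hand column buffers across library boundaries.
raw = view.cast("B")       # view as raw bytes, still zero-copy
print(len(raw))            # 4 float64 values * 8 bytes each
```

Real systems would use pyarrow, whose arrays expose buffers in a similar way; the mechanism is the same: sharing memory rather than translating formats.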

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

Struggling with broken pipelines? Stale dashboards? Missing data? If this resonates with you, you’re not alone. Data engineers struggling with unreliable data need look no further than Monte Carlo, the leading end-to-end Data Observability Platform! Trusted by the data teams at Fox, JetBlue, and PagerDuty, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo monitors and alerts for data issues across your data warehouses, data lakes, dbt models, Airflow jobs, and business intelligence tools, reducing time to detection and resolution from weeks to just minutes. Monte Carlo also gives you a holistic picture of data health with automatic, end-to-end lineage from ingestion to the BI layer directly out of the box. Start trusting your data with Monte Carlo today! Visit dataengineeringpodcast.com/montecarlo to learn more.

Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day. Especially once they realize 90% of all major data sources like Google Analytics, Salesforce, Adwords, Facebook, Spreadsheets, etc., are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get: real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping to precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24×7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2. Go to dataengineeringpodcast.com/hevodata and sign up for a free 14-day trial that also comes with 24×7 support.

Your host is Tobias Macey and today I’m interviewing Wes McKinney about his work at Voltron Data and on the Arrow ecosystem.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what you are building at Voltron Data and the story behind it?

What is the vision for the broader data ecosystem that you are trying to realize through your investment in Arrow and related projects?

How does your work at Voltron Data contribute to the realization of that vision?

What is the impact on engineer productivity and compute efficiency introduced by the impedance mismatches between language and framework representations of data?

The scope and capabilities of the Arrow project have grown substantially since it was first introduced. Can you give an overview of the current features and extensions to the project?

What are some of the ways that Arrow and its related projects can be integrated with or replace the different elements of a data platform?

Can you describe how Arrow is implemented?

What are the most complex/challenging aspects of the engineering needed to support interoperable data interchange between language runtimes?

How are you balancing the desire to move quickly and improve the Arrow protocol and implementations with the need to wait for other players in the ecosystem (e.g. database engines, compute frameworks, etc.) to add support?

With the growing application of data formats such as graphs and vectors, what do you see as the role of Arrow and its ideas in those use cases?

For workflows that rely on integrating structured and unstructured data, what are the options for interaction with non-tabular data (e.g. images, documents, etc.)?

With your support-focused business model, how are you approaching marketing and customer education to make it viable and scalable?

What are the most interesting, innovative, or unexpected ways that you have seen Arrow used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on Arrow and its ecosystem?

When is Arrow the wrong choice?

What do you have planned for the future of Arrow?

Contact Info

Website
wesm on GitHub
@wesmckinn on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Voltron Data
Pandas

Podcast Episode

Apache Arrow
Partial Differential Equation
FPGA == Field-Programmable Gate Array
GPU == Graphics Processing Unit
Ursa Labs
Voltron (cartoon)
Feature Engineering
PySpark
Substrait
Arrow Flight
Acero
Arrow Datafusion
Velox
Ibis
SIMD == Single Instruction, Multiple Data
Lance
DuckDB

Podcast Episode

Data Threads Conference
Nano-Arrow
Arrow ADBC Protocol
Apache Iceberg

Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Sponsored By: Atlan

Have you ever woken up to a crisis because a number on a dashboard is broken and no one knows why? Or sent out frustrating slack messages trying to find the right data set? Or tried to understand what a column name means?

Our friends at Atlan started out as a data team themselves and faced all of this collaboration chaos first-hand, so they started building Atlan as an internal tool. Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering teams or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more.

Go to dataengineeringpodcast.com/atlan and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

Shane Gibson – guest, Tobias Macey – host

Summary

Agile methodologies have been adopted by a majority of teams for building software applications. Applying those same practices to data can prove challenging due to the number of systems that need to be included to implement a complete feature. In this episode Shane Gibson shares practical advice and insights from his years of experience as a consultant and engineer working in data about how to adopt agile principles in your data work so that you can move faster and provide more value to the business, while building systems that are maintainable and adaptable.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in glueing together the disparate pieces of a pipeline, and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect.

Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day. Especially once they realize 90% of all major data sources like Google Analytics, Salesforce, Adwords, Facebook, Spreadsheets, etc., are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get: real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping to precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24×7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2. Go to dataengineeringpodcast.com/hevodata and sign up for a free 14-day trial that also comes with 24×7 support.

Your host is Tobias Macey and today I’m interviewing Shane Gibson about how to bring Agile practices to your data management workflows.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what AgileData is and the story behind it?

What are the main industries and/or use cases that you are focused on supporting?

The data ecosystem has been trying on different paradigms from software development for some time now (e.g. DataOps, version control, etc.). What are the aspects of Agile that do and don’t map well to data engineering/analysis?

One of the perennial challenges of data analysis is how to approach data modeling. How do you balance the need to provide value with the long-term impacts of incomplete or underinformed modeling decisions made in haste at the beginning of a project?

How do you design in affordances for refactoring of the data models without breaking downstream assets?

Another aspect of implementing data products/platforms is how to manage permissions and governance. What are the incremental ways that those principles can be incorporated early and evolved along with the overall analytical products?

What are some of the organizational design strategies that you find most helpful when establishing or training a team who is working on data products?

In order to have a useful target to work toward it’s necessary to understand what the data consumers are hoping to achieve. What are some of the challenges of doing requirements gathering for data products? (e.g. not knowing what information is available, consumers not understanding what’s hard vs. easy, etc.)

How do you work with the "customers" to help them understand what a reasonable scope is and translate that to the actual project stages for the engineers?

What are some of the perennial questions or points of confusion that you have had to address with your clients on how to design and implement analytical assets?

What are the most interesting, innovative, or unexpected ways that you have seen agile principles used for data?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on AgileData?

When is agile the wrong choice for a data project?

What do you have planned for the future of AgileData?

Contact Info

LinkedIn
@shagility on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

AgileData
OptimalBI
How To Make Toast
Data Mesh
Information Product Canvas
DataKitchen

Podcast Episode

Great Expectations

Podcast Episode

Soda Data

Podcast Episode

Google DataStore
Unfix.work
Activity Schema

Podcast Episode

Data Vault

Podcast Episode

Star Schema
Lean Methodology
Scrum
Kanban

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Sponsored By: Atlan

Have you ever woken up to a crisis because a number on a dashboard is broken and no one knows why? Or sent out frustrating slack messages trying to find the right data set? Or tried to understand what a column name means?

Our friends at Atlan started out as a data team themselves and faced all of this collaboration chaos first-hand, so they started building Atlan as an internal tool. Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering teams or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more.

Go to dataengineeringpodcast.com/atlan and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.

Prefect

Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in glueing together the disparate pieces of a pipeline, and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business critical tasks a month. For more information on Prefect, visit…


Dashboards are at the forefront of today’s episode, so I will be responding to some questions from readers who wrote in to one of my weekly mailing list missives about this topic. I’ve not talked much about dashboards despite their frequent appearance in data product UIs, and in this episode, I’ll explain why. Here are some of the key points and the original questions asked in this episode:

My introduction to dashboards (00:00)

Some overall thoughts on dashboards (02:50)

What the risk is to the user if the insights are wrong or misinterpreted (04:56)

Your data outputs create an experience, whether intentional or not (07:13)

John asks: How do we figure out exactly what the jobs are that the dashboard user is trying to do? Are they building next year's budget or looking for broken widgets? What does this user value today? Is a low resource utilization percentage something to be celebrated or avoided for this dashboard user today? (13:05)

Value is not intrinsically in the dashboard (18:47)

Mareike asks: How do we provide information in a way that people are able to act upon it? How do we translate the presented information into action? What can we learn about user expectation management when designing dashboard/analytics solutions? (22:00)

The change towards predictive and prescriptive analytics (24:30)

The upfront work that needs to get done before the technology is in front of the user (30:20)

James asks: How can we get people to focus less on the assumption-laden and often restrictive term "dashboard", and instead worry about designing solutions focused on outcomes for particular personas and workflows that happen to have some or all of the typical ingredients associated with the catch-all term "dashboards"? (33:30)

Stop measuring the creation of outputs and focus on the user workflows and the jobs to be done (37:00)

The data product manager shouldn’t just be focused on deliverables (42:28)

Quotes from Today’s Episode “The term dashboards is almost meaningless today, it seems to mean almost any home default screen in a data product. It also can just mean a report. For others, it means an entire monitoring tool, for some, it means the summary of a bunch of data that lives in some other reports. The terms are all over the place.”- Brian (@rhythmspice) (01:36)

“The big idea here that I really want leaders to be thinking about here is you need to get your teams focused on workflows—sometimes called jobs to be done—and the downstream decisions that users want to make with machine-learning or analytical insights. ” - Brian (@rhythmspice) (06:12)

“This idea of human-centered design and user experience is really about trying to fit the technology into their world, from their perspective as opposed to building something in isolation where we then try to get them to adopt our thing.  This may be out of phase with the way people like to do their work and may lead to a much higher barrier to adoption.” - Brian (@rhythmspice) (14:30)

“Leaders who want their data science and analytics efforts to show value really need to understand that value is not intrinsically in the dashboard or the model or the engineering or the analysis.” - Brian (@rhythmspice) (18:45)

“There's a whole bunch of plumbing that needs to be done, and it’s really difficult. The tool that we end up generating in those situations tends to be a tool that’s modeled around the data and not modeled around [the customer's] mental model of this space, the customer purchase space, the marketing spend space, the sales conversion, or propensity-to-buy space.” - Brian (@rhythmspice) (27:48)

“Data product managers should be these problem owners, if there has to be a single entity for this. When we’re talking about different initiatives in the enterprise or for a commercial software company, it really sits at this product management function.” - Brian (@rhythmspice) (34:42)

“It’s really important that [data product managers] are not just focused on deliverables; they need to really be the ones that summarize the problem space for the entire team, and help define a strategy with the entire team that clarifies the direction the team is going in. They are not a project manager; they are someone responsible for delivering value.” - Brian (@rhythmspice) (42:23)

Links Referenced:

Mailing List: https://designingforanalytics.com/list

CED UX Framework for Advanced Analytics:
Original Article: https://designingforanalytics.com/ced
Podcast/Audio Episode: https://designingforanalytics.com/resources/episodes/086-ced-my-ux-framework-for-designing-analytics-tools-that-drive-decision-making/

My LinkedIn Live about Measuring the Usability of Data Products: https://www.linkedin.com/video/event/urn:li:ugcPost:6911800738209800192/

Work With Me / My Services: https://designingforanalytics.com/services

Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)