talk-data.com

Topic: Data Science

Tags: machine_learning, statistics, analytics

1516 tagged activities

Activity Trend: 68 peak per quarter (2020-Q1 to 2026-Q1)

Activities

1516 activities · Newest first

We talked about:

00:00 DataTalks.Club intro
01:56 Using data to create livable cities
02:52 Rachel's career journey: from geography to urban data science
04:20 What does a transport scientist do?
05:34 Short-term and long-term transportation planning
06:14 Data sources for transportation planning in Singapore
08:38 Rachel's motivation for combining geography and data science
10:19 Urban design and its connection to geography
13:12 Defining a livable city
15:30 Livability of Singapore and urban planning
18:24 Role of data science in urban and transportation planning
20:31 Predicting travel patterns for future transportation needs
22:02 Data collection and processing in transportation systems
24:02 Use of real-time data for traffic management
27:06 Incorporating generative AI into data engineering
30:09 Data analysis for transportation policies
33:19 Technologies used in text-to-SQL projects
36:12 Handling large datasets and transportation data in Singapore
42:17 Generative AI applications beyond text-to-SQL
45:26 Publishing public data and maintaining privacy
45:52 Recommended datasets and projects for data engineering beginners
49:16 Recommended resources for learning urban data science

About the speaker:

Rachel is an urban data scientist dedicated to creating liveable cities through the innovative use of data. With a background in geography and a master's in urban data science, she blends qualitative and quantitative analysis to tackle urban challenges. Her aim is to integrate data-driven techniques with urban design to foster sustainable and equitable urban environments.

Links: - https://datamall.lta.gov.sg/content/datamall/en/dynamic-data.html


Join our slack: https://datatalks.club/slack.html

Welcome to Datatopics Unplugged, where the tech world’s buzz meets laid-back banter. In each episode, we dive into the latest in AI, data science, and technology—perfect for your inner geek or curious mind. Pull up a seat, tune in, and join us for insights, laughs, and the occasional hot take on the digital world.

In this episode, we are joined by Vitale to discuss:

Meta’s video generation breakthrough: Explore Meta’s new “MovieGen” model family that generates hyper-realistic, 16-second video clips with reflections, consistent spatial details, and multi-frame coherence. Also discussed: Sora, a sneak peek at Meta’s open-source possibilities. For a look back, check out this classic AI-generated video of Will Smith eating spaghetti.

Anthropic’s Claude 3.5 updates: Meet Claude 3.5 and its “computer use” feature, letting it navigate your screen for you.

Easily fine-tune & train LLMs, faster with Unsloth: Discover tools that simplify model fine-tuning and deployment, making it easier for small-scale developers to harness AI’s power. Don’t miss Gerganov’s GitHub contributions in this space, too.

Deno 2.0 release hype: With a splashy promo video, Deno’s JavaScript runtime enters the scene as a streamlined, secure alternative to Node.js.

Pandas Cookbook - Third Edition

Discover the power of pandas for your data analysis tasks. Pandas Cookbook provides practical, hands-on recipes for mastering pandas 2.x, guiding you through real-world scenarios quickly and effectively.

What this book will help me do: Efficiently manipulate and clean data using pandas. Perform advanced grouping and aggregation operations. Handle time series data with pandas' robust functions. Optimize pandas code for better performance. Integrate pandas with tools like NumPy and databases.

Author(s): William Ayd and Matthew Harrison co-authored this insightful cookbook. With years of practical experience in data science and Python development, both authors aim to make data analysis accessible and efficient using pandas.

Who is it for? This book is perfect for Python developers and data analysts looking to enhance their data manipulation skills. Whether you're a beginner aiming to understand pandas or a professional seeking advanced insights, this book is tailored for anyone handling structured data.
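As a rough taste of the kind of recipes the book covers, here is a minimal pandas sketch (the sales dataset, column names, and values are hypothetical) combining cleaning, groupby aggregation, and a time series resample:

```python
import pandas as pd

# Hypothetical sales data; in practice this would come from a CSV or database.
df = pd.DataFrame(
    {
        "order_date": pd.to_datetime(
            ["2024-01-05", "2024-01-19", "2024-02-02", "2024-02-20", None]
        ),
        "region": ["North", "South", "North", "South", "North"],
        "revenue": [120.0, 95.5, 143.2, None, 88.0],
    }
)

# Clean: drop rows missing a date, fill missing revenue with 0.
clean = df.dropna(subset=["order_date"]).fillna({"revenue": 0.0})

# Group and aggregate: total and average revenue per region.
per_region = clean.groupby("region")["revenue"].agg(total="sum", average="mean")

# Time series: monthly revenue via a resample on the date index (month-start frequency).
monthly = clean.set_index("order_date")["revenue"].resample("MS").sum()

print(per_region)
print(monthly)
```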

Episode Summary: In this episode, we dive into the power of AI to tackle the often-overwhelming world of PDFs and technical documents. We explore an innovative tool that makes PDFs more accessible and actionable, from summarizing key insights to generating audio and even preparing Q&A. If you work in data science, AI, or any field that requires you to stay up-to-date with extensive documentation and research, this tool could be your new best friend.

Topics Covered:

* The PDF Dilemma: How data professionals face information overload from research papers, reports, and white papers. Why keeping up with technical documents can feel like a “black hole” for your time.
* AI-Powered PDF Assistance: Overview of an AI tool that leverages PyPDF2 and HuggingFace for seamless PDF extraction and summarization. Using Google Text-to-Speech to convert summaries into audio for learning on the go.
* Interactive Content Generation: How the tool creates a more interactive PDF experience by generating questions and answers. Scenarios where this could be useful: preparing for presentations, understanding dense research, and managing technical documentation.
* Real-World Scenarios and Use Cases: Examples of how a data scientist, data analyst, or any professional could save time and improve understanding. AI as a “study buddy” for deeper learning and faster, more efficient information processing.
* Balancing AI with Critical Thinking: The importance of using AI as a tool rather than a replacement for human expertise. How AI challenges us to become more thoughtful consumers of information and better thinkers overall.

Key Takeaways:

* Save Time and Boost Understanding: Embrace AI to extract core insights from complex documents, potentially freeing up hours each week to focus on high-impact tasks.
* Learn on the Go: Turn PDF content into audio to make commuting, exercising, or downtime more productive.
* Engage with Information Interactively: Use the tool’s Q&A generation feature to explore documents in a more interactive way, perfect for preparing presentations or deep-diving into research.

Final Thought: Imagine applying this technology not only to PDFs but also to other information sources like websites, articles, and even books. As AI continues to evolve, how might it transform the way we learn, work, and think?

Call to Action: If this resonates with you, let us know! Share what types of PDFs or documents you’d tackle with this AI-powered tool and how you think it could change your workflow. Don’t forget to subscribe for more insights on the latest AI tools and how they’re shaping the future of work and learning.

Link: Blog Post Link

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit mukundansankar.substack.com
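The episode does not share the tool's actual code, but a minimal sketch of the pipeline it describes might look like the following, assuming PyPDF2 for text extraction, a HuggingFace summarization pipeline, and the gTTS package for Google Text-to-Speech (the file names and model choice are illustrative assumptions, not the author's implementation):

```python
from PyPDF2 import PdfReader
from transformers import pipeline
from gtts import gTTS

# 1. Extract text from a PDF (file name is hypothetical).
reader = PdfReader("paper.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# 2. Summarize with a HuggingFace model (model choice is an assumption).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
# Summarize only the first chunk here; a real tool would chunk the whole document.
summary = summarizer(text[:3000], max_length=150, min_length=40, do_sample=False)[0]["summary_text"]

# 3. Convert the summary to audio for listening on the go.
gTTS(summary).save("summary.mp3")

print(summary)
```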

We talked about:

00:00 DataTalks.Club intro

00:00 DataTalks.Club anniversary "Ask Me Anything" event with Alexey Grigorev

02:29 The founding of DataTalks.Club

03:52 Alexey's transition from Java work to DataTalks.Club

04:58 Growth and success of DataTalks.Club courses

12:04 Motivation behind creating a free-to-learn community

24:03 Staying updated in data science through pet projects

26:37 Hosting a second podcast and maintaining programming skills

28:56 Skepticism about LLMs and their relevance

31:53 Transitioning to DataTalks.Club and personal reflections

33:32 Memorable moments and the first event's success

36:19 Community building during the pandemic

38:31 AI's impact on data analysts and future roles

42:24 Discussion on AI in healthcare

44:37 Age and reflections on personal milestones

47:54 Building communities and personal connections

49:34 Future goals for the community and courses

51:18 Community involvement and engagement strategies

53:46 Ideas for competitions and hackathons

54:20 Inviting guests to the podcast

55:29 Course updates and future workshops

56:27 Podcast preparation and research process

58:30 Career opportunities in data science and transitioning fields

1:01:10 Book recommendations and personal reading experiences

About the speaker:

Alexey Grigorev is the founder of DataTalks.Club.

Join our slack: https://datatalks.club/slack.html

Today's analytics and data science job market seems to be as competitive as it's ever been, so it's more important than ever to know what employers are looking for and to have a solid plan of attack in your job search. In this episode, Luke Barousse and Kelly Adams will walk us through their insights from the job market, talk about exactly what employers are looking for, and lay out an actionable plan for you to start building skills that will help you in your career. You'll leave this show with a deeper understanding of the job market and a concrete roadmap you can use to take your data skills and career to the next level.

What You'll Learn:
Insights from a deep analysis of the data science and analytics job market
The skills employers are looking for, and why they matter
A roadmap for building key data science and data analytics skills

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

About our guests: Luke Barousse is a data analyst, YouTuber, and engineer who helps data nerds be more productive. Follow Luke on LinkedIn. Subscribe to Luke's YouTube Channel. Luke's Python, SQL, and ChatGPT Courses.

Kelly Adams is a data analyst, course creator, and writer. Kelly's Website. Follow Kelly on LinkedIn. Datanerd.Tech.

Insights on integrating professional training into a busy work life, practical tips for balancing work and ongoing education, trends in data science, AI, and tech training, and strategies to leverage online and flexible learning options. Includes success stories of expats who upskilled.

SQL is one of the most widely used data analysis tools around, often discussed as a cornerstone for Data Analysis, Data Science, and Data Engineering careers. In this episode, Thais Cooke talks about how she leverages SQL in her role as a Data Analyst and shares practical tips you can use to take your SQL game to the next level. You'll leave the show with an insider's perspective on where SQL adds the most value, and where you should focus if you want to build SQL skills that will advance your career.

What You'll Learn:
What makes SQL such a valuable skill set for so many roles
Some of the most valuable ways you can use SQL on the job
Where you can focus if you want to build job-ready SQL skills

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

About our guest: Thais Cooke is a Data Analyst proficient in Excel, SQL, and Python with a background in Clinical Healthcare. SQL for Healthcare Professionals Course. Follow Thais on LinkedIn.


Coalesce 2024: Designing Figma Data Science’s first ML system

Despite the popularity of ML as a technical solution, there are few resources on the practical aspects of deploying your first ML model. This talk covers Figma’s journey from ideation to post launch: how we decided to invest, how we designed and built our first pipeline with dbt, what we wish we did differently on the way to production, and what came after our first launch.

Speaker: Emily Jia, Data Scientist, Figma

Read the blog to learn about the latest dbt Cloud features announced at Coalesce, designed to help organizations embrace analytics best practices at scale https://www.getdbt.com/blog/coalesce-2024-product-announcements

Coalesce 2024: How Virgin Media O2 streamlines operations with dbt Cloud

Learn how Virgin Media O2 uses dbt Cloud to enhance call center efficiency, personalize customer communications, and accelerate data science workflows. In this session, we will share details about our innovative continuous flow system, developed using best practices from Toyota Kanban, and how it helps reduce operational waste and costs. We will also highlight a number of capabilities within dbt Cloud that support continuous data flows by automating manual tasks.


Speakers:
Arun Kumaravel, Senior Analytics Engineer, Virgin Media O2
Oliver Burt, Lead Analytics Engineer, Virgin Media O2
Gordon Curzon, Head of Analytics Engineering, Virgin Media O2

Coalesce 2024: How to leverage dbt for embedded domain knowledge across product engineering teams

In today's data-driven world, harnessing the power of data is no longer an option but a necessity for businesses to thrive. For product engineering teams in particular, timely access to accurate and contextual data is crucial for making informed decisions and monitoring success. In this conversation, Aakriti Kaul and Scott Henry, Data Scientists at Cisco, dive into Duo Security’s data modernization journey, bolstered by dbt Cloud and embedded context in data, aimed at empowering product teams with data access and insights to drive innovation.

At the end of this session we hope to leave attendees with the following takeaways:
• Understand how an embedded data science model creates value across Product, Engineering, and Data teams
• Learn practical strategies for implementing dbt within product development workflows to accelerate decision making and drive innovation, in partnership with Analytics Engineering teams
• Gain insights from real-world case studies of Duo’s Product Data teams that have successfully leveraged dbt to provide access to data and insights for product teams
• Gain insights from our organizational experience using dbt to provide product teams with self-service access to contextual datasets

The presentation is designed for data scientists, analytics engineers and other professionals involved in product development who are interested in leveraging data to drive decision making and embedding context within their data workflows. Whether you're new to dbt or looking to optimize your existing data analytics workflows, this session will provide valuable insights and practical strategies for harnessing the power of dbt in partnership with product engineering teams.

Speakers:
Aakriti Kaul, Data Scientist, Duo Security @ Cisco
Scott Henry, Data Scientist, Duo Security @ Cisco


podcast_episode
by Vijay Yadav (Center for Mathematical Sciences at Merck), Joe Reis (DeepLearning.AI)

Vijay Yadav (Director of Data Science at Merck) joins me to chat about a very interesting project he launched at Merck involving LLMs in production. A big part of this discussion is how to make data ready for generative AI.

This is a great example of an LLM-native use case in production, and those are rare right now. Lots to learn from here. Enjoy!

LinkedIn: https://www.linkedin.com/in/vijay-yadav-ds/

The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Season 01, Episode 20, Nick is back and this time he is chatting with Ganesh Prasad. They dive into Ganesh's background as a data product manager and his journey from data science to product management. The discussion leads into the differences between internal and external products, the importance of user interviews and discovery, and the challenges and advantages of working in big tech and financial industries. Follow along as Ganesh shares some valuable tips and explains the importance of having a product mindset.

About our host Nick Zervoudis: Nick is Head of Product at CKDelta, an AI software business within the CK Hutchison Holdings group. Nick oversees a portfolio of data products and works with sister companies to uncover new opportunities to innovate using data, analytics, and machine learning. Nick's career has revolved around data and advanced analytics from day one, having worked as an analyst, consultant, product manager, and instructor for startups, SMEs, and enterprises including PepsiCo, Sainsbury's, Lloyds Banking Group, IKEA, Capgemini Invent, BrainStation, QuantSpark, and Hg Capital. Nick is also the co-host of London's Data Product Management meetup, and speaks and writes regularly about data and AI product management. Connect with Nick on LinkedIn.

About our guest Ganesh Prasad: Ganesh is a Senior Product Lead in the Data Analytics division at Salesforce, bringing over 5 years of experience in data product management from both Salesforce and Mastercard. He has a proven track record of successfully launching and scaling products that meet customer needs. Ganesh has successfully managed and developed analytics, ML, and AI products across various domains, including marketing analytics, fraud detection, revenue forecasting, and platform optimization. Transitioning from a data scientist to a product manager, Ganesh is passionate about the intersection of data and product development. He leads the PM Community of Practice for the Data Analytics division at Salesforce and dedicates his spare time to mentoring others in the field. Connect with Ganesh on LinkedIn.

All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn. Apply to be a guest or nominate someone that you know. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights!

Sometimes DIY UI/UX design only gets you so far—and you know it’s time for outside help. One thing prospects from SaaS analytics and data-related product companies often ask me is what things are like in the other guy/gal’s backyard. They want to compare their situation to others like them. So, today, I want to share some of the common “themes” I see that are usually the root causes of what leads to a phone call with me.

By the time I am on the phone with most prospects who already have a product in market, they’re usually having significant problems with one or more of the following: sales friction (product value is opaque); low adoption or renewal worries (user apathy); customer complaints about the UI/UX being hard to use; velocity (the team is doing tons of work, but the leader isn’t seeing progress)—and the like.

I’m hoping today’s episode will explain some of the root causes that may lead to these issues — so you can avoid them in your data product building work!  

Highlights/ Skip to:

(10:47) Design != "front-end development" or analyst work
(12:34) Liking doing UI/UX/viz design work vs. knowing
(15:04) When a leader sees lots of work being done, but the UX/design isn’t progressing
(17:31) Your product’s UX needs to convey some magic IP/special sauce…but it isn’t
(20:25) Understanding the tradeoffs of using libraries, templates, and other solutions’ designs as a foundation for your own
(25:28) The sunk cost bias associated with POCs and “we’ll iterate on it”
(28:31) Relying on UI/UX "customization" to please all customers
(31:26) The hidden costs of abstraction of system objects, UI components, etc. to make life easier for engineering and technical teams
(32:32) Believing you’ll know the design is good “when you see it” (and what you don’t know you don’t know)
(36:43) Believing that because the data science/AI/ML modeling under your solution was accurate, difficult, and/or expensive, it is automatically worth paying for

Quotes from Today’s Episode

The challenge is often not knowing what you don’t know about a project. We often end up focusing on building the tech [and rushing it out] so we can get some feedback on it… but product is not about getting it out there so we can get feedback. The goal of doing product well is to produce value, benefits, or outcomes. Learning is important, but that’s not what the objective is. The objective is benefits creation. (5:47)

When we start doing design on a project that’s not design-actionable, we build debt and sometimes can hurt the process of design. If you start designing your product with an entire green space, no direction, and no constraints, the chance of you shipping a good v1 is small. Your product strategy needs to be design-actionable for the team to properly execute against it. (19:19)

While you don’t always need to start at zero with your UI/UX design, what are the parts of your product or application that do make sense to borrow, “steal,” and cheat from? And when does it not? It takes skill to know when you should be breaking the rules or conventions. Shortcuts often don’t produce outsized results—unless you know what a good shortcut looks like. (22:28)

A proof of concept is not a minimum valuable product. There’s a difference between proving the tech can work and making it into a product that’s so valuable, someone would exchange money for it because it’s so useful to them. Whatever that value is, these are two different things. (26:40)

Trying to do a little bit for everybody [through excessive customization] can often result in nobody understanding the value or utility of your solution. Customization can hide the fact the team has decided not to make difficult choices. If you’re coming into a crowded space… it’s likely not going to be a compelling reason to [convince customers to switch to your solution]. Customization can be a tax, not a benefit. (29:26)

Watch for the sunk cost bias [in product development]. [Buyers] don’t care how the sausage was made. Many don’t understand how the AI stuff works; they probably don’t need to understand how it works. They want the benefits downstream from technology wrapped up in something so invaluable they can’t live without it. Watch out for technically right, effectively wrong. (39:27)

Businesses are collecting more data than ever before. But is bigger always better? Many companies are starting to question whether massive datasets and complex infrastructure are truly delivering results or just adding unnecessary costs and complications. How can you make sure your data strategy is aligned with your actual needs? What if focusing on smaller, more manageable datasets could improve your efficiency and save resources, all while delivering the same insights?

Ryan Boyd is the Co-Founder & VP, Marketing + DevRel at MotherDuck. Ryan started his career as a software engineer, but has since led DevRel teams for 15+ years at Google, Databricks, and Neo4j, where he developed and executed numerous marketing and DevRel programs. Prior to MotherDuck, Ryan worked at Databricks and focused the team on building an online community during the pandemic, helping to organize the content and experience for an online Data + AI Summit, establishing a regular cadence of video and blog content, launching the Databricks Beacons ambassador program, improving the time to an “aha” moment in the online trial, and launching a University Alliance program to help professors teach the latest in data science, machine learning, and data engineering.

In the episode, Richie and Ryan explore data growth and computation, the data 1%, the small data movement, data storage and usage, the shift to local and hybrid computing, modern data tools, the challenges of big data, transactional vs. analytical databases, SQL language enhancements, simple and ergonomic data solutions, and much more.

Links Mentioned in the Show: MotherDuck, The Small Data Manifesto, Connect with Ryan, Small Data SF conference, Related Episode: Effective Data Engineering with Liya Aizenberg, Director of Data Engineering at Away, Rewatch sessions from RADAR: AI Edition.

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
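The episode's "small data" theme centers on local, analytical tooling such as DuckDB, the engine MotherDuck builds on. As a rough illustration of the kind of workflow being discussed (the file name and column names are hypothetical), a single local Parquet file can be queried directly without a warehouse or cluster:

```python
import duckdb

# Query a local Parquet file directly; file and column names are illustrative.
result = duckdb.sql(
    """
    SELECT region, COUNT(*) AS orders, SUM(revenue) AS total_revenue
    FROM 'orders.parquet'
    GROUP BY region
    ORDER BY total_revenue DESC
    """
).df()

print(result)
```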

Databricks Data Intelligence Platform: Unlocking the GenAI Revolution

This book is your comprehensive guide to building robust Generative AI solutions using the Databricks Data Intelligence Platform. Databricks is the fastest-growing data platform offering unified analytics and AI capabilities within a single governance framework, enabling organizations to streamline their data processing workflows, from ingestion to visualization. Additionally, Databricks provides features to train a high-quality large language model (LLM), whether you are looking for Retrieval-Augmented Generation (RAG) or fine-tuning. Databricks offers a scalable and efficient solution for processing large volumes of both structured and unstructured data, facilitating advanced analytics, machine learning, and real-time processing. In today's GenAI world, Databricks plays a crucial role in empowering organizations to extract value from their data effectively, driving innovation and gaining a competitive edge in the digital age. This book will not only help you master the Data Intelligence Platform but also help power your enterprise to the next level with a bespoke LLM unique to your organization.

Beginning with foundational principles, the book starts with a platform overview and explores features and best practices for ingestion, transformation, and storage with Delta Lake. Advanced topics include leveraging Databricks SQL for querying and visualizing large datasets, ensuring data governance and security with Unity Catalog, and deploying machine learning and LLMs using Databricks MLflow for GenAI. Through practical examples, insights, and best practices, this book equips solution architects and data engineers with the knowledge to design and implement scalable data solutions, making it an indispensable resource for modern enterprises. Whether you are new to Databricks and trying to learn a new platform, a seasoned practitioner building data pipelines, data science models, or GenAI applications, or even an executive who wants to communicate the value of Databricks to customers, this book is for you. With its extensive feature and best practice deep dives, it also serves as an excellent reference guide if you are preparing for Databricks certification exams.

What You Will Learn:
Foundational principles of Lakehouse architecture
Key features including Unity Catalog, Databricks SQL (DBSQL), and Delta Live Tables
Databricks Intelligence Platform and key functionalities
Building and deploying GenAI applications from data ingestion to model serving
Databricks pricing, platform security, DBRX, and many more topics

Who This Book Is For:
Solution architects, data engineers, data scientists, Databricks practitioners, and anyone who wants to deploy their GenAI solutions with the Data Intelligence Platform. This is also a handbook for senior execs who need to communicate the value of Databricks to customers. People who are new to the Databricks Platform and want comprehensive insights will find the book accessible.
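As a small taste of the ingestion-to-Delta pattern the book describes, here is a minimal PySpark sketch of the kind you might run in a Databricks notebook; the source path, table name, and columns are hypothetical, and this is an illustration rather than an excerpt from the book:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks the session already exists; getOrCreate() simply returns it.
spark = SparkSession.builder.getOrCreate()

# Ingest raw JSON events (path is hypothetical).
raw = spark.read.json("/Volumes/demo/raw/events/")

# Light transformation: parse the timestamp and keep a few columns
# (user_id, event_type, and event_time are assumed to exist in the raw data).
events = (
    raw.withColumn("event_ts", F.to_timestamp("event_time"))
       .select("user_id", "event_type", "event_ts")
)

# Store as a Delta table; the three-level name assumes a Unity Catalog setup.
events.write.format("delta").mode("overwrite").saveAsTable("demo.analytics.events")
```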

The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Season 01, Episode 19, host Nadiem von Heydebrand interviews Pradeep Fernando, who leads the data and metadata management initiative at Swisscom. They explore key topics in data product management, including the definition and categorization of data products, the role of AI, prioritization strategies, and the application of product management principles. Pradeep shares valuable insights and experiences on successfully implementing data product management within organizations.

About our host Nadiem von Heydebrand: Nadiem is the CEO and Co-Founder of Mindfuel. In 2019, he merged his passion for data science with product management, becoming a thought leader in data product management. Nadiem is dedicated to demonstrating the true value contribution of data. With over a decade of experience in the data industry, Nadiem leverages his expertise to scale data platforms, implement data mesh concepts, and transform AI performance into business performance, delighting consumers at global organizations that include Volkswagen, Munich Re, Allianz, Red Bull, and Vorwerk. Connect with Nadiem on LinkedIn.

About our guest Pradeep Fernando: Pradeep is a seasoned data product leader with over 6 years of data product leadership experience and over 10 years of product management experience. He leads or is a key contributor to several company-wide data & analytics initiatives at Swisscom, such as Data as a Product (Data Mesh), One Data Platform, Machine Learning (Factory), metadata management, self-service data & analytics, BI tooling strategy, cloud transformation, big data platforms, and data warehousing. Previously, he was a product manager at both Swisscom's B2B and Innovation units, building new products and optimizing mature products (profitability) in the domains of enterprise mobile fleet management and cyber- and mobile-device security. Pradeep is also passionate about and experienced in leading the development of data products and transforming IT delivery teams into empowered, agile product teams. And he is always happy to engage in a conversation about lean product management or "heavier" topics such as humanity's future or our past. Connect with Pradeep on LinkedIn.

All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn. Apply to be a guest or nominate someone that you know. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights!

podcast_episode
by Scott, Daniil Shvets (ASAO DS: Data & AI Consulting Boutique), Yuliia Tkachova (Masthead Data)

Daniil Shvets, CEO and co-founder of ASAO DS: Data & AI Consulting Boutique, previously led Data Science and Product teams at various companies. Daniil sat down with Yuliia and Scott to share his opinion that Data Science is a business department with appropriate data skills rather than an IT department. He explained why having 54 ML models at one of the largest retailers in the USA is the wrong approach. Daniil also shared his views on the biggest challenges in how Data Science's role is perceived. We also touched on AI and the consultancy business while Scott made all possible relationship analogies. :)

Daniil's LinkedIn: https://www.linkedin.com/in/daniilshvets/

Show Notes

The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Season 01, Episode 18, our host Frannie Helforoush is back again, interviewing Katy Pusch about her extensive experience in data product management, particularly with decision-support data products. Katy shares her insights on incorporating machine learning and analytics to empower stakeholders in making informed decisions. They explore team structure, the challenges encountered in product development, and the critical importance of validating products with users to ensure their effectiveness.

About our host Frannie Helforoush: Frannie's journey began as a software engineer and evolved into a strategic product manager. Now, as a data product manager, she leverages her expertise in both fields to create impactful solutions. Frannie thrives on making data accessible and actionable, driving product innovation, and ensuring product thinking is integral to data management. Connect with Frannie on LinkedIn.

About our guest Katy Pusch: Katy brings more than a decade of experience in product management and market strategy, driving market change and adoption of innovative technology solutions. She has successfully built and launched data products, IoT solutions, and SaaS platforms in multiple industries such as healthcare, education, and real estate. She is currently serving as a Sr. Product Line Director at Trintech. With a background in research, she brings data science and market intelligence to every aspect of her work. Katy is passionate about data privacy and tech ethics, and is pursuing an MS in History and Sociology of Technology and Science at Georgia Tech. When she’s not working with her team to deliver top solutions, Katy enjoys spending time with her husband, building Lego models, and pursuing a private pilot license. Connect with Katy on LinkedIn.

All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn. Apply to be a guest or nominate someone that you know. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights!