Does size matter? When it comes to datasets, the conventional wisdom seems to be a resounding, "Yes!" But what about small datasets? Small- and mid-sized businesses and nonprofits, especially, often have limited web traffic, small email lists, CRM systems that can comfortably operate under the free tier, and lead and order counts that don't lend themselves to "big data" descriptors. Even large enterprises have scenarios where some datasets easily fit into Google Sheets with limited scrolling required. Should this data be dismissed out of hand, or should it be treated as what it is: potentially useful? Joe Domaleski from Country Fried Creative works with a lot of businesses that are operating in the small data world, and he was so intrigued by the potential of putting data to use on behalf of his clients that he's mid-way through getting a Master's degree in Analytics from Georgia Tech! He wrote a really useful article about the ins and outs of small data, so we brought him on for a discussion on the topic! This episode's Measurement Bite from show sponsor Recast is an explanation of synthetic controls and how they can be used as counterfactuals from Michael Kaminsky! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
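The Measurement Bite's synthetic-control idea lends itself to a tiny worked sketch. All numbers below are invented for illustration (they are not from the episode): the counterfactual for a treated unit is built as a weighted blend of untreated "donor" units, with the weight chosen to reproduce the treated unit's pre-intervention trend; the post-intervention gap between actual and counterfactual is the estimated effect.

```python
# Hypothetical synthetic-control sketch: two donor units, one weight,
# chosen by grid search to match the treated unit's pre-period.

pre_treated  = [10, 12, 13, 15]      # treated unit, before intervention
pre_donors   = [[9, 11, 12, 14],     # donor unit A, before intervention
                [12, 14, 16, 18]]    # donor unit B, before intervention
post_donors  = [[15, 16], [20, 21]]  # donors, after intervention
post_treated = [22, 24]              # treated unit, after intervention

def blend(w, a, b):
    """Weighted combination of two donor series (w on a, 1-w on b)."""
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def sse(xs, ys):
    """Sum of squared errors between two series."""
    return sum((x - y) ** 2 for x, y in zip(xs, ys))

# Pick the donor weight that best reproduces the pre-intervention trend.
best_w = min((w / 100 for w in range(101)),
             key=lambda w: sse(blend(w, *pre_donors), pre_treated))

# Project that blend forward as the counterfactual; the gap is the effect.
counterfactual = blend(best_w, *post_donors)
effect = [t - c for t, c in zip(post_treated, counterfactual)]
```

With these toy series the best weight lands at 0.72 on donor A, and the post-period gap between the treated unit and its synthetic twin is positive in both periods; real applications use many donors and a constrained optimizer rather than a grid search.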
In this episode, we talked with Aishwarya Jadhav, a machine learning engineer whose career has spanned Morgan Stanley, Tesla, and now Waymo. Aishwarya shares her journey from big data in finance to applied AI in self-driving, gesture understanding, and computer vision. She discusses building an AI guide dog for the visually impaired, contributing to malaria mapping in Africa, and the challenges of deploying safe autonomous systems. We also explore the intersection of computer vision, NLP, and LLMs, and what it takes to break into the self-driving AI industry.

TIMECODES
00:51 Aishwarya’s career journey from finance to self-driving AI
05:45 Building an AI guide dog for the visually impaired
12:03 Exploring LiDAR, radar, and Tesla’s camera-based approach
16:24 Trust, regulation, and challenges in self-driving adoption
19:39 Waymo, ride-hailing, and gesture recognition for traffic control
24:18 Malaria mapping in Africa and AI for social good
29:40 Deployment, safety, and testing in self-driving systems
37:00 Transition from NLP to computer vision and deep learning
43:37 Reinforcement learning, robotics, and self-driving constraints
51:28 Testing processes, evaluations, and staged rollouts for autonomous driving
52:53 Can multimodal LLMs be applied to self-driving?
55:33 How to get started in self-driving AI careers

Connect with Aishwarya:
- LinkedIn - https://www.linkedin.com/in/aishwaryajadhav8/

Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub: https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/
From spreadsheets to strategy: what does data look like from the CEO's chair? For this episode, we sat down with Anna Lee, CEO of Flybuys and former CFO/COO of THE ICONIC, to get her view on data-led leadership and what great looks like in data and analytics. Discover how Anna's journey from finance to the corner office has shaped her approach to leveraging evidence for strategic decision-making. From productive curiosity, to informed pragmatism, and how data teams can build trust with leadership, this is a candid conversation about analytics from the top down. Whether you're embedded in a squad or building the next big data platform, this one's for anyone who's ever wondered what it takes to truly influence the C-suite! This episode's Measurement Bite from show sponsor Recast is an overview of the fundamental problem of causal inference from Michael Kaminsky! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
In this episode, we’re diving into a fascinating intersection of cutting-edge science and data innovation. As technology continues to evolve, researchers are increasingly turning to brain organoids (miniature, lab-grown models of the human brain) to unravel some of the most complex mysteries of neuroscience. We’re joined by three brain organoid experts: Thomas Hartung, Professor of Environmental Health and Engineering at Johns Hopkins University; Jack Van Horn, Professor of Data Science and Psychology at the University of Virginia; and Lulu Jiang, Assistant Professor of Neuroscience, also at the University of Virginia. Together, they’ll shed light on how brain organoid technology is reshaping our understanding of the brain, and how data science is playing a crucial role in unlocking its secrets.
Chapters:
(00:00:51) - Brain Organoids
(00:05:54) - Brain Organoids for drug discovery and immunology
(00:13:53) - Alzheimer's disease in the organoid system
(00:15:49) - What are the standards in the field of brain organoids?
(00:22:44) - Big Data and Intelligence in the Brain
(00:26:50) - Alzheimer's disease, the human brain
(00:30:39) - The computational twin of the brain
(00:37:23) - The quest for precision medicine in the brain
(00:42:17) - The human brain in an organoid
(00:43:21) - Will Brain Derived Organoids Replace Animal Models in Neurodegener
Summary
In this episode of the Data Engineering Podcast, host Tobias Macey talks with Kacper Łukawski from Qdrant about integrating MCP servers with vector databases to process unstructured data. Kacper shares his experience in data engineering, from building big data pipelines in the automotive industry to leveraging large language models (LLMs) for transforming unstructured datasets into valuable assets. He discusses the challenges of building data pipelines for unstructured data and how vector databases facilitate semantic search and retrieval-augmented generation (RAG) applications. Kacper delves into the intricacies of vector storage and search, including metadata and contextual elements, and explores the evolution of vector engines beyond RAG to applications like semantic search and anomaly detection. The conversation covers the role of Model Context Protocol (MCP) servers in simplifying data integration and retrieval processes, highlighting the need for experimentation and evaluation when adopting LLMs, and offering practical advice on optimizing vector search costs and fine-tuning embedding models for improved search quality.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Kacper Łukawski about how MCP servers can be paired with vector databases to streamline processing of unstructured data.

Interview
- Introduction
- How did you get involved in the area of data management?
- LLMs are enabling the derivation of useful data assets from unstructured sources. What are the challenges that teams face in building the pipelines to support that work?
- How has the role of vector engines grown or evolved in the past ~2 years as LLMs have gained broader adoption?
- Beyond its role as a store of context for agents, RAG, etc. what other applications are common for vector databases?
- In the ecosystem of vector engines, what are the distinctive elements of Qdrant?
- How has the MCP specification simplified the work of processing unstructured data?
- Can you describe the toolchain and workflow involved in building a data pipeline that leverages an MCP for generating embeddings?
  - Helping data engineers gain confidence in non-deterministic workflows
  - Bringing application/ML/data teams into collaboration for determining the impact of e.g. chunking strategies, embedding model selection, etc.
- What are the most interesting, innovative, or unexpected ways that you have seen MCP and Qdrant used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on vector use cases?
- When is MCP and/or Qdrant the wrong choice?
- What do you have planned for the future of MCP with Qdrant?

Contact Info
- LinkedIn
- Twitter/X
- Personal website

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- Qdrant
- Kafka
- Apache Oozie
- Named Entity Recognition
- GraphRAG
- pgvector
- Elasticsearch
- Apache Lucene
- OpenSearch
- BM25
- Semantic Search
- MCP == Model Context Protocol
- Anthropic Contextualized Chunking
- Cohere

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
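Since the conversation centers on how vector engines answer semantic queries, here is a minimal, dependency-free sketch of the core retrieval step. The three-dimensional "embeddings" and payloads below are invented for illustration; a real pipeline would produce high-dimensional vectors with an embedding model and delegate storage, indexing, and filtering to an engine such as Qdrant.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# (id, embedding, payload) — the payload mirrors the metadata and
# contextual elements the episode describes storing alongside vectors.
points = [
    (1, [0.9, 0.1, 0.0], {"text": "invoice processing pipeline"}),
    (2, [0.8, 0.2, 0.1], {"text": "billing data extraction"}),
    (3, [0.0, 0.1, 0.9], {"text": "office party planning"}),
]

def search(query_vec, top_k=2):
    """Rank stored points by similarity to the query embedding."""
    scored = sorted(points, key=lambda p: cosine(query_vec, p[1]),
                    reverse=True)
    return [(pid, payload) for pid, _, payload in scored[:top_k]]

# A query embedding close to the "invoice"/"billing" cluster retrieves
# those two documents first.
results = search([0.85, 0.15, 0.05])
```

A brute-force scan like this is exactly what approximate-nearest-neighbor indexes in vector databases exist to avoid at scale, but the ranking logic is the same.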
Despite $180 billion spent on big data tools and technologies, poor data quality remains a significant barrier for businesses, especially in achieving Generative AI goals. Published at: https://www.eckerson.com/articles/poor-data-quality-is-a-full-blown-crisis-a-2024-customer-insight-report
How do you make sense of massive, interconnected datasets across time? In this episode of Data Unchained, we sit down with Ben Steer, Founder and CTO of Pometry, to explore the power of temporal graph analytics, a revolutionary approach called "Big Data, Small Box," and how data can help prevent fraud and black market trading.
#DataUnchained #EnterpriseData #CIO #CTO #CISO #DataStrategy #DigitalTransformation #BigData #CloudComputing #GraphAnalytics #AI #MachineLearning #DataEngineering #DataSecurity #BusinessIntelligence #TechLeadership #TechInnovation #AIinBusiness #ITStrategy #CyberSecurity #HPC #CloudCostOptimization #DataScience #Podcast #TechPodcast #BusinessPodcast #DataPodcast #Innovation
Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US Hosted on Acast. See acast.com/privacy for more information.
If you want to build a strong career in data, this show is for you. We welcomed the new face of Mavens of Data, Kristen Kehrer, who shared her best advice for data professionals and those aspiring toward a data career. You'll leave the show with some actionable tips and some of the best career advice directly from one of our favorite data pros of all time.

What You'll Learn:
- What you should focus on if you're trying to land your first job
- How to succeed once you are in that initial role
- How to think about building a successful career long-term

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

About our guest: Kristen Kehrer has been providing innovative & practical statistical modeling solutions in the utilities, healthcare, and eCommerce sectors since 2010. Alongside her professional accomplishments, she achieved recognition as a LinkedIn Top Voice in Data Science & Analytics in 2018. Kristen is also the founder of Data Moves Me, LLC, and has previously served as a faculty member and subject matter expert at the Emeritus Institute of Management and UC Berkeley Ext.
Kristen lights up on stage and has spoken at conferences including ODSC, DataScienceGO, BI+Analytics Conference, Boye Conference, and Big Data LDN.
She holds a Master of Science degree in Applied Statistics from Worcester Polytechnic Institute and a Bachelor of Science degree in Mathematics.
datamovesme.com Follow us on Socials: LinkedIn YouTube Instagram (Mavens of Data) Instagram (Maven Analytics) TikTok Facebook Medium X/Twitter
Send us a text

In this special replay episode of Making Data Simple, Al Martin sits down with Matt Cowell, CEO of QuantHub, to dive deep into data literacy, upskilling, and solving learning challenges. Matt shares his expertise on defining data fluency, the best ways to learn, and how organizations can close the data skill gap. From client use cases to leadership insights, this episode is packed with valuable takeaways for businesses and individuals navigating the data-driven world.

Show Notes & Chapter Markers:
⏳ 2:25 – From SVP of Products to Data Learning Business
📊 3:48 – Defining Data Literacy
🎓 5:50 – Teaching the Products
🚧 7:36 – What’s Out of Scope?
🏢 12:50 – Client Use Case
💡 18:07 – Solving Learning Problems
📖 21:14 – What Does a Learning Plan Look Like?
🔍 25:08 – Defining Micro
🧠 30:20 – Best Ways to Learn
📈 33:14 – Measuring Success
💰 34:47 – Venture Capital Funding
🌟 36:10 – Fundamental Leadership Belief
🔑 38:24 – The Most Valuable Leadership Skill

🔗 Connect & Resources:
- QuantHub
- Matt Cowell on LinkedIn
- Books Mentioned: Monetizing Innovation, Ultra Learning

Connect with the Team:
🎤 Host: Al Martin
🎬 Producers: Kate Mayne
📩 Want to be a guest? Reach out to [email protected] and tell us why you should be next!

📢 Hashtags:
#MakingDataSimple #DataLiteracy #Upskilling #AI #BigData #TechPodcast #Leadership #LearnData #QuantHub
Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
Welcome back to another podcast episode of Data Unchained. Jon Toor, CMO of Cloudian, joins us at Super Computing 2024 to discuss the future of decentralized data management, the evolving landscape of AI-driven storage, and what the next steps look like for metadata and object storage.
#DataUnchained #Supercomputing2024 #AI #GPUComputing #ObjectStorage #GPUDirect #Cloudian #Hammerspace #DataScience #MachineLearning #AIInfrastructure #DataStorage #TechPodcast #ArtificialIntelligence #SC24 #BigData #DataManagement
As AI continues to dominate industry conversations, the notion of AI readiness becomes a focal point for organizations. It's a multifaceted challenge that goes beyond technology, encompassing business processes and cultural shifts. For professionals, this means grappling with questions like: How do you choose the right AI projects that align with business goals? What skills and team structures are necessary to support AI initiatives? And how do you manage the change that comes with integrating AI into your operations? Venky Veeraraghavan is the Chief Product Officer at DataRobot. As CPO, Venky drives the definition and delivery of the DataRobot Enterprise AI Suite. Venky has twenty-five years of experience focusing on big data and AI as a product leader and technical consultant at top technology companies (Microsoft) and early-stage startups (Trilogy). In the episode, Richie and Venky Veeraraghavan explore AI readiness in organizations, the importance of aligning AI with business processes, the roles and skills needed for AI integration, the balance between building and buying AI solutions, the challenges of implementing AI-driven changes, and much more.

Links Mentioned in the Show:
- DataRobot
- Connect with Venky
- Skill Track: Artificial Intelligence (AI) Leadership
- Related Episode: Aligning AI with Enterprise Strategy with Leon Gordon, CEO at Onyx Data
- Attend RADAR Skills Edition
- New to DataCamp? Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business
On this podcast episode of Data Unchained, David Cerf, Chief Data Evangelist for GRAU DATA GmbH, joins us to talk about how metadata is playing a big part in accessing large data sets and helping researchers, scientists, and engineers make sense of billions of files at scale.
#DataUnchained #TechPodcast #Supercomputing #AIInfrastructure #CloudComputing #DataManagement #Metadata #DataScience #BigData #HPC #HighPerformanceComputing #AIRevolution #DigitalTransformation #DataAnalytics #EnterpriseTech #FutureOfData #TechTalk #Innovation #CloudStorage #PodcastLife #TechInsights #ITInfrastructure #TechLeaders #MachineLearning #DataDriven
Welcome to Data Unchained, where we explore the decentralization of data and the cutting-edge technologies shaping the future of AI and HPC. Recorded live from Supercomputing 24 in Atlanta, Georgia, this episode features an in-depth conversation with Gary Grider, a leading technologist at Los Alamos National Laboratory, and host Molly Presley.

Episode Highlights:
- The evolution of storage systems: From early file systems to groundbreaking innovations like Lustre, HPSS, and NFS.
- Overcoming storage challenges in massive-scale HPC and AI environments.
- Insights into Los Alamos’ role in virtual nuclear testing and managing petabyte-scale simulations.
- How Hammerspace Tier 0 technology is transforming local storage in compute nodes.
- The convergence of AI and HPC: A look into standardizing infrastructure to support modern workloads.

Gary shares his decades-long journey in storage innovation, the importance of standardized protocols like NFS, and the revolutionary impact of integrating compute and storage technologies to streamline workflows for industries beyond HPC.
#DataUnchained #Supercomputing24 #HPC #AIWorkflows #DataStorage #DecentralizedData #NFS #LosAlamos #GaryGrider #BigData #ParallelComputing #Tier0Storage #AIInfrastructure #TechPodcast #Innovation #CloudComputing #MachineLearning #HybridCloud #MultiCloud #Supercomputing #TechInnovation #ArtificialIntelligence #HighPerformanceComputing #DataScience #ComputePower
Welcome to Data Unchained, the podcast where we delve into the evolving world of decentralized data and workflows. Hosted by Molly Presley, this episode features a thought-provoking discussion with Matthew Shaxted, Co-Founder and CEO of Parallel Works, about the challenges and opportunities in hybrid and multi-cloud environments.

Key Highlights:
- The journey of Parallel Works: From HPC simulations to democratizing large-scale computing resources.
- The convergence of HPC and AI infrastructure—how organizations are adapting to GPU-heavy workflows.
- Overcoming decentralized data challenges: Solutions for application portability and cost-efficient workload management.
- The evolution of AI-driven task placement for seamless resource optimization.
- Real-world insights into managing hybrid and multi-cloud workloads with cost controls and global namespaces.

Matthew also introduces ACTIVATE, Parallel Works' next-gen hybrid multi-cloud platform, and shares exciting announcements for the future, including advancements in Kubernetes integration and benchmarking AI task placement.

Learn more about Parallel Works: https://parallel.works @parallel-works
#DataUnchained #DecentralizedData #HybridCloud #MultiCloud #HPC #AIWorkflows #ParallelWorks #DataManagement #CloudComputing #ArtificialIntelligence #DataInnovation #TechPodcast #BigData #MachineLearning #futureofai
As AI continually changes how businesses operate, new questions emerge around ethics and privacy. Nowadays, algorithms can set prices and personalize offers, but how do companies ensure they’re doing this responsibly? What does it mean to be transparent with customers about data use, and how can businesses avoid unintended bias? Balancing innovation with trust is key, but achieving this balance isn’t always straightforward. Dr. Jose Mendoza is Academic Director and Clinical Associate Professor in Integrated Marketing at NYU, and was formerly an Associate Professor of Practice at The University of Arizona in Tucson, Arizona. His focus is on consumer pricing, digital retailing, intelligent retail stores, neuromarketing, big data, artificial intelligence, and machine learning. Previously, he taught marketing courses at Sacred Heart University and Western Michigan University. He is also an experienced senior global marketing executive with over 18 years of experience in global marketing alone and a career as an Engineer in Information Sciences. Dr. Mendoza is also a Doctoral Researcher in Strategic and Global pricing, Consumer Behavior, and Pricing Research methodologies. He had international roles in Latin America, Europe, and the USA with scope in over 50 countries. In the episode, Richie and Jose explore AI-driven pricing, consumer perceptions and ethical pricing, the complexity of dynamic pricing models, explainable AI, data privacy and customer trust, legal and ethical guardrails, innovations in dynamic pricing and much more.

Links Mentioned in the Show:
- NYU
- Connect with Jose
- Amazon Dynamic Pricing Strategy in 2024
- Course: AI Ethics
- Related Episode: The Future of Marketing Analytics with Cory Munchbach, CEO at BlueConic
- Sign up to RADAR: Forward Edition
- New to DataCamp? Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business
Building a robust data infrastructure is crucial for any organization looking to leverage AI and data-driven insights. But as your data ecosystem grows, so do the challenges of managing, securing, and scaling it. How do you ensure that your data infrastructure not only meets today’s needs but is also prepared for the rapid changes in technology tomorrow? What strategies can you adopt to keep your organization agile, while ensuring that your data investments continue to deliver value and support business goals? Saad Siddiqui is a venture capitalist for Titanium Ventures, which focuses on enterprise technology investments, particularly next generation enterprise infrastructure and applications. In his career, Saad has deployed over $100M in venture capital in over a dozen companies. In previous roles as a corporate development executive, he has executed M&A transactions valued at over $7 billion in aggregate. Prior to Titanium Ventures he was in corporate development at Informatica and was a member of Cisco's venture investing and acquisitions team covering cloud, big data and virtualization. In the episode, Richie and Saad explore the business impacts of data infrastructure, getting started with data infrastructure, the roles and teams you need to get started, scalability and future-proofing, implementation challenges, continuous education and flexibility, automation and modernization, trends in data infrastructure, and much more.

Links Mentioned in the Show:
- Titanium Ventures
- Connect with Saad
- Course - Artificial Intelligence (AI) Strategy
- Related Episode: How are Businesses Really Using AI? With Tathagat Varma, Global TechOps Leader at Walmart Global Tech
- Rewatch sessions from RADAR: AI Edition
- New to DataCamp? Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business
Businesses are collecting more data than ever before. But is bigger always better? Many companies are starting to question whether massive datasets and complex infrastructure are truly delivering results or just adding unnecessary costs and complications. How can you make sure your data strategy is aligned with your actual needs? What if focusing on smaller, more manageable datasets could improve your efficiency and save resources, all while delivering the same insights? Ryan Boyd is the Co-Founder & VP, Marketing + DevRel at MotherDuck. Ryan started his career as a software engineer, but has since led DevRel teams for 15+ years at Google, Databricks and Neo4j, where he developed and executed numerous marketing and DevRel programs. Prior to MotherDuck, Ryan worked at Databricks and focused the team on building an online community during the pandemic, helping to organize the content and experience for an online Data + AI Summit, establishing a regular cadence of video and blog content, launching the Databricks Beacons ambassador program, improving the time to an “aha” moment in the online trial and launching a University Alliance program to help professors teach the latest in data science, machine learning and data engineering. In the episode, Richie and Ryan explore data growth and computation, the data 1%, the small data movement, data storage and usage, the shift to local and hybrid computing, modern data tools, the challenges of big data, transactional vs analytical databases, SQL language enhancements, simple and ergonomic data solutions and much more.

Links Mentioned in the Show:
- MotherDuck
- The Small Data Manifesto
- Connect with Ryan
- Small DataSF conference
- Related Episode: Effective Data Engineering with Liya Aizenberg, Director of Data Engineering at Away
- Rewatch sessions from RADAR: AI Edition
- New to DataCamp? Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business
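The small-data argument in this episode boils down to a practical observation: many "analytical" workloads fit comfortably in a single-node SQL engine. As a stand-in illustration (the episode discusses the DuckDB-based MotherDuck; here the standard-library sqlite3 module is used so the sketch runs anywhere, and the orders table and its numbers are invented):

```python
import sqlite3

# A typical analytical query over an in-memory table: revenue per
# region, largest first. No cluster, no distributed infrastructure.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0),
     ("south", 200.0), ("south", 100.0)],
)

rows = con.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()
# rows == [('south', 300.0), ('north', 200.0)]
```

The same GROUP BY scales to millions of rows on a laptop with an analytical (columnar) engine, which is the gap between transactional and analytical databases the conversation touches on.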
Will AI completely revolutionize the way we work as data professionals? Or is it overhyped? In this episode, Lindsay Murphy and Colleen Tartow will take opposing viewpoints and help us understand whether or not AI can really live up to all the hype. You'll leave with a deeper understanding of the current state of AI in data, the tech stack needed to run AI, and where things are heading in the future.

What You'll Learn:
- The tech stack required to run AI and how it differs from prior "big data" stacks
- Will AI change everything in data? Or is it overhyped?
- How you should be thinking about AI and its impact on your career

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

About our guests: Lindsay Murphy is the host of the Women Lead Data podcast as well as the Head of Data at Hiive. Follow Lindsay on LinkedIn
Colleen Tartow is an engineering and data leader, author, speaker, advisor, mentor, and DEI Advocate.
- Data Mesh for Dummies E-Book
- Follow Colleen on LinkedIn
The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Season 01, Episode 19, host Nadiem von Heydebrand interviews Pradeep Fernando, who leads the data and metadata management initiative at Swisscom. They explore key topics in data product management, including the definition and categorization of data products, the role of AI, prioritization strategies, and the application of product management principles. Pradeep shares valuable insights and experiences on successfully implementing data product management within organizations. About our host Nadiem von Heydebrand: Nadiem is the CEO and Co-Founder of Mindfuel. In 2019, he merged his passion for data science with product management, becoming a thought leader in data product management. Nadiem is dedicated to demonstrating the true value contribution of data. With over a decade of experience in the data industry, Nadiem leverages his expertise to scale data platforms, implement data mesh concepts, and transform AI performance into business performance, delighting consumers at global organizations that include Volkswagen, Munich Re, Allianz, Red Bull, and Vorwerk. Connect with Nadiem on LinkedIn. About our guest Pradeep Fernando: Pradeep is a seasoned data product leader with over 6 years of data product leadership experience and over 10 years of product management experience. He leads or is a key contributor to several company-wide data & analytics initiatives at Swisscom such as Data as a Product (Data Mesh), One Data Platform, Machine Learning (Factory), MetaData management, Self-service data & analytics, BI Tooling Strategy, Cloud Transformation, Big Data platforms, and Data warehousing.
Previously, he was a product manager at both Swisscom's B2B and Innovation units, both building new products and optimizing mature products (profitability) in the domains of enterprise mobile fleet management and cyber- and mobile device security. Pradeep is also passionate about and experienced in leading the development of data products and transforming IT delivery teams into empowered, agile product teams. And, he is always happy to engage in a conversation about lean product management or "heavier" topics such as humanity's future or our past. Connect with Pradeep on LinkedIn. All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn. Apply to be a guest or nominate someone that you know. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights!
If you are working in or trying to break into data and want to learn how to fast-track your career, this one is for you! In this episode, Jess Ramos (180k+ followers on LinkedIn!) shares her best tips and practical advice to help take your career to the next level.

What You'll Learn:
- How specializing and building niche skills can lead to big opportunities
- The importance of a personal brand if you want to accelerate your career
- Jess' top tips for those looking to break into data and move up quickly

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

About our guest: Jess Ramos is the founder of Big Data Energy, a Senior Data Analyst at Crunchbase, a LinkedIn Learning Instructor, and a content creator in the data space. She loves to empower people to grow their careers in data while breaking industry stereotypes! Jess' Newsletter Follow Jess on LinkedIn