talk-data.com

Topic

AI/ML

Artificial Intelligence/Machine Learning

data_science algorithms predictive_analytics

9014

tagged

Activity Trend

1532 peak/qtr
2020-Q1 2026-Q1

Activities

9014 activities · Newest first

Sometimes DIY UI/UX design only gets you so far—and you know it’s time for outside help. One thing prospects from SaaS analytics and data-related product companies often ask me is what things are like in the other guy/gal’s backyard. They want to compare their situation to others like them. So today, I want to share some of the common “themes” I see that are usually the root causes of what leads to a phone call with me.

By the time I am on the phone with most prospects who already have a product in market, they’re usually having significant problems with one or more of the following: sales friction (product value is opaque); low adoption or renewal worries (user apathy); customer complaints about the UI/UX being hard to use; or velocity (the team is doing tons of work, but leadership isn’t seeing progress), and the like.

I’m hoping today’s episode will explain some of the root causes that may lead to these issues — so you can avoid them in your data product building work!  

Highlights / Skip to:

(10:47) Design != "front-end development" or analyst work
(12:34) Liking doing UI/UX/viz design work vs. knowing
(15:04) When a leader sees lots of work being done, but the UX/design isn’t progressing
(17:31) Your product’s UX needs to convey some magic IP/special sauce…but it isn’t
(20:25) Understanding the tradeoffs of using libraries, templates, and other solutions’ designs as a foundation for your own
(25:28) The sunk cost bias associated with POCs and “we’ll iterate on it”
(28:31) Relying on UI/UX "customization" to please all customers
(31:26) The hidden costs of abstracting system objects, UI components, etc. to make life easier for engineering and technical teams
(32:32) Believing you’ll know the design is good “when you see it” (and what you don’t know you don’t know)
(36:43) Believing that because the data science/AI/ML modeling under your solution was accurate, difficult, and/or expensive, it is automatically worth paying for

Quotes from Today’s Episode

The challenge is often not knowing what you don’t know about a project. We often end up focusing on building the tech [and rushing it out] so we can get some feedback on it… but product is not about getting it out there so we can get feedback. The goal of doing product well is to produce value, benefits, or outcomes. Learning is important, but that’s not what the objective is. The objective is benefits creation. (5:47)

When we start doing design on a project that’s not design-actionable, we build debt and sometimes can hurt the process of design. If you start designing your product with an entire green space, no direction, and no constraints, the chance of you shipping a good v1 is small. Your product strategy needs to be design-actionable for the team to properly execute against it. (19:19)

While you don’t always need to start at zero with your UI/UX design, what are the parts of your product or application that it does make sense to borrow, “steal,” and cheat from? And when does it not? It takes skill to know when you should be breaking the rules or conventions. Shortcuts often don’t produce outsized results—unless you know what a good shortcut looks like. (22:28)

A proof of concept is not a minimum valuable product. There’s a difference between proving the tech can work and making it into a product that’s so valuable, someone would exchange money for it because it’s so useful to them. Whatever that value is, these are two different things. (26:40)

Trying to do a little bit for everybody [through excessive customization] can often result in nobody understanding the value or utility of your solution. Customization can hide the fact the team has decided not to make difficult choices. If you’re coming into a crowded space… it’s likely not going to be a compelling reason to [convince customers to switch to your solution]. Customization can be a tax, not a benefit. (29:26)

Watch for the sunk cost bias [in product development]. [Buyers] don’t care how the sausage was made. Many don’t understand how the AI stuff works, and they probably don’t need to understand how it works. They want the benefits downstream from technology wrapped up in something so invaluable they can’t live without it. Watch out for technically right, effectively wrong. (39:27)

Episode Notes

Ever wondered how AI can transform business strategy? In this episode, we dive into the fascinating world of AI-powered SWOT analysis. We are tackling an old problem using a new approach. Using the latest technology, like GPT-3.5, companies can now analyze their strengths, weaknesses, opportunities, and threats with lightning speed. Join us as we explore how AI is reshaping the way we understand market dynamics, financial data, and competitive landscapes, with real-world examples from Google and Meta. Whether you're an investor, entrepreneur, or just curious about the future of business, this episode is packed with insights you won't want to miss!

Thanks for reading Data, AI, Productivity & Business with a Little Personality! Subscribe for free to receive new posts and support my work.

Key Topics Covered:
* What is SWOT Analysis?: A quick refresher on this cornerstone of business strategy (Strengths, Weaknesses, Opportunities, and Threats).
* AI Meets Business Strategy: How GPT-3.5 and AI technology are revolutionizing traditional SWOT analysis by speeding up data processing and uncovering deeper insights.
* Real-World Examples: AI-driven SWOT analysis of Google and Meta, revealing potential vulnerabilities and opportunities for these tech giants.
  * Google: Over-reliance on ad revenue and the challenges posed by ad blockers.
  * Meta: Data privacy issues, regulatory hurdles, and user trust challenges.
* Competitive Edge: How AI can give businesses a leg up by performing real-time competitive analysis and market trend predictions.
* Beyond Business: Could AI also be used to analyze career paths, personal strengths, and even suggest side hustle ideas? We explore the exciting future possibilities of AI-powered insights.

Why Listen? This episode is perfect for anyone interested in how cutting-edge AI tools are transforming not just the business world, but potentially the way we approach decision-making in our own lives.
Tune in to find out how AI is making sophisticated analysis more accessible, and what that means for the future.

Links & Resources:
* Blog post on how AI is changing the SWOT game
* What is SWOT Analysis?
* What is Python Programming?
* More about GPT-3.5
* Build and share Python-based data apps with Streamlit

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit mukundansankar.substack.com
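The episode describes feeding company facts to GPT-3.5 to generate a SWOT analysis much faster than a manual workup. As a rough illustration of that approach (the prompt wording, function name, and fact list below are assumptions for this sketch, not taken from the episode), the request could be assembled like this before being sent to a chat-completion API:

```python
# Sketch: build a SWOT-analysis prompt from a handful of company facts.
# All names and wording here are illustrative, not the episode's actual code.

def build_swot_prompt(company: str, facts: list[str]) -> str:
    """Assemble an LLM prompt asking for a SWOT analysis of `company`."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts)
    return (
        f"You are a business analyst. Using the facts below about {company}, "
        "produce a SWOT analysis with four sections: "
        "Strengths, Weaknesses, Opportunities, Threats.\n\n"
        f"Facts:\n{fact_lines}"
    )

prompt = build_swot_prompt(
    "Google",
    [
        "Majority of revenue comes from advertising",
        "Ad blockers reduce ad impressions",
    ],
)
print(prompt)
```

The resulting string would then go out in a single chat-completion call; the speed-up the episode highlights comes from automating this per company and re-running it as new financial or market data arrives.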

Businesses are collecting more data than ever before. But is bigger always better? Many companies are starting to question whether massive datasets and complex infrastructure are truly delivering results or just adding unnecessary costs and complications. How can you make sure your data strategy is aligned with your actual needs? What if focusing on smaller, more manageable datasets could improve your efficiency and save resources, all while delivering the same insights?

Ryan Boyd is the Co-Founder & VP, Marketing + DevRel at MotherDuck. Ryan started his career as a software engineer, but has since led DevRel teams for 15+ years at Google, Databricks, and Neo4j, where he developed and executed numerous marketing and DevRel programs. Prior to MotherDuck, Ryan worked at Databricks and focused the team on building an online community during the pandemic: helping to organize the content and experience for an online Data + AI Summit, establishing a regular cadence of video and blog content, launching the Databricks Beacons ambassador program, improving the time to an “aha” moment in the online trial, and launching a University Alliance program to help professors teach the latest in data science, machine learning, and data engineering.

In the episode, Richie and Ryan explore data growth and computation, the data 1%, the small data movement, data storage and usage, the shift to local and hybrid computing, modern data tools, the challenges of big data, transactional vs analytical databases, SQL language enhancements, simple and ergonomic data solutions, and much more.

Links Mentioned in the Show:
* MotherDuck
* The Small Data Manifesto
* Connect with Ryan
* Small Data SF conference
* Related Episode: Effective Data Engineering with Liya Aizenberg, Director of Data Engineering at Away
* Rewatch sessions from RADAR: AI Edition

New to DataCamp?
* Learn on the go using the DataCamp mobile app
* Empower your business with world-class data and AI skills with DataCamp for business

Summary

In this episode of the Data Engineering Podcast, Adrian Brudaru and Marcin Rudolf, co-founders of dltHub, delve into the principles guiding dlt's development, emphasizing its role as a library rather than a platform, and its integration with lakehouse architectures and AI application frameworks. The episode explores the impact of the Python ecosystem's growth on dlt, highlighting integrations with high-performance libraries and the benefits of Arrow and DuckDB. The episode concludes with a discussion on the future of dlt, including plans for a portable data lake and the importance of interoperability in data management tools.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at dataengineeringpodcast.com/datafold today!

Your host is Tobias Macey and today I'm interviewing Adrian Brudaru and Marcin Rudolf, cofounders at dltHub, about the growth of dlt and the numerous ways that you can use it to address the complexities of data integration.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what dlt is and how it has evolved since we last spoke (September 2023)?
What are the core principles that guide your work on dlt and dltHub?
You have taken a very opinionated stance against managed extract/load services.
What are the shortcomings of those platforms, and when would you argue in their favor?
The landscape of data movement has undergone some interesting changes over the past year. Most notably, the growth of PyAirbyte and the rapid shifts around the needs of generative AI stacks (vector stores, unstructured data processing, etc.). How has that informed your product development and positioning?
The Python ecosystem, and in particular data-oriented Python, has also undergone substantial evolution. What are the developments in the libraries and frameworks that you have been able to benefit from?
What are some of the notable investments that you have made in the developer experience for building dlt pipelines?
How have the interfaces for source/destination development improved?
You recently published a post about the idea of a portable data lake. What are the missing pieces that would make that possible, and what are the developments/technologies that put that idea within reach?
What is your strategy for building a sustainable product on top of dlt?
How does that strategy help to form a "virtuous cycle" of improving the open source foundation?
What are the most interesting, innovative, or unexpected ways that you have seen dlt used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on dlt?
When is dlt the wrong choice?
What do you have planned for the future of dlt/dltHub?

Contact Info

Adrian: LinkedIn
Marcin: LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used.
The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

dlt (Podcast Episode)
PyArrow
Polars
Ibis
DuckDB (Podcast Episode)
dlt Data Contracts
RAG == Retrieval Augmented Generation (AI Engineering Podcast Episode)
PyAirbyte
OpenAI o1 Model
LanceDB
QDrant Embedded
Airflow
GitHub Actions
Arrow DataFusion
Apache Arrow
PyIceberg
Delta-RS
SCD2 == Slowly Changing Dimensions
SQLAlchemy
SQLGlot
FSSpec
Pydantic
Spacy
Entity Recognition
Parquet File Format
Python Decorator
REST API Toolkit
OpenAPI Connector Generator
ConnectorX
Python no-GIL
Delta Lake (Podcast Episode)
SQLMesh (Podcast Episode)
Hamilton
Tabular
PostHog (Podcast.init Episode)
AsyncIO
Cursor.AI
Data Mesh (Podcast Episode)
FastAPI
LangChain
GraphRAG (AI Engineering Podcast Episode)
Property Graph
Python uv

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

New EU guidelines on legitimate interest and what they mean. Our AI hosts break down the legal jargon into plain English, explore real-world examples, and explain how these guidelines impact people's digital lives! Listen now to learn about the EDPB's approach to data minimization, the balancing test, and the right to object. Plus, we cover how these rules apply to fraud prevention, direct marketing, and protecting children online.

You can download and read the new EDPB Guidelines yourself here

Databricks Data Intelligence Platform: Unlocking the GenAI Revolution

This book is your comprehensive guide to building robust Generative AI solutions using the Databricks Data Intelligence Platform. Databricks is the fastest-growing data platform offering unified analytics and AI capabilities within a single governance framework, enabling organizations to streamline their data processing workflows, from ingestion to visualization. Additionally, Databricks provides features to train a high-quality large language model (LLM), whether you are looking for Retrieval-Augmented Generation (RAG) or fine-tuning. Databricks offers a scalable and efficient solution for processing large volumes of both structured and unstructured data, facilitating advanced analytics, machine learning, and real-time processing. In today's GenAI world, Databricks plays a crucial role in empowering organizations to extract value from their data effectively, driving innovation and gaining a competitive edge in the digital age. This book will not only help you master the Data Intelligence Platform but also help power your enterprise to the next level with a bespoke LLM unique to your organization. Beginning with foundational principles, the book starts with a platform overview and explores features and best practices for ingestion, transformation, and storage with Delta Lake. Advanced topics include leveraging Databricks SQL for querying and visualizing large datasets, ensuring data governance and security with Unity Catalog, and deploying machine learning and LLMs using Databricks MLflow for GenAI. Through practical examples, insights, and best practices, this book equips solution architects and data engineers with the knowledge to design and implement scalable data solutions, making it an indispensable resource for modern enterprises. 
Whether you are new to Databricks and trying to learn a new platform, a seasoned practitioner building data pipelines, data science models, or GenAI applications, or even an executive who wants to communicate the value of Databricks to customers, this book is for you. With its extensive feature and best-practice deep dives, it also serves as an excellent reference guide if you are preparing for Databricks certification exams.

What You Will Learn
* Foundational principles of Lakehouse architecture
* Key features including Unity Catalog, Databricks SQL (DBSQL), and Delta Live Tables
* The Databricks Intelligence Platform and its key functionalities
* Building and deploying GenAI applications, from data ingestion to model serving
* Databricks pricing, platform security, DBRX, and many more topics

Who This Book Is For
Solution architects, data engineers, data scientists, Databricks practitioners, and anyone who wants to deploy their GenAI solutions with the Data Intelligence Platform. This is also a handbook for senior execs who need to communicate the value of Databricks to customers. People who are new to the Databricks Platform and want comprehensive insights will find the book accessible.

The improving US activity data has trimmed the downside tail, reinforcing views of a Goldilocks-type soft landing. However, with core inflation running at a pace little different from a year ago (around 3%ar), more respect for a high-for-long scenario is also needed. Not all are in the same boat, and Euro area growth weakness is likely to get the ECB to cut rates next week. Enthusiasm for a China fiscal bazooka has built, but we do not see this as likely.

Speakers:

Bruce Kasman

Joseph Lupton

This podcast was recorded on 11 October 2024.

This communication is provided for information purposes only. Institutional clients please visit www.jpmm.com/research/disclosures for important disclosures. © 2024 JPMorgan Chase & Co. All rights reserved. This material or any portion hereof may not be reprinted, sold or redistributed without the written consent of J.P. Morgan. It is strictly prohibited to use or share without prior written consent from J.P. Morgan any research material received from J.P. Morgan or an authorized third-party (“J.P. Morgan Data”) in any third-party artificial intelligence (“AI”) systems or models when such J.P. Morgan Data is accessible by a third-party. It is permissible to use J.P. Morgan Data for internal business purposes only in an AI system or model that protects the confidentiality of J.P. Morgan Data so as to prevent any and all access to or use of such J.P. Morgan Data by any third-party.

Businesses are constantly racing to stay ahead by adopting the latest data tools and AI technologies. But with so many options and buzzwords, it’s easy to get lost in the excitement without knowing whether these tools truly serve your business. How can you ensure that your data stack is not only modern but sustainable and agile enough to adapt to changing needs? What does it take to build data products that deliver real value to your teams while driving innovation?

Adrian Estala is VP, Field Chief Data Officer and the host of Starburst TV. With a background in leading Digital and IT Portfolio Transformations, he understands the value of creating executive frameworks that focus on material business outcomes. Skilled at getting the most out of data-driven investments, Adrian is your trusted adviser for navigating complex data environments and integrating a Data Mesh strategy in your organization.

In the episode, Richie and Adrian explore the modern data stack, agility in data, collaboration between business and data teams, data products and differing ways of building them, data discovery and metadata, data quality, career skills for data practitioners, and much more.

Links Mentioned in the Show:
* Starburst
* Connect with Adrian
* Career Track: Data Engineer in Python
* Related Episode: How this Accenture CDO is Navigating the AI Revolution
* Rewatch sessions from RADAR: AI Edition

New to DataCamp?
* Learn on the go using the DataCamp mobile app
* Empower your business with world-class data and AI skills with DataCamp for business

Will AI completely revolutionize the way we work as data professionals? Or is it overhyped? In this episode, Lindsay Murphy and Colleen Tartow will take opposing viewpoints and help us understand whether or not AI can really live up to all the hype. You'll leave with a deeper understanding of the current state of AI in data, the tech stack needed to run AI, and where things are heading in the future.

What You'll Learn:
* The tech stack required to run AI and how it differs from prior "big data" stacks
* Will AI change everything in data? Or is it overhyped?
* How you should be thinking about AI and its impact on your career

Register for free to be part of the next live session: https://bit.ly/3XB3A8b

About our guests: Lindsay Murphy is the host of the Women Lead Data podcast as well as the Head of Data at Hiive. Follow Lindsay on LinkedIn

Colleen Tartow is an engineering and data leader, author, speaker, advisor, mentor, and DEI advocate. Check out her Data Mesh for Dummies e-book. Follow Colleen on LinkedIn.

Follow us on Socials: LinkedIn | YouTube | Instagram (Mavens of Data) | Instagram (Maven Analytics) | TikTok | Facebook | Medium | X/Twitter

Data Engineering Best Practices

Unlock the secrets to building scalable and efficient data architectures with 'Data Engineering Best Practices.' This book provides in-depth guidance on designing, implementing, and optimizing cloud-based data pipelines. You will gain valuable insights into best practices, agile workflows, and future-proof designs.

What this Book will help me do
* Effectively plan and architect scalable data solutions leveraging cloud-first strategies.
* Master agile processes tailored to data engineering for improved project outcomes.
* Implement secure, efficient, and reliable data pipelines optimized for analytics and AI.
* Apply real-world design patterns and avoid common pitfalls in data flow and processing.
* Create future-ready data engineering solutions following industry-proven frameworks.

Author(s)
Richard J. Schiller and David Larochelle are seasoned data engineering experts with decades of experience crafting efficient and secure cloud-based infrastructures. Their collaborative writing distills years of real-world expertise into practical advice aimed at helping engineers succeed in a rapidly evolving field.

Who is it for?
This book is ideal for data engineers, ETL specialists, and big data professionals seeking to enhance their knowledge in cloud-based solutions. Some familiarity with data engineering, ETL pipelines, and big data technologies is helpful. It suits those keen on mastering advanced practices, improving agility, and developing efficient data pipelines. Perfect for anyone looking to future-proof their skills in data engineering.

Haibin Zhu and Nora Szentivanyi discuss China’s latest policy easing measures and what to expect in coming weeks and months. Three aspects of the upcoming fiscal announcement will be important to watch: magnitude, composition and forward guidance. We do not expect the October fiscal package to exceed 2 trillion yuan, with only modest direct support for consumers, but additional fiscal easing is likely further down the road. Accommodative fiscal policy is important not only in the near term, but also into 2025 when the Chinese economy may face a series of adverse shocks.

This podcast was recorded on 10 October 2024.

This communication is provided for information purposes only. Institutional clients can view the related report at https://www.jpmm.com/research/content/GPS-4813222-0 for more information; please visit www.jpmm.com/research/disclosures for important disclosures.


We talked about:

00:00 DataTalks.Club intro

08:06 Background and career journey of Katarzyna

09:06 Transition from linguistics to computational linguistics

11:38 Merging linguistics and computer science

15:25 Understanding phonetics and morpho-syntax

17:28 Exploring morpho-syntax and its relation to grammar

20:33 Connection between phonetics and speech disorders

24:41 Improvement of voice recognition systems

27:31 Overview of speech recognition technology

30:24 Challenges of ASR systems with atypical speech

30:53 Strategies for improving recognition of disordered speech

37:07 Data augmentation for training models

40:17 Transfer learning in speech recognition

42:18 Challenges of collecting data for various speech disorders

44:31 Stammering and its connection to fluency issues

45:16 Polish consonant combinations and pronunciation challenges

46:17 Use of Amazon Transcribe for generating podcast transcripts

47:28 Role of language models in speech recognition

49:19 Contextual understanding in speech recognition

51:27 How voice recognition systems analyze utterances

54:05 Personalization of ASR models for individuals

56:25 Language disorders and their impact on communication

58:00 Applications of speech recognition technology

1:00:34 Challenges of personalized and universal models

1:01:23 Voice recognition in automotive applications

1:03:27 Humorous voice recognition failures in cars

1:04:13 Closing remarks and reflections on the discussion

About the speaker:

Katarzyna is a computational linguist with over 10 years of experience in NLP and speech recognition. She has developed language models for automotive brands like Audi and Porsche and specializes in phonetics, morpho-syntax, and sentiment analysis.

Kasia also teaches at the University of Warsaw and is passionate about human-centered AI and multilingual NLP.

Join our slack: https://datatalks.club/slack.html

Azure SQL Revealed: The Next-Generation Cloud Database with AI and Microsoft Fabric

Access detailed content and examples on Azure SQL, a set of cloud services that allows for SQL Server to be deployed in the cloud. This book teaches the fundamentals of deployment, configuration, security, performance, and availability of Azure SQL from the perspective of these same tasks and capabilities in SQL Server. This distinct approach makes this book an ideal learning platform for readers familiar with SQL Server on-premises who want to migrate their skills toward providing cloud solutions to an enterprise market that is increasingly cloud-focused. If you know SQL Server, you will love this book. You will be able to take your existing knowledge of SQL Server and translate that knowledge into the world of cloud services from the Microsoft Azure platform, and in particular into Azure SQL. This book provides information never seen before about the history and architecture of Azure SQL. Author Bob Ward is a leading expert with access to and support from the Microsoft engineering team that built Azure SQL and related database cloud services. He presents powerful, behind-the-scenes insights into the workings of one of the most popular database cloud services in the industry. This book also brings you the latest innovations for Azure SQL including Azure Arc, Hyperscale, generative AI applications, Microsoft Copilots, and integration with the Microsoft Fabric. 
What You Will Learn
* Know the history of Azure SQL
* Deploy, configure, and connect to Azure SQL
* Choose the correct way to deploy SQL Server in Azure
* Migrate existing SQL Server instances to Azure SQL
* Monitor and tune Azure SQL’s performance to meet your needs
* Ensure your data and application are highly available
* Secure your data from attack and theft
* Learn the latest innovations for Azure SQL, including Hyperscale
* Learn how to harness the power of AI for generative data-driven applications and Microsoft Copilots for assistance
* Learn how to integrate Azure SQL with the unified data platform, the Microsoft Fabric

Who This Book Is For
This book is designed to teach SQL Server in the Azure cloud to the SQL Server professional. Anyone who operates, manages, or develops applications for SQL Server will benefit from this book. Readers will be able to translate their current knowledge of SQL Server—especially of SQL Server 2019 and 2022—directly to Azure. This book is ideal for database professionals looking to remain relevant as their customer base moves into the cloud.

Brought to you by:
• Paragon: Build native, customer-facing SaaS integrations 7x faster.
• WorkOS: For B2B leaders building enterprise SaaS.

On today’s episode of The Pragmatic Engineer, I’m joined by Quinn Slack, CEO and co-founder of Sourcegraph, a leading code search and intelligence platform. Quinn holds a degree in Computer Science from Stanford and is deeply passionate about coding: to the point that he still codes every day! He also serves on the board of Hack Club, a national nonprofit dedicated to bringing coding clubs to high schools nationwide.

In this insightful conversation, we discuss:
• How Sourcegraph's operations have evolved since 2021
• Why more software engineers should focus on delivering business value
• Why Quinn continues to code every day, even as a CEO
• Practical AI and LLM use cases and a phased approach to their adoption
• The story behind Job Fairs at Sourcegraph and why it’s no longer in use
• Quinn’s leadership style and his focus on customers and product excellence
• The shift from location-independent pay to zone-based pay at Sourcegraph
• And much more!
— Where to find Quinn Slack:
• X: https://x.com/sqs
• LinkedIn: https://www.linkedin.com/in/quinnslack/
• Website: https://slack.org/

Where to find Gergely:
• Newsletter: https://www.pragmaticengineer.com/
• YouTube: https://www.youtube.com/c/mrgergelyorosz
• LinkedIn: https://www.linkedin.com/in/gergelyorosz/
• X: https://x.com/GergelyOrosz

— In this episode, we cover:
(01:35) How Sourcegraph started and how it has evolved over the past 11 years
(04:14) How scale-ups have changed
(08:27) Learnings from 2021 and how Sourcegraph’s operations have streamlined
(15:22) Why Quinn is for gradual increases in automation and other thoughts on AI
(18:10) The importance of changelogs
(19:14) Keeping AI accountable and possible future use cases
(22:29) Current limitations of AI
(25:08) Why early adopters of AI coding tools have an advantage
(27:38) Why AI is not yet capable of understanding existing codebases
(31:53) Changes at Sourcegraph since the deep dive on The Pragmatic Engineer blog
(40:14) The importance of transparency and understanding the different forms of compensation
(40:22) Why Sourcegraph shifted to zone-based pay
(47:15) The journey from engineer to CEO
(53:28) A comparison of a typical week 11 years ago vs. now
(59:20) Rapid fire round

The Pragmatic Engineer deep dives relevant for this episode:
• Inside Sourcegraph’s engineering culture: Part 1: https://newsletter.pragmaticengineer.com/p/inside-sourcegraphs-engineering-culture
• Inside Sourcegraph’s engineering culture: Part 2: https://newsletter.pragmaticengineer.com/p/inside-sourcegraphs-engineering-culture-part-2

— References and Transcript: See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

— Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Help us become the #1 Data Podcast by leaving a rating & review! We are 67 reviews away! Many people feel unqualified for a data analyst role, but there are ways to fight imposter syndrome. Learn how to boost your confidence with practical steps 💌 Join 30k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter 🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training 👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa 👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com//interviewsimulator ⌚ TIMESTAMPS 01:30 Step 1: Build Projects to Boost Confidence 03:38 Step 2: Ask 'What's the Worst That Can Happen?' 06:13 Step 3: Accept You Can’t Learn Everything 07:24 Step 4: Fake It Till You Make It 09:28 Bonus Tip: Use Affirmations to Fight Imposter Syndrome 🎞️ Positive Affirmations for Aspiring Data Analysts [Listen Daily] https://youtu.be/vsuZfsYNO30?si=DctCusBQ6OaIlg9s 🔗 CONNECT WITH AVERY 🎥 YouTube Channel 🤝 LinkedIn 📸 Instagram 🎵 TikTok 💻 Website Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://DataCareerJumpstart.com/daa

The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Season 01, Episode 19, host Nadiem von Heydebrand interviews Pradeep Fernando, who leads the data and metadata management initiative at Swisscom. They explore key topics in data product management, including the definition and categorization of data products, the role of AI, prioritization strategies, and the application of product management principles. Pradeep shares valuable insights and experiences on successfully implementing data product management within organizations. About our host Nadiem von Heydebrand: Nadiem is the CEO and Co-Founder of Mindfuel. In 2019, he merged his passion for data science with product management, becoming a thought leader in data product management. Nadiem is dedicated to demonstrating the true value contribution of data. With over a decade of experience in the data industry, Nadiem leverages his expertise to scale data platforms, implement data mesh concepts, and transform AI performance into business performance, delighting consumers at global organizations that include Volkswagen, Munich Re, Allianz, Red Bull, and Vorwerk. Connect with Nadiem on LinkedIn. About our guest Pradeep Fernando: Pradeep is a seasoned data product leader with over 6 years of data product leadership experience and over 10 years of product management experience. He leads or is a key contributor to several company-wide data & analytics initiatives at Swisscom, such as Data as a Product (Data Mesh), One Data Platform, Machine Learning (Factory), MetaData management, Self-service data & analytics, BI Tooling Strategy, Cloud Transformation, Big Data platforms, and Data warehousing.
Previously, he was a product manager at both Swisscom's B2B and Innovation units, building new products and optimizing mature products (profitability) in the domains of enterprise mobile fleet management and cyber- and mobile device security. Pradeep is also passionate about and experienced in leading the development of data products and transforming IT delivery teams into empowered, agile product teams. And he is always happy to engage in a conversation about lean product management or "heavier" topics such as humanity's future or our past. Connect with Pradeep on LinkedIn. All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn. Apply to be a guest or nominate someone that you know. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights!

The Data Hackers News is on the air!! The hottest topics of the week, with the main news from the Data, AI, and Technology space, which you can also find in our weekly Newsletter, now on the Data Hackers Podcast!!

Hit play and listen to this week's Data Hackers News now!

To keep up with everything happening in the data space, subscribe to the weekly Newsletter:

https://www.datahackers.news/

Meet our Data Hackers News commentators:

Monique Femme

Paulo Vasconcellos

Stories/topics discussed:

OpenAI doubles in value with new investment;

Bots manage to solve 100% of CAPTCHAs;

Google tool summarizes YouTube videos and audio.

Download the full State of Data Brazil report and the survey highlights:

Other Data Hackers channels:

Site

LinkedIn

Instagram

TikTok

YouTube

Computational Intelligence in Sustainable Computing and Optimization

Computational Intelligence in Sustainable Computing and Optimization: Trends and Applications focuses on developing and evolving advanced computational intelligence algorithms for the analysis of data involved in applications such as agriculture, biomedical systems, bioinformatics, business intelligence, economics, disaster management, e-learning, education management, financial management, and environmental policies. The book presents research in sustainable computing and optimization, combining methods from engineering, mathematics, artificial intelligence, and computer science to optimize environmental resources. Computational intelligence in the field of sustainable computing combines computer science and engineering in applications ranging from the Internet of Things (IoT), information security systems, smart storage, cloud computing, and intelligent transport management to cognitive and bio-inspired computing and management science. In addition, data intelligence techniques play a critical role in sustainable computing. Recent advances in data management, data modeling, data analysis, and artificial intelligence are finding applications in energy networks and thus making our environment more sustainable. The book: • Presents computational intelligence–based data analysis for sustainable computing applications such as pattern recognition, biomedical imaging, sustainable cities, sustainable transport, sustainable agriculture, and sustainable financial management • Develops research in sustainable computing and optimization, combining methods from engineering, mathematics, and computer science to optimize environmental resources • Includes three foundational chapters dedicated to providing an overview of computational intelligence and optimization techniques and their applications for sustainable computing

Reshaping Intelligent Business and Industry

The convergence of Artificial Intelligence (AI) and the Internet of Things (IoT) is reshaping the way industries, businesses, and economies function; the 34 chapters in this collection show how the full potential of these technologies is being enabled to create intelligent machines that simulate smart behavior and support decision-making with little or no human interference, thereby providing startling organizational efficiencies. Readers will discover in Reshaping Intelligent Business and Industry: • How the book unpacks the two superpowers of innovation, AI and IoT, and explains how they connect to better communicate and exchange information about online activities; • How the center and the network's edge generate predictive analytics or anomaly alerts; • The meaning of AI at the edge and in IoT networks; • How bandwidth is reduced and privacy and security are enhanced; • How AI applications increase operating efficiency, spawn new products and services, and enhance risk management; • How AI and IoT create 'intelligent' devices and how new AI technology enables IoT to reach its full potential; • An analysis of AIoT platforms and the handling of personal information for shared frameworks that remain sensitive to customers’ privacy while effectively utilizing data. Audience: This book will appeal to all business and organization leaders, entrepreneurs, policymakers, and economists, as well as scientists, engineers, and students working in artificial intelligence, software engineering, and information technology.

AI is becoming a key tool in industries far beyond just tech. From automating tasks in the movie industry to revolutionizing drug development in life sciences, AI is transforming how we work. But with this growth come important questions: How is AI really impacting jobs? Are we just increasing efficiency, or are we replacing human roles? And how can companies effectively store and leverage the vast amounts of data being generated every day to gain a competitive advantage? Jamie Lerner is the President and CEO of Quantum, a company specializing in data storage, management, and protection. Since taking the helm in 2018, Lerner has steered Quantum towards innovative solutions for video and unstructured data. His leadership has been marked by strategic acquisitions and product launches that have significantly enhanced the company's market position. Before joining Quantum, Jamie worked at Cisco, Seagate, CITTIO, XUMA, and Platinum Technology. In the episode, Richie and Jamie explore AI in subtitling, translation, and the movie industry at large, AI in sports, AI in business and scientific research, AI ethics, infrastructure and data management, video and image data in business, challenges of working with AI in video, excitement vs. fear in AI, and much more. Links Mentioned in the Show: • Quantum • Connect with Jamie • Career Track: Data Engineer in Python • Related Episode: Seeing the Data Layer Through Spatial Computing with Cathy Hackl and Irena Cronin • Rewatch sessions from RADAR: AI Edition New to DataCamp? • Learn on the go using the DataCamp mobile app • Empower your business with world-class data and AI skills with DataCamp for business