talk-data.com

Topic: Analytics
Tags: data_analysis, insights, metrics
1729 tagged activities

Activity Trend: peak of 398 activities per quarter, 2020-Q1 to 2026-Q1

Activities

1729 activities · Newest first

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

The Inside Economics team laments the lack of a November jobs report but dives into the wealth of data released this week about the labor market, income, and consumer spending. The discussion then turns to affordability and whether it’s a con job or whether households are feeling a real financial pinch. A listener question turns the conversation toward Federal Reserve independence and whether Jerome Powell’s successor is likely to have outsize influence on interest rate decisions. Hosts: Mark Zandi – Chief Economist, Moody’s Analytics, Cris deRitis – Deputy Chief Economist, Moody’s Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody’s Analytics Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Too often, data conversations get stuck in silos: tools, models, or dashboards in isolation. But data only delivers value when the entire ecosystem works together. In this episode, Dylan Anderson, Head of Data Strategy at Profusion, joins us to break down what it really means to think holistically about data. We'll explore how business strategy, organizational design, governance, tech debt, and engineering all connect, and why ignoring these links creates inefficiency and frustration. Dylan also shares how he develops "expert generalists" on his team, building data professionals who can see beyond their technical skills and drive real business outcomes. Whether you're leading a team, looking for your next role, or just trying to elevate your impact, this conversation will help you zoom out and think like a strategist, not just a technologist.

What You'll Learn:
Why treating data topics in isolation leads to hidden inefficiencies and wasted effort
How to align data strategy with business goals (and avoid "models for models' sake")
Practical ways to assess if a company is taking a holistic approach before you join
How to grow as a data professional by becoming an "expert generalist"

🤝 Follow Dylan on LinkedIn!
Register for free to be part of the next live session: https://bit.ly/3XB3A8b
Follow us on Socials: LinkedIn YouTube Instagram (Mavens of Data) Instagram (Maven Analytics) TikTok Facebook Medium X/Twitter

Most organisations don't struggle with change because of strategy or technology, they struggle because change is fundamentally human. In this episode of Hub & Spoken, Jason Foster, CEO & Founder of Cynozure, speaks with Sunil Kumar, Chief Transformation Officer, to explore why transformation so often stalls and what leaders can do to make it stick. Drawing on more than 26 years working across airlines, telecoms, finance and FMCG, Sunil explains why context, such as geopolitics, customer behaviour, industry shifts and internal culture, is the deciding factor in how change lands. When leaders ignore that context, resistance and fatigue follow. Jason and Sunil discuss the human realities behind change, including: Why people naturally resist it How values and beliefs influence adoption Why narrative and excitement matter more than familiar project metrics Sunil also shares his practical "push, pull, connect" model for building momentum and why adoption, not go-live, should be the true measure of success. 🎧 Listen to the full episode now Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation. 

Brought to You By: •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. •⁠ Linear ⁠ — ⁠ The system for modern product development. — Michelle Lim joined Warp as engineer number one and is now building her own startup, Flint. She brings a strong product-first mindset shaped by her time at Facebook, Slack, Robinhood, and Warp. Michelle shares why she chose Warp over safer offers, how she evaluates early-stage opportunities, and what she believes distinguishes great founding engineers. Together, we cover how product-first engineers create value, why negotiating equity at early-stage startups requires a different approach, and why asking founders for references is a smart move. Michelle also shares lessons from building consumer and infrastructure products, how she thinks about tech stack choices, and how engineers can increase their impact by taking on work outside their job descriptions. If you want to understand what founders look for in early engineers or how to grow into a founding-engineer role, this episode is full of practical advice backed by real examples — Timestamps (00:00) Intro (01:32) How Michelle got into software engineering  (03:30) Michelle’s internships  (06:19) Learnings from Slack  (08:48) Product learnings at Robinhood (12:47) Joining Warp as engineer #1 (22:01) Negotiating equity (26:04) Asking founders for references (27:36) The top reference questions to ask (32:53) The evolution of Warp’s tech stack  (35:38) Product-first engineering vs. code-first (38:27) Hiring product-first engineers  (41:49) Different types of founding engineers  (44:42) How Flint uses AI tools  (45:31) Avoiding getting burned in founder exits (49:26) Hiring top talent (50:15) An overview of Flint (56:08) Advice for aspiring founding engineers (1:01:05) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • Thriving as a founding engineer: lessons from the trenches • From software engineer to AI engineer • AI Engineering in the real world • The AI Engineering stack — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

At the Qdrant Conference, builders, researchers, and industry practitioners shared how vector search, retrieval infrastructure, and LLM-driven workflows are evolving across developer tooling, AI platforms, analytics teams, and modern search research.

Andrey Vasnetsov (Qdrant) explained how Qdrant was born from the need to combine database-style querying with vector similarity search—something he first built during the COVID lockdowns. He highlighted how vector search has shifted from an ML specialty to a standard developer tool and why hosting an in-person conference matters for gathering honest, real-time feedback from the growing community.
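To make the idea of combining database-style querying with vector similarity concrete, here is a minimal sketch using the qdrant-client Python library. The collection name, payload field, and query vector are hypothetical stand-ins for illustration, not anything described in the talk.

```python
# Minimal sketch: filtered vector search with qdrant-client.
# Collection name, payload field, and the query vector are hypothetical.
from qdrant_client import QdrantClient
from qdrant_client.models import Filter, FieldCondition, MatchValue

client = QdrantClient(url="http://localhost:6333")  # assumes a local Qdrant instance

query_vector = [0.12, -0.4, 0.33, 0.9]  # normally produced by an embedding model

hits = client.search(
    collection_name="articles",                 # hypothetical collection
    query_vector=query_vector,
    query_filter=Filter(                        # database-style condition...
        must=[FieldCondition(key="topic", match=MatchValue(value="analytics"))]
    ),
    limit=5,                                    # ...combined with top-k similarity
)

for hit in hits:
    print(hit.id, hit.score, hit.payload)
```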

Slava Dubrov (HubSpot) described how his team uses Qdrant to power AI Signals, a platform for embeddings, similarity search, and contextual recommendations that support HubSpot’s AI agents. He shared practical use cases like look-alike company search, reflected on evaluating agentic frameworks, and offered career advice for engineers moving toward technical leadership.

Marina Ariamnova (SumUp) presented her internally built LLM analytics assistant that turns natural-language questions into SQL, executes queries, and returns clean summaries—cutting request times from days to minutes. She discussed balancing analytics and engineering work, learning through real projects, and how LLM tools help analysts scale routine workflows without replacing human expertise.
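Marina's assistant is an internal SumUp tool and wasn't shown as code, but the general pattern she describes (natural-language question to generated SQL, execution, then a clean summary) can be sketched roughly as below. The llm() helper, the schema string, and the sqlite3 database are placeholders for illustration only.

```python
# Rough sketch of a question -> SQL -> summary loop; not SumUp's actual assistant.
# llm() is a placeholder for whatever completion API you use; sqlite3 stands in
# for the real warehouse.
import sqlite3

SCHEMA = "orders(order_id, merchant_id, amount, created_at)"  # hypothetical table

def llm(prompt: str) -> str:
    """Placeholder: call your LLM provider here and return its text output."""
    raise NotImplementedError

def answer(question: str, conn: sqlite3.Connection) -> str:
    # 1. Ask the model to translate the question into SQL against a known schema.
    sql = llm(f"Schema: {SCHEMA}\nWrite a single SQL query answering: {question}")
    # 2. Execute the generated query (a real system would validate/sandbox it first).
    rows = conn.execute(sql).fetchall()
    # 3. Ask the model to turn the raw rows into a short, readable summary.
    return llm(f"Question: {question}\nResult rows: {rows}\nSummarize in two sentences.")

# usage: answer("What was last week's total order volume?", sqlite3.connect("warehouse.db"))
```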

Evgeniya (Jenny) Sukhodolskaya (Qdrant) discussed the multi-disciplinary nature of DevRel and her focus on retrieval research. She shared her work on sparse neural retrieval, relevance feedback, and hybrid search models that blend lexical precision with semantic understanding—contributing methods like Mini-COIL and shaping Qdrant’s search quality roadmap through end-to-end experimentation and community education.
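Hybrid rankings can be blended in several ways; one common, generic recipe is reciprocal rank fusion over a lexical ranking and a dense-vector ranking, sketched below. This is only an illustration of the general idea, not a description of Mini-COIL or of Qdrant's internals.

```python
# Generic reciprocal rank fusion (RRF) over two ranked lists of document ids.
from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several rankings: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["doc3", "doc1", "doc7"]      # e.g. BM25 order
semantic = ["doc1", "doc9", "doc3"]     # e.g. dense-vector order
print(rrf([lexical, semantic]))         # -> ['doc1', 'doc3', 'doc9', 'doc7']
```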

Speakers

Andrey Vasnetsov Co-founder & CTO of Qdrant, leading the engineering and platform vision behind a developer-focused vector database and vector-native infrastructure. Connect: https://www.linkedin.com/in/andrey-vasnetsov-75268897/

Slava Dubrov Technical Lead at HubSpot working on AI Signals—embedding models, similarity search, and context systems for AI agents. Connect: https://www.linkedin.com/in/slavadubrov/

Marina Ariamnova Data Lead at SumUp, managing analytics and financial data workflows while prototyping LLM tools that automate routine analysis. Connect: https://www.linkedin.com/in/marina-ariamnova/

Evgeniya (Jenny) Sukhodolskaya Developer Relations Engineer at Qdrant specializing in retrieval research, sparse neural methods, and educational ML content. Connect: https://www.linkedin.com/in/evgeniya-sukhodolskaya/

In this conversation with Nagim Ashufta, Founder of DRIVA GmbH, we dig into what it really looks like when organizations try to get serious about data. Nagim started his own agency to help companies accelerate their data capabilities, and he's seen the same challenges come up again and again. For those in analytics seeking to understand the realities of a data role, this is a great view into the challenges that organizations face, and how you can be prepared. If you're an analytics professional, this episode will give you a practical look at the landscape, the pitfalls, and the messy-but-important role you play in helping data actually deliver.

What You'll Learn:
Why data is almost never the first role companies hire for, and what this means
The common patterns across organizations that struggle with analytics maturity
Why "everyone is a data steward" and what stewardship really means in practice

🤝 Follow Nagim on LinkedIn!
Register for free to be part of the next live session: https://bit.ly/3XB3A8b
Follow us on Socials: LinkedIn YouTube Instagram (Mavens of Data) Instagram (Maven Analytics) TikTok Facebook Medium X/Twitter

In this second part of my three-part series (catch Part I via episode 182), I dig deeper into the key idea that sales in commercial data products can be accelerated by designing for actual user workflows—vs. going wide with a “many-purpose” AI and analytics solution that “does more,” but is misaligned with how users’ most important work actually gets done.

To explain this, I introduce the concept of user experience (UX) outcomes and show how building your solution to enable these outcomes may be a dependency for getting sales traction and for your customer to see the value of your solution. I also share practical steps to improve UX outcomes in commercial data products, from establishing a baseline definition of UX quality to mapping out users’ current workflows (and future ones, when agentic AI changes their job). Finally, I talk about how approaching product development as small “bets” helps you build small and learn fast so you can accelerate value creation.

Highlights / Skip to:

Continuing the journey: designing for users, workflows, and tasks (00:32)
How UX impacts sales—not just usage and adoption (02:16)
Understanding how you can leverage users’ frustrations and perceived risks as fuel for building an indispensable data product (04:11)
Definition of a UX outcome (07:30)
Establishing a baseline definition of product (UX) quality, so you know how to observe and measure improvement (11:04)
Spotting friction and solving the right customer problems first (15:34)
Collecting actionable user feedback (20:02)
Moving users along the scale from frustration to satisfaction to delight (23:04)
Unique challenges of designing B2B AI and analytics products used for decision intelligence (25:04)

Quotes from Today’s Episode

One of the hardest parts of building anything meaningful, especially in B2B or data-heavy spaces, is pausing long enough to ask what the actual ‘it’ is that we’re trying to solve.

People rush into building the fix, pitching the feature, or drafting the roadmap before they’ve taken even a moment to define what the user keeps tripping over in their day-to-day environment.

And until you slow down and articulate that shared, observable frustration, you’re basically operating on vibes and assumptions instead of behavior and reality.

What you want is not a generic problem statement but an agreed-upon description of the two or three most painful frictions that are obvious to everyone involved, frictions the user experiences visibly and repeatedly in the flow of work.

Once you have that grounding, everything else (prioritization, design decisions, sequencing, even organizational alignment) suddenly becomes much easier, because you’re no longer debating abstractions; you’re working against the same measurable anchor.

And the irony is, the faster you try to skip this step, the longer the project drags on, because every downstream conversation becomes a debate about interpretive language rather than a conversation about a shared, observable experience.

__

Want people to pay for your product? Solve an observable problem—not a vague information or data problem. What do I mean?

“When you’re trying to solve a problem for users, especially in analytical or AI-driven products, one of the biggest traps is relying on interpretive statements instead of observable ones.

Interpretive phrasing like ‘they’re overwhelmed’ or ‘they don’t trust the data’ feels descriptive, but it hides the important question of what, exactly, we can see them doing that signals the problem.

If you can’t film it happening, if you can’t watch the behavior occur in real time, then you don’t actually have a problem definition you can design around.

Observable frustration might be the user jumping between four screens, copying and pasting the same value into different systems, or re-running a query five times because something feels off even though they can’t articulate why.

Those concrete behaviors are what allow teams to converge and say, ‘Yes, that’s the thing, that is the friction we agree must change,’ and that shift from interpretation to observation becomes the foundation for better design, better decision-making, and far less wasted effort.

And once you anchor the conversation in visible behavior, you eliminate so many circular debates and give everyone, from engineering to leadership, a shared starting point that’s grounded in reality instead of theory."

__

One of the reasons that measuring the usability/utility/satisfaction of your product’s UX might seem hard is that you don’t have a baseline definition of how satisfactory (or not) the product is right now. As such, it’s very hard to tell if you’re just making product changes—or you’re making improvements that might make the product worth paying for at all, worth paying more for, or easier to buy.

"It’s surprisingly common for teams to claim they’re improving something when they’ve never taken the time to document what the current state even looks like. If you want to create a meaningful improvement, something a user actually feels, you need to understand the baseline level of friction they tolerate today, not what you imagine that friction might be.

Establishing a baseline is not glamorous work, but it’s the work that prevents you from building changes that make sense on paper but do nothing to the real flow of work. When you diagram the existing workflow, when you map the sequence of steps the user actually takes, the mismatches between your mental model and their lived experience become crystal clear, and the design direction becomes far less ambiguous.

That act of grounding yourself in the current state allows every subsequent decision (prioritizing fixes, determining scope, measuring progress) to be aligned with reality rather than assumptions.

And without that baseline, you risk designing solutions that float in conceptual space, disconnected from the very pains you claim to be addressing."

__

Prototypes are a great way to learn—if you’re actually treating them as a means to learn, and not a product you intend to deliver regardless of the feedback customers give you. 

"People often think prototyping is about validating whether their solution works, but the deeper purpose is to refine the problem itself.

Once you put even a rough prototype in front of someone and watch what they do with it, you discover the edges of the problem more accurately than any conversation or meeting can reveal.

Users will click in surprising places, ignore the part you thought mattered most, or reveal entirely different frictions just by trying to interact with the thing you placed in front of them. That process doesn’t just improve the design, it improves the team’s understanding of which parts of the problem are real and which parts were just guesses.

Prototyping becomes a kind of externalization of assumptions, forcing you to confront whether you’re solving the friction that actually holds back the flow of work or a friction you merely predicted.

And every iteration becomes less about perfecting the interface and more about sharpening the clarity of the underlying problem, which is why the teams that prototype early tend to build faster, with better alignment, and far fewer detours."

__

Most founders and data people tend to measure UX quality by “counting usage” of their solution. Tracking usage stats, analytics on sessions, etc. The problem with this is that it tells you nothing useful about whether people are satisfied (“meets spec”) or delighted (“a product they can’t live without”). These are product metrics—but they don’t reflect how people feel.

There are better measurements to use for evaluating users’ experience that go beyond “willingness to pay.” 

Payment is great, but in B2B products, buyers aren’t always users—and we’ve all bought something based on the promise of what it would do for us, but the promise fell short.

"In B2B analytics and AI products, the biggest challenge isn’t complexity, it’s ambiguity around what outcome the product is actually responsible for changing.

Teams often define success in terms of internal goals like ‘adoption,’ ‘usage,’ or ‘efficiency,’ but those metrics don’t tell you what the user’s experience is supposed to look like once the product is working well.

A product tied to vague business outcomes tends to drift because no one agrees on what the improvement should feel like in the user’s real workflow.

What you want are visible, measurable, user-centric outcomes, outcomes that describe how the user’s behavior or experience will change once the solution is in place, down to the concrete actions they’ll no longer need to take.

When you articulate outcomes at that level, it forces the entire organization to align around a shared target, reduces the scope bloat that normally plagues enterprise products, and gives you a way to evaluate whether you’re actually removing friction rather than just adding more layers of tooling.

And ironically, the clearer the user outcome is, the easier it becomes to achieve the business outcome, because the product is no longer floating in abstraction, it’s anchored in the lived reality of the people who use it."

Links

Listen to part one: Episode 182
Schedule a Design-Eyes Assessment with me and get clarity, now.

podcast_episode
by Cris deRitis, Scott Hoyt (Moody's Analytics), Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

Scott Hoyt joins the podcast to provide a look into the holiday retail season and to discuss the state of the U.S. consumer more broadly. The team reviews the downbeat data on consumer confidence, the labor market, inflation and housing, and contemplates the implications for consumer spending this Christmas. The team remembers to take a listener question on income inequality and the mood gets even darker. Happy Thanksgiving everyone! Hosts: Mark Zandi – Chief Economist, Moody’s Analytics, Cris deRitis – Deputy Chief Economist, Moody’s Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody’s Analytics Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Brought to You By: •⁠ Statsig ⁠ — ⁠ The unified platform for flags, analytics, experiments, and more. Statsig are helping make the first-ever Pragmatic Summit a reality. Join me and 400 other top engineers and leaders on 11 February, in San Francisco for a special one-day event. Reserve your spot here. •⁠ Linear ⁠ — ⁠ The system for modern product development. Engineering teams today move much faster, thanks to AI. Because of this, coordination increasingly becomes a problem. This is where Linear helps fast-moving teams stay focused. Check out Linear. — As software engineers, what should we know about writing secure code? Johannes Dahse is the VP of Code Security at Sonar and a security expert with 20 years of industry experience. In today’s episode of The Pragmatic Engineer, he joins me to talk about what security teams actually do, what developers should own, and where real-world risk enters modern codebases. We cover dependency risk, software composition analysis, CVEs, dynamic testing, and how everyday development practices affect security outcomes. Johannes also explains where AI meaningfully helps, where it introduces new failure modes, and why understanding the code you write and ship remains the most reliable defense. If you build and ship software, this episode is a practical guide to thinking about code security under real-world engineering constraints. — Timestamps (00:00) Intro (02:31) What is penetration testing? (06:23) Who owns code security: devs or security teams? (14:42) What is code security?  (17:10) Code security basics for devs (21:35) Advanced security challenges (24:36) SCA testing  (25:26) The CVE Program  (29:39) The State of Code Security report  (32:02) Code quality vs security (35:20) Dev machines as a security vulnerability (37:29) Common security tools (42:50) Dynamic security tools (45:01) AI security reviews: what are the limits? (47:51) AI-generated code risks (49:21) More code: more vulnerabilities (51:44) AI’s impact on code security (58:32) Common misconceptions of the security industry (1:03:05) When is security “good enough?” (1:05:40) Johannes’s favorite programming language — The Pragmatic Engineer deepdives relevant for this episode: • What is Security Engineering? •⁠ Mishandled security vulnerability in Next.js •⁠ Okta Schooled on Its Security Practices — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Help us become the #1 Data Podcast by leaving a rating & review! We are 67 reviews away! Can You Pass This Data Analyst Interview? 🖥️ Build your own app with Replit: https://replit.com/refer/AveryData 👔 Mock Interview Platform: https://interviewsimulator.io 💌 Join 30k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter 🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training 👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa 👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com//interviewsimulator ⌚ TIMESTAMPS 00:28 — Question 1 02:54 — Question 2 04:53 — Question 3 08:08 — Where to do Interview Practice 08:54 — How to Build Cool Apps Like This  🔗 CONNECT WITH AVERY 🎥 YouTube Channel 🤝 LinkedIn 📸 Instagram 🎵 TikTok 💻 Website Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

Data science leadership is about more than just technical expertise—it’s about building trust, embracing AI, and delivering real business impact. As organizations evolve toward AI-first strategies, data teams have an unprecedented opportunity to lead that transformation. But how do you turn a traditional analytics function into an AI-driven powerhouse that drives decision-making across the business? What’s the right structure to balance deep technical specialization with seamless business integration? From building credibility through high-impact forecasting to creating psychological safety around AI adoption, effective data leadership today requires both technical rigor and visionary communication. The landscape is shifting fast, but with the right approach, data science can stand as a true pillar of innovation alongside engineering, product, and design. Bilal Zia is currently the Head of Data Science & Analytics at Duolingo, an EdTech company whose mission is to develop the best education in the world and make it universally available. Previously, he spent two years helping to build and lead an interdisciplinary Central Science team at Amazon, comprising economists, data and applied scientists, survey specialists, user researchers, and engineers. Before that, he spent fifteen years in the Research Department of the World Bank in Washington, D.C., pursuing an applied academic career. He holds a Ph.D. in Economics from the Massachusetts Institute of Technology, and his interests span economics, data science, machine learning/AI, psychology, and user research. In the episode, Richie and Bilal explore rebuilding an underperforming data team, fostering trust with leadership, embedding data scientists within product teams, leveraging AI for productivity, the future of synthetic A/B testing, and much more.

Links Mentioned in the Show:
Duolingo
Duolingo Blog: How machine learning supercharged our revenue by millions of dollars
Connect with Bilal
AI-Native Course: Intro to AI for Work
Related Episode: The Future of Data & AI Education Just Arrived with Jonathan Cornelissen & Yusuf Saber
Rewatch RADAR AI
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business

In this episode, Tristan Handy sits down with Chang She — a co-creator of Pandas and now CEO of LanceDB — to explore the convergence of analytics and AI engineering. The team at LanceDB is rebuilding the data lake from the ground up with AI as a first principle, starting with a new AI-native file format called Lance. Tristan traces Chang's journey from one of the original contributors to the pandas library to building a new infrastructure layer for AI-native data. Learn why vector databases alone aren't enough, why agents require new architecture, and how LanceDB is building an AI lakehouse for the future. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com. The Analytics Engineering Podcast is sponsored by dbt Labs.
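For a sense of what an AI-native table feels like in practice, here is a minimal sketch using the lancedb Python package; the table name, fields, and vectors are made up for illustration, and this is not code from the episode.

```python
# Minimal sketch with the lancedb Python client; table name, fields, and
# vectors are illustrative only.
import lancedb

db = lancedb.connect("./lance_demo")  # data is stored locally in the Lance format

table = db.create_table(
    "documents",
    data=[
        {"vector": [0.1, 0.2, 0.3], "text": "intro to analytics engineering"},
        {"vector": [0.9, 0.1, 0.4], "text": "vector search for agents"},
    ],
)

# Nearest-neighbour query over the stored vectors.
results = table.search([0.1, 0.25, 0.3]).limit(1).to_list()
print(results[0]["text"])
```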

podcast_episode
by Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics), Alan S. Blinder (Princeton University)

The Inside Economics crew welcomes Alan Blinder back to the podcast. The Princeton University economics professor and former Vice Chair of the Fed offers his perspective on the outlook for artificial intelligence, the risk of a bubble in equity markets, and the potential implications of current threats to Fed independence. The team also breaks down the much-delayed September employment report. Guest: Alan Blinder – Professor of Economics and Public Affairs at Princeton University Get more information on Alan Blinder's book - A Monetary and Fiscal History of the United States, 1961-2021 Hosts: Mark Zandi – Chief Economist, Moody’s Analytics, Cris deRitis – Deputy Chief Economist, Moody’s Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody’s Analytics Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

In this episode, we're joined by Terry Dorsey, Senior Data Architect & Evangelist at Denodo, to unpack the conceptual differences between terms like data fabrics, vector databases, and knowledge graphs, and remind you not to forget about the importance of structured data in this new AI-native world!

What You'll Learn:
The difference between data fabrics, vector databases, and knowledge graphs — and the pros and cons
Why organizing and managing data is still the hardest part of any AI project (and how process design plays a critical role)
Why structured data and schemas are still crucial in the age of LLMs and embeddings
How knowledge graphs help model context, relationships, and "episodic memory" more completely than other approaches

If you've ever wondered about different data and AI terms, here's a great glossary to check out from Denodo: https://www.denodo.com/en/glossary

🤝 Follow Terry on LinkedIn!
Register for free to be part of the next live session: https://bit.ly/3XB3A8b
Follow us on Socials: LinkedIn YouTube Instagram (Mavens of Data) Instagram (Maven Analytics) TikTok Facebook Medium X/Twitter

In this landmark 100th episode of Data Unchained, host Molly Presley sits down with Jonathan Flynn, Director of Applied Systems at Hammerspace, live from Supercomputing 2025. Together they explore the performance engineering breakthroughs that enabled Hammerspace and Samsung to deliver a historic IO500 10 Node Production result using only standard Linux, the upstream NFSv4.2 client, and off the shelf NVMe hardware. This episode breaks down how the Hammerspace Data Platform delivered more than a 33 percent gain over earlier submissions, doubled overall bandwidth, and achieved an unprecedented 809 percent improvement in the IO Hard Read test using Samsung PM1753 Gen 5 NVMe SSDs. Jonathan explains the Linux kernel innovations, metadata advancements, IO path optimization, parallel file system breakthroughs, and multi instance file placement strategies that allowed Hammerspace to reach genuine HPC class performance without proprietary clients or custom networking. Listeners get a detailed walkthrough of the architectural differences between Research and Production IO500 submissions, the impact of metadata redundancy, the performance benefits of NFSd direct and NFS direct, the role of ZFS locking improvements, and how upstream Linux contributions directly advanced the state of HPC and AI data infrastructure. Jonathan also highlights the evolution of MLPerf benchmarking, the benefits of tier zero storage, and how Hammerspace performance engineering is unlocking new levels of efficiency and scalability for AI training, scientific workloads, and large scale analytics. This episode is essential for AI architects, HPC engineers, kernel developers, data scientists, and infrastructure leaders building the next generation of high performance data platforms. Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US Hosted on Acast. See acast.com/privacy for more information.

Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. AI-accelerated development isn’t just about shipping faster: it’s about measuring whether what you ship actually delivers value. This is where modern experimentation with Statsig comes in. Check it out. • Linear — The system for modern product development. I had a jaw-dropping experience when I dropped in for the weekly “Quality Wednesdays” meeting at Linear. Every week, every dev fixes at least one quality issue, large or small. Even if it’s one pixel misalignment, like this one. I’ve yet to see a team obsess this much about quality. Read more about how Linear does Quality Wednesdays – it’s fascinating! — Martin Fowler is one of the most influential people within software architecture, and the broader tech industry. He is the Chief Scientist at Thoughtworks and the author of Refactoring and Patterns of Enterprise Application Architecture, and several other books. He has spent decades shaping how engineers think about design, architecture, and process, and regularly publishes on his blog, MartinFowler.com. In this episode, we discuss how AI is changing software development: the shift from deterministic to non-deterministic coding; where generative models help with legacy code; and the narrow but useful cases for vibe coding. Martin explains why LLM output must be tested rigorously, why refactoring is more important than ever, and how combining AI tools with deterministic techniques may be what engineering teams need. We also revisit the origins of the Agile Manifesto and talk about why, despite rapid changes in tooling and workflows, the skills that make a great engineer remain largely unchanged. — Timestamps (00:00) Intro (01:50) How Martin got into software engineering (07:48) Joining Thoughtworks (10:07) The Thoughtworks Technology Radar (16:45) From Assembly to high-level languages (25:08) Non-determinism (33:38) Vibe coding (39:22) StackOverflow vs. coding with AI (43:25) Importance of testing with LLMs (50:45) LLMs for enterprise software (56:38) Why Martin wrote Refactoring (1:02:15) Why refactoring is so relevant today (1:06:10) Using LLMs with deterministic tools (1:07:36) Patterns of Enterprise Application Architecture (1:18:26) The Agile Manifesto (1:28:35) How Martin learns about AI (1:34:58) Advice for junior engineers (1:37:44) The state of the tech industry today (1:42:40) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • Vibe coding as a software engineer • The AI Engineering stack • AI Engineering in the real world • What changed in 50 years of computing — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Help us become the #1 Data Podcast by leaving a rating & review! We are 67 reviews away! I wouldn't try to become a data analyst here. Here are 4 reasons why and what I'd do instead. 💌 Join 30k+ aspiring data analysts & get my tips in your inbox weekly 👉 https://www.datacareerjumpstart.com/newsletter 🆘 Feeling stuck in your data journey? Come to my next free "How to Land Your First Data Job" training 👉 https://www.datacareerjumpstart.com/training 👩‍💻 Want to land a data job in less than 90 days? 👉 https://www.datacareerjumpstart.com/daa 👔 Ace The Interview with Confidence 👉 https://www.datacareerjumpstart.com//interviewsimulator ⌚ TIMESTAMPS 00:00 — Why the data job market is tough 00:36 — Worst states for data analysts 01:55 — Why these states rank low 04:11 — Best states (raw counts) for data analysts 06:08 — Jobs per capita explained 07:10 — Top states after normalization 09:35 — Slope chart breakdown 10:18 — What the normalized rankings mean 👨‍🎓 Featured Bootcamp Students: Moiz Noorali: https://www.linkedin.com/in/moiz-noorali/ Ani Mayilyan: https://www.linkedin.com/in/ani-mayilyan/ Mukta Pandey: https://www.linkedin.com/in/muktap2377210/ Amanda Ward: https://www.linkedin.com/in/amandawarddata/ Sebastian Wang: https://www.linkedin.com/in/zitong-wang-b06316284/ 📊 Intern with me: https://www.datacareerjumpstart.com/daa 🔗 CONNECT WITH AVERY 🎥 YouTube Channel 🤝 LinkedIn 📸 Instagram 🎵 TikTok 💻 Website Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get: ✅ A discount on your enrollment 🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

podcast_episode
by Matt Colyar (Moody's Analytics), Cris deRitis, Mark Zandi (Moody's Analytics), Marisa DiNatale (Moody's Analytics)

The Inside Economics team records a rare Saturday podcast. They consider the fallout from the just-ended government shutdown on the broader economy and the economic data.  It’s not good, but it ended just before it did serious damage. The team also takes up the Trump administration’s pivot to addressing affordability, including scaling back tariffs, most important for the group, those on pasta and bananas.  And they introduce a new regular segment of the podcast – listener questions.  So, keep them coming. Hosts: Mark Zandi – Chief Economist, Moody’s Analytics, Cris deRitis – Deputy Chief Economist, Moody’s Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody’s Analytics Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Data scientists have the skills to model complex systems, work with messy data, and uncover hidden patterns. Quant scientists do all of that, but with the added thrill (and pressure) of putting real money on the line. In this episode, we sit down with Jason Strimpel, Founder of PyQuant News and Co-founder of Quant Science, to explore why data scientists are uniquely positioned to excel in algorithmic trading. Whether you're a data scientist curious about finance, or simply interested in seeing your models have a more personal impact, this show offers a fresh perspective on how your skills can translate into the world of algorithmic trading.

What You'll Learn:
How your Python, stats, and modeling skills transfer directly into the markets
The mindset shifts required
Why reproducibility, auditability, and backtesting discipline are the data scientist's secret weapon
Common pitfalls when transitioning into quant roles, and how to avoid them
The tools and workflows Jason recommends to get started fast

🤝 Follow Jason on LinkedIn! Subscribe to PyQuant News
Register for free to be part of the next live session: https://bit.ly/3XB3A8b
Follow us on Socials: LinkedIn YouTube Instagram (Mavens of Data) Instagram (Maven Analytics) TikTok Facebook Medium X/Twitter
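To ground the point about reproducibility and backtesting discipline, here is a deliberately simple vectorized backtest of a moving-average crossover in pandas. It is a generic illustration with synthetic prices, not a strategy or workflow from the episode.

```python
# Toy vectorized backtest of a moving-average crossover, for illustration only.
# Prices are synthetic; a real workflow would add costs, slippage, and proper
# out-of-sample evaluation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))  # fake price path

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()

# Long when the fast average is above the slow one; shift to avoid look-ahead bias.
position = (fast > slow).astype(int).shift(1).fillna(0)

returns = prices.pct_change().fillna(0)
strategy_returns = position * returns

equity = (1 + strategy_returns).cumprod()
print(f"Final equity multiple: {equity.iloc[-1]:.2f}")
```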

In this episode of Hub & Spoken, Jason Foster, CEO and Founder of Cynozure, speaks with Shachar Meir, a data advisor who has worked with organisations from startups to the likes of Meta and PayPal. Together, they explore why so many companies, even those with skilled data teams, solid platforms and plenty of data, still struggle to deliver real business value. Shachar's take is clear: the problem isn't technology - it's people, process, and culture. Too often, data teams focus on building sophisticated platforms instead of understanding the business problems they're meant to solve. His summary: why guess when you can know? This episode is a practical conversation for anyone looking to move their organisation from data chaos to data clarity. 🎧 Listen now to discover how clarity beats complexity in data strategy. Cynozure is a leading data, analytics and AI company that helps organisations to reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.