talk-data.com

Topic: Agile/Scrum
Tags: project_management · software_development · methodology
61 tagged activities

Activity Trend: 163 peak/qtr (2020-Q1 to 2026-Q1)

Activities: 61 activities · Newest first

Brought to You By: • Statsig — The unified platform for flags, analytics, experiments, and more. AI-accelerated development isn’t just about shipping faster: it’s about measuring whether what you ship actually delivers value. This is where modern experimentation with Statsig comes in. Check it out. • Linear — The system for modern product development. I had a jaw-dropping experience when I dropped in for the weekly “Quality Wednesdays” meeting at Linear. Every week, every dev fixes at least one quality issue, large or small, even if it’s a one-pixel misalignment. I’ve yet to see a team obsess this much about quality. Read more about how Linear does Quality Wednesdays – it’s fascinating!

—

Martin Fowler is one of the most influential people in software architecture and the broader tech industry. He is the Chief Scientist at Thoughtworks and the author of Refactoring, Patterns of Enterprise Application Architecture, and several other books. He has spent decades shaping how engineers think about design, architecture, and process, and regularly publishes on his blog, MartinFowler.com. In this episode, we discuss how AI is changing software development: the shift from deterministic to non-deterministic coding; where generative models help with legacy code; and the narrow but useful cases for vibe coding. Martin explains why LLM output must be tested rigorously, why refactoring is more important than ever, and how combining AI tools with deterministic techniques may be what engineering teams need. We also revisit the origins of the Agile Manifesto and talk about why, despite rapid changes in tooling and workflows, the skills that make a great engineer remain largely unchanged.

—

Timestamps (00:00) Intro (01:50) How Martin got into software engineering (07:48) Joining Thoughtworks (10:07) The Thoughtworks Technology Radar (16:45) From Assembly to high-level languages (25:08) Non-determinism (33:38) Vibe coding (39:22) StackOverflow vs. coding with AI (43:25) Importance of testing with LLMs (50:45) LLMs for enterprise software (56:38) Why Martin wrote Refactoring (1:02:15) Why refactoring is so relevant today (1:06:10) Using LLMs with deterministic tools (1:07:36) Patterns of Enterprise Application Architecture (1:18:26) The Agile Manifesto (1:28:35) How Martin learns about AI (1:34:58) Advice for junior engineers (1:37:44) The state of the tech industry today (1:42:40) Rapid fire round

—

The Pragmatic Engineer deepdives relevant for this episode: • Vibe coding as a software engineer • The AI Engineering stack • AI Engineering in the real world • What changed in 50 years of computing

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
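To make Martin's point about rigorously testing LLM output concrete, here is a minimal, hypothetical Python sketch: because the output is non-deterministic, the test asserts invariants that any acceptable answer should satisfy rather than comparing against an exact string. The `summarize` function is a stub standing in for a real model call, not any actual API.

```python
# Hypothetical sketch: testing non-deterministic output via invariants.
# `summarize` stands in for any function wrapping an LLM call; it is
# stubbed here so the example runs without a model.

def summarize(text: str) -> str:
    # Stand-in for a real model call; returns the first sentence.
    return text.split(".")[0] + "."

def test_summary_invariants() -> None:
    source = ("Refactoring improves design. It also reduces risk. "
              "Tests guard behavior.")
    summary = summarize(source)
    # Invariants that should hold for any acceptable completion:
    assert summary.strip(), "summary must be non-empty"
    assert len(summary) < len(source), "summary must be shorter than input"
    assert "refactoring" in summary.lower(), "summary must keep the subject"

if __name__ == "__main__":
    test_summary_invariants()
    print("all invariants hold")
```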

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Kasriel Kay, leading data democratization at Velotix, joined Yuliia and Dumke to challenge conventional wisdom about data governance and catalogs. Kasriel argues that data catalogs provide visibility but fail to deliver business value, comparing them to "buying JIRA and expecting agile practices." He advocates for shifting from restrictive data governance to data enablement through policy-based access control that considers user attributes, data sensitivity, and business context. Kasriel explains how AI-driven policy engines can learn from organizational behavior to automatically grant appropriate data access while maintaining compliance, ultimately reducing time-to-insight and unlocking missed business opportunities.
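As a rough illustration of what policy-based access control looks like in code, here is a minimal, hypothetical Python sketch in the spirit Kasriel describes: the decision combines user attributes, data sensitivity, and business context. All attribute names and rules are illustrative, not Velotix's actual policy engine.

```python
# Minimal, illustrative policy-based (ABAC-style) access control sketch.
# Attribute names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class User:
    department: str
    clearance: int  # 1 = public only, 2 = internal, 3 = restricted

@dataclass
class Dataset:
    domain: str
    sensitivity: int  # 1 = public, 2 = internal, 3 = restricted

def allow(user: User, data: Dataset, purpose: str) -> bool:
    # Clearance must cover the data's sensitivity...
    if user.clearance < data.sensitivity:
        return False
    # ...and business context must justify access: same domain,
    # or an explicitly approved purpose.
    return user.department == data.domain or purpose == "approved-analysis"

print(allow(User("finance", 2), Dataset("finance", 2), "reporting"))        # True
print(allow(User("sales", 1), Dataset("finance", 3), "reporting"))          # False
print(allow(User("sales", 3), Dataset("finance", 2), "approved-analysis"))  # True
```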

In this episode of Hub & Spoken, Jason Foster, CEO & Founder of Cynozure, speaks with Lisa Allen, Director of Data at The Pensions Regulator (TPR), about the role of data in protecting savers and shaping a more resilient pensions industry. Lisa shares the story behind TPR's new data strategy and how it's helping to modernise an ecosystem that oversees more than £2 trillion in savings across 38 million members. Drawing on her experience at organisations including the Ordnance Survey and the Open Data Institute, she explains why strong data foundations, industry collaboration, and adaptive thinking are essential to success. The conversation explores how the regulator is building a data marketplace, adopting open standards, and applying AI to enable risk-based regulation, while reducing unnecessary burdens on the industry. Lisa also discusses the value of working transparently, co-designing with stakeholders, and staying agile in the face of rapid change. This episode is a must-listen for business leaders, regulators, and data professionals thinking about strategy, innovation, and sector-wide impact. Cynozure is a leading data, analytics and AI company that helps organisations reach their data potential. It works with clients on data and AI strategy, data management, data architecture and engineering, analytics and AI, data culture and literacy, and data leadership. The company was named one of The Sunday Times' fastest-growing private companies in both 2022 and 2023 and recognised as The Best Place to Work in Data by DataIQ in 2023 and 2024. Cynozure is a certified B Corporation.

Supported by Our Partners • Sonar — Code quality and code security for ALL code. • Statsig — The unified platform for flags, analytics, experiments, and more. • Augment Code — AI coding assistant that pro engineering teams love.

—

Kent Beck is one of the most influential figures in modern software development. Creator of Extreme Programming (XP), co-author of The Agile Manifesto, and a pioneer of Test-Driven Development (TDD), he’s shaped how teams write, test, and think about code. Now, with over five decades of programming experience, Kent is still pushing boundaries—this time with AI coding tools. In this episode of The Pragmatic Engineer, I sit down with him to talk about what’s changed, what hasn’t, and why he’s more excited than ever to code. In our conversation, we cover: • Why Kent calls AI tools an “unpredictable genie”—and how he’s using them • Why Kent no longer has an emotional attachment to any specific programming language • The backstory of The Agile Manifesto—and why Kent resisted the word “agile” • An overview of XP (Extreme Programming) and how Grady Booch played a role in the name • Tape-to-tape experiments in Kent’s childhood that laid the groundwork for TDD • Kent’s time at Facebook and how he adapted to its culture and use of feature flags • And much more!

—

Timestamps (00:00) Intro (02:27) What Kent has been up to since writing Tidy First (06:05) Why AI tools are making coding more fun for Kent and why he compares it to a genie (13:41) Why Kent says languages don’t matter anymore (16:56) Kent’s current project building a Smalltalk server (17:51) How Kent got involved with The Agile Manifesto (23:46) Gergely’s time at JP Morgan, and why Kent didn’t like the word ‘agile’ (26:25) An overview of “extreme programming” (XP) (35:41) Kent’s childhood tape-to-tape experiments that inspired TDD (42:11) Kent’s response to Ousterhout’s criticism of TDD (50:05) Why Kent still uses TDD with his AI stack (54:26) How Facebook operated in 2011 (1:04:10) Facebook in 2011 vs. 2017 (1:12:24) Rapid fire round

—

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

—

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
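For readers who haven't practiced TDD, here is a minimal sketch of the rhythm Kent popularized: write a failing test first, then the simplest code that makes it pass, then refactor. The example is ours, not Kent's.

```python
# Minimal TDD illustration: the tests below were (conceptually) written
# first; leap_year is the simplest implementation that makes them pass.
import unittest

def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four_is_leap(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```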

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

In this podcast episode, we talked with Nemanja Radojkovic about MLOps in Corporations and Startups.

About the Speaker: Nemanja Radojkovic is Senior Machine Learning Engineer at Euroclear.

In this event, we’re diving into the world of MLOps, comparing life in startups versus big corporations. Joining us again is Nemanja, a seasoned machine learning engineer with experience spanning Fortune 500 companies and agile startups. We explore the challenges of scaling MLOps on a shoestring budget, the trade-offs between corporate stability and startup agility, and practical advice for engineers deciding between these two career paths — whether you’re navigating legacy frameworks or experimenting with cutting-edge tools.

1:00 MLOps in corporations versus startups 6:03 The agility and pace of startups 7:54 MLOps on a shoestring budget 12:54 Cloud solutions for startups 15:06 Challenges of cloud complexity versus on-premise 19:19 Selecting tools and avoiding vendor lock-in 22:22 Choosing between a startup and a corporation 27:30 Flexibility and risks in startups 29:37 Bureaucracy and processes in corporations 33:17 The role of frameworks in corporations 34:32 Advantages of large teams in corporations 40:01 Challenges of technical debt in startups 43:12 Career advice for junior data scientists 44:10 Tools and frameworks for MLOps projects 49:00 Balancing new and old technologies in skill development 55:43 Data engineering challenges and reliability in LLMs 57:09 On-premise vs. cloud solutions in data-sensitive industries 59:29 Alternatives like Dask for distributed systems
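One topic from the chapter list above (59:29) is Dask as an alternative for distributed systems. As a rough sketch of why it appeals to budget-conscious startups, here is pandas-style code that Dask parallelizes across cores or a cluster; the file names and columns are hypothetical.

```python
# Hypothetical Dask sketch: pandas-like code that runs in parallel.
import dask.dataframe as dd

# Lazily treat many CSVs as one logical dataframe (nothing loads yet).
df = dd.read_csv("events-2024-*.csv")

# Build a computation graph: average amount per customer.
avg = df.groupby("customer_id")["amount"].mean()

# .compute() triggers parallel execution and returns a pandas object.
print(avg.compute().head())
```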

🔗 CONNECT WITH NEMANJA
LinkedIn - /radojkovic
GitHub - https://github.com/baskervilski

🔗 CONNECT WITH DataTalksClub
Join the community - https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/...
Check other upcoming events - https://lu.ma/dtc-events
LinkedIn - /datatalks-club
Twitter - /datatalksclub
Website - https://datatalks.club/

It seems like quite a few data practitioners I talk with are miserable. Forced to deliver things ever faster, there's not enough time to THINK. Instead, the pressure is to deliver "stuff" in two-week sprints. In this episode, I rant about why not everything needs to be a sprint, and why we need to treat Deep Work versus Delivery Work differently.

Paolo Platter, CTO and co-founder of Agile Lab and Witboost, joined Yuliia to share how his 10 years of building custom data solutions for clients led to creating Witboost - a platform that helps big companies manage their data products at scale. One of their customers used Witboost to build over 250 data products in just 18 months, showing how well the platform works at scale. Paolo explained why setting rules for data teams becomes harder as companies grow, and shared how he shifted from saying "yes" to every client request as a consultant to building a product that works for many companies.
Paolo Platter - https://www.linkedin.com/in/paoloplatter/

Brought to you by: • WorkOS — The modern identity platform for B2B SaaS. • Sevalla — Deploy anything from preview environments to Docker images. • Chronosphere — The observability platform built for control.

—

Welcome to The Pragmatic Engineer! Today, I’m thrilled to be joined by Grady Booch, a true legend in software development. Grady is the Chief Scientist for Software Engineering at IBM, where he leads groundbreaking research in embodied cognition. He’s the mind behind several object-oriented design concepts, a co-author of the Unified Modeling Language, and a founding member of the Agile Alliance and the Hillside Group. Grady has authored six books, hundreds of articles, and holds prestigious titles as an IBM, ACM, and IEEE Fellow, as well as being a recipient of the Lovelace Medal (an award for outstanding contributions to the advancement of computing). In this episode, we discuss: • What it means to be an IBM Fellow • The evolution of the field of software development • How UML was created, what its goals were, and why Grady disagrees with the direction of later versions of UML • Pivotal moments in software development history • How the software architect role changed over the last 50 years • Why Grady declined to be the Chief Architect of Microsoft – saying no to Bill Gates! • Grady’s take on large language models (LLMs) • Advice to less experienced software engineers • … and much more!

—

Timestamps (00:00) Intro (01:56) What it means to be a Fellow at IBM (03:27) Grady’s work with legacy systems (09:25) Some examples of domains Grady has contributed to (11:27) The evolution of the field of software development (16:23) An overview of the Booch method (20:00) Software development prior to the Booch method (22:40) Forming Rational Machines with Paul and Mike (25:35) Grady’s work with Bjarne Stroustrup (26:41) ROSE and working with the commercial sector (30:19) How Grady built UML with Ivar Jacobson and James Rumbaugh (36:08) An explanation of UML and why it was a mistake to turn it into a programming language (40:25) The IBM acquisition and why Grady declined Bill Gates’s job offer (43:38) Why UML is no longer used in industry (52:04) Grady’s thoughts on formal methods (53:33) How the software architect role changed over time (1:01:46) Disruptive changes and major leaps in software development (1:07:26) Grady’s early work in AI (1:12:47) Grady’s work with Johnson Space Center (1:16:41) Grady’s thoughts on LLMs (1:19:47) Why Grady thinks we are a long way off from sentient AI (1:25:18) Grady’s advice to less experienced software engineers (1:27:20) What’s next for Grady (1:29:39) Rapid fire round

—

The Pragmatic Engineer deepdives relevant for this episode: • The Past and Future of Modern Backend Practices https://newsletter.pragmaticengineer.com/p/the-past-and-future-of-backend-practices • What Changed in 50 Years of Computing https://newsletter.pragmaticengineer.com/p/what-changed-in-50-years-of-computing • AI Tooling for Software Engineers: Reality Check https://newsletter.pragmaticengineer.com/p/ai-tooling-2024

—

Where to find Grady Booch: • X: https://x.com/grady_booch • LinkedIn: https://www.linkedin.com/in/gradybooch • Website: https://computingthehumanexperience.com

Where to find Gergely: • Newsletter: https://www.pragmaticengineer.com/ • YouTube: https://www.youtube.com/c/mrgergelyorosz • LinkedIn: https://www.linkedin.com/in/gergelyorosz/ • X: https://x.com/GergelyOrosz

—

References and Transcripts: See the transcript and other references from the episode at
https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Building a robust data infrastructure is crucial for any organization looking to leverage AI and data-driven insights. But as your data ecosystem grows, so do the challenges of managing, securing, and scaling it. How do you ensure that your data infrastructure not only meets today’s needs but is also prepared for the rapid changes in technology tomorrow? What strategies can you adopt to keep your organization agile, while ensuring that your data investments continue to deliver value and support business goals? Saad Siddiqui is a venture capitalist at Titanium Ventures. Titanium focuses on enterprise technology investments, particularly next-generation enterprise infrastructure and applications. In his career, Saad has deployed over $100M in venture capital across more than a dozen companies. In previous roles as a corporate development executive, he executed M&A transactions valued at over $7 billion in aggregate. Prior to Titanium Ventures he was in corporate development at Informatica and was a member of Cisco's venture investing and acquisitions team covering cloud, big data, and virtualization. In the episode, Richie and Saad explore the business impacts of data infrastructure, getting started with data infrastructure, the roles and teams you need to get started, scalability and future-proofing, implementation challenges, continuous education and flexibility, automation and modernization, trends in data infrastructure, and much more. Links Mentioned in the Show: Titanium Ventures · Connect with Saad · Course: Artificial Intelligence (AI) Strategy · Related Episode: How are Businesses Really Using AI? With Tathagat Varma, Global TechOps Leader at Walmart Global Tech · Rewatch sessions from RADAR: AI Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

Businesses are constantly racing to stay ahead by adopting the latest data tools and AI technologies. But with so many options and buzzwords, it’s easy to get lost in the excitement without knowing whether these tools truly serve your business. How can you ensure that your data stack is not only modern but sustainable and agile enough to adapt to changing needs? What does it take to build data products that deliver real value to your teams while driving innovation? Adrian Estala is VP, Field Chief Data Officer at Starburst and the host of Starburst TV. With a background in leading digital and IT portfolio transformations, he understands the value of creating executive frameworks that focus on material business outcomes. Skilled at getting the most out of data-driven investments, Adrian is a trusted adviser for navigating complex data environments and integrating a Data Mesh strategy into your organization. In the episode, Richie and Adrian explore the modern data stack, agility in data, collaboration between business and data teams, data products and differing ways of building them, data discovery and metadata, data quality, career skills for data practitioners, and much more. Links Mentioned in the Show: Starburst · Connect with Adrian · Career Track: Data Engineer in Python · Related Episode: How this Accenture CDO is Navigating the AI Revolution · Rewatch sessions from RADAR: AI Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Season 01, Episode 19, host Nadiem von Heydebrand interviews Pradeep Fernando, who leads the data and metadata management initiative at Swisscom. They explore key topics in data product management, including the definition and categorization of data products, the role of AI, prioritization strategies, and the application of product management principles. Pradeep shares valuable insights and experiences on successfully implementing data product management within organizations. About our host Nadiem von Heydebrand: Nadiem is the CEO and Co-Founder of Mindfuel. In 2019, he merged his passion for data science with product management, becoming a thought leader in data product management. Nadiem is dedicated to demonstrating the true value contribution of data. With over a decade of experience in the data industry, Nadiem leverages his expertise to scale data platforms, implement data mesh concepts, and transform AI performance into business performance, delighting consumers at global organizations including Volkswagen, Munich Re, Allianz, Red Bull, and Vorwerk. Connect with Nadiem on LinkedIn. About our guest Pradeep Fernando: Pradeep is a seasoned data product leader with over 6 years of data product leadership experience and over 10 years of product management experience. He leads or is a key contributor to several company-wide data & analytics initiatives at Swisscom, such as Data as a Product (Data Mesh), One Data Platform, Machine Learning (Factory), metadata management, self-service data & analytics, BI tooling strategy, cloud transformation, big data platforms, and data warehousing. Previously, he was a product manager at both Swisscom's B2B and Innovation units, building new products and optimizing mature products (profitability) in the domains of enterprise mobile fleet management and cyber- and mobile-device security. Pradeep is also passionate about and experienced in leading the development of data products and transforming IT delivery teams into empowered, agile product teams. And he is always happy to engage in a conversation about lean product management or "heavier" topics such as humanity's future or our past. Connect with Pradeep on LinkedIn. All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn. Apply to be a guest or nominate someone that you know. Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights!

Venkat Subramaniam is a programmer, author, speaker, and founder of Agile Developer, Inc. I've seen him speak several times, and was always blown away by his passion and technical depth. So, I was excited to have him on the podcast.

We chat about agile development in the real world, learning to do less, and much more. Venkat is extremely wise, and I very much enjoyed our discussion. Enjoy!

LinkedIn: https://www.linkedin.com/in/vsubramaniam

Twitter: https://x.com/venkat_s

In today’s episode, I’m going to perhaps work myself out of some consulting engagements, but hey, that’s okay! True consulting is about service—not PPT decks with strategies and tiers of people attached to rate cards. Specifically, today I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SaaS analytics or AI product(s), today I’m going to tell you when you should NOT get help!

Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren’t necessarily representative of what I believe, but rather what I’ve heard from clients and prospects over 25 years—what they believe. For each of these, I’m also giving a counterargument, so hopefully, you get both sides of the coin. 

Finally, analytical thinkers, especially data product managers it seems, often want to quantify all forms of value they produce in hard monetary units—and so in this episode, I’m also going to talk about other forms of value that products can create that are worth paying for—and how mushy things like “feelings” might just come into play ;-)  Ready?

Highlights/ Skip to:

(1:52) Going for short, easy wins (4:29) When you think you have good design sense/taste (7:09) The impending changes coming with GenAI (11:27) Concerns about "dumbing down" or oversimplifying technical analytics solutions that need to be powerful and flexible (15:36) Agile and process FTW? (18:59) UX design for and with platform products (21:14) The risk of involving designers who don’t understand data, analytics, AI, or your complex domain considerations (30:09) Designing after the ML models have been trained—and it’s too late to go back (34:59) Not tapping professional design help when your user base is small, and you have routine access and exposure to them (40:01) Explaining the value of UX design investments to your stakeholders when you don’t 100% control the budget or decisions

Quotes from Today’s Episode

“It is true that most impactful design often creates more product and engineering work because humans are messy. While there sometimes are these magic, small GUI-type changes that have big impact downstream, the big-picture value of UX can be lost if you’re simply assigning low-level GUI improvement tasks and hoping to see a big product win. It always comes back to the game you’re playing inside your team: are you working to produce UX and business outcomes or shipping outputs on time?” (3:18)

“If you’re building something that needs to generate revenue, there has to be a sense of trust and belief in the solution. We’ve all seen the challenges of this with LLMs, [when] you’re unable to get it to respond in a way that makes you feel confident that it understood the query to begin with. And then you start to have all these questions about, ‘Is the answer not in there,’ or ‘Am I not prompting it correctly?’ If you think that most of this is just a technical data science problem, then don’t bother to invest in UX design work…” (9:52)

“Design is about, at a minimum, making it useful and usable, if not delightful. In order to do that, we need to understand the people that are going to use it. What would an improvement to this person’s life look like? Simplifying and dumbing things down is not always the answer. There are tools and solutions that need to be complex, flexible, and/or provide a lot of power, especially in an enterprise context. Working with a designer who solely insists on simplifying everything at all costs, regardless of your stated business outcome goals, is a red flag—and a reason not to invest in UX design—at least with them!” (12:28)

“I think what an analytics product manager [or] an AI product manager needs to accept is there are other ways to measure the value of UX design’s contribution to your product and to your organization. Let’s say that you have a mission-critical internal data product, it’s used by the most senior executives in the organization, and you and your team made their day, or their month, or their quarter. You saved their job. You made them feel like a hero. What is the value of giving them that experience and making them feel like those things… What is that worth when a key customer or colleague feels like you have their back with this solution you created? Ideas that spread, win, and if these people are spreading your idea, your product, or your solution… there’s a lot of value in that.” (43:33)

“Let’s think about value in non-financial terms. Terms like feelings. We buy insurance all the time. We’re spending money on something that most likely will have zero economic value this year because we’re actually trying not to have to file claims. Yet this industry does very well because the feeling of security matters. That feeling is worth something to a lot of people. The value of feeling secure is greater than the cost of the insurance plan. If your solution can build feelings of confidence and security, what is that worth? Does ‘hard to measure precisely’ necessarily mean ‘low value’?” (47:26)

Host: Hi everyone, welcome to our event. This event is brought to you by DataTalks Club, a community of people who love data, and we have weekly events; today's is one of them. I guess we're also a community of people who like to wake up early, if you're from the States, right, Christopher? Or maybe not so much, because this is the time we usually have our events; for guests and presenters from the States we usually do it in the evening, Berlin time, but it kind of slipped my mind. Anyway, we have a lot of events; you can check them at the link in the description. I don't think there are many on that link right now, but we'll be adding more, and I think we have five or six interviews scheduled, so keep an eye on that. Don't forget to subscribe to our YouTube channel so you get notified about all our future streams, which will be as awesome as today's. And, very important, don't forget to join our community, where you can hang out with other data enthusiasts. During today's interview you can ask any question: there's a pinned link in the live chat, so click on that link, ask your question, and we'll cover it during the interview. Now I'll stop sharing my screen. There's a message from Christopher: we actually have this on YouTube, so viewers haven't seen what you wrote, but there's a message for anyone watching right now from Christopher saying hello everyone. Can I call you Chris, or...?

Chris: Okay, I should look on YouTube then.

Host: You don't need to; you'll need to focus on answering questions, and I'll keep an eye on all the questions. So, if you're ready, we can start.

Chris: I'm ready.

Host: And you prefer Christopher, not Chris, right?

Chris: Chris is fine. It's a bit shorter.

Host: Okay. So this week we'll talk about DataOps again. Maybe it's a tradition that we talk about DataOps once per year, though we actually skipped a year because we haven't had Chris on for some time. Today we have a very special guest: Christopher is the co-founder, CEO, and Head Chef at DataKitchen, with 25 years of experience. Maybe that's outdated, because by now you probably have more, and maybe you've stopped counting. With many years of experience in analytics and software engineering, Christopher is known as the co-author of the DataOps Cookbook and the DataOps Manifesto. It's not the first time we've had Christopher on the podcast: we interviewed him two years ago, also about DataOps, and this one will be about DataOps too. We'll catch up and see what has actually changed in these two years. Welcome to the interview.

Chris: Well, thank you for having me. I'm happy to be here and talk all things related to DataOps, why bother with DataOps, and happy to talk about the company and what's changed. Excited.

Host: Let's dive in. The questions for today's interview were prepared by Johanna Berer; as always, thanks, Johanna, for your help. Before we start with our main topic for today, DataOps, let's start with your background. Can you tell us about your career journey so far? For those who haven't listened to the previous podcast, maybe you can talk about yourself, and for those who did listen, maybe give a summary of what has changed in the last two years.

Chris: Will do. My name is Chris, and I'm sort of an engineer. I spent about the first 15 years of my career in software, working on and building some AI systems and some non-AI systems, at NASA and MIT Lincoln Lab, then some startups, and then Microsoft. Around 2005 I got the data bug. My kids were small, and I thought: oh, this data thing will be easy, I'll be able to go home for dinner at 5, and life will be fine.

Host: You started your own company, right?

Chris: And it didn't work out that way. What was interesting is that, for me, the problem wasn't doing the data. We had smart people who did data science and data engineering, the act of creating things. It was the systems around the data that were hard. It was really hard not to have errors in production. I had a long drive to work and a Blackberry at the time, and I would not look at it all morning. I'd sit in the parking lot, take a deep breath, look at my Blackberry, and go: uh oh, is there going to be any problem today? If there wasn't, I'd walk in very happy; if there was, I'd have to brace myself. And then the second problem: the team I worked with just couldn't go fast enough. The customers were super demanding; they always thought things should be faster, and we were always behind. So how do you live in that world, where things are breaking left and right, you're terrified of making errors, and you just can't go fast enough?

Host: And it's the pre-Hadoop era, right? Before all this big data tech?

Chris: Yeah, before all that. We were using SQL Server, and we had smart people, so we built an engine in SQL Server that made SQL Server a columnar database — we built a columnar database inside of SQL Server in order to make certain things fast. And it wasn't bad; the principles are the same. Before Hadoop it's still a database: there are still indexes, there are still queries, things like that. At the time you would use OLAP engines; we didn't, but those reports, those models — it's not that different. We had a rack of servers instead of the cloud. What I took from that was that it's just hard to run a team of people doing data and analytics. I took it from a manager's perspective: I started to read Deming and think about the work that we do as a factory — a factory that produces insight, not automobiles. How do you run that factory so it produces things of good quality? And second, since I had come from software, I'd been very influenced by the DevOps movement: how you automate deployment, how you run in an agile way, how you change things quickly, and how you innovate. Those two things — running a really good, solid production line that has very low errors, and changing that production line very often — are kind of opposite, right? So how do you, as a manager, and how do you technically, approach that? Then, 10 years ago, we started DataKitchen. We've always been a profitable company, so we started off with some customers and started building some software, and we realized that we couldn't work any other way, and that the way we work wasn't understood by a lot of people. So we had to write a book and a manifesto to share our methods. We've been in business now a little over 10 years.

Host: That's cool. So let's talk about DataOps. You mentioned DevOps and how you were inspired by it. By the way, do you remember roughly when DevOps started to appear — when people started calling these principles, and the tools around them, DevOps?

Chris: Well, first of all, I had a boss in 1990 at NASA who had this mantra: build a little, test a little, learn a lot. That made a lot of sense. Then the Agile Software Manifesto came out in 2001, which is very similar. And the first real DevOps was a guy at Twitter who started to do automated deployment — push a button — and that was around 2009, and the first DevOps meetup, I think, was around then. So it's been about 15 years.

Host: I was trying to place it — I started my career in 2010, and my first job was as a Java developer. I remember that for some things we would just SFTP to the machine, put the jar archive there, and keep our fingers crossed that it didn't break. It was not really... I wouldn't call it deployment.

Chris: You were deploying. You had a deploy process, I'd put it that way. And that was documented too: put the jar on production, cross your fingers.

Host: I think there was a page on some internal wiki, with passwords, that described what you should do.

Chris: Yeah. And I think what's interesting is why that changed. We laugh at it now, but why didn't you invest in automating deployment, or in a whole bunch of automated regression tests that would run? Because I think in software now it would be rare that people wouldn't use CI/CD, that they wouldn't have some automated tests — functional regression tests — that would be the…

There’s been a lot of pressure to add AI to almost every digital tool and service recently, and two years into the AI hype cycle, we’re seeing two types of problems. The first is organizations that haven’t done much yet with AI because they don’t know where to start. The second is organizations that rushed into AI and failed because they didn’t know what they were doing. Both are symptoms of the same problem: not having an AI strategy and not understanding how to tactically implement AI. There’s a lot to consider around choosing the right project and putting processes and skilled talent in place, not to mention worrying about costs and return on investment. Tathagat Varma is the Global TechOps Leader at Walmart Global Tech. Tathagat is responsible for leading strategic business initiatives, enterprise agile transformation, technical learning and enablement, strategic technical initiatives, startup ecosystem engagement, and internal events across Walmart Global Tech. He also supports horizontal technical and internal innovation programs in the company. Starting as a computer scientist at DRDO, and with 27 years of overall experience, Tathagat has played significant technical and leadership roles in establishing and growing organizations including NerdWallet, ChinaSoft International, McAfee, Huawei, Network General, NetScout Systems, [24]7 Innovations Labs, and Yahoo!, and played key engineering roles at Siemens and Philips. In the episode, Richie and Tathagat explore failures in AI adoption, the role of leadership in AI adoption, AI strategy and business objective alignment, investment and timelines for AI projects, identifying starter AI projects, skills for AI success, building a culture of AI adoption, the potential of AI, and much more. Links Mentioned in the Show: Walmart Global Tech · Connect with Tathagat · Course: Data Governance Concepts · Related Episode: How Walmart Leverages Data & AI with Swati Kirti, Sr Director of Data Science at Walmart · Rewatch sessions from RADAR: AI Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.

I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.

In our chat, we covered:

Ben's career studying human-computer interaction and computer science. (0:30) 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55) 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56) 'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16) A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08) Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI systems and why [explainable] XAI matters. (30:34) Ben's upcoming book on human-centered AI. (35:55)

Resources and Links: People-Centered Internet: https://peoplecentered.net/ Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764 Partnership on AI: https://www.partnershiponai.org/ AI incident database: https://www.partnershiponai.org/aiincidentdatabase/ University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/ ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/ Ben on Twitter: https://twitter.com/benbendc

Quotes from Today’s Episode The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)  

The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)

Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)

There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)

Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)

Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley and LIME methods, among others, are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI (Explainable AI) project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, letting the user walk through the steps—like Amazon’s seven-step checkout process—where you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well in really complicated situations, walking you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
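For readers unfamiliar with the post-hoc methods Ben mentions, here is a brief, hypothetical sketch of Shapley-value explanations using the `shap` library (assumes `shap` and scikit-learn are installed; the data is synthetic). The model decides first and the explanation is computed afterwards, which is exactly the pattern whose user interfaces Ben critiques.

```python
# Post-hoc explanation sketch with Shapley values (the `shap` library).
# Synthetic data; assumes shap and scikit-learn are installed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # four hypothetical features
y = (X[:, 0] + X[:, 2] > 0).astype(int)  # synthetic approve/deny label

model = RandomForestClassifier(random_state=0).fit(X, y)

# The model has already decided; now ask which features drove one case.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # per-feature contributions for the first decision
```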

The Data Product Management In Action podcast, brought to you by Soda and executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In Season 01, Episode 003, host Michael Toland (Product Management Coach and Consultant with Pathfinder Product) talks to Panos Lazaridis (Senior Data Product Manager at The Economist). They delve into the importance of understanding organizational and user needs, data maturity and user experience, and the roles within a data product management team. About our host Michael Toland: Michael is a Product Management Coach and Consultant with Pathfinder Product, a Test Double Operation. Since 2016, Michael has worked on large-scale system modernizations and migration initiatives at Verizon. Outside his professional career, Michael serves as the Treasurer for the New Leaders Council, mentors with Venture for America, sings with the Columbus Symphony, and writes satire for his blog Dignified Product. He is excited to discuss data product management with the podcast audience. Connect with Michael on LinkedIn.

About our guest Panos Lazaridis: Panos is an agile and resilient product manager specializing in data products and platforms. He loves talking about data strategy, AI, and sustainability. Panos holds BSc, MSc, MBA, CSPO, and Prince2 qualifications. Connect with Panos on LinkedIn. All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else. Join the conversation on LinkedIn.  

Building a successful data engineering team involves more than just hiring skilled individuals—it requires fostering a culture of trust, collaboration, and continuous learning. But how do you start from scratch and create a team that not only meets technical demands but also drives business value? What key traits should you look for in your early hires, and how do you ensure your team’s projects align with the company’s goals? Liya Aizenberg is Director of Data Engineering at Away and a seasoned data leader with over 22 years of experience spearheading innovation in scalable data engineering pipelines and distribution solutions. She has built successful data teams that integrate seamlessly with various business functions, serving as invaluable organizational partners. She focuses on promoting data-driven approaches to empower organizations to make proactive decisions based on timely and organized data, shifting from reactive to proactive business strategies. Additionally, as a passionate advocate for Women in Tech, she actively contributes to fostering diversity and inclusion in the technology industry. In the episode, Adel and Liya explore the key attributes that forge an effective data engineering team, traits to look for in new hires, which technical skill sets set people up for success in a data engineering team, leveraging knowledge transfer between external experts and internal stakeholders, upskilling and career growth, aligning data engineering initiatives with business goals, measuring the ROI of data projects, working agile in data engineering, balancing innovation and practicality, future trends, and much more. Links Mentioned in the Show: Away Travel · Connect with Liya on LinkedIn · Career Track: Data Engineer with Python · Related Episode: Scaling Data Engineering in Retail with Mo Sabah, SVP of Engineering & Data at Thrive Market · Sign up to RADAR: AI Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

Keith Belanger is an OG data modeling practitioner, having been in the game for decades.

We chat about a wide range of data modeling topics.

What's changed and what's stayed the same? How to model data to fit the business's needs. Agile data modeling: when it works, when it doesn't. Data modeling for data mesh and decentralization. The art of data modeling. How to teach conceptual data modeling to new practitioners.

Keith brings a wealth of experience and a practical, no-nonsense perspective. If you're interested in data modeling, don't miss this!

LinkedIn: https://www.linkedin.com/in/krbelanger/