Activities & events
[Online] Democratizing Bayesian Modeling with Insight Agents: A Case Study
2025-06-17 · 16:00
🎙️ Speakers: Andy Heusser, Luca Fiaschi
⏰ Time: 4 PM UTC / 9 AM PT / 12 PM ET / 6 PM Berlin
Insight Agents are purpose-built AI coworkers that transform demanding analytical workflows into push-button tasks. Built on a modular blend of retrieval-augmented generation (RAG), tool calling, and sandboxed code execution, each agent automates the full statistical pipeline, from data exploration and validation to model fitting and interpretation, without requiring deep technical expertise. The session showcases our Marketing Mix Modeling (MMM) Insight Agent, which compresses weeks of Bayesian MMM work into minutes by delegating tasks to specialized sub-agents. You’ll see how this architecture delivers secure, explainable, and scalable results that let marketers focus on strategy instead of code. MMM is only the first stop: we plan to extend the same framework to prototype Insight Agents for customer lifetime value, causal impact analysis, and more. We’ll dig into the design principles, share implementation lessons, and outline the roadmap from today’s collaborative “copilots” to tomorrow’s autonomous digital coworkers that proactively surface insights and drive better business outcomes.
🔗 Connect with Andy: 👉 LinkedIn: https://www.linkedin.com/in/andrew-heusser-3b6587b1/ 👉 GitHub: https://github.com/andrewheusser
🔗 Connect with Luca: 👉 LinkedIn: https://www.linkedin.com/in/lfiaschi/
📖 Code of Conduct: Please note that participants are expected to abide by PyMC's Code of Conduct. 🔗 Connecting with PyMC Labs: 🌐 Website: https://www.pymc-labs.com/ 👥 LinkedIn: https://www.linkedin.com/company/pymc-labs/ 🐦 Twitter: https://twitter.com/pymc_labs 🎥 YouTube: https://www.youtube.com/c/PyMCLabs 🤝 Meetup: https://www.meetup.com/pymc-labs-online-meetup/
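For context on what an MMM agent automates: Bayesian MMM typically models sales as a function of media spend pushed through a carry-over (adstock) and a diminishing-returns (saturation) transform before fitting. A minimal numpy sketch of those two transforms (an illustrative assumption on our part, not the speakers’ implementation; the function and parameter names are made up):

```python
import numpy as np

def geometric_adstock(spend, decay):
    """Carry-over effect: each period keeps `decay` times the previous
    period's accumulated effect and adds the current spend."""
    out = np.empty(len(spend), dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def saturation(x, half_sat):
    """Diminishing returns: a Hill-type curve that reaches 0.5 at `half_sat`."""
    return x / (x + half_sat)

# Four weeks of spend on one channel: a burst, a pause, then a smaller buy.
spend = np.array([100.0, 0.0, 0.0, 50.0])
effect = saturation(geometric_adstock(spend, decay=0.5), half_sat=50.0)
```

In a full Bayesian MMM, `decay` and `half_sat` would be latent parameters with priors, inferred with a probabilistic programming library such as PyMC rather than fixed by hand.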
Workshop: Amplifying Women’s Voices in Data Leadership
2025-05-15 · 16:30
Step into your power at the “Amplifying Women’s Voices in Data Leadership” workshop, where influence meets impact. Brought to life through the inspiring partnership between Women in Big Data Berlin and Thoughtworks, this interactive evening is dedicated to advancing women in data-driven roles. United by our shared commitment to building a strong, inclusive community, we’re creating a space where women can connect, learn, and lead with confidence. Expect meaningful mentorship, dynamic networking, and practical insights designed to elevate your voice and your career in data leadership. Let’s amplify together. Event Details
Event Format: Begin the evening with engaging keynote speeches from influential voices in the data field, followed by hands-on workshops designed to strengthen leadership capabilities. This collaborative setting offers a space to connect, strategize, and develop key skills for navigating your data leadership path.
Agenda:
6:30 - 7:00 PM | Keynote speakers
Community Development: Gain perspective on creating value through AI innovation or building meaningful communities.
The Role of Allyship: Understand how allyship supports inclusion and drives lasting impact in leadership.
7:10 - 8:30 PM | Workshop sessions
Led by Maria Beiner - Product Manager Pricing at thermondo
Learn how adopting a data-driven perspective can refine decision-making and leadership.
Led by Katarzyna (Kasia) Stoltmann and Aliya Boranbayeva
Delve into the principles of creating AI solutions that are equitable, ethical, and impactful. Build your personal brand and amplify your presence in the data field through powerful storytelling.
Led by Amy Raygada - Principal Data and AI Strategist at Thoughtworks
Explore strategies for crafting a data vision that propels you to the heart of business leadership.
Led by Neil Metzler - Founder @ Stealth AI Startup
More about the workshops:
From Data Strategy to Seat at the Table - by Amy Raygada
Data is everywhere, yet influence is often unevenly distributed. This workshop is designed to empower women in data roles with the skills to lead with confidence and secure a strategic seat at the table. Join us for an evening of practical advice, collaborative activities, and candid conversations aimed at helping you transform data insights into influence. You’ll leave with actionable strategies to align data work with business impact, engage stakeholders effectively, and overcome challenges with resilience and clarity. Key Takeaways
**Bias-Free AI-Driven End-to-End Solutions** - by Katarzyna (Kasia) Stoltmann
**Build your personal brand and amplify your presence in the data field through powerful storytelling** - by Aliya Boranbayeva
In today’s digital world, standing out isn’t just about technical skills; it’s about how you tell your story. This interactive workshop is designed for data professionals, aspiring analysts, and tech enthusiasts who want to build a strong personal brand and communicate their value with clarity, confidence, and authenticity. We'll explore how to craft your unique narrative, showcase your expertise, and strategically position yourself on platforms like LinkedIn and beyond. You’ll leave with actionable steps to develop your personal brand, create meaningful content, and network with intention, all through the lens of impactful storytelling. Key Takeaways
Learn how adopting a data-driven perspective can refine decision-making and leadership - by Maria Beiner
In today’s fast-moving business world, data is most valuable when it drives thoughtful, strategic decisions. In this hands-on session, you’ll explore what it means to lead with data, not just analyse it. Through real-world examples from pricing and product strategy, Maria will share practical tools to help you align data with business goals, communicate insights clearly and lead with confidence in data-rich environments. Whether you're influencing stakeholders or shaping key decisions, this workshop will help you strengthen your leadership mindset and turn insight into impact. Key Takeaways
Our crisis of trust calls on allies to champion inclusion & belonging - by Neil Metzler
There is a deep, persistent crisis of trust in the workplace. Trust in organizations, leaders, and management has never been lower. This crisis underscores the challenges and opportunities for inclusion and belonging at work. Allies in particular have a critical role to play. Allies who seize the opportunity will succeed and emerge as top-performing leaders. They will prove themselves capable of managing through change, driving innovation and navigating uncertainty. Leaders who fail to invest in I&B to nurture and grow trust at work will become expendable and sidelined in their roles. Take this opportunity to amplify your voice, showcase your expertise, and step confidently into your role as a data leader. We look forward to seeing you on 15 May!
Please note: By attending this event, you consent to being photographed and/or recorded on video. These materials may be used for promotional purposes by Women in Big Data Berlin and Thoughtworks, including on social media, websites, and future event materials. If you prefer not to appear in photos or videos, please let a member of the organizing team know upon arrival.
PyData Exeter #9: April 2nd @ Innovation Hub
2025-04-02 · 17:45
Enjoy three talks from our brilliant speakers after drinks and networking. Agenda:
Talks:
Modelling human behaviour through a framework of rationality - Harry Findlay
In order to have successful human-machine teaming, sometimes referred to as collaborative AI, one assumption you can make is that the machine must internalise a model of human cognition. This is difficult as humans are incredibly adaptive, and are a unique combination of subjective preferences and physical and mental capacities. Developing a theory-driven, generative model of human cognition has provided a useful avenue to tackle this challenge. In this talk I will introduce such a theory-driven model, called computational rationality, which, put simply, assumes human behaviour can be predicted by the optimal solution to a constrained optimisation problem. I will then present how ideas from multimodal information fusion can be applied to computational rationality, and how a focus on how humans perceive and represent information could improve our models of human behaviour.
Maths Proofs in Lean - Introduction & Live Demo - Tariq Rashid
Open source Lean with Mathlib as an automated proof assistant has seen unprecedented growth, and even recognition in popular media. It is being used by world-class mathematicians, and is increasingly being taught in undergraduate mathematics courses. This talk provides an overview and a short live demonstration. The talk is aimed at newcomers, not experts, and aims to remove barriers to more people trying to write their own simple proofs.
Verifiable AI - Amy Stell
A brief look into how we go about formalising and verifying neural networks in safety-critical domains.
Sponsors: Butterfly Data, Exeter Innovation Hub, NumFOCUS
A little more about our speakers:
Harry Findlay is currently a PhD student in computer science at the University of Exeter, where he also completed his undergraduate degree.
His research direction has evolved from formalising compiler optimisation as a reinforcement learning problem, to now ultimately developing enhanced assistive systems deployed in human-machine teaming scenarios by developing models of human cognition that the machine can use to compute the optimal, human-aligned intervention.
Tariq Rashid was originally trained as a physicist, and later gained a masters in machine learning and data mining. He’s worked in technology for 25 years, including almost a decade in central government, leading on the modernisation of technology and security. Tariq is passionate about open source, developing communities, and inspiring the next generation of scientists and engineers. He led the London Python meetup, children’s CoderDojo Cornwall, the Data Science Cornwall community, as well as the London-based Algorithmic Art group, which grew to over 4,500 members. He also writes books on machine learning and creative computing. He is currently developing Digital Dynamics, a business helping organisations ensure their use of data and automation through machine learning is safe, fair and ethical.
Amy Stell is a PhD student working on verifiable AI, with a focus on ensuring neural networks are safe and secure. She also works for Code First Girls in bridging the gender gap in technology.
CODE OF CONDUCT
The PyData Code of Conduct governs this meetup ([https://pydata.org/code-of-conduct/](https://pydata.org/code-of-conduct/)). To discuss any issues or concerns relating to the code of conduct or behaviour of anyone at the PyData meetup, please contact the PyData Exeter organisers, or you can submit a report of any potential Code of Conduct violation directly to NumFOCUS (https://numfocus.typeform.com/to/ynjGdT)
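To give newcomers a taste of the kind of "simple proof" the Lean talk aims at, here is a toy Lean 4 example (ours, not the speaker's; `Nat.add_comm` is the standard library lemma):

```lean
-- Commutativity of natural-number addition, via the library lemma.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A closed arithmetic fact that holds by computation.
example : 2 + 2 = 4 := rfl
```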
PyData Leeds: Leeds Digital Festival '24
2024-09-17 · 17:00
PyData Leeds, for the second time, is coming to Leeds Digital Festival!
Date: Tuesday 17th September 2024 | Time: 18:00 | Location: Hippo Event Space, 26 Aire St, Leeds, LS1 4HT (at the back of Brew Society)
PyData is a global meet-up of data and software engineering enthusiasts who want to connect, network and learn. Leeds has a thriving and growing ecosystem of technology businesses and people. The Digital Festival is an open, collaborative celebration of digital culture in all its forms. A perfect match, right?
Agenda:
17:30: Networking and Refreshments
18:00: Welcome & Icebreaker
18:15: Speaker 1 - Suze Hawkins (Senior Data Scientist at Hippo), ‘Do you need AI, or do you need a reporting dashboard?’ Giving a realistic account of the challenges of data science delivery and how to overcome them, this talk will explain how AI and ML projects differ in their design from typical engineering projects, and practical solutions to make the most of a proof of concept. Maybe you’re a business leader wondering why you’ve still not seen a finished AI product, a delivery manager confused about why data exploration is spilling into another sprint, or even a data scientist looking for group therapy - this is an honest account of problems you can see along the way and pragmatic steps you can take to ease the pain. It also raises the question: is a complex AI product going to solve the business problem, or is there a simpler solution to try first?
18:45: James Spence (Engineering Manager at Smoothwall by Qoria), ‘Qoria: Unlocking Potential - Creating a safer online for children’ Qoria is a global technology company, dedicated to keeping children safe and well in their digital lives. This talk will explore our mission, an overview of our most impactful services and how we get value from our data to support 24 million (and counting) children across the world to be safer online.
19:30: Wrap-up & Drinks in Brew Society
If you have been before, we look forward to seeing you again, and if you're coming along for the first time, we're excited to meet you and for you to join the Leeds PyData community. Connect with us on Meetup, Discord or Twitter. PyData Leeds is a strictly professional event; as such, professional behaviour is expected. PyData Leeds is a chapter of PyData, an educational program of NumFOCUS, and thus abides by the NumFOCUS Code of Conduct - https://pydata.org/code-of-conduct.html
145 - Data Product Success: Adopting a Customer-Centric Approach With Malcolm Hawker, Head of Data Management at Profisee
2024-06-11 · 10:00
Brian T. O’Neill – host; Malcolm Hawker – guest
Wait, I’m talking to a head of data management at a tech company? Why!? Well, today I'm joined by Malcolm Hawker to get his perspective around data products and what he’s seeing out in the wild as Head of Data Management at Profisee. Why Malcolm? Malcolm was a former head of product in prior roles, and for several years, I’ve enjoyed Malcolm’s musings on LinkedIn about the value of a product-oriented approach to ML and analytics. We had a chance to meet at CDOIQ in 2023 as well and he went on my “need to do an episode” list! According to Malcolm, empathy is the secret to addressing key UX questions that ensure adoption and business value. He also emphasizes the need for data experts to develop business skills so that they're seen as equals by their customers. During our chat, Malcolm stresses the benefits of a product- and customer-centric approach to data products and what data professionals can learn from approaching problem solving with a product orientation. Highlights / Skip to: Malcolm’s definition of a data product (2:10) Understanding your customers’ needs is the first step toward quantifying the benefits of your data product (6:34) How product makers can gain access to users to build more successful products (11:36) Answering the UX question to get past the adoption stage and provide business value (16:03) Data experts must develop business expertise if they want to be seen as equals by potential customers (20:07) What people really mean by “data culture” (23:02) Malcolm’s data product journey and his changing perspective (32:05) Using empathy to provide a better UX in design and data (39:24) Avoiding the death of data science by becoming more product-driven (46:23) Where the majority of data professionals currently land on their view of product management for data products (48:15) Quotes from Today’s Episode “My definition of a data product is something that is built by a data and analytics team that solves a specific customer problem that the customer would
otherwise be willing to pay for. That’s it.” - Malcolm Hawker (3:42) “You need to observe how your customer uses data to make better decisions, optimize a business process, or to mitigate business risk. You need to know how your customers operate at a very, very intimate level, arguably, as well as they know how their business processes operate.” - Malcolm Hawker (7:36) “So, be a problem solver. Be collaborative. Be somebody who is eager to help make your customers’ lives easier. You hear "no" when people think that you’re a burden. You start to hear more “yeses” when people think that you are actually invested in helping make their lives easier.” - Malcolm Hawker (12:42) “We [data professionals] put data on a pedestal. We develop this mindset that the data matters more—as much or maybe even more than the business processes, and that is not true. We would not exist if it were not for the business. Hard stop.” - Malcolm Hawker (17:07) “I hate to say it, I think a lot of this data stuff should kind of feel invisible in that way, too. It’s like this invisible ally that you’re not thinking about the dashboard; you just access the information as part of your natural workflow when you need insights on making a decision, or a status check that you’re on track with whatever your goal was. You’re not really going out of mode.” - Brian O’Neill (24:59) “But you know, data people are basically librarians. We want to put things into classifications that are logical and work forwards and backwards, right? And in the product world, sometimes they just don’t, where you can have something be a product and be a material to a subsequent product.” - Malcolm Hawker (37:57) “So, the broader point here is just more of a mindset shift. 
And you know, maybe these things aren’t necessarily a bad thing, but how do we become a little more product- and customer-driven so that we avoid situations where everybody thinks what we’re doing is a time waster?” - Malcolm Hawker (48:00) Links: Profisee: https://profisee.com/ LinkedIn: https://www.linkedin.com/in/malhawker/ CDO Matters: https://profisee.com/cdo-matters-live-with-malcolm-hawker/
Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design)
Unlock the Power of Complex Problem Solving
2024-05-30 · 14:00
Essential Skills for Today's Business Challenges
Join us for an insightful meetup where we delve into the art and science of complex problem solving within a business environment. As recognized by the World Economic Forum, these skills are not only in high demand but are pivotal for the future of work.
Why Complex Problem Solving?
In the fast-evolving business landscape, the ability to tackle complex challenges effectively is more crucial than ever. According to the World Economic Forum, complex problem solving is consistently ranked among the top skills needed to thrive in the modern workplace. Mastering problem-solving skills is essential for any professional looking to lead and succeed in the face of uncertainty.
The Challenge within Organizations
Problem solving within organizations is particularly challenging due to a variety of factors:
Key Topics We Will Cover
What You Will Gain
Prepare to transform your approach to problem-solving and decision-making. Whether you're a seasoned business leader or an aspiring professional, these skills are key to helping you advance your career and contribute effectively to your organization's success. Reserve your spot today and become a catalyst for transformation in your organization!
PyData Cluj-Napoca: Meetup #18
2024-04-04 · 15:00
🌷🌼🌸🌻🌿🌺🌱🌞🌈🦋🐝🍃🌞🌷🌼🌸🌻🌿🌺🌱🌞🌈🦋🐝🍃🌞🌷🌼🌸🌻
🌱🐍 Exciting News! PyData Cluj-Napoca 18th Spring Edition 🐍🌱
🌷🌼🌸🌻🌿🌺🌱🌞🌈🦋🐝🍃🌞🌷🌼🌸🌻🌿🌺🌱🌞🌈🦋🐝🍃🌞🌷🌼🌸🌻
We're thrilled to announce that the PyData Cluj-Napoca meetup is back with its 18th edition! After a brief hiatus, we're reviving our tech sessions, just like old times. As always, we have two engaging presentations lined up for you. Join us for a social and informative evening – it's fantastic to be back!
----------------------------------------------------------------------------------------------------
"Improving the SQL code quality with SQLFluff rules" by Cristina Bocan
Nowadays the quantity of data processed daily has increased significantly compared to 15 years ago. Data engineers, data scientists, and machine learning engineers need to write complex SQL scripts to transform data the way the business requires. In this process, a significant amount of time goes into code review. This is where SQLFluff comes in as a helpful tool for reducing the time spent on code review! SQLFluff is a code analyzer that checks for programmatic errors, stylistic errors or any other kind of errors in a SQL script, and it can automatically fix certain error types, allowing developers to focus on SQL development. It is implemented in Python and contains multiple sets of rules that are verified against SQL scripts. Beyond the built-in rules, it gives developers the possibility of implementing custom rules required by their business context. In this presentation I am going to present a particular business context in which creating a new rule was necessary and how I implemented this custom rule. Besides that, I will show how SQLFluff interacts with the SQL code and fixes it.
----------------------------------------------------------------------------------------------------
"Interactive data science for biotech: a case study on Alzheimer's research with R Shiny" by Oana Florean
Through this presentation I want to highlight the role of interactive data science in biotech research. Accessible data, real-time analysis, dynamic visualizations and collaborative work are what biotech researchers need. Recently, we had the opportunity to contribute to Alzheimer's research by developing a platform for biomarker data exploration and analysis. As R is widely used in biotech, we embraced its versatility and developed in Shiny, an R library which makes it possible to build data science apps without much web development knowledge. We made sophisticated statistical analysis accessible to researchers, regardless of their coding proficiency. Join me to learn how nicely an interactive dashboard can be created using Shiny straight from R, and hear about its Python equivalent.
----------------------------------------------------------------------------------------------------
NumFOCUS Code of Conduct https://numfocus.org/code-of-conduct
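As a rough sketch of the custom-rule idea from the SQLFluff talk: a lint rule is essentially a check that maps SQL text to a list of violations. The toy rule below flags lowercase keywords using plain string matching (our simplification; real SQLFluff rules subclass its `BaseRule` class and walk a parse tree rather than scanning raw text):

```python
import re

# Keywords our toy rule wants to see upper-cased.
KEYWORDS = ("select", "from", "where", "join")

def rule_keywords_uppercase(sql: str):
    """Toy lint rule: report (line_number, message) for each lowercase keyword."""
    violations = []
    for lineno, line in enumerate(sql.splitlines(), start=1):
        for kw in KEYWORDS:
            if re.search(rf"\b{kw}\b", line):  # case-sensitive: only lowercase matches
                violations.append((lineno, f"keyword '{kw}' should be upper-case"))
    return violations

violations = rule_keywords_uppercase("select a\nFROM tbl\nwhere a > 1")
```

In practice one would package a custom rule as a SQLFluff plugin and run it through the `sqlfluff lint` and `sqlfluff fix` commands rather than rolling a scanner like this.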
Collaborative Data Science in Business - Ioannis Mesionis
2023-10-27 · 17:00
Ioannis Mesionis – guest
Links: LinkedIn: https://www.linkedin.com/in/ioannis-mesionis/ Free ML Engineering course: http://mlzoomcamp.com Join DataTalks.Club: https://datatalks.club/slack.html Our events: https://datatalks.club/events.html
DataTalks.Club
Collaborative Data Science in Business
2023-10-02 · 10:30
The art and science of making data work for businesses - Ioannis Mesionis Outline:
About the speaker: Ioannis possesses over 4 years of experience as an accomplished data scientist and trusted leader in easyJet's Data Science and Analytics team. Since joining easyJet in 2019, he has risen to the role of Lead Data Scientist, committed to supporting easyJet's ambition of becoming the world's leading data-driven airline. In his current position, Ioannis works cross-functionally with Digital, Customer, and Marketing to produce robust data products and solve business problems while leading easyJet's MLOps team to efficiently operationalize, scale, and govern AI solutions enterprise-wide. DataTalks.Club is the place to talk about data. Join our Slack community!
Chief Analytics Officer at Mode, Benn Stancil on Leadership, Analytics, and Gratitude {Replay}
2023-05-31 · 10:00
Benn Stancil – Field CTO @ ThoughtSpot; Al Martin – WW VP Technical Sales @ IBM
Send us a text. Want to be featured as a guest on Making Data Simple? Reach out to us at [[email protected]] and tell us why you should be next.
Abstract
Making Data Simple Podcast is hosted by Al Martin, VP, IBM Expert Services Delivery, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. This week on Making Data Simple, we have Benn Stancil, Chief Analytics Officer + Founder @ Mode. Benn is an accomplished data analyst with deep expertise in collaborative Business Intelligence and Interactive Data Science. Benn is Co-founder, President, and Chief Analytics Officer of Mode, an award-winning SaaS company that combines the best elements of Business Intelligence (ABI), Data Science (DS) and Machine Learning (ML) to empower data teams to answer impactful questions and collaborate on analysis across a range of business functions. Under Benn’s leadership, the Mode platform has evolved to enable data teams to explore, visualize, analyze and share data in a powerful end-to-end workflow. Prior to founding Mode, Benn served in senior Analytics positions at Microsoft and Yammer, and worked as a researcher for the International Economics Program at the Carnegie Endowment for International Peace. Benn also served as an Undergraduate Research Fellow at Wake Forest University, where he received his B.S. in Mathematics and Economics. Benn believes in fostering a shared sense of humility and gratitude.
Show Notes
1:22 – Benn’s history
7:09 – Tell us how you got to where you are today
9:14 – Tell us about Mode
12:08 – What is your definition of the Chief Analytics Officer?
21:53 – Why do we need another BI tool?
24:09 – What’s your secret sauce?
27:48 – Where did the name Mode come from?
28:41 – How do we use Mode?
31:08 – What is your go-to-market strategy?
32:38 – Any client references?
34:58 – “The missing piece in the modern data stack”: tell us about this
Mode Email: [email protected] [email protected]
Twitter: benn stancil
Connect with the Team: Producer Kate Brown - LinkedIn. Host Al Martin - LinkedIn and Twitter.
Making Data Simple
Valliappa Lakshmanan – author
Learn how easy it is to apply sophisticated statistical and machine learning methods to real-world problems when you build using Google Cloud Platform (GCP). This hands-on guide shows data engineers and data scientists how to implement an end-to-end data pipeline with cloud-native tools on GCP. Throughout this updated second edition, you'll work through a sample business decision by employing a variety of data science approaches. Follow along by building a data pipeline in your own project on GCP, and discover how to solve data science problems in a transformative and more collaborative way. You'll learn how to:
- Employ best practices in building highly scalable data and ML pipelines on Google Cloud
- Automate and schedule data ingest using Cloud Run
- Create and populate a dashboard in Data Studio
- Build a real-time analytics pipeline using Pub/Sub, Dataflow, and BigQuery
- Conduct interactive data exploration with BigQuery
- Create a Bayesian model with Spark on Cloud Dataproc
- Forecast time series and do anomaly detection with BigQuery ML
- Aggregate within time windows with Dataflow
- Train explainable machine learning models with Vertex AI
- Operationalize ML with Vertex AI Pipelines
O'Reilly Data Science Books
087 - How Data Product Management and UX Integrate with Data Scientists at Albertsons Companies to Improve the Grocery Shopping Experience
2022-03-22 · 04:30
Brian T. O’Neill – host; Danielle Crop – Chief Data Officer @ Albertsons Companies
For Danielle Crop, the Chief Data Officer of Albertsons, to draw distinctions between “digital” and “data” only limits the ability of an organization to create useful products. One of the reasons I asked Danielle on the show is due to her background as a CDO and former SVP of digital at AMEX, where she also managed product and design groups. My theory is that data leaders who have been exposed to the worlds of software product and UX design are prone to approach their data product work differently, and so that’s what we dug into in this episode. It didn’t take long for Danielle to share how she pushes her data science team to collaborate with business product managers for a “cross-functional, collaborative” end result. This also means getting the team to understand what their models are personalizing, and how customers experience the data products they use. In short, for her, it is about getting the data team to focus on “outcomes” vs “outputs.” Scaling some of the data science and ML modeling work at Albertsons is a big challenge, and we talked about one of the big use cases she is trying to enable for customers, as well as one “real-life” non-digital experience that her team’s data science efforts are behind. The big takeaway for me here was hearing how a CDO like Danielle is really putting customer experience and the company’s brand at the center of their data product work, as opposed to solely focusing on ML model development, dashboard/BI creation, and seeing data as a raw ingredient that lives in a vacuum isolated from people. In this episode, we cover: Danielle’s take on the “D” in CDO: is the distinction between “digital” and “data” even relevant, especially for a food and drug retailer? (01:25) The role of data product management and design in her org and how UX (i.e. shopper experience) is influenced by and considered in her team’s data science work (06:05) How Danielle’s team thinks about “customers” particularly in the context of internal stakeholders vs.
grocery shoppers (10:20) Danielle’s current and future plans for bringing her data team into stores to better understand shoppers and customers (11:11) How Danielle’s data team works with the digital shopper experience team (12:02) “Outputs” versus “Outcomes” for product managers, data science teams, and data products (16:30) Building customer loyalty, in-store personalization, and long term brand interaction with data science at Albertsons (20:40) How Danielle and her team at Albertsons measure the success of their data products (24:04) Finding the problems, building the solutions, and connecting the data to the non-technical side of the company (29:11) Quotes from Today’s Episode “Data always comes from somewhere, right? It always has a source. And in our modern world, most of that source is some sort of digital software. So, to distinguish your data from its source is not very smart as a data scientist. You need to understand your data very well, where it came from, how it was developed, and software is a massive source of data. [As a CDO], I think it’s not important to distinguish between [data and digital]. It is important to distinguish between roles and responsibilities, you need different skills for these different areas, but to create an artificial silo between them doesn’t make a whole lot of sense to me.” - Danielle (03:00) “Product managers need to understand what the customer wants, what the business needs, how to pass that along to data scientists; and data scientists need to understand how that’s affecting business outcomes. That’s how I see this all working. And it depends on what type of models they’re customizing and building, right? Are they building personalization models that are going to be a digital asset? Are they building automation models that will go directly to some sort of operational activity in the store?
What are they trying to solve?” - Danielle (06:30) “In a company that sells products—groceries—to individuals, personalization is a huge opportunity. How do we make that experience, both in-digital and in-store, more relevant to the customer, more sticky and build loyalty with those customers? That’s the core problem, but underneath that is you got to build a lot of models that help personalize that experience. When you start talking about building a lot of different models, you need scale.” - Danielle (9:24) “[Customer interaction in the store] is a true big data problem, right, because you need to use the WiFi devices, etc., that you have in store that are pinging the devices at all times, and it’s a massive amount of data. Trying to weed through that and find the important signals that help us to actually drive that type of personalized experience is challenging. No one’s gotten there yet. I hope that we’ll be the first.” - Danielle (19:50) “I can imagine a checkout clerk who doesn’t want to talk to the customer, despite a data-driven suggestion appearing on the clerk’s monitor as to how to personalize a given customer interaction. The recommendation suggested to the clerk may be ‘accurate’ from a data science point of view, but if the clerk doesn’t actually act on it, then the data product didn’t provide any value. When I train people in my seminar, I try to get them thinking about that last mile. It may not be data science work, and maybe you have a big enough org where that clerk/customer experience is someone else’s responsibility, but being aware that this is a fault point and having a cross-team perspective is key.” - Brian @rhythmspice (24:50) “We’re going through a moment in time in which trust in data is shaky. What I’d like people to understand and know on a broader philosophical level, is that in order to be able to understand data and use it to make decisions, you have to know its source. You have to understand its source.
You have to understand the incentives around that source of data… You have to look at the data from the perspective of what it means and what the incentives were for creating it, and then analyze it, and then give an output. And fortunately, most statisticians, most data scientists, most people in most fields that I know, are incredibly motivated to be ethical and accurate in the information that they’re putting out.” - Danielle (34:15) |
Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design) |
|
Al and Benn Stancil discuss Mode, data, leadership and analytics
2021-12-22 · 11:00
Benn Stancil
– Field CTO
@ ThoughtSpot
,
Al Martin
– WW VP Technical Sales
@ IBM
Want to be featured as a guest on Making Data Simple? Reach out to us at [[email protected]] and tell us why you should be next. Abstract Making Data Simple Podcast is hosted by Al Martin, VP, IBM Expert Services Delivery, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. This week on Making Data Simple, we have Benn Stancil, Chief Analytics Officer + Founder @ Mode. Benn is an accomplished data analyst with deep expertise in collaborative Business Intelligence and Interactive Data Science. Benn is Co-founder, President, and Chief Analytics Officer of Mode, an award-winning SaaS company that combines the best elements of Business Intelligence (BI), Data Science (DS), and Machine Learning (ML) to empower data teams to answer impactful questions and collaborate on analysis across a range of business functions. Under Benn’s leadership, the Mode platform has evolved to enable data teams to explore, visualize, analyze and share data in a powerful end-to-end workflow. Prior to founding Mode, Benn served in senior Analytics positions at Microsoft and Yammer, and worked as a researcher for the International Economics Program at the Carnegie Endowment for International Peace. Benn also served as an Undergraduate Research Fellow at Wake Forest University, where he received his B.S. in Mathematics and Economics. Benn believes in fostering a shared sense of humility and gratitude. Show Notes 1:22 – Benn’s history 7:09 – Tell us how you got to where you are today 9:14 – Tell us about Mode 12:08 – What is your definition of the Chief Analytics Officer? 21:53 – Why do we need another BI tool? 24:09 – What’s your secret sauce? 27:48 – Where did the name Mode come from? 28:41 – How do we use Mode? 31:08 – What is your go-to-market strategy? 32:38 – Any client references? 
34:58 – “The missing piece in the modern data stack” tell us about this Mode Email: [email protected] [email protected] Twitter: benn stancil Connect with the Team Producer Kate Brown - LinkedIn. Producer Steve Templeton - LinkedIn. Host Al Martin - LinkedIn and Twitter. Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. |
Making Data Simple |
|
The Benefits And Challenges Of Building A Data Trust
2020-02-03 · 20:00
Summary Every business collects data in some fashion, but sometimes the true value of the collected information only comes when it is combined with other data sources. Data trusts are a legal framework for allowing businesses to collaboratively pool their data. This allows the members of the trust to increase the value of their individual repositories and gain new insights which would otherwise require substantial effort in duplicating the data owned by their peers. In this episode Tom Plagge and Greg Mundy explain how the BrightHive platform serves to establish and maintain data trusts, the technical and organizational challenges they face, and the outcomes that they have witnessed. If you are curious about data sharing strategies or data collaboratives, then listen now to learn more! Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show! You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. 
We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference in NYC, Strata Data in San Jose, and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today. Your host is Tobias Macey and today I’m interviewing Tom Plagge and Gregory Mundy about BrightHive, a platform for building data trusts Interview Introduction How did you get involved in the area of data management? Can you start by describing what a data trust is? Why might an organization want to build one? What is BrightHive and what is its origin story? Beyond having a storage location with access controls, what are the components of a data trust that are necessary for them to be viable? What are some of the challenges that are common in establishing an agreement among organizations who are participating in a data trust? What are the responsibilities of each of the participants in a data trust? For an individual or organization who wants to participate in an existing trust, what is involved in gaining access? How does BrightHive support the process of building a data trust? How is ownership of derivative data sets/data products and associated intellectual property handled in the context of a trust? How is the technical architecture of BrightHive implemented and how has it evolved since it first started? What are some of the ways that you approach the challenge of data privacy in these sharing agreements? What are some legal and technical guards that you implement to encourage ethical uses of the data contained in a trust? What is the motivation for releasing the technical elements of BrightHive as open source? What are some of the most interesting, innovative, or inspirational ways that you have seen BrightHive used? 
Being a shared platform for empowering other organizations to collaborate, I imagine there is a strong focus on long-term sustainability. How are you approaching that problem and what is the business model for BrightHive? What have you found to be the most interesting/unexpected/challenging aspects of building and growing the technical and business infrastructure of BrightHive? What do you have planned for the future of BrightHive? Contact Info Tom LinkedIn tplagge on GitHub Gregory LinkedIn gregmundy on GitHub @graygoree on Twitter Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat Links BrightHive Data Science For Social Good Workforce Data Initiative NASA NOAA Data Trust Data Collaborative Public Benefit Corporation Terraform Airflow Podcast.__init__ Episode Dagster Podcast Episode Secure Multi-Party Computation Public Key Encryption AWS Macie Blockchain Smart Contracts The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Support Data Engineering Podcast |
Data Engineering Podcast |
|
Understanding #BigData for #BigCities with Maksim (@MrMaksimize @CityofSanDiego)
2018-10-04 · 15:00
Maksim Pecherskiy
– CDO
@ City of San Diego
In this podcast, Maksim, CDO @ City of San Diego, discussed the nuances of running big data for big cities. He shared his perspectives on effectively building a central data office in a complex and extremely collaborative environment like a big city, offered some ways to effectively prioritize which projects to pursue, and explained how leadership and execution can blend to solve civic issues in both big and small cities. A great practitioner podcast for folks seeking to build a robust data science practice across a large and collaborative ecosystem. Timeline: 0:28 Maksim's journey. 6:45 Maksim's current role. 11:46 Collaboration process in creating a data inventory. 14:52 Working with the bureaucracy. 18:35 Dealing with unforeseen circumstances at work. 20:22 Prioritization at work. 22:58 Qualities of a good data leader. 26:15 Collaboration with other cities. 27:40 Cool data projects in other cities. 30:55 Shortcomings of other city representatives. 36:54 Use cases in AI. 39:00 What would Maksim change about himself? 40:50 Future cities and data. 43:55 Opportunities for private investors in the public sector. 45:53 Maksim's success mantra. 50:19 Closing remark. Maksim's Book Recommendation: The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win by Gene Kim, Kevin Behr, George Spafford amzn.to/2MAu5Xv Podcast Link: https://futureofdata.org/understanding-bigdata-for-bigcities-with-maksim-mrmaksimize-cityofsandiego-futureofdata-podcast/ Maksim's BIO: Maksim Pecherskiy: As the CDO for the City of San Diego, working in the Performance & Analytics Department, Maksim strives to bring the necessary components together to allow the City's residents to benefit from a more efficient, agile government that is as innovative as the community around it. He has been solving complex problems with technology for nearly a decade. He spent 2014 working as a Code For America fellow in Puerto Rico, focusing on economic development. 
His team delivered a product called PrimerPeso that provides business owners and residents a tool to search, and apply for, government programs for which they may be eligible. Before moving to California, Maksim was a Solutions Architect at Promet Source in Chicago, where he built large web applications and designed complex integrations. He shaped workflow, configuration management, and continuous integration processes while leading and training international development teams. Before his work at Promet, he was a software engineer at AllPlayers, where he was instrumental in the design and architecture of its APIs and the development and documentation of supporting client libraries in various languages. Maksim graduated from DePaul University with a bachelor of science degree in information systems and from Linköping University, Sweden, with a bachelor of science degree in international business. He is also certified as a Lean Six Sigma Green Belt. About #Podcast: FutureOfData podcast is a conversation starter to bring leaders, influencers and lead practitioners to come on the show and discuss their journey in creating the data-driven future. Wanna Join? If you or anyone you know wants to join in, Register your interest by mailing us @ [email protected] Want to sponsor? Email us @ [email protected] Keywords: FutureOfData, DataAnalytics, Leadership, Futurist, Podcast, BigData, Strategy |
|
|
Understanding #FutureOfData in #Health & #Medicine
2018-06-28 · 15:00
Aaron Black
– Chief Data Officer
@ Inova Translational Medicine Institute
In this podcast, Aaron Black from Inova Translational Medicine Institute talks about his journey in creating and leading a data science practice in healthcare. He shares some of the best practices, opportunities, and challenges concerning team dynamics, process orientation, and leadership relationship building. He also shares some of the best practices and opportunities that reside in HR data, along with some tactical steps to help build a better data-driven team to execute data-driven strategies. This podcast is great for folks looking to explore the depth of HR data and opportunities in the health and medicine domain. Timeline: 0:28 Aaron's journey. 8:16 Defining translational medicine. 11:47 Defining precision medicine. 12:47 Data sharing between pharma companies. 15:03 Defining biobanking. 18:50 Data and healthcare industry. 22:20 Best practices in creating a healthcare database. 25:46 Tackling data regulations. 30:17 Best practices in creating data literacy in employees. 33:27 The culture of data scientists in the healthcare space. 36:09 Challenges that a data science leader faces in the healthcare space. 39:25 Opportunities in health data space. 42:19 Ingredients of a good data science leader in the healthcare space. 44:38 Tips for data science leaders in the healthcare space. 47:00 Putting together a data team in the healthcare space. 50:22 Aaron's success tips. 52:49 Aaron's reading list. 55:25 Closing remark. Podcast link: https://futureofdata.org/understanding-futureofdata-in-health-medicine-thedataguru-inovahealth-futureofdata/ Aaron's Book Recommendations: Smartcuts: The Breakthrough Power of Lateral Thinking by Shane Snow amzn.to/2rH9xzJ When: The Scientific Secrets of Perfect Timing by Daniel H. Pink amzn.to/2rElebc Aaron's BIO: Aaron Black, Chief Data Officer at the Inova Translational Medicine Institute. Healthcare Information Technology Executive and Data Evangelist. 
A results-driven technical leader with a 20+ year record of successful project and program implementations. Visionary, collaborative, and able to devise creative solutions and culture for complex business challenges. Key thought leader, international speaker, team builder, and data architect in building advanced and one-of-a-kind technical and data infrastructure to support precision medicine initiatives in large and cutting-edge health care institutions. A featured speaker and panelist at National Conferences and Councils including TEDx Tysons, NIH, Amazon ReInvent, Precision Medicine World Conference, Labroots, HIMSS, and an invited speaker at the National Research Council’s Standing Committee on Biological and Physical Sciences in Space (CBPSS). Experience in start-up and new team development. Proven change-agent in diverse organizations and politically charged environments. A catalyst to create vision, motivation, and results across an entire enterprise. Creative thinker; organized, resolute, and able to direct multiple competing priorities with great precision while meeting strict deadlines and budget requirements. Strong healthcare and research industry knowledge, particularly in Life Sciences, with expertise in developing, implementing, and supporting large data enterprise architectures. Excellent interpersonal skills, work effectively with individuals of diverse backgrounds, and inspire teams to work to their fullest potential. About #Podcast: FutureOfData podcast is a conversation starter to bring leaders, influencers, and lead practitioners to discuss their journey to create the data-driven future. Wanna Join? If you or anyone you know wants to join in, Register your interest @ play.analyticsweek.com/guest/ Want to sponsor? Email us @ [email protected] Keywords: FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy futureofdata leadership data in hr hr data hris big data |
|
|
Package Management And Distribution For Your Data Using Quilt with Kevin Moore - Episode 37
2018-06-25 · 02:00
Kevin Moore
– CEO and founder
@ Quilt Data
,
Tobias Macey
– host
Summary Collaboration, distribution, and installation of software projects are largely a solved problem, but the same cannot be said of data. Every data team has a bespoke means of sharing data sets, versioning them, tracking related metadata and changes, and publishing them for use in the software systems that rely on them. The CEO and founder of Quilt Data, Kevin Moore, was sufficiently frustrated by this problem to create a platform that attempts to be the means by which data can be as collaborative and easy to work with as GitHub and your favorite programming language. In this episode he explains how the project came to be, how it works, and the many ways that you can start using it today. Preamble Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. 
After that learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Your host is Tobias Macey and today I’m interviewing Kevin Moore about Quilt Data, a platform and tooling for packaging, distributing, and versioning data Interview Introduction How did you get involved in the area of data management? What is the intended use case for Quilt and how did the project get started? Can you step through a typical workflow of someone using Quilt? How does that change as you go from a single user to a team of data engineers and data scientists? Can you describe the elements of what a data package consists of? What was your criteria for the file formats that you chose? How is Quilt architected and what have been the most significant changes or evolutions since you first started? How is the data registry implemented? What are the limitations or edge cases that you have run into? What optimizations have you made to accelerate synchronization of the data to and from the repository? What are the limitations in terms of data volume, format, or usage? What is your goal with the business that you have built around the project? What are your plans for the future of Quilt? Contact Info Email LinkedIn Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? 
Links Quilt Data GitHub Jobs Reproducible Data Dependencies in Jupyter Reproducible Machine Learning with Jupyter and Quilt Allen Institute: Programmatic Data Access with Quilt Quilt Example: MissingNo Oracle Pandas Jupyter Ycombinator Data.World Podcast Episode with CTO Bryon Jacob Kaggle Parquet HDF5 Arrow PySpark Excel Scala Binder Merkle Tree Allen Institute for Cell Science Flask PostGreSQL Docker Airflow Quilt Teams Hive Hive Metastore PrestoDB Podcast Episode Netflix Iceberg Kubernetes Helm The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA Support Data Engineering Podcast |
Data Engineering Podcast |
|
#FutureOfData with Robin Thottungal, Chief Data Scientist at EPA
2017-07-13 · 02:45
Robin Thottungal
– Chief Data Scientist
@ EPA
In this podcast, Robin discussed how an analytics organization functions in a collaborative culture. He shed some light on preparing a robust framework while working in a policy-rich setup. This talk is a must for anyone building an analytics organization in a culture-rich or policy-rich environment. Timeline: 0:29 Robin's journey. 6:02 Challenges in working as a chief data scientist. 9:50 Two breeds of data scientists. 13:38 Introducing data science into large companies. 16:57 Creating a center of excellence with data. 19:52 Challenges in working with a government agency. 22:57 Creating a self-serving system. 26:29 Defining chief data officer, chief analytics officer, chief data scientist. 28:28 Designing an architecture for a rapidly changing company culture. 31:39 Future of analytics and data leaders. 35:47 Art of doing business and science of doing business. 42:26 Perfect data science hire. 45:08 Closing remarks. Podcast link: https://futureofdata.org/futureofdata-with-robin-thottungal-chief-data-scientist-at-epa/ Here's Robin's bio on his current EPA Role: - Leading the Data Analytics effort of a 15,000+ member agency through providing strategic vision, program development, evangelizing the value of data-driven decision making, bringing a lean-startup approach to the public sector & building an advanced data analytics platform capable of real-time/batch analysis. -Serving as Chief Data Scientist for the agency, including directing, coordinating, and overseeing the division’s leadership of EPA’s multimedia data analytics, visualization, and predictive analysis work along with related tools, application development, and services. -Develop and oversee the implementation of Agency policy on integration analysis of environmental data, including multimedia analysis and assessments of environmental quality, status, and trends. 
-Develop, market, and implement tactical and strategic plans for the Agency’s data management, advanced data analytics, and predictive analysis work. -Lead cross-federal, state, tribal, and local government data partnerships as well as information partnerships with other entities. About #Podcast: FutureOfData podcast is a conversation starter to bring leaders, influencers, and lead practitioners to discuss their journey to create the data-driven future. Wanna Join? If you or anyone you know wants to join in, Register your interest @ http://play.analyticsweek.com/guest/ Want to sponsor? Email us @ [email protected] Keywords: FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy |
The Future of Data Podcast | conversation with leaders, influencers, and change makers in the World of Data & Analytics |
|
Breaking Data Science Open
2017-05-15
Over the past decade, data science has come out of the back office to become a force of change across the entire organization. At the forefront of this change is the open data science movement that advocates the use of open source tools in a powerful, connected ecosystem. This report explores how open data science can help your organization break free from the shackles of proprietary tools, embrace a more open and collaborative work style, and unleash new intelligent applications quickly. Authors Michele Chambers and Christine Doig explain how open source tools have helped bring about many facets of the data science evolution, including collaboration, self-service, and deployment. But you’ll discover that open data science is about more than tools; it’s about a new way of working as an organization. Learn how data science—particularly open data science—has become part of everyday business Understand how open data science engages people from other disciplines, not just statisticians Examine tools and practices that enable data science to be open across technical, operational, and organizational aspects Learn benefits of open data science, including rich resources, agility, transparency, and collective intelligence Explore case studies that demonstrate different ways to implement open data science Discover how open data science can help you break down department barriers and make bold market moves Michele Chambers, Chief Marketing Officer and VP Products at Continuum Analytics, is an entrepreneurial executive with over 25 years of industry experience. Prior to Continuum Analytics, Michele held executive leadership roles at several database and analytic companies, including Netezza, IBM, Revolution Analytics, MemSQL, and RapidMiner. Christine Doig is a senior data scientist at Continuum Analytics, where she's worked on several projects, including MEMEX, a DARPA-funded open data science project to help stop human trafficking. 
She has 5+ years of experience in analytics, operations research, and machine learning in a variety of industries. |
O'Reilly Data Science Books
|
|
Scott Zoldi, CAO FICO
2016-12-20 · 04:58
Scott Zoldi
– Chief Analytics Officer
@ FICO
,
Vishal Kumar
– CEO
@ AnalyticsWeek
In this session, Scott Zoldi, Chief Analytics Officer, FICO, sat with Vishal Kumar, CEO, AnalyticsWeek, and shared his journey as an analytics executive, best practices and hacks for upcoming executives, and the challenges/opportunities he's observing as a Chief Analytics Officer. Scott discussed creating a data-driven culture and what leaders could do to get buy-in for building strong data science capabilities. Scott discussed his passion for security analytics. He shared some best practices to put up a Cyber Security Center of Excellence. Scott also shared what traits future leaders should have. Timeline: 0:29 Scott's journey. 5:10 On Falcon's fraud manager. 9:12 Areas in security where AI works. 11:40 FICO's dealing with new products. 15:30 Center of excellence for cyber security. 22:00 Should a center of excellence be inside out or in partnership? 28:22 The CAO role in FICO. 31:14 Is FICO inward facing or outward facing? 32:12 Being analytical in a gut-based organization. 35:54 Art of doing business and science of doing business. 38:22 Challenges as CAO in FICO. 41:09 Opportunity for data science in the security space. 45:54 Qualities required for a CAO. 48:54 Tips for a data scientist to get hired at FICO. Podcast link: https://futureofdata.org/analyticsweek-leadership-podcast-with-scott-zoldi-cao-fico/ Here's Scott Zoldi's Bio: Scott Zoldi is Chief Analytics Officer at FICO, responsible for the analytic development of FICO’s product and technology solutions, including the FICO™ Falcon® Fraud Manager product, which protects about two-thirds of the world’s payment card transactions from fraud. While at FICO, Scott has been responsible for authoring 72 analytic patents, with 36 granted and 36 in process. Scott is actively involved in developing new analytic products and Big Data analytics applications, many of which leverage new streaming artificial intelligence innovations such as adaptive analytics, collaborative profiling, and self-learning models. 
Scott is most recently focused on the applications of streaming self-learning analytics for real-time detection of cyber security attacks and money laundering. Scott serves on two boards of directors, including Software San Diego and Cyber Center of Excellence. Scott received his Ph.D. in theoretical physics from Duke University. Follow @scottzoldi. The podcast is sponsored by TAO.ai (https://tao.ai), an Artificial Intelligence Driven Career Coach |
The Future of Data Podcast | conversation with leaders, influencers, and change makers in the World of Data & Analytics |