talk-data.com

Topic: Analytics

Tags: data_analysis, insights, metrics

4552 tagged

Activity Trend: peak of 398 activities per quarter (2020-Q1 to 2026-Q1)

Activities

4552 activities · Newest first

Summary Delivering a data analytics project on time and with accurate information is critical to the success of any business. DataOps is a set of practices to increase the probability of success by creating value early and often, and using feedback loops to keep your project on course. In this episode Chris Bergh, head chef of DataKitchen, explains how DataOps differs from DevOps, how the industry has begun adopting DataOps, and how to adopt an agile approach to building your data platform.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Managing and auditing access to your servers and databases is a problem that grows in difficulty alongside the growth of your teams. If you are tired of wasting your time cobbling together scripts and workarounds to give your developers, data scientists, and managers the permissions that they need, then it’s time to talk to our friends at strongDM. They have built an easy-to-use platform that lets you leverage your company’s single sign-on for your data platform. Go to dataengineeringpodcast.com/strongdm today to find out how you can simplify your systems.
"There aren’t enough data conferences out there that focus on the community, so that’s why these folks built a better one": Data Council is the premier community-powered data platforms & engineering event for software engineers, data engineers, machine learning experts, deep learning researchers & artificial intelligence buffs who want to discover tools & insights to build new products. This year they will host over 50 speakers and 500 attendees (yeah, that’s one of the best "Attendee:Speaker" ratios out there) in San Francisco on April 17-18th and are offering a $200 discount to listeners of the Data Engineering Podcast. Use code DEP-200 at checkout.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers, you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.
Your host is Tobias Macey and today I’m interviewing Chris Bergh about the current state of DataOps and why it’s more than just DevOps for data.

Interview

Introduction
How did you get involved in the area of data management?
We talked last year about what DataOps is, but can you give a quick overview of how the industry has changed or updated the definition since then?

It is easy to draw parallels between DataOps and DevOps; can you provide some clarity as to how they are different?

How has the conversat

Meta-Analytics

Meta-Analytics: Consensus Approaches and System Patterns for Data Analysis presents an exhaustive set of patterns for data scientists to use on any machine learning based data analysis task. The book virtually ensures that at least one pattern will lead to better overall system behavior than the use of traditional analytics approaches. The book is ‘meta’ to analytics, covering general analytics in sufficient detail for readers to engage with, and understand, hybrid or meta-approaches. The book has relevance to machine translation, robotics, biological and social sciences, medical and healthcare informatics, economics, business and finance. In addition, the analytics within can be applied to predictive algorithms for everyone from police departments to sports analysts.
Provides comprehensive and systematic coverage of machine learning-based data analysis tasks
Enables rapid progress towards competency in data analysis techniques
Gives exhaustive and widely applicable patterns for use by data scientists
Covers hybrid or ‘meta’ approaches, along with general analytics
Lays out information and practical guidance on data analysis for practitioners working across all sectors

Summary Customer analytics is a problem domain that has given rise to its own industry. In order to gain a full understanding of what your users are doing and how best to serve them, you may need to send data to multiple services, each with their own tracking code or APIs. To simplify this process and allow your non-engineering employees to gain access to the information they need to do their jobs, Segment provides a single interface for capturing data and routing it to all of the places that you need it. In this interview Segment CTO and co-founder Calvin French-Owen explains how the company got started, how it manages to multiplex data streams from multiple sources to multiple destinations, and how it can simplify your work of gaining visibility into how your customers are engaging with your business.
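As a rough illustration of the one-to-many routing idea described above (this is not Segment's actual API or architecture; the Dispatcher, register, and track names below are hypothetical), a minimal sketch of capturing an event once and fanning it out to several destinations might look like this:

```python
# Hypothetical sketch of one-to-many event routing; not Segment's implementation.
from typing import Callable, Dict, List

Event = Dict[str, object]
Destination = Callable[[Event], None]

class Dispatcher:
    """Captures events through a single interface and fans them out."""

    def __init__(self) -> None:
        self.destinations: List[Destination] = []

    def register(self, destination: Destination) -> None:
        # Each destination could wrap a different vendor's tracking API.
        self.destinations.append(destination)

    def track(self, event: Event) -> None:
        # One capture call results in one delivery per configured destination.
        for send in self.destinations:
            send(event)

if __name__ == "__main__":
    dispatcher = Dispatcher()
    dispatcher.register(lambda e: print("analytics tool <-", e))
    dispatcher.register(lambda e: print("data warehouse <-", e))
    dispatcher.track({"userId": "u123", "event": "Signed Up"})
```

The design choice being illustrated is simply that instrumentation happens once, while the list of downstream consumers can change without touching the capture code.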

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Managing and auditing access to your servers and databases is a problem that grows in difficulty alongside the growth of your teams. If you are tired of wasting your time cobbling together scripts and workarounds to give your developers, data scientists, and managers the permissions that they need, then it’s time to talk to our friends at strongDM. They have built an easy-to-use platform that lets you leverage your company’s single sign-on for your data platform. Go to dataengineeringpodcast.com/strongdm today to find out how you can simplify your systems.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers, you don’t want to miss out on this year’s conference season. We have partnered with O’Reilly Media for the Strata conference in San Francisco on March 25th and the Artificial Intelligence conference in NYC on April 15th. Here in Boston, starting on May 17th, you still have time to grab a ticket to Enterprise Data World, and from April 30th to May 3rd is the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Your host is Tobias Macey and today I’m interviewing Calvin French-Owen about the data platform that Segment has built to handle multiplexing continuous streams of data from multiple sources to multiple destinations.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what Segment is and how the business got started?

What are some of the primary ways that your customers are using the Segment platform? How have the capabilities and use cases of the Segment platform changed since it was first launched?

Layered on top of the data integration platform you have added the concepts of Protocols and Personas. Can you explain how each of those products fit into the over

Hands-On Business Intelligence with Qlik Sense

"Hands-On Business Intelligence with Qlik Sense" teaches you how to harness the powerful capabilities of Qlik Sense to build dynamic, interactive dashboards and analyze data effectively. This book provides comprehensive guidance, from data modeling to creating visualizations, geospatial analysis, forecasting, and sharing insights across your organization. What this Book will help me do Understand the core concepts of Qlik Sense for building business intelligence dashboards. Master the process of loading, reshaping, and modeling data for analysis and reporting. Create impactful visual representations of data using Qlik Sense visualization tools. Leverage advanced analytics techniques, including Python and R integration, for deeper insights. Utilize Qlik Sense GeoAnalytics to perform geospatial analysis and produce location-based insights. Author(s) The authors of "Hands-On Business Intelligence with Qlik Sense" are experts in Qlik Sense and data analysis. They collectively bring decades of experience in business intelligence development and implementation. Their practical approach ensures that readers not only learn the theory but can also apply the techniques in real-world scenarios. Who is it for? This book is designed for business intelligence developers, data analysts, and anyone interested in exploring Qlik Sense for their data analysis tasks. If you're aiming to start with Qlik Sense and want a practical and hands-on guide, this book is ideal. No prior experience with Qlik Sense is necessary, but familiarity with data analysis concepts is helpful.

Mastering Tableau 2019.1 - Second Edition

Mastering Tableau 2019.1 is your essential guide to becoming an expert in Tableau's advanced features and functionalities. This book will teach you how to use Tableau Prep for data preparation, create complex visualizations and dashboards, and leverage Tableau's integration with R, Python, and MATLAB. You'll be equipped with the skills to solve both common and advanced BI challenges.
What this book will help me do:
Gain expertise in preparing and blending data using Tableau Prep and other data handling tools.
Create advanced data visualizations and designs that effectively communicate insights.
Implement narrative storytelling in BI with advanced presentation designs in Tableau.
Integrate Tableau with programming tools like R, Python, and MATLAB for extended functionalities.
Optimize performance and improve dashboard interactivity for user-friendly analytics solutions.
Author(s):
Marleen Meier, with extensive experience in business intelligence and analytics, and David Baldwin, an expert in data visualization, collaboratively bring this advanced Tableau guide to life. Their passion for empowering users with practical BI solutions is reflected in the hands-on approach employed throughout the book.
Who is it for?
This book is perfectly suited for business analysts, BI professionals, and data analysts who already have foundational knowledge of Tableau and seek to advance their skills for tackling more complex BI challenges. It's ideal for individuals aiming to master Tableau's premium features for impactful analytics solutions.

Matthias Funke and Thomas Chu lead the team shaping IBM's strategy for hybrid data management. They join host Al Martin for a deep dive into trends around data management across cloud environments — from private, to public, to multicloud. Matthias and Thomas each started as software developers in the trenches of data and analytics, and they bring that knowledge of fundamentals to their work with large organizations in the grip of rapid change.


Shownotes

00:00 - Check us out on YouTube and SoundCloud!
00:10 - Connect with Producer Steve Moore on LinkedIn & Twitter
00:15 - Connect with Producer Liam Seston on LinkedIn & Twitter
00:20 - Connect with Producer Rachit Sharma on LinkedIn
00:25 - Connect with Host Al Martin on LinkedIn & Twitter
01:25 - Connect with Matthias Funke on LinkedIn
02:29 - Connect with Thomas Chu on LinkedIn
10:02 - Prepare your data for AI.
19:25 - What is data efficiency?
22:01 - What is cloud computing?
Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

SAP Business Intelligence Quick Start Guide

This book is your practical guide to understanding and using the SAP BusinessObjects Business Intelligence (BI) platform. Through hands-on examples and clear instructions, you'll learn how to create insightful data visualizations, manage business intelligence reports, and deploy and maintain the BI platform effectively, empowering better data-driven decision making.
What this book will help me do:
Learn how to use SAP Web Intelligence to develop insightful dashboards and reports.
Understand the use of SAP Crystal Reports for Enterprise in creating detailed analytics.
Gain proficiency in SAP Lumira for advanced data visualization and storytelling.
Learn to configure and deploy the SAP BusinessObjects BI platform in a business environment.
Develop skills in using SAP Predictive Analytics to perform advanced data analysis.
Author(s):
Vinay Singh brings significant expertise in data analysis and the SAP BusinessObjects platform. With years of experience implementing and consulting on SAP solutions across industries, Vinay offers a unique ability to demystify complex technical subjects for readers. His practical approach and commitment to empowering readers make this book a valuable learning resource.
Who is it for?
This book is ideal for business intelligence professionals seeking to explore advanced tools for data analysis. It caters to SAP users eager to expand their expertise in leveraging SAP BusinessObjects for improved decision-making capabilities. It serves IT consultants and data analysts wishing to gain deeper insights into deployment and utilization strategies. It is also appropriate for beginners with a foundational understanding of BI principles who want to learn a globally recognized BI tool.

podcast_episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Stacey Goers (National Public Radio (NPR)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Do you know something that is really simple? Really Simple Syndication (aka, RSS). Did you know that RSS is the backbone of podcast delivery? Well, aren't you clever! What's NOT really simple is effectively measuring podcasts when a key underlying component is a glorified text file that tells an app how to download an audio file. Advertisers, publishers, and content producers the world over have been stuck with "downloads" as their key -- and pretty much only -- metric for years. That's like just counting "hits" on a website! But, NPR is leading an initiative to change all that through Remote Audio Data, or RAD. Stacey Goers, product manager for podcasts at National Public Radio, joins the gang on this episode to discuss that effort: how it works, how it's rolling out, and the myriad parallels podcast analytics has to website and mobile analytics! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
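To ground the point about RSS being "a glorified text file", here is a purely illustrative sketch (the feed content and the enclosure_urls helper are made up for this example and are not NPR's RAD specification) of how a podcast app finds the audio file to download, which is why "downloads" ends up being the default metric:

```python
# Illustrative only: a podcast RSS feed is XML whose <enclosure> tags point at
# audio files, so the publisher's only traditional signal is the file request.
import xml.etree.ElementTree as ET

FEED_XML = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/audio/ep1.mp3" length="12345678" type="audio/mpeg"/>
    </item>
  </channel>
</rss>"""

def enclosure_urls(feed_xml):
    """Return the audio URLs a podcast app would download from this feed."""
    root = ET.fromstring(feed_xml)
    return [item.find("enclosure").get("url") for item in root.iter("item")]

if __name__ == "__main__":
    # Each fetch of one of these URLs is one "download" -- historically the only
    # measurement available, which is the gap RAD aims to close.
    print(enclosure_urls(FEED_XML))
```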

IBM DS8880 Architecture and Implementation (Release 8.51)

Abstract * Updated for R8.51 *
This IBM® Redbooks® publication describes the concepts, architecture, and implementation of the IBM DS8880 family. The book provides reference information to assist readers who need to plan for, install, and configure the DS8880 systems. The IBM DS8000® family is a high-performance, high-capacity, highly secure, and resilient series of disk storage systems. The DS8880 family is the latest and most advanced of the DS8000 offerings to date. The high availability, multiplatform support, including IBM Z, and simplified management tools help provide a cost-effective path to on-demand and cloud-based infrastructures. The IBM DS8880 family now offers business-critical, all-flash, and hybrid data systems that span a wide range of price points:
DS8882F: Rack Mounted storage system
DS8884: Business Class
DS8886: Enterprise Class
DS8888: Analytics Class
The DS8884 and DS8886 are available as either hybrid models, or can be configured as all-flash. Each model represents the most recent in this series of high-performance, high-capacity, flexible, and resilient storage systems. These systems are intended to address the needs of the most demanding clients. Two powerful IBM POWER8® processor-based servers manage the cache to streamline disk I/O, maximizing performance and throughput. These capabilities are further enhanced with the availability of the second generation of high-performance flash enclosures (HPFEs Gen-2) and newer flash drives. Like its predecessors, the DS8880 supports advanced disaster recovery (DR) solutions, business continuity solutions, and thin provisioning. All disk drives in the DS8880 storage system include the Full Disk Encryption (FDE) feature. The DS8880 can automatically optimize the use of each storage tier, particularly flash drives, by using the IBM Easy Tier® feature. Release 8.5 introduces the Safeguarded Copy feature. The DS8882F Rack Mounted model is described in a separate publication, Introducing the IBM DS8882F Rack Mounted Storage System, REDP-5505.

Summary Deep learning is the latest class of technology that is gaining widespread interest. As data engineers we are responsible for building and managing the platforms that power these models. To help us understand what is involved, we are joined this week by Thomas Henson. In this episode he shares his experiences experimenting with deep learning, what data engineers need to know about the infrastructure and data requirements to power the models that your team is building, and how it can be used to supercharge our ETL pipelines.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Managing and auditing access to your servers and databases is a problem that grows in difficulty alongside the growth of your teams. If you are tired of wasting your time cobbling together scripts and workarounds to give your developers, data scientists, and managers the permissions that they need, then it’s time to talk to our friends at strongDM. They have built an easy-to-use platform that lets you leverage your company’s single sign-on for your data platform. Go to dataengineeringpodcast.com/strongdm today to find out how you can simplify your systems.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show, please leave a review on iTunes or Google Play Music, tell your friends and co-workers, and share it on social media.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For even more opportunities to meet, listen, and learn from your peers, you don’t want to miss the Strata conference in San Francisco on March 25th and the Artificial Intelligence conference in NYC on April 15th, both run by our friends at O’Reilly Media. Go to dataengineeringpodcast.com/stratacon and dataengineeringpodcast.com/aicon to register today and get 20% off.
Your host is Tobias Macey and today I’m interviewing Thomas Henson about what data engineers need to know about deep learning, including how to use it for their own projects.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what deep learning is for anyone who isn’t familiar with it?
What has been your personal experience with deep learning and what set you down that path?
What is involved in building a data pipeline and production infrastructure for a deep learning product?

How does that differ from other types of analytics projects such as data warehousing or traditional ML?

For anyone who is in the early stages of a deep learning project, what are some of the edge cases or gotchas that they should be aware of? What are your opinions on the level of involvement/understanding that data engineers should have with the analytical products that are being built with the information we collect and curate? What are some ways that we can use deep learning as part of the data management process?

How does that shift the infrastructure requirements for our platforms?

Cloud providers have b

Advanced R Statistical Programming and Data Models: Analysis, Machine Learning, and Visualization

Carry out a variety of advanced statistical analyses including generalized additive models, mixed effects models, multiple imputation, machine learning, and missing data techniques using R. Each chapter starts with conceptual background information about the techniques, includes multiple examples using R to achieve results, and concludes with a case study. Written by Matt and Joshua F. Wiley, Advanced R Statistical Programming and Data Models shows you how to conduct data analysis using the popular R language. You’ll delve into the preconditions or hypotheses for various statistical tests and techniques and work through concrete examples using R for a variety of these next-level analytics. This is a must-have guide and reference on using and programming with the R language.
What You’ll Learn:
Conduct advanced analyses in R, including generalized linear models, generalized additive models, mixed effects models, machine learning, and parallel processing
Carry out regression modeling using R data visualization, linear and advanced regression, additive models, and survival/time-to-event analysis
Handle machine learning using R, including parallel processing, dimension reduction, and feature selection and classification
Address missing data using multiple imputation in R
Work on factor analysis, generalized linear mixed models, and modeling intraindividual variability
Who This Book Is For:
Working professionals, researchers, or students who are familiar with R and basic statistical techniques such as linear regression and who want to learn how to use R to perform more advanced analytics. In particular, researchers and data analysts in the social sciences may benefit from these techniques. Additionally, analysts who need parallel processing to speed up analytics are given proven code to reduce time to result(s).

Summary Distributed storage systems are the foundational layer of any big data stack. There are a variety of implementations which support different specialized use cases and come with associated tradeoffs. Alluxio is a distributed virtual filesystem which integrates with multiple persistent storage systems to provide a scalable, in-memory storage layer for scaling computational workloads independent of the size of your data. In this episode Bin Fan explains how he got involved with the project, how it is implemented, and the use cases that it is particularly well suited for. If your storage and compute layers are too tightly coupled and you want to scale them independently then Alluxio is the tool for the job.

Introduction

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show, please leave a review on iTunes or Google Play Music, tell your friends and co-workers, and share it on social media.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.
Your host is Tobias Macey and today I’m interviewing Bin Fan about Alluxio, a distributed virtual filesystem for unified access to disparate data sources.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what Alluxio is and the history of the project?

What are some of the use cases that Alluxio enables?

How is Alluxio implemented and how has its architecture evolved over time?

What are some of the techniques that you use to mitigate the impact of latency, particularly when interfacing with storage systems across cloud providers and private data centers?

When dealing with large volumes of data over time it is often necessary to age out older records to cheaper storage. What capabilities does Alluxio provide for that lifecycle management? What are some of the most complex or challenging aspects of providing a unified abstraction across disparate storage platforms?

What are the tradeoffs that are made to provide a single API across systems with varying capabilities?

Testing and verification of distributed systems is a complex undertaking. Can you describe the approach that you use to ensure proper functionality of Alluxio as part of the development and release process?

In order to allow for this large scale testing with any regularity it must be straightforward to deploy and configure Alluxio. What are some of the mechanisms that you have built into the platform to simplify the operational aspects?

Can you describe a typical system topology that incorporates Alluxio? For someone planning a deployment of Alluxio, what should they be considering in terms of system requirements and deployment topologies?

What are some edge cases or operational complexities that they should be aware of?

What are some cases where Alluxio is the wrong choice?

What are some projects or products that provide a similar capability to Alluxio?

What do you have planned for the future of the Alluxio project and company?

Contact Info

LinkedIn
@binfan on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Alluxio

Project
Company

Carnegie Me

podcast_episode
by Val Kroll, Julie Hoyer, Steve Mulder (National Public Radio (NPR)), Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

"Hey, Google! How do you measure yourself?" "I'm sorry. I can't answer that question. Would you like to listen to a podcast that can?" National Public Radio has long been on the forefront of the world of audio media. Why, you might even remember episode #046, where Steve Mulder from NPR made his first appearance on the show discussing the cans and cannots of podcast measurement! On this episode, Mulder returns to chat about how much more comfortable we have become when it comes to conversing with animated inanimate objects, as well as the current state of what data is available (and how) to publishers and brands who have ventured into this brave new world. "Alexa! Play the Digital Analytics Power Hour podcast!" For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

It’s hard to admit failure, and I consider myself to be an all-star analyst, but if you can learn and become a better analyst then it’s OK to fail, right? Here are some of the key topics that I’ll address, drawn from failing:
What key skills do you need to be a more successful analyst?
What is the importance of Objective Analysis?
How do you overcome negative results, noise, and naysayers?
How can you continue to grow in this field?
What do I see as the future of Digital Analytics?

While most consultancies are focused on delivery, not many of them are creating the inspiration and foundation for delivering continual business value. We will explore why the common narrative is not aligned with most client needs, how to fix it, and how to sell and grow analytics services.

News and other types of content consumption patterns are changing, challenging media to transform and adjust to new readers' behavior. New distribution platforms appear and evolve; formats of content delivery appear and die. What are the key metrics modern publishers have to analyze if they want to increase profits and retain their audience? How can we track visitors' behaviour and get insights into publishers' content policy, article formats, and distribution channels?

DIGITAL ANALYTICS MEETS DATA SCIENCE: USE CASES FOR GOOGLE ANALYTICS

Past attendees of Superweek have ridden along with Tim as he explored R, and then as he dove deeper into some of the fundamental concepts of statistics. In this session, he will provide the latest update on that journey: how he is putting his exploration into the various dimensions of data science to use with real data and real clients. The statistical methods will be real, the code will be R (and available on GitHub), and the data will only be lightly obfuscated. So, you will be able to head back to your room at the next break and try one or more of the examples out on your own data! (But, don't do that -- the food and conversation at the breaks is too good to miss!)

The top speed recorded by an F1 car is currently 372.6 km/h (231.5 mph). Juan Pablo Montoya of the McLaren-Mercedes F1 Team achieved that after racing professionally for 13 years (and 'Karting' for most of his early life). The F1 car itself is an amazing tool, but does anyone think that you could put just anyone in the driver's seat and achieve the same results as Montoya? Tools are only as good as the people using them (at least until said tools learn how to murder us). In a recent study, 55% of college students said they believed the full moon caused people to behave oddly, despite a complete lack of evidence. The often unspoken problem in Analytics is the people. Even if they don't believe in Bigfoot, they may lack the humility to admit they have blind spots, or the intellectual empathy to understand things that aren't all about them. We must elevate our practice of analytics through the promotion of Critical Thought, which leads to better Empirical Analysis, and to insight and value for our companies and clients.

talk
by Stéphane Hamel (IMMERIA - QUÉBEC, CANADA)

On his first visit he shared the Digital Analytics Maturity Model, next came the Radical Analytics Manifesto, and after skipping a year, he is back in force with interesting stories about digital transformations, customer centricity, use and abuse of data, and the secret life of digital analysts.