talk-data.com

Topic: Analytics
Tags: data_analysis, insights, metrics
4552 tagged activities
Activity Trend: peak of 398 activities/quarter, 2020-Q1 to 2026-Q1

Activities
4552 activities · Newest first

podcast_episode
by Damien Moore (Moody's Analytics), Cris deRitis, Mark Zandi (Moody's Analytics), Ryan Sweet

Interest rates are on the rise and the Fed is set to normalize monetary policy. Damien Moore, Director of Economic Research at Moody's Analytics, joins the podcast to discuss.

Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

podcast_episode
by Tom Davenport (Babson College; Oxford University; MIT; Deloitte AI practice), Jonas Christensen

When we talk about analytics and AI-driven organisations, we often think of the likes of Google, Amazon, Facebook, Netflix and Tencent, which have all risen to dominance during the internet era. But what about companies that have been around for much longer? Can they achieve the same results with their data? To answer this question, I recently spoke to Tom Davenport, who is one of the world’s foremost thought leaders and authors in the areas of business, analytics, data science and AI. He is the President’s Distinguished Professor of Information Technology and Management at Babson College, a Fellow of the MIT Center for Digital Business, and an independent senior advisor to Deloitte Analytics. He has authored more than 20 books and hundreds of articles on topics such as artificial intelligence, analytics, information and knowledge management, process management, and enterprise systems. He is a regular contributor to Harvard Business Review, Forbes Magazine, The Wall Street Journal and many other publications around the world. In this episode, Tom gives us a history lesson of data and analytics and provides an in-depth description of what it takes for traditional companies to ascend through what he calls the “Four Eras of Analytics”.

Building a SaaS business that focuses on building a research tool, more than building a data product, is how Jonathan Kay, CEO and Co-Founder of Apptopia, frames his company’s work. Jonathan and I worked together when Apptopia pivoted from its prior business into a mobile intelligence platform for brands. Part of the reason I wanted to have Jonathan talk to you all is because I knew that he would strip away all the easy-to-see shine and varnish from their success and get really candid about what worked…and what hasn’t…during their journey to turn a data product into a successful SaaS business. So get ready: Jonathan is going to reveal the very curvy line that Apptopia has taken to get where they are today.

In this episode, Jonathan also describes one of the core product design frameworks that Apptopia is currently using to help deliver actionable insights to their customers. For Jonathan, Apptopia’s research-centric approach changes the ways in which their customers can interact with data and is helping eliminate the lull between “the why” and “the actioning” with data.

Here are some of the key parts of the interview:

An introduction to Apptopia and how they serve brands in the world of mobile app data (00:36)
The current UX gaps that Apptopia is working to fill (03:32)
How Apptopia balances flexibility with ease-of-use (06:22)
How Apptopia establishes the boundaries of its product when it’s just one part of a user’s overall workflow (10:06)
The challenge of “low use, low trust” and getting “non-data” people to act (13:45)
Developing strong conclusions and opinions and presenting them to customers (18:10)
How Apptopia’s product design process has evolved when working with data, particularly at the UI level (21:30)
The relationship between Apptopia’s buyer versus the users of the product, and how they balance the two (24:45)
Jonathan’s advice for hiring good data product design and management staff (29:45)
How data fits into Jonathan’s own decision making as CEO of Apptopia (33:21)
Jonathan’s advice for emerging data product leaders (36:30)

Quotes from Today’s Episode  

“I want to just give you some props on the work that you guys have done and seeing where it's gone from when we worked together. The word grit, I think, is the word that I most associate with you and Eli [former CEO, co-founder] from those times. It felt very genuine that you believed in your mission and you had a long-term vision for it.” - Brian T. O’Neill (@rhythmspice) (02:08)

“A research tool gives you the ability to create an input, which might be, ‘I want to see how Netflix is performing.’ And then it gives you a bunch of data. And it gives you good user experience that allows you to look for the answer to the question that’s in your head, but you need to start with a question. You need to know how to manipulate the tool. It requires a huge amount of experience and understanding of the data consumer in order to actually get the answer to the question. For me, that feels like a miss because I think the amount of people who need and can benefit from data, and the amount of people who know how to instrument the tools to get the answers from the data—well, I think there’s a huge disconnect in those numbers. And just like when I take my car to get service, I expected the car mechanic knows exactly what the hell is going on in there, right? Like, our obligation as a data provider should be to help people get closer to the answer. And I think we still have some room to go in order to get there.” - Jonathan Kay (@JonathanCKay) (04:54)

“You need to present someone the what, the why, etc.—then the research component [of your data product] is valuable. And so it’s not that having a research tool isn’t valuable. It’s just, you can’t have the whole thing be that. You need to give them the what and the why first.” - Jonathan Kay (@JonathanCKay) (08:45)

“You can't put equal resources into everything. Knowing the boundaries of your data product is important, but it's a hard thing to know sometimes where to draw those. A leader has to ask, ‘am I getting outside of my sweet spot? Is this outside of the mission?’ Figuring the right boundaries goes back to customer research.” - Brian T. O’Neill (@rhythmspice) (12:54)

“What would I have done differently if I was starting Apptopia today? I would have invested into the quality of the data earlier. I let the product design move me into the clouds a little bit, because sometimes you're designing a product and you're designing visuals, but we were doing it without real data. One of the biggest things that I've learned over a lot of mistakes over a long period of time, is that we've got to incorporate real data in the design process.” - Jonathan Kay (@JonathanCKay) (20:09)

“We work with one of the biggest food manufacturer distributors in the world, and they were choosing between us and our biggest competitor, and what they essentially did was [say] “I need to put this report together every two weeks. I used your competitor’s platform during a trial and your platform during the trial, and I was able to do it two hours faster in your platform, so I chose you—because all the other checkboxes were equal. However, at the end of the day, if we could get two hours a week back by using your tool, saving time and saving money and making better decisions, they’re all equal ROI contributors.” - Jonathan Kay on UX (@JonathanCKay) (27:23)

“In terms of our product design and management hires, we're typically looking for people who have not worked at one company for 10 years. We've actually found a couple phenomenal designers that went from running their own consulting company to wanting to join full time. That was kind of a big win because one of them had a huge breadth of experience working with a bunch of different products in a bunch of different spaces.”- Jonathan Kay (@JonathanCKay) (30:34)

“In terms of how I use data when making decisions for Apptopia, here’s an example. If you break our business down into different personas, my understanding one time was that one of our personas was more stagnant. The data, however, did not support that. And so we're having a resource planning meeting, and I'm saying, ‘let's pull back resources a little bit,’ but [my team is] showing me data that says my assumption on that customer segment is actually incorrect. I think entrepreneurs and passionate people need data more because we have so much conviction in our decisions—and because of that, I'm more likely to make bad decisions. Theoretically good entrepreneurs should have good instincts, and you need to trust those, but what I’m saying is, you also need to check those. It's okay to make sure that your instinct is correct, right? And one of the ways that I’ve gotten more mature is by forcing people to show me data to either back up my decision in either direction and being comfortable being wrong. And I am wrong at least half of the time with those things!” - Jonathan Kay (@JonathanCKay) (34:09)

In this episode, I explain the importance of having data before starting a machine learning project by sharing a story of freelancing in data science.

LAST DAY TO JOIN 21 DAYS TO DATA: https://www.datacareerjumpstart.com/challenge

If you want a free way to kickstart your analytics career, check out my free 33-page PDF giving you an introduction to everything you need to know: https://www.datacareerjumpstart.com/roadmap

Want to learn data science while building your portfolio? Check out Data Career Jumpstart: https://www.datacareerjumpstart.com/data-career-jumpstart-course

MORE DATA ANALYTICS CONTENT HERE:

📺 Subscribe on YouTube: https://www.youtube.com/c/AverySmithDataCareerJumpstart/videos

🎙Listen to My Podcast: https://podcasts.apple.com/us/podcast/data-career-podcast/id1547386535

👔 Connect with me on LinkedIn: https://www.linkedin.com/in/averyjsmith/

📸 Instagram: https://www.instagram.com/datacareerjumpstart/

👾Join My Discord: https://www.datacareerjumpstart.com/discord

🎵 TikTok: https://www.tiktok.com/@verydata? 

Mentioned in this episode: Join the last cohort of 2025! The LAST cohort of The Data Analytics Accelerator for 2025 kicks off on Monday, December 8th and enrollment is officially open!

To celebrate the end of the year, we’re running a special End-of-Year Sale, where you’ll get:
✅ A discount on your enrollment
🎁 6 bonus gifts, including job listings, interview prep, AI tools + more

If your goal is to land a data job in 2026, this is your chance to get ahead of the competition and start strong.

👉 Join the December Cohort & Claim Your Bonuses: https://www.datacareerjumpstart.com/daa

Summary

Along with the globalization of our societies comes the need to analyze the geospatial and geotemporal data required to manage the growth in commerce, communications, and other activities. In order to make geospatial analytics more maintainable and scalable, there has been an increase in the number of database engines that provide extensions to their SQL syntax to support manipulation of spatial data. In this episode Matthew Forrest shares his experiences of working in the domain of geospatial analytics and the application of SQL dialects to his analysis.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Atlan is a collaborative workspace for data-driven teams, like Github for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.
StreamSets DataOps Platform is the world’s first single platform for building smart data pipelines across hybrid and multi-cloud architectures. Build, run, monitor and manage data pipelines confidently with an end-to-end data integration platform that’s built for constant change. Amp up your productivity with an easy-to-navigate interface and 100s of pre-built connectors. And, get pipelines and new hires up and running quickly with powerful, reusable components that work across batch and streaming. Once you’re up and running, your smart data pipelines are resilient to data drift: those ongoing and unexpected changes in schema, semantics, and infrastructure. Finally, one single pane of glass for operating and monitoring all your data pipelines gives you the full transparency and control you desire for your data operations. Get started building pipelines in minutes for free at dataengineeringpodcast.com/streamsets. The first 10 listeners of the podcast that subscribe to StreamSets’ Professional Tier receive 2 months free after their first month.
Your host is Tobias Macey and today I’m interviewing Matthew Forrest about doing spatial analysis in SQL.

Interview

Introduction
How did you get involved in the area of data management?
Can you describe what spatial SQL is and some of the use cases that it is relevant for?
Compatibility with/comparison to syntax from PostGIS
What is involved in implementation of spatial logic in database engines?
Mapping geospatial concepts into declarative syntax
Foundational data types
Data modeling
Workflow for analyzing spatial data sets outside of database engines
Translating from e.g. geopandas to SQL (see the sketch below)
Level of support in database engines for spatial data types
What are the most interesting, innovative, or unexpected ways that you have seen spatial SQL used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working with spatial SQL?
When is SQL the wrong choice for spatial analysis?
What do you have planned for the future of spatial SQL?
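As a rough illustration of the geopandas-to-SQL translation mentioned in the outline above, the sketch below asks the same "which points fall inside which polygons" question both in memory and in the database. All file names, table names, column names, and the connection string are hypothetical placeholders; the SQL uses PostGIS-style functions and a recent geopandas release is assumed. This is not code from the episode.

```python
# Hypothetical sketch: the same spatial question in geopandas and in
# PostGIS-flavoured spatial SQL. Every name below is a placeholder.
import geopandas as gpd
from sqlalchemy import create_engine

# In-memory approach: load both layers and run a spatial join in Python.
stores = gpd.read_file("stores.geojson")      # point geometries
regions = gpd.read_file("regions.geojson")    # polygon geometries
stores_in_regions = gpd.sjoin(stores, regions, predicate="within")

# In-database approach: push the same predicate down as spatial SQL.
engine = create_engine("postgresql://user:password@localhost/gis")
query = """
    SELECT s.id, s.geom, r.name AS region_name
    FROM stores AS s
    JOIN regions AS r
      ON ST_Within(s.geom, r.geom)  -- PostGIS spatial predicate
"""
stores_in_regions_sql = gpd.read_postgis(query, engine, geom_col="geom")
```

The trade-off the episode circles around is visible here: the in-memory version is convenient for exploration, while the in-database version keeps large datasets where they live and lets the engine handle indexing and scale.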

Summary

There are many dimensions to the work of protecting the privacy of users in our data. When you need to share a data set with other teams, departments, or businesses, it is of utmost importance that you eliminate or obfuscate personal information. In this episode Will Thompson explores the many ways that sensitive data can be leaked, re-identified, or otherwise be at risk, as well as the different strategies that can be employed to mitigate those attack vectors. He also explains how he and his team at Privacy Dynamics are working to make those strategies more accessible to organizations so that you can focus on all of the other tasks required of you.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Today’s episode is sponsored by Prophecy.io – the low-code data engineering platform for the cloud. Prophecy provides an easy-to-use visual interface to design & deploy data pipelines on Apache Spark & Apache Airflow. Now all data users can use software engineering best practices – git, tests and continuous deployment with a simple-to-use visual designer. How does it work? You visually design the pipelines, and Prophecy generates clean Spark code with tests on git; then you visually schedule these pipelines on Airflow. You can observe your pipelines with built-in metadata search and column-level lineage. Finally, if you have existing workflows in AbInitio, Informatica or other ETL formats that you want to move to the cloud, you can import them automatically into Prophecy, making them run productively on Spark. Create your free account today at dataengineeringpodcast.com/prophecy.
The only thing worse than having bad data is not knowing that you have it. With Bigeye’s data observability platform, if there is an issue with your data or data pipelines you’ll know right away and can get it fixed before the business is impacted. Bigeye lets data teams measure, improve, and communicate the quality of your data to company stakeholders. With complete API access, a user-friendly interface, and automated yet flexible alerting, you’ve got everything you need to establish and maintain trust in your data. Go to dataengineeringpodcast.com/bigeye today to sign up and start trusting your analyses.
Your host is Tobias Macey and today I’m interviewing Will Thompson about managing data privacy concerns for data sets used in analytics and machine learning.

Interview

Introduction

How did you get involved in the area of data management?

Data privacy is a multi-faceted problem domain. Can you start by enumerating the different categories of privacy concern that are involved in analytical use cases?

Can you describe what Privacy Dynamics is and the story behind it?

Which categor(y|ies) are you focused on addressing?

What are some of the best practices in the definition, protection, and enforcement of data privacy policies?

Is there a data security/privacy equivalent to the OWASP top 10?

What are some of the techniques that are available for anonymizing data while maintaining statistical utility/significance?

What are some of the engineering/systems capabilities that are required for data (platform) engineers to incorporate these practices in their platforms?

What are the tradeoffs of encryption vs. obfuscation when anonymizing data?

What are some of the types of PII that are non-obvious?

What are the risks associated with data re-identification, and what are some of the vectors that might be exploited to achieve that?

How can privacy risks mitigation be maintained as new data sources are introduced that might contribute to these re-identification vectors?

Can you describe how Privacy Dynamics is implemented?

What are the most challenging engineering problems that you are dealing with?

How do you approach validation of a data set’s privacy? What have you found to be useful heuristics for identifying private data?

What are the risks of false positives vs. false negatives?

Can you describe what is involved in integrating the Privacy Dynamics system into an existing data platform/warehouse?

What would be required to integrate with systems such as Presto, Clickhouse, Druid, etc.?

What are the most interesting, innovative, or unexpected ways that you have seen Privacy Dynamics used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on Privacy Dynamics?

When is Privacy Dynamics the wrong choice?

What do you have planned for the future of Privacy Dynamics?
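One of the questions above asks how data can be anonymized while maintaining statistical utility. As a rough, hypothetical illustration of two common moves in that space (pseudonymizing direct identifiers and generalizing quasi-identifiers), and explicitly not a description of how Privacy Dynamics itself works, a minimal sketch might look like this; every column name and the salt are invented:

```python
# Hypothetical sketch: pseudonymize direct identifiers and generalize
# quasi-identifiers to blunt re-identification. Not Privacy Dynamics' method.
import hashlib
import pandas as pd

SALT = "rotate-me-and-keep-me-secret"  # placeholder; store real salts in a secrets manager

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, salted hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["email"] = out["email"].map(pseudonymize)                 # direct identifier
    out["age_band"] = (out["age"] // 10 * 10).astype(str) + "s"   # generalize quasi-identifier
    out["zip3"] = out["zip"].str[:3]                              # coarsen location
    return out.drop(columns=["age", "zip"])

users = pd.DataFrame({
    "email": ["[email protected]", "[email protected]"],
    "age": [34, 47],
    "zip": ["02139", "94110"],
    "spend": [120.0, 88.5],  # the analytic value we want to preserve
})
print(anonymize(users))
```

Coarsening age to a decade band and zip codes to a three-digit prefix trades precision for lower re-identification risk, which is exactly the utility-versus-privacy tension the interview explores.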

Contact Info

LinkedIn
@willseth on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

Privacy Dynamics
Pandas
  Podcast Episode – Pandas For Data Engineering
Homomorphic Encryption
Differential Privacy
Immuta
  Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

The advent of big data, self-service analytics, and cloud applications has created a need for new ways to manage data access. New data access governance tools promise to simplify and standardize data access and authorization across an enterprise. Data management expert, Sanjeev Mohan, provides an industry perspective on this emerging technology and what it means for data analytics teams.

Mark, Ryan, and Cris welcome back Marisa DiNatale, Senior Director at Moody's Analytics, to discuss the latest employment report. Full episode transcript.

Questions or Comments, please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

It’s that time of year again when we channel Mãe Dináh and make our predictions about what we think will be trending across the data field as a whole. We talk about MLOps, the Analytics Engineer role, high salaries and even elections! And of course we once again invited the Data Hackers Community Managers for this conversation: Marlesson Santana, Pietro Oliveira and Mario Filho!

Check out our Medium post for links to the references: https://medium.com/data-hackers/tend%C3%AAncias-para-dados-e-ai-em-2022-data-hackers-podcast-51-384c0554a4a2

Want to be featured as a guest on Making Data Simple? Reach out to us at [[email protected]] and tell us why you should be next.

Abstract

The Making Data Simple Podcast is hosted by Al Martin, WW VP Account Technical Leader, IBM Technology Sales, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun. This week on Making Data Simple, we have Ayal Steinberg, VP, WW Data, AI, and Automation Sales Leader, Global Markets. Ayal started off in music and then in the late 1990s shifted to retail, where he learned about data and analytics. In the past 20 years Ayal has held various sales roles during his career.

Show Notes

2:18 – What the new year means in sales
7:09 – How are you going to go to market in 2022?
10:36 – If we jumped to this time next year, 2023, how did 2022 go?
12:36 – Does the Challenger Sale still apply today?
14:58 – How do you execute influence?
18:39 – What motivates a core sales team?
21:34 – What are your tricks and tips around shared communication? And then how do you build on that shared influence?

Connect with the Team

Producer Kate Mayne - LinkedIn.
Producer Steve Templeton - LinkedIn.
Host Al Martin - LinkedIn and Twitter.

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Why is the Data Scientist role called the sexiest job of the 21st century? I believe it’s partly because the data science profession is constantly evolving to include new data types, new tech and tools, new modelling techniques along with an increasing ability to drive customer and business outcomes with data. The main challenge for data scientists becomes one of bandwidth. Great data scientists are highly intelligent, technically proficient, curious and creative, but even so, the world of data science is evolving too fast for most individuals to keep up with. I recently spoke with Ravit Jain to understand how data professionals stay relevant and connected to the fast-paced world of data. Ravit is a true servant leader who has built a global online community of data lovers. Through his work as a book publisher, podcast and vlog host, content curator and conference organiser he helps hundreds of thousands of data professionals learn new skills, share knowledge and connect with each other. In this episode of Leaders of Analytics, we discuss what’s hot in data, including:

How Ravit became passionate about the world of data
How to build your career in data
The most important trends and topics in data today and the future
The traits that make some data science leaders stand out from the rest
Why Ravit’s first advice for aspiring data professionals is to start networking with others in the industry, and much more.

AB TESTING: CAUSE, EFFECT AND UNCERTAINTY

We will discuss why AB Testing, and analytics more generally, always involves uncertainty. We will then briefly discuss the causal inference problem, and how AB Tests are one of the main ways to help solve it. In the course of the talk we will briefly touch on types of reasoning, the importance of assignment, and the logic of p-values.
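To make the "logic of p-values" concrete, here is a minimal sketch of the two-proportion z-test commonly used to quantify uncertainty about a conversion-rate lift in an AB test. The counts are invented for illustration and are not from the talk.

```python
# A minimal sketch of quantifying AB-test uncertainty with a two-proportion
# z-test. The conversion counts below are made up for illustration.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    in conversion rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0: no difference
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))                       # two-sided tail probability
    return z, p_value

z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=530, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p says this lift is unlikely under pure chance
```

The p-value here is the probability of seeing a difference at least this large if chance alone, rather than any real effect, were driving the result, which is why random assignment matters so much for the causal reading.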

ANALYTICS IN THE AGE OF THE MODERN DATA STACK

The pace of change in the analytics sector has increased dramatically since 2012, with tons of new tools paving the way to the birth of the Modern Data Stack. The rapid explosion of tools has been met with a rapid explosion of restrictions, challenging the status quo of data collection, processing and storage. How does that reflect on Analytics and its future?

Quality, speed, and velocity are all things we would like to have around decision making. One of the big value propositions of analytics is support for decision making, automating decision making, and making decision making easier. Yet somehow in practice it is rare to find this happening in and around businesses today. In this talk we are going to dive into decisions in depth, talk about decision science a bit and some approaches to facilitate better decision making individually and corporately.

Since the dawn of digital analytics, commerce activities have been defined with eCommerce metrics and tracking. This approach was described and accepted as a de facto standard, as no better integration and commerce model was offered. With technology advancements and the evolution of purchasing behavior, commerce activities are becoming more dominant. It is time to reformat or redesign the data and start talking to businesses with a different narrative.