talk-data.com

Topic: Analytics
Tags: data_analysis, insights, metrics
4552 tagged

Activity Trend: peak 398/qtr (2020-Q1 to 2026-Q1)

Activities

4552 activities · Newest first

If you’re anything like me, you have a love/hate relationship with marketing. Marketing can be delightful, obnoxious, or somewhere in between, depending on content and context. Most of us remember an ad from our youth that has given us a lifelong emotional connection to a brand or product. Most of us also remember the obnoxious sales call or email campaign that made us swear never to buy from the offending company again. In this episode of Leaders of Analytics, you will learn from Ikechi Okoronkwo why data-driven marketers have a leg up when it comes to designing and executing impactful campaigns that hit the right audiences and create delight. Ikechi is Executive Director, Managing Partner and Head of Business Intelligence & Analytics at Mindshare, a global media and marketing agency and part of global marketing powerhouse GroupM.

Listen to this episode to learn:

What Ikechi sees as the biggest opportunities in data-driven marketing
What kinds of analytics to invest in to optimise the impact of your marketing efforts
What kinds of data are needed to take advantage of these opportunities, and how to collect it
How Ikechi and colleagues use data and analytics to distinguish between rational and emotional reactions to advertising
How to drive a culture of experimentation and measurement among colleagues and stakeholders who are more creatively than analytically minded, and much more.

Ikechi on LinkedIn: https://www.linkedin.com/in/ikechi-okoronkwo-0318579/

In today’s episode, we are joined by Boris Berenberg. Boris is VP of Product at Modus Create, a digital transformation consulting firm aimed at helping clients build competitive advantage through digital innovation.

We talk about:

How Modus works and the problems it solves.
Boris’ background and how he got into building products.
Finding the optimal sweet spot between growth and efficiency.
Redefining your target audience and customer needs.
The importance of go-to-market for products.
The various phases of thinking through a successful product.
The importance of quality content in SaaS.

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

podcast_episode
by Santosh Kanthethy (EverBright, a subsidiary of NextEra Energy Resources), Mico Yuk (Data Storytelling Academy)

Data plays a vital role in helping companies develop a competitive advantage, but it's the data evangelist who gathers and leverages those insights to help organizations understand the story their data is telling them. Today, on Analytics on Fire, we discuss how to become a data evangelist with data storyteller, leader, and lifelong learner, Santosh Kanthethy. At the time of recording this episode, Santosh was the IT Technology Manager for NextEra Energy Resources. Now, he is Head of Data Analytics and the leader of a growing internal data visualization community at EverBright, a solar financing solutions company and a subsidiary of NextEra. Tuning in, you'll gain step-by-step instructions for becoming a rockstar data evangelist, including three things to consider before you get started. We also take a look at the top functions of an internal data visualization community, how to get your executive team on board, and how to overcome some of the challenges that data evangelists are likely to encounter along the way. For actionable insights into how to build a thriving community, transform data culture from the inside out, and more, make sure not to miss this episode!

In this episode, you'll learn:

[06:16] More about NextEra, one of America's largest capital investors in infrastructure.
[07:10] Defining what a data evangelist is and how the internal data visualization community at NextEra was born.
[08:48] Why Santosh decided to nurture and grow this community and switch from IT to data.
[09:55] What the game of cricket taught Santosh about being a team leader.
[13:55] Three things to consider before becoming a data evangelist: the maturity of your organization, your curiosity, and your ability to create content.
[19:16] How often the data community meets and some of the topics that come up.
[20:50] The three core selling points of a data community for your company: consistency, better decision making, and relevance.
[24:19] Tips for obtaining essential executive buy-in and support.
[26:52] Becoming tool-agnostic: how to evangelize the benefits of the practice, not the tool.
[29:34] A look at membership and how to determine who joins your data community.
[31:40] KPIs, WIGs, and OKRs to measure the success of your community.
[34:13] How data evangelists can overcome resistance while building a community.
[36:20] What percentage of technology budgets should be allocated to community, change management, and upskilling.
[38:50] How Santosh is inspired by the people he interacts with on a daily basis.
[43:21] How Santosh can help you visualize your fitness data from Garmin or Strava!

For full show notes, and the links mentioned, visit: https://bibrainz.com/podcast/89

Enjoyed the Show? Please leave us a review on iTunes.

podcast_episode
by Val Kroll, Julie Hoyer, Frederik Werner (DHL), Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Do analysts make things more complicated than they need to be, or does the data represent a complex world, so complexity is just the nature of the beast? Or is it both? Stakeholders yearn for simple answers to simple questions, but the road to delivering meaningful results seems paved with potholes of statistical complexity, data nuances, and messy tooling. What is a business to do? Frederik Werner from DHL joined Michael and Tim for a discussion that definitively determined that, well, the topic is…complicated! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Today I’m chatting with Katy Pusch, Senior Director of Product and Integration for Cox2M. Katy describes the lessons she’s learned around making sure that the “juice is always worth the squeeze” for new users to adopt data solutions into their workflow. She also explains the methodologies she’d recommend to data & analytics professionals to ensure their IoT and data products are widely adopted. Listen in to find out why this former analyst turned data product leader feels it’s crucial to focus on more than just delivering data or AI solutions, and how spending more time upfront performing qualitative research on users can wind up being more efficient in the long run than jumping straight into development.

Highlights/ Skip to:

What Katy does at Cox2M, and why the data product manager role is so hard to define (01:07)
Defining the value of the data in workflows and how that’s approached at Cox2M (03:13)
Who buys from Cox2M and the customer problems that Katy’s product solves (05:57)
How Katy approaches the zero-to-one process of taking IoT sensor data and turning it into a customer experience that provides a valuable solution (08:00)
What Katy feels best motivates the adoption of a new solution for users (13:21)
Katy describes how she spends more time upfront before development to ensure she’s solving the right problems for users (16:13)
Katy’s views on the importance of data science & analytics pros being able to communicate in the language of their audience (20:47)
The differences Katy sees between designing data products for sophisticated data users vs a broader audience (24:13)
The methods Katy uses to effectively perform qualitative research and her triangulation method to surface the real needs of end users (27:29)
Katy’s views on the most valuable skills for future data product managers (35:24)

Quotes from Today’s Episode

“I’ve had the opportunity to get a little bit closer to our customers than I was in the beginning parts of my tenure here at Cox2M. And it’s just like a SaaS product in the sense that the quality of your data is still dependent on your customers’ workflows and their ability to engage in workflows that supply accurate data. And it’s been a little bit enlightening to realize that the same is true for IoT.” – Katy Pusch (02:11)

“Providing insights to executives that are [simply] interesting is not really very impactful. You want to provide things that are actionable and that drive the business forward.” – Katy Pusch (4:43)

“So, there’s one side of it, which is [the] happy path: figure out a way to embed your product in the customer’s existing workflow. That’s where the most success happens. But in the situation we find ourselves in right now with [this IoT solution], we do have to ask them to change their workflow.” – Katy Pusch (12:46)

“And the way to communicate [the insight to other stakeholders] is not with being more precise with your numbers [or adding] statistics. It’s just to communicate the output of your analysis more clearly to the person who needs to be able to make a decision.” – Katy Pusch (23:15)

“You have to define ‘What decision is my user making on a repeated basis that is worth building something that it does automatically?’ And so, you say, ‘What are the questions that my user needs answers to on a repeated basis?’ … At its essence, you’re answering three or four questions for that user [that] have to be the most important [...] questions for your user to add value. And that can be a difficult thing to derive with confidence.” – Katy Pusch (25:55)

“The piece of workflow [on the IoT side] that’s really impactful there is we’re asking for an even higher degree of change management in that case because we’re asking them to attach this device to their vehicle, and then detach it at a different point in time and there’s a procedure in the solution to allow for that, but someone at the dealership has to engage in that process. So, there’s a change management in the workflow that the juice has to be worth the squeeze to encourage a customer to embark in that journey with you.” – Katy Pusch (12:08)

“Finding people in your organization who have the appetite to be cross-functionally educated, particularly in a data arena, is very important [to] help close some of those communication gaps.” – Katy Pusch (37:03)

Summary

The global economy is dependent on complex and dynamic networks of supply chains powered by sophisticated logistics. This requires a significant amount of data to track shipments and the operational characteristics of materials and goods. Roambee is a platform that collects, integrates, and analyzes all of that information to provide companies with the critical insights they need to stay running, especially in a time of such constant change. In this episode Roambee CEO, Sanjay Sharma, shares the types of questions that companies are asking about their logistics, the technical work that they do to provide ways to answer those questions, and how they approach the challenge of data quality in its many forms.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline, and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect.

Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day. Especially once they realize 90% of all major data sources like Google Analytics, Salesforce, Adwords, Facebook, Spreadsheets, etc., are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get: real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping to precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24*7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2.

Trino: The Definitive Guide, 2nd Edition

Perform fast interactive analytics against different data sources using the Trino high-performance distributed SQL query engine. In the second edition of this practical guide, you'll learn how to conduct analytics on data where it lives, whether it's a data lake using Hive, a modern lakehouse with Iceberg or Delta Lake, a different system like Cassandra, Kafka, or SingleStore, or a relational database like PostgreSQL or Oracle. Analysts, software engineers, and production engineers learn how to manage, use, and even develop with Trino and make it a critical part of their data platform. Authors Matt Fuller, Manfred Moser, and Martin Traverso show you how a single Trino query can combine data from multiple sources to allow for analytics across your entire organization.

Explore Trino's use cases, and learn about tools that help you connect to Trino for querying and processing huge amounts of data
Learn Trino's internal workings, including how to connect to and query data sources with support for SQL statements, operators, functions, and more
Deploy and secure Trino at scale, monitor workloads, tune queries, and connect more applications
Learn how other organizations apply Trino successfully
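The federated-query idea the blurb describes is easy to show concretely. Below is a minimal, hedged sketch using the open-source trino Python client; the coordinator host, catalogs, and table names are hypothetical placeholders, not examples taken from the book.

```python
# Minimal sketch: one Trino query joining tables from two different
# catalogs (a Hive data lake and a PostgreSQL database).
# Host, catalogs, schemas, and tables below are hypothetical.
import trino

conn = trino.dbapi.connect(
    host="trino.example.com",  # hypothetical Trino coordinator
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)

cur = conn.cursor()
# A single query spanning a lake table and a relational table,
# addressed as catalog.schema.table.
cur.execute("""
    SELECT o.order_id, o.total, c.segment
    FROM hive.sales.orders AS o
    JOIN postgresql.public.customers AS c
      ON o.customer_id = c.customer_id
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```

The point of the sketch is that the join logic lives in one SQL statement; Trino pushes work down to each connected source rather than requiring the data to be copied into one system first.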

Mark, Ryan, and Cris break down this week's key economic data and developments in financial markets. They also go through the economic impact of Hurricane Ian and the policy errors unfolding in the U.K., which, ironically, is where Mark records the podcast from. Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

podcast_episode
by Dan White (Moody's Analytics), Bill Glasgall (Volcker Alliance), Cris deRitis, Mark Zandi (Moody's Analytics), Ryan Sweet

Mark, Ryan, and Cris welcome colleague Dan White of Moody's Analytics and Bill Glasgall, Senior Director, Public Finance at the Volcker Alliance, to discuss state and local government finances and whether they will be a tailwind or a drag on the broader economy. Full episode transcript. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight. William Glasgall is senior director, public finance at the Volcker Alliance, a New York-based nonprofit organization where he has supervised the publication of numerous working papers and studies, including four Truth and Integrity in State Budgeting reports. He is also the creator of the Special Briefing webcast series and podcast, co-produced with the University of Pennsylvania Institute for Urban Research, where he is a fellow. Be sure to check out the Volcker Alliance’s new podcast “Special Briefing,” hosted by William Glasgall, available on Apple Podcasts, Spotify, and Google Podcasts.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

In today’s episode, we’re joined by Indus Khaitan. Indus is the CEO and Co-Founder of Quolum, a platform to make buying SaaS products as easy as possible.

We talk about:

Indus’ background, growing up in a mining town in India and moving to the USA to work in tech.
How Quolum got started and the problems it solves today.
Growing a business slowly and organically vs pushing to grow as fast as possible.
Indus’ advice for early-stage founders.
Is the SaaS market too heavily influenced by investors?
The danger of celebrating unicorn valuations and funding.
Some of the key events in Indus’ life that helped him in business.
Why do people choose to risk it as a founder?

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Data Science and Analytics for SMEs: Consulting, Tools, Practical Use Cases

Master the tricks and techniques of business analytics consulting, specifically applicable to small-to-medium enterprises (SMEs). Written to help you hone your business analytics skills, this book applies data science techniques to help solve problems and improve many aspects of a business' operations. SMEs are looking for ways to use data science and analytics, and this need is becoming increasingly pressing with the ongoing digital revolution. The topics covered in the book will help provide the knowledge leverage needed for implementing data science in small businesses. The growing demand from small businesses for data analytics coincides with a growing number of freelance data science consulting opportunities, so this book also provides insight on how to navigate this new terrain. It uses a do-it-yourself approach to analytics and introduces tools that are easily available online and are non-programming based. Data science will allow SMEs to understand customer loyalty, market segmentation, sales and revenue growth, and more with greater clarity. Data Science and Analytics for SMEs is particularly focused on small businesses and explores the analytics and data that can help them succeed further in their business.

What You'll Learn

Create and measure the success of your analytics projects
Start your business analytics consulting career
Use the solutions taught in the book in practical use cases and problems

Who This Book Is For

Business analytics enthusiasts who are not particularly programming inclined, small business owners and data science consultants, data science and business students, and SME (small-to-medium enterprise) analysts

Mark, Cris, and Ryan sit down for their first in-person podcast to discuss their recession odds over the next 6, 12, or 18 months. They list out both contributing and mitigating risk factors and the market signals to watch to understand where the economy is headed. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight. Watch the video of the in-person episode.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Summary

Data integration from source systems to their downstream destinations is the foundational step for any data product. The increasing expectation for information to be instantly accessible drives the need for reliable change data capture. The team at Fivetran have recently introduced that functionality to power real-time data products. In this episode Mark Van de Wiel explains how they integrated CDC functionality into their existing product and discusses the nuances of different approaches to change data capture from various sources.
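For readers unfamiliar with the technique, here is a generic, hedged sketch of one common approach to change data capture: tailing a PostgreSQL logical replication slot with psycopg2. This illustrates log-based CDC in general, not Fivetran's implementation; the DSN, slot name, and decoding plugin are hypothetical placeholders.

```python
# Generic log-based CDC sketch: stream decoded row changes from a
# PostgreSQL logical replication slot. Illustrative only.
import psycopg2
import psycopg2.extras

conn = psycopg2.connect(
    "dbname=shop user=cdc_reader",  # hypothetical DSN
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = conn.cursor()

# Assumes a slot was created beforehand with a decoding plugin, e.g.:
#   SELECT pg_create_logical_replication_slot('cdc_slot', 'test_decoding');
cur.start_replication(slot_name="cdc_slot", decode=True)

def consume(msg):
    # Each message is one decoded WAL change (insert/update/delete),
    # which a CDC pipeline would transform and ship downstream.
    print(msg.payload)
    # Acknowledge progress so the server can recycle WAL segments.
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

cur.consume_stream(consume)  # blocks, streaming changes as they happen
```

Reading the write-ahead log this way avoids repeated full-table scans and captures deletes, which is why log-based approaches are generally preferred over query-based polling when the source supports them.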

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

You wake up to a Slack message from your CEO, who’s upset because the company’s revenue dashboard is broken. You’re told to fix it before this morning’s board meeting, which is just minutes away. Enter Metaplane, the industry’s only self-serve data observability tool. In just a few clicks, you identify the issue’s root cause, conduct an impact analysis—and save the day. Data leaders at Imperfect Foods, Drift, and Vendr love Metaplane because it helps them catch, investigate, and fix data quality issues before their stakeholders ever notice they exist. Setup takes 30 minutes. You can literally get up and running with Metaplane by the end of this podcast. Sign up for a free-forever plan at dataengineeringpodcast.com/metaplane, or try out their most advanced features with a 14-day free trial. Mention the podcast to get a free "In Data We Trust World Tour" t-shirt.

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.

Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. 85%!!! That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $5,000 when you become a customer.

Your host is Tobias Macey and today I’m interviewing Mark Van de Wiel about Fivetran’s implementation of change data capture.

Summary

Regardless of how data is being used, it is critical that the information is trusted. The practice of data reliability engineering has gained momentum recently to address that need. To help support the efforts of data teams, the folks at Soda Data created the Soda Checks Language and the corresponding Soda Core utility that acts on this new DSL. In this episode Tom Baeyens explains their reasons for creating a new syntax for expressing and validating checks for data assets and processes, as well as how to incorporate it into your own projects.
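As a rough illustration of what such checks can look like, here is a minimal sketch that runs a few SodaCL checks through soda-core's Python Scan API. The data source name, configuration file, and orders table are hypothetical placeholders, and the exact check syntax should be verified against the Soda documentation.

```python
# Minimal sketch: run SodaCL checks programmatically with soda-core.
# Data source name, configuration file, and table are hypothetical.
from soda.scan import Scan

scan = Scan()
scan.set_data_source_name("my_warehouse")              # assumed to exist
scan.add_configuration_yaml_file("configuration.yml")  # connection details

# A few SodaCL checks against a hypothetical "orders" table.
scan.add_sodacl_yaml_str("""
checks for orders:
  - row_count > 0
  - missing_count(customer_id) = 0
  - duplicate_count(order_id) = 0
""")

scan.execute()
scan.assert_no_checks_fail()  # raises if any check failed
```

The same checks can live in a YAML file and run from the CLI or a scheduler, which is what makes a declarative DSL appealing: the expectations about the data are versioned alongside the pipelines that produce it.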

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

Prefect is the modern Dataflow Automation platform for the modern data stack, empowering data practitioners to build, run and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn’t get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline, and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100MM business critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect.

Data engineers don’t enjoy writing, maintaining, and modifying ETL pipelines all day, every day. Especially once they realize 90% of all major data sources like Google Analytics, Salesforce, Adwords, Facebook, Spreadsheets, etc., are already available as plug-and-play connectors with reliable, intuitive SaaS solutions. Hevo Data is a highly reliable and intuitive data pipeline platform used by data engineers from 40+ countries to set up and run low-latency ELT pipelines with zero maintenance. Boasting more than 150 out-of-the-box connectors that can be set up in minutes, Hevo also allows you to monitor and control your pipelines. You get: real-time data flow visibility, fail-safe mechanisms, and alerts if anything breaks; preload transformations and auto-schema mapping to precisely control how data lands in your destination; models and workflows to transform data for analytics; and reverse-ETL capability to move the transformed data back to your business software to inspire timely action. All of this, plus its transparent pricing and 24*7 live support, makes it consistently voted by users as the Leader in the Data Pipeline category on review platforms like G2. Go to dataengineeringpodcast.com/hevodata and sign up for a free 14-day trial that also comes

Azure Data Engineering Cookbook - Second Edition

Azure Data Engineering Cookbook is your ultimate guide to mastering data engineering on Microsoft's Azure platform. Through an engaging collection of recipes, this book breaks down the procedures to build sophisticated data pipelines, leveraging tools like Azure Data Factory, Data Lake, Databricks, and Synapse Analytics.

What this book will help me do

Efficiently process large datasets using Azure Synapse Analytics and Azure Databricks pipelines.
Transform and shape data within systems by leveraging Azure Synapse data flows.
Implement and manage relational databases in Azure with performance tuning and administration.
Configure data pipeline solutions integrated with Power BI for insightful reporting.
Monitor, optimize, and ensure lineage tracking for your data systems efficiently with Purview and Log Analytics.

Author(s)

Nagaraj Venkatesan is an experienced cloud architect specializing in Microsoft Azure, with years of hands-on data engineering expertise. Ahmad Osama is a seasoned data professional; the authors' shared emphasis is on practical learning and bridging theory with actionable skills.

Who is it for?

This book is essential for data engineers seeking expertise in Azure's rich engineering capabilities. It's tailored for professionals with a foundational knowledge of cloud services, looking to achieve advanced proficiency in Azure data engineering pipelines.

podcast_episode
by Victor Calanog (Moody's Analytics), Janice Stanton (Cushman & Wakefield), Cris deRitis, Mark Zandi (Moody's Analytics), Ryan Sweet

Janice Stanton, Executive Managing Director at Cushman & Wakefield, and Victor Calanog, Head of CRE at Moody's Analytics, join the podcast to share their views on commercial real estate and how it impacts the U.S. economy. Mark, Cris, and Ryan discuss recent developments in financial markets and the latest decision by the Fed. Full episode transcript.

More info on Janice Stanton: Ms. Stanton is an Executive Managing Director in the Capital Markets group at C&W. She is responsible for advising global investors on the real estate investment markets. Ms. Stanton has more than 25 years of industry experience in real estate investment research and analytics, finance, and pension fund management.

More info on Victor Calanog: Dr. Calanog is the Head of Commercial Real Estate Economics at Moody’s Analytics. He and his team of economists and analysts are responsible for the firm’s commercial real estate market forecasting, valuation, and portfolio analytics services.

For more on the long-term impacts to the built environment if remote and hybrid work models continue, check out the Future of Work research hub. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis @MiddleWayEcon for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

In today’s episode we’re talking to Dvir Shapira. Dvir is Chief Product Officer at Venn LocalZone, a company that’s creating a secure workspace for remote work.

We talk about:

…and much more.

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security, and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI

Today I’m chatting with Vin Vashishta, Founder of V Squared. Vin believes that with methodical strategic planning, companies can prepare for continuous transformation by removing the silos that exist between leadership, data, AI, and product teams. How can these barriers be overcome, and what is the impact of doing so? Vin answers those questions and more, explaining why process disruption is necessary for long-term success and giving real-world examples of companies that are adopting these strategies.

Highlights/ Skip to:

What the AI ‘Last Mile’ Problem is (03:09)
Why Vin sees so many businesses reevaluating their offerings and realigning with their core business model (09:01)
Why every company today is struggling to figure out how to bridge the gap between data, product, and business value (14:25)
How the skillsets needed for success are evolving for data, product, and business leaders (14:40)
Vin’s process when he’s helping a team with a data strategy, and what the end result looks like (21:53)
Why digital transformation is dead, and how to reframe what business transformation means in today’s day and age (25:03)
How Airbnb used data to inform their overall strategy to survive during a time of massive industry disruption, and how those strategies can be used by others as a preventative measure (29:03)
Unpacking how a data strategy leader can work backward from a high-level business strategy to determine actionable steps and use cases for ML and analytics (32:52)
Who (what roles) are ultimately responsible in an ideal strategy planning session? (34:41)
How the C-Suite can bridge business & data strategy and the impact the world’s largest companies are seeing as a result (36:01)

Quotes from Today’s Episode

“And when you have that [core business & technology strategy] disconnect, technology goes in one direction, what the business needs and what customers need sort of lives outside of the silo.” – Vin Vashishta (06:06)

“Why are we doing data and not just traditional software development? Why are we doing data science and not analytics? There has to be a justification because each one of these is more expensive than the last, each one is, you know, less certain.” – Vin Vashishta (10:36)

“[The right people to train] are smart about the technology, but have also lived with the users, have some domain expertise, and the interest in making a bigger impact. Let’s put them in strategy roles.” – Vin Vashishta (18:58)

“You know, this is never going to end. Transformation is continuous. I don’t call it digital transformation anymore because that’s making you think that this thing is somehow a once-in-a-generation change. It’s not. It’s once every five years now.” – Vin Vashishta (25:03)

“When do you want to have those [business] opportunities done by? When do you want to have those objectives completed by? Well, then that tells you how fast you have to transform if you want to use each one of these different technologies.” – Vin Vashishta (25:37)

“You’ve got to disrupt the process. Strategy planning is not the same anymore. Look at how Amazon does it. ... They are destroying their competitors because their strategy planning process is both expert and data model-driven.” – Vin Vashishta (33:44)

“And one of the critical things for CDOs to do is tell stories with data to the board. When they sit in and talk to the board, they need to tell those stories about how one data point hit this one use case and the company made $4 million.” – Vin Vashishta (39:33)

Links

HumblePod: https://humblepod.com
V Squared: https://datascience.vin
LinkedIn: https://www.linkedin.com/in/vineetvashishta/
Twitter: https://twitter.com/v_vashishta
YouTube channel: https://www.youtube.com/c/TheHighROIDataScientist
Substack: https://vinvashishta.substack.com/

podcast_episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Here at the Analytics Power Hour, we have a very clear delineation of who owns what when it comes to the show production. And ownership is the topic of this episode. It's possible that the owner of the episode description feels like this is an awfully touchy-feely topic, but said owner also knows that teamwork means going along with the majority when it comes to show topics. I guess that's joint ownership? Can that work? Sadly, that, specifically, was not discussed, but the show definitely earned its explicit rating with this episode! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Learning Microsoft Power BI

Microsoft Power BI is a data analytics and visualization tool powerful enough for the most demanding data scientists, but accessible enough for everyday use for anyone who needs to get more from data. The market has many books designed to train and equip professional data analysts to use Power BI, but few of them make this tool accessible to anyone who wants to get up to speed on their own. This streamlined intro to Power BI covers all the foundational aspects and features you need to go from "zero to hero" with data and visualizations. Whether you work with large, complex datasets or work in Microsoft Excel, author Jeremey Arnold shows you how to teach yourself Power BI and use it confidently as a regular data analysis and reporting tool.

You'll learn how to:

Import, manipulate, visualize, and investigate data in Power BI
Approach solutions for both self-service and enterprise BI
Use Power BI in your organization's business intelligence strategy
Produce effective reports and dashboards
Create environments for sharing reports and managing data access with your team
Determine the right solution for using Power BI offerings based on size, security, and computational needs