
Topic: Data Science

Tags: machine_learning, statistics, analytics

1516 tagged activities

Activity Trend: peak of 68 activities per quarter (2020-Q1 to 2026-Q1)

Activities

1516 activities · Newest first

Today I’m chatting with Katy Pusch, Senior Director of Product and Integration for Cox2M. Katy describes the lessons she’s learned around making sure that the “juice is always worth the squeeze” for new users to adopt data solutions into their workflow. She also explains the methodologies she’d recommend to data & analytics professionals to ensure their IoT and data products are widely adopted. Listen in to find out why this former analyst turned data product leader feels it’s crucial to focus on more than just delivering data or AI solutions, and how spending more time upfront performing qualitative research on users can wind up being more efficient in the long run than jumping straight into development.

Highlights/ Skip to:

What Katy does at Cox2M, and why the data product manager role is so hard to define (01:07)
Defining the value of the data in workflows and how that’s approached at Cox2M (03:13)
Who buys from Cox2M and the customer problems that Katy’s product solves (05:57)
How Katy approaches the zero-to-one process of taking IoT sensor data and turning it into a customer experience that provides a valuable solution (08:00)
What Katy feels best motivates the adoption of a new solution for users (13:21)
Katy describes how she spends more time upfront before development to ensure she’s solving the right problems for users (16:13)
Katy’s views on the importance of data science & analytics pros being able to communicate in the language of their audience (20:47)
The differences Katy sees between designing data products for sophisticated data users vs a broader audience (24:13)
The methods Katy uses to effectively perform qualitative research and her triangulation method to surface the real needs of end users (27:29)
Katy’s views on the most valuable skills for future data product managers (35:24)

Quotes from Today’s Episode

“I’ve had the opportunity to get a little bit closer to our customers than I was in the beginning parts of my tenure here at Cox2M. And it’s just like a SaaS product in the sense that the quality of your data is still dependent on your customers’ workflows and their ability to engage in workflows that supply accurate data. And it’s been a little bit enlightening to realize that the same is true for IoT.” – Katy Pusch (02:11)

“Providing insights to executives that are [simply] interesting is not really very impactful. You want to provide things that are actionable and that drive the business forward.” – Katy Pusch (4:43)

“So, there’s one side of it, which is [the] happy path: figure out a way to embed your product in the customer’s existing workflow. That’s where the most success happens. But in the situation we find ourselves in right now with [this IoT solution], we do have to ask them to change their workflow.” – Katy Pusch (12:46)

“And the way to communicate [the insight to other stakeholders] is not with being more precise with your numbers [or adding] statistics. It’s just to communicate the output of your analysis more clearly to the person who needs to be able to make a decision.” – Katy Pusch (23:15)

“You have to define ‘What decision is my user making on a repeated basis that is worth building something that it does automatically?’ And so, you say, ‘What are the questions that my user needs answers to on a repeated basis?’ … At its essence, you’re answering three or four questions for that user [that] have to be the most important [...] questions for your user to add value. And that can be a difficult thing to derive with confidence.” – Katy Pusch (25:55)

“The piece of workflow [on the IoT side] that’s really impactful there is we’re asking for an even higher degree of change management in that case because we’re asking them to attach this device to their vehicle, and then detach it at a different point in time and there’s a procedure in the solution to allow for that, but someone at the dealership has to engage in that process. So, there’s a change management in the workflow that the juice has to be worth the squeeze to encourage a customer to embark in that journey with you.” – Katy Pusch (12:08)

“Finding people in your organization who have the appetite to be cross-functionally educated, particularly in a data arena, is very important [to] help close some of those communication gaps.” – Katy Pusch (37:03)

podcast_episode
by Justin Fletcher (United States Space Force Space Systems Command)

We have had many guests on the show to discuss how different industries leverage data science to transform the way they do business, but arguably one of the most important applications of data science is in space research and technology.

Justin Fletcher joins the show to talk about how the US Space Force is using deep learning with telescope data to monitor satellites and potentially lethal space debris, and to identify and prevent catastrophic collisions. Justin is responsible for artificial intelligence and autonomy technology development within the Space Domain Awareness Delta of the United States Space Force Space Systems Command. With over a decade of experience spanning space domain awareness, high performance computing, and air combat effectiveness, Justin is a recognized leader in defense applications of artificial intelligence and autonomy.

In this episode, we talk about how the US Space Force utilizes deep learning, how the US Space Force publishes its research and data to obtain high-quality peer review, the must-have skills aspiring practitioners need in order to pursue a career in defense, and much more.

We talked about:

Supreet’s background
Responsible AI
Example of explainable AI
Responsible AI vs explainable AI
Explainable AI tools and frameworks (glass box approach)
Checking for bias in data and handling personal data
Understanding whether your company needs a certain type of data
Data quality checks and automation
Responsibility vs profitability
The human touch in AI
The trade-off between model complexity and explainability
Is completely automated AI out of the question?
Detecting model drift and overfitting
How Supreet became interested in explainable AI
Trustworthy AI
Reliability vs fairness
Bias indicators
The future of explainable AI
About DataBuzz
The diversity of data science roles
Ethics in data science
Conclusion

Links:

LinkedIn: https://www.linkedin.com/in/supreet-kaur1995/
Databuzz page: https://www.linkedin.com/company/databuzz-club/
Medium Blog Page: https://medium.com/@supreetkaur_66831

ML Zoomcamp: https://github.com/alexeygrigorev/mlbookcamp-code/tree/master/course-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

We talked about:

Audience Poll
Andrey’s background
What data science practice is
Best DS practice in a traditional company vs IT-centric companies
Getting started with building data science practice (finding out who you report to)
Who the initiative comes from
Finding out what kind of problems you will be solving (centralized approach)
Moving to a semi-decentralized approach
Resources to learn about data science practice
Pivoting from the role of a software engineer to data scientist
The most impactful realization from data science practice
Advice for individual growth
Finding Andrey online

Links:

Data Teams book: https://www.amazon.com/Data-Teams-Management-Successful-Data-Focused/dp/1484262271/

ML Zoomcamp: https://github.com/alexeygrigorev/mlbookcamp-code/tree/master/course-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

Data Science and Analytics for SMEs: Consulting, Tools, Practical Use Cases

Master the tricks and techniques of business analytics consulting as they apply to small-to-medium enterprises (SMEs). Written to help you hone your business analytics skills, this book applies data science techniques to solve problems and improve many aspects of a business's operations. SMEs are looking for ways to use data science and analytics, and this need is becoming increasingly pressing with the ongoing digital revolution. The topics covered in the book provide the knowledge needed to implement data science in a small business. The demand from small businesses for data analytics coincides with a growing number of freelance data science consulting opportunities, so the book also offers insight into how to navigate this new terrain. It takes a do-it-yourself approach to analytics and introduces tools that are easily available online and do not require programming. Data science allows SMEs to understand customer loyalty, market segmentation, sales, and revenue growth more clearly. Data Science and Analytics for SMEs is particularly focused on small businesses and explores the analytics and data that can help them succeed further.

What You'll Learn
Create and measure the success of your analytics projects
Start your business analytics consulting career
Apply the solutions taught in the book to practical use cases and problems

Who This Book Is For
Business analytics enthusiasts who are not particularly programming inclined, small business owners and data science consultants, data science and business students, and SME (small-to-medium enterprise) analysts

In this episode, Jason talks to Natalia Connolly, the Vice President of Data Science at Infinite Acres, about Agricultural Tech (AgTech) and how it is helping the agriculture industry keep up with increasing demand. With the assistance of technology and data, the agricultural industry has not just been improved but completely overhauled over time, with more sustainable methods and technology.

Today I’m chatting with Vin Vashishta, Founder of V Squared. Vin believes that with methodical strategic planning, companies can prepare for continuous transformation by removing the silos that exist between leadership, data, AI, and product teams. How can these barriers be overcome, and what is the impact of doing so? Vin answers those questions and more, explaining why process disruption is necessary for long-term success and giving real-world examples of companies that are adopting these strategies.

Highlights/ Skip to:

What the AI ‘Last Mile’ Problem is (03:09)
Why Vin sees so many businesses reevaluating their offerings and realigning with their core business model (09:01)
Why every company today is struggling to figure out how to bridge the gap between data, product, and business value (14:25)
How the skillsets needed for success are evolving for data, product, and business leaders (14:40)
Vin’s process when he’s helping a team with a data strategy, and what the end result looks like (21:53)
Why digital transformation is dead, and how to reframe what business transformation means in today’s day and age (25:03)
How Airbnb used data to inform their overall strategy to survive during a time of massive industry disruption, and how those strategies can be used by others as a preventative measure (29:03)
Unpacking how a data strategy leader can work backward from a high-level business strategy to determine actionable steps and use cases for ML and analytics (32:52)
Who (what roles) are ultimately responsible in an ideal strategy planning session? (34:41)
How the C-Suite can bridge business & data strategy and the impact the world’s largest companies are seeing as a result (36:01)

Quotes from Today’s Episode

“And when you have that [core business & technology strategy] disconnect, technology goes in one direction, what the business needs and what customers need sort of lives outside of the silo.” – Vin Vashishta (06:06)

“Why are we doing data and not just traditional software development? Why are we doing data science and not analytics? There has to be a justification because each one of these is more expensive than the last, each one is, you know, less certain.” – Vin Vashishta (10:36)

“[The right people to train] are smart about the technology, but have also lived with the users, have some domain expertise, and the interest in making a bigger impact. Let’s put them in strategy roles.” – Vin Vashishta (18:58)

“You know, this is never going to end. Transformation is continuous. I don’t call it digital transformation anymore because that’s making you think that this thing is somehow a once-in-a-generation change. It’s not. It’s once every five years now.” – Vin Vashishta (25:03)

“When do you want to have those [business] opportunities done by? When do you want to have those objectives completed by? Well, then that tells you how fast you have to transform if you want to use each one of these different technologies.” – Vin Vashishta (25:37)

“You’ve got to disrupt the process. Strategy planning is not the same anymore. Look at how Amazon does it. ... They are destroying their competitors because their strategy planning process is both expert and data model-driven.” – Vin Vashishta (33:44)

“And one of the critical things for CDOs to do is tell stories with data to the board. When they sit in and talk to the board. They need to tell those stories about how one data point hit this one use case and the company made $4 million.” – Vin Vashishta (39:33)

Links
HumblePod: https://humblepod.com
V Squared: https://datascience.vin
LinkedIn: https://www.linkedin.com/in/vineetvashishta/
Twitter: https://twitter.com/v_vashishta
YouTube channel: https://www.youtube.com/c/TheHighROIDataScientist
Substack: https://vinvashishta.substack.com/

podcast_episode
by David A. Bader (New Jersey Institute of Technology (NJIT))

We talked about:

David’s background
A day in the life of a professor
David’s current projects
Starting a school
The different types of professors
David’s recent papers
Similarities and differences between research labs and startups
Finding (or creating) good datasets
David’s lab
Balancing research and teaching as a professor
David’s most rewarding research project
David’s most underrated research project
David’s virtual data science seminars on YouTube
Teaching at universities without doing research
Staying up-to-date in research
David’s favorite conferences
Selecting topics for research
Convincing students to stay in academia and competing with industry
Finding David online

Links: 

David A. Bader: https://davidbader.net/
NJIT Institute for Data Science: https://datascience.njit.edu/
Arkouda: https://github.com/Bears-R-Us/arkouda
NJIT Data Science YouTube Channel: https://www.youtube.com/c/NJITInstituteforDataScience

ML Zoomcamp: https://github.com/alexeygrigorev/mlbookcamp-code/tree/master/course-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

Data Literacy may be an important skill for everyone to have, but the level of need is always unique to each individual. Some may need advanced technical skills in machine learning algorithms, while others may just need to be able to understand the basics. Regardless of where anyone sits on the skills spectrum, the data community can help accelerate their careers.

There’s no one who knows that better than Kate Strachnyi. Kate is the Founder and Community Manager at DATAcated, a company that is focused on bringing data professionals together and helping data companies reach their target audience through effective content strategies.

Kate has created courses on data storytelling as well as dashboard and visualization best practices, and she is also the author of several books on data science, including a children’s book about data literacy. Through her professional accomplishments and her content efforts online, Kate has not only built a massive online following but has also established herself as a leader in the data space.

In this episode, we talk about best practices in data visualization, the importance of technical skills and soft skills for data professionals, how to build a personal brand and overcome Imposter Syndrome, how data literacy can make or break organizations, and much more.

This episode of DataFramed is a part of DataCamp’s Data Literacy Month, where we raise awareness for Data Literacy throughout the month of September through webinars, workshops, and resources featuring thought leaders and subject matter experts that can help you build your data literacy, as well as your organization’s. For more information, visit: https://www.datacamp.com/data-literacy-month/for-teams

In this episode, Jason Foster talks to Stephen Galsworthy, Head of Data at TomTom, a leading provider of mapping and location technology. They discuss the gradual integration of artificial intelligence (AI) into data products to create a better user experience, how TomTom navigated the shift from hardware to software and AI, and the challenges associated with integrating AI with data. Stephen also shares his brilliant journey in data & analytics, his extensive experience leading data science teams since 2011 and how to align a data team depending on the maturity of the business.

Today I’m sitting down with Jon Cooke, founder and CTO of Dataception, to learn his definition of a data product and his views on generating business value with your data products. In our conversation, Jon explains his philosophy on data products and where design and UX fit in. We also review his conceptual model for data products (which he calls the data product pyramid), and discuss how, together, these concepts allow teams to ship working solutions that actually produce value, faster.

Highlights/ Skip to:

Jon’s definition of a data product (1:19)
Brian explains how UX research and design planning can and should influence data architecture, so that last mile solutions are useful and usable (9:47)
The four characteristics of a data product in Jon’s model (16:16)
The idea of products having a lifecycle with direct business/customer interaction/feedback (17:15)
Understanding Jon’s data product pyramid (19:30)
The challenges when customers/users don’t know what they want from data product teams, and who should be doing the work to surface requirements (24:44)
Mitigating risk and the importance of having management buy-in when adopting a product-driven approach (33:23)
Does the data product pyramid account for UX? (35:02)
What needs to change in an org model that produces data products that aren’t delivering good last mile UXs (39:20)

Quotes from Today’s Episode

“A data product is something that specifically solves a business problem, a piece of analytics, data use case, a pipeline, datasets, dashboard, that type that solves a business use case, and has a customer, and has a product lifecycle to it.” - Jon (2:15)

“I’m a fan of any definition that includes some type of deployment and use by some human being. That’s the end of the cycle, because the idea of a product is a good that has been made, theoretically, for sale.” - Brian (5:50)

“We don’t build a lot of stuff around cloud anymore. We just don’t build it from scratch. It’s like, you know, we don’t generate our own electricity, we don’t mill our own flour. You know, the cloud—there’s a bunch of composable services, which I basically pull together to build my application, whatever it is. We need to apply that thinking all the way through the stack, fundamentally.” - Jon (13:06)

“It’s not a data science problem, it’s not a business problem, it’s not a technology problem, it’s not a data engineering problem, it’s an everyone problem. And I advocate small, multidisciplinary teams, which have a business value person in it, have an SME, have a data scientist, have a data architect, have a data engineer, as a small pod that goes in and answer those questions.” - Jon (26:28)

“The idea is that you’re actually building the data products, which are the back-end, but you’re actually then also doing UX alongside that, you know? You’re doing it in tandem.” - Jon (37:36)

“Feasibility is one of the legs of the stools. There has to be market need, and your market just may be the sales team, but there needs to be some promise of value there that this person is really responsible for at the end of the day, is this data product going to create value or not?” - Brian (42:35)

“The thing about data products is sometimes you don’t know how feasible it is until you actually look at the data…You’ve got to do what we call data archaeology. You got to go and find the data, you got to brush it off, and you’re looking at and go, ‘Is it complete?’” - Jon (44:02)

Links Referenced:
Dataception
Data Product Pyramid
Email: [email protected]
LinkedIn: https://www.linkedin.com/in/jon-cooke-096bb0/

Practical Linear Algebra for Data Science

If you want to work in any computational or technical field, you need to understand linear algebra. As the study of matrices and operations acting upon them, linear algebra is the mathematical basis of nearly all algorithms and analyses implemented in computers. But the way it's presented in decades-old textbooks is much different from how professionals use linear algebra today to solve real-world modern applications. This practical guide from Mike X Cohen teaches the core concepts of linear algebra as implemented in Python, including how they're used in data science, machine learning, deep learning, computational simulations, and biomedical data processing applications. Armed with knowledge from this book, you'll be able to understand, implement, and adapt myriad modern analysis methods and algorithms. Ideal for practitioners and students using computer technology and algorithms, this book introduces you to:

The interpretations and applications of vectors and matrices
Matrix arithmetic (various multiplications and transformations)
Independence, rank, and inverses
Important decompositions used in applied linear algebra (including LU and QR)
Eigendecomposition and singular value decomposition
Applications including least-squares model fitting and principal components analysis
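
For a flavor of the applications mentioned above, here is a minimal NumPy sketch (an illustration of the general techniques, not an excerpt from the book): a least-squares model fit and principal components computed from the SVD of centered data. All variable names and data are invented.

```python
# Illustrative sketch (not from the book): least-squares fitting and PCA with NumPy.
import numpy as np

rng = np.random.default_rng(0)

# --- Least-squares model fit: y ~ X @ beta ---
X = np.column_stack([np.ones(100), rng.normal(size=100)])  # design matrix with intercept
beta_true = np.array([2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=100)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)           # solves min ||X b - y||^2
print("estimated coefficients:", beta_hat)

# --- Principal components via the SVD of centered data ---
data = rng.normal(size=(200, 5))
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained_variance = s**2 / (len(data) - 1)                # eigenvalues of the covariance matrix
print("top principal directions (rows):", Vt[:2])
print("explained variance:", explained_variance[:2])
```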

Many times, data scientists can fall into the trap of resume-driven development. As in, learning the shiniest, most advanced technique available to them in an attempt to solve a business problem. However, this is not what a learning mindset should look like for data teams.

As it turns out, taking a step back and focusing on the fundamentals and step-by-step iteration can be the key to growing as a data scientist, because when data teams develop a strong understanding of the problems and solutions lying underneath the surface, they will be able to wield their tools with complete mastery.

Ella Hilal joins the show to share why operating from an always-learning mindset will open up the path to true mastery and innovation for data teams. Ella is the VP of Data Science and Engineering for Commercial and Service Lines at Shopify, a global commerce leader that helps businesses of all sizes grow, market, and manage their retail operations. Recognized as a leading woman in data science, the Internet of Things, and machine learning, Ella has over 15 years of experience spanning multiple countries and is an advocate for responsible innovation, women in tech, and STEM.

In this episode, we talk about the biggest mistakes data scientists make when solving business problems, how to create cohesion between data teams and the broader organization, how to be an effective data leader that prioritizes their team’s growth, and how developing an always-learning mindset based on iteration, experimentation, and deep understanding of the problems needing to be solved can accelerate the growth of data teams.

Comet for Data Science

Discover how to manage and optimize the life cycle of your data science projects with Comet! By the end of this book, you will master preparing, analyzing, building, and deploying models, as well as integrating Comet into your workflow.

What this book will help me do
Master managing data science workflows with Comet.
Confidently prepare and analyze your data for effective modeling.
Deploy and monitor machine learning models using Comet tools.
Integrate Comet with DevOps and GitLab workflows for production readiness.
Apply Comet to advanced topics like NLP, deep learning, and time series analysis.

Author(s)
Angelica Lo Duca is an experienced author and data scientist with years of expertise in data science workflows and tools. She brings practical insights into integrating platforms like Comet into modern data science tasks.

Who is it for?
If you are a data science practitioner or programmer looking to understand and implement efficient project lifecycles using Comet, this book is tailored for you. A basic background in data science and programming is recommended, but prior expertise in Comet is unnecessary.
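
As context for what "integrating Comet into your workflow" typically looks like, here is a minimal, hypothetical sketch using the comet_ml Python SDK; the project name, parameters, and metric values are made up, and an API key is assumed to be configured in the environment. It is a sketch of the general experiment-tracking pattern, not code from the book.

```python
# Minimal illustrative sketch of experiment tracking with the comet_ml SDK.
# Project name, parameters, and metrics are hypothetical.
from comet_ml import Experiment

# Assumes a Comet API key is available (e.g. via the COMET_API_KEY environment variable).
experiment = Experiment(project_name="demo-churn-model")

# Log hyperparameters once at the start of a training run.
experiment.log_parameters({"learning_rate": 0.01, "n_estimators": 200})

# Log metrics as training/evaluation proceeds (here just a single value).
experiment.log_metric("validation_accuracy", 0.87)

# Close the experiment so all data is flushed to the Comet UI.
experiment.end()
```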

Most of us take for granted that food is always available to us when we need it. Our local supermarkets have shelves stacked with produce from all corners of the world. Rarely do we stop to think that the items in our shopping carts have been on a long journey involving months of work by many people. How does all this food get produced in the first place, reliably, consistently and to a high standard? How do we combine and utilise scarce resources to feed billions of people around the world every day? I recently caught up with Serg Masis to answer these questions and understand how data science is used to optimise food production around the world. Serg is a Climate & Agronomic Data Scientist at global agriculture company Syngenta and author of the book ‘Interpretable Machine Learning with Python’. In this episode of Leaders of Analytics, we discuss:

The biggest challenges facing our global food system and how data science can help solve these
How data science is used to help the environment
Why Serg wrote the book ‘Interpretable Machine Learning with Python’ and why we should read it
How to make models more interpretable, and much more.

Connect with Serg:
Serg's website: https://www.serg.ai/#about-me
Serg on LinkedIn: https://www.linkedin.com/in/smasis/
Serg's books from Packt: https://www.packtpub.com/authors/serg-masis

Machine learning models are often thought to be mainly utilized by large tech companies that run large and powerful models to accomplish a wide array of tasks. However, machine learning models are finding an increasing presence in edge devices such as smart watches.

ML engineers are learning how to compress models and fit them into smaller and smaller devices while retaining accuracy, effectiveness, and efficiency. The goal is to empower domain experts in any industry around the world to effectively use machine learning models without having to become experts in the field themselves.

Daniel Situnayake is the Founding TinyML Engineer and Head of Machine Learning at Edge Impulse, a leading development platform for embedded machine learning used by over 3,000 enterprises across more than 85,000 ML projects globally. Dan has over 10 years of experience as a software engineer at companies including Google (where he worked on TensorFlow Lite) and Loopt, and he co-founded Tiny Farms, America’s first insect farming technology company. He wrote the book "TinyML" and the forthcoming "AI at the Edge".

Daniel joins the show to talk about his work with EdgeML, the biggest challenges facing the field of embedded machine learning, the potential use cases of machine learning models in edge devices, and the best tips for aspiring machine learning engineers and data science practitioners to get started with embedded machine learning.

Python for Data Analysis, 3rd Edition

Get the definitive handbook for manipulating, processing, cleaning, and crunching datasets in Python. Updated for Python 3.10 and pandas 1.4, the third edition of this hands-on guide is packed with practical case studies that show you how to solve a broad set of data analysis problems effectively. You'll learn the latest versions of pandas, NumPy, and Jupyter in the process. Written by Wes McKinney, the creator of the Python pandas project, this book is a practical, modern introduction to data science tools in Python. It's ideal for analysts new to Python and for Python programmers new to data science and scientific computing. Data files and related material are available on GitHub.

Use the Jupyter notebook and IPython shell for exploratory computing
Learn basic and advanced features in NumPy
Get started with data analysis tools in the pandas library
Use flexible tools to load, clean, transform, merge, and reshape data
Create informative visualizations with matplotlib
Apply the pandas groupby facility to slice, dice, and summarize datasets
Analyze and manipulate regular and irregular time series data
Learn how to solve real-world data analysis problems with thorough, detailed examples
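
As a quick, hypothetical illustration of the kind of groupby summarization the book covers (the column names and data below are invented, not taken from the book's examples):

```python
# Hypothetical illustration of pandas groupby aggregation; the data is made up.
import pandas as pd

sales = pd.DataFrame({
    "region": ["north", "north", "south", "south", "south"],
    "product": ["widget", "gadget", "widget", "widget", "gadget"],
    "revenue": [120.0, 80.0, 200.0, 150.0, 90.0],
})

# Slice, dice, and summarize: total and mean revenue per region and product.
summary = sales.groupby(["region", "product"])["revenue"].agg(["sum", "mean"])
print(summary)
```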

We talked about:

Danny’s background
What an MLOps Architect does
The popularity of MLOps Architect as a role
Convincing an employer that you can wear many different hats
Interviewing for the role of an MLOps Architect
How Danny prioritizes work with data scientists
Coming to WhyLabs when you’ve already got something in production vs nothing in production
Market awareness regarding the importance of model monitoring
How Danny (WhyLabs) chooses tools
ONNX
Common trends in tooling setups
The most rewarding thing for Danny in ML and data science
Danny’s secret for staying sane while wearing so many different hats
T-shaped specialist, E-shaped specialist, and the horizontal line
The importance of background for the role of an MLOps Architect
Key differences for WhyLogs free vs paid
Conclusion and where to find Danny online

Links:

Matt Turck: https://mattturck.com/data2021/
AI Observability Platform: https://whylabs.ai/observability
Danny's LinkedIn: https://www.linkedin.com/in/dleybz/
Whylabs' website: https://whylabs.ai/
AI Infrastructure Alliance: https://ai-infrastructure.org/

ML Zoomcamp: https://github.com/alexeygrigorev/mlbookcamp-code/tree/master/course-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

Hands-On Healthcare Data

Healthcare is the next frontier for data science. Using the latest in machine learning, deep learning, and natural language processing, you'll be able to solve healthcare's most pressing problems: reducing cost of care, ensuring patients get the best treatment, and increasing accessibility for the underserved. But first, you have to learn how to access and make sense of all that data. This book provides pragmatic and hands-on solutions for working with healthcare data, from data extraction to cleaning and harmonization to feature engineering. Author Andrew Nguyen covers specific ML and deep learning examples with a focus on producing high-quality data. You'll discover how graph technologies help you connect disparate data sources so you can solve healthcare's most challenging problems using advanced analytics. You'll learn:

Different types of healthcare data: electronic health records, clinical registries and trials, digital health tools, and claims data
The challenges of working with healthcare data, especially when trying to aggregate data from multiple sources
Current options for extracting structured data from clinical text
How to make trade-offs when using tools and frameworks for normalizing structured healthcare data
How to harmonize healthcare data using terminologies, ontologies, and mappings and crosswalks
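
To make the harmonization idea concrete, here is a small, hypothetical sketch (not from the book) of applying a code crosswalk in pandas to map source-system codes onto a shared terminology; all codes, descriptions, and column names are invented for illustration.

```python
# Hypothetical sketch of harmonizing codes from two source systems via a crosswalk.
# All codes and column names below are invented for illustration.
import pandas as pd

# Records exported from two different systems, each with its own local coding.
records = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "source_system": ["ehr_a", "ehr_b", "ehr_a"],
    "local_code": ["DM2", "250.00", "HTN"],
})

# A crosswalk mapping each (system, local code) pair to a shared target code.
crosswalk = pd.DataFrame({
    "source_system": ["ehr_a", "ehr_b", "ehr_a"],
    "local_code": ["DM2", "250.00", "HTN"],
    "harmonized_code": ["E11", "E11", "I10"],  # e.g. codes from a common terminology
})

# Join records against the crosswalk to attach the harmonized code.
harmonized = records.merge(crosswalk, on=["source_system", "local_code"], how="left")
print(harmonized)
```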

Snowflake: The Definitive Guide

Snowflake's ability to eliminate data silos and run workloads from a single platform creates opportunities to democratize data analytics, allowing users at all levels within an organization to make data-driven decisions. Whether you're an IT professional working in data warehousing or data science, a business analyst or technical manager, or an aspiring data professional wanting to get more hands-on experience with the Snowflake platform, this book is for you. You'll learn how Snowflake users can build modern integrated data applications and develop new revenue streams based on data. Using hands-on SQL examples, you'll also discover how the Snowflake Data Cloud helps you accelerate data science by avoiding replatforming or migrating data unnecessarily. You'll be able to:

Efficiently capture, store, and process large amounts of data at an amazing speed
Ingest and transform real-time data feeds in both structured and semistructured formats and deliver meaningful data insights within minutes
Use Snowflake Time Travel and zero-copy cloning to produce a sensible data recovery strategy that balances system resilience with ongoing storage costs
Securely share data and reduce or eliminate data integration costs by accessing ready-to-query datasets available in the Snowflake Marketplace
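
As a rough illustration of the Time Travel and zero-copy cloning features mentioned above, here is a hedged sketch using the snowflake-connector-python package; the connection parameters and table names are placeholders, your account setup may differ, and this is not an example taken from the book.

```python
# Rough sketch of Snowflake Time Travel and zero-copy cloning from Python.
# Connection parameters and table names are placeholders, not real credentials.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account identifier
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="public",
)
cur = conn.cursor()

# Time Travel: query the table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print("row count one hour ago:", cur.fetchone()[0])

# Zero-copy cloning: create an instant, storage-efficient copy for recovery or testing.
cur.execute("CREATE TABLE orders_backup CLONE orders")

cur.close()
conn.close()
```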