talk-data.com

Topic: Analytics

Tags: data_analysis, insights, metrics

4552 tagged

Activity Trend

398 peak/qtr, 2020-Q1 to 2026-Q1

Activities

4552 activities · Newest first

Summary: The ecosystem for data tools has been going through rapid and constant evolution over the past several years. These technological shifts have brought about corresponding changes in data and platform architectures for managing data and analytical workflows. In this episode Colleen Tartow shares her insights into the motivating factors and benefits of the most prominent patterns that are in the popular narrative; data mesh and the modern data stack. She also discusses her views on the role of the data lakehouse as a building block for these architectures and the ongoing influence that it will have as the technology matures.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show! Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos. Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and damage is done. Datafold built automated regression testing to help data and analytics engineers deal with data quality in their pull requests.
Datafold shows how a change in SQL code affects your data, both on a statistical level and down to individual rows and values before it gets merged to production. No more shipping and praying, you can now know exactly what will change in your database! Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold. Tired of deploying bad data? Need to automate data pipelines with less red tape? Shipyard is the premier data orchestration platform built to help your data team quickly launch, monitor, and share workflows in a matter of minutes. Build powerful workflows that connect your entire data stack end-to-end with a mix of your code and their open-source, low-code templates. Once launched, Shipyard makes data observability easy with logging, alerting, and retries that will catch errors before your business team does. So whether you’re ingesting data from an API, transforming it with dbt, updating BI tools, or sending data alerts, Shipyard centralizes these operations and handles the heavy lifting so your data team can finally focus on what they’re good at — solving problems with data. Go to dataengineeringpodcast.com/shipyard to get started automating with their free developer plan today! Your host is Tobias Macey and today I’m interviewing Colleen Tartow about her views on the forces shaping th

Colleague Gaurav Ganguly, Senior Director at Moody's Analytics, joins the podcast to examine the economic state of Europe and whether it is headed into a recession. The gang also discusses the latest GDP release, recession odds, and beer of choice. Full episode transcript available. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.

Questions or comments? Please email us at [email protected]. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

This talk tells the story of how we have approached data and analytics as a startup at Preset and how the need for a data orchestrator grew over time. Our stack is (loosely) Fivetran/Segment/dbt/BigQuery/Hightouch, and we finally got to a place where we suffer quite a bit from not having an orchestrator and are bringing in Airflow to address our orchestration needs. This talk is about how startups approach solving data challenges, the shifting role of the orchestrator in the modern data stack, and the growing need for an orchestrator as your data platform becomes more complex.

Data + AI Summit 2022 Keynote from John Deere: Revolutionizing agriculture with AI

Hear Ganesh Jayaram, CIO of John Deere, talk about how the company is leveraging big data and AI to deliver ‘smart’ industrial solutions that are revolutionizing agriculture, driving sustainability and ultimately helping to feed the world. The John Deere Data Factory, built upon the Databricks Lakehouse Platform, is at the core of this innovation. It ingests 8 petabytes of data and trillions of records to give data teams fast, reliable access to standardized data sets, delivering over 3,000 ML and analytics use cases that democratize data across John Deere and foster a culture of empowerment where data is everybody's responsibility.

Visit the Data + AI Summit at https://databricks.com/dataaisummit/

Data Engineering with Alteryx

Dive into 'Data Engineering with Alteryx' to master the principles of DataOps while learning to build robust data pipelines using Alteryx. This book guides you through key practices to enhance data pipeline reliability, efficiency, and accessibility, making it an essential resource for modern data professionals.

What this Book will help me do
Understand and implement DataOps practices within Alteryx workflows.
Design and develop data pipelines with Alteryx Designer for efficient data processing.
Learn to manage and publish pipelines using Alteryx Server and Alteryx Connect.
Gain advanced skills in Alteryx for handling spatial analytics and machine learning.
Master techniques to monitor, secure, and optimize data workflows and access.

Author(s)
Paul Houghton is an experienced data engineer and author specializing in data engineering and DataOps. With extensive experience using Alteryx tools and workflows, Paul has a passion for teaching and sharing his knowledge through clear and practical guidance. His hands-on approach ensures readers successfully navigate and apply technical concepts to real-world projects.

Who is it for?
This book is ideal for data engineers, data scientists, and data analysts aiming to build reliable data pipelines with Alteryx. You do not need prior experience with Alteryx, but familiarity with data workflows will enhance your learning experience. If you're focused on aligning with DataOps methodologies, this book is tailored for you.

Mastering Microsoft Power BI - Second Edition

Dive deep into Microsoft Power BI with the second edition of 'Mastering Microsoft Power BI'. This comprehensive book equips you with the skills to transform business data into actionable insights using Power BI's latest features and techniques. From efficient data retrieval and transformation processes to creating interactive dashboards that tell impactful data stories, you will gain actionable knowledge every step of the way.

What this Book will help me do
Learn to master data collection and modeling using the Power Query M language
Gain expertise in designing DirectQuery, import, and composite data models
Understand how to create advanced analytics reports using DAX and Power BI visuals
Learn to manage the Power BI environment as an administrator with Premium capacity
Develop insightful, scalable, and visually impactful dashboards and reports

Author(s)
Greg Deckler, a seasoned Power BI expert and solution architect, and Powell, an experienced BI consultant and data visualization specialist, bring their extensive practical knowledge to this book. Together, they share their real-world expertise and proven techniques applying Power BI's diverse capabilities.

Who is it for?
This book is ideal for business intelligence professionals and intermediate Power BI users. If you're looking to master data visualization, prepare insightful dashboards, and explore Power BI's full potential, this is for you. Basic understanding of BI concepts and familiarity with Power BI will ensure you get the most value.

Every company, regardless of size, is dealing with a barrage of data. In any typical organisation, there is more information on hand than we know how to use or manage. While every team in the organisation is screaming for analytics professionals to turn data into insight, a strong data and analytics tech stack is foundational to being able to make sense of it all. The need for a robust and efficient data and analytics tech stack has created a sprawling industry for new technology solutions that sell the promise of seamless integration and faster insights. Today, there are a plethora of data and analytics platforms available, most with very high valuations attached to them. But do we really need all these tools to make us super-powered data users? To answer this question and many more related to the data and analytics tech stack, I recently spoke to Benn Stancil. Benn is the co-founder and Chief Analytics Officer at Mode. Mode is a modern analytics and BI solution that combines SQL, Python, R and visual analysis to answer questions for its users. In this episode of Leaders of Analytics, you will learn:
What the perfect analytics tech stack looks like and why
Programmatic automation of the analytics workflow
What cutting-edge analytics tech will be able to do 5-10 years from now
Why Benn thinks the Chief Analytics Officer role should be redefined, and much more.
Connect with Benn:
Benn on LinkedIn: https://www.linkedin.com/in/benn-stancil/
Benn on Twitter: https://twitter.com/bennstancil
Benn's (brilliant) Substack blog: https://benn.substack.com/

Today I sit down with Vijay Yadav, head of the data science team at Merck Manufacturing Division. Vijay begins by relating his own path to adopting a data product and UX-driven approach to applied data science, and our chat quickly turns to the ever-present challenge of user adoption. Vijay discusses his process of designing data products with customers, as well as the impact that building user trust has on delivering business value. We go on to talk about what metrics can be used to quantify adoption and downstream value, and then Vijay discusses the financial impact he has seen at Merck using this user-oriented perspective. While we didn’t see eye to eye on everything, Vijay was able to show how focusing on the last mile UX has had a multi-million dollar impact on Merck. The conversation concludes with Vijay’s words of advice for other data science directors looking to get started with a design and user-centered approach to building data products that achieve adoption and have measurable impact.

In our chat, we covered Vijay’s design process, metrics, business value, and more: 

Vijay shares how he came to approach data science with a data product management approach and how UX fits in (1:52)
We discuss overcoming the challenge of user adoption by understanding user thinking and behavior (6:00)
We talk about the potential problems and solutions when users self-diagnose their technology needs (10:23)
Vijay delves into what his process of designing with a customer looks like (17:36)
We discuss the impact “solving on the human level” has on delivering real world benefits and building user trust (21:57)
Vijay talks about measuring user adoption and quantifying downstream value—and Brian discusses his concerns about tool usage metrics as means of doing this (25:35)
Brian and Vijay discuss the multi-million dollar financial and business impact Vijay has seen at Merck using a more UX-driven approach to data product development (31:45)
Vijay shares insight on what steps a head of data science might wish to take to get started implementing a data product and UX approach to creating ML and analytics applications that actually get used (36:46)

Quotes from Today’s Episode “They will adopt your solution if you are giving them everything they need so they don’t have to go look for a workaround.” - Vijay (4:22)

“It’s really important that you not only capture the requirements, you capture the thinking of the user, how the user will behave if they see a certain way, how they will navigate, things of that nature.” - Vijay (7:48)

“When you’re developing a data product, you want to be making sure that you’re taking the holistic view of the problem that can be solved, and the different group of people that we need to address. And, you engage them, right?” - Vijay (8:52)

“When you’re designing in low fidelity, it allows you to design with users because you don’t spend all this time building the wrong thing upfront, at which point it’s really expensive in time and money to go and change it.” - Brian (17:11)

"People are the ones who make things happen, right? You have all the technology, everything else looks good, you have the data, but the people are the ones who are going to make things happen.” - Vijay (38:47)

“You want to make sure that you [have] a strong team and motivated team to deliver. And the human spirit is something, you cannot believe how stretchable it is. If the people are motivated, [and even if] you have less resources and less technology, they will still achieve [your goals].” - Vijay (42:41)

“You’re trying to minimize any type of imposition on [the user], and make it obvious why your data product  is better—without disruption. That’s really the key to the adoption piece: showing how it is going to be better for them in a way they can feel and perceive. Because if they don’t feel it, then it’s just another hoop to jump through, right?” - Brian (43:56)

Resources and Links:  LinkedIn: https://www.linkedin.com/in/vijyadav/

Big Data Analytics and Machine Intelligence in Biomedical and Health Informatics

BIG DATA ANALYTICS AND MACHINE INTELLIGENCE IN BIOMEDICAL AND HEALTH INFORMATICS provides coverage of developments and state-of-the-art methods in the broad and diversified data analytics field and applicable areas such as big data analytics, data mining, and machine intelligence in biomedical and health informatics. The novel applications of big data analytics and machine intelligence in the biomedical and healthcare sector form an emerging field comprising computer science, medicine, biology, natural environmental engineering, and pattern recognition. Biomedical and health informatics is a new era that brings tremendous opportunities and challenges due to the plentifully available biomedical data, and the aim is to ensure high-quality and efficient healthcare by analyzing that data. The 12 chapters in Big Data Analytics and Machine Intelligence in Biomedical and Health Informatics cover the latest advances and developments in health informatics, data mining, machine learning, and artificial intelligence. They have been organized with respect to the similarity of topics addressed, ranging from issues pertaining to the Internet of Things (IoT) for biomedical engineering and health informatics, computational intelligence for medical data processing, and the Internet of Medical Things (IoMT). New researchers and practitioners working in the field will benefit from reading the book as they can quickly ascertain the best performing methods and compare the different approaches.

Audience
Researchers and practitioners working in the fields of biomedicine, health informatics, big data analytics, Internet of Things, and machine learning.

Building data science functions has become table stakes for many organizations today. However, before data science functions were needed, the finance function acted as the insights layer for many organizations. This means that working in finance has become an effective entry point into the data science function for professionals across all spectrums.

Brian Richardi is the Head of Finance Data Science and Analytics at Stryker, a medical equipment manufacturing company based in Michigan, US. Brian brings over 14 years of global experience to the table. At Stryker, Brian leads a team of data scientists that use business data and machine learning to make predictions for optimization and automation.

In this episode, Brian talks about his experience as a data science leader transitioning from finance, how he utilizes collaboration and effective communication to drive value, how he leads the finance data science function at Stryker, what the future of data science looks like in the finance space, and more.

An analytics center of excellence is the cornerstone of every data strategy, yet few data leaders know how to design one that works effectively. The key is to embrace federated techniques that balance standards with speed, and agility with governance. This article explains the core components of an analytics center of excellence. Published at: https://www.eckerson.com/articles/how-to-design-an-analytics-center-of-excellence

Summary: The proliferation of sensors and GPS devices has dramatically increased the number of applications for spatial data, and the need for scalable geospatial analytics. In order to reduce the friction involved in aggregating disparate data sets that share geographic similarities the Unfolded team built a platform that supports working across raster, vector, and tabular data in a single system. In this episode Isaac Brodsky explains how the Unfolded platform is architected, their experience joining the team at Foursquare, and how you can start using it for analyzing your spatial data today.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
Unstruk is the DataOps platform for your unstructured data. The options for ingesting, organizing, and curating unstructured files are complex, expensive, and bespoke. Unstruk Data is changing that equation with their platform approach to manage your unstructured assets. Built to handle all of your real-world data, from videos and images, to 3d point clouds and geospatial records, to industry specific file formats, Unstruk streamlines your workflow by converting human hours into machine minutes, and automatically alerting you to insights found in your dark data. Unstruk handles data versioning, lineage tracking, duplicate detection, consistency validation, as well as enrichment through sources including machine learning models, 3rd party data, and web APIs. Go to dataengineeringpodcast.com/unstruk today to transform your messy collection of unstructured data files into actionable assets that power your business. Your host is Tobias Macey and today I’m interviewing Isaac Brodsky about Foursquare’s Unfolded platform for working w

Even You Can Learn Statistics and Analytics: An Easy to Understand Guide

THE GUIDE FOR ANYONE AFRAID TO LEARN STATISTICS & ANALYTICS, UPDATED WITH NEW EXAMPLES & EXERCISES
This book discusses statistics and analytics using plain language and avoiding mathematical jargon. If you thought you couldn't learn these data analysis subjects because they were too technical or too mathematical, this book is for you! This edition delivers more everyday examples and end-of-chapter exercises and contains updated instructions for using Microsoft Excel. You'll use downloadable data sets and spreadsheet solutions, template-based solutions you can put right to work. Using this book, you will understand the important concepts of statistics and analytics, including learning the basic vocabulary of these subjects.
Create tabular and visual summaries and learn to avoid common charting errors
Gain experience working with common descriptive statistics measures including the mean, median, and mode; and standard deviation and variance, among others
Understand the probability concepts that underlie inferential statistics
Learn how to apply hypothesis tests, using Z, t, chi-square, ANOVA, and other techniques
Develop skills using regression analysis, the most commonly used inferential statistical method
Explore results produced by predictive analytics software
Choose the right statistical or analytic techniques for any data analysis task
Optionally, read the Equation Blackboards, designed for readers who want to learn about the mathematical foundations of selected methods ...
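The descriptive measures the blurb lists (mean, median, mode, standard deviation, variance) can all be computed with Python's standard library; a minimal sketch, using a made-up daily-sales sample purely for illustration:

```python
from statistics import mean, median, mode, stdev

# Hypothetical sample of daily sales counts (illustrative only)
sales = [12, 15, 15, 18, 20, 22, 25]

print(round(mean(sales), 2))   # arithmetic mean
print(median(sales))           # middle value of the sorted sample: 18
print(mode(sales))             # most frequent value: 15
print(round(stdev(sales), 2))  # sample standard deviation
```

The same measures are what a spreadsheet's AVERAGE, MEDIAN, MODE, and STDEV.S functions would return for this column of data.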

Analytics for Retail: A Step-by-Step Guide to the Statistics Behind a Successful Retail Business

Examine select retail business scenarios to learn basic mathematics, as well as probability and statistics required to analyze big data. This book focuses on useful and imperative applied analytics needed to build a retail business and explains mathematical concepts essential for decision making and communication in retail business environments. Everyone is a buyer or seller of products these days, whether through a physical department store, Amazon, or their own business website. This book is a step-by-step guide to understanding and managing the mechanics of markups, markdowns, and basic statistics, math, and computers that will help in your retail business. You'll tackle what to do with data once it has accumulated and see how to arrange the data using descriptive statistics, primarily mean, median, and mode, and then how to read the corresponding charts and graphs. Analytics for Retail is your path to creating visual representations that powerfully communicate information and drive decisions.

What You'll Learn
Review standard statistical concepts to enhance your understanding of retail data
Understand the concepts of markups, markdowns and profit margins, and probability
Conduct an A/B testing email campaign with all the relevant analytics calculated and explained

Who This Book Is For
This is a primer book for anyone in the field of retail who needs to learn or refresh their skills, or for a reader who wants to move within their company to a more analytical position.
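The markup and markdown mechanics mentioned above reduce to simple ratios; a sketch in Python, where the function names and the $40-cost/$50-price example are illustrative rather than taken from the book:

```python
def markup_pct(cost, price):
    """Markup expressed as a percentage of cost."""
    return (price - cost) / cost * 100

def margin_pct(cost, price):
    """Gross margin expressed as a percentage of selling price."""
    return (price - cost) / price * 100

def markdown_pct(original_price, sale_price):
    """Markdown expressed as a percentage of the original price."""
    return (original_price - sale_price) / original_price * 100

# An item bought for $40, priced at $50, later marked down to $45
print(markup_pct(40, 50))     # 25.0 (% of cost)
print(margin_pct(40, 50))     # 20.0 (% of price)
print(markdown_pct(50, 45))   # 10.0 (% off original price)
```

Note that markup and margin use different denominators, which is exactly the kind of distinction the book drills on: a 25% markup on cost is only a 20% margin on price.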

Ten Things to Know About ModelOps

The past few years have seen significant developments in data science, AI, machine learning, and advanced analytics. But the wider adoption of these technologies has also brought greater cost, risk, regulation, and demands on organizational processes, tasks, and teams. This report explains how ModelOps can provide both technical and operational solutions to these problems. Thomas Hill, Mark Palmer, and Larry Derany summarize important considerations, caveats, choices, and best practices to help you be successful with operationalizing AI/ML and analytics in general. Whether your organization is already working with teams on AI and ML, or just getting started, this report presents ten important dimensions of analytic practice and ModelOps that are not widely discussed, or perhaps even known. In part, this report examines:
Why ModelOps is the enterprise "operating system" for AI/ML algorithms
How to build your organization's IP secret sauce through repeatable processing steps
How to anticipate risks rather than react to damage done
How ModelOps can help you deliver the many algorithms and model formats available
How to plan for success and monitor for value, not just accuracy
Why AI will soon be regulated and how ModelOps helps ensure compliance

Julia Coronado, President and Founder, MacroPolicy Perspectives, joins the podcast to discuss whether we can talk ourselves into a recession, the mixed messages on consumer sentiment, and what the odds of a downturn are. She also crushes the statistics game. Full episode transcript available. For more from Julia Coronado, follow her @jc_econ. Follow Mark Zandi @MarkZandi, Ryan Sweet @RealTime_Econ and Cris deRitis on LinkedIn for additional insight.


In-Memory Analytics with Apache Arrow

Discover the power of in-memory data analytics with "In-Memory Analytics with Apache Arrow." This book delves into Apache Arrow's unique capabilities, enabling you to handle vast amounts of data efficiently and effectively. Learn how Arrow improves performance, offers seamless integration, and simplifies data analysis in diverse computing environments.

What this Book will help me do
Gain proficiency with the datastore facilities and data types defined by Apache Arrow.
Master the Arrow Flight APIs to efficiently transfer data between systems.
Learn to leverage in-memory processing advantages offered by Arrow for state-of-the-art analytics.
Understand how Arrow interoperates with popular tools like Pandas, Parquet, and Spark.
Develop and deploy high-performance data analysis pipelines with Apache Arrow.

Author(s)
Matthew Topol, the author of the book, is an experienced practitioner in data analytics and Apache Arrow technology. Having contributed to the development and implementation of Arrow-powered systems, he brings a wealth of knowledge to readers. His ability to delve deep into technical concepts while keeping explanations practical makes this book an excellent guide for learners of the subject.

Who is it for?
This book is ideal for professionals in the data domain including developers, data analysts, and data scientists aiming to enhance their data manipulation capabilities. Beginners with some familiarity with data analysis concepts will find it beneficial, as well as engineers designing analytics utilities. Programming examples accommodate users of C, Go, and Python, making it broadly accessible.

Microsoft Power BI Data Analyst Certification Guide

This book is your ultimate companion to mastering Microsoft Power BI and becoming proficient in data analysis and visualization. With a focus on understanding and utilizing Power BI to its fullest extent, this guide also prepares you comprehensively for the PL-300 certification exam. You will go from the basics to advanced techniques, enabling you to confidently analyze and present data.

What this Book will help me do
Understand and connect to various data sources using Power BI.
Gain skills in transforming and preparing data for advanced analysis.
Develop expertise in designing and optimizing data models.
Learn to create insightful reports and dashboards to convey information clearly.
Prepare for and succeed in the PL-300 certification exam with practice questions.

Author(s)
Authors Edenfield and Corcoran bring extensive experience in business intelligence and data analytics to this book. They have years of hands-on expertise with Power BI and a passion for teaching analytics in a practical and accessible way. Together, they aim to empower readers to master Power BI and achieve their certification goals.

Who is it for?
This book is perfect for data analysts, business intelligence professionals, and anyone aiming to deepen their knowledge of Microsoft Power BI. Beginners will find approachable content to quickly get started, while experienced users will find detailed topics to refine their expertise. By covering exam preparation and practical applications, this guide benefits a wide range of learners who wish to get certified and excel in data-centric roles.

Beginning Data Science in R 4: Data Analysis, Visualization, and Modelling for the Data Scientist

Discover best practices for data analysis and software development in R and start on the path to becoming a fully-fledged data scientist. Updated for the R 4.0 release, this book teaches you techniques for both data manipulation and visualization and shows you the best way for developing new software packages for R. Beginning Data Science in R 4, Second Edition details how data science is a combination of statistics, computational science, and machine learning. You’ll see how to efficiently structure and mine data to extract useful patterns and build mathematical models. This requires computational methods and programming, and R is an ideal programming language for this. Modern data analysis requires computational skills and usually a minimum of programming. After reading and using this book, you'll have what you need to get started with R programming with data science applications. Source code will be available to support your next projects as well. Source code is available at github.com/Apress/beg-data-science-r4.

What You Will Learn
Perform data science and analytics using statistics and the R programming language
Visualize and explore data, including working with large data sets found in big data
Build an R package
Test and check your code
Practice version control
Profile and optimize your code

Who This Book Is For
Those with some data science or analytics background, but not necessarily experience with the R programming language.

Democratizing data, and developing data culture in large enterprise organizations is an incredibly complex process that can seem overwhelming if you don’t know where to start. And today’s guest draws a clear path towards becoming data-driven.

Meenal Iyer, Sr. Director for Data Science and Experimentation at Tailored Brands, Inc., has over 20 years of experience as a data and analytics strategist. She has built several data and analytics platforms and drives the enterprises she works with to be insights-driven. Meenal has also led data teams at various retail organizations, and has a wide variety of specialties in data science, including data literacy programs, data monetization, machine learning, enterprise data governance, and more.

In this episode, Meenal shares her thorough, effective, and clear strategy for democratizing data successfully and how that helps create a successful data culture in large enterprises, and gives you the tools you need to do the same in your organization.

[Announcement] Join us for DataCamp Radar, our digital summit on June 23rd. During this summit, a variety of experts from different backgrounds will be discussing everything related to the future of careers in data. Whether you're recruiting for data roles or looking to build a career in data, there’s definitely something for you. Seats are limited, and registration is free, so secure your spot today on https://events.datacamp.com/radar/