talk-data.com

Topic

Big Data

data_processing analytics large_datasets

1217 tagged

Activity Trend

Peak of 28 activities per quarter, 2020-Q1 to 2026-Q1

Activities

1217 activities · Newest first

MATLAB Machine Learning Recipes: A Problem-Solution Approach

Harness the power of MATLAB to resolve a wide range of machine learning challenges. This book provides a series of examples of technologies critical to machine learning. Each example solves a real-world problem, and all code in MATLAB Machine Learning Recipes: A Problem-Solution Approach is executable. The toolbox that the code uses provides a complete set of functions for implementing all aspects of machine learning. Authors Michael Paluszek and Stephanie Thomas show how these technologies let the reader build sophisticated applications for pattern recognition, autonomous driving, expert systems, and much more.

What you'll learn:

How to write code for machine learning, adaptive control, and estimation using MATLAB
How these three areas complement each other
Why all three are needed for robust machine learning applications
How to use MATLAB graphics and visualization tools for machine learning
How to code real-world examples in MATLAB for major big data applications of machine learning

Who is this book for: The primary audiences are engineers, data scientists, and students who want a comprehensive cookbook rich in examples of machine learning with MATLAB.
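The book's recipes are written in MATLAB; as a quick taste of its estimation theme, here is a minimal sketch (in Python, like the other sketches on this page, rather than MATLAB) of a one-dimensional Kalman-style estimator. The data, noise level, and prior are invented for illustration.

```python
import numpy as np

# Simulate a constant true value observed through noisy measurements,
# then estimate it recursively, one measurement at a time.
rng = np.random.default_rng(0)
true_value = 5.0
measurements = true_value + rng.normal(0, 1.0, size=50)

estimate, variance = 0.0, 1e6     # vague prior: we know almost nothing
R = 1.0                           # assumed measurement noise variance
for z in measurements:
    K = variance / (variance + R)      # Kalman gain: trust in the new measurement
    estimate += K * (z - estimate)     # correct the estimate by the innovation
    variance *= (1 - K)                # posterior uncertainty shrinks each step

print(f"estimate={estimate:.2f} (true value {true_value})")
```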

Send us a text

Adam Weinstein is CEO and co-founder of Cursor. He previously worked at LinkedIn as a senior manager of business development and founded enGreet, a print-on-demand greeting card company that merged crowdsourcing with social expressions. In this episode, he describes his data analytics company and offers insight into building a successful startup.


Shownotes

00:00 - Check us out on YouTube and SoundCloud!   

00:10 - Connect with Producer Steve Moore on LinkedIn & Twitter   

00:15 - Connect with Producer Liam Seston on LinkedIn & Twitter.   

00:20 - Connect with Producer Rachit Sharma on LinkedIn.

00:25 - Connect with Host Al Martin on LinkedIn & Twitter.   

00:55 - Connect with Adam Weinstein on LinkedIn.

03:55 - Find out more about Cursor.

06:45 - Learn more about Cursor's Co-Founder and CEO Adam Weinstein.

13:10 - Learn more about Big Data Analytics.

19:20 - What are Python and Jupyter Notebooks?

26:35 - Learn more about Data Fluency.

35:30 - What is a startup? 

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Summary

Building internal expertise around big data in a large organization is a major competitive advantage. However, it can be a difficult process due to compliance requirements and the need to scale globally from day one. In this episode Jesper Søgaard and Keld Antonsen share the story of starting and growing the big data group at LEGO. They discuss the challenges of operating at global scale from the start, hiring and training talented engineers, prototyping and deploying new systems in the cloud, and what they have learned in the process. This is a useful conversation for engineers, managers, and leadership who are interested in building enterprise big data systems.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.

To help other people find the show please leave a review on iTunes or Google Play Music, tell your friends and co-workers, and share it on social media.

Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Your host is Tobias Macey and today I’m interviewing Keld Antonsen and Jesper Soegaard about the data infrastructure and analytics that powers LEGO

Interview

Introduction

How did you get involved in the area of data management?

My understanding is that the big data group at LEGO is a fairly recent development. Can you share the story of how it got started?

What kinds of data practices were in place prior to starting a dedicated group for managing the organization’s data?

What was the transition process like, migrating data silos into a uniformly managed platform?

What are the biggest data challenges that you face at LEGO?

What are some of the most critical sources and types of data that you are managing?

What are the main components of the data infrastructure that you have built to support the organization’s analytical needs?

What are some of the technologies that you have found to be most useful? Which have been the most problematic?

What does the team structure look like for the data services at LEGO?

Does that reflect in the types/numbers of systems that you support?

What types of testing, monitoring, and metrics do you use to ensure the health of the systems you support?

What have been some of the most interesting, challenging, or useful lessons that you have learned while building and maintaining the data platforms at LEGO?

How have the data systems at LEGO evolved over recent years as new technologies and techniques have been developed?

How does the global nature of the LEGO business influence the design strategies and technology choices for your platform?

What are you most excited for in the coming year?

Contact Info

Jesper

LinkedIn

Keld

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

LEGO Group

ERP (Enterprise Resource Planning)

Predictive Analytics

Prescriptive Analytics

Hadoop

Center Of Excellence

Continuous Integration

Spark

Podcast Episode

Apache NiFi

Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Machine Learning with Apache Spark Quick Start Guide

"Machine Learning with Apache Spark Quick Start Guide" introduces you to the fundamental concepts and tools needed to harness the power of Apache Spark for data processing and machine learning. This book combines practical examples and real-world scenarios to show you how to manage big data efficiently while uncovering actionable insights through advanced analytics. What this Book will help me do Understand the role of Apache Spark in the big data ecosystem. Set up and configure an Apache Spark development environment. Learn and implement supervised and unsupervised learning models using Spark MLlib. Apply advanced analytical algorithms to real-world big data problems. Develop and deploy real-time machine learning pipelines with Apache Spark. Author(s) None Quddus is an experienced practitioner in the fields of big data, distributed technologies, and machine learning. With a career dedicated to using advanced analytics to solve real-world problems, Quddus brings practical expertise to each topic addressed. Their approachable writing style ensures readers can apply concepts effectively, even in complex scenarios. Who is it for? This book is ideal for business analysts, data analysts, and data scientists who are eager to gain hands-on experience with big data technologies. Whether you are new to Apache Spark or looking to expand your knowledge of its machine learning capabilities, this guide provides the tools and insights necessary to achieve those goals. Technical professionals wanting to develop their skills in processing and analyzing big data will find this resource invaluable.

Fast Data Architectures for Streaming Applications, 2nd Edition

Why have stream-oriented data systems become so popular, when batch-oriented systems have served big data needs for many years? In the updated edition of this report, Dean Wampler examines the rise of streaming systems for handling time-sensitive problems, such as detecting fraudulent financial activity as it happens. You’ll explore the characteristics of fast data architectures, along with several open source tools for implementing them. Batch processing isn’t going away, but exclusive use of such systems is now a competitive disadvantage. You’ll learn that, while fast data architectures using tools such as Kafka, Akka, Spark, and Flink are much harder to build, they represent the state of the art for dealing with mountains of data that require immediate attention.

Learn how a basic fast data architecture works, step by step
Examine how Kafka’s data backplane combines the best abstractions of log-oriented and message queue systems for integrating components
Evaluate four streaming engines: Kafka Streams, Akka Streams, Spark, and Flink
Learn which streaming engines work best for different use cases
Get recommendations for making real-world streaming systems responsive, resilient, elastic, and message driven
Explore an example IoT streaming application that includes telemetry ingestion and anomaly detection
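To make the telemetry-ingestion-plus-anomaly-detection idea concrete, here is a minimal sketch using the third-party kafka-python client. The topic name, broker address, payload field, and 3-sigma threshold are all assumptions; a production fast data architecture would use a streaming engine like those the report evaluates.

```python
import json
import statistics
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "device-telemetry",                       # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

window = []                                   # rolling window of recent readings
for message in consumer:
    reading = message.value["temperature"]    # assumed payload field
    window.append(reading)
    window = window[-100:]                    # keep only the last 100 readings
    if len(window) >= 30:                     # wait for enough history
        mean = statistics.mean(window)
        stdev = statistics.pstdev(window) or 1.0
        if abs(reading - mean) / stdev > 3:   # flag readings beyond 3 sigma
            print(f"anomaly: {reading} (window mean {mean:.1f})")
```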

Numerical Python: Scientific Computing and Data Science Applications with Numpy, SciPy and Matplotlib

Leverage the numerical and mathematical modules in Python and its standard library, as well as popular open source numerical Python packages like NumPy, SciPy, FiPy, matplotlib, and more. This fully revised edition, updated with the latest details of each package and changes to Jupyter projects, demonstrates how to numerically compute solutions and mathematically model applications in big data, cloud computing, financial engineering, business management, and more. Numerical Python, Second Edition, presents many brand-new case study examples of applications in data science and statistics using Python, along with extensions to many previous examples. Each of these demonstrates the power of Python for rapid development and exploratory computing due to its simple and high-level syntax and multiple options for data analysis. After reading this book, readers will be familiar with many computing techniques, including array-based and symbolic computing, visualization and numerical file I/O, equation solving, optimization, interpolation and integration, and domain-specific computational problems such as differential equation solving, data analysis, statistical modeling, and machine learning.

What You'll Learn:

Work with vectors and matrices using NumPy
Plot and visualize data with Matplotlib
Perform data analysis tasks with Pandas and SciPy
Review statistical modeling and machine learning with statsmodels and scikit-learn
Optimize Python code using Numba and Cython

Who This Book Is For: Developers who want to understand how to use Python and its related ecosystem for numerical computing.
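A short sketch in the book's territory: fitting a model to noisy data with SciPy and visualizing the result with Matplotlib. The synthetic dataset and exponential model are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def model(x, a, b):
    """Exponential decay, a common fitting target in the sciences."""
    return a * np.exp(-b * x)

# Generate noisy observations of a known curve.
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
y = model(x, 2.5, 1.3) + 0.1 * rng.standard_normal(x.size)

params, _ = curve_fit(model, x, y)   # least-squares parameter estimates

plt.scatter(x, y, label="data")
plt.plot(x, model(x, *params), color="red",
         label=f"fit a={params[0]:.2f}, b={params[1]:.2f}")
plt.legend()
plt.show()
```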

In this episode, Daniel Graham dissects the capabilities of data lakes and compares them to data warehouses. He talks about the primary use cases of data lakes and how they are vital for big data ecosystems. He then goes on to explain the role of data warehouses, which are still responsible for timely and accurate data but no longer play a central role. In the end, both Wayne Eckerson and Dan Graham settle on a common definition for modern data architectures.

Daniel Graham has more than 30 years in IT, consulting, research, and product marketing, with almost 30 years at leading database management companies. Dan was a Strategy Director in IBM’s Global BI Solutions division and General Manager of Teradata’s high-end server divisions. During his tenure as a product marketer, Dan has been responsible for MPP data management systems, data warehouses, and data lakes, and most recently, the Internet of Things and streaming systems.

Apache Spark 2: Data Processing and Real-Time Analytics

Build efficient data flow and machine learning programs with this flexible, multi-functional open-source cluster-computing framework.

Key Features:

Master the art of real-time big data processing and machine learning
Explore a wide range of use cases to analyze large data
Discover ways to optimize your work by using the many features of Spark 2.x and Scala

Book Description: Apache Spark is an in-memory, cluster-based data processing system that provides a wide range of functionalities such as big data processing, analytics, machine learning, and more. With this Learning Path, you can take your knowledge of Apache Spark to the next level by learning how to expand Spark's functionality and build your own data flow and machine learning programs on this platform. You will work with the different modules in Apache Spark, such as interactive querying with Spark SQL, using DataFrames and datasets, implementing streaming analytics with Spark Streaming, and applying machine learning and deep learning techniques on Spark using MLlib and various external tools. By the end of this elaborately designed Learning Path, you will have all the knowledge you need to master Apache Spark and build your own big data processing and analytics pipeline quickly and without any hassle.

This Learning Path includes content from the following Packt products:

Mastering Apache Spark 2.x by Romeo Kienzler
Scala and Spark for Big Data Analytics by Md. Rezaul Karim, Sridhar Alla
Apache Spark 2.x Machine Learning Cookbook by Siamak Amirghodsi, Meenakshi Rajendran, Broderick Hall, Shuen Mei

What you will learn:

Get to grips with all the features of Apache Spark 2.x
Perform highly optimized real-time big data processing
Use ML and DL techniques with Spark MLlib and third-party tools
Analyze structured and unstructured data using SparkSQL and GraphX
Understand tuning, debugging, and monitoring of big data applications
Build scalable and fault-tolerant streaming applications
Develop scalable recommendation engines

Who this book is for: If you are an intermediate-level Spark developer looking to master the advanced capabilities and use cases of Apache Spark 2.x, this Learning Path is ideal for you. Big data professionals who want to learn how to integrate and use the features of Apache Spark to build a strong big data pipeline will also find this Learning Path useful. To grasp the concepts explained here, you must know the fundamentals of Apache Spark and Scala.
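As a small illustration of the interactive-querying module mentioned above, here is a hedged PySpark sketch that registers a DataFrame as a temporary view and queries it with Spark SQL. The table and column names are invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-sketch").getOrCreate()

# Toy orders table: region and order amount.
orders = spark.createDataFrame(
    [("eu", 120.0), ("us", 80.0), ("eu", 60.0), ("apac", 200.0)],
    ["region", "amount"],
)
orders.createOrReplaceTempView("orders")   # expose the DataFrame to SQL

# Interactive aggregation, exactly as you would type it in a SQL shell.
spark.sql(
    "SELECT region, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY total DESC"
).show()
```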

Recent technology developments are driving urgency to modernize data management. What do you do about architecture, modeling, quality, and governance to keep up with big data, cloud, self-service, and other trends in data and technology? Examining some best practices can spark ideas of where to begin.

Originally published at https://www.eckerson.com/articles/stepping-up-to-modern-data-management

Send us a text

Jason Tatge, CEO, president, and co-founder of Farmobile, joins the show to discuss data in the agriculture industry. The conversation touches on Jason's experience launching a startup, tips for finding success, and the value of big data from a farmer's perspective. This episode gives insight into data science for one of the oldest and most important sectors in our society.

Show Notes

00:00 - Check us out on YouTube and SoundCloud.

00:10 - Connect with producer Liam Seston on LinkedIn and Twitter.

00:15 - Connect with producer Steve Moore on LinkedIn and Twitter.

00:24 - Connect with host Al Martin on LinkedIn and Twitter.

01:20 - Connect with guest Jason Tatge on LinkedIn and Twitter.

04:24 - Get some insights into commodity trading.

10:09 - Check out Farmobile.com.

14:21 - Here are some more reasons why data collection in farming is so important.

22:21 - How data collection in farming is driving greater efficiency.

27:33 - Learn about pipeline entrepreneurs here.

Follow @IBMAnalytics

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Computational Methods for Data Analysis

This graduate text covers a variety of mathematical and statistical tools for the analysis of big data coming from biology, medicine, and economics. Neural networks, Markov chains, tools from statistical physics, and wavelet analysis are used to develop efficient computational algorithms, which are then used for the processing of real-life data using MATLAB.
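Markov chains are one of the tools the text applies; as a minimal illustration (in Python rather than MATLAB), this sketch simulates a two-state chain and checks that the empirical state occupancy approaches the stationary distribution. The transition matrix is invented.

```python
import numpy as np

# Row i gives the transition probabilities out of state i.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

rng = np.random.default_rng(1)
state, states = 0, []
for _ in range(10_000):
    states.append(state)
    state = rng.choice(2, p=P[state])   # step according to the current row

# For this P, solving pi = pi @ P gives the stationary distribution (0.8, 0.2);
# the empirical occupancy below should be close to it.
print(np.bincount(states) / len(states))
```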

Send us a text

Paul Zikopoulos, VP of big data cognitive systems at IBM, joins us to discuss tactics for both career and personal growth. Paul is also an established author and public speaker, and he leverages experiences gained through those pursuits in the advice he gives. Have a pen and paper ready, as there is a lot to take away from this enlightening conversation.

Show notes

00:00 - Check us out on YouTube.

00:00 - We are now on SoundCloud.

00:10 - Add producer Liam Seston on LinkedIn and Twitter.

00:15 - Add producer Steve Moore on LinkedIn and Twitter.

00:25 - Add host Al Martin on LinkedIn and Twitter.

01:43 - Connect with Paul Zikopoulos on LinkedIn and Twitter.

07:02 - Get up to speed with Watson Studio.

10:16 - Develop a continuous learning lifestyle.

14:27 - How to figure out what you want out of a job.

20:55 - How to succeed with failure.

24:50 - "Get comfortable feeling uncomfortable."

30:54 - Here are some tips to make time for the gym.

38:28 - "Don't let other people define you."

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Dynamic Oracle Performance Analytics: Using Normalized Metrics to Improve Database Speed

Use an innovative approach that relies on big data and advanced analytical techniques to analyze and improve Oracle Database performance. The approach used in this book represents a step-change paradigm shift away from traditional methods. Instead of relying on a few hand-picked, favorite metrics, or wading through multiple specialized tables of information such as those found in an automatic workload repository (AWR) report, you will draw on all available data, applying big data methods and analytical techniques to help the performance tuner draw impactful, focused performance improvement conclusions. This book briefly reviews past and present practices, along with available tools, to help you recognize areas where improvements can be made. The book then guides you through a step-by-step method that can be used to take advantage of all available metrics to identify problem areas and work toward improving them. The method presented simplifies the tuning process and solves the problem of metric overload. You will learn how to: collect and normalize data, generate deltas that are useful in performing statistical analysis, create and use a taxonomy to enhance your understanding of problem performance areas in your database and its applications, and create a root cause analysis report that enables understanding of a specific performance problem and its likely solutions.

What You'll Learn:

Collect and prepare metrics for analysis from a wide array of sources
Apply statistical techniques to select relevant metrics
Create a taxonomy to provide additional insight into problem areas
Provide a metrics-based root cause analysis of the performance issue
Generate an actionable tuning plan prioritized according to problem areas
Monitor performance using database-specific normal ranges

Who This Book Is For: Professional tuners responsible for maintaining the efficient operation of large-scale databases who wish to focus on analysis, who want to expand their repertoire to include a big data methodology and use metrics without being overwhelmed, and who desire to provide accurate root cause analysis and avoid the cyclical fix-test cycles that are inevitable when speculation is used.
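A hedged sketch of the general normalize-and-delta idea, not the book's exact method: convert cumulative metric snapshots into per-interval deltas, normalize each metric against its own history, and flag unusual intervals. The CSV layout and column names are assumptions.

```python
import pandas as pd

# Assumed columns: snapshot timestamp, metric name, cumulative value.
df = pd.read_csv("awr_metrics.csv", parse_dates=["snap_time"])

# Cumulative counters -> per-interval deltas, computed within each metric.
df = df.sort_values(["metric", "snap_time"])
df["delta"] = df.groupby("metric")["value"].diff()

# Normalize each metric against its own history (z-score), so metrics
# with wildly different scales become comparable.
grp = df.groupby("metric")["delta"]
df["z"] = (df["delta"] - grp.transform("mean")) / grp.transform("std")

# Intervals far outside a metric's normal range are tuning candidates.
print(df[df["z"].abs() > 3].sort_values("z", ascending=False).head(20))
```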

Send us a text

In the latest episode of "Making Data Simple," host Al Martin invites Jeff Jonas, CEO, founder, and chief scientist at Senzing Inc., to discuss use cases of AI and big data. The discussion ranges from Jeff's personal achievements, including his remarkable recovery from quadriplegia and his completion of Ironman triathlon races around the globe, to the birth of his company Senzing Inc. Suit up for what is truly an engaging conversation.

Show notes

00:00 - Check out our YouTube channel.

00:10 - Connect with producer Liam Seston on LinkedIn and Twitter.

00:15 - Connect with producer Steve Moore on LinkedIn and Twitter.

00:24 - Connect with host Al Martin on LinkedIn and Twitter.

01:28 - Connect with guest Jeff Jonas on LinkedIn and Twitter.

02:08 - Not sure what the difference between a triathlon and an Ironman triathlon is?

02:28 - Here's how NORA and other security software applications are being employed in Las Vegas.

13:22 - Here's an interesting article about parent/child naming conventions.

16:26 - Check out Jeff's keynote at IBM Think 2018.

18:55 - Check out these 6 other brands with the "try then buy" sales method.

23:30 - Try out Senzing for yourself at senzing.com.

27:41 - Get an inside look at what it's like to live in a hotel, full-time.

31:49 - Need to brush up on Context Computing? Jeff Jonas explains it here.

33:12 - Check out these 10 Ironman triathlon facts.

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Hands-On Big Data Modeling

This book, Hands-On Big Data Modeling, provides practical guidance on data modeling techniques, focusing particularly on the challenges of big data. You will learn the concepts behind various data models, explore tools and platforms for efficient data management, and gain hands-on experience with structured and unstructured data.

What this book will help me do:

Master the fundamental concepts of big data and its challenges.
Explore advanced data modeling techniques using SQL, Python, and R.
Design effective models for structured, semi-structured, and unstructured data types.
Apply data modeling to real-world datasets like social media and sensor data.
Optimize data models for performance and scalability on various big data platforms.

Author(s): The authors of this book are experienced data architects and engineers with a strong background in developing scalable data solutions. They bring their collective expertise to simplify complex concepts in big data modeling, ensuring readers can effectively apply these techniques in their projects.

Who is it for? This book is intended for data architects, business intelligence professionals, and any programmer interested in understanding and applying big data modeling concepts. If you are already familiar with basic data management principles and want to enhance your skills, this book is perfect for you. You will learn to tackle real-world datasets and create scalable models. It is also suitable for professionals transitioning to big data frameworks.
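As a small illustration of modeling semi-structured data, one of the data types the book covers, here is a sketch that flattens nested JSON records (as might come from a social-media API) into tabular form with pandas. The record shape is invented.

```python
import pandas as pd

# Hypothetical semi-structured records: nested objects plus a list field.
posts = [
    {"id": 1, "user": {"name": "ana", "followers": 120}, "tags": ["data", "spark"]},
    {"id": 2, "user": {"name": "bo", "followers": 45}, "tags": ["python"]},
]

# json_normalize promotes nested fields to dotted columns; list fields stay as-is.
df = pd.json_normalize(posts)
print(df[["id", "user.name", "user.followers", "tags"]])
```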

Hands-On Data Science with R

Dive into "Hands-On Data Science with R" and embark on a journey to master the R language for practical data science applications. This comprehensive guide walks through data manipulation, visualization, and advanced analytics, preparing you to tackle real-world data challenges with confidence. What this Book will help me do Understand how to utilize popular R packages effectively for data science tasks. Learn techniques for cleaning, preprocessing, and exploring datasets. Gain insights into implementing machine learning models in R for predictive analytics. Master the use of advanced visualization tools to extract and communicate insights. Develop expertise in integrating R with big data platforms like Hadoop and Spark. Author(s) This book was written by experts in data science and R including Doug Ortiz and his co-authors. They bring years of industry experience and a desire to teach, presenting complex topics in an approachable manner. Who is it for? Designed for data analysts, statisticians, or programmers with basic R knowledge looking to dive into machine learning and predictive analytics. If you're aiming to enhance your skill set or gain confidence in tackling real-world data problems, this book is an excellent choice.

Learn R for Applied Statistics: With Data Visualizations, Regressions, and Statistics

Gain the R programming language fundamentals for doing the applied statistics useful for data exploration and analysis in data science and data mining. This book covers topics ranging from R syntax basics, descriptive statistics, and data visualizations to inferential statistics and regressions. After learning R’s syntax, you will work through data visualizations such as histograms and boxplot charting, descriptive statistics, and inferential statistics such as the t-test, chi-square test, ANOVA, non-parametric tests, and linear regressions. Learn R for Applied Statistics is a timely skills-migration book that equips you with the R programming fundamentals and introduces you to applied statistics for data exploration.

What You Will Learn:

Discover R, statistics, data science, data mining, and big data
Master the fundamentals of R programming, including variables and arithmetic, vectors, lists, data frames, conditional statements, loops, and functions
Work with descriptive statistics
Create data visualizations, including bar charts, line charts, scatter plots, boxplots, and histograms
Use inferential statistics, including t-tests, chi-square tests, ANOVA, non-parametric tests, linear regressions, and multiple linear regressions

Who This Book Is For: Those who are interested in data science, in particular data exploration using applied statistics, and the use of R programming for data visualizations.

Hands-On Data Science with SQL Server 2017

In "Hands-On Data Science with SQL Server 2017," you will discover how to implement end-to-end data analysis workflows, leveraging SQL Server's robust capabilities. This book guides you through collecting, cleaning, and transforming data, querying for insights, creating compelling visualizations, and even constructing predictive models for sophisticated analytics. What this Book will help me do Grasp the essential data science processes and how SQL Server supports them. Conduct data analysis and create interactive visualizations using Power BI. Build, train, and assess predictive models using SQL Server tools. Integrate SQL Server with R, Python, and Azure for enhanced functionality. Apply best practices for managing and transforming big data with SQL Server. Author(s) Marek Chmel and Vladimír Mužný bring their extensive experience in data science and database management to this book. Marek is a seasoned database specialist with a strong background in SQL, while Vladimír is known for his instructional expertise in analytics and data manipulation. Together, they focus on providing actionable insights and practical examples tailored for data professionals. Who is it for? This book is an ideal resource for aspiring and seasoned data scientists, data analysts, and database professionals aiming to deepen their expertise in SQL Server for data science workflows. Beginners with fundamental SQL knowledge will find it a guided entry into data science applications. It is especially suited for those who aim to implement data-driven solutions in their roles while leveraging SQL's capabilities.

In this episode, Wayne Eckerson asks Charles Reeves about his organization’s Internet of Things and big data strategy. Reeves is senior manager of BI and analytics at Graphic Packaging International, a leader in the packaging industry with hundreds of customers worldwide. He has 25 years of professional experience in IT management, including nine years in reporting, analytics, and data governance.

In this podcast, Stephen Wunker discusses the future of organizations through cost innovation and how some enterprises connect a successful pricing strategy with their data strategy. He sheds light on what successful companies do to stay competitive and keep innovating their cost strategies to find effective customer connections, and he shares some of the challenges leaders face in adopting a successful cost innovation strategy. The book "Costovation" and this podcast are relevant for anyone seeking innovative ways to define their cost strategies. It is especially relevant for data science leaders who want to understand how they could transform sales by connecting cost and innovation.

Timelines:
0:30 Stephen's journey.
6:25 Introducing "costovation".
10:10 Cost management in the age of "freemium" and open source.
12:35 Key points of costovation.
15:40 Resolving issues between cost and innovation.
18:26 Introducing radical ideas of innovation to companies.
21:40 Gauging innovation.
24:20 Role of data in costovation.
26:15 Why adopt costovation?
31:44 Innovation tips and suggestions.
34:45 Example of a company that is practicing costovation.
37:15 Tenets of good leadership.
39:50 Scalability of costovation.
43:17 Costovation and the customer.
47:47 Stephen's favorite reads.
49:45 Key takeaways.

Stephen's Book: Costovation: Innovation That Gives Your Customers Exactly What They Want--And Nothing More by Stephen Wunker, Jennifer Luo Law amzn.to/2xYyRFs

Stephen's Recommended Reads:
The Three-Box Solution: A Strategy for Leading Innovation by Vijay Govindarajan amzn.to/2y2Sex6
Made to Stick: Why Some Ideas Survive and Others Die by Chip Heath, Dan Heath amzn.to/2Ct2SRV
The Innovator's Solution: Creating and Sustaining Successful Growth by Clayton M. Christensen, Michael E. Raynor amzn.to/2DZ6jRK

Podcast Link: https://futureofdata.org/stephen-wunker-on-future-of-customer-success-through-cost-innovation-and-data/

Stephen's BIO: Stephen Wunker is the founder and managing director of New Markets Advisors, a Boston-based consultancy focused on innovation and growth strategy.

With a long track record of creating successful ventures, Stephen has consulted for multinational firms and start-ups across six continents, developing dozens of new growth platforms for clients over the past decade. He also pioneered both mobile commerce and mobile marketing, and he led the team that created one of the world's first smartphones.

In addition to his entrepreneurial and corporate ventures, he worked for years alongside the leading innovation authority, Harvard Business School Professor Clayton Christensen, helping to establish Christensen's consulting practice, Innosight. His previous experience includes years with the management consultancy Bain & Company, the Rockefeller Brothers Fund, and the Soros Foundations.

Stephen holds an MBA from Harvard Business School, a Master's in Public Administration from Columbia University, and a BA cum laude from Princeton University. Coauthor of “COSTOVATION: Innovation that Gives Your Customers Exactly What They Want—and Nothing More” (HarperCollins Leadership, Aug. 14), his third book, Stephen has contributed to Harvard Business Review, Forbes, and a range of journals, and has appeared on Bloomberg TV, BBC and other broadcasts. He has lived in the United States, United Kingdom, Netherlands, Japan, Ecuador, and Zambia, and is now based in Boston.

About #Podcast:

The FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners onto the show to discuss their journeys in creating the data-driven future.

Wanna join? If you or anyone you know wants to join in, register your interest by emailing us @ [email protected]

Want to sponsor? Email us @ [email protected]

Keywords: FutureOfData, DataAnalytics, Leadership, Futurist, Podcast, BigData, Strategy