talk-data.com

Topic: Data Analytics

Tags: data_analysis, statistics, insights

760 activities tagged

Activity Trend: peak of 38 activities per quarter, 2020-Q1 to 2026-Q1

Activities

760 activities · Newest first

Practical Business Intelligence

Master the art of business intelligence in just a few steps with this hands-on guide. By following the detailed examples and techniques in this book, you'll learn to create effective BI solutions that analyze data for strategic decision-making. You'll explore tools like D3.js, R, Tableau, QlikView, and Python to visualize data and gain actionable insights.

What this book will help me do:
Develop the ability to create self-service reporting environments for business analytics.
Understand and apply SQL techniques to aggregate and manipulate data effectively.
Design and implement data models suitable for analytical and reporting purposes.
Connect data warehouses with advanced BI tools to streamline reporting processes.
Analyze and visualize data using industry-leading tools like D3.js, R, Tableau, and Python.

Author(s): Written by seasoned experts in data analytics and business intelligence, the authors bring years of industry experience and practical insights to this well-rounded guide. They specialize in turning complex data into manageable, insightful BI solutions. Their writing style is approachable yet detailed, ensuring you gain both foundational and advanced knowledge in a structured way.

Who is it for? This book caters to data enthusiasts and professionals in roles such as data analysis, BI development, or data management. It's perfect for beginners seeking practical BI skills, as well as experienced developers looking to integrate and implement sophisticated BI tools. The focus is on actionable insights, making it ideal for anyone aiming to leverage data for business growth.

In this session, Scott Zoldi, Chief Analytics Officer at FICO, sat with Vishal Kumar, CEO of AnalyticsWeek, and shared his journey as an analytics executive, best practices and hacks for upcoming executives, and the challenges/opportunities he's observing as a Chief Analytics Officer. Scott discussed creating a data-driven culture and what leaders can do to get buy-in for building strong data science capabilities. He also discussed his passion for security analytics, shared some best practices for setting up a Cyber Security Center of Excellence, and described the traits future leaders should have.

Timeline:

0:29 Scott's journey. 5:10 On Falcon Fraud Manager. 9:12 Areas in security where AI works. 11:40 FICO's dealing with new products. 15:30 Center of excellence for cyber security. 22:00 Should a center of excellence be in-house or in partnership? 28:22 The CAO role at FICO. 31:14 Is FICO inward-facing or outward-facing? 32:12 Being analytical in a gut-based organization. 35:54 Art of doing business and science of doing business. 38:22 Challenges as CAO at FICO. 41:09 Opportunity for data science in the security space. 45:54 Qualities required for a CAO. 48:54 Tips for a data scientist to get hired at FICO.

Podcast link: https://futureofdata.org/analyticsweek-leadership-podcast-with-scott-zoldi-cao-fico/

Here's Scott Zoldi's Bio: Scott Zoldi is Chief Analytics Officer at FICO, responsible for the analytic development of FICO’s product and technology solutions, including the FICO™ Falcon® Fraud Manager product, which protects about two-thirds of the world’s payment card transactions from fraud. While at FICO, Scott has been responsible for authoring 72 analytic patents, 36 patents granted, and 36 in process. Scott is actively involved in developing new analytic products and Big Data analytics applications, many of which leverage new streaming artificial intelligence innovations such as adaptive analytics, collaborative profiling, and self-learning models. Scott is most recently focused on the applications of streaming self-learning analytics for real-time detection of Cyber Security attacks and Money Laundering. Scott serves on two boards of directors, including Software San Diego and Cyber Center of Excellence. Scott received his Ph.D. in theoretical physics from Duke University.

Follow @scottzoldi

The podcast is sponsored by: TAO.ai (https://tao.ai), Artificial Intelligence Driven Career Coach

In this session, Mike Flowers, Chief Analytics Officer at Enigma, sat with Vishal Kumar, CEO of AnalyticsWeek, and shared his journey as an analytics executive, best practices, hacks for upcoming executives, and some challenges/opportunities he's observing as a Chief Analytics Officer. Mike discussed his journey from trial prosecutor to Chief Analytics Officer, sharing some great stories about how government embraces data analytics.

Timeline: 0:29 Mike's journey. 23:32 Mike's role at Enigma. 27:46 The role of CAO at Enigma. 29:50 How much of Mike's role is customer-facing vs. internal-facing. 30:00 Getting over the roadblocks of working with the government. 34:06 Creating a data bridge. 39:17 Collaboration in the data science field. 46:02 Challenges in working with clients at Enigma. 51:34 Benefits of having a legal background before coming to data analytics.

Podcast link: https://futureofdata.org/enigma_io/

Here's Mike Flowers' Bio: Mike is Chief Analytics Officer at New York City tech start-up Enigma, an operational data management and intelligence company, where he leads data scientists assisting the development and deployment of decision-support technologies for Fortune 500 clients in compliance, manufacturing, banking, and finance, and several U.S. and foreign government agencies. In addition, he is a Senior Fellow at Bloomberg Philanthropies, working with select U.S. city governments to launch sustainable analytics programs. Mike is also an advisor to numerous organizations in a wide variety of fields, including, for example, Weill Cornell Medical College, the Inter-American Development Bank, the Office of the New York State Comptroller, the Greater London Authority, the government of New South Wales, Australia, and the French national government.

From 2014-15, Mike was an Executive-in-Residence and the first MacArthur Urban Science Fellow at NYU’s Center for Urban Science and Progress, where he advised students and faculty on projects to advance data-driven decision-making in city government.

From 2009-2013, Mike served under Mayor Michael Bloomberg as New York City’s first Chief Analytics Officer. During his tenure, he founded the Mayor’s Office of Data Analytics, which provides quantitative support to the city’s public safety, public health, infrastructure development, finance, economic development, disaster preparedness and response, legislative, sustainability, and human services efforts. In addition, Mike designed and oversaw the implementation of NYC DataBridge, a first-of-its-kind citywide analytics platform that enables the sharing and analysis of city data across agencies and with the public, and he ran the implementation of the city’s internationally-recognized Open Data initiative. For this work, Mike was twice recognized by the White House for innovation.

Follow @mpflowersnyc


About #Podcast:

FutureOfData podcast is a conversation starter bringing together leaders, influencers, and leading practitioners to discuss their journeys toward creating a data-driven future.

Want to Join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

In this session, John Young, Chief Analytics Officer at Epsilon Data Management, sat with Vishal Kumar, CEO of AnalyticsWeek, and shared his journey to Chief Analytics Officer and life at Epsilon, discussed some challenges/opportunities faced by data-driven organizations and their executives, and shared some best practices.

Timeline: 2:51 What's Epsilon? 5:12 John's journey. 9:24 The role of CAO at Epsilon. 12:12 How much of John's role is inward-facing vs. outward-facing. 13:19 Best practices in data analytics at Epsilon. 16:15 Demarcating CDO and CAO. 19:52 Depth and breadth of decision making at Epsilon. 25:00 Dealing with clients of Epsilon. 28:48 Best data practices for businesses. 34:39 Build or buy data? 37:21 Creating a center of excellence with data. 40:01 Building a data team. 43:45 Tips for aspiring data analytics executives. 46:05 Art of doing business and science of doing business. 48:31 Closing remarks.

Podcast link: https://futureofdata.org/analyticsweek-leadership-podcast-with-john-young-epsilon-data-management/

Here's John's Bio: Mr. Young has general management responsibilities for the 150+ member Analytic Consulting Group at Epsilon. His responsibilities also include design and consultation on various database marketing analytic engagements, including predictive modeling, segmentation, measurement, and profiling. John also brings thought leadership on important marketing topics. John works with companies in numerous industries, including financial services, technology, retail, healthcare, and not-for-profit.

Before joining Epsilon in 1994, Mr. Young was a Marketing Research Manager at Digitas, a Market Research Manager at Citizens Bank, Research Manager at the AICPA, and an Assistant Economist at the Federal Reserve Bank of Kansas City.

Mr. Young has presented at numerous conferences, including NCDM Winter and Summer, DMA Annual, DMA Marketing Analytics, LIMRA Big Data Analytics, and Epsilon’s Client Symposiums. He has published in DM News, CRM Magazine’s Viewpoints, Chief Marketer, Loyalty 360, Colloquy, and serves on the advisory board of the DMA’s Analytics Community.

Mr. Young holds a B.S. and M.S. in Economics from Colorado State University, Fort Collins, Colorado.



The Big Data Transformation

Business executives today are well aware of the power of data, especially for gaining actionable insight into products and services. But how do you jump into the big data analytics game without spending millions on data warehouse solutions you don't need? This 40-page report focuses on massively parallel processing (MPP) analytical databases that enable you to run queries and dashboards on a variety of business metrics at extreme speed and exabyte scale. Because they leverage the full computational power of a cluster, MPP analytical databases can analyze massive volumes of data, both structured and semi-structured, at unprecedented speeds. This report presents five real-world case studies from Etsy, Cerner Corporation, Criteo, and other global enterprises to focus on one big data analytics platform in particular, HPE Vertica.

You'll discover:
How one prominent data storage company convinced both business and tech stakeholders to adopt an MPP analytical database.
Why performance marketing technology company Criteo used a Center of Excellence (CoE) model to ensure the success of its big data analytics endeavors.
How YPSM uses Vertica to speed up its Hadoop-based data processing environment.
Why Cerner adopted an analytical database to scale its highly successful health information technology platform.
How Etsy drives success with the company's big data initiative by avoiding common technical and organizational mistakes.

Fast Data Processing with Spark 2 - Third Edition

Fast Data Processing with Spark 2 takes you through the essentials of leveraging Spark for big data analysis. You will learn how to install and set up Spark, handle data using its APIs, and apply advanced functionality like machine learning and graph processing. By the end of the book, you will be well-equipped to use Spark in real-world data processing tasks.

What this book will help me do:
Install and configure Apache Spark for optimal performance.
Interact with distributed datasets using the resilient distributed dataset (RDD) API.
Leverage the flexibility of the DataFrame API for efficient big data analytics.
Apply machine learning models using Spark MLlib to solve complex problems.
Perform graph analysis using GraphX to uncover structural insights in data.

Author(s): Krishna Sankar is an experienced data scientist and thought leader in big data technologies. With a deep understanding of machine learning, distributed systems, and Apache Spark, Krishna has guided numerous projects in data engineering and big data processing. Matei Zaharia, the co-author, is also widely recognized in the field of distributed systems and cloud computing, contributing to Apache Spark development.

Who is it for? This book caters to software developers and data engineers with a foundational understanding of Scala or Java programming. A beginner-to-intermediate understanding of big data processing concepts is recommended. If you aspire to solve big data problems using scalable distributed computing frameworks, this book is for you. By the end, you will be confident in building Spark-powered applications and analyzing data efficiently.

In this session, Dr. Nipa Basu, Chief Analytics Officer at Dun & Bradstreet, sat with Vishal Kumar, CEO of AnalyticsWeek, and shared her journey as Chief Analytics Officer, life at D&B, the future of credit scoring, and some challenges/opportunities she's observing as an industry observer, executive, and practitioner.

Timeline: 0:29 Nipa's background. 4:14 What is D&B? 7:40 Depth and breadth of decision making at D&B. 9:36 Matching security with technological evolution. 13:42 Anticipatory analytics. 16:00 CAO's role at D&B: inward-facing or outward-facing? 18:32 Future of credit scoring. 21:36 Challenges in dealing with clients. 24:08 Cultural challenges. 28:42 Good use cases in security data. 31:51 CDO, CAO, and CTO. 33:56 Optimistic trends in data analytics businesses. 36:44 Social data monitoring. 39:18 Creating a holistic model for data monitoring. 41:02 Overused terms in data analytics. 42:10 Best practices for small businesses to get started with data analytics. 44:33 Indicators of a business's need for analytics. 47:06 Advice for data-driven leaders. 49:30 Art of doing business and science of doing business.

Podcast link: https://futureofdata.org/analyticsweek-leadership-podcast-with-dr-nipa-basu-dun-bradstreet/

Here's Nipa's Bio: Dr. Nipa Basu is the Chief Analytics Officer at Dun & Bradstreet. Nipa is the main source of inspiration and leadership for Dun & Bradstreet’s extensive team of data modelers and scientists that partner with the world’s leading Fortune 500 companies to create innovative, analytic solutions to drive business growth and results. The team is highly skilled in solving a wide range of business challenges with unique, basic, and advanced analytic applications.

Nipa joined Dun & Bradstreet in 2000 and since then has held key leadership roles focused on driving the success of Dun & Bradstreet’s Analytics practice. In 2012, Nipa was named Leader, Analytic Development, and in March 2015, Nipa was named Chief Analytics Officer and appointed to Dun & Bradstreet’s executive team.

Nipa began her professional career as an Economist with the New York State Legislative Tax Study Commission. She then joined Sandia National Laboratories, a national defense laboratory where she built a Microsimulation Model of the U.S. Economy. Prior to joining Dun & Bradstreet, Nipa was a database marketing statistician for AT&T with responsibility for building predictive marketing models.

Nipa received her Ph.D. in Economics from the State University of New York at Albany, specializing in Econometrics.

Follow @nipabasu



In this session, Joe DeCosmo, Chief Analytics Officer at Enova International, sat with Vishal Kumar, CEO of AnalyticsWeek, and shared his journey to Chief Analytics Officer, life at Enova, and some challenges/opportunities he's observing as an executive, industry observer, and Chief Analytics Officer.

Timeline: 0:29 Joe's journey. 5:05 Credit risk and fraud prevention models. 6:27 Enova: inward-facing or outward-facing? 9:12 Enova's area of expertise. 10:47 Enova decisions: Center of Excellence? 12:36 Depth and breadth of decision making at Enova. 14:51 CDO, CAO, and CTO. 17:24 Who owns the data at Enova? 19:55 Challenges in building a data culture. 25:52 Convincing leaders toward data science. 31:24 Business challenges that analytics is solving. 34:15 Getting started with data analytics as a business. 38:11 Exciting trends in data analytics. 41:09 Art of doing business and science of doing business. 44:00 Advice for budding CAOs.

Podcast link: https://futureofdata.org/analyticsweek-leadership-podcast-with-joe-decosmo-enova-international/

Here's Joe's Bio: Joe DeCosmo is the CAO of Enova International, where he leads a multi-disciplinary analytics team, providing end-to-end data and analytic services to Enova’s global online financial service brands and delivering real-time predictive analytics services to clients through Enova Decisions. Prior to Enova, Joe served as Director and Practice Leader of Advanced Analytics for West Monroe Partners and held a number of executive positions at HAVI Global Solutions and the Allant Group. He is also Immediate Past-President of the Chicago Chapter of the American Statistical Association and serves on the Advisory Board of the University of Illinois at Chicago's College of Business.



Spark for Data Science

Explore how to leverage Apache Spark for efficient big data analytics and machine learning solutions in "Spark for Data Science". This detailed guide provides you with the skills to process massive datasets, perform data analytics, and build predictive models using Spark's powerful tools like RDDs, DataFrames, and Datasets.

What this book will help me do:
Gain expertise in data processing and transformation with Spark.
Perform advanced statistical analysis to uncover insights.
Master machine learning techniques to create predictive models using Spark.
Utilize Spark's APIs to process and visualize big data.
Build scalable and efficient data science solutions.

Author(s): This book is co-authored by Singhal and Duvvuri, both accomplished data scientists with extensive experience in Apache Spark and big data technologies. They bring their practical industry expertise to explain complex topics in a straightforward manner. Their writing emphasizes real-world applications and step-by-step procedural guidance, making this a valuable resource for learners.

Who is it for? This book is ideally suited for technologists seeking to incorporate data science capabilities into their work with Apache Spark, data scientists interested in machine learning algorithms implemented in Spark, and beginners aiming to step into the field of big data analytics. Whether you are familiar with Spark or completely new to it, this book offers valuable insights and practical knowledge.

Big Data Analytics

Dive into the world of big data with "Big Data Analytics: Real Time Analytics Using Apache Spark and Hadoop." This comprehensive guide introduces readers to the fundamentals and practical applications of Apache Spark and Hadoop, covering essential topics like Spark SQL, DataFrames, structured streaming, and more. Learn how to harness the power of real-time analytics and big data tools effectively.

What this book will help me do:
Master the key components of the Apache Spark and Hadoop ecosystems, including Spark SQL and MapReduce.
Gain an understanding of DataFrames, Datasets, and structured streaming for seamless data handling.
Develop skills in real-time analytics using Spark Streaming and technologies like Kafka and HBase.
Learn to implement machine learning models using Spark's MLlib and ML Pipelines.
Explore graph analytics with GraphX and leverage data visualization tools like Jupyter and Zeppelin.

Author(s): Venkat Ankam, an expert in big data technologies, has years of experience working with Apache Hadoop and Spark. As an educator and technical consultant, Venkat has enabled numerous professionals to gain critical insights into big data ecosystems. With a pragmatic approach, his writings aim to guide readers through complex systems in a structured and easy-to-follow manner.

Who is it for? This book is perfect for data analysts, data scientists, software architects, and programmers aiming to expand their knowledge of big data analytics. Readers should ideally have a basic programming background in languages like Python, Scala, R, or SQL. Prior hands-on experience with big data environments is not necessary but is an added advantage. This guide caters to a range of skill levels, from beginners to intermediate learners.

Big Data War

This book focuses on why data analytics fails in business, offering an objective analysis of the root causes of those failures rather than abstract criticism of the utility of data analytics. The author then explains in detail how companies can survive and win the global big data competition, drawing on actual company cases. Having established an execution- and performance-oriented big data methodology over more than 10 years of field experience as an authority on big data strategy, the author identifies core principles of data analytics through case analyses of real companies' failures and successes. He also shares the principles behind how innovative global companies became successful through their use of big data. The book condenses the author's know-how from direct and indirect experience into a quintessential guide to big data analytics. How do we survive a big data war in which Facebook (social networking), Amazon (e-commerce), and Google (search) expand their platforms into other areas from their respective core markets? The answer can be found in this book.

Interactive Spark using PySpark

Apache Spark is an in-memory framework that allows data scientists to explore and interact with big data much more quickly than with Hadoop. Python users can work with Spark through an interactive shell called PySpark.

Why is it important? PySpark makes the large-scale data processing capabilities of Apache Spark accessible to data scientists who are more familiar with Python than Scala or Java. It also allows reuse of a wide variety of Python libraries for machine learning, data visualization, numerical analysis, and more.

What you'll learn, and how you can apply it:
Compare the different components provided by Spark and the use cases they fit.
Learn how to use RDDs (resilient distributed datasets) with PySpark.
Write Spark applications in Python and submit them to the cluster as Spark jobs.
Get an introduction to the Spark computing framework.
Apply this approach to a worked example to determine the most frequent airline delays in a specific month and year.

This lesson is for you because:
You're a data scientist, familiar with Python coding, who needs to get up and running with PySpark.
You're a Python developer who needs to leverage the distributed computing resources available on a Hadoop cluster without learning Java or Scala first.

Prerequisites:
Familiarity with writing Python applications.
Some familiarity with bash command-line operations.
A basic understanding of simple functional programming constructs in Python, such as closures, lambdas, and maps.

Materials or downloads needed in advance: Apache Spark.

This lesson is taken from Data Analytics with Hadoop by Jenny Kim and Benjamin Bengfort.
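The lesson's worked example (finding the most frequent airline delays for a given month and year) boils down to a filter/map/reduceByKey pipeline. PySpark itself requires a running Spark installation, so the sketch below mimics the same pipeline in plain Python, with the roughly equivalent PySpark calls noted in the docstring. The record layout, the toy data, and the `delayed_flight_counts` helper are illustrative assumptions, not the lesson's actual dataset or code.

```python
from collections import defaultdict

# Toy flight records: (carrier, year, month, delay_minutes).
# Hypothetical data -- the lesson works with a real airline on-time dataset.
FLIGHTS = [
    ("AA", 2014, 1, 32), ("AA", 2014, 1, 5), ("DL", 2014, 1, 61),
    ("UA", 2014, 1, 12), ("AA", 2014, 2, 44), ("DL", 2014, 1, 18),
]

def delayed_flight_counts(records, year, month, threshold=15):
    """Count delayed flights per carrier for one month, most frequent first.

    The equivalent PySpark pipeline would look roughly like:
        sc.parallelize(records) \\
          .filter(lambda r: r[1] == year and r[2] == month and r[3] > threshold) \\
          .map(lambda r: (r[0], 1)) \\
          .reduceByKey(lambda a, b: a + b) \\
          .collect()
    """
    counts = defaultdict(int)
    for carrier, y, m, delay in records:      # filter(...)
        if y == year and m == month and delay > threshold:
            counts[carrier] += 1              # map(...) + reduceByKey(...)
    # Sort descending by count, like sortBy(lambda kv: -kv[1]).
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(delayed_flight_counts(FLIGHTS, 2014, 1))  # [('DL', 2), ('AA', 1)]
```

The distributed version has the same shape: each transformation is applied partition by partition across the cluster, and only `collect()` brings results back to the driver.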

The Data and Analytics Playbook

The Data and Analytics Playbook: Proven Methods for Governed Data and Analytic Quality explores the way in which data continues to dominate budgets, along with the varying efforts made across a variety of business enablement projects, including applications, web and mobile computing, big data analytics, and traditional data integration. The book teaches readers how to use proven methods and accelerators to break through data obstacles and provide faster, higher-quality delivery of mission-critical programs. Drawing upon years of practical experience, and using numerous examples and an easy-to-understand playbook, Lowell Fryman, Gregory Lampshire, and Dan Meers discuss a simple, proven approach to the execution of multiple data-oriented activities. They present a clear set of methods for reliable governance, controls, risk, and exposure management for enterprise data and the programs that rely upon it, along with a cost-effective approach to sustainable governance and quality outcomes that enhance project delivery while ensuring ongoing controls. Example activities, templates, outputs, resources, and roles are explored, along with different organizational models in common use today and the ways they can be mapped to leverage playbook data governance throughout the organization.

The book:
Provides a mature and proven playbook approach (methodology) to enabling data governance that supports agile implementation.
Features specific examples of current industry challenges in enterprise risk management, including anti-money laundering and fraud prevention.
Describes business benefit measures and funding approaches using exposure-based cost models that augment risk models for cost-avoidance analysis, and accelerated delivery approaches using data integration sprints for application, integration, and information delivery success.

In this session, Eloy Sasot, Head of Analytics at News Corp, sat with Vishal Kumar, CEO of AnalyticsWeek, and shared his journey as an analytics executive, best practices, hacks for upcoming executives, and some challenges/opportunities he's observing as a Chief Analytics Officer.

Timeline:

0:29 Eloy's journey. 4:43 Why work in a publishing house? 7:16 A non-tech industry doing tech stuff. 10:18 Tips for a small business to get started with data science. 13:46 Creating a culture of data science in a company. 17:23 Convincing leaders toward data science. 22:05 Initial days for a leader creating a data science practice. 27:20 Putting together a data science team. 29:18 Choosing the right tool. 33:00 Keeping oneself tool-agnostic. 35:20 CDO, CAO, and CTO. 38:58 Defining a data scientist at News Corp. 42:12 Future of data analytics. 46:37 Blaming everything on Big Data.

Podcast Link: https://futureofdata.org/563533-2/

Here's Eloy's Bio: Eloy is the CAO at News Corp, a worldwide network of leading companies in diversified media, news, education, and information services, including The Wall Street Journal, Dow Jones, New York Post, The Times, The Sun, The Australian, HarperCollins, Move, Storyful, and Unruly.

Prior to this, Eloy led Pricing, Data Science and Data Analytics for HarperCollins Publishers, the second-largest consumer book publisher in the world, with operations in 18 countries, nearly 200 years of history, and more than 65 unique imprints. Since joining HarperCollins in 2011, Eloy pioneered the creation and development of the pricing function, first in the UK, and then its extension to an international scale for the global company. He worked with his teams and each division around the world to drive data-driven decision-making, with a particular focus on Pricing. Besides his global role, he was Board Level Director of HarperCollins UK.

He holds an MBA from INSEAD and a Master’s in Mathematical Engineering from INSA Toulouse.

Follow @eloysasot



Big Data Analytics with R

Unlock the potential of big data analytics by mastering R programming with this comprehensive guide. This book takes you step-by-step through real-world scenarios where R's capabilities shine, providing you with practical skills to handle, process, and analyze large and complex datasets effectively.

What this book will help me do:
Understand the latest big data processing methods and how R can enhance their application.
Set up and use big data platforms such as Hadoop and Spark in conjunction with R.
Utilize R for practical big data problems, such as analyzing consumption and behavioral datasets.
Integrate R with SQL and NoSQL databases to maximize its versatility in data management.
Discover advanced machine learning implementations using R and Spark MLlib for predictive analytics.

Author(s): Walkowiak is an experienced data analyst and R programming expert with a passion for data engineering and machine learning. With deep knowledge of big data platforms and extensive teaching experience, they bring a clear and approachable writing style to help learners excel.

Who is it for? Ideal for data analysts, scientists, and engineers with fundamental data analysis knowledge looking to enhance their big data capabilities using R. If you aim to adapt R for large-scale data management and analysis workflows, this book is your ideal companion to bridge the gap.

Implementing an IBM High-Performance Computing Solution on IBM Power System S822LC

This IBM® Redbooks® publication demonstrates and documents that IBM Power Systems™ high-performance computing and technical computing solutions deliver faster time to value with powerful solutions. Configurable into highly scalable Linux clusters, Power Systems offer extreme performance for demanding workloads such as genomics, finance, computational chemistry, oil and gas exploration, and high-performance data analytics. This book delivers a high-performance computing solution implemented on the IBM Power System S822LC. The solution delivers high application performance and throughput based on its built-for-big-data architecture that incorporates IBM POWER8® processors, tightly coupled Field Programmable Gate Arrays (FPGAs) and accelerators, and faster I/O by using the Coherent Accelerator Processor Interface (CAPI). This solution is ideal for clients that need more processing power while simultaneously increasing workload density and reducing datacenter floor space requirements. The Power S822LC offers a modular design to scale from a single rack to hundreds, simplicity of ordering, and a strong innovation roadmap for graphics processing units (GPUs). This publication is targeted toward technical professionals (consultants, technical support staff, IT Architects, and IT Specialists) responsible for delivering cost-effective high-performance computing (HPC) solutions that help uncover insights from their data so they can optimize business results, product development, and scientific discoveries.

In this session, Michael O'Connell, Chief Analytics Officer at TIBCO Software, sat down with Vishal Kumar, CEO of AnalyticsWeek, to share his journey as a chief analytics executive, best practices and cultural hacks for upcoming executives, his perspective on the changing BI landscape and how businesses can leverage it, and the challenges and opportunities he is observing across various industries.

Timeline:

0:28 Michael's journey.
4:12 CDO, CAO, and CTO.
7:30 Adoption of data analytics capabilities.
9:55 The BI industry dealing with the latest in data analytics.
12:10 Future of stats.
14:58 Creating a center of excellence with data.
18:00 Evolution of data in BI.
21:40 Small businesses getting started with data analytics.
24:35 First steps in the process of becoming a data-driven company.
26:28 Convincing leaders towards data science.
28:20 Shortest route to become a data scientist.
29:49 A typical day in Michael's life.

Podcast Link: https://futureofdata.org/analyticsweek-leadership-podcast-with-michael-oconnell-tibco-software/

Here's Michael's Bio: Michael O'Connell is Chief Analytics Officer at TIBCO Software, developing analytics solutions across a number of industries, including Financial Services, Energy, Life Sciences, Consumer Goods & Retail, and Telco, Media & Networks. He has worked on analytics software applications for the past 20 years and has published more than 50 papers and several software packages on analytics methodology and applications. Michael completed his Ph.D. in Statistics at North Carolina State University, where he is an Adjunct Professor of Statistics.

Follow @michoconnell

The podcast is sponsored by TAO.ai (https://tao.ai), an artificial-intelligence-driven career coach.

About #Podcast:

The FutureOfData podcast is a conversation starter that brings together leaders, influencers, and leading practitioners to discuss their journeys toward creating a data-driven future.

Want to Join? If you or anyone you know wants to join in, register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor? Email us @ [email protected]

Keywords:

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

AI and Medicine

Data-driven techniques have improved decision-making processes for people in industries such as finance and real estate. Yet, despite promising solutions that data analytics and artificial intelligence/machine learning (ML) tools can bring to healthcare, the industry remains largely unconvinced. In this O’Reilly report, you’ll explore the potential of—and impediments to—widespread adoption of AI and ML in the medical field. You’ll also learn how extensive government regulation and resistance from the medical community have so far stymied full-scale acceptance of sophisticated data analytics in healthcare.

Through interviews with several professionals working at the intersection of medicine and data science, author Mike Barlow examines five areas where the application of AI/ML strategies can spur a beneficial revolution in healthcare:

- Identifying risks and interventions for healthcare management of entire populations
- Closing gaps in care by designing plans for individual patients
- Supporting customized self-care treatment plans and monitoring patient health in real time
- Optimizing healthcare processes through data analysis to improve care and reduce costs
- Helping doctors and patients choose proper medications, dosages, and promising surgical options

Advancing Procurement Analytics

One area where data analytics can have a profound effect is your company’s procurement process. Some organizations spend more than two-thirds of their revenue buying goods and services, making procurement—out of all business activities—a key element in achieving cost reduction. This report examines how your company can significantly improve procurement analytics to solve business questions quickly and effectively.

Author Federico Castanedo, Chief Data Scientist at WiseAthena.com, explains how a probabilistic, bottom-up approach can significantly increase the quality, speed, and scalability of your data preparation operations—whether you’re integrating datasets or cleaning and classifying them. You’ll learn how new solutions leverage automation and machine learning, including the Tamr platform, and help you take advantage of several data-driven actions for procurement—including compliance, price arbitrage, and spend recovery.
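The cleaning-and-classification step the report describes can be pictured with a deliberately naive sketch. The categories, keywords, and scoring below are hypothetical and rule-based; production approaches such as the Tamr platform learn these mappings with machine learning rather than hand-written rules.

```python
# Hypothetical spend categories and keywords -- a hand-rolled stand-in for
# the ML-driven spend classification described in the report.
CATEGORY_KEYWORDS = {
    "IT": {"laptop", "software", "license"},
    "Facilities": {"cleaning", "rent", "maintenance"},
    "Travel": {"flight", "hotel", "taxi"},
}

def classify(description: str) -> str:
    """Assign a spend line to the category with the most keyword overlap."""
    words = set(description.lower().split())
    best = max(CATEGORY_KEYWORDS, key=lambda c: len(words & CATEGORY_KEYWORDS[c]))
    # If nothing matched at all, refuse to guess.
    return best if words & CATEGORY_KEYWORDS[best] else "Unclassified"

print(classify("Annual software license renewal"))   # IT
print(classify("hotel and taxi for q3 sales trip"))  # Travel
```

Classified spend rolled up by category is the starting point for the data-driven actions the report covers, such as spotting price arbitrage across suppliers within the same category.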

Apache Hive Cookbook

Apache Hive Cookbook is a comprehensive resource for mastering Apache Hive, a tool that bridges the gap between SQL and Big Data processing. Through guided recipes, you'll acquire essential skills in Hive query development, optimization, and integration with modern big data frameworks.

What this Book will help me do

- Design efficient Hive query structures for big data analytics.
- Optimize data storage and query execution using partitions and buckets.
- Integrate Hive seamlessly with frameworks like Spark and Hadoop.
- Understand and utilize the HiveQL syntax to perform advanced analytical processing.
- Implement practical solutions to secure, maintain, and scale Hive environments.

Author(s)

Hanish Bansal, Saurabh Chauhan, and Shrey Mehrotra bring their extensive expertise in big data technologies and Hive to this cookbook. With years of practical experience and deep technical knowledge, they offer a collection of solutions and best practices that reflect real-world use cases. Their commitment to clarity and depth makes this book an invaluable resource for exploring Hive to its fullest potential.

Who is it for?

This book is perfect for data professionals, engineers, and developers looking to enhance their capabilities in big data analytics using Hive. It caters to those with a foundational understanding of big data frameworks and some familiarity with SQL. Whether you're planning to optimize data handling or integrate Hive with other data tools, this guide helps you achieve your goals. Step into the world of efficient data analytics with Apache Hive through structured learning paths.
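The cookbook's partitioning recipes rest on one idea worth previewing: when data is stored under its partition key, a query that filters on that key never touches the other partitions. Below is a toy Python sketch of that pruning effect (plain Python dictionaries standing in for Hive's partitioned storage; the table layout and values are invented).

```python
from collections import defaultdict

# Partition value -> rows, mimicking Hive's dt=.../ directory layout.
table = defaultdict(list)
rows = [
    {"dt": "2024-01-01", "sale": 10},
    {"dt": "2024-01-02", "sale": 20},
    {"dt": "2024-01-02", "sale": 5},
]
for r in rows:
    # Analogous to INSERT ... PARTITION (dt=...): each row lands in
    # the bucket for its partition value.
    table[r["dt"]].append(r)

# A query filtered on the partition key -- e.g.
#   SELECT sum(sale) FROM sales WHERE dt = '2024-01-02'
# -- reads only that one partition and skips the rest entirely.
total = sum(r["sale"] for r in table["2024-01-02"])
print(total)  # 25
```

In Hive itself this pruning happens at the file-system level, which is why choosing partition columns that match common query filters is one of the book's central optimization recipes.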