talk-data.com

Event

O'Reilly Data Science Books

2013-08-09 – 2026-02-25 · O'Reilly

Activities tracked

2118

Collection of O'Reilly books on Data Science.

Sessions & talks

Showing 1551–1575 of 2118 · Newest first

Microsoft Visio 2013 Business Process Diagramming and Validation - Second Edition

This book, "Microsoft Visio 2013 Business Process Diagramming and Validation," is your comprehensive guide to leveraging the features of Microsoft Visio 2013 Professional for creating and validating structured diagrams. Through practical tutorials and code examples, it helps you master process diagramming and validation techniques for a range of business needs.

What this Book will help me do

Understand and utilize structured diagram functionality including basic and cross-functional flowcharts.
Develop and apply custom validation rules to ensure diagram correctness and compliance using Visio 2013.
Create and use advanced tools such as the Rules Tools add-on to enhance diagram validation.
Learn to integrate Visio with platforms like SharePoint 2013 and Office365.
Gain technical skills to build, publish, and share Visio templates and rule sets.

Author(s)

The authors of this book bring rich technical expertise in Microsoft Visio and business process management. They have extensive experience developing user-focused tutorials and tools, ensuring you gain both theoretical knowledge and practical skills. Their accessible writing makes complex topics approachable for a variety of learners.

Who is it for?

This book is for Microsoft Visio 2013 Professional Edition users who want to advance their skills in diagramming and validation. It's ideal for professionals who seek to improve diagram compliance and efficiency, including developers, analysts, and project managers. If you're already familiar with Visio basics, this book will take your skills to the next level by introducing advanced techniques for automation and standardization.

Nonparametric Statistical Methods, 3rd Edition

Praise for the Second Edition: "This book should be an essential part of the personal library of every practicing statistician."—Technometrics

Thoroughly revised and updated, the new edition of Nonparametric Statistical Methods includes additional modern topics and procedures, more practical data sets, and new problems from real-life situations. The book continues to emphasize the importance of nonparametric methods as a significant branch of modern statistics and equips readers with the conceptual and technical skills necessary to select and apply the appropriate procedures for any given situation. Written by leading statisticians, Nonparametric Statistical Methods, Third Edition provides readers with crucial nonparametric techniques in a variety of settings, emphasizing the assumptions underlying the methods. The book provides an extensive array of examples that clearly illustrate how to use nonparametric approaches for handling one- or two-sample location and dispersion problems, dichotomous data, and one-way and two-way layout problems.

In addition, the Third Edition features:

The use of the freely available R software to aid in computation and simulation, including many new R programs written explicitly for this new edition
New chapters that address density estimation, wavelets, smoothing, ranked set sampling, and Bayesian nonparametrics
Problems that illustrate examples from agricultural science, astronomy, biology, criminology, education, engineering, environmental science, geology, home economics, medicine, oceanography, physics, psychology, sociology, and space science

Nonparametric Statistical Methods, Third Edition is an excellent reference for applied statisticians and practitioners who seek a review of nonparametric methods and their relevant applications. The book is also an ideal textbook for upper-undergraduate and first-year graduate courses in applied nonparametric statistics.
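The book's own programs are written in R; purely as a taste of the kind of procedure it covers, here is a minimal stdlib-Python sketch of the Wilcoxon rank-sum test using the large-sample normal approximation (assumes no ties; the sample data are made up):

```python
import math

def rank_sum_test(x, y):
    """Wilcoxon rank-sum: rank the pooled data, sum the ranks of x,
    and standardize against the null mean and variance."""
    pooled = sorted(x + y)
    ranks = {v: i + 1 for i, v in enumerate(pooled)}  # assumes no ties
    w = sum(ranks[v] for v in x)
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean) / math.sqrt(var)
    return w, z

w, z = rank_sum_test([1.1, 2.3, 1.9], [3.4, 2.8, 4.0])
print(w, round(z, 2))  # 6 -1.96
```

A production analysis would use the exact null distribution (or a tie correction) for samples this small; the sketch only shows the mechanics.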

Introduction to R for Quantitative Finance

Explore how to use the statistical computing language R to solve complex quantitative finance problems with "Introduction to R for Quantitative Finance." This book offers a blend of theory and practice, empowering readers with both the foundational understanding and practical skills to tackle real-world challenges using R, making it an ideal resource for beginners and seasoned professionals alike.

What this Book will help me do

Utilize time series analysis in R to model and forecast financial and economic data.
Apply key portfolio selection theories to analyze and optimize investment portfolios.
Understand and implement a variety of pricing models, including the Capital Asset Pricing Model in R.
Analyze and interpret fixed income instruments and derivatives, focusing on practical applications in finance.
Leverage R for risk analysis through techniques such as Extreme Value Theory and copula-based modeling.

Author(s)

The authors of "Introduction to R for Quantitative Finance" are seasoned experts in the fields of quantitative finance and computational statistics. They bring a wealth of industry and academic experience to the table, having applied R to solve intricate financial problems in practical settings. Their approachable writing style ensures complex subjects remain accessible and engaging.

Who is it for?

This book is ideal for quantitative analysts, data scientists, or finance professionals eager to leverage R for financial analysis. It caters to individuals with a foundation in finance but new to the R programming language. Readers who aim to model, predict, and interpret financial phenomena using advanced statistical tools will particularly benefit from this guide.
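Among the pricing models the book covers is the CAPM, whose beta is just the covariance of asset and market returns divided by the market variance. The book works in R; a stdlib-Python sketch with synthetic returns (the return series below are invented so that the true beta is 1.5):

```python
def capm_beta(asset, market):
    """CAPM beta: covariance of asset and market returns over market variance."""
    n = len(asset)
    ma, mm = sum(asset) / n, sum(market) / n
    cov = sum((a - ma) * (m - mm) for a, m in zip(asset, market)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    return cov / var

market = [0.01, -0.02, 0.03, 0.005]
asset = [1.5 * m + 0.001 for m in market]   # synthetic: true beta is 1.5
print(round(capm_beta(asset, market), 6))  # 1.5
```

With real data one would regress excess returns and also report the intercept (alpha) and standard errors; the ratio above is only the point estimate.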

Analytics in Healthcare and the Life Sciences: Strategies, Implementation Methods, and Best Practices

Make healthcare analytics work: leverage its powerful opportunities for improving outcomes, cost, and efficiency. This book gives you the practical frameworks, strategies, tactics, and case studies you need to go beyond talk to action. The contributing healthcare analytics innovators survey the field’s current state, present start-to-finish guidance for planning and implementation, and help decision-makers prepare for tomorrow’s advances. They present in-depth case studies revealing how leading organizations have organized and executed analytic strategies that work, and fully cover the primary applications of analytics in all three sectors of the healthcare ecosystem: Provider, Payer, and Life Sciences. Co-published with the International Institute for Analytics (IIA), this book features the combined expertise of IIA’s team of leading health analytics practitioners and researchers. Each chapter is written by a member of the IIA faculty, and bridges the latest research findings with proven best practices. This book will be valuable to professionals and decision-makers throughout the healthcare ecosystem, including provider organization clinicians and managers; life sciences researchers and practitioners; and informaticists, actuaries, and managers at payer organizations. It will also be valuable in diverse analytics, operations, and IT courses in business, engineering, and healthcare certificate programs.

Accelerating MATLAB with GPU Computing

Beyond simulation and algorithm development, many developers increasingly use MATLAB even for product deployment in computationally heavy fields. This often demands that MATLAB codes run faster by leveraging the distributed parallelism of Graphics Processing Units (GPUs). While MATLAB successfully provides high-level functions as a simulation tool for rapid prototyping, the underlying details and knowledge needed for utilizing GPUs make MATLAB users hesitate to step into it. Accelerating MATLAB with GPUs offers a primer on bridging this gap. Starting with the basics, setting up MATLAB for CUDA (in Windows, Linux and Mac OS X) and profiling, it then guides users through advanced topics such as CUDA libraries. The authors share their experience developing algorithms using MATLAB, C++ and GPUs for huge datasets, modifying MATLAB codes to better utilize the computational power of GPUs, and integrating them into commercial software products. Throughout the book, they demonstrate many example codes that can be used as templates of C-MEX and CUDA codes for readers’ projects. Download example codes from the publisher's website: http://booksite.elsevier.com/9780124080805/

Shows how to accelerate MATLAB codes through the GPU for parallel processing, with minimal hardware knowledge
Explains the related background on hardware, architecture and programming for ease of use
Provides simple worked examples of MATLAB and CUDA C codes as well as templates that can be reused in real-world projects

Financial and Actuarial Statistics, 2nd Edition

This work enables readers to obtain the mathematical and statistical background required in the current financial and actuarial industries. It also advances the application and theory of statistics in modern financial and actuarial modeling. This second edition adds a substantial amount of new material, including Excel exercises with solutions; nomenclature and notations standard to the actuarial field; a new chapter on Markov chains and actuarial applications; expanded discussions on simulation, surplus models, and ruin computations; and much more.
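The core operation in the new Markov-chain chapter, propagating a state distribution through a transition matrix, fits in a few lines. A stdlib-Python sketch; the two-state Active/Lapsed policyholder model and its probabilities below are hypothetical illustrations, not taken from the book:

```python
def step_distribution(dist, P):
    """One step of a Markov chain: new_j = sum_i dist_i * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical two-state model: state 0 = Active, state 1 = Lapsed (absorbing)
P = [[0.9, 0.1],
     [0.0, 1.0]]
dist = [1.0, 0.0]          # everyone starts Active
for _ in range(2):         # advance two periods
    dist = step_distribution(dist, P)
print([round(p, 2) for p in dist])  # [0.81, 0.19]
```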

Introduction to Numerical and Analytical Methods with MATLAB® for Engineers and Scientists

Introduction to Numerical and Analytical Methods with MATLAB® for Engineers and Scientists provides the basic concepts of programming in MATLAB for engineering applications.

• Teaches engineering students how to write computer programs on the MATLAB platform
• Examines the selection and use of numerical and analytical methods through examples and case studies
• Demonstrates mathematical concepts that can be used to help solve engineering problems, including matrices, roots of equations, integration, ordinary differential equations, curve fitting, algebraic linear equations, and more

The text covers useful numerical methods, including interpolation, Simpson’s rule on integration, the Gauss elimination method for solving systems of linear algebraic equations, the Runge-Kutta method for solving ordinary differential equations, and the search method in combination with the bisection method for obtaining the roots of transcendental and polynomial equations. It also highlights MATLAB’s built-in functions. These include the interp1 function, the quad and dblquad functions, the inv function, the ode45 function, the fzero function, and many others. The second half of the text covers more advanced topics, including the iteration method for solving pipe flow problems, the Hardy-Cross method for solving flow rates in a pipe network, separation of variables for solving partial differential equations, and the use of Laplace transforms to solve both ordinary and partial differential equations. This book serves as a textbook for a first course in numerical methods using MATLAB to solve problems in mechanical, civil, aeronautical, and electrical engineering. It can also be used as a textbook or as a reference book in higher level courses.
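The book's examples are in MATLAB; to illustrate one of the methods it covers, composite Simpson's rule takes only a few lines in stdlib Python (the integrand and interval here are arbitrary test values, not from the book):

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule over [a, b] with n (even) subintervals:
    weight pattern 1, 4, 2, 4, ..., 2, 4, 1 times h/3."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Simpson's rule is exact for cubics, so x^2 on [0, 1] gives 1/3
print(round(simpson(lambda x: x * x, 0.0, 1.0), 10))  # 0.3333333333
```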

Applied Statistics and Probability for Engineers, 6th Edition

This best-selling engineering statistics text provides a practical approach that is more oriented to engineering and the chemical and physical sciences than many similar texts. It is packed with unique problem sets that reflect realistic situations engineers will encounter in their working lives. This text shows how statistics, the science of data, is just as important for engineers as the mechanical, electrical, and materials sciences.

Algorithmic and Artificial Intelligence Methods for Protein Bioinformatics

An in-depth look at the latest research, methods, and applications in the field of protein bioinformatics.

This book presents the latest developments in protein bioinformatics, introducing for the first time cutting-edge research results alongside novel algorithmic and AI methods for the analysis of protein data. In one complete, self-contained volume, Algorithmic and Artificial Intelligence Methods for Protein Bioinformatics addresses key challenges facing both computer scientists and biologists, arming readers with tools and techniques for analyzing and interpreting protein data and solving a variety of biological problems. Featuring a collection of authoritative articles by leaders in the field, this work focuses on the analysis of protein sequences, structures, and interaction networks using both traditional algorithms and AI methods. It also examines, in great detail, data preparation, simulation, experiments, evaluation methods, and applications.

Algorithmic and Artificial Intelligence Methods for Protein Bioinformatics:

Highlights protein analysis applications such as protein-related drug activity comparison
Incorporates salient case studies illustrating how to apply the methods outlined in the book
Tackles the complex relationship between proteins from a systems biology point of view
Relates the topic to other emerging technologies such as data mining and visualization
Includes many tables and illustrations demonstrating concepts and performance figures

Algorithmic and Artificial Intelligence Methods for Protein Bioinformatics is an essential reference for bioinformatics specialists in research and industry, and for anyone wishing to better understand the rich field of protein bioinformatics.

Data Smart: Using Data Science to Transform Information into Insight

Data Science gets thrown around in the press like it's magic. Major retailers are predicting everything from when their customers are pregnant to when they want a new pair of Chuck Taylors. It's a brave new world where seemingly meaningless data can be transformed into valuable insight to drive smart business decisions. But how does one exactly do data science? Do you have to hire one of these priests of the dark arts, the "data scientist," to extract this gold from your data? Nope. Data science is little more than using straightforward steps to process raw data into actionable insight. And in Data Smart, author and data scientist John Foreman will show you how that's done within the familiar environment of a spreadsheet. Why a spreadsheet? It's comfortable! You get to look at the data every step of the way, building confidence as you learn the tricks of the trade. Plus, spreadsheets are a vendor-neutral place to learn data science without the hype. But don't let the Excel sheets fool you. This is a book for those serious about learning the analytic techniques, the math and the magic, behind big data.

Each chapter will cover a different technique in a spreadsheet so you can follow along:

Mathematical optimization, including non-linear programming and genetic algorithms
Clustering via k-means, spherical k-means, and graph modularity
Data mining in graphs, such as outlier detection
Supervised AI through logistic regression, ensemble models, and bag-of-words models
Forecasting, seasonal adjustments, and prediction intervals through Monte Carlo simulation
Moving from spreadsheets into the R programming language

You get your hands dirty as you work alongside John through each technique. But never fear, the topics are readily applicable and the author laces humor throughout. You'll even learn what a dead squirrel has to do with optimization modeling, which you no doubt are dying to know.
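Data Smart builds these techniques in spreadsheets; as one illustration of how little machinery k-means needs, the assign-then-average loop can be sketched in plain Python (one-dimensional data and fixed starting centers, chosen here for brevity):

```python
def kmeans_1d(points, centers, iters=10):
    """Plain 1-D k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

pts = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print([round(c, 3) for c in kmeans_1d(pts, [0.0, 10.0])])  # [1.0, 9.0]
```

In practice one would run several random initializations and work in more dimensions, but every spreadsheet step in the book's clustering chapter maps onto one of these two lines of update logic.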

Handbook of Probability

THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY

Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introduction, historical background, theory and applications, algorithms, and exercises.

The Handbook of Probability offers coverage of:

Probability Space
Probability Measure
Random Variables
Random Vectors in R^n
Characteristic Function
Moment Generating Function
Gaussian Random Vectors
Convergence Types
Limit Theorems

The Handbook of Probability is an ideal resource for researchers and practitioners in numerous fields, such as mathematics, statistics, operations research, engineering, medicine, and finance, as well as a useful text for graduate students.
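To give a flavor of the limit theorems the handbook covers, here is a quick Monte Carlo check of the law of large numbers for Uniform(0, 1) draws, a stdlib-Python illustration rather than anything from the book itself:

```python
import random

def sample_mean(n, seed=42):
    """Average of n Uniform(0, 1) draws; by the law of large numbers
    this converges to the true mean 1/2 as n grows."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return sum(rng.random() for _ in range(n)) / n

for n in (10, 1000, 100000):
    print(n, round(sample_mean(n), 3))  # the averages close in on 0.5
```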

PROC DOCUMENT by Example Using SAS

PROC DOCUMENT by Example Using SAS demonstrates the practical uses of the DOCUMENT procedure, a part of the Output Delivery System, in SAS 9.3. Michael Tuchman explains how to work with PROC DOCUMENT, which is designed to store your SAS procedure output for replay at a later time without having to rerun your original SAS code. You’ll learn how to:

save a collection of procedure output, descriptive text, and supporting graphs that can be replayed as a single unit
save output once and distribute that same output in a variety of ODS formats such as HTML, CSV, and PDF
create custom reports by comparing output from the same procedure run at different points in time
create a table of contents for your output
modify the appearance of both textual and graphical ODS output even if the original data is no longer available or easily accessible
manage your tabular and graphical output by using descriptive labels, titles, and footnotes
rearrange the original order of output in a procedure to suit your needs

After using this book, you’ll be able to quickly and easily create libraries of professional-looking output that are accessible at any time.

This book is part of the SAS Press program.

PROC SQL: Beyond the Basics Using SAS, Second Edition

Kirk Lafler's PROC SQL: Beyond the Basics Using SAS, Second Edition, offers a step-by-step example-driven guide that helps readers master the language of PROC SQL. Packed with analysis and examples illustrating an assortment of PROC SQL options, statements, and clauses, this book can be approached in a number of ways. Users can read it cover-to-cover or selectively by chapter; they can use the extensive index to find content of interest or refer to the helpful "Summary" that precedes each chapter to look for help on a specific topic.

The second edition explores new and powerful features in SAS 9.3, and includes such topics as adding data to a table with a SET clause; bulk loading data from Microsoft Excel; distinguishing between DATA step merges and PROC SQL joins; rules for designing indexes; cardinality and index selectivity; and demystifying join algorithms. It also features an expanded discussion of CASE expressions, and new sections on complex query applications, and grouping and performance. Delving into the workings of PROC SQL with greater analysis and discussion, PROC SQL: Beyond the Basics Using SAS, Second Edition, examines a broad range of topics and provides greater detail about this powerful database language using discussion and numerous real-world examples.

This book is part of the SAS Press program.

Business Statistics: For Contemporary Decision Making, 8th Edition

This text is an unbound, binder-ready edition. Business Statistics: For Contemporary Decision Making, 8th Edition continues the tradition of presenting and explaining the wonders of business statistics through the use of clear, complete, student-friendly pedagogy. Ken Black's text equips readers with the quantitative decision-making skills and analysis techniques they need to make smart decisions based on real-world data.

Business Intelligence with MicroStrategy Cookbook

This comprehensive guide introduces you to the functionalities of MicroStrategy for business intelligence, empowering you to build dashboards, reports, and visualizations using hands-on, practical recipes with clear examples. You'll learn how to use MicroStrategy for the entire BI lifecycle, making data actionable and insights accessible.

What this Book will help me do

Install and configure the MicroStrategy platform, including setting up a fully operational BI environment.
Create interactive dashboards and web reports to visualize and analyze data effectively.
Learn to use MicroStrategy on mobile devices, enabling access to data-driven insights anywhere.
Discover advanced analytics techniques using Visual Insight and MicroStrategy Cloud Express.
Master practical skills with real-life examples to implement robust BI solutions.

Author(s)

Davide Moraschi, an experienced professional in business intelligence and data analytics, brings his expertise to guiding readers through the MicroStrategy platform. He has years of experience implementing and developing BI solutions in diverse industries, offering practical perspectives. Davide's approachable teaching style and clear examples make technical concepts accessible and engaging.

Who is it for?

This book is tailored for BI developers and data analysts who want to deepen their expertise in MicroStrategy. It's also suitable for IT professionals and business users aiming to leverage MicroStrategy for data insights. Some existing knowledge of BI concepts, such as dimensional modeling, will enrich your learning experience. You need no prior experience with MicroStrategy to benefit from this book.

Doing Data Science

Now that people are aware that data can make the difference in an election or a business model, data science as an occupation is gaining ground. But how can you get started working in a wide-ranging, interdisciplinary field that’s so clouded in hype? This insightful book, based on Columbia University’s Introduction to Data Science class, tells you what you need to know. In many of these chapter-long lectures, data scientists from companies such as Google, Microsoft, and eBay share new algorithms, methods, and models by presenting case studies and the code they use. If you’re familiar with linear algebra, probability, and statistics, and have programming experience, this book is an ideal introduction to data science.

Topics include:

Statistical inference, exploratory data analysis, and the data science process
Algorithms
Spam filters, Naive Bayes, and data wrangling
Logistic regression
Financial modeling
Recommendation engines and causality
Data visualization
Social networks and data journalism
Data engineering, MapReduce, Pregel, and Hadoop

Doing Data Science is a collaboration between course instructor Rachel Schutt, Senior VP of Data Science at News Corp, and data science consultant Cathy O’Neil, a senior data scientist at Johnson Research Labs, who attended and blogged about the course.

IBM SPSS Modeler Cookbook

"IBM SPSS Modeler Cookbook" is your practical guide to mastering data mining with IBM SPSS Modeler. This comprehensive book takes you beyond the basics, offering expert insights, time-saving techniques, and powerful workflows to grow your skills and elevate your analytical abilities. You will learn to apply the CRISP-DM methodology, efficiently prepare and explore data, build advanced models, and confidently incorporate analytical results into your business decisions.

What this Book will help me do

Effectively apply the CRISP-DM standard process to organize your data mining projects.
Leverage efficient techniques for data extraction, transformation, and preparation.
Develop and evaluate predictive models for practical applications in your organization.
Enhance your models by utilizing advanced features and expert tips.
Automate and streamline your data mining process with scripting for ultimate control.

Author(s)

Keith McCormick and Dean Abbott are seasoned data mining professionals with deep expertise in IBM SPSS Modeler and predictive analytics. Together, they have extensive experience in consulting, training, and applying advanced analytical techniques across industries. Through their approachable and insightful writing style, they share practical knowledge and expert workflows to empower readers.

Who is it for?

This book is designed for individuals who have basic experience with IBM SPSS Modeler and aspire to deepen their expertise. Whether you are a data analyst looking to advance your analytical capabilities or a professional aiming to integrate data-driven solutions into your organization, this book provides the knowledge and practical guidance you need to take the next step in your data mining journey.

Pentaho Data Integration Beginner's Guide - Second Edition

This book is a comprehensive guide designed for those new to Pentaho Data Integration. With a focus on practical application and step-by-step learning, this book covers everything from installation to complex data manipulation. By following along, you will acquire the skills you need to efficiently manage and transform data using Pentaho.

What this Book will help me do

Understand how to install and set up Pentaho Data Integration for professional data manipulation.
Master data transformation tasks such as cleaning, sorting, and integrating different data sources.
Learn to configure and operate databases within the Pentaho environment, including CRUD operations.
Gain hands-on experience with data warehousing concepts and using Pentaho to populate data warehouses.
Develop workflows and schedules for automated data processes using Pentaho's advanced tools.

Author(s)

Carina Roldán is an experienced data professional with extensive expertise in the field of ETL and data integration. Her teaching style is clear, approachable, and heavily reliant on practical examples. She focuses on enabling learners to build real-world skills in a supportive and engaging manner, making complex topics accessible to everyone.

Who is it for?

This book is perfect for developers, database administrators, and IT professionals looking to venture into ETL tools or seeking a deeper understanding of Pentaho Data Integration. Beginners without prior exposure to Pentaho Data Integration will find it an excellent entry point, while those with some experience will benefit from its in-depth insights. It is also valuable for data warehouse designers and architects aiming to streamline their workflows.

Getting Started with Greenplum for Big Data Analytics

This book serves as a thorough introduction to using the Greenplum platform for big data analytics. It explores key concepts for processing, analyzing, and deriving insights from big data using Greenplum, covering aspects from data integration to advanced analytics techniques like programming with R and MADlib.

What this Book will help me do

Understand the architecture and core components of the Greenplum platform.
Learn how to design and execute data science projects using Greenplum.
Master loading, processing, and querying big data in Greenplum efficiently.
Explore programming with R and integrating it with Greenplum for analytics.
Gain skills in high-availability configurations, backups, and recovery within Greenplum.

Author(s)

Sunila Gollapudi is a seasoned expert in the field of big data analytics and has multiple years of experience working with platforms like Greenplum. Her real-world problem-solving expertise shapes her practical and approachable writing style, making this book not only educational but enjoyable to read.

Who is it for?

This book is ideal for data scientists or analysts aiming to explore the capabilities of big data platforms like Greenplum. It suits readers with basic knowledge of data warehousing, programming, and analytics tools who want to deepen their expertise and effectively harness Greenplum for analytics.

Getting Started with the Graph Template Language in SAS

You've just received a new survey of study results, and you need to quickly create custom graphical views of the data. Or, you've completed your analysis, and you need graphs to present the results to your audience, in the style that they prefer. Now, you can create custom graphs quickly and easily with Getting Started with the Graph Template Language in SAS, without having to understand all of the Graph Template Language (GTL) features first.

This book will get you started building graphs immediately and will guide you toward a better understanding of the GTL, one step at a time. It shows you the most common approaches to a variety of graphs along with information that you can use to build more complex graphs from there. Sanjay Matange offers expert tips, examples, and techniques, with a goal of providing you with a solid foundation in using the GTL so that you can progress to more sophisticated, adaptable graphs as you need them.

Ultimately, Getting Started with the Graph Template Language in SAS allows you to bypass the learning curve. It teaches you how to quickly create custom, aesthetically pleasing graphs that present your data with maximum clarity and minimum clutter.

This book is part of the SAS Press program.

Agile Data Science

Mining big data requires a deep investment in people and time. How can you be sure you’re building the right models? With this hands-on book, you’ll learn a flexible toolset and methodology for building effective analytics applications with Hadoop. Using lightweight tools such as Python, Apache Pig, and the D3.js library, your team will create an agile environment for exploring data, starting with an example application to mine your own email inboxes. You’ll learn an iterative approach that enables you to quickly change the kind of analysis you’re doing, depending on what the data is telling you. All example code in this book is available as working Heroku apps.

Create analytics applications by using the agile big data development methodology
Build value from your data in a series of agile sprints, using the data-value stack
Gain insight by using several data structures to extract multiple features from a single dataset
Visualize data with charts, and expose different aspects through interactive reports
Use historical data to predict the future, and translate predictions into action
Get feedback from users after each sprint to keep your project on track

KNIME Essentials

KNIME Essentials is a comprehensive guide to mastering KNIME, an open-source data analytics platform. Through this book, you'll discover how to process, visualize, and report on data effectively. Whether you're new to KNIME or data analytics in general, this resource is designed to equip you with the skills needed to handle data challenges confidently.

What this Book will help me do

Understand how to install and set up KNIME for data analysis tasks.
Learn to create workflows to efficiently process data.
Explore methods for importing and pre-processing data from various sources.
Master techniques for visualizing and analyzing processed data.
Generate professional-grade reports based on your data visualizations.

Author(s)

Gábor Bakos, the author of KNIME Essentials, leverages his expertise in data analytics and software tools to provide readers with a practical guide to mastering KNIME. With years of experience in working with analytics platforms, he crafts content that is accessible and focused on delivering real-world results. His user-focused approach helps readers quickly grasp complex concepts.

Who is it for?

This book is ideal for data analysts and professionals seeking to enhance their data processing skills with KNIME. No prior knowledge of KNIME is expected, but a foundational understanding of data analytics concepts would be beneficial. If you're looking to produce insightful analytics and reports efficiently, this guide is tailored for you.

Introduction to Statistical Process Control

A major tool for quality control and management, statistical process control (SPC) monitors sequential processes, such as production lines and Internet traffic, to ensure that they work stably and satisfactorily. Along with covering traditional methods, Introduction to Statistical Process Control describes many recent SPC methods that improve upon the more established techniques. The author—a leading researcher on SPC—shows how these methods can handle new applications. After exploring the role of SPC and other statistical methods in quality control and management, the book covers basic statistical concepts and methods useful in SPC. It then systematically describes traditional SPC charts, including the Shewhart, CUSUM, and EWMA charts, as well as recent control charts based on change-point detection and fundamental multivariate SPC charts under the normality assumption. The text also introduces novel univariate and multivariate control charts for cases when the normality assumption is invalid and discusses control charts for profile monitoring. All computations in the examples are solved using R, with R functions and datasets available for download on the author’s website. Offering a systematic description of both traditional and newer SPC methods, this book is ideal as a primary textbook for a one-semester course in disciplines concerned with process quality control, such as statistics, industrial and systems engineering, and management sciences. It can also be used as a supplemental textbook for courses on quality improvement and system management. In addition, the book provides researchers with many useful, recent research results on SPC and gives quality control practitioners helpful guidelines on implementing up-to-date SPC techniques.
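Among the charts described, the EWMA statistic is particularly compact: each plotted point is a weighted blend of the newest observation and the previous statistic. The book computes its examples in R; a minimal stdlib-Python sketch (the smoothing constant and data below are illustrative, and a real chart would add control limits around the path):

```python
def ewma(series, lam=0.2, start=0.0):
    """EWMA control-chart statistic: z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    z, path = start, []
    for x in series:
        z = lam * x + (1 - lam) * z
        path.append(z)
    return path

print([round(z, 4) for z in ewma([1.0, 1.0, 1.0], lam=0.5)])  # [0.5, 0.75, 0.875]
```

A small lam makes the chart sensitive to persistent small shifts, which is exactly the regime where EWMA and CUSUM charts improve on the Shewhart chart.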