talk-data.com

Topic: data-science-tasks (794 tagged)

Activity Trend: 2020-Q1 to 2026-Q1

Activities

Showing filtered results

Filtering by: O'Reilly Data Science Books
Examples and Problems in Mathematical Statistics

Provides the necessary skills to solve problems in mathematical statistics through theory, concrete examples, and exercises. With a clear and detailed approach to the fundamentals of statistical theory, Examples and Problems in Mathematical Statistics uniquely bridges the gap between theory and application and presents numerous problem-solving examples that illustrate the related notations and proven results. Written by an established authority in probability and mathematical statistics, each chapter begins with a theoretical presentation to introduce both the topic and the important results in an effort to aid in overall comprehension. Examples are then provided, followed by problems, and finally, solutions to some of the earlier problems. In addition, Examples and Problems in Mathematical Statistics features: Over 160 practical and interesting real-world examples from a variety of fields including engineering, mathematics, and statistics to help readers become proficient in theoretical problem solving More than 430 unique exercises with select solutions Key statistical inference topics, such as probability theory, statistical distributions, sufficient statistics, information in samples, testing statistical hypotheses, statistical estimation, confidence and tolerance intervals, large sample theory, and Bayesian analysis Recommended for graduate-level courses in probability and statistical inference, Examples and Problems in Mathematical Statistics is also an ideal reference for applied statisticians and researchers.
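
The inference topics listed above lend themselves to small worked examples. As a minimal, hedged illustration (the data below are invented and this is not an exercise from the book), here is a t-based 95% confidence interval for a normal mean in Python:

```python
# Minimal sketch: a 95% confidence interval for a normal mean,
# one of the inference topics listed above. Data are made up for illustration.
import numpy as np
from scipy import stats

sample = np.array([9.8, 10.2, 10.1, 9.7, 10.4, 10.0, 9.9, 10.3])
n = len(sample)
mean, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)

# t-based interval because the population variance is unknown
t_crit = stats.t.ppf(0.975, df=n - 1)
lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"95% CI for the mean: ({lower:.3f}, {upper:.3f})")
```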

Information Evaluation

When we receive a piece of information, we are never passive. Depending on its origin and content, and on our personal beliefs and convictions, we grant this piece of information, spontaneously or after reflection, a certain amount of confidence. Too much confidence shows a degree of naivety, whereas an absolute lack of it condemns us to paranoia. These two attitudes are equally detrimental, not only to the proper perception of the information but also to its use. Between these two extremes, each person generally adopts an intermediate position when receiving information, depending on its provenance and credibility. We still need to understand and explain how these judgements are formed, in what context and to what end. Spanning the approaches offered by philosophy, military intelligence, algorithmics and information science, this book presents the concepts of information and the confidence placed in it, the methods that militaries, the first to be aware of the need, have adopted or should adopt, the tools that support them, and the prospects that they open up. Beyond the military context, the book shows how information can be evaluated for the benefit of other fields such as economic intelligence and, more broadly, informational monitoring by governments and businesses. Contents 1. Information: Philosophical Analysis and Strategic Applications, Mouhamadou El Hady Ba and Philippe Capet. 2. Epistemic Trust, Gloria Origgi. 3. The Fundamentals of Intelligence, Philippe Lemercier. 4. Information Evaluation in the Military Domain: Doctrines, Practices and Shortcomings, Philippe Capet and Adrien Revault d'Allonnes. 5. Multidimensional Approach to Reliability Evaluation of Information Sources, Frédéric Pichon, Christophe Labreuche, Bertrand Duqueroie and Thomas Delavallade. 6. Uncertainty of an Event and its Markers in Natural Language Processing, Mouhamadou El Hady Ba, Stéphanie Brizard, Tanneguy Dulong and Bénédicte Goujon. 7. Quantitative Information Evaluation: Modeling and Experimental Evaluation, Marie-Jeanne Lesot, Frédéric Pichon and Thomas Delavallade. 8. When Reported Information Is Second Hand, Laurence Cholvy. 9. An Architecture for the Evolution of Trust: Definition and Impact of the Necessary Dimensions of Opinion Making, Adrien Revault d'Allonnes. About the Authors Philippe Capet is a project manager and research engineer at Ektimo, working mainly on information management and control in military contexts. Thomas Delavallade is an advanced studies engineer at Thales Communications & Security, working on social media mining in the context of crisis management, cybersecurity and the fight against cybercrime.

Methods and Applications of Statistics in Clinical Trials, Volume 1: Concepts, Principles, Trials, and Designs

A complete guide to the key statistical concepts essential for the design and construction of clinical trials As the newest major resource in the field of medical research, Methods and Applications of Statistics in Clinical Trials, Volume 1: Concepts, Principles, Trials, and Designs presents a timely and authoritative review of the central statistical concepts used to build clinical trials that obtain the best results. The reference unveils modern approaches vital to understanding, creating, and evaluating data obtained throughout the various stages of clinical trial design and analysis. Accessible and comprehensive, the first volume in a two-part set includes newly written articles as well as established literature from the Wiley Encyclopedia of Clinical Trials. Illustrating a variety of statistical concepts and principles such as longitudinal data, missing data, covariates, biased-coin randomization, repeated measurements, and simple randomization, the book also provides in-depth coverage of the various trial designs found within phase I-IV trials. Methods and Applications of Statistics in Clinical Trials, Volume 1: Concepts, Principles, Trials, and Designs also features: Detailed chapters on the type of trial designs, such as adaptive, crossover, group-randomized, multicenter, non-inferiority, non-randomized, open-labeled, preference, prevention, and superiority trials Over 100 contributions from leading academics, researchers, and practitioners An exploration of ongoing, cutting-edge clinical trials on early cancer and heart disease, mother-to-child human immunodeficiency virus transmission trials, and the AIDS Clinical Trials Group Methods and Applications of Statistics in Clinical Trials, Volume 1: Concepts, Principles, Trials, and Designs is an excellent reference for researchers, practitioners, and students in the fields of clinical trials, pharmaceutics, biostatistics, medical research design, biology, biomedicine, epidemiology, and public health.
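
One of the allocation schemes named above, biased-coin randomization, is compact enough to sketch. The following is a rough illustration of Efron's biased coin with p = 2/3 (our assumption for the example; the book itself surveys many randomization variants):

```python
# Minimal sketch of Efron's biased-coin randomization.
# p = 2/3 is Efron's classic choice; the data/arms here are illustrative.
import random

def biased_coin_assign(n_subjects, p=2/3, seed=42):
    random.seed(seed)
    counts = {"A": 0, "B": 0}
    assignments = []
    for _ in range(n_subjects):
        if counts["A"] == counts["B"]:
            arm = random.choice(["A", "B"])            # balanced: fair coin
        else:
            lagging = min(counts, key=counts.get)       # favour the lagging arm
            leading = max(counts, key=counts.get)
            arm = lagging if random.random() < p else leading
        counts[arm] += 1
        assignments.append(arm)
    return assignments, counts

assignments, counts = biased_coin_assign(20)
print(assignments, counts)
```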

Statistical Analysis in Forensic Science: Evidential Values of Multivariate Physicochemical Data

A practical guide for determining the evidential value of physicochemical data Microtraces of various materials (e.g. glass, paint, fibres, and petroleum products) are routinely subjected to physicochemical examination by forensic experts, whose role is to evaluate such physicochemical data in the context of the prosecution and defence propositions. Such examinations return various kinds of information, including quantitative data. From the forensic point of view, the most suitable way to evaluate evidence is the likelihood ratio. This book provides a collection of recent approaches to the determination of likelihood ratios and describes suitable software, with documentation and examples of their use in practice. The statistical computing and graphics software environment R, pre-computed Bayesian networks using Hugin Researcher and a new package, calcuLatoR, for the computation of likelihood ratios are all explored. Statistical Analysis in Forensic Science will provide an invaluable practical guide for forensic experts and practitioners, forensic statisticians, analytical chemists, and chemometricians. Key features include: Description of the physicochemical analysis of forensic trace evidence. Detailed description of likelihood ratio models for determining the evidential value of multivariate physicochemical data. Detailed description of methods, such as empirical cross-entropy plots, for assessing the performance of likelihood ratio-based methods for evidence evaluation. Routines written using the open-source R software, as well as Hugin Researcher and calcuLatoR. Practical examples and recommendations for the use of all these methods in practice.
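
To make the central idea concrete, here is a minimal sketch of a likelihood ratio for a single measurement, with normal densities and invented parameters standing in for the multivariate models, R routines, Hugin networks and calcuLatoR package the book actually covers:

```python
# Hedged sketch (not the book's code): likelihood ratio for one measurement
# under prosecution (same source) vs. defence (different source) propositions.
# All parameter values below are assumptions chosen purely for illustration.
from scipy.stats import norm

y = 1.518                                   # e.g. a glass refractive index measurement
mu_control, sd_within = 1.5185, 0.0004      # control fragment mean, within-source sd
mu_pop, sd_pop = 1.5160, 0.0040             # background population parameters

lr = norm.pdf(y, mu_control, sd_within) / norm.pdf(y, mu_pop, sd_pop)
print(f"Likelihood ratio: {lr:.1f}")        # LR > 1 supports the prosecution proposition
```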

Commercial Data Mining

Whether you are brand new to data mining or working on your tenth predictive analytics project, Commercial Data Mining will be there for you as an accessible reference outlining the entire process and related themes. In this book, you'll learn that your organization does not need a huge volume of data or a Fortune 500 budget to generate business using existing information assets. Expert author David Nettleton guides you through the process from beginning to end and covers everything from business objectives to data sources and selection, to analysis and predictive modeling. Commercial Data Mining includes case studies and practical examples from Nettleton's more than 20 years of commercial experience. Real-world cases covering customer loyalty, cross-selling, and audience prediction in industries including insurance, banking, and media illustrate the concepts and techniques explained throughout the book. Illustrates cost-benefit evaluation of potential projects Includes vendor-agnostic advice on what to look for in off-the-shelf solutions as well as tips on building your own data mining tools Approachable reference can be read from cover to cover by readers of all experience levels Includes practical examples and case studies as well as actionable business insights from author's own experience

Getting Started with Beautiful Soup

"Getting Started with Beautiful Soup" is your practical guide to website scraping using Python. It teaches you how to use Beautiful Soup and the urllib2 module to extract data from websites efficiently and effectively. Through hands-on examples and clear explanations, you'll gain the skills to navigate, search, and modify HTML content. What this Book will help me do Navigate and scrape web pages using the Beautiful Soup Python library. Understand and implement the urllib2 module to access web content programmatically. Search and analyze HTML structures efficiently to extract the needed data. Modify and format extracted HTML and XML content effectively. Handle encoding and manage output formats for diverse scraping requirements. Author(s) Vineeth G. Nair is an experienced Python developer with a strong focus on web technologies, data extraction, and automation. His expertise in Python's Beautiful Soup library has helped countless learners and professionals tackle the challenges of web scraping. Vineeth combines a methodical approach to teaching with practical examples, making complex concepts accessible and actionable. Who is it for? This book is ideal for Python enthusiasts, data analysts, and budding developers looking to explore web scraping. Whether you're a beginner or have some programming experience, this book will guide you through the fundamental concepts of extracting web data. If you're aiming to delve into practical, real-world implementations of web scraping, this is the book for you.

Forecasting Offertory Revenue at St. Elizabeth Seton Catholic Church

This new business analytics case study challenges readers to forecast donations, plan budgets, and manage cash flow for a religious institution suffering rapidly falling contributions. Crystallizing realistic analytical challenges faced by non-profit and for-profit organizations of all kinds, it exposes readers to the entire decision-making process, providing opportunities to perform analyses, interpret output, and recommend the best course of action. Author: Matthew J. Drake, Duquesne University.

Forecasting Sales at Ska Brewing Company

This new business analytics case study challenges readers to project trends and plan capacity for a fast-growing craft beer operation, so it can make the best possible decisions about expensive investments in brewing capacity. Crystallizing realistic analytical challenges faced by companies in many industries and markets, it exposes readers to the entire decision-making process, providing opportunities to perform analyses, interpret output, and recommend the best course of action. Author: Eric Huggins, Fort Lewis College.

Statistics for Mining Engineering

Many areas of mining engineering gather and use statistical information, obtained by observing the actual operation of equipment and their systems, the development of mining works, the surface subsidence that accompanies underground mining, and the displacement of rocks surrounding surface pits, underground drives and longwalls, amongst others. In addition, modern machines used in surface mining are equipped with diagnostic systems that automatically trace all important machine parameters and send this information to the manufacturer's computer. Such data not only provide information on the technical properties of the machine but also have a statistical character. Furthermore, all information gathered during stand and laboratory investigations, where parts, assemblies and whole devices are tested in order to prove their usefulness, has a stochastic character. All of these data need to be processed statistically and, more importantly, based on the results mining engineers must decide whether to undertake actions connected with the further operation of the machines, the further development of the works, and so on. For these reasons, knowledge of modern statistics is necessary for mining engineers: not only how statistical analysis of data should be conducted and statistical synthesis should be done, but also how to understand the results obtained and how to use them to make appropriate decisions in relation to the mining operation. This book on statistical analysis and synthesis starts with a short review of probability theory and also includes a special section on statistical prediction. The text is illustrated with many examples taken from mining practice; moreover, the tables required to conduct statistical inference are included.
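
As a hedged illustration of the kind of analysis the book motivates (and not an example taken from it), the sketch below fits a Weibull model to invented times between failures for a piece of mining equipment:

```python
# Hedged sketch: fitting a Weibull model to made-up times between failures (hours).
import numpy as np
from scipy import stats

tbf = np.array([120, 85, 210, 150, 95, 300, 175, 60, 240, 130])   # illustrative data
shape, loc, scale = stats.weibull_min.fit(tbf, floc=0)             # fix location at 0
print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f} h")

# Probability the machine survives 200 h without failure under the fitted model
print(f"P(TBF > 200 h) = {stats.weibull_min.sf(200, shape, loc=0, scale=scale):.2f}")
```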

Growth Curve Modeling: Theory and Applications

Features recent trends and advances in the theory and techniques used to accurately measure and model growth Growth Curve Modeling: Theory and Applications features an accessible introduction to growth curve modeling and addresses how to monitor the change in variables over time since there is no "one size fits all" approach to growth measurement. A review of the requisite mathematics for growth modeling and the statistical techniques needed for estimating growth models are provided, and an overview of popular growth curves, such as linear, logarithmic, reciprocal, logistic, Gompertz, Weibull, negative exponential, and log-logistic, among others, is included. In addition, the book discusses key application areas including economic, plant, population, forest, and firm growth and is suitable as a resource for assessing recent growth modeling trends in the medical field. SAS is utilized throughout to analyze and model growth curves, aiding readers in estimating specialized growth rates and curves. Including derivations of virtually all of the major growth curves and models, Growth Curve Modeling: Theory and Applications also features: Statistical distribution analysis as it pertains to growth modeling Trend estimations Dynamic site equations obtained from growth models Nonlinear regression Yield-density curves Nonlinear mixed effects models for repeated measurements data Growth Curve Modeling: Theory and Applications is an excellent resource for statisticians, public health analysts, biologists, botanists, economists, and demographers who require a modern review of statistical methods for modeling growth curves and analyzing longitudinal data. The book is also useful for upper-undergraduate and graduate courses on growth modeling.
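
The book's worked examples use SAS; as a rough Python analogue of the same idea, the sketch below fits a logistic growth curve to invented measurements with nonlinear least squares:

```python
# Hedged sketch: fitting a logistic growth curve y = K / (1 + exp(-r*(t - t0)))
# to made-up measurements. Not an example from the book, which uses SAS.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(0, 12)
y = np.array([2, 3, 5, 8, 13, 20, 28, 36, 42, 46, 48, 49], dtype=float)

params, _ = curve_fit(logistic, t, y, p0=[50, 1, 6])
K, r, t0 = params
print(f"carrying capacity K = {K:.1f}, growth rate r = {r:.2f}, midpoint t0 = {t0:.1f}")
```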

Metric Dashboards for Operations and Supply Chain Excellence

Over the last decade, Lean and Six Sigma methods and tools have helped organizations improve to historic productivity levels through the data-driven, systematic elimination of waste and improvement of flow. Today many organizations have enjoyed the benefits of Lean and Six Sigma initiatives and are looking to sustain the gains and to aggressively drive a systematic, ongoing approach to improvement and problem solving. The concept of diminishing returns applies here: in the early stages, organizations were able to find "low-hanging fruit" and quickly make significant improvements. Now that the easy work is done, organizations need a simple yet systematic approach to continuing their continuous improvement efforts. Operations and supply chain leaders will benefit from this book by developing a clear understanding of why and how metric scorecards and dashboards can be used as a powerful data-driven improvement tool. This book illustrates visual management, scorecards, and dashboards for a full range of organizations, focusing on Operations and Supply Chain Management areas. By covering these tools in these environments in a storybook format, organization leaders can begin to understand how the methods and tools can be applied in their own organizations.

Rapid Graphs with Tableau 8: The Original Guide for the Accidental Analyst

Tired of boring spreadsheets and data overload from confusing graphs? Master the art of visualization with Rapid Graphs with Tableau 8! Tableau insiders Stephen and Eileen McDaniel expertly provide a hands-on case study approach and more than 225 illustrations that will teach you how to quickly explore and understand your data to make informed decisions in a wide variety of real-world situations. Rapid Graphs with Tableau 8 includes best practices of visual analytics for ideas on how to communicate your findings with audience-friendly graphs, tables and maps. "A picture is worth a thousand words" is a common saying that is more relevant today than ever as data volumes grow and the need for easy access to answers becomes more critical. This book covers the core of Tableau capabilities in easy-to-follow examples, updated and expanded for Version 8. Learn how to be successful with Tableau from the team that started the original training program as the founding Tableau Education Partner! "A must read for anyone interested in Tableau. Clear explanations, practical advice and beautiful examples!" Elissa Fink – Chief Marketing Officer, Tableau Software What you'll learn Connect to and review data visually Create insightful maps and take advantage of view shifting Understand the types of views available in Tableau Take advantage of the powerful Marks card and much more Who this book is for Rapid Graphs with Tableau 8 is a great resource for those new to Tableau, and also contains useful tips and tricks for advanced users.

Nonlinear Option Pricing

New Tools to Solve Your Option Pricing Problems For nonlinear PDEs encountered in quantitative finance, advanced probabilistic methods are needed to address dimensionality issues. Written by two leaders in quantitative research—including Risk magazine’s 2013 Quant of the Year— Nonlinear Option Pricing compares various numerical methods for solving high-dimensional nonlinear problems arising in option pricing. Designed for practitioners, it is the first authored book to discuss nonlinear Black-Scholes PDEs and compare the efficiency of many different methods. Real-World Solutions for Quantitative Analysts The book helps quants develop both their analytical and numerical expertise. It focuses on general mathematical tools rather than specific financial questions so that readers can easily use the tools to solve their own nonlinear problems. The authors build intuition through numerous real-world examples of numerical implementation. Although the focus is on ideas and numerical examples, the authors introduce relevant mathematical notions and important results and proofs. The book also covers several original approaches, including regression methods and dual methods for pricing chooser options, Monte Carlo approaches for pricing in the uncertain volatility model and the uncertain lapse and mortality model, the Markovian projection method and the particle method for calibrating local stochastic volatility models to market prices of vanilla options with/without stochastic interest rates, the a + bλ technique for building local correlation models that calibrate to market prices of vanilla options on a basket, and a new stochastic representation of nonlinear PDE solutions based on marked branching diffusions.
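
For orientation only, the sketch below prices a plain European call by Monte Carlo under Black-Scholes dynamics. This is the linear building block; the nonlinear problems the book addresses (uncertain volatility, local stochastic volatility calibration, branching diffusions) require the more advanced machinery it describes. All parameter values are illustrative assumptions:

```python
# Hedged sketch: plain Monte Carlo price of a European call under Black-Scholes.
import numpy as np

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0    # illustrative parameters
n_paths = 200_000
rng = np.random.default_rng(0)

Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.maximum(ST - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
print(f"Monte Carlo call price: {price:.2f}")
```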

Handbook of Graph Theory, 2nd Edition

With 34 new contributors, this best-selling handbook provides comprehensive coverage of the main topics in pure and applied graph theory. This second edition incorporates 14 new sections. Each chapter includes lists of essential definitions and facts, accompanied by examples, tables, remarks, and, in some cases, conjectures and open problems.

Understanding Uncertainty, Revised Edition

Praise for the First Edition "...a reference for everyone who is interested in knowing and handling uncertainty." —Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made. Featuring new material, the Revised Edition remains the go-to guide for uncertainty and decision making, providing further applications at an accessible level including: A critical study of transitivity, a basic concept in probability A discussion of how the failure of the financial sector to use the proper approach to uncertainty may have contributed to the recent recession A consideration of betting, showing that a bookmaker's odds are not expressions of probability Applications of the book's thesis to statistics A demonstration that some techniques currently popular in statistics, like significance tests, may be unsound, even seriously misleading, because they violate the rules of probability Understanding Uncertainty, Revised Edition is ideal for students studying probability or statistics and for anyone interested in one of the most fascinating and vibrant fields of study in contemporary science and mathematics.
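
The point that a bookmaker's odds are not expressions of probability is easy to see numerically: implied "probabilities" derived from quoted odds sum to more than one, the excess being the bookmaker's margin. The odds below are invented for illustration:

```python
# Small numeric illustration: implied "probabilities" from decimal odds sum to > 1.
decimal_odds = {"Home": 2.10, "Draw": 3.40, "Away": 3.60}   # invented quotes
implied = {outcome: 1.0 / odds for outcome, odds in decimal_odds.items()}
print(implied)
print(f"Sum of implied probabilities: {sum(implied.values()):.3f}")   # > 1
```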

Image Statistics in Visual Computing

To achieve the complex task of interpreting what we see, our brains rely on statistical regularities and patterns in visual data. Knowledge of these regularities can also be considerably useful in visual computing disciplines, such as computer vision, computer graphics, and image processing. The field of natural image statistics studies these regularities to exploit their potential and better understand human vision. With numerous color figures throughout, the authors of Image Statistics in Visual Computing keep the material accessible, providing mathematical definitions where appropriate to help readers understand the transforms that highlight statistical regularities present in images. The book also describes patterns that arise once the images are transformed and gives examples of applications that have successfully used statistical regularities. Numerous references enable readers to easily look up more information about a specific concept or application. A supporting website also offers additional information, including descriptions of various image databases suitable for statistics. Collecting state-of-the-art, interdisciplinary knowledge in one source, this book explores the relation of natural image statistics to human vision and shows how natural image statistics can be applied to visual computing. It encourages readers in both academic and industrial settings to develop novel insights and applications in all disciplines that relate to visual computing.
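
One classic natural-image statistic of the kind the book surveys is the roughly 1/f fall-off of the Fourier amplitude spectrum. The sketch below is our own illustration (not the book's code), radially averaging the spectrum of scikit-image's bundled "camera" test photograph:

```python
# Hedged sketch: radially averaged Fourier amplitude spectrum of a photograph,
# showing the well-known decrease of amplitude with spatial frequency.
import numpy as np
from skimage import data

img = data.camera().astype(float)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

# Radially average the amplitude spectrum
cy, cx = np.array(spectrum.shape) // 2
y, x = np.indices(spectrum.shape)
radius = np.hypot(y - cy, x - cx).astype(int)
radial_mean = np.bincount(radius.ravel(), spectrum.ravel()) / np.bincount(radius.ravel())

# Amplitude should decay roughly like 1/frequency
for f in (2, 8, 32, 128):
    print(f"frequency {f:4d}: mean amplitude {radial_mean[f]:.1f}")
```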

Nonparametric Statistics for Social and Behavioral Sciences

Incorporating a hands-on pedagogical approach, this text presents the concepts, principles, and methods used in performing many nonparametric procedures. It also demonstrates practical applications of the most common nonparametric procedures using IBM's SPSS software. The text is the only current nonparametric book written specifically for students in the behavioral and social sciences. With examples of real-life research problems, it emphasizes sound research designs, appropriate statistical analyses, and accurate interpretations of results.
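
The book's procedures are run in SPSS; as a rough analogue, here is one of the most common nonparametric procedures, the Mann-Whitney U test, applied to two small invented samples in Python:

```python
# Hedged sketch: Mann-Whitney U test on two small made-up samples.
from scipy.stats import mannwhitneyu

group_a = [12, 15, 14, 10, 13, 18, 11]
group_b = [22, 19, 24, 17, 21, 20, 23]

stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```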

Understanding Business Statistics

Written in a conversational tone, this text presents topics in a systematic and organized manner to help students navigate the material. Demonstration problems appear alongside the concepts, which makes the content easier to understand. Because the reasoning behind each exercise is explained, students are more inclined to engage with the material and gain a clear understanding of how to apply statistics to the business world. Freed, Understanding Business Statistics is accompanied by WileyPLUS, a research-based online environment for effective teaching and learning. This online learning system gives students instant feedback on homework assignments, provides video tutorials and a variety of study tools, and offers instructors thousands of reliable, accurate problems (including every problem from the book) to deliver automatically graded assignments or tests. Available in or outside of the Blackboard Learn Environment, WileyPLUS resources help reach all types of learners and give instructors the tools they need to enhance course material. WileyPLUS is sold separately from the text.

Early Estimation of Project Determinants

The study starts from the underlying principles of construction production that make prediction of project determinants at the early phases of building projects ill-conditioned. To enhance the precision of these estimations, solutions relying on statistical evidence are offered. Two alternative methods of analysis, namely linear regression and artificial neural networks, were employed to recognize the patterns in the sampled projects. The comparison was conducted on the basis of prediction measures computed on an unseen test sample. The evidence of the empirical investigation suggests that the offered solutions provide superior prediction accuracy when compared to current practices. Last but not least, implementation of the solutions is illustrated on a randomly selected office development.
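
The comparison the study describes can be mimicked in a few lines. The sketch below is our own illustration on synthetic, invented data (not the study's sample of projects): fit a linear regression and a small neural network, then compare error on a held-out test sample:

```python
# Hedged sketch: compare linear regression vs. a small neural network
# on synthetic data, scoring on an unseen test sample.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 5.0, size=(200, 2))                       # invented project features
y = 1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.1, 200)    # synthetic cost index

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(16,),
                                                 max_iter=5000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: test MAPE = {mape:.1%}")
```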

Fast Sequential Monte Carlo Methods for Counting and Optimization

A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the field, the book places emphasis on cross-entropy, minimum cross-entropy, splitting, and stochastic enumeration. Focusing on the concepts and application of Monte Carlo techniques, Fast Sequential Monte Carlo Methods for Counting and Optimization includes: Detailed algorithms needed to practice solving real-world problems Numerous examples in which Monte Carlo methods produce solutions within a 1-2% limit of relative error A new generic sequential importance sampling algorithm alongside extensive numerical results An appendix focused on review material to provide additional background information Fast Sequential Monte Carlo Methods for Counting and Optimization is an excellent resource for engineers, computer scientists, mathematicians, statisticians, and readers interested in efficient simulation techniques. The book is also useful for upper-undergraduate and graduate-level courses on Monte Carlo methods.
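
To show the flavour of the rare-event problems the book targets, here is a hedged sketch of plain importance sampling (not the book's splitting, cross-entropy, or stochastic enumeration algorithms) for estimating P(X > 5) with X standard normal:

```python
# Hedged sketch: importance sampling for a rare-event probability.
# Estimate P(X > 5) for X ~ N(0, 1) by sampling from a shifted proposal.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, c = 100_000, 5.0

# Naive Monte Carlo almost never sees the event (true prob ~ 2.9e-7)
naive = (rng.standard_normal(n) > c).mean()

# Importance sampling: draw from N(c, 1) and reweight by the density ratio
x = rng.normal(loc=c, scale=1.0, size=n)
weights = norm.pdf(x) / norm.pdf(x, loc=c, scale=1.0)
is_est = np.mean((x > c) * weights)

print(f"naive: {naive:.1e}   importance sampling: {is_est:.2e}   exact: {norm.sf(c):.2e}")
```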