talk-data.com

Topic: data · 5765 tagged

Activity Trend: peak 3/qtr, 2020-Q1 to 2026-Q1

Activities: 5765 activities · Newest first

Expert Performance Indexing for SQL Server 2012

Expert Performance Indexing for SQL Server 2012 is a deep dive into perhaps the single most important facet of good performance: indexes, and how best to use them. The book begins in the shallow waters with explanations of the types of indexes and how they are stored in databases. Moving deeper into the topic, and further into the book, you will look at the statistics that are accumulated both by indexes and on indexes. All of this will help you progress towards properly achieving your database performance goals. What you'll learn from Expert Performance Indexing for SQL Server 2012 will help you understand what indexes are doing in the database and what can be done to mitigate and improve their effects on performance. The final destination is a guided tour through a number of real-world scenarios and approaches that can be taken to investigate, mitigate, and improve the performance of your database.

- Defines indexes and provides an understanding of their role
- Uncovers and explains the statistics that are kept in indexes
- Teaches strategies and approaches for indexing databases

What you'll learn:

- Fully understand the index types at your disposal
- Recognize and remove unnecessary indexes
- Review statistics to understand indexing choices made by the optimizer
- Properly apply strategies such as covering indexes, included columns, index intersections, and more
- Write queries to make good use of the indexes already in place
- Design effective indexes for full-text, spatial, and XML data types
- Manage the big picture: encompassing all indexes in a database, and all database instances on a server

Who this book is for: Expert Performance Indexing for SQL Server 2012 is intended for database administrators and developers who are ready to boost the performance of their environments without going in blindly building indexes.
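The covering-index strategy mentioned in the blurb can be sketched outside SQL Server as well. The following is an illustration only, using Python's built-in sqlite3 module rather than SQL Server; the table and column names are hypothetical. When every column a query touches is present in one index, the engine can answer the query from the index alone without reading the table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 10, i * 1.5) for i in range(100)])

# A composite index on (customer_id, total) "covers" the query below:
# every referenced column is in the index, so the table itself is never read.
conn.execute("CREATE INDEX ix_orders_cust_total ON orders (customer_id, total)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 3"
).fetchall()
print(plan)  # SQLite's plan reports use of a covering index
```

SQL Server exposes the same idea through its execution plans (and adds INCLUDE columns, which SQLite lacks), but the performance principle is identical.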

HLSL and Pixel Shaders for XAML Developers

Pixel shaders are some of the more powerful graphic tools available for XAML programmers, but shader development bears little resemblance to traditional .NET programming. With this hands-on book, you’ll not only discover how to use existing shaders in your Windows Presentation Foundation (WPF) and Silverlight applications, you’ll also learn how to create your own effects with XAML and Microsoft’s HLSL shading language. In the process, you’ll write, compile, and test custom XAML shaders with the Shazzam Shader Editor, a free utility developed by author Walt Ritscher. The book includes XAML and C# sample code, and Shazzam contains all of the sample shaders discussed.

- Learn how shaders help you extend the GPU’s rendering capabilities
- Explore prevailing shader types, such as color modification, blurring, and spatial transformation
- Get a quick tour of the shader features, and use pre-built effects on image elements in your application
- Examine the XAML ShaderEffect class to understand how WPF and Silverlight use shaders
- Learn about the shader-specific tools available in Visual Studio and Expression Blend
- Get up to speed on HLSL basics and learn how to create a variety of graphics effects

R For Dummies

Still trying to wrap your head around R? With more than two million users, R is the open-source programming language standard for data analysis and statistical modeling. R is packed with powerful programming capabilities, but learning to use R in the real world can be overwhelming for even the most seasoned statisticians. This easy-to-follow guide explains how to use R for data processing and statistical analysis, and then shows you how to present your data using compelling and informative graphics. You'll gain practical experience using R in a variety of settings and delve deeper into R's feature-rich toolset.

- Includes tips for the initial installation of R
- Demonstrates how to easily perform calculations on vectors, arrays, and lists of data
- Shows how to effectively visualize data using R's powerful graphics packages
- Gives pointers on how to find, install, and use add-on packages created by the R community
- Provides tips on getting additional help from R mailing lists and websites

Whether you're just starting out with statistical analysis or are a procedural programming pro, R For Dummies is the book you need to get the most out of R.
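The "calculations on vectors, arrays, and lists" workflow the blurb describes is R's bread and butter. As a rough point of comparison only, here is the same idiom sketched in Python (the book itself teaches R, not Python); the height values are invented.

```python
# Element-wise vector arithmetic and summary statistics, R-style, in Python.
from statistics import mean, stdev

heights_cm = [172, 165, 180, 158, 190]

# R would write `heights_cm / 100` and apply it element-wise;
# a list comprehension does the same job here.
heights_m = [h / 100 for h in heights_cm]

print(mean(heights_m))   # R equivalent: mean(heights_m)
print(stdev(heights_m))  # R equivalent: sd(heights_m)
```

In R the vectorization is implicit in the language, which is a large part of what makes it convenient for data analysis.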

Statistical Inference: A Short Course

A concise, easily accessible introduction to descriptive and inferential techniques. Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author discusses tests of the assumptions of randomness and normality, and provides nonparametric methods for when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median, while also providing coverage of ratio estimation, randomness, and causality. To ensure a thorough understanding of all key concepts, Statistical Inference provides numerous examples and solutions along with complete and precise answers to many fundamental questions, including:

- How do we determine that a given dataset is actually a random sample?
- With what level of precision and reliability can a population sample be estimated?
- How are probabilities determined, and are they the same thing as odds?
- How can we predict the level of one variable from that of another?
- What is the strength of the relationship between two variables?

The book is organized to present fundamental statistical concepts first, with later chapters exploring more advanced topics and additional statistical tests such as distributional hypotheses, multinomial chi-square statistics, and the chi-square distribution. Each chapter includes appendices and exercises, allowing readers to test their comprehension of the presented material. Statistical Inference: A Short Course is an excellent book for courses on probability, mathematical statistics, and statistical inference at the upper-undergraduate and graduate levels. The book also serves as a valuable reference for researchers and practitioners who would like to develop further insights into essential statistical tools.
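The confidence interval for a population median mentioned above is a classic distribution-free result: a pair of order statistics brackets the median with coverage computed from the Binomial(n, 1/2) distribution. A minimal sketch, with made-up data (the book's own treatment may differ in detail):

```python
from math import comb

def median_ci(data, conf=0.95):
    """Order-statistic confidence interval (lo, hi) for the population median."""
    x = sorted(data)
    n = len(x)
    alpha = 1 - conf
    # P(X <= j) for X ~ Binomial(n, 1/2): chance that at most j observations
    # fall below the true median.
    def cdf(j):
        return sum(comb(n, i) for i in range(j + 1)) / 2 ** n
    # Largest r (1-indexed) with 2 * P(X <= r - 1) <= alpha; the interval
    # (x_(r), x_(n-r+1)) then has coverage >= conf.
    r = 1
    while r + 1 <= n // 2 and 2 * cdf(r) <= alpha:
        r += 1
    return x[r - 1], x[n - r]

sample = [12, 15, 9, 20, 14, 11, 17, 13, 18, 10]
print(median_ci(sample))  # (10, 18) for this sample at 95%
```

Because the binomial probabilities are discrete, the achieved coverage is at least the nominal level rather than exactly equal to it.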

Beginning Database Design: From Novice to Professional, Second Edition

Beginning Database Design, Second Edition provides short, easy-to-read explanations of how to get database design right the first time. This book offers numerous examples to help you avoid the many pitfalls that entrap new and not-so-new database designers. Through the help of use cases and class diagrams modeled in the UML, you'll learn to discover and represent the details and scope of any design problem you choose to attack. Database design is not an exact science. Many are surprised to find that problems with their databases are caused by poor design rather than by difficulties in using the database management software. Beginning Database Design, Second Edition helps you ask and answer important questions about your data so you can understand the problem you are trying to solve and create a pragmatic design capturing the essentials while leaving the door open for refinements and extension at a later stage. Solid database design principles and examples help demonstrate the consequences of simplifications and pragmatic decisions. The rationale is to try to keep a design simple, but allow room for development as situations change or resources permit.

- Provides solid design principles by which to avoid pitfalls and support changing needs
- Includes numerous examples of good and bad design decisions and their consequences
- Shows a modern method for documenting design using the Unified Modeling Language

What you'll learn:

- Avoid the most common pitfalls in database design
- Create clear use cases from project requirements
- Design a data model to support the use cases
- Apply generalization and specialization appropriately
- Secure future flexibility through a normalized design
- Ensure integrity through relationships, keys, and constraints
- Successfully implement your data model as a relational schema

Who this book is for: Beginning Database Design, Second Edition is aimed at desktop power users, developers, database administrators, and others who are charged with caring for data and storing it in ways that preserve its meaning and integrity. Desktop users will appreciate the coverage of Excel as a plausible "database" for research systems and lab environments. Developers and database designers will find insight from the clear discussions of design approaches and their pitfalls and benefits. All readers will benefit from learning a modern notation for documenting designs that is based upon the widely used and accepted Unified Modeling Language.
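"Ensure integrity through relationships, keys, and constraints" is the kind of design decision the book motivates. A minimal sketch of what that buys you, using Python's built-in sqlite3 (the book is tool-agnostic; the table and column names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id)
)""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders VALUES (1, 1)")       # fine: customer 1 exists

try:
    conn.execute("INSERT INTO orders VALUES (2, 99)")  # no such customer
    rejected = False
except sqlite3.IntegrityError as e:
    rejected = True
    print("rejected:", e)  # the constraint preserves referential integrity
```

The point of declaring the relationship in the schema, rather than policing it in application code, is that the database rejects the bad row no matter which program tries to insert it.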

Getting Started with D3

Learn how to create beautiful, interactive, browser-based data visualizations with the D3 JavaScript library. This hands-on book shows you how to use a combination of JavaScript and SVG to build everything from simple bar charts to complex infographics. You’ll learn how to use basic D3 tools by building visualizations based on real data from the New York Metropolitan Transit Authority. Using historical tables, geographical information, and other data, you’ll graph bus breakdowns and accidents and the percentage of subway trains running on time, among other examples. By the end of the book, you’ll be prepared to build your own web-based data visualizations with D3.

- Join a dataset with elements of a webpage, and modify the elements based on the data
- Map data values onto pixels and colors with D3’s scale objects
- Apply axis and line generators to simplify aspects of building visualizations
- Create a simple UI that allows users to investigate and compare data
- Use D3 transitions in your UI to animate important aspects of the data
- Get an introduction to D3 layout tools for building more sophisticated visualizations

If you can code and manipulate data, and know how to work with JavaScript and SVG, this book is for you.
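The "map data values onto pixels" step is the job of D3's scale objects (d3.scaleLinear and friends). The underlying arithmetic is simple linear interpolation, sketched here in Python for illustration (D3 itself is JavaScript, and the ridership numbers are invented):

```python
# A minimal analogue of d3.scaleLinear: map a data domain onto a pixel range.
def scale_linear(domain, range_):
    d0, d1 = domain
    r0, r1 = range_
    def scale(v):
        t = (v - d0) / (d1 - d0)  # position of v within the domain, 0..1
        return r0 + t * (r1 - r0)
    return scale

# Map ridership counts 0..5000 onto a 0..400-pixel axis.
y = scale_linear((0, 5000), (0, 400))
print(y(2500))  # 200.0: halfway through the data maps halfway down the axis
```

D3's real scales add conveniences on top of this (clamping, ticks, color interpolation), but every chart position ultimately comes from a mapping like this one.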

Pro SQL Server 2012 Integration Services

Pro SQL Server 2012 Integration Services teaches how to take advantage of the powerful extract, transform, and load (ETL) platform included with Microsoft SQL Server 2012. You'll learn to build scalable, robust, performance-driven enterprise ETL solutions that save time and make money for your company. You'll learn to avoid common ETL development pitfalls and how to extend the power of your ETL solutions to include virtually any possible transformation on data from any conceivable source. SQL Server Integration Services (SSIS) facilitates an unprecedented ability to load data from anywhere, perform any type of manipulation on it, and store it to any source. Whether you are populating databases, retrieving data from the Web, or performing complex calculations on large data sets, SSIS gives you the tools to get the job done. And this book gives you the knowledge to take advantage of everything SSIS offers.

- Helps you design and develop robust, efficient, scalable ETL solutions
- Walks you through using the built-in, stock components
- Shows how to programmatically extend the power of SSIS to cover any possible scenario

Classic Problems of Probability

"A great book, one that I will certainly add to my personal library." --Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire

Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexing Paradox, the book clearly outlines the puzzles and problems of probability, interweaving the discussion with rich historical detail and the story of how the mathematicians involved arrived at their solutions. Each problem is given an in-depth treatment, including detailed and rigorous mathematical proofs as needed. Some of the fascinating topics discussed by the author include:

- Buffon's Needle problem and its ingenious treatment by Joseph Barbier, culminating in a discussion of invariance
- Various paradoxes raised by Joseph Bertrand
- Classic problems in decision theory, including Pascal's Wager, Kraitchik's Neckties, and Newcomb's problem
- The Bayesian paradigm and various philosophies of probability
- Coverage of both elementary and more complex problems, including the Chevalier de Méré problems, Fisher and the lady tasting tea, the birthday problem and its various extensions, and the Borel-Kolmogorov paradox

Classic Problems of Probability is an eye-opening, one-of-a-kind reference for researchers and professionals interested in the history of probability and the varied problem-solving strategies employed throughout the ages. The book also serves as an insightful supplement for courses on mathematical probability and introductory probability and statistics at the undergraduate level.
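The birthday problem mentioned above is easy to compute directly: the probability that n people all have distinct birthdays is a product of shrinking fractions, and the complement gives the collision probability. A quick sketch (assuming 365 equally likely birthdays, as the classic formulation does):

```python
# Probability that, among n people, at least two share a birthday.
def birthday_collision(n):
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (365 - i) / 365  # person i+1 avoids the first i birthdays
    return 1 - p_distinct

print(birthday_collision(23))  # just over 1/2 -- the classic surprise
```

The counterintuitive part, and the reason the problem earns its place in the book, is that only 23 people are needed before a shared birthday becomes more likely than not.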

Introduction to Probability and Stochastic Processes with Applications

An easily accessible, real-world approach to probability and stochastic processes. Introduction to Probability and Stochastic Processes with Applications presents a clear, easy-to-understand treatment of probability and stochastic processes, providing readers with a solid foundation they can build upon throughout their careers. With an emphasis on applications in engineering, applied sciences, business and finance, statistics, mathematics, and operations research, the book features numerous real-world examples that illustrate how random phenomena occur in nature and how to use probabilistic techniques to accurately model these phenomena. The authors discuss a broad range of topics, from the basic concepts of probability to advanced topics for further study, including Itô integrals, martingales, and sigma algebras. Additional topical coverage includes:

- Distributions of discrete and continuous random variables frequently used in applications
- Random vectors, conditional probability, expectation, and multivariate normal distributions
- The laws of large numbers, limit theorems, and convergence of sequences of random variables
- Stochastic processes and related applications, particularly in queueing systems
- Financial mathematics, including pricing methods such as risk-neutral valuation and the Black-Scholes formula

Extensive appendices containing a review of the requisite mathematics and tables of standard distributions for use in applications are provided, and plentiful exercises, problems, and solutions are found throughout. Also, a related website features additional exercises with solutions and supplementary material for classroom use. Introduction to Probability and Stochastic Processes with Applications is an ideal book for probability courses at the upper-undergraduate level. The book is also a valuable reference for researchers and practitioners in the fields of engineering, operations research, and computer science who conduct data analysis to make decisions in their everyday work.
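The Black-Scholes formula named in the coverage list prices a European call from five inputs: spot price S, strike K, risk-free rate r, volatility sigma, and time to expiry T. A compact sketch in Python (the parameter values below are illustrative, not from the book):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, r, sigma, T):
    """European call price under the Black-Scholes model."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call: S = K = 100, 5% rate, 20% vol, one year to expiry.
print(bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0))  # ~10.45
```

The risk-neutral valuation idea also mentioned in the blurb is what justifies discounting at the risk-free rate r here rather than at an asset-specific expected return.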

Principles of Data Integration

Principles of Data Integration is the first comprehensive textbook of data integration, covering theoretical principles and implementation issues as well as current challenges raised by the semantic web and cloud computing. The book offers a range of data integration solutions enabling you to focus on what is most relevant to the problem at hand. Readers will also learn how to build their own algorithms and implement their own data integration application. Written by three of the most respected experts in the field, this book provides an extensive introduction to the theory and concepts underlying today's data integration techniques, with detailed instruction for their application, using concrete examples throughout to explain the concepts. This text is an ideal resource for database practitioners in industry, including data warehouse engineers, database system designers, data architects/enterprise architects, database researchers, statisticians, and data analysts; students in data analytics and knowledge discovery; and other data professionals working at the R&D and implementation levels.

- Offers a range of data integration solutions enabling you to focus on what is most relevant to the problem at hand
- Enables you to build your own algorithms and implement your own data integration applications

Joint Models for Longitudinal and Time-to-Event Data

Longitudinal studies often investigate how a marker that is repeatedly measured in time is associated with a time to an event of interest. An example is prostate cancer studies where longitudinal PSA level measurements are collected in conjunction with the time-to-recurrence. This book provides a full treatment of joint models for longitudinal and time-to-event data. The content is explanatory rather than mathematically rigorous and emphasizes applications. All illustrations put forward are available in the R programming language via the freely available package JM written by the author.

SQL Server 2012 Query Performance Tuning, Third Edition

Queries not running fast enough? Tired of the phone calls from frustrated users? Grant Fritchey's book SQL Server 2012 Query Performance Tuning is the answer to your SQL Server query performance problems. Revised to cover the very latest performance optimization features and techniques, and current with SQL Server 2012, the book provides the tools you need to approach your queries with performance in mind. SQL Server 2012 Query Performance Tuning leads you through understanding the causes of poor performance, how to identify them, and how to fix them. You'll learn to be proactive in establishing performance baselines using tools like Performance Monitor and Extended Events. You'll learn to recognize bottlenecks and defuse them before the phone rings. You'll learn some quick solutions too, but the emphasis is on designing for performance, getting it right, and heading off trouble before it occurs. Delight your users. Silence that ringing phone. Put the principles and lessons from SQL Server 2012 Query Performance Tuning into practice today.

- Establish performance baselines and monitor against them
- Troubleshoot and eliminate bottlenecks that frustrate users
- Plan ahead to achieve the right level of performance

Bayesian Estimation and Tracking: A Practical Guide

A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noise. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation of all tracking algorithms within a Bayesian framework and describes effective numerical methods for evaluating density-weighted integrals, including linear and nonlinear Kalman filters for Gaussian-weighted integrals and particle filters for non-Gaussian cases. The author first emphasizes detailed derivations from first principles of each estimation method, and goes on to provide illustrative, detailed step-by-step instructions for each method that make coding of the tracking filter simple and easy to understand. Case studies are employed to showcase applications of the discussed topics. In addition, the book supplies block diagrams for each algorithm, allowing readers to develop their own MATLAB toolbox of estimation methods. Bayesian Estimation and Tracking is an excellent book for courses on estimation and tracking methods at the graduate level. The book also serves as a valuable reference for research scientists, mathematicians, and engineers seeking a deeper understanding of the topics.
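The linear Kalman filter mentioned above reduces, in the scalar case, to a short predict/update cycle. A minimal sketch in Python (the book's own examples target MATLAB, and the noise values and measurements below are made up): the state is modeled as a slowly drifting constant, and each noisy measurement is blended with the prediction by the Kalman gain.

```python
# One predict/update cycle of a scalar Kalman filter for a random-walk state.
def kalman_step(x, P, z, Q, R):
    """x, P: prior estimate and variance; z: measurement; Q, R: noise vars."""
    # Predict: the state model is "unchanged", so only uncertainty grows.
    x_pred, P_pred = x, P + Q
    # Update: the gain K weights the measurement by relative confidence.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0  # vague prior
for z in [1.1, 0.9, 1.05, 0.98]:  # noisy measurements of a value near 1
    x, P = kalman_step(x, P, z, Q=0.01, R=0.1)
print(x, P)  # estimate settles near 1 and the variance shrinks
```

The full matrix form the book derives follows the same two steps, with the gain becoming a matrix that balances process and measurement covariances.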

Statistical Quality Control, 7th Edition

The Seventh Edition of Introduction to Statistical Quality Control provides a comprehensive treatment of the major aspects of using statistical methodology for quality control and improvement. Both traditional and modern methods are presented, including state-of-the-art techniques for statistical process monitoring and control and statistically designed experiments for process characterization, optimization, and process robustness studies. The seventh edition continues to focus on DMAIC (define, measure, analyze, improve, and control--the problem-solving strategy of six sigma) including a chapter on the implementation process. Additionally, the text includes new examples, exercises, problems, and techniques. Statistical Quality Control is best suited for upper-division students in engineering, statistics, business and management science or students in graduate courses.
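The statistical process monitoring the book covers rests on control charts: plot a statistic over time and flag points beyond limits set at three standard deviations from the center line. A deliberately simplified sketch in Python (real X-bar charts, as the book presents them, estimate sigma from within-subgroup variation; here the spread of the subgroup means themselves is used, and the data are invented):

```python
# Shewhart-style 3-sigma control limits for a series of subgroup means.
from statistics import mean, stdev

subgroup_means = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7]

center = mean(subgroup_means)
sigma = stdev(subgroup_means)
ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

# Any point outside the limits signals a possible special cause.
out_of_control = [m for m in subgroup_means if not lcl <= m <= ucl]
print(center, lcl, ucl, out_of_control)
```

For this series every point stays within the limits, so the process would be judged in statistical control; the DMAIC "control" phase is about keeping it that way.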

Audiovisual Archives: Digital Text and Discourse Analysis

Today, audiovisual archives and libraries have become very popular, especially in the field of collecting, preserving and transmitting cultural heritage. However, the data in these archives or libraries - videos, images, soundtracks, etc. - constitute as such only potential cognitive resources for a given public (or "target community"). One of the most crucial issues for digital audiovisual libraries is indeed to enable users to actively appropriate audiovisual resources for their own concerns (in research, education or any other professional or non-professional context). This means adapting the audiovisual data to the specific needs of a user or user group, which can range from small, closed "communities" to networks of open communities around the globe. "Active appropriation" is, basically speaking, the use of existing digital audiovisual resources by users or user communities according to their expectations, needs, interests or desires. This process presupposes: 1) the definition and development of models or "scenarios" for the cognitive processing of videos by the user; 2) the availability of the tools necessary for defining, developing, reusing and sharing meta-linguistic resources such as thesauruses, ontologies or description models by users or user communities. Both aspects are central to the so-called semiotic turn in dealing with digital (audiovisual) texts, corpora of texts, or entire (audiovisual) archives and libraries. They demonstrate, practically and theoretically, the well-known "from data to metadata" or "from (simple) information to (relevant) knowledge" problem, which directly influences the effective use, social impact and relevance, and therefore also the future, of digital knowledge archives. This book offers a systematic, comprehensive approach to these questions from a theoretical as well as practical point of view.

Contents:
Part 1. The Practical, Technical and Theoretical Context
1. Analysis of an Audiovisual Resource
2. The Audiovisual Semiotic Workshop (ASW) Studio: A Brief Presentation
3. A Concrete Example of a Model for Describing Audiovisual Content
4. Model of Description and Task of Analysis
Part 2. Tasks in Analyzing an Audiovisual Corpus
5. The Analytical Task of "Describing the Knowledge Object"
6. The Analytical Task of "Contextualizing the Domain of Knowledge"
7. The Analytical Task of "Analyzing the Discourse Production around a Subject"
Part 3. Procedures of Description
8. Definition of the Domain of Knowledge and Configuration of the Topical Structure
9. The Procedure of Free Description of an Audiovisual Corpus
10. The Procedure of Controlled Description of an Audiovisual Corpus
Part 4. The ASW System of Metalinguistic Resources
11. An Overview of the ASW Metalinguistic Resources
12. The Meta-lexicon Representing the ASW Universe of Discourse

IBM Flex System p260 and p460 Planning and Implementation Guide

To meet today’s complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources that is simple to deploy and can quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven preferred practices in systems management, applications, hardware maintenance, and more. The IBM® Flex System™ p260 and p460 Compute Nodes are IBM Power Systems™ servers optimized for virtualization, performance, and efficiency. The nodes support IBM AIX®, IBM i, or Linux operating environments, and are designed to run various workloads in IBM PureFlex™ System. This IBM Redbooks® publication is a comprehensive guide to IBM PureFlex System and the Power Systems compute nodes. We introduce the offerings and describe the compute nodes in detail. We then describe planning and implementation steps and go through some of the key management features of the IBM Flex System Manager management node. This book is for customers, IBM Business Partners, and IBM technical specialists that want to understand the new offerings and to plan and implement an IBM Flex System installation that involves the Power Systems compute nodes.

Getting Started with Couchbase Server

Starting with the core architecture and structure of Couchbase Server, this title will tell you everything you need to know to install and set up your first Couchbase cluster. You'll be given guidance on sizing your cluster so that you maximise your performance. After installation, you'll be shown how to use the admin web console to administer your server, and then learn the techniques behind specific cluster management tasks. These include adding and removing nodes, rebalancing, and backing up and restoring your cluster.

Developing Essbase Applications

If you love Essbase and hate seeing it misused, then this is the book for you. Written by 12 Essbase professionals who are either acknowledged Essbase gurus or certified Oracle ACEs, Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals provides an unparalleled investigation and explanation of Essbase theory and best practices. Detailing the hows and the whys of successful Essbase implementation, the book arms you with simple yet powerful tools to meet your immediate needs, as well as the theoretical knowledge to proceed to the next level with Essbase. Infrastructure, data sourcing and transformation, database design, calculations, automation, APIs, reporting, and project implementation are covered by subject matter experts who work with the tools and techniques on a daily basis. In addition to practical cases that illustrate valuable lessons learned, the book offers:

- Undocumented secrets: Dan Pressman describes the previously unpublished and undocumented inner workings of the ASO Essbase engine.
- Authoritative experts: If you have questions that no one else can solve, these 12 Essbase professionals are the ones who can answer them.
- Unpublished material: Includes the only third-party guide to infrastructure, which is easy to get wrong and can doom any Essbase project.
- Comprehensive coverage: Let there never again be a question on how to create blocks or design BSO databases for performance; Dave Farnsworth provides the answers within.
- Innovative solutions: Cameron Lackpour and Joe Aultman bring new and exciting solutions to persistent Essbase problems.

With a list of contributors as impressive as the program of presenters at a leading Essbase conference, this book offers unprecedented access to the insights and experiences of those at the forefront of the field.
The previously unpublished material presented in these pages will give you the practical knowledge needed to use this powerful and intuitive tool to build highly useful analytical models, reporting systems, and forecasting applications.

IBM System Blue Gene Solution: Blue Gene/Q System Administration

This IBM® Redbooks® publication is one in a series of books that are written specifically for the IBM System Blue Gene® supercomputer, Blue Gene/Q®, which is the third generation of massively parallel supercomputers from IBM in the Blue Gene series. This book provides an overview of the system administration environment for Blue Gene/Q. It is intended to help administrators understand the tools that are available to maintain this system. This book details Blue Gene Navigator, which has grown to be a full featured web-based system administration tool on Blue Gene/Q. The book also describes many of the day-to-day administrative functions, such as running diagnostics, performing service actions, and monitoring hardware. There are also sections that cover BGmaster and the Control System processes that it monitors. This book is intended for Blue Gene/Q system administrators. It helps them use the tools that are available to maintain the Blue Gene/Q system.

Building an Ensemble Using IBM zEnterprise Unified Resource Manager

For the first time it is possible to deploy an integrated hardware platform that brings mainframe and distributed technologies together: a system that can start to replace individual islands of computing and that can work to reduce complexity, improve security, and bring applications closer to the data that they need.