talk-data.com

Topic: data (5765 tagged)

Activity Trend: peak of 3 activities per quarter, 2020-Q1 to 2026-Q1

Activities: 5765 activities · Newest first

Monetizing Your Data

Transforming data into revenue-generating strategies and actions. Organizations are swamped with data collected from web traffic, point-of-sale systems, enterprise resource planning systems, and more, but what should they do with it? Monetizing Your Data provides a framework and path for business managers to convert ever-increasing volumes of data into revenue-generating actions through three disciplines: decision architecture, data science, and guided analytics. There are large gaps between understanding a business problem, knowing which data is relevant to the problem, and knowing how to leverage that data to drive significant financial performance. Using a proven methodology developed in the field through delivering meaningful solutions to Fortune 500 companies, this book gives you the analytical tools, methods, and techniques to transform the data you already have into information and insights that drive winning decisions. Beginning with an explanation of the analytical cycle, this book guides you through the process of developing value-generating strategies that can translate into big returns. The companion website, www.monetizingyourdata.com, provides templates, checklists, and examples to help you apply the methodology in your environment, and the expert author team provides authoritative guidance every step of the way. This book shows you how to use your data to: Monetize your data to drive revenue and cut costs Connect your data to decisions that drive action and deliver value Develop analytic tools to guide managers up and down the ladder to better decisions Turning data into action is key; data can be a valuable competitive advantage, but only if you understand how to organize it, structure it, and uncover the actionable information hidden within it through decision architecture and guided analytics. From multinational corporations to single-owner small businesses, companies of every size and structure stand to benefit from these tools, methods, and techniques; Monetizing Your Data walks you through the translation and transformation to help you leverage your data into value-creating strategies.

Effective Business Intelligence with QuickSight

Effective Business Intelligence with QuickSight introduces you to Amazon QuickSight, a modern BI tool that enables interactive visualizations powered by the cloud. With comprehensive tutorials, you'll master how to load, prepare, and visualize your data for actionable insights. This book provides real-world examples to showcase how QuickSight integrates into the AWS ecosystem. What this Book will help me do Understand how to effectively use Amazon QuickSight for business intelligence. Learn how to connect QuickSight to data sources like S3, RDS, and more. Create interactive dashboards and visualizations with QuickSight tools. Gain expertise in managing users, permissions, and data security in QuickSight. Execute a real-world big data project using AWS Data Lakes and QuickSight. Author(s) Nadipalli is a seasoned data architect with extensive experience in cloud computing and business intelligence. With expertise in the AWS ecosystem, she has worked on numerous large-scale data analytics projects. Her writing focuses on providing practical knowledge through easy-to-follow examples and actionable insights. Who is it for? This book is ideal for business intelligence architects, developers, and IT executives seeking to leverage Amazon QuickSight. It is suited for readers with foundational knowledge of AWS who want to enhance their capabilities in BI and data visualization. If your goal is to modernize your business intelligence systems and explore advanced analytics, this book is perfect for you.
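
The feature list above centers on connecting QuickSight to AWS data and managing its assets; as one hedged illustration (not from the book), the sketch below lists existing QuickSight dashboards with Python's boto3 client. The region, account ID, and the choice of the list_dashboards call are illustrative assumptions.

```python
# A minimal sketch (not from the book): listing QuickSight dashboards with boto3.
# Assumes boto3's "quicksight" client, valid AWS credentials, and a hypothetical account ID.
import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")
account_id = "123456789012"  # hypothetical AWS account ID

response = quicksight.list_dashboards(AwsAccountId=account_id)
for summary in response.get("DashboardSummaryList", []):
    print(summary["Name"], summary["DashboardId"])
```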

Translating Statistics to Make Decisions: A Guide for the Non-Statistician

Examine and solve the common misconceptions and fallacies that non-statisticians bring to their interpretation of statistical results. Explore the many pitfalls that non-statisticians—and also statisticians who present statistical reports to non-statisticians—must avoid if statistical results are to be correctly used for evidence-based business decision making. Victoria Cox, senior statistician at the United Kingdom's Defence Science and Technology Laboratory (Dstl), distills the lessons of her long experience presenting the actionable results of complex statistical studies to users of widely varying statistical sophistication across many disciplines: from scientists, engineers, analysts, and information technologists to executives, military personnel, project managers, and officials across UK government departments, industry, academia, and international partners. The author shows how faulty statistical reasoning often undermines the utility of statistical results even among those with advanced technical training. Translating Statistics teaches statistically naive readers enough about statistical questions, methods, models, assumptions, and statements that they will be able to extract the practical message from statistical reports and recognize which conclusions can and cannot be drawn from the results. To non-statisticians with some statistical training, this book offers brush-ups, reminders, and tips for the proper use of statistics and solutions to common errors. To fellow statisticians, the author demonstrates how to present statistical output to non-statisticians to ensure that the statistical results are correctly understood and properly applied to real-world tasks and decisions. The book avoids algebra and proofs, but it does supply code written in R for those readers who are motivated to work out examples. Pointing along the way to instructive examples of statistics gone awry, Translating Statistics walks readers through the typical course of a statistical study, progressing from the experimental design stage through the data collection process, exploratory data analysis, descriptive statistics, uncertainty, hypothesis testing, statistical modelling and multivariate methods, to graphs suitable for final presentation. The steady focus throughout the book is on how to turn the mathematical artefacts and specialist jargon that are second nature to statisticians into plain English for corporate customers and stakeholders. The final chapter neatly summarizes the book's lessons and insights for accurately communicating statistical reports to the non-statisticians who commission and act on them. What You'll Learn Recognize and avoid common errors and misconceptions that cause statistical studies to be misinterpreted and misused by non-statisticians in organizational settings Gain a practical understanding of the methods, processes, capabilities, and caveats of statistical studies to improve the application of statistical data to business decisions See how to code statistical solutions in R Who This Book Is For Non-statisticians—including both those with and without an introductory statistics course under their belts—who consume statistical reports in organizational settings, and statisticians who seek guidance for reporting statistical studies to non-statisticians in ways that will be accurately understood and will inform sound business and technical decisions
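
Since the book pairs plain-English interpretation with R code, here is a comparable sketch in Python (an assumption on my part; the book itself uses R) that runs a two-sample t-test with SciPy and then states the conclusion in words rather than bare numbers. The group measurements are invented for illustration.

```python
# A minimal sketch (not from the book, which uses R): a two-sample Welch t-test in Python,
# followed by a plain-English reading of the result of the kind the book advocates.
# The group data below are made up for illustration.
from scipy import stats

control = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
treated = [12.9, 13.1, 12.7, 13.4, 12.8, 13.0]

t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test

# Translate the result rather than just reporting the numbers.
if p_value < 0.05:
    print(f"p = {p_value:.3f}: the observed difference is unlikely to be chance alone.")
else:
    print(f"p = {p_value:.3f}: the data do not provide strong evidence of a difference.")
```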

Beginning Data Science in R: Data Analysis, Visualization, and Modelling for the Data Scientist

Discover best practices for data analysis and software development in R and start on the path to becoming a fully-fledged data scientist. This book teaches you techniques for both data manipulation and visualization and shows you the best way to develop new software packages for R. Beginning Data Science in R details how data science is a combination of statistics, computational science, and machine learning. You'll see how to efficiently structure and mine data to extract useful patterns and build mathematical models. This requires computational methods and programming, and R is an ideal programming language for this. This book is based on a number of lecture notes for classes the author has taught on data science and statistical programming using the R programming language. Modern data analysis requires computational skills and usually at least a minimum of programming. What You Will Learn Perform data science and analytics using statistics and the R programming language Visualize and explore data, including working with large data sets found in big data Build an R package Test and check your code Practice version control Profile and optimize your code Who This Book Is For Those with some data science or analytics background, but not necessarily experience with the R programming language.

Think Like a Data Scientist

Think Like a Data Scientist presents a step-by-step approach to data science, combining analytic, programming, and business perspectives into easy-to-digest techniques and thought processes for solving real world data-centric problems. About the Technology Data collected from customers, scientific measurements, IoT sensors, and so on is valuable only if you understand it. Data scientists revel in the interesting and rewarding challenge of observing, exploring, analyzing, and interpreting this data. Getting started with data science means more than mastering analytic tools and techniques, however; the real magic happens when you begin to think like a data scientist. This book will get you there. About the Book Think Like a Data Scientist teaches you a step-by-step approach to solving real-world data-centric problems. By breaking down carefully crafted examples, you'll learn to combine analytic, programming, and business perspectives into a repeatable process for extracting real knowledge from data. As you read, you'll discover (or remember) valuable statistical techniques and explore powerful data science software. More importantly, you'll put this knowledge together using a structured process for data science. When you've finished, you'll have a strong foundation for a lifetime of data science learning and practice. What's Inside The data science process, step-by-step How to anticipate problems Dealing with uncertainty Best practices in software and scientific thinking About the Reader Readers need beginner programming skills and knowledge of basic statistics. About the Author Brian Godsey has worked in software, academia, finance, and defense and has launched several data-centric start-ups. Quotes Explains difficult concepts and techniques concisely and approachably. - Jenice Tom, CVS Health Goes beyond simple tools and techniques and helps you to conceptualize and solve challenging, real-world data science problems. - Casimir Saternos, Synchronoss Technologies A successful attempt to put the mind of a data scientist on paper. - David Krief, Altansia The book that changed my career path! - Nicolas Boulet-Lavoie, DL Innov

Data Visualization, Volume II

This book discusses data and information visualization techniques: decision-making tools with applications in health care, finance, manufacturing engineering, process improvement, product design, and other fields. These tools are an excellent means of viewing the current state of a process and improving it. The initial chapters discuss data analysis, current trends in visualization, and the concepts of systems and processes from which data are collected. The second part is devoted to quality tools: a set of graphical and information visualization tools used in data analysis, decision-making, and Lean Six Sigma quality. The eight basic tools of quality discussed are Process Maps, Check Sheets, Histograms, Scatter Diagrams, Run Charts, Control Charts, Cause-and-Effect Diagrams, and Pareto Charts. The new quality tools presented are the Affinity, Tree, and Matrix Diagrams, Interrelationship Digraph, Prioritizing Matrices, Process Decision Program Chart, and Activity Network Diagram, along with Quality Function Deployment (QFD) and Multivari Charts.
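
As a hedged illustration of one of the eight basic quality tools named above (not an example taken from the book), the following Python sketch draws a Pareto chart with matplotlib from invented defect counts.

```python
# A minimal sketch (not from the book): a Pareto chart, one of the basic quality tools,
# drawn with matplotlib. The defect categories and counts below are made up.
import matplotlib.pyplot as plt

defects = {"Scratches": 42, "Misalignment": 27, "Porosity": 15, "Cracks": 9, "Other": 7}
labels = list(defects.keys())
counts = list(defects.values())
total = sum(counts)
cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)                                 # defect counts, largest first
ax1.set_ylabel("Count")
ax2 = ax1.twinx()
ax2.plot(range(len(labels)), cumulative, marker="o")    # cumulative percentage line
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)
plt.title("Pareto chart of defect categories (illustrative data)")
plt.show()
```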

Data Science For Dummies, 2nd Edition

Your ticket to breaking into the field of data science! Jobs in data science are projected to outpace the number of people with data science skills—making those with the knowledge to fill a data science position a hot commodity in the coming years. Data Science For Dummies is the perfect starting point for IT professionals and students interested in making sense of an organization's massive data sets and applying their findings to real-world business scenarios. From uncovering rich data sources to managing large amounts of data within hardware and software limitations, ensuring consistency in reporting, merging various data sources, and beyond, you'll develop the know-how you need to effectively interpret data and tell a story that can be understood by anyone in your organization. Provides a background in data science fundamentals and preparing your data for analysis Details different data visualization techniques that can be used to showcase and summarize your data Explains both supervised and unsupervised machine learning, including regression, model validation, and clustering techniques Includes coverage of big data processing tools like MapReduce, Hadoop, Dremel, Storm, and Spark It's a big, big data world out there—let Data Science For Dummies help you harness its power and gain a competitive edge for your organization.

Illuminating Statistical Analysis Using Scenarios and Simulations

Features an integrated approach of statistical scenarios and simulations to aid readers in developing key intuitions needed to understand the wide-ranging concepts and methods of statistics and inference. Illuminating Statistical Analysis Using Scenarios and Simulations presents the basic concepts of statistics and statistical inference using the dual mechanisms of scenarios and simulations. This approach helps readers develop key intuitions and deep understandings of statistical analysis. Scenario-specific sampling simulations depict the results that would be obtained by a very large number of individuals investigating the same scenario, each with their own evidence, while graphical depictions of the simulation results present clear and direct pathways to intuitive methods for statistical inference. These intuitive methods can then be easily linked to traditional formulaic methods, and the author does not simply explain the linkages, but rather provides demonstrations throughout for a broad range of statistical phenomena. In addition, induction and deduction are repeatedly interwoven, which fosters a natural "need-to-know" basis for ordering the topic coverage. Examining computer simulation results is central to the discussion and provides an illustrative way to (re)discover the properties of sample statistics and the role of chance, and to (re)invent corresponding principles of statistical inference. Moreover, the simulation results foreshadow the various mathematical formulas that underlie statistical analysis. In addition, this book: • Features both an intuitive and analytical perspective and includes a broad introduction to the use of Monte Carlo simulation and formulaic methods for statistical analysis • Presents straightforward coverage of the essentials of basic statistics and ensures proper understanding of key concepts such as sampling distributions, the effects of sample size and variance on uncertainty, analysis of proportion, mean and rank differences, covariance, correlation, and regression • Introduces advanced topics such as Bayesian statistics, data mining, model cross-validation, robust regression, and resampling • Contains numerous example problems in each chapter with detailed solutions as well as an appendix that serves as a manual for constructing simulations quickly and easily using Microsoft® Office Excel® Illuminating Statistical Analysis Using Scenarios and Simulations is an ideal textbook for courses, seminars, and workshops in statistics and statistical inference and is appropriate for self-study as well. The book also serves as a thought-provoking treatise for researchers, scientists, managers, technicians, and others with a keen interest in statistical analysis. Jeffrey E. Kottemann, Ph.D., is Professor in the Perdue School at Salisbury University. Dr. Kottemann has published articles in a wide variety of academic research journals in the fields of business administration, computer science, decision sciences, economics, engineering, information systems, psychology, and public administration. He received his Ph.D. in Systems and Quantitative Methods from the University of Arizona.
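
The book builds its sampling simulations in Excel; as a hedged, language-swapped illustration of the same idea, the Python sketch below has many simulated "investigators" each draw a sample and then inspects the spread of their sample means. The population parameters are invented.

```python
# A minimal sketch (not from the book, which constructs its simulations in Excel):
# a scenario-style sampling simulation. Many "investigators" each draw their own sample,
# and the spread of their sample means illustrates a sampling distribution.
import numpy as np

rng = np.random.default_rng(42)
population_mean, population_sd = 100.0, 15.0   # made-up population parameters
sample_size, n_investigators = 30, 10_000

# Each row is one investigator's sample of size 30.
samples = rng.normal(population_mean, population_sd, size=(n_investigators, sample_size))
sample_means = samples.mean(axis=1)

print("Mean of sample means:", sample_means.mean())        # close to 100
print("SD of sample means  :", sample_means.std(ddof=1))   # close to 15 / sqrt(30), about 2.74
```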

Statistical Techniques for Transportation Engineering

Statistical Techniques for Transportation Engineering is written with a systematic approach in mind and covers a full range of data analysis topics, from the introductory level (basic probability, measures of dispersion, random variables, discrete and continuous distributions) through more generally used techniques (common statistical distributions, hypothesis testing), to advanced analysis and statistical modeling techniques (regression, ANOVA, and time series). The book also provides worked-out examples and solved problems for a wide variety of transportation engineering challenges. Demonstrates how to effectively interpret, summarize, and report transportation data using appropriate statistical descriptors Teaches how to identify and apply appropriate analysis methods for transportation data Explains how to evaluate transportation proposals and schemes with statistical rigor
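
As a hedged illustration of the regression material described above (not an example from the book), the sketch below fits a simple linear model of average speed against traffic volume in Python; the data points are invented.

```python
# A minimal sketch (not from the book): a simple linear regression of the kind the book
# covers, fitted with NumPy. The traffic-volume vs. average-speed data are made up.
import numpy as np

volume = np.array([400, 800, 1200, 1600, 2000, 2400])   # vehicles per hour (hypothetical)
speed = np.array([62.0, 58.5, 54.0, 49.8, 45.2, 41.0])  # km/h (hypothetical)

slope, intercept = np.polyfit(volume, speed, deg=1)
predicted = slope * volume + intercept
r_squared = 1 - np.sum((speed - predicted) ** 2) / np.sum((speed - speed.mean()) ** 2)

print(f"speed ~ {intercept:.1f} + {slope:.4f} * volume  (R^2 = {r_squared:.3f})")
```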

Oracle Database Upgrade and Migration Methods: Including Oracle 12c Release 2

Learn all of the available upgrade and migration methods in detail to move to Oracle Database version 12c. You will become familiar with database upgrade best practices to complete the upgrade in an effective manner and understand the Oracle Database 12c patching process. So it’s time to upgrade Oracle Database to version 12c and you need to choose the appropriate method while considering issues such as downtime. This book explains all of the available upgrade and migration methods so you can choose the one that suits your environment. You will be aware of the practical issues and proactive measures to take to upgrade successfully and reduce unexpected issues. With every release of Oracle Database there are new features and fixes to bugs identified in previous versions. As each release becomes obsolete, existing databases need to be upgraded. Oracle Database Upgrade and Migration Methods explains each method along with its strategy, requirements, steps, and known issues that have been seen so far. This book also compares the methods to help you choose the proper method according to your constraints. Also included in this book: Prerequisite patches and pre-upgrade steps Patching to perform changes at the binary and database level to apply bug fixes What You Will Learn: Understand the need and importance of database upgrading and migration Be aware of the challenges associated with database upgrade decision making Compare all upgrade/migration methods Become familiar with database upgrade best practices and recommendations Understand database upgrade concepts in high availability and multi-tenant environments Know the database downgrade steps in case the upgraded database isn’t compatible with the environment Discover the features and benefits to the organization when it moves from the old database version to the latest database version Understand Oracle 12c patching concepts Who This Book Is For: Core database administrators, solution architects, business consultants, and database architects

Beginning Power BI: A Practical Guide to Self-Service Data Analytics with Excel 2016 and Power BI Desktop, Second Edition

Analyze your company's data quickly and easily using Microsoft's latest tools. You will learn to build scalable and robust data models to work from, clean and combine different data sources effectively, and create compelling visualizations and share them with your colleagues. Author Dan Clark takes you through each topic using step-by-step activities and plenty of screen shots to help familiarize you with the tools. This second edition includes new material on advanced uses of Power Query, along with the latest user guidance on the evolving Power BI platform. Beginning Power BI is your hands-on guide to quick, reliable, and valuable data insight. What You'll Learn Simplify data discovery, association, and cleansing Build solid analytical data models Create robust interactive data presentations Combine analytical and geographic data in map-based visualizations Publish and share dashboards and reports Who This Book Is For Business analysts, database administrators, developers, and other professionals looking to better understand and communicate with data

Big Data Visualization

Dive into 'Big Data Visualization' and uncover how to tackle the challenges of visualizing vast quantities of complex data. With a focus on scalable and dynamic techniques, this guide explores the nuances of effective data analysis. You'll master tools and approaches to display, interpret, and communicate data in impactful ways. What this Book will help me do Understand the fundamentals of big data visualization, including unique challenges and solutions. Explore practical techniques for using D3 and Python to visualize and detect anomalies in big data. Learn to leverage dashboards like Tableau to present data insights effectively. Address and improve data quality issues to enhance analysis accuracy. Gain hands-on experience with real-world use cases for tools such as Hadoop and Splunk. Author(s) James D. Miller is an IBM-certified expert specializing in data analytics and visualization. With years of experience handling massive datasets and extracting actionable insights, he is dedicated to sharing his expertise. His practical approach is evident in how he combines tool mastery with a clear understanding of data complexities. Who is it for? This book is designed for data analysts, data scientists, and others involved in interpreting and presenting big datasets. Whether you are a beginner looking to understand big data visualization or an experienced professional seeking advanced tools and techniques, this guide suits your needs perfectly. A foundational knowledge in programming languages like R and big data platforms such as Hadoop is recommended to maximize your learning.
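
The book discusses using Python to detect anomalies before visualizing them; as a hedged illustration (not taken from the book), the sketch below flags outliers in a small series with a z-score rule.

```python
# A minimal sketch (not from the book): simple z-score anomaly detection in Python,
# one of the techniques the book pairs with visualization. The readings are made up.
import numpy as np

readings = np.array([10.2, 9.8, 10.1, 10.0, 10.3, 24.7, 9.9, 10.4, 10.1, 0.3])
z_scores = (readings - readings.mean()) / readings.std(ddof=1)

threshold = 2.0  # flag points more than 2 standard deviations from the mean
anomalies = np.where(np.abs(z_scores) > threshold)[0]
for idx in anomalies:
    print(f"index {idx}: value {readings[idx]} (z = {z_scores[idx]:+.2f})")
```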

Data Visualization with D3 4.x Cookbook - Second Edition

This book, 'Data Visualization with D3 4.x Cookbook' by Nick Zhu, is your ultimate guide to mastering data visualization using D3.js. Through practical recipes, you'll learn to create dynamic, data-driven visualizations and tackle real-world visualization challenges. The book also introduces techniques to manage and present data powerfully. What this Book will help me do Master D3.js 4.x features to create efficient data visualizations. Utilize pre-built recipes to generate diverse charts and graphs. Acquire expertise in manipulating datasets for visualization. Develop interactive, dynamic web applications with D3. Overcome common visualization challenges with practical solutions. Author(s) Nick Zhu is a professional data engineer and an expert in creating data-driven applications. With years of experience using D3.js, Nick brings his wealth of knowledge to writing, making complex concepts accessible to learners. He creates resources to help others enhance their data visualization skills. Who is it for? This book is ideal for developers and data analysts familiar with web technologies like HTML, CSS, and JavaScript, aiming to expand their skills with D3.js. Whether you're new to D3 or experienced and looking for a comprehensive reference, this book will empower you to create professional-grade visualizations.

Mastering Elastic Stack

Mastering Elastic Stack is your complete guide to advancing your data analytics expertise using the ELK Stack. With detailed coverage of Elasticsearch, Logstash, Kibana, Beats, and X-Pack, this book equips you with the skills to process and analyze any type of data efficiently. Through practical examples and real-world scenarios, you'll gain the ability to build end-to-end pipelines and create insightful dashboards. What this Book will help me do Build and manage log pipelines using Logstash, Beats, and Elasticsearch for real-time analytics. Develop advanced Kibana dashboards to visualize and interpret complex datasets. Efficiently utilize X-Pack features for alerting, monitoring, and security in the Elastic Stack. Master plugin customization and deployment for a tailored Elastic Stack environment. Apply Elastic Stack solutions to real-world cases for centralized logging and actionable insights. Author(s) The authors are experienced technologists who have spent years working at the forefront of data processing and analytics. They are well-versed in Elasticsearch, Logstash, Kibana, and the Elastic ecosystem, having worked extensively in enterprise environments where these tools have transformed operations. Their passion for teaching and thorough understanding of the tools culminate in this comprehensive resource. Who is it for? The ideal reader is a developer already familiar with Elasticsearch, Logstash, and Kibana who wants to deepen their understanding of the stack. If you're involved in creating scalable data pipelines, analyzing complex datasets, or looking to implement centralized logging solutions in your work, this book is an excellent resource. It bridges the gap from intermediate to expert knowledge, allowing you to use the Elastic Stack effectively in various scenarios. Whether you are moving beyond the basics or enhancing your existing skill set, this book meets your needs.
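
As a hedged illustration of the kind of pipeline endpoint the book works toward (not an example from the book, which configures Logstash and Kibana directly), the Python sketch below indexes and searches a log event with the Elasticsearch client; it assumes an 8.x-style client API and a local cluster.

```python
# A minimal sketch (not from the book): indexing and searching a made-up log event with
# the Elasticsearch Python client. Assumes an 8.x-style client API (older clients use
# body= instead of document=/query=) and a cluster reachable at localhost:9200.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a single, made-up log event.
es.index(index="app-logs", document={"level": "ERROR", "message": "payment service timeout"})

# Search for error-level events.
resp = es.search(index="app-logs", query={"match": {"level": "ERROR"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["message"])
```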

QGIS: Becoming a GIS Power User

Master data management, visualization, and spatial analysis techniques in QGIS and become a GIS power user About This Book Learn how to work with various types of data and create beautiful maps using this easy-to-follow guide Give a touch of professionalism to your maps, both for functionality and look and feel, with the help of this practical guide This progressive, hands-on guide builds on geospatial data and adds more reactive maps using geometry tools. Who This Book Is For If you are a user, developer, or consultant and want to know how to use QGIS to achieve the results you are used to from other types of GIS, then this learning path is for you. You are expected to be comfortable with core GIS concepts. This Learning Path will make you an expert with QGIS by showing you how to develop more complex, layered map applications. It will launch you to the next level of GIS users. What You Will Learn Create your first map by styling both vector and raster layers from different data sources Use parameters such as precipitation, relative humidity, and temperature to predict the vulnerability of fields and crops to mildew Re-project vector and raster data and see how to convert between different style formats Use a mix of web services to provide a collaborative data system Use raster analysis and a model automation tool to model the physical conditions for hydrological analysis Get the most out of the cartographic tools in QGIS and discover advanced tips and tricks of cartography In Detail The first module, Learning QGIS, Third Edition, covers the installation and configuration of QGIS. You'll become a master in data creation and editing, and creating great maps. By the end of this module, you'll be able to extend QGIS with Python and go in depth into developing custom tools for the Processing Toolbox. The second module, QGIS Blueprints, gives you an overview of the application types and the technical aspects along with a few examples from the digital humanities. After estimating unknown values using interpolation methods and demonstrating visualization and analytical techniques, the module ends by creating an editable and data-rich map for the discovery of community information. The third module, QGIS 2 Cookbook, covers data input and output with special instructions for trickier formats. Later, we dive into exploring data, data management, and preprocessing steps to cut your data to just the important areas. At the end of this module, you will dive into the methods for analyzing routes and networks, and learn how to take QGIS beyond the out-of-the-box features with plug-ins, customization, and add-on tools. This Learning Path combines some of the best that Packt has to offer in one complete, curated package. It includes content from the following Packt products: Learning QGIS, Third Edition by Anita Graser QGIS Blueprints by Ben Mearns QGIS 2 Cookbook by Alex Mandel, Víctor Olaya Ferrero, Anita Graser, Alexander Bruy Style and approach This Learning Path will get you up and running with QGIS. We start off with an introduction to QGIS and create maps and plugins. Then, we will guide you through Blueprints for geographic web applications, each of which will teach you a different feature by boiling down a complex workflow into steps you can follow. Finally, you'll turn your attention to becoming a QGIS power user and master data management, visualization, and spatial analysis techniques of QGIS.
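
As a hedged illustration of extending QGIS with Python, as mentioned for the Learning QGIS module (not code from the books), the sketch below loads a vector layer with PyQGIS. It uses the QGIS 3-style API, whereas this Learning Path targets QGIS 2, where layers are registered through QgsMapLayerRegistry; the shapefile path is hypothetical and the code must run inside the QGIS Python environment.

```python
# A minimal sketch (not from the books): loading and registering a vector layer with
# PyQGIS. QGIS 3-style API (QGIS 2 registers layers via QgsMapLayerRegistry instead).
# Runs inside the QGIS Python environment; the shapefile path is hypothetical.
from qgis.core import QgsVectorLayer, QgsProject

layer = QgsVectorLayer("/data/parcels.shp", "parcels", "ogr")
if not layer.isValid():
    raise RuntimeError("Layer failed to load; check the path and format.")
QgsProject.instance().addMapLayer(layer)
print(f"Loaded {layer.featureCount()} features into layer '{layer.name()}'")
```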

The Data Science Handbook

A comprehensive overview of data science covering the analytics, programming, and business skills necessary to master the discipline Finding a good data scientist has been likened to hunting for a unicorn: the required combination of technical skills is simply very hard to find in one person. In addition, good data science is not just rote application of trainable skill sets; it requires the ability to think flexibly about all these areas and understand the connections between them. This book provides a crash course in data science, combining all the necessary skills into a unified discipline. Unlike many analytics books, computer science and software engineering are given extensive coverage since they play such a central role in the daily work of a data scientist. The author also describes classic machine learning algorithms, from their mathematical foundations to real-world applications. Visualization tools are reviewed, and their central importance in data science is highlighted. Classical statistics is addressed to help readers think critically about the interpretation of data and its common pitfalls. The clear communication of technical results, which is perhaps the most undertrained of data science skills, is given its own chapter, and all topics are explained in the context of solving real-world data problems. The book also features: • Extensive sample code and tutorials using Python™ along with its technical libraries • Core technologies of “Big Data,” including their strengths and limitations and how they can be used to solve real-world problems • Coverage of the practical realities of the tools, keeping theory to a minimum; however, when theory is presented, it is done in an intuitive way to encourage critical thinking and creativity • A wide variety of case studies from industry • Practical advice on the realities of being a data scientist today, including the overall workflow, where time is spent, the types of datasets worked on, and the skill sets needed The Data Science Handbook is an ideal resource for data analysis methodology and big data software tools. The book is appropriate for people who want to practice data science, but lack the required skill sets. This includes software professionals who need to better understand analytics and statisticians who need to understand software. Modern data science is a unified discipline, and it is presented as such. This book is also an appropriate reference for researchers and entry-level graduate students who need to learn real-world analytics and expand their skill set. FIELD CADY is the data scientist at the Allen Institute for Artificial Intelligence, where he develops tools that use machine learning to mine scientific literature. He has also worked at Google and several Big Data startups. He has a BS in physics and math from Stanford University, and an MS in computer science from Carnegie Mellon.

Learning PySpark

"Learning PySpark" guides you through mastering the integration of Python with Apache Spark to build scalable and efficient data applications. You'll delve into Spark 2.0's architecture, efficiently process data, and explore PySpark's capabilities ranging from machine learning to structured streaming. By the end, you'll be equipped to craft and deploy robust data pipelines and applications. What this Book will help me do Master the Spark 2.0 architecture and its Python integration with PySpark. Leverage PySpark DataFrames and RDDs for effective data manipulation and analysis. Develop scalable machine learning models using PySpark's ML and MLlib libraries. Understand advanced PySpark features such as GraphFrames for graph processing and TensorFrames for deep learning models. Gain expertise in deploying PySpark applications locally and on the cloud for production-ready solutions. Author(s) Authors None Drabas and None Lee bring extensive experience in data engineering and Python programming. They combine a practical, example-driven approach with deep insights into Apache Spark's ecosystem. Their expertise and clarity in writing make this book accessible for individuals aiming to excel in big data technologies with Python. Who is it for? This book is best suited for Python developers who want to integrate Apache Spark 2.0 into their workflow to process large-scale data. Ideal readers will have foundational knowledge of Python and seek to build scalable data-intensive applications using Spark, regardless of prior experience with Spark itself.

Oracle Database 12c Release 2 In-Memory: Tips and Techniques for Maximum Performance

Master Oracle Database 12c Release 2’s powerful In-Memory option This Oracle Press guide shows, step-by-step, how to optimize database performance and cut transaction processing time using Oracle Database 12c Release 2 In-Memory. Oracle Database 12c Release 2 In-Memory: Tips and Techniques for Maximum Performance features hands-on instructions, best practices, and expert tips from an Oracle enterprise architect. You will learn how to deploy the software, use In-Memory Advisor, build queries, and interoperate with Oracle RAC and Multitenant. A complete chapter of case studies illustrates real-world applications. • Configure Oracle Database 12c and construct In-Memory enabled databases • Edit and control In-Memory options from the graphical interface • Implement In-Memory with Oracle Real Application Clusters • Use the In-Memory Advisor to determine what objects to keep In-Memory • Optimize In-Memory queries using groups, expressions, and aggregations • Maximize performance using Oracle Exadata Database Machine and In-Memory option • Use Swingbench to create data and simulate real-life system workloads

JMP 13 Consumer Research, Second Edition, 2nd Edition

JMP 13 Consumer Research focuses on analyses that help users observe and predict subjects' behavior, particularly users in the market research field. The Uplift platform predicts consumer behavior based on shifts in marketing efforts. Learn how to tabulate and summarize categorical responses with the Categorical platform. Factor Analysis rotates principal components to help identify which directions have the most variation among the variables. The book also covers Item Analysis, a method for identifying latent traits that might affect an individual's choices. And read about the Choice platform, which market researchers use to estimate probability in consumer spending.

JMP 13 Design of Experiments Guide, Second Edition, 2nd Edition

The JMP 13 Design of Experiments Guide covers classic DOE designs (for example, full factorial, response surface, and mixture designs). Read about more flexible custom designs, which you generate to fit your particular experimental situation. And discover JMP’s definitive screening designs, an efficient way to identify important factor interactions using fewer runs than required by traditional designs. The book also provides guidance on determining an appropriate sample size for your study.
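
As a hedged illustration of the classic full factorial designs mentioned above (not from the guide, which builds designs inside JMP), the Python sketch below enumerates the run matrix for a two-level, three-factor design; the factors and levels are invented.

```python
# A minimal sketch (not from the guide, which works inside JMP): generating the run
# matrix for a two-level full factorial design with three made-up factors.
from itertools import product

factors = {"Temperature": [150, 180], "Pressure": [30, 50], "Catalyst": ["A", "B"]}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for i, run in enumerate(runs, start=1):   # 2^3 = 8 runs
    print(f"Run {i}: {run}")
```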