talk-data.com

Topic: data (5765 tagged)

Activity Trend: 2020-Q1 to 2026-Q1 (peak 3/qtr)

Activities

5765 activities · Newest first

Getting Started with the Graph Template Language in SAS

You've just received a new survey of study results, and you need to quickly create custom graphical views of the data. Or, you've completed your analysis, and you need graphs to present the results to your audience, in the style that they prefer. Now, you can create custom graphs quickly and easily with Getting Started with the Graph Template Language in SAS, without having to understand all of the Graph Template Language (GTL) features first.

This book will get you started building graphs immediately and will guide you toward a better understanding of the GTL, one step at a time. It shows you the most common approaches to a variety of graphs along with information that you can use to build more complex graphs from there. Sanjay Matange offers expert tips, examples, and techniques, with a goal of providing you with a solid foundation in using the GTL so that you can progress to more sophisticated, adaptable graphs as you need them.

Ultimately, Getting Started with the Graph Template Language in SAS allows you to bypass the learning curve. It teaches you how to quickly create custom, aesthetically pleasing graphs that present your data with maximum clarity and minimum clutter.

This book is part of the SAS Press program.

Implementing the IBM Storwize V5000

Organizations of all sizes are faced with the challenge of managing massive volumes of increasingly valuable data. But storing this data can be costly, and extracting value from the data is becoming more difficult. IT organizations have limited resources but must stay responsive to dynamic environments and act quickly to consolidate, simplify, and optimize their IT infrastructures. The IBM® Storwize® V5000 system provides a smarter solution that is affordable, easy to use, and self-optimizing, which enables organizations to overcome these storage challenges. Storwize V5000 delivers efficient, entry-level configurations that are specifically designed to meet the needs of small and midsize businesses. Designed to provide organizations with the ability to consolidate and share data at an affordable price, Storwize V5000 offers advanced software capabilities that are usually found in more expensive systems. This IBM Redbooks® publication is intended for pre-sales and post-sales technical support professionals and storage administrators. The concepts in this book also relate to the IBM Storwize V3700. This book was written at a software level of Version 7 Release 1.

Agile Data Science

Mining big data requires a deep investment in people and time. How can you be sure you’re building the right models? With this hands-on book, you’ll learn a flexible toolset and methodology for building effective analytics applications with Hadoop. Using lightweight tools such as Python, Apache Pig, and the D3.js library, your team will create an agile environment for exploring data, starting with an example application to mine your own email inboxes. You’ll learn an iterative approach that enables you to quickly change the kind of analysis you’re doing, depending on what the data is telling you. All example code in this book is available as working Heroku apps.
- Create analytics applications by using the agile big data development methodology
- Build value from your data in a series of agile sprints, using the data-value stack
- Gain insight by using several data structures to extract multiple features from a single dataset
- Visualize data with charts, and expose different aspects through interactive reports
- Use historical data to predict the future, and translate predictions into action
- Get feedback from users after each sprint to keep your project on track
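The starting point the blurb describes, mining your own email inbox, is easy to picture with a short sketch. The following Python snippet is an independent illustration (not code from the book), assuming a local mbox export; the file name inbox.mbox is a placeholder. It counts messages per sender as a first exploratory pass over the data.

```python
import mailbox
from collections import Counter

# Path to a local mbox export of your inbox (e.g., from a mail client or Gmail Takeout).
# "inbox.mbox" is a placeholder; point it at your own file.
MBOX_PATH = "inbox.mbox"

senders = Counter()
for message in mailbox.mbox(MBOX_PATH):
    senders[message.get("From", "unknown")] += 1

# A first, crude summary: who emails you the most?
for sender, count in senders.most_common(10):
    print(f"{count:5d}  {sender}")
```

From a crude summary like this, each agile sprint can layer on richer features, charts, and eventually predictions, as the bullet list above outlines.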

IBM Content Manager OnDemand Guide

This IBM® Redbooks® publication provides a practical guide to the design, installation, configuration, and maintenance of IBM Content Manager OnDemand Version 9.0. Content Manager OnDemand manages high-volume storage and retrieval of electronic statements and provides efficient enterprise report management. Content Manager OnDemand transforms formatted computer output and printed reports, such as statements and invoices, into electronic information for easy report management. Content Manager OnDemand helps eliminate costly, high-volume print output by capturing, indexing, archiving, and presenting electronic information for improved customer service. This publication covers the key areas of Content Manager OnDemand, some of which might not be well known to the Content Manager OnDemand community or might be misunderstood. The book covers various topics, including basic information about administration, database structure, storage management, and security. In addition, the book covers data indexing, loading, conversion, and expiration. Other topics include user exits, performance, retention management, and records management. Because many other resources are available that address subjects on different platforms, this publication is not intended as a comprehensive guide for Content Manager OnDemand; rather, it is intended to complement the existing Content Manager OnDemand documentation and provide insight into the issues that might be encountered in the setup and use of Content Manager OnDemand.

KNIME Essentials

KNIME Essentials is a comprehensive guide to mastering KNIME, an open-source data analytics platform. Through this book, you'll discover how to process, visualize, and report on data effectively. Whether you're new to KNIME or data analytics in general, this resource is designed to equip you with the skills needed to handle data challenges confidently.

What this book will help me do:
- Understand how to install and set up KNIME for data analysis tasks.
- Learn to create workflows to efficiently process data.
- Explore methods for importing and pre-processing data from various sources.
- Master techniques for visualizing and analyzing processed data.
- Generate professional-grade reports based on your data visualizations.

Author(s): Gábor Bakos, the author of KNIME Essentials, leverages his expertise in data analytics and software tools to provide readers with a practical guide to mastering KNIME. With years of experience in working with analytics platforms, he crafts content that is accessible and focused on delivering real-world results. His user-focused approach helps readers quickly grasp complex concepts.

Who is it for? This book is ideal for data analysts and professionals seeking to enhance their data processing skills with KNIME. No prior knowledge of KNIME is expected, but a foundational understanding of data analytics concepts would be beneficial. If you're looking to produce insightful analytics and reports efficiently, this guide is tailored for you.

Performance Management: Using IBM InfoSphere Optim Performance Manager and Query Workload Tuner

This IBM® Redbooks® publication describes the architecture and components of IBM InfoSphere® Optim™ Performance Manager Extended Edition. Intended for DBAs and those involved in systems performance, it provides information for installation, configuration, and deployment. InfoSphere Optim Performance Manager delivers a new paradigm for monitoring and managing database and database application performance issues. The book describes the product's dashboards and reports and provides scenarios for how they can be used to identify, diagnose, prevent, and resolve database performance problems. IBM InfoSphere Optim Query Workload Tuner facilitates query and query workload analysis and provides expert recommendations for improving query and query workload performance. Use InfoSphere Optim Performance Manager to identify slow-running queries, top CPU consumers, or query workloads needing performance improvements and seamlessly transfer them to InfoSphere Optim Query Workload Tuner for analysis and recommendations. This is done using query formatting annotated with relevant statistics, graphical or hierarchical views of the access plan, and access plan analysis. The tool further provides recommendations for improving query structure, statistics collection, and indexes, including generated command syntax and the rationale for the recommendations. Please note that the additional material referenced in the text is not available from IBM.

IBM Virtualization Engine TS7700 with R3.0

This IBM® Redbooks® publication highlights TS7700 Virtualization Engine Release 3.0. It is intended for system architects who want to integrate their storage systems for smoother operation. The IBM Virtualization Engine TS7700 offers a modular, scalable, and high-performing architecture for mainframe tape virtualization for the IBM System z® environment. It integrates 3592 Tape Drives, high-performance disks, and a new disk cache subsystem into a storage hierarchy. This storage hierarchy is managed by robust storage management firmware with extensive self-management capability. It includes the following advanced functions:
- Policy management to control physical volume pooling
- Cache management
- Dual copy, including across a grid network
- Copy mode control

The TS7700 Virtualization Engine offers enhanced statistical reporting. It also includes a standards-based management interface for TS7700 Virtualization Engine management. The new IBM Virtualization Engine TS7700 Release 3.0 continues the next generation of TS7700 Virtualization Engine servers for System z tape:
- IBM Virtualization Engine TS7720 Server Model VEB with 3956-CS9 and 3 TB disk drive modules (DDMs)
- IBM Virtualization Engine TS7740 Server Model V07 with 3956-CC9 and 600 GB DDMs

These Virtualization Engines are based on IBM POWER7® technology. They offer improved performance for most System z tape workloads compared to the first generation of TS7700 Virtualization Engine servers. TS7700 Virtualization Engine Release 3.0 builds on the existing capabilities of the TS7700 family. It also introduces the following capabilities:
- Up to 4,000,000 logical volumes per grid domain
- Disk cache refresh using 3956-CC9 for TS7740 Model V07 and 3956-CS9 for TS7720 Model VEB Virtualization Engines
- IPv4 or IPv6 support for the customer network and IP Security for grid communication

Introduction to Statistical Process Control

A major tool for quality control and management, statistical process control (SPC) monitors sequential processes, such as production lines and Internet traffic, to ensure that they work stably and satisfactorily. Along with covering traditional methods, Introduction to Statistical Process Control describes many recent SPC methods that improve upon the more established techniques. The author—a leading researcher on SPC—shows how these methods can handle new applications. After exploring the role of SPC and other statistical methods in quality control and management, the book covers basic statistical concepts and methods useful in SPC. It then systematically describes traditional SPC charts, including the Shewhart, CUSUM, and EWMA charts, as well as recent control charts based on change-point detection and fundamental multivariate SPC charts under the normality assumption. The text also introduces novel univariate and multivariate control charts for cases when the normality assumption is invalid and discusses control charts for profile monitoring. All computations in the examples are solved using R, with R functions and datasets available for download on the author’s website. Offering a systematic description of both traditional and newer SPC methods, this book is ideal as a primary textbook for a one-semester course in disciplines concerned with process quality control, such as statistics, industrial and systems engineering, and management sciences. It can also be used as a supplemental textbook for courses on quality improvement and system management. In addition, the book provides researchers with many useful, recent research results on SPC and gives quality control practitioners helpful guidelines on implementing up-to-date SPC techniques.
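As a concrete illustration of the kind of chart the book covers, the following Python sketch computes a tabular CUSUM for a simulated process whose mean shifts upward partway through. It is an independent example using common textbook defaults (k = 0.5σ, h = 5σ), not the book's R code, whose functions and datasets are available on the author's website.

```python
import numpy as np

def tabular_cusum(x, mu0, sigma, k=0.5, h=5.0):
    """Upper and lower tabular CUSUM statistics for a sequence of observations.

    k (reference value) and h (decision interval) are given in multiples of sigma;
    k=0.5 and h=5 are common textbook defaults.
    """
    kk, hh = k * sigma, h * sigma
    c_plus = np.zeros(len(x))
    c_minus = np.zeros(len(x))
    cp = cm = 0.0
    for i, xi in enumerate(x):
        cp = max(0.0, cp + xi - (mu0 + kk))    # accumulates evidence of an upward shift
        cm = max(0.0, cm + (mu0 - kk) - xi)    # accumulates evidence of a downward shift
        c_plus[i], c_minus[i] = cp, cm
    signals = np.where((c_plus > hh) | (c_minus > hh))[0]
    first_signal = int(signals[0]) if signals.size else None
    return c_plus, c_minus, first_signal

# Simulated process: in control for 30 samples, then the mean shifts up by one sigma.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(10.0, 1.0, 30), rng.normal(11.0, 1.0, 30)])
_, _, first_signal = tabular_cusum(x, mu0=10.0, sigma=1.0)
print("First out-of-control signal at sample index:", first_signal)
```

The same pattern, tracking a running statistic and signaling when it crosses a control limit, underlies the Shewhart and EWMA charts mentioned above.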

Oracle Database 12c Install, Configure & Maintain Like a Professional

Master the Fundamentals of Oracle Database 12c

Filled with easy-to-follow tutorials, this Oracle Press guide provides detailed coverage of core database concepts, the role of the administrator, and enterprise database capabilities. Oracle Database 12c: Install, Configure & Maintain Like a Professional walks you through database configuration, administration, programming, backup and recovery, and high availability. You'll get in-depth introductions to SQL and PL/SQL as well as important information on managing large databases and using Oracle's engineered systems. This essential beginner's resource features:
- Critical Skills -- lists of specific skills covered in each chapter
- Projects -- practical exercises that show how to apply the critical skills learned in each chapter
- Progress Checks -- quick self-assessment sections to check your progress
- Ask the Expert -- Q&A sections filled with helpful tips
- Notes -- extra information related to the topic being covered
- Mastery Checks -- chapter-ending quizzes to test your knowledge

Digital Analytics Primer

Learn the concepts and methods for creating economic and business value with digital analytics, mobile analytics, web analytics, and market research and social media data. In Digital Analytics Primer, pioneering expert Judah Phillips introduces the concepts, terms, and methods that comprise the science and art of digital analysis for web, site, social, video, and other types of quantitative and qualitative data. Business readers—from new practitioners to experienced executives—who want to understand how digital analytics can be used to reduce costs and increase profitable revenue throughout the business should read this book. Phillips delivers a comprehensive review of the core concepts, vocabulary, and frameworks, including analytical methods and tools that can help you successfully integrate analytical processes, technology, and people into all aspects of business operations. This unbiased and product-independent primer draws from the author's extensive experience doing and managing analytics in this field.

Implementing the IBM Storwize V3700

Organizations of all sizes are faced with the challenge of managing massive volumes of increasingly valuable data. But storing this data can be costly, and extracting value from the data is becoming more and more difficult. IT organizations have limited resources but must stay responsive to dynamic environments and act quickly to consolidate, simplify, and optimize their IT infrastructures. The IBM® Storwize® V3700 system provides a smarter solution that is affordable, easy to use, and self-optimizing, which enables organizations to overcome these storage challenges. Storwize V3700 delivers efficient, entry-level configurations that are specifically designed to meet the needs of small and midsize businesses. Designed to provide organizations with the ability to consolidate and share data at an affordable price, Storwize V3700 offers advanced software capabilities that are usually found in more expensive systems. Built upon innovative IBM technology, Storwize V3700 addresses the block storage requirements of small and midsize organizations. Providing up to 240 TB of capacity packaged in a compact 2U, Storwize V3700 is designed to accommodate the most common storage network technologies to enable easy implementation and management. This IBM Redbooks® publication is intended for pre- and post-sales technical support professionals and storage administrators. The concepts in this book also relate to the IBM Storwize V3500. This book was written at a software level of Version 7 Release 1.

The Culture of Big Data

Technology does not exist in a vacuum. In the same way that a plant needs water and nourishment to grow, technology needs people and process to thrive and succeed. Culture (i.e., people and process) is integral and critical to the success of any new technology deployment or implementation. Big data is not just a technology phenomenon. It has a cultural dimension. It's vitally important to remember that most people have not considered the immense difference between a world seen through the lens of a traditional relational database system and a world seen through the lens of a Hadoop Distributed File System. This paper broadly describes the cultural challenges that accompany efforts to create and sustain big data initiatives in an evolving world whose data management processes are rooted firmly in traditional data warehouse architectures.

DB2 10.5 with BLU Acceleration

UPGRADE TO THE NEW GENERATION OF DATABASE SOFTWARE FOR THE ERA OF BIG DATA!

If big data is an untapped natural resource, how do you find the gold hidden within? Leaders realize that big data means all data, and are moving quickly to extract more value from both structured and unstructured application data. However, analyzing this data can prove costly and complex, especially while protecting the availability, performance, and reliability of essential business applications. In the new era of big data, businesses require data systems that can blend always-available transactions with speed-of-thought analytics. DB2 10.5 with BLU Acceleration provides this speed, simplicity, and affordability while making it easier to build next-generation applications with NoSQL features, such as a MongoDB-style JSON document store, a graph store, and more. Dynamic in-memory columnar processing and other innovations deliver faster insights from more data, and enhanced pureScale clustering technology delivers high-availability transactions with application-transparent scalability for business continuity. With this book, you'll learn about the power and flexibility of multi-workload, multi-platform database software. Use the comprehensive knowledge from a team of DB2 developers and experts to get started with the latest DB2 trial version you can download at ibm.com/developerworks/downloads/im/db2/. Stay up to date on DB2 by visiting ibm.com/db2/.

Healthcare Analytics for Quality and Performance Improvement

Improve patient outcomes, lower costs, reduce fraud—all with healthcare analytics

Healthcare Analytics for Quality and Performance Improvement walks your healthcare organization from relying on generic reports and dashboards to developing powerful analytic applications that drive effective decision-making throughout your organization. Renowned healthcare analytics leader Trevor Strome reveals in this groundbreaking volume the true potential of analytics to harness the vast amounts of data being generated in order to improve the decision-making ability of healthcare managers and improvement teams.
- Examines how technology has impacted healthcare delivery
- Discusses the challenge facing healthcare organizations: to leverage advances in both clinical and information technology to improve quality and performance while containing costs
- Explores the tools and techniques to analyze and extract value from healthcare data
- Demonstrates how the clinical, business, and technology components of healthcare organizations (HCOs) must work together to leverage analytics

Other industries are already taking advantage of big data. Healthcare Analytics for Quality and Performance Improvement helps the healthcare industry make the most of the precious data already at its fingertips for long-overdue quality and performance improvement.

Joe Celko’s Complete Guide to NoSQL

Joe Celko's Complete Guide to NoSQL provides a complete overview of non-relational technologies so that you can become more nimble to meet the needs of your organization. As data continues to explode and grow more complex, SQL is becoming less useful for querying data and extracting meaning. In this new world of bigger and faster data, you will need to leverage non-relational technologies to get the most out of the information you have. Learn where, when, and why the benefits of NoSQL outweigh those of SQL with Joe Celko's Complete Guide to NoSQL. This book covers three areas that make today's new data different from the data of the past: velocity, volume, and variety. When information is changing faster than you can collect and query it, it simply cannot be treated the same as static data. Celko will help you understand velocity, to equip you with the tools to drink from a fire hose. Old storage and access models do not work for big data. Celko will help you understand volume, as well as different ways to store and access data at petabyte and exabyte scale. Not all data can fit into a relational model, including genetic data, semantic data, and data generated by social networks. Celko will help you understand variety, as well as the alternative storage, query, and management frameworks needed by certain kinds of data.
- Gain a complete understanding of the situations in which SQL has more drawbacks than benefits so that you can better determine when to utilize NoSQL technologies for maximum benefit
- Recognize the pros and cons of columnar, streaming, and graph databases
- Make the transition to NoSQL with the expert guidance of best-selling SQL expert Joe Celko

Securing Your Mobile Business with IBM Worklight

The IBM® Worklight® mobile application platform helps you to develop, deploy, host, and manage mobile enterprise applications. It also enables companies to integrate security into their overall mobile application lifecycle. This IBM Redbooks® publication describes the security capabilities offered by Worklight to address mobile application security objectives. The book begins with an overview of IBM MobileFirst and its security offerings. The book also describes a business scenario illustrating where security is needed in mobile solutions, and how Worklight can help you achieve it. This publication then provides specific, hands-on guidance about how to integrate Worklight with enterprise security. It also provides step-by-step guidance to implementing mobile security features, including direct update, remote disable, and encrypted offline cache. Integration between Worklight and other IBM security technologies is also covered, including integration with IBM Security Access Manager and IBM WebSphere® DataPower®. This Redbooks publication is of interest to anyone looking to better understand mobile security, and to learn how to enhance mobile security with Worklight.

Oracle Big Data Handbook

Transform Big Data into Insight

"In this book, some of Oracle's best engineers and architects explain how you can make use of big data. They'll tell you how you can integrate your existing Oracle solutions with big data systems, using each where appropriate and moving data between them as needed." -- Doug Cutting, co-creator of Apache Hadoop

Cowritten by members of Oracle's big data team, Oracle Big Data Handbook provides complete coverage of Oracle's comprehensive, integrated set of products for acquiring, organizing, analyzing, and leveraging unstructured data. The book discusses the strategies and technologies essential for a successful big data implementation, including Apache Hadoop, Oracle Big Data Appliance, Oracle Big Data Connectors, Oracle NoSQL Database, Oracle Endeca, Oracle Advanced Analytics, and Oracle's open source R offerings. Best practices for migrating from legacy systems and integrating existing data warehousing and analytics solutions into an enterprise big data infrastructure are also included in this Oracle Press guide.
- Understand the value of a comprehensive big data strategy
- Maximize the distributed processing power of the Apache Hadoop platform
- Discover the advantages of using Oracle Big Data Appliance as an engineered system for Hadoop and Oracle NoSQL Database
- Configure, deploy, and monitor Hadoop and Oracle NoSQL Database using Oracle Big Data Appliance
- Integrate your existing data warehousing and analytics infrastructure into a big data architecture
- Share data among Hadoop and relational databases using Oracle Big Data Connectors
- Understand how Oracle NoSQL Database integrates into the Oracle Big Data architecture
- Deliver faster time to value using in-database analytics
- Analyze data with Oracle Advanced Analytics (Oracle R Enterprise and Oracle Data Mining), Oracle R Distribution, ROracle, and Oracle R Connector for Hadoop
- Analyze disparate data with Oracle Endeca Information Discovery
- Plan and implement a big data governance strategy and develop an architecture and roadmap

Oracle E-Business Suite 12 Tuning Tips & Techniques

Master Oracle E-Business Suite 12 Performance Tuning and Optimization

Deliver on the promise of lower TCO and achieve operational excellence by implementing a comprehensive enterprise application management process. Oracle E-Business Suite 12 Tuning Tips & Techniques offers detailed coverage of the versatile tools, features, and services available for managing application reliability, availability, performance, optimization, and governance. Best practices for maintaining overall application health and supporting evolving priorities, technologies, and systems are also included in this Oracle Press guide.
- Get a comprehensive technical and functional overview of Oracle E-Business Suite 12
- Plan, develop, and implement a management lifecycle strategy
- Execute an effective reliability management solution
- Monitor and maintain availability
- Improve application speed and performance
- Optimize Oracle E-Business Suite for short-term flexibility and long-term strategic goals
- Implement strong application governance processes
- Measure the success and performance of your management plan
- Maintain an agile, future-ready Oracle E-Business Suite platform

Risk Scoring for a Loan Application on IBM System z: Running IBM SPSS Real-Time Analytics

When architecting a solution that involves analytics, the mainframe might not be the first platform that comes to mind. However, the IBM® System z® group has developed some innovative solutions that include the well-respected mainframe benefits. This book describes a workshop that demonstrates the use of real-time advanced analytics for enhancing core banking decisions using a loan origination example. The workshop is a live hands-on experience of the entire process from analytics modeling to deployment of real-time scoring services for use on IBM z/OS®. In this IBM Redbooks® publication, we include a facilitator guide chapter as well as a participant guide chapter. The facilitator guide includes information about the preparation, such as the needed material, resources, and steps to set up and run this workshop. The participant guide shows, step by step, the tasks for a successful learning experience. The goal of the first hands-on exercise is to learn how to use IBM SPSS® Modeler for analytics modeling. This provides the basis for the next exercise, "Configuring risk assessment in SPSS Decision Management". In the third exercise, the participant experiences how real-time scoring can be implemented on a System z. This publication is written for consultants, IT architects, and IT administrators who want to become familiar with SPSS and analytics solutions on the System z.

Discovering Partial Least Squares with JMP

Partial Least Squares (PLS) is a flexible statistical modeling technique that applies to data of any shape. It models relationships between inputs and outputs even when there are more predictors than observations. Using JMP statistical discovery software from SAS, Discovering Partial Least Squares with JMP explores PLS and positions it within the more general context of multivariate analysis.
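The book's case studies are built in JMP, but the core situation described above, more predictors than observations, can be illustrated independently. The following Python sketch (an illustration using scikit-learn's PLSRegression, not material from the book) fits a three-component PLS model to simulated data with 50 predictors and only 20 observations.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# More predictors (50) than observations (20): ordinary least squares is ill-posed
# here, but PLS first projects X onto a small number of latent components.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))
coef = np.zeros(50)
coef[:5] = [2.0, -1.0, 1.5, 0.5, -2.0]          # only the first few predictors matter
y = X @ coef + rng.normal(scale=0.5, size=20)

pls = PLSRegression(n_components=3)             # number of latent factors to extract
pls.fit(X, y)
print("In-sample R^2:", round(pls.score(X, y), 3))
```

Whatever tool you use, choosing the number of latent components to extract is the central modeling decision in PLS.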

Ian Cox and Marie Gaudard use a “learning through doing” style. This approach, coupled with the interactivity that JMP itself provides, allows you to actively engage with the content. Four complete case studies are presented, accompanied by data tables that are available for download. The detailed “how to” steps, together with the interpretation of the results, help to make this book unique.

Discovering Partial Least Squares with JMP is of interest to professionals engaged in continuing development, as well as to students and instructors in a formal academic setting. The content aligns well with topics covered in introductory courses on psychometrics, customer relationship management, market research, consumer research, environmental studies, and chemometrics. The book can also function as a supplement to courses in multivariate statistics and to courses on statistical methods in biology, ecology, chemistry, and genomics.

While the book is helpful and instructive to those who are using JMP, a knowledge of JMP is not required, and little or no prior statistical knowledge is necessary. By working through the introductory chapters and the case studies, you gain a deeper understanding of PLS and learn how to use JMP to perform PLS analyses in real-world situations.

This book motivates current and potential users of JMP to extend their analytical repertoire by embracing PLS. Dynamically interacting with JMP, you will develop confidence as you explore underlying concepts and work through the examples. The authors provide background and guidance to support and empower you on this journey.

This book is part of the SAS Press program.