talk-data.com

Topic: data (5765 tagged activities)
Activity trend: peak 3 per quarter, 2020-Q1 to 2026-Q1

Activities (5765 total, newest first)

Value Realization from Efficient Software Deployment

Unfortunately, purchasing software products does not automatically mean that these products are exploited throughout the organization, providing the maximum possible value to the business units. Several issues call for a structured approach to getting the most business value out of software already purchased. One objective of this approach is to create maximum awareness throughout the organization of the software purchased. Overall, the approach ensures that the business units in an organization obtain the maximum possible value from the software products purchased, which is also the scope of this IBM Redbooks publication.

IBM Cognos 10 Report Studio: Practical Examples

IBM Cognos 10 is the next generation of the leading performance management, analysis, and reporting standard for mid- to large-sized companies. One of the most exciting and useful aspects of IBM Cognos software is its powerful custom report creation capabilities. After learning the basics, report authors in the enterprise need to apply the technology to reports in their actual, complex work environment. This book provides that advanced know-how. Using practical examples based on years of teaching experience as IBM Cognos instructors, the authors provide you with examples of typical advanced reporting designs and complex queries in reports. The reporting solutions in this book can be directly used in a variety of real-world scenarios to provide answers to your business problems today. The complexity of the queries and the application of design principles go well beyond basic course content or introductory books. IBM Cognos 10 Report Studio: Practical Examples will help you find the answers to specific questions based on your data and your business model. It uses a combined tutorial and cookbook approach to show real-world IBM Cognos 10 Report Studio solutions. If you are still using IBM Cognos 8 BI Report Studio, many of the examples have been tested against this platform as well. The final chapter is dedicated to the features that are unique to the latest version of this powerful reporting solution.

Metadata Management with IBM InfoSphere Information Server

What do you know about your data? And how do you know what you know about your data? Information governance initiatives address corporate concerns about the quality and reliability of information in planning and decision-making processes. Metadata management refers to the tools, processes, and environment that are provided so that organizations can reliably and easily share, locate, and retrieve information from these systems. Enterprise-wide information integration projects integrate data from these systems to one location to generate required reports and analysis. During this type of implementation process, metadata management must be provided along each step to ensure that the final reports and analysis are from the right data sources, are complete, and have quality. This IBM® Redbooks® publication introduces the information governance initiative and highlights the immediate needs for metadata management. It explains how IBM InfoSphere™ Information Server provides a single unified platform and a collection of product modules and components so that organizations can understand, cleanse, transform, and deliver trustworthy and context-rich information. It describes a typical implementation process. It explains how InfoSphere Information Server provides the functions that are required to implement such a solution and, more importantly, to achieve metadata management. This book provides business leaders and IT architects with an overview of metadata management in the information integration solution space. It also provides key technical details that IT professionals can use in solution planning, design, and implementation.

IBM Style Guide, The: Conventions for Writers and Editors

The IBM Style Guide distills IBM wisdom for developing superior content: information that is consistent, clear, concise, and easy to translate. The IBM Style Guide can help any organization improve and standardize content across authors, delivery mechanisms, and geographic locations. This expert guide contains practical guidance on topic-based writing, writing content for different media types, and writing for global audiences. Throughout, the authors illustrate the guidance with many examples of correct and incorrect usage. Writers and editors will find authoritative guidance on issues ranging from structuring information to writing usable procedures to presenting web addresses to handling cultural sensitivities. The guidelines cover these topics:
• Using language and grammar to write clearly and consistently
• Applying punctuation marks and special characters correctly
• Formatting, organizing, and structuring information so that it is easy to find and use
• Using footnotes, cross-references, and links to point readers to valuable, related information
• Presenting numerical information clearly
• Documenting computer interfaces to make it easy for users to achieve their goals
• Writing for diverse audiences, including guidelines for improving accessibility
• Preparing clear and effective glossaries and indexes
The IBM Style Guide can help any organization or individual create and manage content more effectively. The guidelines are especially valuable for businesses that have not previously adopted a corporate style guide, for anyone who writes or edits for IBM as an employee or outside contractor, and for anyone who uses modern approaches to information architecture.

Programming Pig

This guide is an ideal learning tool and reference for Apache Pig, the open source engine for executing parallel data flows on Hadoop. With Pig, you can batch-process data without having to create a full-fledged application, making it easy for you to experiment with new datasets. Programming Pig introduces new users to Pig, and provides experienced users with comprehensive coverage of key features such as the Pig Latin scripting language, the Grunt shell, and User Defined Functions (UDFs) for extending Pig. If you need to analyze terabytes of data, this book shows you how to do it efficiently with Pig.
• Delve into Pig’s data model, including scalar and complex data types
• Write Pig Latin scripts to sort, group, join, project, and filter your data
• Use Grunt to work with the Hadoop Distributed File System (HDFS)
• Build complex data processing pipelines with Pig’s macros and modularity features
• Embed Pig Latin in Python for iterative processing and other advanced tasks
• Create your own load and store functions to handle data formats and storage mechanisms
• Get performance tips for running scripts on Hadoop clusters in less time
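For intuition, here is a minimal sketch, in plain Python rather than Pig Latin, of the kind of group-and-count data flow a short Pig script (LOAD, GROUP BY, FOREACH ... GENERATE COUNT) expresses. The record fields and values are illustrative, not from any real dataset; Pig would run the equivalent script in parallel across a Hadoop cluster.

```python
from collections import defaultdict

# A tiny stand-in for data that LOAD would read from HDFS.
records = [
    {"user": "alice", "url": "/home"},
    {"user": "bob", "url": "/home"},
    {"user": "alice", "url": "/cart"},
]

# Roughly what `grouped = GROUP records BY url;` produces: a bag per key.
groups = defaultdict(list)
for rec in records:
    groups[rec["url"]].append(rec)

# Roughly `FOREACH grouped GENERATE group, COUNT(records);`
counts = {url: len(recs) for url, recs in groups.items()}
print(counts)  # {'/home': 2, '/cart': 1}
```

The point of the sketch is only the shape of the flow: Pig Latin describes these relational steps declaratively, and the engine handles distribution.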

SQL Server MVP Deep Dives, Volume 2

SQL Server MVP Deep Dives, Volume 2 lets you learn from the best in the business: 64 SQL Server MVPs offer completely new content in this second volume on topics ranging from testing and policy management to integration services, reporting, performance optimization techniques, and more.

About the Book
To become an MVP requires deep knowledge and impressive skill. Together, the 64 MVPs who wrote this book bring about 1,000 years of experience in SQL Server administration, development, training, and design. This incredible book captures their expertise and passion in 60 concise, hand-picked chapters. SQL Server MVP Deep Dives, Volume 2 picks up where the first volume leaves off, with completely new content on topics ranging from testing and policy management to integration services, reporting, and performance optimization. The chapters fall into five parts: Architecture and Design, Database Administration, Database Development, Performance Tuning and Optimization, and Business Intelligence.

What's Inside
• Discovering servers with PowerShell
• Using regular expressions in SSMS
• Tuning the Transaction Log for OLTP
• Optimizing SSIS for dimensional data
• Real-time BI
• Much more

About the Reader
This unique book is your chance to learn from the best in the business. It offers valuable insights for readers of all levels.

About the Authors
Written by 64 SQL Server MVPs, the chapters were selected and edited by Kalen Delaney and Section Editors Louis Davidson (Architecture and Design), Paul Randal and Kimberly Tripp (Database Administration), Paul Nielsen (Database Development), Brad McGehee (Performance Tuning and Optimization), and Greg Low (Business Intelligence).

SAP Applications on IBM PowerVM

IBM® invented virtualization technology on the mainframe in the 1960s; since then, the functionality has evolved, been ported to other platforms, and improved reliability, availability, and serviceability (RAS) features. With virtualization, you achieve better asset utilization, reduced operating costs, and faster responsiveness to changing business demands. Each technology vendor in the SAP ecosystem understands virtualization slightly differently, as different capabilities at different levels (storage and server hardware, processor, memory, I/O resources, or the application, and so on). It is important to understand exactly what functionality is offered and how it supports the client’s business requirements. In this IBM Redbooks® publication we focus on server virtualization technologies in the IBM Power Systems™ hardware, AIX®, IBM i, and Linux space and what they mean specifically for SAP applications running on this platform. SAP clients can leverage the technology that the IBM Power Systems platform offers. In this book, we describe the technologies and functions, what they mean, and how they apply to the SAP system landscape.

IBM InfoSphere Streams: Assembling Continuous Insight in the Information Revolution

In this IBM® Redbooks® publication, we discuss and describe the positioning, functions, capabilities, and advanced programming techniques for IBM InfoSphere™ Streams (V2), a new paradigm and key component of the IBM Big Data platform. Data has traditionally been stored in files or databases, and then analyzed by queries and applications. With stream computing, analysis is performed moment by moment as the data is in motion. In fact, the data might never be stored (perhaps only the analytic results). The ability to analyze data in motion is called real-time analytic processing (RTAP). IBM InfoSphere Streams takes a fundamentally different approach to Big Data analytics and differentiates itself with its distributed runtime platform, programming model, and tools for developing and debugging analytic applications that have a high volume and variety of data types. Using in-memory techniques and analyzing record by record enables high velocity. Volume, variety, and velocity are the key attributes of Big Data. The data streams that are consumable by IBM InfoSphere Streams can originate from sensors, cameras, news feeds, stock tickers, and a variety of other sources, including traditional databases. InfoSphere Streams provides an execution platform and services for applications that ingest, filter, analyze, and correlate potentially massive volumes of continuous data streams. This book is intended for professionals who require an understanding of how to process high volumes of streaming data or need information about how to implement systems to satisfy those requirements. See http://www.redbooks.ibm.com/abstracts/sg247865.html for the IBM InfoSphere Streams (V1) release.
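The record-by-record, data-in-motion idea can be sketched with a few lines of Python generators. This is a hedged illustration of the stream-computing concept only, not InfoSphere Streams' actual SPL language or API; the ticker symbols and threshold are made up.

```python
def ticks():
    # Stand-in for a live stock-ticker source; a real stream never "ends".
    yield from [("IBM", 130.0), ("IBM", 131.5), ("ACME", 9.9), ("IBM", 129.0)]

def over_threshold(stream, symbol, limit):
    # Analyze each record as it arrives; keep only the analytic result,
    # never the raw stream (the "data might never be stored" idea).
    for sym, price in stream:
        if sym == symbol and price > limit:
            yield sym, price

alerts = list(over_threshold(ticks(), "IBM", 130.0))
print(alerts)  # [('IBM', 131.5)]
```

In a real streaming platform the same filter would run continuously over an unbounded source, distributed across a cluster; the generator pipeline only conveys the moment-by-moment processing model.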

Hacking Healthcare

Ready to take your IT skills to the healthcare industry? This concise book provides a candid assessment of the US healthcare system as it ramps up its use of electronic health records (EHRs) and other forms of IT to comply with the government’s Meaningful Use requirements. It’s a tremendous opportunity for tens of thousands of IT professionals, but it’s also a huge challenge: the program requires a complete makeover of archaic records systems, workflows, and other practices now in place. This book points out how hospitals and doctors’ offices differ from other organizations that use IT, and explains what’s necessary to bridge the gap between clinicians and IT staff.
• Get an overview of EHRs and the differences among medical settings
• Learn the variety of ways institutions deal with patients and medical staff, and how workflows vary
• Discover healthcare’s dependence on paper records, and the problems involved in migrating them to digital documents
• Understand how providers charge for care, and how they get paid
• Explore how patients can use EHRs to participate in their own care
• Examine healthcare’s most pressing problem, avoidable errors, and how EHRs can both help and exacerbate it

Implementing Imaging Solutions with IBM Production Imaging Edition and IBM Datacap Taskmaster Capture

Organizations face many challenges in managing documents that they need to conduct their business. IBM® Production Imaging Edition V5.0 is the comprehensive product that combines imaging, capture, and automation to provide the capabilities to process and manage high volumes of document imaging over their entire life cycle. This IBM Redbooks® publication introduces Production Imaging Edition, its components, the system architecture, its functions, and its capabilities. It primarily focuses on IBM Datacap Taskmaster Capture V8.0, including how it works, how to design a document image capture solution, and how to implement the solution using Datacap Studio. Datacap Studio is a development tool that designers use to create rules and rule sets, configure a document hierarchy and task profiles, and set up a verification panel for image verification. This book highlights the advanced technologies that are used to create dynamic applications, such as IBM Taskmaster Accounts Payable Capture. It includes an in-depth walkthrough of the dynamic application, Taskmaster Accounts Payable Capture, which provides invaluable insight to designers in developing and customizing their applications. In addition, this book includes information about high availability, scalability, performance, and backup and recovery options for the document imaging solution. It provides known best practices and recommendations for designing and implementing such a solution. This book is for IT architects and professionals who are responsible for creating, improving, designing, and implementing document imaging solutions for their organizations.

The Art of R Programming

R is the world's most popular language for developing statistical software: Archaeologists use it to track the spread of ancient civilizations, drug companies use it to discover which medications are safe and effective, and actuaries use it to assess financial risks and keep economies running smoothly. The Art of R Programming takes you on a guided tour of software development with R, from basic types and data structures to advanced topics like closures, recursion, and anonymous functions. No statistical knowledge is required, and your programming skills can range from hobbyist to pro. Along the way, you'll learn about functional and object-oriented programming, running mathematical simulations, and rearranging complex data into simpler, more useful formats. You'll also learn to:
• Create artful graphs to visualize complex data sets and functions
• Write more efficient code using parallel R and vectorization
• Interface R with C/C++ and Python for increased speed or functionality
• Find new R packages for text analysis, image manipulation, and more
• Squash annoying bugs with advanced debugging techniques
Whether you're designing aircraft, forecasting the weather, or you just need to tame your data, The Art of R Programming is your guide to harnessing the power of statistical computing.
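The advanced topics the blurb names, closures and anonymous functions, are language-general ideas; the book of course demonstrates them in R. As a hedged, language-agnostic sketch of both concepts, here they are in Python:

```python
def make_counter():
    # A closure: `bump` captures and mutates `count` from the enclosing scope.
    count = 0
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

counter = make_counter()
counter()
counter()
print(counter())  # 3  (state persists across calls)

# An anonymous function (R's `function(x) x * x` is the direct analogue).
square = lambda x: x * x
print(square(4))  # 16
```

In R the same counter would use `<<-` to assign into the enclosing environment; the underlying idea, a function bundled with captured state, is identical.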

IBM zEnterprise 114 Technical Guide

The popularity of the Internet and the affordability of IT hardware and software have resulted in an explosion of applications, architectures, and platforms. Workloads have changed. Many applications, including mission-critical ones, are deployed on a variety of platforms, and the System z® design has adapted to this change. It takes into account a wide range of factors, including compatibility and investment protection, to match the IT requirements of an enterprise. This IBM® Redbooks® publication discusses the IBM zEnterprise System, an IBM scalable mainframe server. IBM is taking a revolutionary approach by integrating separate platforms under the well-proven System z hardware management capabilities, while extending System z qualities of service to those platforms. The zEnterprise System consists of the IBM zEnterprise 114 central processor complex, the IBM zEnterprise Unified Resource Manager, and the IBM zEnterprise BladeCenter® Extension. The z114 is designed with improved scalability, performance, security, resiliency, availability, and virtualization. The z114 provides up to 18% improvement in uniprocessor speed and up to a 12% increase in total system capacity for z/OS®, z/VM®, and Linux on System z over the z10™ Business Class (BC). The zBX infrastructure works with the z114 to enhance System z virtualization and management through an integrated hardware platform that spans mainframe, POWER7™, and System x technologies. The federated capacity from multiple architectures of the zEnterprise System is managed as a single pool of resources, integrating system and workload management across the environment through the Unified Resource Manager. This book provides an overview of the zEnterprise System and its functions, features, and associated software support. Greater detail is offered in areas relevant to technical planning. 
This book is intended for systems engineers, consultants, planners, and anyone wanting to understand the zEnterprise System functions and plan for their usage. It is not intended as an introduction to mainframes. Readers are expected to be generally familiar with existing IBM System z technology and terminology.

Fundamentals of Stochastic Networks

An interdisciplinary approach to understanding queueing and graphical networks. In today's era of interdisciplinary studies and research activities, network models are becoming increasingly important in various areas where they have not regularly been used. Combining techniques from stochastic processes and graph theory to analyze the behavior of networks, Fundamentals of Stochastic Networks provides an interdisciplinary approach by including practical applications of these stochastic networks in various fields of study, from engineering and operations management to communications and the physical sciences. The author uniquely unites different types of stochastic, queueing, and graphical networks that are typically studied independently of each other. With balanced coverage, the book is organized into three succinct parts:
• Part I introduces basic concepts in probability and stochastic processes, with coverage of counting, Poisson, renewal, and Markov processes
• Part II addresses basic queueing theory, with a focus on Markovian queueing systems, and also explores advanced queueing theory, queueing networks, and approximations of queueing networks
• Part III focuses on graphical models, presenting an introduction to graph theory along with Bayesian, Boolean, and random networks
The author presents the material in a self-contained style that helps readers apply the presented methods and techniques to science and engineering applications. Numerous practical examples are also provided throughout, including all related mathematical details. Featuring basic results without heavy emphasis on proving theorems, Fundamentals of Stochastic Networks is a suitable book for courses on probability and stochastic networks, stochastic network calculus, and stochastic network optimization at the upper-undergraduate and graduate levels.
The book also serves as a reference for researchers and network professionals who would like to learn more about the general principles of stochastic networks.
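To give a flavor of the Markovian queueing results such a text covers, the classic M/M/1 queue (Poisson arrivals at rate $\lambda$, exponential service at rate $\mu$, utilization $\rho = \lambda/\mu < 1$) has these standard steady-state formulas:

```latex
% Steady-state probability of n customers in the system:
P_n = (1 - \rho)\,\rho^{n}, \qquad \rho = \frac{\lambda}{\mu} < 1
% Mean number in system, and mean time in system via Little's law (L = \lambda W):
L = \frac{\rho}{1 - \rho}, \qquad W = \frac{1}{\mu - \lambda}
```

These are textbook results, not taken from this particular book; they illustrate the kind of "basic results without heavy emphasis on proving theorems" the blurb describes.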

Better Business Decisions Using Cost Modeling

Information is power in supply chain operations, negotiations, continuous improvement programs, and process improvement, and indeed in all aspects of managing an operation. Accurate and timely information can result in better decisions that translate into improvement of bottom line results. The development and effective use of cost modeling as a method to understand the cost of products, services, and processes can help drive improvements in the quality and timeliness of decision making. In the supply chain community, an understanding of the actual cost structures of products and services, whether with new or non-partner suppliers, can facilitate fact-based discussions which are more likely to result in agreements that are competitively priced and with fair margins. Further, accurate cost models which are cooperatively developed between supply chain partners can form the basis for joint efforts to reduce non-value-added costs and provide additional focus towards operational improvement. While many organizations feel confident they have an understanding of the cost structure for products and services produced internally, cost modeling often uncovers areas where significant cost improvement can be obtained. Cost of quality is a particular type of internal cost model that analyzes the true costs associated with the production of less than perfect products and services. The development of a cost of quality model can provide insight into how products or services of higher quality can be produced at lower cost. This book provides the business student or professional a concise guide to the creation and effective use of both internal and external cost models. Development of internal cost models is discussed with illustrations showing how they can be deployed to assist in new product development, pricing decisions, make-or-buy decisions, and the identification of opportunities for internal process improvement projects.
The creation and use of external cost models are also discussed, providing insight into how they can drive collaborative improvement efforts among supply chain partners, better prepare teams for price negotiations, and keep negotiations focused on facts rather than emotions, all while allowing future discussions with preferred suppliers to focus more on strategic and operational improvement initiatives and less on pricing. A number of detailed cost model examples are provided to show both how cost models are constructed and how they have been effectively deployed.

Getting Started with RStudio

Dive into the RStudio Integrated Development Environment (IDE) for using and programming R, the popular open source software for statistical computing and graphics. This concise book provides new and experienced users with an overview of RStudio, as well as hands-on instructions for analyzing data, generating reports, and developing R software packages. The open source RStudio IDE brings many powerful coding tools together into an intuitive, easy-to-learn interface. With this guide, you’ll learn how to use its main components, including the console, source code editor, and data viewer, through descriptions and case studies. Getting Started with RStudio serves as both a reference and introduction to this unique IDE.
• Use RStudio to provide enhanced support for interactive R sessions
• Clean and format raw data quickly with several RStudio components
• Edit R commands with RStudio’s code editor, and combine them into functions
• Easily locate and use more than 3,000 add-on packages from the CRAN repository
• Develop and document your own R packages with the code editor and related components
• Create one-click PDF reports in RStudio with a mix of text and R output

Privacy and Big Data

Much of what constitutes Big Data is information about us. Through our online activities, we leave an easy-to-follow trail of digital footprints that reveal who we are, what we buy, where we go, and much more. This eye-opening book explores the raging privacy debate over the use of personal data, with one undeniable conclusion: once data's been collected, we have absolutely no control over who uses it or how it is used. Personal data is the hottest commodity on the market today, truly more valuable than gold. We are the asset that every company, industry, non-profit, and government wants. Privacy and Big Data introduces you to the players in the personal data game, and explains the stark differences in how the U.S., Europe, and the rest of the world approach the privacy issue. You'll learn about:
• Collectors: social networking titans that collect, share, and sell user data
• Users: marketing organizations, government agencies, and many others
• Data markets: companies that aggregate and sell datasets to anyone
• Regulators: governments with one policy for commercial data use, and another for providing security

Practical Image and Video Processing Using MATLAB®

Up-to-date, technically accurate coverage of essential topics in image and video processing This is the first book to combine image and video processing with a practical MATLAB®-oriented approach in order to demonstrate the most important image and video techniques and algorithms. Utilizing minimal math, the contents are presented in a clear, objective manner, emphasizing and encouraging experimentation. The book has been organized into two parts. Part I: Image Processing begins with an overview of the field, then introduces the fundamental concepts, notation, and terminology associated with image representation and basic image processing operations. Next, it discusses MATLAB® and its Image Processing Toolbox with the start of a series of chapters with hands-on activities and step-by-step tutorials. These chapters cover image acquisition and digitization; arithmetic, logic, and geometric operations; point-based, histogram-based, and neighborhood-based image enhancement techniques; the Fourier Transform and relevant frequency-domain image filtering techniques; image restoration; mathematical morphology; edge detection techniques; image segmentation; image compression and coding; and feature extraction and representation. Part II: Video Processing presents the main concepts and terminology associated with analog video signals and systems, as well as digital video formats and standards. It then describes the technically involved problem of standards conversion, discusses motion estimation and compensation techniques, shows how video sequences can be filtered, and concludes with an example of a solution to object detection and tracking in video sequences using MATLAB®. 
Extra features of this book include:
• More than 30 MATLAB® tutorials, which consist of step-by-step guides to exploring image and video processing techniques using MATLAB®
• Chapters supported by figures, examples, illustrative problems, and exercises
• Useful websites and an extensive list of bibliographical references
This accessible text is ideal for upper-level undergraduate and graduate students in digital image and video processing courses, as well as for engineers, researchers, software developers, practitioners, and anyone who wishes to learn about these increasingly popular topics on their own.

SAP NetWeaver MDM 7.1 Administrator's Guide

SAP NetWeaver MDM 7.1 Administrator's Guide acts as a complete resource for mastering the administration and configuration of SAP's Master Data Management solution: NetWeaver MDM 7.1. With a hands-on and practical approach, this book connects theoretical understanding with real-world application, tailored specifically for MDM administrators.

What this book will help me do
• Understand the core concepts and business scenarios associated with SAP NetWeaver MDM.
• Master the configuration of the MDM Console, servers, repositories, and the underlying database.
• Learn to maintain repository integrity through backup, restore, and management techniques.
• Automate data operations like importing and syndicating through MDM tools.
• Grasp the integration aspects of MDM with other SAP NetWeaver components.

Author(s)
Uday Rao is an experienced administrator and consultant in SAP systems, specializing in Master Data Management. With years of field experience, Uday brings deep technical insights combined with an approach that simplifies complex administration tasks. His guide emphasizes practical scenarios with step-by-step instructions that empower SAP professionals.

Who is it for?
This book is ideal for SAP administrators aiming to specialize in Master Data Management with NetWeaver MDM. It targets professionals with foundational knowledge in SAP who are looking to gain expertise in configuring and managing MDM systems. Novices in SAP MDM can still benefit from the guide's structured approach. Whether you're managing corporate data systems or overseeing MDM projects, this guide aligns with your goals.

Designing Data Visualizations

Data visualization is an efficient and effective medium for communicating large amounts of information, but the design process can often seem like an unexplainable creative endeavor. This concise book aims to demystify the design process by showing you how to use a linear decision-making process to encode your information visually. Delve into different kinds of visualization, including infographics and visual art, and explore the influences at work in each one. Then learn how to apply these concepts to your design process.
• Learn data visualization classifications, including explanatory, exploratory, and hybrid
• Discover how three fundamental influences, the designer, the reader, and the data, shape what you create
• Learn how to describe the specific goal of your visualization and identify the supporting data
• Decide the spatial position of your visual entities with axes
• Encode the various dimensions of your data with appropriate visual properties, such as shape and color
• See visualization best practices and suggestions for encoding various specific data types

MongoDB and Python

Learn how to leverage MongoDB with your Python applications, using the hands-on recipes in this book. You get complete code samples for tasks such as making fast geo queries for location-based apps, efficiently indexing your user documents for social-graph lookups, and many other scenarios. This guide explains the basics of the document-oriented database and shows you how to set up a Python environment with it. Learn how to read and write to MongoDB, apply idiomatic MongoDB and Python patterns, and use the database with several popular Python web frameworks. You’ll discover how to model your data, write effective queries, and avoid concurrency problems such as race conditions and deadlocks. The recipes will help you:
• Read, write, count, and sort documents in a MongoDB collection
• Learn how to use the rich MongoDB query language
• Maintain data integrity in replicated/distributed MongoDB environments
• Use embedding to efficiently model your data without joins
• Code defensively to avoid KeyErrors and other bugs
• Apply atomic operations to update game scores, billing systems, and more with the fast accounting pattern
• Use MongoDB with the Pylons 1.x, Django, and Pyramid web frameworks
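As a minimal sketch of the atomic game-score update mentioned above: in MongoDB an increment is expressed as a `$inc` update operator, which the server applies atomically. With a live connection you would hand these documents to PyMongo's `collection.update_one(filter, update)`; here we only construct them and mimic the server-side effect in plain Python, since no database is assumed. The collection and field names (`player`, `points`) are illustrative.

```python
# Filter selects the player's score document; $inc adds 25 points atomically
# on the server, avoiding the read-modify-write race a client-side update has.
filter_doc = {"player": "alice"}
update_doc = {"$inc": {"points": 25}}

def apply_inc(document, update):
    # Tiny stand-in for what the server does with $inc, for intuition only.
    result = dict(document)
    for field, delta in update["$inc"].items():
        result[field] = result.get(field, 0) + delta
    return result

doc = {"player": "alice", "points": 100}
print(apply_inc(doc, update_doc))  # {'player': 'alice', 'points': 125}
```

The design point is that the increment is a single server-side operation: two concurrent `$inc` updates both land, whereas two concurrent "read, add, write back" cycles can silently lose one.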