talk-data.com

Topic: data — 5765 tagged activities
Activity trend: 2020-Q1 to 2026-Q1 (peak 3/qtr)

Activities (5765 · newest first)

Using IBM CICS Transaction Server Channels and Containers

This IBM® Redbooks® publication describes the new channels and containers support in IBM Customer Information Control System (CICS®) Transaction Server V5.2. The book begins with an overview of the techniques used to pass data between applications running in CICS. It describes the constraints that these data-passing techniques might be subject to, and how a channels and containers solution can provide solid advantages alongside them. These capabilities enable CICS to fully comply with emerging technology requirements in terms of sizing and flexibility. The book then goes on to describe application design, and looks at implementing channels and containers from an application programmer's point of view. It provides examples to show how to evolve channels and containers from communication areas (COMMAREAs). Next, the book explains the channels and containers application programming interface (API). It also describes how this API can be used in both traditional CICS applications and Java CICS (JCICS) applications. The business transaction services (BTS) API is presented as a similar yet recoverable alternative to channels and containers. Some authorized program analysis reports (APARs) are introduced, which enable more flexible web services features by using channels and containers. The book also presents information from a systems management point of view, describing the systems management and configuration tasks and techniques that you must consider when implementing a channels and containers solution. Finally, using the CICS catalog manager sample application, the book describes how you can port an existing CICS application to use channels and containers rather than COMMAREAs.

Expert T-SQL Window Functions in SQL Server

Expert T-SQL Window Functions in SQL Server takes you from any level of knowledge of windowing functions and turns you into an expert who can use these powerful functions to solve many T-SQL queries. Replace slow cursors and self-joins with queries that are easy to write and fantastically better performing, all through the magic of window functions. First introduced in SQL Server 2005, window functions came into full blossom with SQL Server 2012. They truly are one of the most notable developments in SQL in a decade, and every developer and DBA can benefit from their expressive power in solving day-to-day business problems. Begin using windowing functions like ROW_NUMBER and LAG, and you will discover more ways to use them every day. You will approach SQL Server queries in a different way, thinking about sets of data instead of individual rows. Your queries will run faster, they will be easier to write, and they will be easier to deconstruct, maintain, and enhance in the future. Just knowing and using these functions is not enough. You also need to understand how to tune the queries. Expert T-SQL Window Functions in SQL Server explains clearly how to get the best performance. The book also covers the rare cases when older techniques are the best bet. Stop using cursors and self-joins to solve complicated queries. Become a T-SQL expert by mastering windowing functions. Teaches you how to use all the window functions introduced in 2005 and 2012. Provides real-world examples that you can experiment with in your own database. Explains how to get the best performance when using windowing functions.
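A minimal sketch of the two window functions the blurb names, ROW_NUMBER and LAG. It runs here against SQLite via Python's stdlib (SQLite has supported window functions since 3.25); the T-SQL syntax is essentially the same. The table and column names are illustrative, not from the book.

```python
import sqlite3

# In-memory table of sales per rep per month (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (rep TEXT, month INTEGER, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("ann", 1, 100), ("ann", 2, 150), ("bob", 1, 90), ("bob", 2, 80)],
)

# ROW_NUMBER numbers each rep's months; LAG looks back one row within
# the same partition, giving the month-over-month change without a self-join.
rows = conn.execute("""
    SELECT rep, month, amount,
           ROW_NUMBER() OVER (PARTITION BY rep ORDER BY month) AS rn,
           amount - LAG(amount) OVER (PARTITION BY rep ORDER BY month) AS delta
    FROM sales
    ORDER BY rep, month
""").fetchall()

for row in rows:
    print(row)
```

The first row of each partition has no previous row, so its `delta` is NULL, which is exactly the set-based thinking the book teaches in place of cursor loops.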

IBM DS8870 Architecture and Implementation

This IBM® Redbooks® publication describes the concepts, architecture, and implementation of the IBM DS8870. The book provides reference information to assist readers who need to plan for, install, and configure the DS8870. The IBM DS8870 is the most advanced model in the IBM DS8000 series and is equipped with IBM POWER7+™ based controllers. Various configuration options are available that scale from dual 2-core systems up to dual 16-core systems with up to 1 TB of cache. The DS8870 features an integrated high-performance flash enclosure with flash cards that can deliver up to 250,000 IOPS and up to 3.4 GBps bandwidth. A High-Performance All-Flash configuration is also available. The DS8870 also features enhanced 8 Gbps device adapters and host adapters. Connectivity options, with up to 128 Fibre Channel/IBM FICON® ports for host connections, make the DS8870 suitable for multiple server environments in open systems and IBM System z® environments. The DS8870 supports advanced disaster recovery solutions, business continuity solutions, and thin provisioning. All disk drives in the DS8870 storage system have the Full Disk Encryption (FDE) feature. The DS8870 also can be integrated in a Lightweight Directory Access Protocol (LDAP) infrastructure. The DS8870 can automatically optimize the use of each storage tier, particularly flash drives and flash cards, through the IBM Easy Tier® feature, which is available at no extra charge. This edition applies to Version 7, release 4 of IBM DS8870.

Centrally Managing Access to Self-Encrypting Drives in Lenovo System x Servers Using IBM Security Key Lifecycle Manager

Data security is one of the paramount requirements for organizations of all sizes. Although many companies have invested heavily in protection from network-based attacks and other threats, few effective safeguards are available to protect against the potentially costly exposure of proprietary data that results from a hard disk drive being stolen, misplaced, retired, or redeployed. Self-encrypting drives (SEDs) can satisfy this need by providing the ultimate in security for data-at-rest and can help reduce IT drive retirement costs in the data center. Self-encrypting drives are also an excellent choice if you must comply with government or industry regulations for data privacy and encryption. To effectively manage a large deployment of SEDs in Lenovo® System x® servers, an organization must rely on a centralized key management solution. This IBM Redbooks® publication explains the technology behind SEDs and demonstrates how to deploy a key management solution that uses IBM Security Key Lifecycle Manager and how to properly set up your System x servers.

Data Mining and Predictive Analytics, 2nd Edition

Learn methods of data analysis and their application to real-world data sets. This updated second edition serves as an introduction to data mining methods and models, including association rules, clustering, neural networks, logistic regression, and multivariate analysis. The authors apply a unified "white box" approach to data mining methods and models. This approach is designed to walk readers through the operations and nuances of the various methods, using small data sets, so readers can gain insight into the inner workings of the method under review. Chapters provide readers with hands-on analysis problems, representing an opportunity for readers to apply their newly acquired data mining expertise to solving real problems using large, real-world data sets. Data Mining and Predictive Analytics, Second Edition: Offers comprehensive coverage of association rules, clustering, neural networks, logistic regression, multivariate analysis, and the R statistical programming language Features over 750 chapter exercises, allowing readers to assess their understanding of the new material Provides a detailed case study that brings together the lessons learned in the book Includes access to the companion website, www.dataminingconsultant, with exclusive password-protected instructor content Data Mining and Predictive Analytics, Second Edition will appeal to computer science and statistics students, as well as students in MBA programs, and chief executives.
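In the spirit of the book's "white box" approach, here is a minimal sketch of one method it covers, k-means clustering, on a tiny 1-D data set so each step is visible. The data, the number of clusters, and the starting centers are illustrative, not from the book.

```python
def kmeans_1d(points, centers, iters=10):
    """Plain k-means on 1-D data: assign, then update, repeated."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: each center moves to the mean of its cluster
        # (an empty cluster keeps its old center).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
print(kmeans_1d(data, centers=[0.0, 10.0]))  # converges near [1.0, 8.0]
```

With two obvious groups in the data, the centers settle on the two group means within a single pass, which is exactly the kind of small worked example the white-box approach uses to expose a method's inner workings.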

IBM DS8870 Copy Services for Open Systems

This IBM® Redbooks® publication helps you plan, install, tailor, configure, and manage Copy Services for Open Systems environments on the IBM DS8870. This book helps you design and implement a new Copy Services installation or migrate from an existing installation. It includes hints and tips to maximize the effectiveness of your installation, and information about tools and products to automate Copy Services functions. It is intended for anyone who needs a detailed and practical understanding of the DS8870 Copy Services. There is a companion book that supports the configuration of the Copy Services functions in an IBM z/OS® environment, IBM System Storage DS8000 Copy Services for IBM z Systems™, SG24-6787.

Knowledge Discovery Process and Methods to Enhance Organizational Performance

Although the terms "data mining" and "knowledge discovery and data mining" (KDDM) are sometimes used interchangeably, data mining is actually just one step in the KDDM process. Data mining is the process of extracting useful information from data, while KDDM is the coordinated process of understanding the business and mining the data in order to identify previously unknown patterns. Knowledge Discovery Process and Methods to Enhance Organizational Performance explains the knowledge discovery and data mining (KDDM) process in a manner that makes it easy for readers to implement. Sharing the insights of international KDDM experts, it details powerful strategies, models, and techniques for managing the full cycle of knowledge discovery projects. The book supplies a process-centric view of how to implement successful data mining projects through the use of the KDDM process. It discusses the implications of data mining, including security, privacy, and ethical and legal considerations. Provides an introduction to KDDM, including the various models adopted in academia and industry Details critical success factors for KDDM projects as well as the impact of poor-quality data or inaccessibility to data on KDDM projects Proposes the use of hybrid approaches that couple data mining with other analytic techniques (e.g., data envelopment analysis, cluster analysis, and neural networks) to derive greater value and utility Demonstrates the applicability of the KDDM process beyond analytics Shares experiences of implementing and applying various stages of the KDDM process in organizations The book includes case study examples of KDDM applications in business and government. After reading this book, you will understand the critical success factors required to develop robust data mining objectives that are in alignment with your organization's strategic business objectives.

Probabilities: The Little Numbers That Rule Our Lives, 2nd Edition

Praise for the First Edition "If there is anything you want to know, or remind yourself, about probabilities, then look no further than this comprehensive, yet wittily written and enjoyable, compendium of how to apply probability calculations in real-world situations." - Keith Devlin, Stanford University, National Public Radio's "Math Guy" and author of The Math Gene and The Unfinished Game From probable improbabilities to regular irregularities, Probabilities: The Little Numbers That Rule Our Lives, Second Edition investigates the often surprising effects of risk and chance in our lives. Featuring a timely update, the Second Edition continues to be the go-to guidebook for an entertaining presentation on the mathematics of chance and uncertainty. The new edition develops the fundamental mathematics of probability in a unique, clear, and informal way so readers with various levels of experience with probability can understand the little numbers found in everyday life. Illustrating the concepts of probability through relevant and engaging real-world applications, the Second Edition features numerous examples on weather forecasts, DNA evidence, games and gambling, and medical testing. The revised edition also includes: The application of probability in finance, such as option pricing The introduction of branching processes and the extinction of family names An extended discussion on opinion polls and Nate Silver's election predictions Probabilities: The Little Numbers That Rule Our Lives, Second Edition is an ideal reference for anyone who would like to obtain a better understanding of the mathematics of chance, as well as a useful supplementary textbook for students in any course dealing with probability.
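The blurb mentions medical testing as one of the book's real-world applications; a worked version of that classic example shows why the little numbers surprise. Bayes' theorem turns a positive result on an accurate test into a small posterior probability when the condition is rare. The rates below are illustrative, not taken from the book.

```python
# Illustrative rates: a rare condition and a fairly accurate test.
prevalence = 0.001          # 1 in 1,000 people has the condition
sensitivity = 0.99          # P(positive | condition)
false_positive_rate = 0.05  # P(positive | no condition)

# Total probability of testing positive, by the law of total probability.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: probability of the condition given a positive test.
p_condition_given_positive = sensitivity * prevalence / p_positive

print(round(p_condition_given_positive, 4))  # 0.0194
```

Despite a 99%-sensitive test, a positive result here means less than a 2% chance of actually having the condition, because the false positives from the healthy majority swamp the true positives from the rare affected group.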

Hadoop Virtualization

Hadoop was built to use local data storage on a dedicated group of commodity hardware, but many organizations are choosing to save money (and operational headaches) by running Hadoop in the cloud. This O'Reilly report focuses on the benefits of deploying Hadoop to a private cloud environment, and provides an overview of best practices to maximize performance. Private clouds provide lower capital expenses than on-site clusters and offer lower operating expenses than public cloud deployment. Author Courtney Webster shows you what's involved in Hadoop virtualization, and how you can efficiently plan a private cloud deployment. Topics include: How Hadoop virtualization offers scalable capability for future growth and minimal downtime Why a private cloud offers unique benefits with comparable (and even improved) performance How you can literally set up Hadoop in a private cloud in minutes How aggregation can be used on top of (or instead of) virtualization Which resources and practices are best for a private cloud deployment How cloud-based management tools lower the complexity of initial configuration and maintenance

Mastering R for Quantitative Finance

Dive deeply into the quantitative finance domain using R with 'Mastering R for Quantitative Finance.' Through this book, you'll explore advanced R programming techniques tailored to financial modeling, risk assessment, and trading strategy optimization. This comprehensive guide aims to equip you with the tools to build practical quantitative finance solutions. What this book will help me do Analyze detailed financial data using R and quantitative techniques. Develop predictive models for time series and risk management. Implement advanced trading strategies tailored to current market conditions. Master simulation techniques for scenarios without analytical solutions. Evaluate portfolio risks and potential returns with advanced methods. Author(s) Gabler is a seasoned expert in quantitative finance and R programming, bringing years of practical experience to this book. Her approach combines theoretical depth with practical examples to ensure readers can apply the learned concepts in real-world financial contexts. Her passion for teaching and clear writing style make complex topics accessible to both practitioners and learners. Who is it for? This book is for financial professionals and data scientists seeking to delve into quantitative finance using R. Ideal readers are familiar with the basics of economics and statistics and are looking to apply advanced analytics in finance. If you are aiming to refine your modeling skills or develop precise strategies, this book is tailored for you. It's perfect for those eager to bridge the gap between theory and practical application.
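The book works in R; as an equivalent sketch of one technique it covers, simulation for scenarios without analytical solutions, here is a Monte Carlo pricer for a European call under geometric Brownian motion, written in Python with only the standard library. All parameter values are illustrative.

```python
import math
import random

def mc_call_price(s0, strike, rate, sigma, t, n_paths, seed=42):
    """Monte Carlo price of a European call: average the discounted payoff
    over simulated terminal prices of a geometric Brownian motion."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    total_payoff = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)              # one terminal draw per path
        st = s0 * math.exp(drift + vol * z)  # terminal asset price
        total_payoff += max(st - strike, 0.0)
    return math.exp(-rate * t) * total_payoff / n_paths

# At-the-money call; the Black-Scholes closed form gives about 10.45,
# and the simulation should land close to it.
print(mc_call_price(s0=100, strike=100, rate=0.05, sigma=0.2, t=1.0,
                    n_paths=50_000))
```

Because a closed form exists for this payoff, it serves as a check on the simulation; the same machinery then prices payoffs with no analytical solution by only changing the payoff line.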

Big Data

Convert the promise of big data into real-world results There is so much buzz around big data. We all need to know what it is and how it works - that much is obvious. But is a basic understanding of the theory enough to hold your own in strategy meetings? Probably. But what will set you apart from the rest is actually knowing how to USE big data to get solid, real-world business results - and putting that in place to improve performance. Big Data will give you a clear understanding, blueprint, and step-by-step approach to building your own big data strategy. This is a much-needed practical introduction to actually putting the topic into practice. Illustrated with numerous real-world examples from a cross section of companies and organisations, Big Data will take you through the five steps of the SMART model: Start with Strategy, Measure Metrics and Data, Apply Analytics, Report Results, Transform. Discusses how companies need to clearly define what it is they need to know Outlines how companies can collect relevant data and measure the metrics that will help them answer their most important business questions Addresses how the results of big data analytics can be visualised and communicated to ensure key decision-makers understand them Includes many high-profile case studies from the author's work with some of the world's best known brands

Business Intelligence with SQL Server Reporting Services

Business Intelligence with SQL Server Reporting Services helps you deliver business intelligence with panache. Harness the power of the Reporting Services toolkit to combine charts, gauges, sparklines, indicators, and maps into compelling dashboards and scorecards. Create compelling visualizations that seize your audience's attention and help business users identify and react swiftly to changing business conditions. Best of all, you'll do all these things by creating new value from software that is already installed and paid for - SQL Server and the included SQL Server Reporting Services. Businesses run on numbers, and good business intelligence systems make the critical numbers immediately and conveniently accessible. Business users want access to key performance indicators in the office, at the beach, and while riding the subway home after a day's work. Business Intelligence with SQL Server Reporting Services helps you meet this need for anywhere/anytime access by including chapters specifically showing how to deliver on modern devices such as smartphones and tablets. You'll learn to deliver the same information, with a similar look-and-feel, across the entire range of devices used in business today. Key performance indicators give fast notification of business unit performance Polished dashboards deliver essential metrics and strategic comparisons Visually arresting output on multiple devices focuses attention

Data Science For Dummies

Discover how data science can help you gain in-depth insight into your business - the easy way! Jobs in data science abound, but few people have the data science skills needed to fill these increasingly important roles in organizations. Data Science For Dummies is the perfect starting point for IT professionals and students interested in making sense of their organization's massive data sets and applying their findings to real-world business scenarios. From uncovering rich data sources to managing large amounts of data within hardware and software limitations, ensuring consistency in reporting, merging various data sources, and beyond, you'll develop the know-how you need to effectively interpret data and tell a story that can be understood by anyone in your organization. Provides a background in data science fundamentals before moving on to working with relational databases and unstructured data and preparing your data for analysis Details different data visualization techniques that can be used to showcase and summarize your data Explains both supervised and unsupervised machine learning, including regression, model validation, and clustering techniques Includes coverage of big data processing tools like MapReduce, Hadoop, Dremel, Storm, and Spark It's a big, big data world out there - let Data Science For Dummies help you harness its power and gain a competitive edge for your organization.
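Among the big data processing tools the blurb lists is MapReduce; a minimal in-process sketch of that pattern makes the three phases concrete: map emits key-value pairs, shuffle groups them by key, and reduce aggregates each group. The word-count task and the function names are illustrative, not from the book.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values (here, by summing counts)."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big results", "data science"]
result = reduce_phase(shuffle(map_phase(docs)))
print(result)  # {'big': 2, 'data': 2, 'results': 1, 'science': 1}
```

Frameworks like Hadoop run the same three phases across many machines; the point of the sketch is that the programming model itself fits in a few lines.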

Pro T-SQL Programmer’s Guide, 4th Edition

Pro T-SQL Programmer's Guide is your guide to making the best use of the powerful Transact-SQL programming language that is built into Microsoft SQL Server's database engine. This edition is updated to cover the new, in-memory features that are part of SQL Server 2014. Discussing new and existing features, the book takes you on an expert guided tour of Transact-SQL functionality. Fully functioning examples and downloadable source code bring technically accurate and engaging treatment of Transact-SQL into your own hands. Step-by-step explanations ensure clarity, and an advocacy of best practices will steer you down the road to success. Transact-SQL is the language developers and DBAs use to interact with SQL Server. It's used for everything from querying data, to writing stored procedures, to managing the database. Support for in-memory stored procedures running queries against in-memory tables is new in the language and gets coverage in this edition. Also covered are must-know features such as window functions and data paging that help in writing fast-performing database queries. Developers and DBAs alike can benefit from the expressive power of T-SQL, and Pro T-SQL Programmer's Guide is your roadmap to success in applying this increasingly important database language to everyday business and technical tasks. Covers the newly introduced, in-memory database features Shares the best practices used by experienced professionals Goes deeply into the subject matter - an advanced book for the serious reader
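A quick sketch of the data-paging idea the blurb mentions. T-SQL expresses it with ORDER BY ... OFFSET ... FETCH NEXT; SQLite, used here via Python's stdlib so the sketch is runnable, spells the same thing LIMIT/OFFSET. The table and its contents are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item{i}",) for i in range(1, 11)])  # ten rows

# Fetch page 2 with three rows per page. A stable ORDER BY is essential:
# without it, paging can repeat or skip rows between requests.
page_size, page = 3, 2
rows = conn.execute(
    "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
    (page_size, (page - 1) * page_size),
).fetchall()
print(rows)  # rows 4 through 6
```

The T-SQL equivalent of the query would read `ORDER BY id OFFSET 3 ROWS FETCH NEXT 3 ROWS ONLY`; either way, the deterministic ordering is what makes the page boundaries meaningful.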

Statistics Done Wrong

Scientific progress depends on good research, and good research needs good statistics. But statistical analysis is tricky to get right, even for the best and brightest of us. You'd be surprised how many scientists are doing it wrong. Statistics Done Wrong is a pithy, essential guide to statistical blunders in modern science that will show you how to keep your research blunder-free. You'll examine embarrassing errors and omissions in recent research, learn about the misconceptions and scientific politics that allow these mistakes to happen, and begin your quest to reform the way you and your peers do statistics. You'll find advice on: Asking the right question, designing the right experiment, choosing the right statistical analysis, and sticking to the plan How to think about p values, significance, insignificance, confidence intervals, and regression Choosing the right sample size and avoiding false positives Reporting your analysis and publishing your data and source code Procedures to follow, precautions to take, and analytical software that can help Scientists: Read this concise, powerful guide to help you produce statistically sound research. Statisticians: Give this book to everyone you know. The first step toward statistics done right is Statistics Done Wrong.
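A small simulation of the false-positive trap the book warns about: run many experiments on a fair coin, test each at the 5% level, and a meaningful fraction come out "significant" even though nothing real is happening. The experiment counts and the exact binomial test used here are illustrative choices, not taken from the book.

```python
import math
import random

def binom_two_sided_p(heads, n):
    """Exact two-sided p-value for a fair coin: the probability of a count
    at least as far from n/2 as the one observed."""
    dev = abs(heads - n / 2)
    return sum(math.comb(n, k) for k in range(n + 1)
               if abs(k - n / 2) >= dev) / 2**n

rng = random.Random(0)
n_experiments, n_flips = 1000, 100
false_positives = 0
for _ in range(n_experiments):
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    if binom_two_sided_p(heads, n_flips) < 0.05:
        false_positives += 1

# Dozens of "significant" results from pure noise - every one a false positive.
print(false_positives)
```

This is the multiple-comparisons problem in miniature: with enough tests, chance alone manufactures discoveries, which is why the book stresses planning the analysis and the sample size before collecting data.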

Implementing the IBM Storwize V3700

Organizations of all sizes are faced with the challenge of managing massive volumes of increasingly valuable data. However, storing this data can be costly, and extracting value from the data is becoming more and more difficult. IT organizations have limited resources, but must stay responsive to dynamic environments and act quickly to consolidate, simplify, and optimize their IT infrastructures. The IBM® Storwize® V3700 system provides a solution that is affordable, easy to use, and self-optimizing, which enables organizations to overcome these storage challenges. Storwize V3700 delivers efficient, entry-level configurations that are specifically designed to meet the needs of small and midsize businesses. Designed to provide organizations with the ability to consolidate and share data at an affordable price, Storwize V3700 offers advanced software capabilities that are usually found in more expensive systems. Built on innovative IBM technology, Storwize V3700 addresses the block storage requirements of small and midsize organizations. Storwize V3700 is designed to accommodate the most common storage network technologies. This design enables easy implementation and management. Storwize V3700 includes the following features: Web-based GUI provides point-and-click management capabilities. Internal disk storage virtualization enables rapid, flexible provisioning and simple configuration changes. Thin provisioning enables applications to grow dynamically, but only use space they actually need. Simple data migration from external storage to Storwize V3700 storage (one-way from another storage device). Remote Mirror creates copies of data at remote locations for disaster recovery. IBM FlashCopy® creates instant application copies for backup or application testing. This IBM Redbooks® publication is intended for pre-sales and post-sales technical support professionals and storage administrators. The concepts in this book also relate to the IBM Storwize V3500.
This book was written at a software level of version 7 release 4.

Beginning JSON

Beginning JSON is the definitive guide to JSON - JavaScript Object Notation - today’s standard in data formatting for the web. The book starts with the basics, and walks you through all aspects of using the JSON format. Beginning JSON covers all areas of JSON from the basics of data formats to creating your own server to store and retrieve persistent data. Beginning JSON provides you with the skill set required for reading and writing properly validated JSON data. The first two chapters of the book will discuss the foundations of JavaScript for those who need it, and provide the necessary understandings for later chapters. Chapters 3 through 12 will uncover what data is, how to convert that data into a transmittable/storable format, how to use AJAX to send and receive JSON, and, lastly, how to reassemble that data back into a proper JavaScript object to be used by your program. The final chapters put everything you learned into practice.
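The book works in JavaScript; the serialize/transmit/reassemble cycle it walks through can be sketched equivalently in Python with the stdlib json module, since the JSON text on the wire is identical either way. The record below is an illustrative example, not from the book.

```python
import json

# A structured value to transmit or store (illustrative data).
record = {"id": 7, "tags": ["json", "ajax"], "active": True}

# Serialize: the object becomes a JSON string - the transmittable/storable
# form that AJAX would send over the wire. Note True becomes JSON's true.
wire = json.dumps(record)
print(wire)  # {"id": 7, "tags": ["json", "ajax"], "active": true}

# Reassemble: parsing the text yields an equivalent object again.
parsed = json.loads(wire)
print(parsed == record)  # True
```

In the browser the same two steps are `JSON.stringify` and `JSON.parse`; the format's whole appeal is that this round trip works the same in every language.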

Hibernate Recipes: A Problem-Solution Approach, Second Edition

Hibernate Recipes, Second Edition contains a collection of code recipes and templates for learning and building Hibernate solutions for you and your clients, including how to work with the Spring Framework and the JPA. This book is your pragmatic day-to-day reference and guide for doing all things involving Hibernate. There are many books focused on learning Hibernate, but this book takes you further and shows how you can apply it practically in your daily work. Hibernate Recipes, Second Edition is a must have book for your library. Hibernate 4.x continues to be the most popular out-of-the-box, open source framework solution for Java persistence and data/database accessibility techniques and patterns and it works well with the most popular open source enterprise Java framework of all, the Spring Framework. Hibernate is used for e-commerce–based web applications as well as heavy-duty transactional systems for the enterprise.

American-Type Options

The book gives a systematic presentation of stochastic approximation methods for discrete time Markov price processes. Advanced methods combining backward recurrence algorithms for computing option rewards and general results on convergence of stochastic space skeleton and tree approximations for option rewards are applied to a variety of models of multivariate modulated Markov price processes. The principal novelty of the presented results is based on consideration of multivariate modulated Markov price processes and general pay-off functions, which can depend not only on price but also on an additional stochastic modulating index component, and use of minimal conditions of smoothness for transition probabilities and pay-off functions, compactness conditions for log-price processes, and rate of growth conditions for pay-off functions. The volume presents results on structural studies of optimal stopping domains, Monte Carlo based approximation reward algorithms, and convergence of American-type options for autoregressive and continuous time models, as well as results of the corresponding experimental studies.
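A minimal, concrete instance of the backward-recurrence idea for computing option rewards: pricing an American put on a binomial (CRR) tree, taking at each node the larger of immediate exercise and discounted expected continuation. This is a simple illustrative sketch in Python, far from the multivariate modulated models the book treats, and all parameter values are made up.

```python
import math

def american_put_binomial(s0, strike, rate, sigma, t, steps):
    """Backward recurrence on a Cox-Ross-Rubinstein binomial tree."""
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))           # up factor
    d = 1 / u                                     # down factor
    disc = math.exp(-rate * dt)                   # one-step discount
    p = (math.exp(rate * dt) - d) / (u - d)       # risk-neutral up probability

    # Rewards at maturity: node j has seen j up-moves out of `steps`.
    values = [max(strike - s0 * u**j * d**(steps - j), 0.0)
              for j in range(steps + 1)]

    # Backward recurrence: at each earlier node, take the better of
    # exercising now and holding (discounted expected next-step value).
    for step in range(steps - 1, -1, -1):
        values = [
            max(strike - s0 * u**j * d**(step - j),
                disc * (p * values[j + 1] + (1 - p) * values[j]))
            for j in range(step + 1)
        ]
    return values[0]

# At-the-money put; early exercise lifts the value above the
# European (Black-Scholes) price of about 5.57.
print(round(american_put_binomial(100, 100, 0.05, 0.2, 1.0, steps=200), 2))
```

The early-exercise comparison at each node is what distinguishes the American-type reward from its European counterpart, and the tree here plays the role of the space-skeleton approximations whose convergence the book studies.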