talk-data.com

Topic: Unix

Tags: operating_system, multi_user, multitasking

54 tagged activities

Activity Trend: 2020-Q1 to 2026-Q1 (1 peak/qtr)

Activities (54, newest first)

ADSP: The Podcast, Episode 162

In this episode, Conor and Bryce chat with Sean Parent about Pascal, C, Unix, Modula(-2/3), and more! Link to Episode 162 on Website · Discuss this episode, leave a comment, or ask a question (on GitHub) · Twitter · ADSP: The Podcast · Conor Hoekstra · Bryce Adelstein Lelbach

About the Guest: Sean Parent is a senior principal scientist and software architect managing Adobe's Software Technology Lab. Sean first joined Adobe in 1993 working on Photoshop and is one of the creators of Photoshop Mobile, Lightroom Mobile, and Lightroom Web. In 2009 Sean spent a year at Google working on Chrome OS before returning to Adobe. From 1988 through 1993 Sean worked at Apple, where he was part of the system software team that developed the technologies allowing Apple's successful transition to PowerPC.

Show Notes

Date Recorded: 2023-12-12 · Date Released: 2023-12-29 · Jonathan O'Connor ADSP Episodes · Sean Parent tweet on ADSP Episode 154 · Software Unscripted Ep77: How Programming has Changed · ArrayCast Ep68: Brian Ellingsgaard and the Rayed-BQN Games Framework · UCSD Pascal · Pascal Programming Language · Steve Wozniak's SWEET16 · p-code machine · Apple Lisa · Larry Tesler · Object Pascal · Delphi · Unix · VAX/VMS · C Language · Turbo Pascal · Apple Pascal · Metrowerks CodeWarrior IDE · Modula Language · Modula-2 Language · Modula-3 Language · Oberon Language · Arthur Whitney · Anders Hejlsberg · Compiler Construction by Niklaus Wirth · Lilith Computer · Tilt Five · Jeri Ellsworth · Intro Song Info: Miss You by Sarah Jansen (https://soundcloud.com/sarahjansenmusic), licensed under Creative Commons Attribution 3.0 Unported (CC BY 3.0)

How socat and UNIX Pipes Can Help Data Integration

Nearly every developer is familiar with creating a CLI. Containerized CLIs provide a flexible, cross-language standard with a low barrier to entry for open-source contributors. The ETL process can be reduced to two CLIs: one that reads data and one that writes data. While this interface is simple for contributors to implement, Kubernetes' distributed nature makes orchestrating data transfer between the two CLIs an unsolved problem.

This talk describes a novel approach to reliably orchestrating CLIs on Kubernetes for data integration. Through this lens, we evaluate several strategies and describe the pros and cons of each architecture for horizontally scaling containerized data integration workflows on Kubernetes. We also cover the journey of implementing a TCP-based "process" abstraction over CLIs using socat and UNIX pipes. This same approach powers all of Airbyte's Kubernetes deployments and helps sync terabytes of data daily.
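As a rough, minimal sketch of the pattern the talk builds on (a reader CLI whose stdout feeds a writer CLI's stdin), the Python snippet below wires two stand-in commands together with an anonymous pipe. The seq and wc commands are only placeholders for real source and destination connectors, and the production design described in the talk layers socat and named pipes on top of this idea so the two processes can live in different pods.

```python
import subprocess

# Stand-ins for the two halves of the ETL interface described above:
# a "source" CLI that writes records to stdout and a "destination" CLI
# that reads records from stdin. Real connector commands would replace these.
SOURCE_CMD = ["seq", "1", "1000000"]   # pretend each number is a record
DEST_CMD = ["wc", "-l"]                # pretend counting lines is "loading"

def run_sync() -> int:
    """Connect source stdout to destination stdin, i.e. `source | destination`."""
    source = subprocess.Popen(SOURCE_CMD, stdout=subprocess.PIPE)
    dest = subprocess.Popen(DEST_CMD, stdin=source.stdout)
    source.stdout.close()  # let the destination see EOF once the source exits
    dest.wait()
    source.wait()
    return source.returncode or dest.returncode

if __name__ == "__main__":
    raise SystemExit(run_sync())
```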


Data Science at the Command Line, 2nd Edition

This thoroughly revised guide demonstrates how the flexibility of the command line can help you become a more efficient and productive data scientist. You'll learn how to combine small yet powerful command-line tools to quickly obtain, scrub, explore, and model your data. To get you started, author Jeroen Janssens provides a Docker image packed with over 100 Unix power tools, useful whether you work with Windows, macOS, or Linux. You'll quickly discover why the command line is an agile, scalable, and extensible technology. Even if you're comfortable processing data with Python or R, you'll learn how to greatly improve your data science workflow by leveraging the command line's power. This book is ideal for data scientists, analysts, engineers, system administrators, and researchers. Topics include:

Obtain data from websites, APIs, databases, and spreadsheets
Perform scrub operations on text, CSV, HTML, XML, and JSON files
Explore data, compute descriptive statistics, and create visualizations
Manage your data science workflow
Create your own tools from one-liners and existing Python or R code
Parallelize and distribute data-intensive pipelines
Model data with dimensionality reduction, regression, and classification algorithms
Leverage the command line from Python, Jupyter, R, RStudio, and Apache Spark
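The book's theme of chaining small Unix tools can also be driven from Python, which its later chapters touch on. The sketch below is a minimal illustration of that idea, assuming a local file named input.txt and the standard tr, sort, uniq, and head utilities (available on Linux, macOS, or the book's Docker image); it is not an excerpt from the book.

```python
import subprocess

# A classic "small tools, chained together" word-frequency pipeline, launched
# from Python. input.txt is a placeholder for any plain-text file.
PIPELINE = (
    "tr -cs '[:alpha:]' '\\n' < input.txt | "   # split into one word per line
    "tr '[:upper:]' '[:lower:]' | "              # normalize case
    "sort | uniq -c | sort -rn | head -10"       # count and keep the top ten
)

result = subprocess.run(PIPELINE, shell=True, capture_output=True, text=True, check=True)
print(result.stdout)  # the ten most frequent words with their counts
```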

ABCs of IBM z/OS System Programming Volume 2

Abstract: The ABCs of IBM® z/OS® System Programming is a 13-volume collection that provides an introduction to the z/OS operating system and the hardware architecture. Whether you are a beginner or an experienced system programmer, the ABCs collection provides the information that you need to start your research into z/OS and related subjects. If you want to become more familiar with z/OS in your current environment or if you are evaluating platforms to consolidate your e-business applications, the ABCs collection can serve as a powerful technical tool. This volume describes the basic system programming activities related to implementing and maintaining the z/OS installation and provides details about the modules that are used to manage jobs and data. It covers the following topics:

Overview of the parmlib definitions and the IPL process. The parameters and system data sets necessary to IPL and run a z/OS operating system are described, along with the main daily tasks for maximizing performance of the z/OS system.
Basic concepts related to subsystems and the subsystem interface, and how to use the subsystem services that are provided by IBM subsystems.
Job management in the z/OS system using the JES2 and JES3 job entry subsystems. It provides a detailed discussion about how JES2 and JES3 are used to receive jobs into the operating system, schedule them for processing by z/OS, and control their output processing.
The link pack area (LPA), LNKLST, authorized libraries, and the role of the VLF and LLA components.
An overview of SMP/E for z/OS.
An overview of IBM Language Environment® architecture and descriptions of Language Environment's full program model, callable services, storage management model, and debug information.

Other volumes in this series include the following content:

Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS delivery and installation
Volume 3: Introduction to DFSMS, data set basics, storage management, hardware and software, catalogs, and DFSMStvs
Volume 4: Communication Server, TCP/IP, and IBM VTAM®
Volume 5: Base and IBM Parallel Sysplex®, System Logger, Resource Recovery Services (RRS), global resource serialization (GRS), z/OS system operations, automatic restart management (ARM), IBM Geographically Dispersed Parallel Sysplex™ (IBM GDPS®)
Volume 6: Introduction to security, IBM RACF®, Digital certificates and PKI, Kerberos, cryptography and z990 integrated cryptography, zSeries firewall technologies, LDAP, and Enterprise Identity Mapping (EIM)
Volume 7: Printing in a z/OS environment, Infoprint Server, and Infoprint Central
Volume 8: An introduction to z/OS problem diagnosis
Volume 9: z/OS UNIX System Services
Volume 10: Introduction to IBM z/Architecture®, the IBM Z platform and IBM Z connectivity, LPAR concepts, HCD, and the DS Storage Solution
Volume 11: Capacity planning, performance management, WLM, IBM RMF™, and SMF
Volume 12: WLM
Volume 13: JES3, JES3 SDSF

ABCs of IBM z/OS System Programming Volume 3

Abstract: The ABCs of IBM z/OS® System Programming is a 13-volume collection that provides an introduction to the z/OS operating system and the hardware architecture. Whether you are a beginner or an experienced system programmer, the ABCs collection provides the information that you need to start your research into z/OS and related subjects. The ABCs collection serves as a powerful technical tool to help you become more familiar with z/OS in your current environment, or to help you evaluate platforms to consolidate your e-business applications. This edition is updated to z/OS Version 2 Release 3. The other volumes contain the following content:

Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS delivery and installation
Volume 2: z/OS implementation and daily maintenance, defining subsystems, IBM Job Entry Subsystem 2 (JES2) and JES3, link pack area (LPA), LNKLST, authorized libraries, System Modification Program Extended (SMP/E), IBM Language Environment
Volume 4: Communication Server, TCP/IP, and IBM VTAM®
Volume 5: Base and IBM Parallel Sysplex®, System Logger, Resource Recovery Services (RRS), global resource serialization (GRS), z/OS system operations, automatic restart manager (ARM), IBM Geographically Dispersed Parallel Sysplex™ (IBM GDPS)
Volume 6: Introduction to security, IBM RACF®, Digital certificates and PKI, Kerberos, cryptography and z990 integrated cryptography, zSeries firewall technologies, LDAP, and Enterprise Identity Mapping (EIM)
Volume 7: Printing in a z/OS environment, Infoprint Server, and Infoprint Central
Volume 8: An introduction to z/OS problem diagnosis
Volume 9: z/OS UNIX System Services
Volume 10: Introduction to IBM z/Architecture®, the IBM Z platform, IBM Z connectivity, LPAR concepts, HCD, and DS Storage Solution
Volume 11: Capacity planning, performance management, WLM, IBM RMF™, and SMF
Volume 12: WLM
Volume 13: JES3, JES3 SDSF

Gnuplot in Action, Second Edition

Gnuplot in Action, Second Edition is a major revision of this popular and authoritative guide for developers, engineers, and scientists who want to learn and use gnuplot effectively. Fully updated for gnuplot version 5, the book includes four pages of color illustrations and four bonus appendixes available in the eBook.

About the Technology: Gnuplot is an open-source graphics program that helps you analyze, interpret, and present numerical data. Available for Unix, Mac, and Windows, it is well-maintained, mature, and totally free.

About the Book: Gnuplot in Action, Second Edition is a major revision of this authoritative guide for developers, engineers, and scientists. The book starts with a tutorial introduction, followed by a systematic overview of gnuplot's core features and full coverage of gnuplot's advanced capabilities. Experienced readers will appreciate the discussion of gnuplot 5's features, including new plot types, improved text and color handling, and support for interactive, web-based display formats. The book concludes with chapters on graphical effects and general techniques for understanding data with graphs. It includes four pages of color illustrations. 3D graphics, false-color plots, heatmaps, and multivariate visualizations are covered in chapter-length appendixes available in the eBook.

What's Inside:
Creating different types of graphs in detail
Animations, scripting, batch operations
Extensive discussion of terminals
Updated to cover gnuplot version 5

About the Reader: No prior experience with gnuplot is required. This book concentrates on practical applications of gnuplot relevant to users of all levels.

About the Author: Philipp K. Janert, PhD, is a programmer and scientist. He is the author of several books on data analysis and applied math and has been a gnuplot power user and developer for over 20 years.

Quotes:
"The highly anticipated, updated version of my go-to-for-everything book on gnuplot." - Ryan Balfanz, Shift Medical, Inc.
"The essential guide for newcomers and the definitive handbook for advanced users." - Zoltán Vörös, University of Innsbruck
"Learn how to use gnuplot to convert meaningful data into attention-grabbing visualizations that communicate your message quickly and accurately." - David Kerns, Rincon Research Corporation
"An accessible guide to gnuplot and best practices of everyday data visualization." - Wesley R. Elsberry, PhD, RealPage, Inc.
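Because gnuplot is itself a command-line program, it is easy to drive in batch mode from a script. The following sketch, assuming gnuplot 5 is on the PATH and was built with the pngcairo terminal (fall back to "set terminal png" otherwise), pipes a short plot script to the executable from Python; the plotted function and file name are purely illustrative.

```python
import subprocess

# Render a plot in batch mode by feeding a gnuplot script to the executable's stdin.
script = """
set terminal pngcairo size 800,500
set output 'sine.png'
set title 'Damped sine wave'
set xlabel 'x'
set ylabel 'exp(-x/5) * sin(x)'
plot [0:20] exp(-x/5.0)*sin(x) with lines notitle
"""

subprocess.run(["gnuplot"], input=script, text=True, check=True)
print("wrote sine.png")
```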

IBM z/OS V2R2: Security

This IBM® Redbooks® publication helps you to become familiar with the technical changes that were introduced to the security areas with IBM z/OS® V2R2. The following chapters are included:

- Chapter 1, "RACF updates": the read-only auditor attribute, password security enhancements, RACDCERT (granular certificate administration), UNIX search authority, and the RACF remote sharing facility (RRSF).
- Chapter 2, "LDAP updates": activity log enhancements, compatibility level upgrade without an LDAP outage, dynamic group performance enhancements, and replication of password policy attributes from a read-only replica.
- Chapter 3, "PKI updates": Network Authentication Service (Kerberos) PKINIT, PKI nxm authorization, the PKI OCSP enhancement, and RACDCERT (granular certificate administration).
- Chapter 4, "z/OS UNIX search and file execution authority": z/OS UNIX search authority, z/OS UNIX file execution, and examples for exploiting the new functions.

This book is one of a series of IBM Redbooks that take a modular approach to providing information about the updates that are included with z/OS V2R2. This approach has the following goals:

- Provide modular content
- Group the technical changes into a topic
- Provide a more streamlined way of finding relevant information that is based on the topic

We hope you find this approach useful and we welcome your feedback.

Learning ELK Stack

Dive into the ELK stack (Elasticsearch, Logstash, and Kibana) with this comprehensive guide. Designed to help you set up, configure, and utilize the stack to its fullest, this book provides you with the skills to manage data with precision, enrich logs, and create meaningful analytics. Develop an entire data pipeline and cultivate powerful visual insights from your data.

What this book will help me do:
Install and configure Elasticsearch, Logstash, and Kibana to establish a robust ELK stack setup.
Understand the role of each component in the stack and master the basics of log analysis.
Create custom Logstash plugins to handle non-standard data processing requirements.
Develop interactive and insightful data visualizations and dashboards using Kibana.
Implement a complete data pipeline and gain expertise in data indexing, searching, and reporting.

Author(s): Chhajed brings a depth of technical understanding and practical experience to the exploration of the ELK Stack. With a strong background in open-source technologies and data analytics, Chhajed has worked extensively with ELK stack implementations in real-world scenarios. Through this guide, the author offers clarity, detailed examples, and actionable insights for professionals seeking to improve their data systems.

Who is it for? This book is targeted towards software developers, data analysts, and DevOps engineers seeking to harness the potential of the ELK stack for data analysis and logging. It is most suitable for intermediate-level professionals with basic knowledge of Unix or programming. If your aim is to gain insights and build metrics from diverse data formats utilizing open-source technologies, this book is crafted for you.
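As a taste of the indexing and searching workflow the book walks through, here is a minimal sketch that talks to Elasticsearch's REST API directly. It assumes an unsecured node on localhost:9200 and the third-party requests package; the index name and fields are made up, and the _doc endpoint shown follows the modern convention (the older Elasticsearch releases the book was written against use an explicit document type in the URL instead).

```python
import requests  # third-party HTTP client, assumed installed

ES = "http://localhost:9200"  # assumes a local, unauthenticated Elasticsearch node

# Index a single log event (index name and fields are illustrative).
doc = {"timestamp": "2015-06-01T12:00:00Z", "level": "ERROR", "message": "disk full on /dev/sda1"}
resp = requests.put(f"{ES}/logs-demo/_doc/1", json=doc)
resp.raise_for_status()

# Full-text search over the message field.
query = {"query": {"match": {"message": "disk"}}}
resp = requests.post(f"{ES}/logs-demo/_search", json=query)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"])
```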

Bioinformatics Data Skills

Learn the data skills necessary for turning large sequencing datasets into reproducible and robust biological findings. With this practical guide, you'll learn how to use freely available open source tools to extract meaning from large, complex biological data sets. At no other point in human history has our ability to understand life's complexities been so dependent on our skills to work with and analyze data. This intermediate-level book teaches the general computational and data skills you need to analyze biological data. If you have experience with a scripting language like Python, you're ready to get started. Topics include:

Go from handling small problems with messy scripts to tackling large problems with clever methods and tools
Process bioinformatics data with powerful Unix pipelines and data tools
Learn how to use exploratory data analysis techniques in the R language
Use efficient methods to work with genomic range data and range operations
Work with common genomics data file formats like FASTA, FASTQ, SAM, and BAM
Manage your bioinformatics project with the Git version control system
Tackle tedious data processing tasks with Bash scripts and Makefiles
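In the spirit of the Unix-pipeline workflow the book teaches, the short script below summarizes a FASTQ stream read from stdin, so it can sit at the end of a pipeline such as `zcat reads.fastq.gz | python fastq_summary.py`. It is an illustrative sketch (plain four-line FASTQ records only), not material from the book.

```python
#!/usr/bin/env python3
"""Summarize a FASTQ stream from stdin: read count, mean length, GC content."""
import sys
from itertools import islice

def records(stream):
    """Yield (header, sequence, quality) from plain four-line FASTQ records."""
    while True:
        chunk = list(islice(stream, 4))
        if len(chunk) < 4:
            return
        header, seq, _, qual = (line.rstrip("\n") for line in chunk)
        yield header, seq, qual

n_reads = n_bases = gc = 0
for header, seq, qual in records(sys.stdin):
    n_reads += 1
    n_bases += len(seq)
    gc += seq.count("G") + seq.count("C")

if n_reads:
    print(f"reads: {n_reads}")
    print(f"mean length: {n_bases / n_reads:.1f}")
    print(f"GC content: {gc / n_bases:.1%}")
```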

Learning Hadoop 2

Delve into the world of big data with Learning Hadoop 2, a comprehensive guide to leveraging the capabilities of Hadoop 2 for data processing and analysis. In this book, you will explore the tools and frameworks that integrate with Hadoop, discovering the best ways to design and deploy effective workflows for managing and analyzing large datasets.

What this book will help me do:
Understand the fundamentals of the MapReduce framework and its applications.
Utilize advanced tools such as Samza and Spark for real-time and iterative data processing.
Manage large datasets with data mining techniques tailored for Hadoop environments.
Deploy Hadoop applications across various infrastructures, including local clusters and cloud services.
Create and orchestrate sophisticated data workflows and pipelines with Apache Pig and Oozie.

Author(s): Gabriele Modena is an experienced developer and trained data specialist with a keen focus on distributed data processing frameworks. Having worked extensively with big data platforms, Gabriele brings practical insights and a hands-on perspective to technical subjects. His writing is concise and engaging, aiming to render complex concepts accessible.

Who is it for? This book is ideal for system and application developers eager to learn practical implementations of the Hadoop framework. Readers should be familiar with the Unix/Linux command-line interface and Java programming. Prior experience with Hadoop will be advantageous, but not necessary.
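Since the book assumes comfort with the Unix command line, a natural first look at MapReduce is the Hadoop Streaming style, where the mapper and reducer are ordinary programs that read stdin and write stdout. The sketch below is a self-contained word count in that style; the local pipeline in the docstring runs without Hadoop, and the mention of the streaming jar is only indicative of how it would be deployed on a cluster.

```python
#!/usr/bin/env python3
"""Word count in the Hadoop Streaming style.

Run locally (no cluster needed) as:
    cat input.txt | ./wordcount.py map | sort | ./wordcount.py reduce
On a cluster, the same script would be passed to the Hadoop streaming jar as
the -mapper and -reducer commands (exact invocation depends on your setup).
"""
import sys

def mapper():
    # Emit one "word<TAB>1" pair per token.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    # Input arrives sorted by key, so counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    {"map": mapper, "reduce": reducer}[sys.argv[1]]()
```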

Getting Started with IBM InfoSphere Optim Workload Replay for DB2

This IBM® Redbooks® publication will help you install, configure, and use IBM InfoSphere® Optim™ Workload Replay (InfoSphere Workload Replay), a web-based tool that lets you capture real production SQL workload data and then replay the workload data in a pre-production environment. With InfoSphere Workload Replay, you can set up and run realistic tests for enterprise database changes without the need to create a complex client and application infrastructure to mimic your production environment. The publication goes through the steps to install and configure the InfoSphere Workload Replay appliance and related database components for IBM DB2® for Linux, UNIX, and Windows and for DB2 for IBM z/OS®. The capture, replay, and reporting process, including user ID and roles management, is described in detail to quickly get you up and running. Separate chapters cover ongoing day-to-day operations (such as appliance health monitoring, starting and stopping the product, and backup and restore), extensive troubleshooting information, and how to integrate InfoSphere Workload Replay with other InfoSphere products.

Architecting and Deploying DB2 with BLU Acceleration

IBM® DB2® with BLU Acceleration is a revolutionary technology that is delivered in DB2 for Linux, UNIX, and Windows Release 10.5. BLU Acceleration delivers breakthrough performance improvements for analytic queries by using dynamic in-memory columnar technologies. Different from other vendor solutions, BLU Acceleration allows the unified computing of OLTP and analytics data inside a single database, therefore, removing barriers and accelerating results for users. With observed hundredfold improvement in query response time, BLU Acceleration provides a simple, fast, and easy-to-use solution for the needs of today's organizations; quick access to business answers can be used to gain a competitive edge, lower costs, and more. This IBM Redbooks® publication introduces the concepts of DB2 with BLU Acceleration. It discusses the steps to move from a relational database to using BLU Acceleration, optimizing BLU usage, and deploying BLU into existing analytic solutions today, with an example of IBM Cognos®. This book also describes integration of DB2 with BLU Acceleration into SAP Business Warehouse (SAP BW) and SAP's near-line storage solution on DB2. This publication is intended to be helpful to a wide-ranging audience, including those readers who want to understand the technologies and those who have planning, deployment, and support responsibilities.

ABCs of IBM z/OS System Programming Volume 1

The ABCs of IBM® z/OS® System Programming is a 13-volume collection that provides an introduction to the z/OS operating system and the hardware architecture. Whether you are a beginner or an experienced system programmer, the ABCs collection provides the information that you need to start your research into z/OS and related subjects. Whether you want to become more familiar with z/OS in your current environment, or you are evaluating platforms to consolidate your online business applications, the ABCs collection will serve as a powerful technical tool. Volume 1 provides an updated understanding of the software and IBM zSeries architecture, and explains how it is used together with the z/OS operating system. This includes the main components of z/OS needed to customize and install the z/OS operating system. This edition has been significantly updated and revised. The other volumes contain the following content:

Volume 2: z/OS implementation and daily maintenance, defining subsystems, IBM Job Entry Subsystem 2 (JES2) and JES3, link pack area (LPA), LNKLST, authorized libraries, System Modification Program/Extended (SMP/E), IBM Language Environment®
Volume 3: Introduction to Data Facility Storage Management Subsystem (DFSMS), data set basics, storage management hardware and software, catalogs, and DFSMS Transactional Virtual Storage Access Method (VSAM), or DFSMStvs
Volume 4: z/OS Communications Server, Transmission Control Protocol/Internet Protocol (TCP/IP), and IBM Virtual Telecommunications Access Method (IBM VTAM®)
Volume 5: Base and IBM Parallel Sysplex®, z/OS System Logger, Resource Recovery Services (RRS), Global Resource Serialization (GRS), z/OS system operations, z/OS Automatic Restart Manager (ARM), IBM Geographically Dispersed Parallel Sysplex™ (IBM GDPS®)
Volume 6: Introduction to security, IBM Resource Access Control Facility (IBM RACF®), Digital certificates and public key infrastructure (PKI), Kerberos, cryptography and IBM eServer™ z990 integrated cryptography, zSeries firewall technologies, Lightweight Directory Access Protocol (LDAP), and Enterprise Identity Mapping (EIM)
Volume 7: Printing in a z/OS environment, Infoprint Server, and Infoprint Central
Volume 8: An introduction to z/OS problem diagnosis
Volume 9: z/OS UNIX System Services
Volume 10: Introduction to IBM z/Architecture®, zSeries processor design, zSeries connectivity, LPAR concepts, HCD, and IBM DS8000®
Volume 11: Capacity planning, IBM Performance Management, z/OS Workload Manager (WLM), IBM Resource Management Facility (IBM RMF™), and IBM System Management Facility (SMF)
Volume 12: WLM
Volume 13: JES2 and JES3 System Display and Search Facility (SDSF)

ABCs of IBM z/OS System Programming Volume 6

The ABCs of IBM® z/OS® System Programming is an 11-volume collection that provides an introduction to the z/OS operating system and the hardware architecture. Whether you are a beginner or an experienced system programmer, the ABCs collection provides the information that you need to start your research into z/OS and related subjects. If you want to become more familiar with z/OS in your current environment or if you are evaluating platforms to consolidate your e-business applications, the ABCs collection can serve as a powerful technical tool. Following are the contents of the volumes:

Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS delivery and installation
Volume 2: z/OS implementation and daily maintenance, defining subsystems, JES2 and JES3, LPA, LNKLST, authorized libraries, IBM Language Environment®, and SMP/E
Volume 3: Introduction to DFSMS, data set basics, storage management hardware and software, VSAM, System-managed storage, catalogs, and DFSMStvs
Volume 4: Communication Server, TCP/IP, and IBM VTAM®
Volume 5: Base and IBM Parallel Sysplex®, System Logger, Resource Recovery Services (RRS), global resource serialization (GRS), z/OS system operations, automatic restart management (ARM), and IBM Geographically Dispersed Parallel Sysplex™ (IBM GDPS®)
Volume 6: Introduction to security, IBM RACF®, digital certificates and public key infrastructure (PKI), Kerberos, cryptography and IBM z9® integrated cryptography, Lightweight Directory Access Protocol (LDAP), and Enterprise Identity Mapping (EIM)
Volume 7: Printing in a z/OS environment, Infoprint Server, and Infoprint Central
Volume 8: An introduction to z/OS problem diagnosis
Volume 9: z/OS UNIX System Services
Volume 10: Introduction to IBM z/Architecture®, IBM System z® processor design, System z connectivity, logical partition (LPAR) concepts, hardware configuration definition (HCD), and Hardware Management Console (HMC)
Volume 11: Capacity planning, performance management, Workload Manager (WLM), IBM Resource Measurement Facility™ (RMF™), and System Management Facilities (SMF)

Architecting and Deploying IBM DB2 with BLU Acceleration in Your Analytical Environment

IBM® DB2® with BLU Acceleration is a revolutionary technology that is delivered in DB2 for Linux, UNIX, and Windows Release 10.5. BLU Acceleration delivers breakthrough performance improvements for analytic queries by using dynamic in-memory columnar technologies. Different from other vendor solutions, BLU Acceleration allows the unified computing of online transaction processing (OLTP) and analytics data inside a single database, therefore, removing barriers and accelerating results for users. With observed hundredfold improvement in query response time, BLU Acceleration provides a simple, fast, and easy-to-use solution for the needs of today's organizations; quick access to business answers can be used to gain a competitive edge, lower costs, and more. This IBM Redbooks® publication introduces the concepts of DB2 with BLU Acceleration. It discusses the steps to move from a relational database to using BLU Acceleration, optimizing BLU usage, and deploying BLU into existing analytic solutions today, with an example of IBM Cognos®. This book also describes integration of DB2 with BLU Acceleration into SAP Business Warehouse (SAP BW) and SAP's near-line storage solution on DB2. This publication is intended to be helpful to a wide-ranging audience, including those readers who want to understand the technologies and readers who have planning, deployment, and support responsibilities.

Leveraging DB2 10 for High Performance of Your Data Warehouse

Building on the business intelligence (BI) framework and capabilities that are outlined in InfoSphere Warehouse: A Robust Infrastructure for Business Intelligence, SG24-7813, this IBM® Redbooks® publication focuses on the new business insight challenges that have arisen in the last few years and the new technologies in IBM DB2® 10 for Linux, UNIX, and Windows that provide powerful analytic capabilities to meet those challenges. This book is organized into two parts. The first part provides an overview of data warehouse infrastructure and DB2 Warehouse, and outlines the planning and design process for building your data warehouse. The second part covers the major technologies that are available in DB2 10 for Linux, UNIX, and Windows. We focus on functions that help you get the most value and performance from your data warehouse. These technologies include database partitioning, intrapartition parallelism, compression, multidimensional clustering, range (table) partitioning, data movement utilities, database monitoring interfaces, infrastructures for high availability, DB2 workload management, data mining, and relational OLAP capabilities. A chapter on BLU Acceleration gives you all of the details about this exciting DB2 10.5 innovation that simplifies and speeds up reporting and analytics. Easy to set up and self-optimizing, BLU Acceleration eliminates the need for indexes, aggregates, or time-consuming database tuning to achieve top performance and storage efficiency. No SQL or schema changes are required to take advantage of this breakthrough technology. This book is primarily intended for use by IBM employees, IBM clients, and IBM Business Partners.

Oracle Exadata Recipes: A Problem-Solution Approach

Oracle Exadata Recipes takes an example-based, problem/solution approach in showing how to size, install, configure, manage, monitor, optimize, and migrate Oracle database workloads on and to the Oracle Exadata Database Machine. Whether you're an Oracle Database administrator, Unix/Linux administrator, storage administrator, network administrator, or Oracle developer, Oracle Exadata Recipes provides effective and proven solutions to accomplish a wide variety of tasks on the Exadata Database Machine. You can feel confident using the reliable solutions that are demonstrated in this book in your enterprise Exadata environment. Managing Oracle Exadata is unlike managing a traditional Oracle database. Oracle's Exadata Database Machine is a pre-configured engineered system composed of hardware and software, built to deliver extreme performance for Oracle Database workloads. Exadata delivers extreme performance by offering an optimally balanced hardware infrastructure with fast components at each layer of the engineered technology stack, as well as a unique set of Oracle software features designed to leverage the high-performing hardware infrastructure by reducing I/O demands. Let Oracle Exadata Recipes help you translate your existing Oracle Database knowledge into the exciting new growth area that is Oracle Exadata. This book:

Helps extend your Oracle Database skill set to the fast-growing Exadata platform
Presents information on managing Exadata in a helpful, example-based format
Clearly explains unique Exadata software and hardware features

What you'll learn:

Install and configure Exadata
Manage your Exadata hardware infrastructure
Monitor and troubleshoot performance issues
Manage smart scan and cell offload processing
Take advantage of Hybrid Columnar Compression
Deploy Smart Flash Cache and Smart Flash Logging
Ensure the health of your Exadata environment

Who this book is for: Oracle Exadata Recipes is for Oracle Database administrators, Unix/Linux administrators, storage administrators, backup administrators, network administrators, and Oracle developers who want to quickly learn to develop effective and proven solutions without reading through a lengthy manual scrubbing for techniques. Readers in a hurry will appreciate the recipe format that sets up solutions to common tasks as the centerpiece of the book.

Delivering Continuity and Extreme Capacity with the IBM DB2 pureScale Feature

The IBM® DB2® pureScale® feature offers clustering technology that helps deliver high availability and exceptional scalability transparent to applications. The DB2 pureScale feature helps you to meet your business needs around availability and scalability, and is also easy to configure and administer. This IBM Redbooks® publication addresses the DB2 pureScale feature that is available in IBM DB2 10.1 for Linux, UNIX, and Windows operating systems. It can help you build skills and deploy the DB2 pureScale feature. This book bundles all the information necessary for an in-depth analysis of the functions of the DB2 pureScale feature, including the actual hardware requirements. It includes validated step-by-step hardware and software installation instructions. In addition, this book provides detailed examples about how to work effectively with a DB2 pureScale cluster and how to plan and run an upgrade of all DB2-related components to DB2 10.1. This book is intended for database administrators (DBAs) who use IBM DB2 10.1 for Linux, UNIX, and Windows operating systems and who want to explore and get started with the DB2 pureScale feature.

Unleashing DB2 10 for Linux, UNIX, and Windows

This IBM® Redbooks® publication provides a broad understanding of the key features in IBM DB2® 10 and how to use these features to get more value from business applications. It includes information about the following features:

Time Travel Query, which you use to store and retrieve time-based data by using capability built into DB2 10, without needing to build your own solution
Adaptive compression, an enhanced compression technology that adapts to changing data patterns, yielding extremely high compression ratios
Multi-temperature storage, which you may use to optimize storage costs by identifying and managing data based on its "temperature" or access requirements
Row and column access control, which offers security access enforcement on your data, at the row or column level, or both
Availability enhancements, which provide different DB2 availability features for different enterprise needs; high availability disaster recovery (HADR) multiple-standby databases provide availability and data recovery in one technology, and the IBM DB2 pureScale® Feature provides continuous availability and scalability that is transparent to applications
Oracle compatibility, which allows many applications written for Oracle to run on DB2, virtually unchanged
Ingest utility, a feature-rich data movement utility that allows queries to run concurrently with minimal impact on data availability