talk-data.com

Event

O'Reilly Data Engineering Books

2001-10-19 – 2027-05-25 · O'Reilly

Activities tracked

499

Collection of O'Reilly books on Data Engineering.

Filtering by: Cloud Computing

Sessions & talks

Showing 351–375 of 499 · Newest first

Sams Teach Yourself Apache Spark™ in 24 Hours

Apache Spark is a fast, scalable, and flexible open source distributed processing engine for big data systems and is one of the most active open source big data projects to date. In just 24 lessons of one hour or less, Sams Teach Yourself Apache Spark in 24 Hours helps you build practical Big Data solutions that leverage Spark’s amazing speed, scalability, simplicity, and versatility. This book’s straightforward, step-by-step approach shows you how to deploy, program, optimize, manage, integrate, and extend Spark–now, and for years to come. You’ll discover how to create powerful solutions encompassing cloud computing, real-time stream processing, machine learning, and more. Every lesson builds on what you’ve already learned, giving you a rock-solid foundation for real-world success. Whether you are a data analyst, data engineer, data scientist, or data steward, learning Spark will help you to advance your career or embark on a new career in the booming area of Big Data.

Learn how to:
• Discover what Apache Spark does and how it fits into the Big Data landscape
• Deploy and run Spark locally or in the cloud
• Interact with Spark from the shell
• Make the most of the Spark Cluster Architecture
• Develop Spark applications with Scala and functional Python
• Program with the Spark API, including transformations and actions
• Apply practical data engineering/analysis approaches designed for Spark
• Use Resilient Distributed Datasets (RDDs) for caching, persistence, and output
• Optimize Spark solution performance
• Use Spark with SQL (via Spark SQL) and with NoSQL (via Cassandra)
• Leverage cutting-edge functional programming techniques
• Extend Spark with streaming, R, and Sparkling Water
• Start building Spark-based machine learning and graph-processing applications
• Explore advanced messaging technologies, including Kafka
• Preview and prepare for Spark’s next generation of innovations

Instructions walk you through common questions, issues, and tasks; Q-and-As, quizzes, and exercises build and test your knowledge; "Did You Know?" tips offer insider advice and shortcuts; and "Watch Out!" alerts help you avoid pitfalls. By the time you're finished, you'll be comfortable using Apache Spark to solve a wide spectrum of Big Data problems.
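For a quick taste of the transformations-and-actions model the lessons cover (an illustrative sketch, not an excerpt from the book; it assumes a local PySpark installation), a word count in PySpark looks roughly like this:

```python
# Minimal PySpark sketch: transformations (flatMap, map, reduceByKey) are lazy;
# the action (collect) triggers execution. Assumes pyspark is installed locally.
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")

lines = sc.parallelize(["spark is fast", "spark is scalable"])
counts = (lines.flatMap(lambda line: line.split())   # transformation
               .map(lambda word: (word, 1))          # transformation
               .reduceByKey(lambda a, b: a + b)      # transformation
               .cache())                             # keep the RDD in memory for reuse

print(counts.collect())                              # action: runs the job
sc.stop()
```

Transformations such as flatMap and reduceByKey only build a lazy lineage; nothing executes until an action like collect is called, which is the core idea behind RDD caching and persistence.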

Implementing or Migrating to an IBM Gen 5 b-type SAN

The IBM® b-type Gen 5 Fibre Channel directors and switches provide reliable, scalable, and secure high-performance foundations for high-density server virtualization, cloud architectures, and next generation flash and SSD storage. They are designed to meet the demands of highly virtualized private cloud storage and data center environments. This IBM Redbooks® publication helps administrators learn how to implement or migrate to an IBM Gen 5 b-type SAN. It provides an overview of the key hardware and software products and explains how to install, monitor, tune, and troubleshoot your storage area network (SAN). Read this publication to learn about fabric design, managing and monitoring your network, key tools such as IBM Network Advisor and Fabric Vision, and troubleshooting.

IBM Netcool Operations Insight: A Scenarios Guide

IBM® Netcool® Operations Insight empowers your IT operations to use real-time and historical analytics to identify, isolate, and resolve problems before they affect your business. Powered by IBM Tivoli® Netcool/OMNIbus and the transformative capabilities of cognitive analytics, Netcool Operations Insight consolidates millions of alerts from across local, cloud, and hybrid environments into a few actionable problems. This IBM Redbooks® publication gives a broad understanding of Netcool Operations Insight and describes several scenarios that show the capabilities of this solution in a real-life environment. Each scenario features a different capability of Netcool Operations Insight. The scenarios are documented by using step-by-step figures with explanations to make them easier to implement in your own environment. The scenarios in this book are broken into the following categories:
- Network Management-related scenarios
- Network Event and cognitive-related scenarios
- Network Event-related scenarios
The target audience of this book is network specialists, network administrators, and network operators.

IBM System Storage Solutions Handbook

The IBM® System Storage® Solutions Handbook helps you solve your current and future data storage business requirements. It helps you achieve enhanced storage efficiency by design to allow managed cost, capacity growth, greater mobility, and stronger control over storage performance and management. It describes the most current IBM storage products, including the IBM Spectrum™ family, IBM FlashSystem®, disk, and tape, as well as virtualized solutions such as IBM Storage Cloud. This IBM Redbooks® publication provides overviews and information about the most current IBM System Storage products. It shows how IBM delivers the right mix of products for nearly every aspect of business continuance and business efficiency. IBM storage products can help you store, safeguard, retrieve, and share your data. This book is intended as a reference for basic and comprehensive information about the IBM Storage products portfolio. It provides a starting point for establishing your own enterprise storage environment. This book describes the IBM Storage products as of March 2016.

Perspectives on Data Science for Software Engineering

Perspectives on Data Science for Software Engineering presents the best practices of seasoned data miners in software engineering. The idea for this book arose during the 2014 conference at Dagstuhl, an invitation-only gathering of leading computer scientists who meet to identify and discuss cutting-edge informatics topics. At that conference, the question of how to transfer the knowledge of seasoned software engineers and data scientists to newcomers in the field dominated many discussions. While there are many books covering data mining and software engineering basics, they present only the fundamentals and lack the perspective that comes from real-world experience. This book offers unique insights into the wisdom of the community’s leaders, gathered to share hard-won lessons from the trenches. Ideas are presented in digestible chapters designed to be applicable across many domains. Topics covered include data collection, data sharing, data mining, and how to utilize these techniques in successful software projects. Newcomers to software engineering data science will learn the tips and tricks of the trade, while more experienced data scientists will benefit from war stories that show what traps to avoid.
- Presents the wisdom of community experts, derived from a summit on software analytics
- Provides contributed chapters that share discrete ideas and techniques from the trenches
- Covers top areas of concern, including mining security and social data, data visualization, and cloud-based data
- Presented in clear chapters designed to be applicable across many domains

Cassandra: The Definitive Guide, 2nd Edition

Imagine what you could do if scalability wasn't a problem. With this hands-on guide, you’ll learn how the Cassandra database management system handles hundreds of terabytes of data while remaining highly available across multiple data centers. This expanded second edition—updated for Cassandra 3.0—provides the technical details and practical examples you need to put this database to work in a production environment. Authors Jeff Carpenter and Eben Hewitt demonstrate the advantages of Cassandra’s non-relational design, with special attention to data modeling. If you’re a developer, DBA, or application architect looking to solve a database scaling issue or future-proof your application, this guide helps you harness Cassandra’s speed and flexibility.
- Understand Cassandra’s distributed and decentralized structure
- Use the Cassandra Query Language (CQL) and cqlsh—the CQL shell
- Create a working data model and compare it with an equivalent relational model
- Develop sample applications using client drivers for languages including Java, Python, and Node.js
- Explore cluster topology and learn how nodes exchange data
- Maintain a high level of performance in your cluster
- Deploy Cassandra on site, in the Cloud, or with Docker
- Integrate Cassandra with Spark, Hadoop, Elasticsearch, Solr, and Lucene
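For a flavor of driving CQL from a client driver (an illustrative sketch, not taken from the book; the keyspace, table, and node address are placeholders, and it assumes the DataStax cassandra-driver package and a single local node):

```python
# Connect with the DataStax Python driver, create a keyspace and table,
# then insert and read a row. All names here are invented for illustration.
from uuid import uuid4
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.guests (
        guest_id uuid PRIMARY KEY,
        name text,
        city text
    )
""")

session.execute(
    "INSERT INTO demo.guests (guest_id, name, city) VALUES (%s, %s, %s)",
    (uuid4(), "Ada", "Paris"),
)
for row in session.execute("SELECT name, city FROM demo.guests"):
    print(row.name, row.city)

cluster.shutdown()
```

The same statements can be typed interactively in cqlsh; the driver simply sends CQL over the native protocol and returns rows as named tuples.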

Introducing Microsoft SQL Server 2016: Mission-Critical Applications, Deeper Insights, Hyperscale Cloud

With Microsoft SQL Server 2016, a variety of new features and enhancements to the data platform deliver breakthrough performance, advanced security, and richer, integrated reporting and analytics capabilities. In this ebook, we introduce new security features: Always Encrypted, Row-Level Security, and dynamic data masking; discuss enhancements that enable you to better manage performance and storage: TempDB configuration, Query Store, and Stretch Database; review several improvements to Reporting Services; and also describe AlwaysOn Availability Groups, tabular enhancements, and R integration.

IBM z13s Technical Guide

Digital business has been driving the transformation of underlying information technology (IT) infrastructure to be more efficient, secure, adaptive, and integrated. IT must be able to handle the explosive growth of mobile clients and employees. It also must be able to process enormous amounts of data to provide deep and real-time insights to help achieve the greatest business impact. This IBM® Redbooks® publication addresses the new IBM z Systems™ single frame, the IBM z13s server. IBM z Systems servers are the trusted enterprise platform for integrating data, transactions, and insight. A data-centric infrastructure must always be available with 99.999% or better availability, have flawless data integrity, and be secured from misuse. It needs to be an integrated infrastructure that can support new applications. It also needs to have integrated capabilities that can provide new mobile capabilities with real-time analytics delivered by a secure cloud infrastructure. IBM z13s servers are designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows z13s servers to deliver a record level of capacity over the prior single frame z Systems server. In its maximum configuration, the z13s server is powered by up to 20 client-characterizable microprocessors (cores) running at 4.3 GHz. This configuration can run more than 18,000 million instructions per second (MIPS) and supports up to 4 TB of client memory. The IBM z13s Model N20 is estimated to provide up to 100% more total system capacity than the IBM zEnterprise® BC12 Model H13. This book provides information about the IBM z13s server and its functions, features, and associated software support. Greater detail is offered in areas relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the IBM z Systems™ functions and plan for their usage. It is not intended as an introduction to mainframes. Readers are expected to be generally familiar with existing IBM z Systems technology and terminology.

IBM z13 Technical Guide

Digital business has been driving the transformation of underlying IT infrastructure to be more efficient, secure, adaptive, and integrated. Information Technology (IT) must be able to handle the explosive growth of mobile clients and employees. IT also must be able to use enormous amounts of data to provide deep and real-time insights to help achieve the greatest business impact. This IBM® Redbooks® publication addresses the IBM Mainframe, the IBM z13™. The IBM z13 is the trusted enterprise platform for integrating data, transactions, and insight. A data-centric infrastructure must always be available with 99.999% or better availability, have flawless data integrity, and be secured from misuse. It needs to be an integrated infrastructure that can support new applications. It also needs to have integrated capabilities that can provide new mobile capabilities with real-time analytics delivered by a secure cloud infrastructure. IBM z13 is designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows the z13 to deliver a record level of capacity over the prior IBM z Systems™. In its maximum configuration, z13 is powered by up to 141 client-characterizable microprocessors (cores) running at 5 GHz. This configuration can run more than 110,000 million instructions per second (MIPS) and supports up to 10 TB of client memory. The IBM z13 Model NE1 is estimated to provide up to 40% more total system capacity than the IBM zEnterprise® EC12 (zEC12) Model HA1. This book provides information about the IBM z13 and its functions, features, and associated software support. Greater detail is offered in areas relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the IBM z Systems functions and plan for their usage. It is not intended as an introduction to mainframes. Readers are expected to be generally familiar with existing IBM z Systems technology and terminology.

Relational Database Design and Implementation, 4th Edition

Relational Database Design and Implementation: Clearly Explained, Fourth Edition, provides the conceptual and practical information necessary to develop a database design and management scheme that ensures data accuracy and user satisfaction while optimizing performance. Database systems underlie the large majority of business information systems. Most of those in use today are based on the relational data model, a way of representing data and data relationships using only two-dimensional tables. This book covers relational database theory as well as providing a solid introduction to SQL, the international standard for the relational database data manipulation language. The book begins by reviewing basic concepts of databases and database design, then turns to creating, populating, and retrieving data using SQL. Topics such as the relational data model, normalization, data entities, and Codd's Rules (and why they are important) are covered clearly and concisely. In addition, the book looks at the impact of big data on relational databases and the option of using NoSQL databases for that purpose.
- Features updated and expanded coverage of SQL and new material on big data, cloud computing, and object-relational databases
- Presents design approaches that ensure data accuracy and consistency and help boost performance
- Includes three case studies, each illustrating a different database design challenge
- Reviews the basic concepts of databases and database design, then turns to creating, populating, and retrieving data using SQL
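To illustrate the create, populate, and retrieve cycle described above (a minimal sketch using Python's built-in sqlite3 module, not an example from the book; the table and column names are invented):

```python
# Create a table, insert rows, and query them with standard SQL,
# using an in-memory SQLite database from Python's standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        city        TEXT
    )
""")
cur.executemany(
    "INSERT INTO customers (name, city) VALUES (?, ?)",
    [("Ada Lovelace", "London"), ("Grace Hopper", "New York")],
)
conn.commit()

for row in cur.execute("SELECT name, city FROM customers ORDER BY name"):
    print(row)

conn.close()
```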

IT Modernization using Catalogic ECX Copy Data Management and IBM Spectrum Storage

Data is the currency of the new economy, and organizations are increasingly tasked with finding better ways to protect, recover, access, share, and use data. Traditional storage technologies are being stretched to the breaking point. This challenge is not because of storage hardware performance, but because management tools and techniques have not kept pace with new requirements. Primary data growth rates of 35% to 50% annually only amplify the problem. Organizations of all sizes find themselves needing to modernize their IT processes to enable critical new use cases such as storage self-service, Development and Operations (DevOps), and integration of data centers with the Cloud. They are equally challenged with improving management efficiencies for long established IT processes such as data protection, disaster recovery, reporting, and business analytics. Access to copies of data is the one common feature of all these use cases. However, the slow, manual processes common to IT organizations, including a heavy reliance on labor-intensive scripting and disparate tool sets, are no longer able to deliver the speed and agility required in today's fast-paced world. Copy Data Management (CDM) is an IT modernization technology that focuses on using existing data in a manner that is efficient, automated, scalable, and easy to use, delivering the data access that is urgently needed to meet the new use cases. Catalogic ECX, with IBM® storage, provides in-place copy data management that modernizes IT processes, enables key use cases, and does it all within existing infrastructure. This IBM Redbooks® publication shows how Catalogic Software and IBM have partnered together to create an integrated solution that addresses today's IT environment.

Oracle Database 12c Oracle RMAN Backup & Recovery

This authoritative Oracle Press resource on RMAN has been thoroughly revised to cover every new feature, offering the most up-to-date information. This fully updated volume lays out the easiest, fastest, and most effective methods of deploying RMAN in Oracle Database environments of any size. Keeping with previous editions, this book teaches computing professionals at all skill levels how to fully leverage every powerful RMAN tool and protect mission-critical data. Oracle Database 12c RMAN Backup and Recovery explains how to generate reliable archives and carry out successful system restores. You will learn to work from the command line or GUI, automate the database backup process, perform Oracle Flashback recoveries, and deploy third-party administration utilities. The book features full details on cloud computing, report generation, performance tuning, and security.
- Offers up-to-date coverage of Oracle Database 12c new features
- Examples and workshops throughout walk you through important RMAN operations

Big Data, Open Data and Data Development

The world has become digital and technological advances have multiplied circuits with access to data, their processing and their diffusion. New technologies have now reached a certain maturity. Data are available to everyone, anywhere on the planet. The number of Internet users in 2014 was 2.9 billion or 41% of the world population. The need for knowledge is becoming apparent in order to understand this multitude of data. We must educate, inform and train the masses. The development of related technologies, such as the advent of the Internet, social networks, "cloud-computing" (digital factories), has increased the available volumes of data. Currently, each individual creates, consumes, uses digital information: more than 3.4 million e-mails are sent worldwide every second, or 107,000 billion annually with 14,600 e-mails per year per person, but more than 70% are spam. Billions of pieces of content are shared on social networks such as Facebook, more than 2.46 million every minute. We spend more than 4.8 hours a day on the Internet using a computer, and 2.1 hours using a mobile. Data, this new ethereal manna from heaven, is produced in real time. It comes in a continuous stream from a multitude of sources which are generally heterogeneous. This accumulation of data of all types (audio, video, files, photos, etc.) generates new activities, the aim of which is to analyze this enormous mass of information. It is then necessary to adapt and try new approaches, new methods, new knowledge and new ways of working, resulting in new properties and new challenges since SEO logic must be created and implemented. At company level, this mass of data is difficult to manage. Its interpretation is primarily a challenge. This impacts those who are there to "manipulate" the mass and requires a specific infrastructure for creation, storage, processing, analysis and recovery. The biggest challenge lies in "the valuing of data" available in quantity, diversity and access speed.

IBM z13 and IBM z13s Technical Introduction

This IBM® Redbooks® publication introduces the latest IBM z Systems™ platforms, the IBM z13™ and IBM z13s. It includes information about the z Systems environment and how it can help integrate data, transactions, and insight for faster and more accurate business decisions. The z13 and z13s are state-of-the-art data and transaction systems that deliver advanced capabilities that are vital to modern IT infrastructures. These capabilities include:
- Accelerated data and transaction serving
- Integrated analytics
- Access to the API economy
- Agile development and operations
- Efficient, scalable, and secure cloud services
- End-to-end security for data and transactions
This book explains how these systems use both new innovations and traditional z Systems strengths to satisfy growing demand for cloud, analytics, and mobile applications. With one of these z Systems platforms as the base, applications can run in a trusted, reliable, and secure environment that both improves operations and lessens business risk.

Ceph Cookbook

Ceph Cookbook is a practical guide offering over 100 detailed recipes to help you effectively design, implement, and manage the Ceph software-defined storage system. Through step-by-step tutorials, readers will master critical tasks, from cluster setup to integration with cloud and virtualization platforms.

What this Book will help me do:
- Gain hands-on skills to set up, manage, and maintain a Ceph cluster effectively.
- Learn to integrate Ceph with popular cloud solutions like OpenStack for optimal performance.
- Understand techniques for advanced troubleshooting, monitoring, and optimization of storage systems.
- Develop proficiency in creating scalable storage solutions for enterprise environments.
- Master best practices in utilizing Ceph's various storage paradigms and technologies.

Author(s): Karan Singh is a seasoned technology professional with extensive experience in storage systems and cloud design. With years of experience working with Ceph and as an active participant in the open-source community, Karan brings practical insights and in-depth technical knowledge to his writing. His clear and approachable style helps demystify complex concepts for readers.

Who is it for? This book is ideal for storage engineers, cloud administrators, and technical architects seeking to understand and deploy software-defined storage solutions. Whether you have foundational knowledge of Linux and storage technologies or are new to Ceph, this book will guide you. Professionals aiming to enhance their cloud infrastructure will find actionable steps and strategies here.

Real-Time Big Data Analytics

This book delves into the techniques and tools essential for designing, processing, and analyzing complex datasets in real time using advanced frameworks like Apache Spark, Storm, and Amazon Kinesis. By engaging with this thorough guide, you'll build proficiency in creating robust, efficient, and scalable real-time data processing architectures tailored to real-world scenarios.

What this Book will help me do:
- Learn the fundamentals of real-time data processing and how it differs from batch processing.
- Gain hands-on experience with Apache Storm for creating robust data-driven solutions.
- Develop real-world applications using Amazon Kinesis for cloud-based analytics.
- Perform complex data queries and transformations with Spark SQL and understand Spark RDDs.
- Master the Lambda Architecture to combine batch and real-time analytics effectively.

Author(s): Shilpi Saxena is a renowned expert in big data technologies, with extensive experience in real-time data analytics. With a career spanning years in the industry, Shilpi has provided innovative solutions for big data challenges in top-tier organizations. Her teaching approach emphasizes practical applicability, making her writings accessible and impactful for developers and architects alike.

Who is it for? This book is for software professionals such as Big Data architects, developers, or programmers looking to enhance their skills in real-time big data analytics. If you are familiar with basic programming principles and seek to build solutions for processing large data streams in real time, this book caters to your needs. It is also suitable for those seeking to familiarize themselves with state-of-the-art tools like Spark SQL, Apache Storm, and Amazon Kinesis. Whether you're extending current expertise or transitioning into this field, this resource helps you achieve your objectives.
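For a sense of the Spark SQL side of this material (an illustrative sketch, not from the book; it assumes a local pyspark installation and uses made-up event data):

```python
# Register a small DataFrame as a temporary view and query it with SQL.
# Column names and values are invented for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-sketch").getOrCreate()

events = spark.createDataFrame(
    [("click", 3), ("view", 10), ("click", 7)],
    ["event_type", "amount"],
)
events.createOrReplaceTempView("events")

totals = spark.sql(
    "SELECT event_type, SUM(amount) AS total FROM events GROUP BY event_type"
)
totals.show()   # aggregated result, computed by Spark's distributed engine
spark.stop()
```

The same DataFrame could be fed from a streaming source in a real pipeline; here a static list keeps the sketch self-contained.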

IBM Spectrum Accelerate Deployment, Usage, and Maintenance

This edition applies to IBM® Spectrum Accelerate V11.5.1 and V11.5.3. IBM Spectrum™ Accelerate, a member of IBM Spectrum Storage™, is an agile, software-defined storage solution for enterprise and cloud that builds on the customer-proven and mature IBM XIV® storage software. The key characteristic of Spectrum Accelerate is that it can be easily deployed and run on purpose-built or existing hardware that is chosen by the customer. IBM Spectrum Accelerate™ enables rapid deployment of high-performance and scalable block data storage infrastructure over commodity hardware on-premises or off-premises. This IBM Redbooks® publication provides a broad understanding of IBM Spectrum Accelerate. The book introduces Spectrum Accelerate and describes planning and preparation that are essential for a successful deployment of the solution. The deployment is described through a step-by-step approach, by using a graphical user interface (GUI) based method or a simple command-line interface (CLI) based procedure. Chapters in this book describe the logical configuration of the system, host support and business continuity functions, and migration. Although it makes many references to the XIV storage software, the book also emphasizes where IBM Spectrum Accelerate differs from XIV. Finally, a substantial portion of the book is dedicated to maintenance and troubleshooting to provide detailed guidance for the customer support personnel.

VersaStack Solution by Cisco and IBM with IBM DB2, IBM Spectrum Control, and IBM Spectrum Protect

Dynamic organizations want to accelerate growth while reducing costs. To do so, they must speed the deployment of business applications and adapt quickly to any changes in priorities. Organizations require an IT infrastructure to be easy, efficient, and versatile. The VersaStack solution by Cisco and IBM® can help you accelerate the deployment of your datacenters. It reduces costs by more efficiently managing information and resources while maintaining your ability to adapt to business change. The VersaStack solution combines the innovation of Cisco Unified Computing System (Cisco UCS) Integrated Infrastructure with the efficiency of the IBM Storwize® storage system. The Cisco UCS Integrated Infrastructure includes the Cisco UCS, Cisco Nexus and Cisco MDS switches, and Cisco UCS Director. The IBM Storwize V7000 storage system enhances virtual environments with its Data Virtualization, IBM Real-time Compression™, and IBM Easy Tier® features. These features deliver extraordinary levels of performance and efficiency. The VersaStack solution is Cisco Application Centric Infrastructure (ACI) ready. Your IT team can build, deploy, secure, and maintain applications through a more agile framework. Cisco Intercloud Fabric capabilities help enable the creation of open and highly secure solutions for the hybrid cloud. These solutions accelerate your IT transformation while delivering dramatic improvements in operational efficiency and simplicity. Cisco and IBM are global leaders in the IT industry. The VersaStack solution gives you the opportunity to take advantage of integrated infrastructure solutions that are targeted at enterprise applications, analytics, and cloud solutions. The VersaStack solution is backed by Cisco Validated Designs (CVDs) to provide faster delivery of applications, greater IT efficiency, and less risk. This IBM Redbooks® publication is aimed at experienced storage administrators that are tasked with deploying a VersaStack solution with IBM DB2® High Availability (DB2 HA), IBM Spectrum™ Protect, and IBM Spectrum Control™.

MongoDB Cookbook - Second Edition

Designed to help developers and administrators harness the full potential of MongoDB, this book provides clear instruction and practical guidance no matter your level. By exploring both fundamental aspects like installation and configuration, and advanced topics like using cloud services, this book serves as a comprehensive reference for anyone navigating the modern NoSQL database capabilities of MongoDB.

What this Book will help me do:
- Understand how to install and configure MongoDB for different environments, enabling efficient setup and operation.
- Master database administration skills, including monitoring and backup strategies, which are essential for stability and performance.
- Develop applications with MongoDB using Java and Python, allowing integration into modern tech stacks.
- Leverage advanced querying and indexing techniques, improving data retrieval and operational efficiency.
- Integrate MongoDB with cloud platforms and tools like Hadoop, enhancing scalability and expanded use cases.

Author(s): Dasadia and Nayak are seasoned database professionals with extensive experience in MongoDB and NoSQL database systems. Their practical approach to technical writing focuses on real-world applications and providing solutions to complex challenges. With backgrounds in software development and data management, they ensure that readers have a hands-on learning experience. Their passion for spreading knowledge makes this book both instructional and engaging.

Who is it for? This book is ideal for database administrators and software developers interested in adopting or expanding their knowledge of MongoDB. If you're a complete novice or someone with experience who seeks hands-on solutions and examples, this book offers value. It's particularly suited for professionals working with Java or Python, as examples focus on these programming languages. Whether you're enhancing your skills for personal projects or looking to implement MongoDB at work, this resource equips you with the know-how.
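As a small taste of the Python-driver work this kind of cookbook covers (an illustrative sketch, not a recipe from the book; it assumes the pymongo package and a MongoDB server on localhost, with made-up collection and field names):

```python
# Insert documents, add an index, and run a filtered, projected query with PyMongo.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
books = client["catalog"]["books"]

books.insert_many([
    {"title": "MongoDB Cookbook", "topic": "databases"},
    {"title": "Elasticsearch in Action", "topic": "search"},
])
books.create_index([("topic", ASCENDING)])      # speeds up topic lookups

for doc in books.find({"topic": "databases"}, {"_id": 0, "title": 1}):
    print(doc["title"])

client.close()
```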

Client-Side Data Storage

One of the most useful features of today’s modern browsers is the ability to store data right on the user’s computer or mobile device. Even as more people move toward the cloud, client-side storage can still save web developers a lot of time and money, if you do it right. This hands-on guide demonstrates several storage APIs in action. You’ll learn how and when to use them, their plusses and minuses, and steps for implementing one or more of them in your application.

IBM Wave for z/VM Installation, Implementation, and Exploitation

IBM® Wave for z/VM® (IBM Wave) is a virtualization management solution for IBM z/VM and Linux on z Systems™. This virtualization management software provides a simplified and cost-effective way for companies to harness the consolidation capabilities of the IBM z™ Systems platform and its ability to host the workloads of tens of thousands of commodity servers. IBM Wave is a complete management solution for z Systems based virtual server farms. This IBM Redbooks® publication helps you understand IBM Wave by describing its architecture and how it fits into the cloud. It also provides a planning and design guide based on common scenarios, and covers installation and configuration tasks as well as how to manage and operate the environment. The intended audience for this publication is IT Architects who are responsible for planning their IBM Wave environments and IT Specialists who are responsible for implementing them.

Getting Started with KVM for IBM z Systems

This IBM® Redbooks® publication gives a broad explanation of the kernel-based virtual machine (KVM) for IBM z™ Systems and how it uses the architecture of IBM z Systems™. It focuses on the planning and design of the environment and provides installation and configuration definitions that are necessary to build and manage KVM for IBM z Systems. It also helps you plan, install, and configure IBM Cloud Manager with OpenStack for use with KVM for IBM z Systems in a cloud environment. This book is useful to IT architects and system administrators who plan for and install KVM for IBM z Systems. The reader is expected to have a good understanding of IBM z Systems hardware, KVM, Linux on z Systems, and cloud concepts.

Beginning SAP Fiori

Take a deep dive into SAP Fiori and discover Fiori architecture, Fiori landscape installation, Fiori standard applications, Fiori Launchpad configuration, tools for developing Fiori applications, and extending standard Fiori applications. You will learn:
- Fiori architecture and its applications
- Setting up a Fiori landscape and Fiori Launchpad
- Configuring, customizing and enhancing standard Fiori applications
- Developing Fiori native applications for mobile
- Internet of Things-based custom Fiori applications with the HANA cloud platform
Bince Mathew, a SAP mobility expert working for an MNC in Germany, shows you how SAP Fiori, based on HTML5 technology, addresses the most widely and frequently used SAP transactions like purchase order approvals, sales order creation, information lookup, and self-service tasks. This set of HTML5 apps provides a very simple and accessible experience across desktops, tablets, and smartphones.

Elasticsearch in Action

Elasticsearch in Action teaches you how to build scalable search applications using Elasticsearch. You'll ramp up fast, with an informative overview and an engaging introductory example. Within the first few chapters, you'll pick up the core concepts you need to implement basic searches and efficient indexing. With the fundamentals well in hand, you'll go on to gain an organized view of how to optimize your design. Perfect for developers and administrators building and managing search-oriented applications.

About the Technology: Modern search seems like magic: you type a few words and the search engine appears to know what you want. With the Elasticsearch real-time search and analytics engine, you can give your users this magical experience without having to do complex low-level programming or understand advanced data science algorithms. You just install it, tweak it, and get on with your work.

About the Book: Elasticsearch in Action teaches you how to write applications that deliver professional quality search. As you read, you'll learn to add basic search features to any application, enhance search results with predictive analysis and relevancy ranking, and use saved data from prior searches to give users a custom experience. This practical book focuses on Elasticsearch's REST API via HTTP. Code snippets are written mostly in bash using cURL, so they're easily translatable to other languages.

What's Inside:
- What is a great search application?
- Building scalable search solutions
- Using Elasticsearch with any language
- Configuration and tuning

About the Reader: This book is for developers and administrators building and managing search-oriented applications.

About the Authors: Radu Gheorghe is a search consultant and software engineer. Matthew Lee Hinman develops highly available, cloud-based systems. Roy Russo is a specialist in predictive analytics.

Quotes:
"To understand how a modern search infrastructure works is a daunting task. Radu, Matt, and Roy make it an engaging, hands-on experience." - Sen Xu, Twitter Inc.
"An indispensable guide to the challenges of search of semi-structured data." - Artur Nowak, Evidence Prime
"The best resource for a complex topic. Highly recommended." - Daniel Beck, juris GmbH
"Took me from confused to confident in a week." - Alan McCann, Givsum.com
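Since the book works against Elasticsearch's REST API with cURL, here is the same index-then-search idea expressed with Python's requests library (an illustrative sketch, not from the book; the index and field names are invented, and it assumes a recent Elasticsearch listening on localhost:9200):

```python
# Index a document over HTTP, then run a full-text match query against it.
import requests

base = "http://localhost:9200"

# Store (index) a document; refresh=true makes it immediately searchable.
requests.put(
    f"{base}/articles/_doc/1",
    params={"refresh": "true"},
    json={"title": "Scaling search", "tags": ["elasticsearch", "search"]},
)

# Full-text search on the title field.
resp = requests.post(
    f"{base}/articles/_search",
    json={"query": {"match": {"title": "search"}}},
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```

Because everything is plain JSON over HTTP, the equivalent cURL commands used in the book translate almost line for line.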

Web Development with MongoDB and NodeJS - Second Edition

Discover how to build a full-featured, interactive web application from scratch using Node.js and MongoDB in this comprehensive guide. You will learn to set up your development environment, create a web server with Express.js, and integrate MongoDB for data persistence. By the end of this book, you will have the knowledge and skills to develop and deploy robust web applications ready for the cloud.

What this Book will help me do:
- Set up a Node.js development environment and connect it to MongoDB.
- Develop a web server using Express.js and write integrated APIs.
- Implement dynamic HTML pages leveraging the Handlebars template engine.
- Build efficient and scalable data-driven features using Mongoose ODM.
- Deploy web applications seamlessly to cloud platforms like Heroku, AWS, or Azure.

Author(s): This book was co-authored by Satheesh, Joseph D'mello, and Jason Krol, who bring years of experience in software development and expertise in modern web technologies. With a focus on practical application and best practices, the authors aim to empower readers to succeed in real-world development projects using the innovative Node.js and MongoDB stack.

Who is it for? This book is tailored for developers who have a basic understanding of JavaScript and HTML and wish to advance their web development skills. If you are motivated to learn how to leverage Node.js and MongoDB for full-stack development or are curious about building and deploying complete web applications, this book is ideal for you. It addresses learners from early career to experienced developers looking to strengthen their skills in modern development stacks.