talk-data.com

Topic: data · 5765 tagged

[Activity trend chart: 3 peak/qtr, 2020-Q1 to 2026-Q1]

Activities

5765 activities · Newest first

Moving Hadoop to the Cloud

Until recently, Hadoop deployments existed on hardware owned and run by organizations. Now, of course, you can acquire the computing resources and network connectivity to run Hadoop clusters in the cloud. But there's a lot more to deploying Hadoop to the public cloud than simply renting machines. This hands-on guide shows developers and systems administrators familiar with Hadoop how to install, use, and manage cloud-born clusters efficiently. You'll learn how to architect clusters that work with cloud-provider features—not just to avoid pitfalls, but also to take full advantage of these services. You'll also compare the Amazon, Google, and Microsoft clouds, and learn how to set up clusters in each of them.

- Learn how Hadoop clusters run in the cloud, the problems they can help you solve, and their potential drawbacks
- Examine the common concepts of cloud providers, including compute capabilities, networking and security, and storage
- Build a functional Hadoop cluster on cloud infrastructure, and learn what the major providers require
- Explore use cases for high availability, relational data with Hive, and complex analytics with Spark
- Get patterns and practices for running cloud clusters, from designing for price and security to dealing with maintenance
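
As a taste of what renting cloud machines looks like in practice (a minimal sketch, not code from the book), here is one way to launch a small Hadoop cluster on Amazon EMR with Python's boto3; the region, instance types, and role names are illustrative placeholders.

```python
# Launch a small, illustrative Hadoop cluster on Amazon EMR via boto3.
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # hypothetical region

response = emr.run_job_flow(
    Name="hadoop-in-the-cloud",          # hypothetical cluster name
    ReleaseLabel="emr-5.8.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}, {"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,  # keep cluster up after setup
    },
    JobFlowRole="EMR_EC2_DefaultRole",   # default EMR instance profile
    ServiceRole="EMR_DefaultRole",
)
print("cluster id:", response["JobFlowId"])
```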

Principles of Data Wrangling

A key task that any aspiring data-driven organization needs to learn is data wrangling, the process of converting raw data into something truly useful. This practical guide provides business analysts with an overview of various data wrangling techniques and tools, and puts the practice of data wrangling into context by asking, "What are you trying to do and why?" Wrangling data consumes roughly 50-80% of an analyst's time before any kind of analysis is possible. Written by key executives at Trifacta, this book walks you through the wrangling process by exploring several factors—time, granularity, scope, and structure—that you need to consider as you begin to work with data. You'll gain a shared language and a comprehensive understanding of data wrangling, with an emphasis on recent agile analytic processes used by many of today's data-driven organizations.

- Appreciate the importance—and the satisfaction—of wrangling data the right way
- Understand what kind of data is available
- Choose which data to use and at what level of detail
- Meaningfully combine multiple sources of data
- Decide how to distill the results to a size and shape that can drive downstream analysis

Dynamic Documents with R and knitr, 2nd Edition

Suitable for both beginners and advanced users, this popular book makes writing statistical reports easier by integrating computing directly with reporting. Reports range from homework, projects, exams, books, blogs, and web pages to virtually any documents related to statistical graphics, computing, and data analysis. This edition includes a new chapter on R Markdown v2, changes that reflect improvements in the knitr package, and several new sections. Demos and other information about the package are available on the author’s website.

Learning SAP Analytics Cloud

Discover the power of SAP Analytics Cloud in solving business intelligence challenges through concise and clear instruction. This book is the essential guide for beginners, providing a comprehensive understanding of the platform's features and capabilities. By the end, you'll master creating reports, models, and dashboards, making data-driven decisions with confidence.

What this Book will help me do
- Learn how to navigate and utilize the SAP Analytics Cloud interface effectively.
- Create data models using various sources like Excel or text files for comprehensive insights.
- Design and compile visually engaging stories, reports, and dashboards effortlessly.
- Master collaborative and presentation tools inside SAP Digital Boardroom.
- Understand how to plan, predict, and analyze seamlessly within a single platform.

Author(s)
Ahmed is an experienced SAP consultant and analytics professional who brings years of practical experience in BI tools and enterprise analytics. As an expert in SAP Analytics Cloud, Ahmed has guided numerous teams in deploying effective analytics solutions, and this book aims to demystify complex tools for learners.

Who is it for?
This book is ideal for IT professionals, business analysts, and newcomers eager to understand SAP Analytics Cloud. Beginner-level BI developers and managers seeking guided steps for mastering this platform will find it invaluable. If you aim to enhance your career in cloud-based analytics, this book is tailored for you.

Building on Multi-Model Databases

In many organizations today, businesspeople are busy requesting unified views of data stored across multiple sources. But integrating multiple data types from multiple data stores is a complex, error-prone, and time-consuming process of cobbling everything together manually. This concise book examines how multi-model databases can help you integrate data storage and access across your organization in a seamless and elegant way. Authors Pete Aven and Diane Burley from MarkLogic explain how this latest evolution in data management naturally accepts heterogeneous data, enabling you to eventually phase out technical data silos. Through several case studies, you'll discover how organizations use multi-model databases to reduce complexity, save money, take advantage of opportunities, lessen risk, and shorten time to value.

- Get unified views across disparate data models and formats within a single database
- Learn how multi-model databases leverage the inherent structure of the data being stored
- Load and use unstructured and semi-structured data (such as documents and text) as is
- Provide agility in data access and delivery through APIs, interfaces, and indexes
- Learn how to scale a multi-model database, and provide ACID capabilities and security
- Examine how a multi-model database would fit into your existing architecture

Analytics: The Agile Way

For years, organizations have struggled to make sense of their data. IT projects designed to provide employees with dashboards, KPIs, and business-intelligence tools often take a year or more to reach the finish line...if they get there at all. This has always been a problem; today, though, it's downright unacceptable. The world changes faster than ever, and speed has never been more important. By adhering to antiquated methods, firms fail to spot nascent trends until it's too late to act on them.

But what if the process of turning raw data into meaningful insights didn't have to be so painful, time-consuming, and frustrating? What if there were a better way to do analytics? Fortunately, you're in luck...

Analytics: The Agile Way is the eighth book from award-winning author and Arizona State University professor Phil Simon. It demonstrates how progressive organizations such as Google and Nextdoor approach analytics in a fundamentally different way: they apply the same Agile techniques that software developers have employed for years. They have abandoned large batches in favor of smaller ones...and their results will astonish you. Through a series of case studies and examples, Analytics: The Agile Way demonstrates the benefits of this new analytics mind-set: superior access to information, quicker insights, and the ability to spot trends far ahead of your competitors.

Streaming Data

Streaming Data introduces the concepts and requirements of streaming and real-time data systems. The book is an idea-rich tutorial that teaches you to think about how to efficiently interact with fast-flowing data.

About the Technology
As humans, we're constantly filtering and deciphering the information streaming toward us. In the same way, streaming data applications can accomplish amazing tasks like reading live location data to recommend nearby services, tracking faults with machinery in real time, and sending digital receipts before your customers leave the shop. Recent advances in streaming data technology and techniques make it possible for any developer to build these applications if they have the right mindset. This book will let you join them.

About the Book
Streaming Data is an idea-rich tutorial that teaches you to think about efficiently interacting with fast-flowing data. Through relevant examples and illustrated use cases, you'll explore designs for applications that read, analyze, share, and store streaming data. Along the way, you'll discover the roles of key technologies like Spark, Storm, Kafka, Flink, RabbitMQ, and more. This book offers the perfect balance between big-picture thinking and implementation details.

What's Inside
- The right way to collect real-time data
- Architecting a streaming pipeline
- Analyzing the data
- Which technologies to use and when

About the Reader
Written for developers familiar with relational database concepts. No experience with streaming or real-time applications required.

About the Author
Andrew Psaltis is a software engineer focused on massively scalable real-time analytics.

Quotes
"The definitive book if you want to master the architecture of an enterprise-grade streaming application." - Sergio Fernandez Gonzalez, Accenture
"A thorough explanation and examination of the different systems, strategies, and tools for streaming data implementations." - Kosmas Chatzimichalis, Mach 7x
"A well-structured way to learn about streaming data and how to put it into practice in modern real-time systems." - Giuliano Araujo Bertoti, FATEC
"This book is all you need to understand what streaming is all about!" - Carlos Curotto, Globant
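
Since Kafka is one of the technologies the book surveys, here is a minimal consumer sketch using the kafka-python package; it is illustrative only (not from the book), and the topic name and broker address are made-up placeholders.

```python
# Minimal streaming consumer sketch with kafka-python (assumed installed).
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "sensor-readings",                   # hypothetical topic name
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_deserializer=lambda raw: raw.decode("utf-8"),
)

# Iterating blocks and yields records as they arrive in the stream.
for message in consumer:
    print(message.offset, message.value)
```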

Building Custom Tasks for SQL Server Integration Services

Learn to build custom SSIS tasks using Visual Studio Community Edition and Visual Basic. Bring all the power of Microsoft .NET to bear on your data integration and ETL processes, at no added cost over what you've already spent on licensing SQL Server. If you already have a license for SQL Server, then you do not need to spend more money to extend SSIS with custom tasks and components.

Why are custom components necessary? Because even though the SSIS catalog of built-in tasks and components is a marvel of engineering, there remain gaps in the functionality it provides. These gaps are especially relevant to enterprises practicing Data Integration Lifecycle Management (DILM) and/or DevOps. One of the gaps is a limitation of the SSIS Execute Package task: developers using the stock version of that task are unable to select SSIS packages from other projects. Yet it's useful to be able to select and execute tasks across projects, and the example used throughout this book will help you create an Execute Catalog Package task that does allow you to execute a task from another project. Building on the example's pattern, you can create any task you like, custom-tailored to your specific data integration and ETL needs.

What You Will Learn
- Configure and execute Visual Studio in the way that best supports SSIS task development
- Create a class library as the basis for an SSIS task, and reference the needed SSIS assemblies
- Properly sign assemblies that you create in order to invoke them from your task
- Implement source code control via Visual Studio Team Services, or your own favorite tool set
- Code not only your tasks themselves, but also the associated task editors
- Troubleshoot and then execute your custom tasks as part of your own project

Who This Book Is For
Database administrators and developers who are involved in ETL projects built around SQL Server Integration Services (SSIS). Readers should have a background in programming along with a desire to optimize their ETL efforts by creating custom-tailored tasks for execution from SSIS packages.

JSON at Work

JSON is becoming the backbone for meaningful data interchange over the internet. This format is now supported by an entire ecosystem of standards, tools, and technologies for building truly elegant, useful, and efficient applications. With this hands-on guide, author and architect Tom Marrs shows you how to build enterprise-class applications and services by leveraging JSON tooling and message/document design. JSON at Work provides application architects and developers with guidelines, best practices, and use cases, along with lots of real-world examples and code samples. You'll start with a comprehensive JSON overview, explore the JSON ecosystem, and then dive into JSON's use in the enterprise.

- Get acquainted with JSON basics and learn how to model JSON data
- Learn how to use JSON with Node.js, Ruby on Rails, and Java
- Structure JSON documents with JSON Schema to design and test APIs
- Search the contents of JSON documents with JSON Search tools
- Convert JSON documents to other data formats with JSON Transform tools
- Compare JSON-based hypermedia formats, including HAL and jsonapi
- Leverage MongoDB to store and access JSON documents
- Use Apache Kafka to exchange JSON-based messages between services
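
To give a concrete flavor of the JSON Schema workflow the book covers, here is a minimal validation sketch using Python's third-party jsonschema package; the schema and document are made-up examples, not material from the book.

```python
# Validate a JSON document against a JSON Schema (pip install jsonschema).
import json
from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "properties": {
        "orderId": {"type": "integer"},
        "email": {"type": "string"},
    },
    "required": ["orderId"],
}

document = json.loads('{"orderId": 42, "email": "buyer@example.com"}')

try:
    validate(instance=document, schema=schema)  # raises on a mismatch
    print("document conforms to the schema")
except ValidationError as err:
    print("invalid document:", err.message)
```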

Frank Kane's Taming Big Data with Apache Spark and Python

This book introduces you to the world of Big Data processing using Apache Spark and Python. You will learn to set up and run Spark on different systems, process massive datasets, and create solutions to real-world Big Data challenges with over 15 hands-on examples included.

What this Book will help me do
- Understand the basics of Apache Spark and its ecosystem.
- Learn how to process large datasets with Spark RDDs using Python.
- Implement machine learning models with Spark's MLlib library.
- Master real-time data processing with Spark Streaming modules.
- Deploy and run Spark jobs on cloud clusters using AWS EMR.

Author(s)
Frank Kane spent 9 years working at Amazon and IMDb, handling and solving real-world machine learning and Big Data problems. Today, as an instructional designer and educator, he brings his wealth of experience to learners around the globe by creating accessible, practical learning resources. His teaching is clear, engaging, and designed to prepare students for real-world applications.

Who is it for?
This book is ideal for data scientists or data analysts seeking to delve into Big Data processing with Apache Spark. Readers who have foundational knowledge of Python, as well as some understanding of data processing principles, will find this book useful to sharpen their skills further. It is designed for those eager to learn the practical applications of Big Data tools in today's industry environments. By the end of this book, you should feel confident tackling Big Data challenges using Spark and Python.
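
In the spirit of the book's RDD examples (though not taken from it), here is a minimal PySpark word-count sketch; the input file name is a placeholder and a local Spark installation is assumed.

```python
# Classic RDD word count on a local Spark cluster.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("WordCount")
sc = SparkContext(conf=conf)

lines = sc.textFile("book.txt")  # hypothetical input file
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word.lower(), 1))
               .reduceByKey(lambda a, b: a + b))

# Print the ten most frequent words.
for word, count in counts.takeOrdered(10, key=lambda wc: -wc[1]):
    print(word, count)

sc.stop()
```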

Learning Elasticsearch

This comprehensive guide to Elasticsearch will teach you how to build robust and scalable search and analytics applications using Elasticsearch 5.x. You will learn the fundamentals of Elasticsearch, including its APIs and tools, and how to apply them to real-world problems. By the end of the book, you will have a solid grasp of Elasticsearch and be ready to implement your own solutions.

What this Book will help me do
- Master the setup and configuration of Elasticsearch and Kibana.
- Learn to efficiently query and analyze both structured and unstructured data.
- Understand how to use Elasticsearch aggregations to perform advanced analytics.
- Gain knowledge of advanced search features including geospatial queries and autocomplete.
- Explore the Elastic Stack and learn deployment best practices and cloud hosting options.

Author(s)
Andhavarapu is an expert in database technology and distributed systems, with years of experience in Elasticsearch. Their passion for search technologies is reflected in their clear and practical teaching style. They've written this guide to help developers of all levels get up to speed with Elasticsearch quickly and comprehensively.

Who is it for?
This book is perfect for software developers looking to implement effective search and analytics solutions. It's ideal for those who are new to Elasticsearch as well as for professionals familiar with other search tools like Lucene or Solr. The book assumes basic programming knowledge but no prior experience with Elasticsearch.
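
For a quick taste of the kind of query the book teaches, here is a minimal sketch using a recent version of the official Python client (the book itself targets Elasticsearch 5.x, so its request syntax differs); the index name, document, and node address are made-up placeholders.

```python
# Index one document, then run a full-text match query against it.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical local node

es.index(index="books", id=1, document={"title": "Learning Elasticsearch"})
es.indices.refresh(index="books")            # make the document searchable

result = es.search(index="books", query={"match": {"title": "elasticsearch"}})
for hit in result["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```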

Learning pandas - Second Edition

Take your Python skills to the next level with 'Learning pandas,' your go-to guide for mastering data manipulation and analysis. This book walks you through the powerful tools offered by the pandas library, helping you unlock key insights from data efficiently. Whether you're handling time-series data or visualizing patterns, you'll gain the proficiency needed to make sense of complex datasets.

What this Book will help me do
- Understand and effectively use pandas Series and DataFrame objects for data representation and manipulation.
- Master indexing, slicing, and combining data to perform detailed exploration and analysis.
- Learn to access and work with external data sources, including APIs, databases, and files, using pandas.
- Develop the skills to handle and analyze time-series data, managing its unique challenges.
- Create informative and professional data visualizations directly using pandas capabilities.

Author(s)
Michael Heydt is a respected author and educator in the field of Python and data analysis. With years of experience utilizing pandas in practical and professional environments, Michael offers a unique perspective that combines deep technical insight with approachable examples. His teaching philosophy emphasizes clarity, applicability, and engaging instruction, ensuring learners easily acquire valuable skills.

Who is it for?
This book is ideal for Python programmers looking to enhance their data analysis capabilities, as well as data analysts and scientists wanting to leverage pandas to improve their workflows. Readers are recommended to have some familiarity with Python, though prior experience with pandas is not required. If you have a keen interest in data exploration and quantitative techniques, this book is for you.
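
As a small illustration of the Series/DataFrame and time-series operations the book covers (using made-up sales data, not an example from the book):

```python
# Label-based slicing, grouping, and time-series resampling with pandas.
import pandas as pd

df = pd.DataFrame(
    {"store": ["A", "A", "B", "B"], "sales": [10, 12, 7, 9]},
    index=pd.date_range("2017-01-01", periods=4, freq="D"),
)

print(df.loc["2017-01-02":"2017-01-03"])    # slice rows by date labels
print(df.groupby("store")["sales"].sum())   # split-apply-combine by store
print(df["sales"].resample("2D").mean())    # average sales per 2-day window
```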

Practical Predictive Analytics

Dive into the world of predictive analytics with 'Practical Predictive Analytics.' This comprehensive guide walks you through analyzing current and historical data to predict future outcomes. Using tools like R and Spark, you will master practical skills, solve real-world challenges, and apply predictive analytics across domains like marketing, healthcare, and retail.

What this Book will help me do
- Learn the six steps for successfully implementing predictive analytics projects.
- Acquire practical skills in data cleaning, input, and model deployment using tools like R and Spark.
- Understand core predictive analytics algorithms and their applications in various industries.
- Apply data analytics techniques to solve problems in fields such as healthcare and marketing.
- Master methods for handling big data analytics using Databricks and Spark for effective prediction.

Author(s)
Winters is an experienced data scientist and technical educator. With an extensive background in predictive analytics, Winters specializes in applying statistical methods and techniques to real-world consulting scenarios, and brings a practical and accessible approach to this text, ensuring that learners can follow along and apply their newfound expertise effectively.

Who is it for?
This book is ideal for statisticians and analysts with some programming background in languages like R, who want to master predictive analytics skills. It caters to intermediate learners who aim to enhance their ability to solve complex analytical problems. Whether you're looking to advance your career or improve your proficiency in data science, this book will serve as a valuable resource for learning and growth.

QlikView for Developers

"QlikView for Developers" is a comprehensive guide to mastering QlikView, a powerful business intelligence tool. This book takes you on a journey from understanding the basics to building scalable and maintainable QlikView applications. Designed to provide practical methods, real-world scenarios, and valuable tips, it is ideal for anyone wanting to learn and effectively use QlikView for BI solutions. What this Book will help me do Understand the key features and architecture of QlikView and what has changed in QlikView 12. Learn to transform, model, and organize data in QlikView to effectively support business processes. Master best practices for creating interactive dashboards using charts, tables, and visualization objects. Discover techniques to optimize data architecture for scalable deployments and ensure data consistency. Implement advanced scripting and calculation methods, such as Set Analysis, to handle complex analytical requirements. Author(s) Miguel Angel Garcia and Barry Harmsen bring years of professional expertise in business intelligence and QlikView application development. Both authors have contributed significantly to the BI community and have extensive experience teaching and consulting on QlikView solutions. Their goal with this book is to provide a resource that is both informative and practical for QlikView developers. Who is it for? This book is intended for developers and analysts looking to harness the capabilities of QlikView for business intelligence purposes. It is suitable for beginners with minimal experience in QlikView, as well as for experienced practitioners wanting to deepen their knowledge and skills. The book provides a balanced approach that caters to various skill levels, ensuring accessible and actionable content for all readers.

SQL Server 2017 Integration Services Cookbook

SQL Server 2017 Integration Services Cookbook is your key to mastering effective data integration and transformation solutions using SSIS 2017. Through clear, concise recipes, this book teaches the advanced ETL techniques necessary for creating efficient data workflows, leveraging both traditional and modern data platforms.

What this Book will help me do
- Master the integration of diverse data sources into comprehensive data models.
- Develop optimized ETL workflows that improve operational efficiency.
- Leverage the new features introduced in SQL Server 2017 for enhanced data processing.
- Implement scalable data warehouse solutions suitable for modern analytics workloads.
- Customize and extend integration services to handle specific data transformation needs.

Author(s)
The authors are seasoned professionals in data integration and ETL technologies. They bring years of real-world experience using SQL Server Integration Services in various enterprise scenarios. Their combined expertise ensures practical insights and guidance, making complex concepts accessible to learners and practitioners alike.

Who is it for?
This book is ideal for data engineers and ETL developers who already understand the basics of SQL Server and want to master advanced data integration techniques. It is also suitable for database administrators and data analysts aiming to enhance their skill set with efficient ETL processes. Arm yourself with this guide to learn not just the how, but also the why, behind successful data transformations.

Practical Data Science Cookbook, Second Edition

The Practical Data Science Cookbook, Second Edition provides hands-on, practical recipes that guide you through all aspects of the data science process using R and Python. Starting with setting up your programming environment, you'll work through a series of real-world projects to acquire, clean, analyze, and visualize data efficiently.

What this Book will help me do
- Set up R and Python environments effectively for data science tasks.
- Acquire, clean, and preprocess data tailored to analysis with practical steps.
- Develop robust predictive and exploratory models for actionable insights.
- Generate analytic reports and share findings with impactful visualizations.
- Construct tree-based models and master random forests for advanced analytics.

Author(s)
Authored by a team of experienced professionals in the field of data science and analytics, this book reflects their collective expertise in tackling complex data challenges using programming. With backgrounds spanning industry and academia, the authors bring a practical, application-focused approach to teaching data science.

Who is it for?
This book is ideal for aspiring data scientists who want hands-on experience with real-world projects, regardless of prior experience. Beginners will gain step-by-step understanding of data science concepts, while seasoned professionals will appreciate the structured projects and use of R and Python for advanced analytics and modeling.
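
Since the recipes culminate in tree-based models, here is a minimal random-forest sketch in Python with scikit-learn; it illustrates the general technique and is not a recipe from the book.

```python
# Fit a random forest and check accuracy on a held-out split.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print("held-out accuracy:", forest.score(X_test, y_test))
```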

Implementing OpenStack SwiftHLM with IBM Spectrum Archive EE or IBM Spectrum Protect for Space Management

The Swift High Latency Media project seeks to create a high-latency storage back end that makes it easier for users to perform bulk operations of data tiering within a Swift data ring. In today's world, data is produced at significantly higher rates than a decade ago, and the storage and data management solutions of the past can no longer keep up with the data demands of today. The policies and structures that decide and execute how that data is used, discarded, or retained determine how efficiently the data is used. The need for intelligent data management and storage is more critical now than ever before.

Traditional management approaches hide cost-effective, high-latency media (HLM) storage, such as tape or optical disk archive back ends, underneath a traditional file system. The lack of HLM-aware file system interfaces and software makes it difficult for users to understand and control data access on HLM storage. Coupled with data-access latency, this lack of understanding results in slow responses and potential time-outs that affect the user experience.

The Swift HLM project addresses this challenge. Running OpenStack Swift on top of HLM storage allows you to cheaply store and efficiently access large amounts of infrequently used object data. Data that is stored on tape can be easily adapted to an Object Storage data interface. This IBM® Redpaper™ publication describes the Swift High Latency Media project and provides guidance for installation and configuration.

Text Mining with R

Much of the data available today is unstructured and text-heavy, making it challenging for analysts to apply their usual data wrangling and visualization tools. With this practical book, you'll explore text-mining techniques with tidytext, a package that authors Julia Silge and David Robinson developed using the tidy principles behind R packages like ggplot2 and dplyr. You'll learn how tidytext and other tidy tools in R can make text analysis easier and more effective. The authors demonstrate how treating text as data frames enables you to manipulate, summarize, and visualize characteristics of text. You'll also learn how to integrate natural language processing (NLP) into effective workflows. Practical code examples and data explorations will help you generate real insights from literature, news, and social media.

- Learn how to apply the tidy text format to NLP
- Use sentiment analysis to mine the emotional content of text
- Identify a document's most important terms with frequency measurements
- Explore relationships and connections between words with the ggraph and widyr packages
- Convert back and forth between R's tidy and non-tidy text formats
- Use topic modeling to classify document collections into natural groups
- Examine case studies that compare Twitter archives, dig into NASA metadata, and analyze thousands of Usenet messages
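
The book works in R with tidytext; as a rough Python analogue of its "one token per row" idea (an illustrative sketch, not an excerpt), you can tokenize text into a long data frame and count term frequencies with pandas:

```python
# Tidy-style text mining: one token per row, then count word frequencies.
import pandas as pd

docs = pd.DataFrame({
    "doc": [1, 2],
    "text": ["tidy data makes text analysis easier",
             "text mining with tidy tools"],
})

tokens = (docs.assign(word=docs["text"].str.split())
              .explode("word")[["doc", "word"]])  # one token per row

print(tokens.groupby("word").size().sort_values(ascending=False))
```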