talk-data.com

Topic

Hadoop (Apache Hadoop)

Tags: big_data, distributed_computing, data_processing

258 tagged activities

Activity Trend

[Activity trend chart: peak of 3 activities per quarter, 2020-Q1 to 2026-Q1]

Activities

258 activities · Newest first

Big Data Analytics with R

Unlock the potential of big data analytics by mastering R programming with this comprehensive guide. This book takes you step by step through real-world scenarios where R's capabilities shine, giving you practical skills to handle, process, and analyze large and complex datasets effectively.

What this Book will help me do:
- Understand the latest big data processing methods and how R can enhance their application.
- Set up and use big data platforms such as Hadoop and Spark in conjunction with R.
- Apply R to practical big data problems, such as analyzing consumption and behavioral datasets.
- Integrate R with SQL and NoSQL databases to maximize its versatility in data management.
- Discover advanced machine learning implementations using R and Spark MLlib for predictive analytics.

Author(s): Walkowiak is an experienced data analyst and R programming expert with a passion for data engineering and machine learning. With deep knowledge of big data platforms and extensive teaching experience, they bring a clear and approachable writing style to help learners excel.

Who is it for? Ideal for data analysts, scientists, and engineers with fundamental data analysis knowledge who want to enhance their big data capabilities using R. If you aim to adapt R for large-scale data management and analysis workflows, this book is your ideal companion to bridge the gap.

Mastering Business Intelligence with MicroStrategy

Mastering Business Intelligence with MicroStrategy offers a thorough walkthrough of implementing enterprise business intelligence solutions using MicroStrategy 10. In this book, you'll learn how to design comprehensive dashboards, analyze data efficiently, and enhance user experiences with modern BI tools.

What this Book will help me do:
- Learn to utilize MicroStrategy's advanced BI capabilities, including dashboards and predictive analytics, to enhance business insights.
- Develop mobile-responsive analytics dashboards to deliver critical data effectively wherever needed.
- Explore integration techniques to connect MicroStrategy with other data sources like Hadoop and third-party mapping tools.
- Master visualization techniques such as charts and geospatial mapping to present data insights compellingly.
- Gain technical expertise in managing, administering, and troubleshooting MicroStrategy systems to maintain robust BI operations.

Author(s): The authors, Dmitry Anoshin, Rana, Ma, and Neil Mehta, bring years of expertise in business intelligence and analytics. With backgrounds in leading technology solutions and BI projects, they aim to share actionable, real-world insights based on their experience.

Who is it for? This book is perfect for BI developers, analytics managers, and business analysts who use MicroStrategy and wish to deepen their proficiency. It provides value for readers migrating from MicroStrategy 9 to 10 and for those seeking to leverage advanced BI functionality. If you are keen to unlock the full potential of BI tools for your organization, this book is for you.

The Big Data Market

Which companies have adopted technologies such as Hadoop and Spark, as well as data science in general? And which industries are lagging behind? This O'Reilly report provides the results of a unique, data-driven analysis of the market for big data products and technologies. Using eye-catching charts and visualizations, Spiderbook cofounder Aman Naimat highlights some surprising results from the analysis, such as:
- The relatively small number of companies using big data in production
- The industries that have embraced big data the most, and the least
- The amount of money spent on various big data use cases
- How many companies actually use "fast data"

The results also reveal the geographical locations where companies have been quick to adopt big data, as well as the types of teams that use big data technology. In addition, Naimat takes you through the analysis process with Spiderbook's graph-based machine-learning model. The company analyzed billions of publicly available documents, canvassed more than 500,000 companies, and searched the entire business internet to compile the most comprehensive results possible.

Cassandra: The Definitive Guide, 2nd Edition

Imagine what you could do if scalability wasn't a problem. With this hands-on guide, you'll learn how the Cassandra database management system handles hundreds of terabytes of data while remaining highly available across multiple data centers. This expanded second edition, updated for Cassandra 3.0, provides the technical details and practical examples you need to put this database to work in a production environment. Authors Jeff Carpenter and Eben Hewitt demonstrate the advantages of Cassandra's non-relational design, with special attention to data modeling. If you're a developer, DBA, or application architect looking to solve a database scaling issue or future-proof your application, this guide helps you harness Cassandra's speed and flexibility.
- Understand Cassandra's distributed and decentralized structure
- Use the Cassandra Query Language (CQL) and cqlsh, the CQL shell
- Create a working data model and compare it with an equivalent relational model
- Develop sample applications using client drivers for languages including Java, Python, and Node.js
- Explore cluster topology and learn how nodes exchange data
- Maintain a high level of performance in your cluster
- Deploy Cassandra on site, in the cloud, or with Docker
- Integrate Cassandra with Spark, Hadoop, Elasticsearch, Solr, and Lucene
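
The book's sample applications include client drivers for Python, among other languages. As a minimal, hedged sketch of what driver-based access looks like (assuming a local Cassandra node and the DataStax cassandra-driver package; the keyspace and table names here are hypothetical, not taken from the book):

```python
# Minimal sketch with the DataStax Python driver (assumes a local node
# and `pip install cassandra-driver`); keyspace/table names are hypothetical.
from uuid import uuid4

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # contact point(s) for the cluster
session = cluster.connect()

# A keyspace with a simple single-data-center replication strategy
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.guests (
        guest_id uuid PRIMARY KEY,
        name text
    )
""")

# CQL reads and writes use a familiar SQL-like syntax
session.execute(
    "INSERT INTO demo.guests (guest_id, name) VALUES (%s, %s)",
    (uuid4(), "Ada"),
)
for row in session.execute("SELECT guest_id, name FROM demo.guests"):
    print(row.guest_id, row.name)

cluster.shutdown()
```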

Podcast episode
by Michael Healy (Search Discovery), Tim Wilson (Analytics Power Hour - Columbus, OH), and Michael Helbling (Search Discovery)

In this episode, we dive deep on a 1988 classic: Tom Hanks, under the direction of Penny Marshall, was a 12-year-old in a 30-year-old's body... Actually, that's a different "Big" from what we actually cover in this episode. In this instant classic, the star is BigQuery, the director is Google, and Michael Healy, a data scientist from Search Discovery, delivers an Oscar-worthy performance as Zoltar. In under 48 minutes, Michael (Helbling) and Tim drastically increased their understanding of what Google BigQuery is and where it fits in the analytics landscape. If you'd like to do the same, give it a listen! Technologies, books, and sites referenced in this episode were many, including: Google BigQuery and the BigQuery API Libraries, Google Cloud Services, Google Dremel, Apache Drill, Amazon Redshift (AWS), Rambo III (another 1988 movie!), Hadoop, Cloudera, the Observepoint Tag Debugger, Our Mathematical Universe by Max Tegmark, A Brief History of Time by Stephen Hawking, and a video of math savant Scott Flansburg.
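
For a taste of what the episode describes, querying BigQuery from Python takes only a few lines. This is a hedged sketch, not code from the show, and it assumes the google-cloud-bigquery package is installed and Google Cloud credentials are configured:

```python
# Minimal BigQuery query sketch (assumes `pip install google-cloud-bigquery`
# and application-default credentials; not from the episode itself).
from google.cloud import bigquery

client = bigquery.Client()  # project and credentials come from the environment

# Standard SQL over one of Google's public sample datasets
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```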

Streaming Architecture

More and more data-driven companies are looking to adopt stream processing and streaming analytics. With this concise ebook, you'll learn best practices for designing a reliable architecture that supports this emerging big-data paradigm. Authors Ted Dunning and Ellen Friedman (Real World Hadoop) help you explore some of the best technologies to handle stream processing and analytics, with a focus on the upstream queuing or message-passing layer. To illustrate the effectiveness of these technologies, this book also includes specific use cases. Ideal for developers and non-technical people alike, this book describes:
- Key elements in good design for streaming analytics, focusing on the essential characteristics of the messaging layer
- New messaging technologies, including Apache Kafka and MapR Streams, with links to sample code
- Technology choices for streaming analytics: Apache Spark Streaming, Apache Flink, Apache Storm, and Apache Apex
- How stream-based architectures help support microservices
- Specific use cases such as fraud detection and geo-distributed data streams

Ted Dunning is Chief Applications Architect at MapR Technologies and active in the open source community. He currently serves as VP for Incubator at the Apache Foundation, as a champion and mentor for a large number of projects, and as committer and PMC member of the Apache ZooKeeper and Drill projects. Ted is on Twitter as @ted_dunning. Ellen Friedman, a committer for the Apache Drill and Apache Mahout projects, is a solutions consultant and well-known speaker and author, currently writing mainly about big data topics. With a PhD in biochemistry, she has years of experience as a research scientist and has written about a variety of technical topics. Ellen is on Twitter as @Ellen_Friedman.
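
To make the messaging layer concrete, here is a minimal produce/consume round trip against Kafka in Python. It is a sketch under stated assumptions (a broker on localhost:9092 and the kafka-python package), not the sample code the book links to:

```python
# Minimal Kafka messaging-layer sketch (assumes a broker on localhost:9092
# and `pip install kafka-python`); the topic name is hypothetical.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("clickstream", b'{"user": 42, "page": "/home"}')
producer.flush()  # ensure the message is actually delivered before reading

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # start from the beginning of the topic
    consumer_timeout_ms=5000,       # stop iterating once the topic is drained
)
for message in consumer:
    print(message.offset, message.value)
```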

Professional Hadoop

The professional's one-stop guide to this open-source, Java-based big data framework. Professional Hadoop is the complete reference and resource for experienced developers looking to employ Apache Hadoop in real-world settings. Written by an expert team of certified Hadoop developers, committers, and Summit speakers, this book details every key aspect of Hadoop technology to enable optimal processing of large data sets. Designed expressly for the professional developer, the book skips over the basics of database development to get you acquainted with the framework's processes and capabilities right away. The discussion covers each key Hadoop component individually, culminating in a sample application that brings all of the pieces together to illustrate the cooperation and interplay that make Hadoop a major big data solution. Coverage includes everything from storage and security to computing and user experience, with expert guidance on integrating other software and more. Hadoop is quickly reaching significant market usage, and more and more developers are being called upon to develop big data solutions using the Hadoop framework. This book covers the process from beginning to end, providing a crash course for professionals needing to learn and apply Hadoop quickly.
- Configure storage, user experience (UE), and in-memory computing
- Integrate Hadoop with other programs, including Kafka and Storm
- Master the fundamentals of Apache Bigtop and Ignite
- Build robust data security with expert tips and advice

Hadoop's popularity is largely due to its accessibility. Open source and written in Java, the framework offers almost no barrier to entry for experienced database developers already familiar with the skills and requirements real-world programming entails. Professional Hadoop gives you the practical information and framework-specific skills you need, quickly.

Apache Hive Cookbook

Apache Hive Cookbook is a comprehensive resource for mastering Apache Hive, a tool that bridges the gap between SQL and big data processing. Through guided recipes, you'll acquire essential skills in Hive query development, optimization, and integration with modern big data frameworks.

What this Book will help me do:
- Design efficient Hive query structures for big data analytics.
- Optimize data storage and query execution using partitions and buckets.
- Integrate Hive seamlessly with frameworks like Spark and Hadoop.
- Understand and utilize HiveQL syntax to perform advanced analytical processing.
- Implement practical solutions to secure, maintain, and scale Hive environments.

Author(s): Hanish Bansal, Saurabh Chauhan, and Shrey Mehrotra bring their extensive expertise in big data technologies and Hive to this cookbook. With years of practical experience and deep technical knowledge, they offer a collection of solutions and best practices that reflect real-world use cases. Their commitment to clarity and depth makes this book an invaluable resource for exploring Hive to its fullest potential.

Who is it for? This book is perfect for data professionals, engineers, and developers looking to enhance their capabilities in big data analytics using Hive. It caters to those with a foundational understanding of big data frameworks and some familiarity with SQL. Whether you're planning to optimize data handling or integrate Hive with other data tools, this guide helps you achieve your goals. Step into the world of efficient data analytics with Apache Hive through structured learning paths.
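
The cookbook's recipes are written in HiveQL; purely as an illustration of the partitioning and bucketing ideas it covers, here is the same kind of DDL issued from Python via PyHive (the HiveServer2 endpoint, table, and column names are assumptions of this sketch, not recipes from the book):

```python
# Partitioned + bucketed table sketch sent through PyHive (assumes a
# HiveServer2 endpoint and `pip install pyhive`); names are hypothetical.
from pyhive import hive

conn = hive.Connection(host="localhost", port=10000)
cur = conn.cursor()

# Partitioning prunes whole directories at query time; bucketing hashes
# rows into a fixed number of files for sampling and map-side joins.
cur.execute("""
    CREATE TABLE IF NOT EXISTS sales (
        order_id BIGINT,
        amount   DOUBLE
    )
    PARTITIONED BY (sale_date STRING)
    CLUSTERED BY (order_id) INTO 32 BUCKETS
    STORED AS ORC
""")

# A filter on the partition column scans only the matching partition
cur.execute("SELECT COUNT(*) FROM sales WHERE sale_date = '2016-01-01'")
print(cur.fetchone())
```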

Architecting Data Lakes

Many organizations use Hadoop-driven data lakes as an adjunct staging area for their enterprise data warehouses (EDW). But for those companies ready to take the plunge, a data lake is far more useful as a one-stop shop for extracting insights from their vast collection of data. With this ebook, you'll learn best practices for building, maintaining, and deriving value from a Hadoop data lake in production environments. Authors Alice LaPlante and Ben Sharma explain how a data lake will enable your organization to manage an increasing volume of datasets, from blog postings and product reviews to streaming data, and to discover important relationships between them. Whether you want to control administrative costs in healthcare or reduce risk in financial services, this ebook addresses the architectural considerations and required capabilities you need to build your own data lake. With this report, you'll learn:
- The key attributes of a data lake, including its ability to store information in native formats for later processing
- Why implementing data management and governance in your data lake is crucial
- How to address various challenges in building and managing a data lake
- Self-service options that enable different users to access the data lake without help from IT
- Emerging trends that will shape the future of data lakes

The Hadoop Performance Myth

The wish lists of many data-driven organizations seem reasonable enough. They'd like to capitalize on real-time data analysis, move beyond batch processing for time-critical insights, allow multiple users to share cluster resources, and provide predictable service levels. However, fundamental performance limitations of complex distributed systems such as Hadoop prevent much of this from happening. In this report, Courtney Webster examines the root cause of these performance problems and explains why the usual best practices for mitigating them (cluster tuning, provisioning, and even cluster isolation for mission-critical jobs) don't provide viable, scalable, long-term solutions. Organizations have been pushing Hadoop and other distributed systems to their performance breaking points as they seek to use clusters as shared resources across multiple business units and individual users. Once they hit this performance wall, companies will find it difficult to deliver on the big data promise at scale. Read this report to find out what the implications are for your organization.

Hadoop Real-World Solutions Cookbook - Second Edition

Master the full potential of big data processing using Hadoop with this comprehensive guide. Featuring over 90 practical recipes, this book helps you streamline data workflows and implement machine learning models with tools like Spark, Hive, and Pig. By the end, you'll confidently handle complex data problems and optimize big data solutions effectively.

What this Book will help me do:
- Install and manage a Hadoop 2.x cluster efficiently to suit your data processing needs.
- Explore and utilize advanced tools like Hive, Pig, and Flume for seamless big data analysis.
- Master data import/export with Sqoop and workflow automation with Oozie.
- Implement machine learning and analytics tasks using Mahout and Apache Spark.
- Store and process data flexibly across formats such as Parquet, ORC, and RC.

Author(s): Deshpande is an expert in big data processing and analytics with years of hands-on experience implementing Hadoop-based solutions for real-world problems. Known for a clear and pragmatic writing style, the author brings actionable wisdom and best practices to the forefront, helping readers excel in managing and utilizing big data systems.

Who is it for? Designed for technical enthusiasts and professionals, this book is ideal for those familiar with basic big data concepts. If you are looking to expand your expertise in Hadoop's ecosystem and implement data-driven solutions, this book will guide you through essential skills and advanced techniques to efficiently manage complex big data projects.

Spark

Production-targeted Spark guidance with real-world use cases. Spark: Big Data Cluster Computing in Production goes beyond general Spark overviews to provide targeted guidance toward using lightning-fast big-data clustering in production. Written by an expert team well known in the big data community, this book walks you through the challenges of moving from proof-of-concept or demo Spark applications to live Spark in production. Real use cases provide deep insight into common problems, limitations, challenges, and opportunities, while expert tips and tricks help you get the most out of Spark performance. Coverage includes Spark SQL, Tachyon, Kerberos, MLlib, YARN, and Mesos, with clear, actionable guidance on resource scheduling, database connectors, streaming, security, and much more. Spark has become the tool of choice for many big data problems, with more active contributors than any other Apache Software Foundation project. General introductory books abound, but this book is the first to provide deep insight and real-world advice on using Spark in production. Specific guidance, expert tips, and invaluable foresight make this guide an incredibly useful resource for real production settings.
- Review Spark hardware requirements and estimate cluster size
- Gain insight from real-world production use cases
- Tighten security, schedule resources, and fine-tune performance
- Overcome common problems encountered using Spark in production

Spark works with other big data tools, including MapReduce and Hadoop, and uses languages you already know, like Java, Scala, Python, and R. Lightning speed makes Spark too good to pass up, but understanding its limitations and challenges in advance goes a long way toward easing actual production implementation. Spark: Big Data Cluster Computing in Production tells you everything you need to know, with real-world production insight and expert guidance, tips, and tricks.
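
As a small taste of the resource-scheduling topics above, here is a hedged PySpark sketch that sets executor resources explicitly (it uses the modern SparkSession API, and the values are illustrative assumptions, not recommendations from the book):

```python
# Minimal PySpark sketch of explicit resource configuration
# (values are illustrative only).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("production-job")
    .config("spark.executor.memory", "4g")          # heap per executor
    .config("spark.executor.cores", "2")            # cores per executor
    .config("spark.dynamicAllocation.enabled", "true")
    .getOrCreate()
)

# A tiny sanity-check job to confirm the session works
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
print(df.count())

spark.stop()
```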

Hadoop: What You Need to Know

Hadoop has revolutionized data processing and enterprise data warehousing, but its explosive growth has come with a large amount of uncertainty, hype, and confusion. With this report, enterprise decision makers will receive a concise crash course on what Hadoop is and why it’s important. Hadoop represents a major shift from traditional enterprise data warehousing and data analytics, and its technology can be daunting at first. Donald Miner, founder of the data science firm Miner & Kasch, covers just enough ground so you can make intelligent decisions about Hadoop in your enterprise. By the end of this report, you’ll know the basics of technologies such as HDFS, MapReduce, and YARN, without becoming mired in the details. Not only will you learn the basics of how Hadoop works and why it’s such an important technology, you’ll get examples of how you should probably be using it.
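
To see the MapReduce model in miniature, here is a toy word count written in the map/shuffle/reduce style and run locally in plain Python. It illustrates the concept only; the report itself contains no code, and on a real cluster each phase runs distributed over HDFS blocks under YARN:

```python
# Toy word count in the MapReduce style: map emits (key, value) pairs,
# the shuffle groups them by key, and reduce aggregates each group.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield word, 1                     # emit (word, 1) for every word

def shuffle_and_reduce(pairs):
    groups = defaultdict(int)
    for word, count in pairs:                 # group by key ("shuffle")
        groups[word] += count                 # sum per key ("reduce")
    return dict(groups)

lines = ["hadoop stores data in hdfs", "yarn schedules hadoop jobs"]
print(shuffle_and_reduce(map_phase(lines)))
# e.g. {'hadoop': 2, 'stores': 1, 'data': 1, 'in': 1, 'hdfs': 1, ...}
```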

Fast Data Front Ends for Hadoop

Organizations striving to build applications for streaming data have a new possibility to ponder: the use of ingestion engines at the front end of their Hadoop systems. With this O'Reilly report, you'll learn how these fast data front ends process data before it reaches the Hadoop Distributed File System (HDFS), and provide intelligence and context in real time. This helps you reduce response times from hours to minutes, or even minutes to seconds. Author and independent consultant Akmal Chaudhri looks at several popular ingestion engines, including Apache Spark, Apache Storm, and the VoltDB in-memory database. Among them, VoltDB stands out by providing full Atomicity, Consistency, Isolation, and Durability (ACID) support. VoltDB also lets you build a fast data front end that uses the familiar SQL language and standards. Learn the advantages of ingestion engines as well as the theoretical and practical problems that can come up in an implementation. You'll discover how this option can handle streaming data, provide state, ensure durability, and support transactions and real-time decisions. Akmal B. Chaudhri is an independent consultant specializing in big data, NoSQL, and NewSQL database technologies. He has previously held roles as a developer, consultant, product strategist, and technical trainer with several blue-chip companies and big data startups. Akmal regularly presents at international conferences and serves on program committees for several major conferences and workshops.

MongoDB Cookbook - Second Edition

Designed to help developers and administrators harness the full potential of MongoDB, this book provides clear instruction and practical guidance whatever your level. By exploring both fundamentals like installation and configuration and advanced topics like using cloud services, it serves as a comprehensive reference for anyone navigating MongoDB's modern NoSQL database capabilities.

What this Book will help me do:
- Install and configure MongoDB for different environments, enabling efficient setup and operation.
- Master database administration skills, including monitoring and backup strategies, essential for stability and performance.
- Develop applications with MongoDB using Java and Python, allowing integration into modern tech stacks.
- Leverage advanced querying and indexing techniques to improve data retrieval and operational efficiency.
- Integrate MongoDB with cloud platforms and tools like Hadoop, enhancing scalability and expanding use cases.

Author(s): Dasadia and Nayak are seasoned database professionals with extensive experience in MongoDB and NoSQL database systems. Their practical approach to technical writing focuses on real-world applications and solutions to complex challenges. With backgrounds in software development and data management, they ensure that readers get a hands-on learning experience. Their passion for spreading knowledge makes this book both instructional and engaging.

Who is it for? This book is ideal for database administrators and software developers interested in adopting or expanding their knowledge of MongoDB. Whether you're a complete novice or an experienced user seeking hands-on solutions and examples, this book offers value. It's particularly suited to professionals working with Java or Python, as the examples focus on these languages. Whether you're enhancing your skills for personal projects or looking to implement MongoDB at work, this resource equips you with the know-how.
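
Since the book's examples lean on Java and Python, a minimal PyMongo round trip gives a feel for the Python side. This is a sketch under stated assumptions (a local mongod and the pymongo package; database and collection names are hypothetical, not from the book):

```python
# Minimal PyMongo sketch (assumes a local mongod and `pip install pymongo`);
# database/collection names are hypothetical.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.bookstore

# Insert a document, index a field, then query with a projection
db.books.insert_one({"title": "MongoDB Cookbook", "edition": 2, "tags": ["nosql"]})
db.books.create_index([("title", ASCENDING)])      # speeds up title lookups

for doc in db.books.find({"tags": "nosql"}, {"_id": 0, "title": 1}):
    print(doc)
```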

R for Programmers

Unlike other books about R, which are written from the perspective of statistics, this book is written from the perspective of programmers, providing a channel for programmers with expertise in other programming languages to quickly understand R. The contents are divided into four parts: the basics of R; R servers; databases and big data; and the appendices, which cover the installation of Java, various databases, and Hadoop. Because this is a reference book, the chapters can be read in any order; anyone new to the subject who wishes to master R comprehensively can simply follow them in sequence.

Scalable Big Data Architecture: A Practitioner’s Guide to Choosing Relevant Big Data Architecture

This book highlights the different types of data architecture and illustrates the many possibilities hidden behind the term "Big Data", from the use of NoSQL databases to the deployment of stream analytics architectures, machine learning, and governance. Scalable Big Data Architecture covers real-world, concrete industry use cases that leverage complex distributed applications, involving web applications, RESTful APIs, and a high throughput of data stored in highly scalable NoSQL data stores such as Couchbase and Elasticsearch. The book demonstrates how data processing can be done at scale, from the use of NoSQL datastores to the combination of big data distributions. When data processing is too complex and involves different processing topologies, such as long-running jobs, stream processing, multiple data source correlation, and machine learning, it is often necessary to delegate the load to Hadoop or Spark and use the NoSQL store to serve processed data in real time. This book shows you how to choose a relevant combination of big data technologies from within the Hadoop ecosystem. It focuses on processing long jobs, architecture, stream data patterns, log analysis, and real-time analytics. Every pattern is illustrated with practical examples using open source projects such as Logstash, Spark, and Kafka. Traditional data infrastructures are built for digesting and rendering data synthesis and analytics from large amounts of data. This book helps you understand why you should consider using machine learning algorithms early on in a project, before being overwhelmed by the constraints imposed by the high throughput of big data. Scalable Big Data Architecture is for developers, data architects, and data scientists looking for a better understanding of how to choose the most relevant pattern for a big data project and which tools to integrate into that pattern.

Apache Oozie Essentials

Apache Oozie Essentials serves as your guide to mastering Apache Oozie, a powerful workflow scheduler for Hadoop environments. Through lucid explanations and practical examples, you will learn how to create, schedule, and enhance workflows for data ingestion, processing, and machine learning tasks using Oozie.

What this Book will help me do:
- Install and configure Apache Oozie in your Hadoop environment to start managing workflows.
- Develop seamless workflows that integrate tools like Hive, Pig, and Sqoop to automate data operations.
- Set up coordinators to handle timed and dependent job executions efficiently.
- Deploy Spark jobs within your workflows for machine learning on large datasets.
- Harness Oozie's security features to improve your system's reliability and trustworthiness.

Author(s): Singh is a seasoned developer with a deep understanding of big data processing and Apache Oozie. Drawing on the author's practical experience, the book intersperses technical detail with real-world examples for an effective learning experience. The author's goal is to make Oozie accessible and useful to professionals.

Who is it for? This book is ideal for data engineers and Hadoop professionals looking to streamline their workflow management using Apache Oozie. Whether you're new to Oozie or aiming to implement complex data and ML pipelines, the book offers comprehensive guidance tailored to your needs.

Data Munging with Hadoop

The example-rich, hands-on guide to data munging with Apache Hadoop. Data scientists spend much of their time "munging" data: handling day-to-day tasks such as data cleansing, normalization, aggregation, sampling, and transformation. These tasks are both critical and surprisingly interesting. Most important, they deepen your understanding of your data's structure and limitations: crucial insight for improving accuracy and mitigating risk in any analytical project. Now, two leading Hortonworks data scientists, Ofer Mendelevitch and Casey Stella, bring together powerful, practical insights for effective Hadoop-based data munging of large datasets. Drawing on extensive experience with advanced analytics, the authors offer realistic examples that address the common issues you're most likely to face. They describe each task in detail, presenting example code based on widely used tools such as Pig, Hive, and Spark. This concise, hands-on ebook is valuable for every data scientist, data engineer, and architect who wants to master data munging: not just in theory, but in practice with the field's #1 platform, Hadoop. Coverage includes:
- A framework for understanding the various types of data quality checks, including cell-based rules, distribution validation, and outlier analysis
- Assessing tradeoffs in common approaches to imputing missing values
- Implementing quality checks with Pig or Hive UDFs
- Transforming raw data into "feature matrix" format for machine learning algorithms
- Choosing features and instances
- Implementing text features via "bag-of-words" and NLP techniques
- Handling time-series data via frequency- or time-domain methods
- Manipulating feature values to prepare for modeling

Data Munging with Hadoop is part of a larger, forthcoming work entitled Data Science Using Hadoop; you can register your purchase at informit.com/register to be notified when it becomes available.
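
The ebook's own examples use Pig, Hive, and Spark; purely to illustrate the "feature matrix" and bag-of-words ideas listed above, here is a short scikit-learn sketch (the library choice and sample documents are assumptions of this illustration, not tools or data from the ebook):

```python
# Bag-of-words feature matrix sketch with scikit-learn: each document
# becomes a row, each distinct term a column of counts.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "error disk failure on node",
    "disk replaced node healthy",
    "network error on switch",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)   # sparse documents-by-terms matrix

print(vectorizer.get_feature_names_out())
print(X.toarray())                   # one row of term counts per document
```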

Learning Bayesian Models with R

Dive into the world of Bayesian machine learning with Learning Bayesian Models with R. This comprehensive guide introduces the foundations of probability theory and Bayesian inference, teaches you how to implement these concepts in the R programming language, and progresses to practical techniques for supervised and unsupervised problems in data science.

What this Book will help me do:
- Understand and set up an R environment for Bayesian modeling.
- Build Bayesian models, including linear regression and classification, for predictive analysis.
- Learn to apply Bayesian inference to real-world machine learning problems.
- Work with big data and high-performance computation frameworks like Hadoop and Spark.
- Master advanced Bayesian techniques and apply them to deep learning and AI challenges.

Author(s): Hari Manassery Koduvely is a proficient data scientist with extensive experience in leveraging Bayesian frameworks for real-world applications. His passion for Bayesian machine learning is evident in his approachable and detailed teaching methodology, aimed at making these complex topics accessible to practitioners.

Who is it for? This book is best suited to data scientists, analysts, and statisticians familiar with R and basic probability theory who aim to enhance their expertise in Bayesian approaches. It's ideal for professionals tackling machine learning challenges in applied data contexts. If you're looking to incorporate advanced probabilistic methods into your projects, this guide will show you how.
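
For orientation, the single relationship everything in Bayesian modeling builds on can be written in one line. This is the standard textbook form of Bayes' rule for parameters theta and data D, not an excerpt from the book:

```latex
% Posterior = likelihood x prior, normalized by the evidence P(D)
P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}
                 \propto P(D \mid \theta)\, P(\theta)
```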