talk-data.com

Topic: data · 5765 tagged activities

Activity Trend (2020-Q1 to 2026-Q1): peak of 3 activities per quarter

Activities

5765 activities · Newest first

Introduction to Finite Element Analysis and Design, 2nd Edition

Introduces the basic concepts of FEM in an easy-to-use format so that students and professionals can use the method efficiently and interpret results properly. The finite element method (FEM) is a powerful tool for solving engineering problems in both solid structural mechanics and fluid mechanics. This book presents all of the theoretical aspects of FEM that students of engineering will need. It eliminates overlong math equations in favour of basic concepts, and reviews the mathematics and mechanics of materials needed to illustrate the concepts of FEM. It introduces these concepts through examples using six different commercial programs online. The all-new second edition of Introduction to Finite Element Analysis and Design provides many more exercise problems than the first edition. It includes a significant amount of material on modelling issues, using several practical examples from engineering applications. The book features new coverage of buckling of beams and frames and extends heat transfer analyses from 1D (in the previous edition) to 2D. It also covers 3D solid elements and their applications, as well as 2D. Additionally, readers will find increased coverage of finite element analysis of dynamic problems. There is also a companion website with examples that are kept current with the most recent versions of the commercial programs. The book offers elaborate explanations of basic finite element procedures; delivers clear explanations of the capabilities and limitations of finite element analysis; includes application examples and tutorials for commercial finite element software such as MATLAB, ANSYS, ABAQUS, and NASTRAN; provides numerous examples and exercise problems; and comes with a complete solution manual and results of several engineering design projects. Introduction to Finite Element Analysis and Design, 2nd Edition is an excellent text for junior- and senior-level undergraduate students and beginning graduate students in mechanical, civil, aerospace, biomedical engineering, industrial engineering, and engineering mechanics.
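
As a pointer to the kind of basic procedure the book builds up, the stiffness relation for a single two-node bar element (standard FEM background, not a formula quoted from this edition) is

    k^{(e)} = \frac{EA}{L}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix},
    \qquad
    \frac{EA}{L}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}
    \begin{Bmatrix} u_1 \\ u_2 \end{Bmatrix}
    =
    \begin{Bmatrix} f_1 \\ f_2 \end{Bmatrix},

where E is Young's modulus, A the cross-sectional area, L the element length, u_1, u_2 the nodal displacements, and f_1, f_2 the nodal forces. A global system is assembled from such element matrices and solved after boundary conditions are applied.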

Beginning Apache Spark 2: With Resilient Distributed Datasets, Spark SQL, Structured Streaming and Spark Machine Learning library

Develop applications for the big data landscape with Spark and Hadoop. This book also explains the role of Spark in developing scalable machine learning and analytics applications with cloud technologies. Beginning Apache Spark 2 gives you an introduction to Apache Spark and shows you how to work with it. Along the way, you’ll discover resilient distributed datasets (RDDs); use Spark SQL for structured data; and learn stream processing and build real-time applications with Spark Structured Streaming. Furthermore, you’ll learn the fundamentals of Spark ML for machine learning and much more. After you read this book, you will have the fundamentals to become proficient in using Apache Spark and know when and how to apply it to your big data applications. What You Will Learn: understand the Spark unified data processing platform; run Spark in the Spark Shell or on Databricks; use and manipulate RDDs; deal with structured data using Spark SQL through its operations and advanced functions; build real-time applications using Spark Structured Streaming; and develop intelligent applications with the Spark Machine Learning library. Who This Book Is For: Programmers and developers active in big data, Hadoop, and Java but who are new to the Apache Spark platform.
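
The RDD and Spark SQL points above amount to only a few lines of code. A minimal PySpark sketch, illustrative only (the book targets Spark 2 and its own examples may use the Spark Shell or another language):

    # Minimal PySpark sketch: RDDs and Spark SQL (illustrative only).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("intro").getOrCreate()

    # Resilient distributed dataset (RDD): parallelize a collection and transform it.
    rdd = spark.sparkContext.parallelize([1, 2, 3, 4, 5])
    print(rdd.map(lambda x: x * x).collect())          # [1, 4, 9, 16, 25]

    # Spark SQL: structured data as a DataFrame plus a SQL query.
    df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()

    spark.stop()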

SAP HANA and ESS: A Winning Combination

SAP HANA on IBM® POWER® is an established HANA solution with which customers can run HANA-based analytic and business applications on a flexible IBM Power based infrastructure. IT assets, such as servers, storage, skills, and operating procedures, can easily be used and reused instead of enforcing more investment into dedicated SAP HANA-only appliances. In this scenario, IBM Spectrum™ Scale as the underlying block storage and file system adds further benefits to this solution stack to take advantage of scale effects, higher availability, simplification, and performance. With the IBM Elastic Storage™ Server (ESS) based on IBM Spectrum Scale™, RAID capabilities are added to the file system. By using the intelligent internal logic of the IBM Spectrum Scale RAID code, reasonable performance and significant disk failure recovery improvements are achieved. This IBM Redpaper™ publication focuses on the benefits and advantages of implementing a HANA solution on top of the IBM Spectrum Scale storage file system. This paper is intended to help experienced administrators and IT specialists plan and set up an IBM Spectrum Scale cluster and configure an ESS for SAP HANA workloads. It provides important tips and preferred practices for managing IBM Spectrum Scale's availability and performance. If you are familiar with ESS, IBM Spectrum Scale, and IBM Spectrum Scale RAID, and you need only the pertinent documentation about how to configure an IBM Spectrum Scale cluster with an ESS for SAP HANA, see Chapter 5, "IBM Spectrum Scale customization for HANA" on page 25. Before reading this IBM Redpaper publication, you should be familiar with the basic concepts of IBM Spectrum Scale and IBM Spectrum Scale RAID. This IBM Redpaper publication can be helpful for architects and specialists who are planning an SAP HANA on POWER deployment with the IBM Spectrum Scale file system. For more information about planning considerations for Power, see the SAP HANA on Power Planning Guide.

Robust Nonlinear Regression

The first book to discuss robust aspects of nonlinear regression, with applications using R software. Robust Nonlinear Regression: with Applications using R covers a variety of theories and applications of nonlinear robust regression. It discusses both the classical and robust aspects of nonlinear regression and focuses on outlier effects. It develops new methods in robust nonlinear regression and implements a set of objects and functions in the S language under S-PLUS and R. The software covers a wide range of robust nonlinear fitting and inference, and is designed to provide facilities for computer users to define their own nonlinear models as objects, fit those models using classical and robust methods, and detect outliers. The implemented objects and functions can be applied by practitioners as well as researchers. The book offers comprehensive coverage of the subject in nine chapters: Theories of Nonlinear Regression and Inference; Introduction to R; Optimization; Theories of Robust Nonlinear Methods; Robust and Classical Nonlinear Regression with Autocorrelated and Heteroscedastic Errors; Outlier Detection; R Packages in Nonlinear Regression; A New R Package in Robust Nonlinear Regression; and Object Sets. As the first comprehensive treatment of this field, it covers a variety of both theoretical and applied topics surrounding robust nonlinear regression, addresses some commonly mishandled aspects of modeling, and presents R packages for both classical and robust nonlinear regression in detail in the book and on an accompanying website. Robust Nonlinear Regression: with Applications using R is an ideal text for statisticians, biostatisticians, and statistical consultants, as well as advanced-level students of statistics.
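
The book's software is written for R and S-PLUS. Purely to illustrate the underlying idea of down-weighting outliers in a nonlinear fit, here is a generic Python sketch using scipy's robust loss option (an assumption of this page, not the book's package):

    # Illustrative only: classical vs. robust nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    def model(theta, x):
        a, b = theta
        return a * np.exp(b * x)                 # simple nonlinear model

    rng = np.random.default_rng(0)
    x = np.linspace(0, 2, 50)
    y = model([2.0, 1.5], x) + rng.normal(0, 0.2, x.size)
    y[::10] += 8.0                               # inject a few outliers

    def residuals(theta):
        return model(theta, x) - y

    classical = least_squares(residuals, x0=[1.0, 1.0])               # squared loss
    robust = least_squares(residuals, x0=[1.0, 1.0], loss="soft_l1")  # outlier-resistant
    print("classical:", classical.x, "robust:", robust.x)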

Cosmos DB for MongoDB Developers: Migrating to Azure Cosmos DB and Using the MongoDB API

Learn Azure Cosmos DB and its MongoDB API with hands-on samples and advanced features such as the multi-homing API, geo-replication, custom indexing, TTL, request units (RU), consistency levels, partitioning, and much more. Each chapter explains Azure Cosmos DB’s features and functionalities by comparing it to MongoDB with coding samples. Cosmos DB for MongoDB Developers starts with an overview of NoSQL and Azure Cosmos DB and moves on to demonstrate how geo-replication in Azure Cosmos DB differs from MongoDB. Along the way you’ll cover subjects including indexing, partitioning, consistency, and sizing, all of which will help you understand the concept of request units and how this calculation can be derived from an existing MongoDB deployment's usage. The next part of the book shows you the process and strategies for migrating to Azure Cosmos DB. You will learn the day-to-day scenarios of using Azure Cosmos DB, its sizing strategies, and optimization techniques for the MongoDB API. This information will help you when planning to migrate from MongoDB or if you would like to compare MongoDB to the Azure Cosmos DB MongoDB API before considering the switch. What You Will Learn: migrate from MongoDB to Azure Cosmos DB and understand the available strategies; develop a sample application using MongoDB’s client driver; make use of sizing best practices and performance optimization scenarios; and optimize the MongoDB API's partition mechanism and indexing. Who This Book Is For: MongoDB developers who wish to learn Azure Cosmos DB. It specifically caters to a technical audience working on MongoDB.
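
Because the MongoDB API is wire-compatible, the standard MongoDB client driver is all that is needed from code. A small sketch with pymongo; the connection string, database, and collection names below are placeholders, with the real string taken from the Azure portal:

    # Sketch: using the Azure Cosmos DB MongoDB API through pymongo.
    from pymongo import MongoClient

    # Placeholder URI; copy the actual connection string from the Azure portal.
    uri = "mongodb://<account>:<key>@<account>.mongo.cosmos.azure.com:10255/?tls=true"
    client = MongoClient(uri)

    orders = client["retail"]["orders"]          # hypothetical database and collection
    orders.insert_one({"orderId": 1, "region": "EU", "total": 42.50})
    print(orders.find_one({"orderId": 1}))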

Applied Analytics through Case Studies Using SAS and R: Implementing Predictive Models and Machine Learning Techniques

Examine business problems and use a practical analytical approach to solve them by implementing predictive models and machine learning techniques using SAS and the R analytical language. This book is ideal for those who are well-versed in writing code and have a basic understanding of statistics, but have limited experience in implementing predictive models and machine learning techniques for analyzing real-world data. The most challenging part of solving industrial business problems is the practical, hands-on knowledge of building and deploying advanced predictive models and machine learning algorithms. Applied Analytics through Case Studies Using SAS and R is your answer to solving these business problems by sharpening your analytical skills. What You'll Learn: understand analytics and basic data concepts; use an analytical approach to solve industrial business problems; build predictive models with machine learning techniques; and create and apply analytical strategies. Who This Book Is For: Data scientists, developers, statisticians, engineers, and research students with a strong theoretical understanding of data and statistics who would like to enhance their skills by getting practical exposure to data modeling.

Getting Started with IBM zHyperLink for z/OS

With the pressures to drive transaction processing 24/7 because of online banking and other business demands, IBM® zHyperLink on the IBM DS8880 is making it easy to accelerate transaction processing for the mainframe. This IBM Redpaper™ publication helps you to understand the concepts, business perspectives, and reference architecture of installing, tailoring, and configuring zHyperLink in your own environment.

Expert GeoServer

"Expert GeoServer" guides readers through the process of building, optimizing, and securing GeoServer-powered web mapping applications. By exploring concepts like spatial analysis platforms, tile caching, and secure authentication, this book equips you to create highly performant and secure geospatial applications. What this Book will help me do Learn to develop spatial analysis platforms using web processing services. Master tile caching to significantly enhance the speed of your mapping applications. Implement secure authentication to protect sensitive geospatial data. Optimize GeoServer for improved performance and resource utilization. Deploy your GeoServer-backed applications on modern cloud-hosting infrastructures. Author(s) None Mearns is an experienced software developer and geospatial technology expert. With a strong background in GeoServer implementation, None has helped organizations optimize and secure their geospatial platforms. Their writing aims to provide clear and actionable instructions for professionals and learners alike. Who is it for? This book is perfect for geospatial developers and professionals aiming to take their GeoServer skills to the next level. A basic understanding of GeoServer is assumed, as this guide tackles advanced topics like performance optimization and security. If you are looking to enhance the speed, usability, and security of your mapping applications, this is for you. Those aiming to confidently deploy production-ready applications will find it invaluable.

Healthcare Analytics Made Simple

Navigate the fascinating intersection of healthcare and data science with the book "Healthcare Analytics Made Simple." This comprehensive guide empowers you to use Python and machine learning techniques to analyze and improve real healthcare systems. Demystify intricate concepts with Python code and SQL to gain actionable insights and build predictive models for healthcare. What this Book will help me do Understand healthcare incentives, policies, and datasets to ground your analysis in practical knowledge. Master the use of Python libraries and SQL for healthcare data analysis and visualization. Develop skills to apply machine learning for predictive and descriptive analytics in healthcare. Learn to assess quality metrics and evaluate provider performance using robust tools. Get acquainted with upcoming trends and future applications in healthcare analytics. Author(s) The authors, Kumar and Khader, are experts in data science and healthcare informatics. They bring years of experience teaching, researching, and applying data analytics in healthcare. Their approach is hands-on and clear, aiming to make complex topics accessible and engaging for their audience. Who is it for? This book is perfect for data science professionals eager to specialize in healthcare analytics. Additionally, clinicians aiming to leverage computing and data analytics to improve healthcare processes will find valuable insights. Programming enthusiasts and students keen to enter healthcare analytics will also greatly benefit. Tailored for beginners in this field, it is an educational yet robust resource.
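
The workflow the book describes, pulling records with SQL and modeling them in Python, looks roughly like the sketch below. Table and column names are invented for illustration and are not taken from the book:

    # Sketch: SQL extraction plus a simple predictive model in Python.
    import sqlite3
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    conn = sqlite3.connect("clinic.db")          # hypothetical database
    df = pd.read_sql_query(
        "SELECT age, num_prior_visits, hba1c, readmitted FROM encounters", conn)

    X = df[["age", "num_prior_visits", "hba1c"]]
    y = df["readmitted"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))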

Mastering Kibana 6.x

Mastering Kibana 6.x is your guide to leveraging Kibana for creating impactful data visualizations and insightful dashboards. From setting up basic visualizations to exploring advanced analytics and machine learning integrations, this book equips you with the necessary skills to dive deep into your data and gain actionable insights at scale. You'll also learn to effectively manage and monitor data with powerful tools such as X-Pack and Beats. What this Book will help me do Build sophisticated dashboards to visualize Elastic Stack data effectively. Understand and utilize Timelion expressions for analyzing time series data. Incorporate X-Pack capabilities to enhance security and monitoring in Kibana. Extract, analyze, and visualize data from Elasticsearch for advanced analytics. Set up monitoring and alerting using Beats components for reliable data operations. Author(s) With extensive experience in big data technologies, the author brings a practical approach to teaching advanced Kibana topics. Having worked on real-world data analytics projects, their aim is to make complex concepts accessible while showing how to tackle analytics challenges using Kibana. Who is it for? This book is ideal for data engineers, DevOps professionals, and data scientists who want to optimize large-scale data visualizations. If you're looking to manage Elasticsearch data through insightful dashboards and visual analytics, or enhance your data operations with features like machine learning, then this book is perfect for you. A basic understanding of the Elastic Stack is helpful, though not required.
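
Under the hood, a Kibana visualization is an Elasticsearch aggregation. The same kind of query can be issued directly, as in this sketch (index pattern and field are placeholders; the "interval" form matches the 6.x query DSL):

    # Sketch: a daily date_histogram aggregation, the building block of many
    # Kibana time-series visualizations.
    import requests

    query = {
        "size": 0,
        "aggs": {
            "events_per_day": {
                "date_histogram": {"field": "@timestamp", "interval": "day"}
            }
        },
    }
    resp = requests.get("http://localhost:9200/logstash-*/_search", json=query)
    for bucket in resp.json()["aggregations"]["events_per_day"]["buckets"]:
        print(bucket["key_as_string"], bucket["doc_count"])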

Professional Azure SQL Database Administration

Learn everything you need to manage Azure SQL Database with 'Professional Azure SQL Database Administration'. This book covers critical tasks such as migration, performance optimization, security, and disaster recovery. Perfect for those transitioning to the cloud, it equips you with skills to ensure your database runs smoothly and efficiently. What this Book will help me do Effectively migrate on-premise SQL Server databases to Azure. Master backup, restore, and security operations with Azure SQL Database. Optimize performance and scalability using monitoring and tuning techniques. Implement high availability and disaster recovery strategies. Simplify database management through automation and advanced techniques. Author(s) Ahmad Osama is a seasoned database admin and Azure expert with extensive experience in SQL Server and cloud database management. As a consultant and trainer, he has guided numerous organizations through cloud transitions. Ahmad's teaching philosophy blends practical insights with clear instruction. Who is it for? This book is intended for database administrators and developers looking to transition their skills to Azure SQL Database. If you have some experience with on-premise SQL Server and are familiar with PowerShell, you'll find this guide invaluable. Ideal for those wanting to develop, migrate, or manage Azure SQL solutions.
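
The book works largely through the portal, T-SQL, and PowerShell; as one illustrative sketch of the monitoring side, the same dynamic management views it relies on can be queried from Python (server, database, and credentials below are placeholders):

    # Sketch: reading top CPU-consuming query stats from an Azure SQL Database.
    import pyodbc

    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver.database.windows.net;"
        "DATABASE=mydb;UID=appuser;PWD=secret"
    )
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT TOP 5 execution_count, total_worker_time "
            "FROM sys.dm_exec_query_stats ORDER BY total_worker_time DESC")
        for row in cursor.fetchall():
            print(row.execution_count, row.total_worker_time)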

Microsoft Power BI Quick Start Guide

Uncover the power of Microsoft Power BI with this accessible and practical guide. This book introduces you to the concepts of data modeling, transformation, and visualization, ensuring that you can build effective dashboards and gain valuable insights. You'll be empowered to productively utilize Power BI in your organization to achieve your analytics goals. What this Book will help me do Connect to various data sources and harness the capabilities of the Query Editor. Transform and clean data for analysis, learning to use languages like M and R. Build robust data models with relationships and powerful DAX expressions. Create impactful reports with efficient and custom visualizations in Power BI. Deploy and administer Power BI solutions both in the cloud and on-premise. Author(s) The authors, Devin Knight, Mitchell Pearson, and Manuel Quintana, are seasoned experts in Business Intelligence and Power BI. They bring years of experience simplifying complex data challenges. Their writing is approachable and hands-on, equipping readers with the skills to solve real-world problems. Who is it for? This book is perfectly suited for professionals in Business Intelligence roles, data analysts, or those aiming to adopt Power BI solutions. Whether you're new to Power BI or have basic BI knowledge, this guide will take you from fundamentals to advanced implementations. Ideal for anyone aiming to unlock actionable insights from their data.

Introduction to IBM Common Data Provider for z Systems

IBM Common Data Provider for z Systems collects, filters, and formats IT operational data in near real-time and provides that data to target analytics solutions. IBM Common Data Provider for z Systems enables authorized IT operations teams using a single web-based interface to specify the IT operational data to be gathered and how it needs to be handled. This data is provided to both on- and off-platform analytic solutions, in a consistent, consumable format for analysis. This Redpaper discusses the value of IBM Common Data Provider for z Systems, provides a high-level reference architecture for IBM Common Data Provider for z Systems, and introduces key components of the architecture. It shows how IBM Common Data Provider for z Systems provides operational data to various analytic solutions. The publication provides high-level integration guidance, preferred practices, tips on planning for IBM Common Data Provider for z Systems, and example integration scenarios.

Ethics and Data Science

As the impact of data science on society continues to grow, there is an increased need to discuss how data is appropriately used and how to address misuse. Yet ethical principles for working with data have been available for decades. The real issue today is how to put those principles into action. With this report, authors Mike Loukides, Hilary Mason, and DJ Patil examine practical ways for making ethical data standards part of your work every day. To help you consider all of the possible ramifications of your work on data projects, this report includes: a sample checklist that you can adapt for your own procedures; five framing guidelines (the Five C’s) for building data products: consent, clarity, consistency, control, and consequences; and suggestions for building ethics into your data-driven culture. Now is the time to invest in a deliberate practice of data ethics, for better products, better teams, and better outcomes. Get a copy of this report and learn what it takes to do good data science today.

Principles and Practice of Big Data, 2nd Edition

Principles and Practice of Big Data: Preparing, Sharing, and Analyzing Complex Information, Second Edition updates and expands on the first edition, bringing a set of techniques and algorithms that are tailored to Big Data projects. The book stresses the point that most data analyses conducted on large, complex data sets can be achieved without the use of specialized suites of software (e.g., Hadoop) and without expensive hardware (e.g., supercomputers). The core of every algorithm described in the book can be implemented in a few lines of code using just about any popular programming language (Python snippets are provided). Through multiple new examples, this edition demonstrates that if we understand our data, and if we know how to ask the right questions, we can learn a great deal from large and complex data collections. The book will assist students and professionals from all scientific backgrounds who are interested in stepping outside the traditional boundaries of their chosen academic disciplines. It presents new methodologies that are widely applicable to just about any project involving large and complex datasets; offers readers informative new case studies across a range of scientific and engineering disciplines; provides insights into semantics, identification, de-identification, vulnerabilities, and regulatory/legal issues; and uses a combination of pseudocode and very short snippets of Python code to show readers how they may develop their own projects without downloading or learning new software.
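
In the spirit of the book's very short Python snippets, a basic de-identification step, replacing record identifiers with a salted one-way hash, fits in a few lines (field names below are invented for illustration):

    # Sketch: de-identifying record identifiers with a salted one-way hash.
    import hashlib

    def deidentify(record_id, salt="project-specific-salt"):
        """Replace an identifier with a stable, irreversible hash."""
        return hashlib.sha256((salt + record_id).encode("utf-8")).hexdigest()

    records = [{"patient_id": "A-1001", "value": 7.2},
               {"patient_id": "A-1002", "value": 5.9}]
    for r in records:
        r["patient_id"] = deidentify(r["patient_id"])
    print(records)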

Streaming Systems

Streaming data is a big deal in big data these days. As more and more businesses seek to tame the massive unbounded data sets that pervade our world, streaming systems have finally reached a level of maturity sufficient for mainstream adoption. With this practical guide, data engineers, data scientists, and developers will learn how to work with streaming data in a conceptual and platform-agnostic way. Expanded from Tyler Akidau’s popular blog posts "Streaming 101" and "Streaming 102", this book takes you from an introductory level to a nuanced understanding of the what, where, when, and how of processing real-time data streams. You’ll also dive deep into watermarks and exactly-once processing with co-authors Slava Chernyak and Reuven Lax. You’ll explore: how streaming and batch data processing patterns compare; the core principles and concepts behind robust out-of-order data processing; how watermarks track progress and completeness in infinite datasets; how exactly-once data processing techniques ensure correctness; how the concepts of streams and tables form the foundations of both batch and streaming data processing; the practical motivations behind a powerful persistent state mechanism, driven by a real-world example; and how time-varying relations provide a link between stream processing and the world of SQL and relational algebra.
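
A concrete taste of the "what, where, when" vocabulary: event-time fixed windows over out-of-order data. The sketch below uses the Apache Beam Python SDK, one engine that implements this model; the data and window size are invented:

    # Sketch: counting events per key in one-minute event-time windows.
    import apache_beam as beam
    from apache_beam.transforms.window import FixedWindows, TimestampedValue

    events = [("user1", 10), ("user2", 25), ("user1", 70)]   # (key, event-time seconds)

    with beam.Pipeline() as p:
        (p
         | beam.Create(events)
         | "AttachEventTime" >> beam.Map(lambda kv: TimestampedValue((kv[0], 1), kv[1]))
         | "FixedOneMinuteWindows" >> beam.WindowInto(FixedWindows(60))
         | "CountPerKey" >> beam.CombinePerKey(sum)
         | beam.Map(print))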

IBM Software-Defined Storage Guide

Today, new business models in the marketplace coexist with traditional ones and their well-established IT architectures. They generate new business needs and new IT requirements that can only be satisfied by new service models and new technological approaches. These changes are reshaping traditional IT concepts. Cloud in its three main variants (Public, Hybrid, and Private) represents the major and most viable answer to those IT requirements, and software-defined infrastructure (SDI) is its major technological enabler. IBM® technology, with its rich and complete set of storage hardware and software products, supports SDI both in an open standard framework and in other vendors' environments. IBM services are able to deliver solutions to the customers with their extensive knowledge of the topic and the experiences gained in partnership with clients. This IBM Redpaper™ publication focuses on software-defined storage (SDS) and IBM Storage Systems product offerings for software-defined environments (SDEs). It also provides use case examples across various industries that cover different client needs, proposed solutions, and results. This paper can help you to understand current organizational capabilities and challenges, and to identify specific business objectives to be achieved by implementing an SDS solution in your enterprise.

Location Analytics for Business

It’s estimated that 80 percent of an organization’s data contains location attributes, but many organizations don’t understand how to unlock the potential of this data to make better decisions. By finding this book, you have just been handed the keys. Readers will unlock these methods by learning about location analytics as well as taking a deep dive into the Planned Grocery® platform created in part by the author. The Planned Grocery® location analytics platform has been mentioned in the Wall Street Journal (twice), Forbes, Bloomberg, and Business Insider. A sampling of Planned Grocery® clients includes Philips Edison and Company, Just Fresh, Slate Retail REIT, Wegmans, and Whole Foods. The practical information in this book is designed to prepare you to recognize and take advantage of situations where you and your organization can become more successful using location analytics. This is accomplished by taking you through an explanation of the fundamentals of location analytics, by looking at various case studies, by learning how to identify and analyze spatial data sets, and by learning about the companies that are doing interesting work in this space.
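
A typical location-analytics question, which existing stores fall inside a proposed site's trade area, reduces to a spatial query. A sketch with geopandas and shapely; coordinates, names, and the 2 km radius are invented for illustration:

    # Sketch: which stores sit within 2 km of a proposed site?
    import geopandas as gpd
    from shapely.geometry import Point

    stores = gpd.GeoDataFrame(
        {"name": ["Store A", "Store B", "Store C"]},
        geometry=[Point(0, 0), Point(1500, 300), Point(4000, 4000)],
        crs="EPSG:3857")                          # projected CRS, units in metres

    site = Point(500, 0)
    trade_area = site.buffer(2000)                # 2 km radius around the site

    nearby = stores[stores.within(trade_area)]
    print(nearby["name"].tolist())                # ['Store A', 'Store B']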

Apache Spark Deep Learning Cookbook

Embark on a journey to master distributed deep learning with the "Apache Spark Deep Learning Cookbook". Designed specifically for leveraging the capabilities of Apache Spark, TensorFlow, and Keras, this book offers over 80 problem-solving recipes to efficiently train and deploy state-of-the-art neural networks, addressing real-world AI challenges. What this Book will help me do Set up and configure a working Apache Spark environment optimized for deep learning tasks. Implement distributed training practices for deep learning models using TensorFlow and Keras. Develop and test neural networks such as CNNs and RNNs targeting specific big data problems. Apply Spark's built-in libraries and integrations for enhanced NLP and computer vision applications. Effectively manage and preprocess large datasets using Spark DataFrames for machine learning tasks. Author(s) Authors Ahmed Sherif and Ravindra bring years of experience in deep learning, Apache Spark use cases, and hands-on practical training. Their collective expertise shaped this cookbook's approach, focusing on clarity and usability for readers tackling challenging machine learning scenarios. Who is it for? This book is ideal for IT professionals, data scientists, and software developers with a foundational understanding of machine learning concepts and Apache Spark framework capabilities. If you aim to scale deep learning and integrate efficient computing with Spark's power, this guide is for you. Familiarity with Python will help maximize the book's potential.
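
The cookbook's general pattern, preparing data with Spark DataFrames and handing features to Keras, can be sketched as follows. Column names are invented, and toPandas() assumes the training sample fits in driver memory:

    # Sketch: Spark DataFrame preparation feeding a small Keras network.
    from pyspark.sql import SparkSession
    import tensorflow as tf

    spark = SparkSession.builder.appName("dl-cookbook-sketch").getOrCreate()

    df = spark.createDataFrame(
        [(0.1, 0.9, 1), (0.8, 0.2, 0), (0.4, 0.6, 1), (0.9, 0.1, 0)],
        ["f1", "f2", "label"])

    pdf = df.toPandas()                           # collect a small training sample
    X, y = pdf[["f1", "f2"]].values, pdf["label"].values

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, verbose=0)

    spark.stop()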