talk-data.com

Topic

XML

Extensible Markup Language (XML)

markup_language data_exchange data_storage file_format


Activity Trend

Quarterly activity chart, 2020-Q1 to 2026-Q1

Activities

289 activities · Newest first

Implementing CDISC Using SAS, 2nd Edition

For decades researchers and programmers have used SAS to analyze, summarize, and report clinical trial data. Now Chris Holland and Jack Shostak have updated their popular Implementing CDISC Using SAS, the first comprehensive book on applying clinical research data and metadata to the Clinical Data Interchange Standards Consortium (CDISC) standards. Implementing CDISC Using SAS: An End-to-End Guide, Revised Second Edition, is an all-inclusive guide on how to implement and analyze the Study Data Tabulation Model (SDTM) and the Analysis Data Model (ADaM) data and prepare clinical trial data for regulatory submission. Updated to reflect the 2017 FDA mandate for adherence to CDISC standards, this new edition covers creating and using metadata, developing conversion specifications, implementing and validating SDTM and ADaM data, determining solutions for legacy data conversions, and preparing data for regulatory submission. The book covers products such as Base SAS, SAS Clinical Data Integration, and the SAS Clinical Standards Toolkit, as well as JMP Clinical. New topics in this edition include an implementation of the Define-XML 2.0 standard, new SDTM domains, validation with Pinnacle 21 software, event narratives in JMP Clinical, SDTM and ADaM metadata spreadsheets, and of course new versions of SAS and JMP software. The second edition was revised to add the latest C-Codes from the most recent release and to update the make_define macro that accompanies this book so that it can handle C-Codes. The metadata spreadsheets were updated accordingly. Any manager or user of clinical trial data in this day and age is likely to benefit from knowing how to either put data into a CDISC standard or analyze and find data once it is in a CDISC format. If you are one such person--a data manager, clinical and/or statistical programmer, biostatistician, or even a clinician--then this book is for you.

Java XML and JSON: Document Processing for Java SE

Use this guide to master the XML metalanguage and JSON data format along with significant Java APIs for parsing and creating XML and JSON documents from the Java language. New in this edition is coverage of Jackson (a JSON processor for Java) and Oracle’s own Java API for JSON Processing (JSON-P), a JSON processing API for Java EE that can also be used with Java SE. This new edition of Java XML and JSON also expands coverage of DOM and XSLT to include additional API content and useful examples. All examples in this book have been tested under Java 11. In some cases, source code has been simplified to use Java 11’s var language feature. The first six chapters focus on XML along with the SAX, DOM, StAX, XPath, and XSLT APIs. The remaining six chapters focus on JSON along with the mJson, GSON, JsonPath, Jackson, and JSON-P APIs. Each chapter ends with select exercises designed to challenge your grasp of the chapter's content. An appendix provides the answers to these exercises. What You'll Learn Master the XML language Create, validate, parse, and transform XML documents Apply Java’s SAX, DOM, StAX, XPath, and XSLT APIs Master the JSON format for serializing and transmitting data Code against third-party APIs such as Jackson, mJson, Gson, JsonPath Master Oracle’s JSON-P API in a Java SE context Who This Book Is For Intermediate and advanced Java programmers who are developing applications that must access data stored in XML or JSON documents. The book also targets developers wanting to understand the XML language and JSON data format.
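
As a taste of the DOM coverage described above, here is a minimal sketch that parses a small XML document with the JDK's built-in DOM API and prints element text. The inline document and element names are invented for the example; a file or stream source works the same way.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;

    public class DomExample {
        public static void main(String[] args) throws Exception {
            String xml = "<books><book><title>Java XML and JSON</title></book></books>";
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true); // good practice even for simple documents
            DocumentBuilder builder = factory.newDocumentBuilder();
            // Parse from an in-memory string; builder.parse(new File(...)) works for files.
            Document doc = builder.parse(new InputSource(new StringReader(xml)));
            NodeList titles = doc.getElementsByTagName("title");
            for (int i = 0; i < titles.getLength(); i++) {
                System.out.println(titles.item(i).getTextContent());
            }
        }
    }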

Access 2019 Bible

Master database creation and management Access 2019 Bible is your comprehensive reference to the world's most popular database management tool. With clear guidance toward everything from the basics to the advanced, this go-to reference helps you take advantage of everything Access 2019 has to offer. Whether you're new to Access or getting started with Access 2019, you'll find everything you need to know to create the database solution perfectly tailored to your needs, with expert guidance every step of the way. The companion website features all examples and databases used in the book, plus trial software and a special offer from Database Creations. Start from the beginning for a complete tutorial, or dip in and grab what you need when you need it. Access enables database novices and programmers to store, organize, view, analyze, and share data, as well as build powerful, integrable, custom database solutions — but databases can be complex and difficult to navigate. This book helps you harness the power of databases with a solid understanding of their purpose, construction, and application. Understand database objects and design systems Build forms, create tables, manipulate datasheets, and add data validation Use Visual Basic automation and XML Data Access Page design Exchange data with other Office applications, including Word, Excel, and more From database fundamentals and terminology to XML and Web services, this book has everything you need to maximize Access 2019 and build the database you need.

SQL Server Advanced Data Types: JSON, XML, and Beyond

Deliver advanced functionality faster and cheaper by exploiting SQL Server's ever-growing amount of built-in support for modern data formats. Learn about the growing support within SQL Server for operations and data transformations that have previously required third-party software and all the associated licensing and development costs. Benefit through a better understanding of what can be done inside the database engine with no additional costs or development time invested in outside software. Widely used types such as JSON and XML are well-supported by the database engine. The same is true of hierarchical data and even temporal data. Knowledge of these advanced types is crucial to unleashing the full power that's available from your organization's SQL Server database investment. SQL Server Advanced Data Types explores each of the complex data types supplied within SQL Server. Common usage scenarios for each complex data type are discussed, followed by a detailed discussion on how to work with each data type. Each chapter demystifies a complex data type, and you learn how to use each type most efficiently. The book offers a practical guide to working with complex data, using real-world examples to demonstrate how each data type can be leveraged. Performance considerations are also discussed, including the implementation of special indexes such as XML indexes and spatial indexes. What You'll Learn Understand the implementation of basic data types and why using the correct type is so important Work with XML data through the XML data type Construct XML data from relational result sets Store and manipulate JSON data using the JSON data type Model and analyze spatial data for geographic information systems Define hierarchies and query them efficiently through the HierarchyID type Who This Book Is For SQL Server developers and application developers who need to store and access complex data structures
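
To make the XML data type concrete, the hedged sketch below queries an XML column from Java over JDBC using SQL Server's value() method, which runs an XQuery path against the column and casts the result to a SQL type. The connection string, the dbo.Orders table, and the OrderXml column are hypothetical, and the Microsoft JDBC driver (mssql-jdbc) is assumed on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class XmlTypeQuery {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details for illustration only.
            String url = "jdbc:sqlserver://localhost;databaseName=Sales;encrypt=false";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 // value() extracts a scalar from the XML column; exist() and query()
                 // are the related methods for predicates and fragment extraction.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT OrderId, " +
                     "OrderXml.value('(/Order/Customer/@name)[1]', 'nvarchar(100)') AS Customer " +
                     "FROM dbo.Orders")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("OrderId") + ": " + rs.getString("Customer"));
                }
            }
        }
    }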

Summary

Data integration and routing is a constantly evolving problem, one fraught with edge cases and complicated requirements. The Apache NiFi project models this problem as a collection of data flows that are created through a self-service graphical interface. This framework provides a flexible platform for building a wide variety of integrations that can be managed and scaled easily to fit your particular needs. In this episode project members Kevin Doran and Andy LoPresto discuss the ways that NiFi can be used, how to start using it in your environment, and plans for future development. They also explain how it fits in the broad landscape of data tools, the interesting and challenging aspects of the project, and how to build new extensions.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. After that, learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Your host is Tobias Macey and today I’m interviewing Kevin Doran and Andy LoPresto about Apache NiFi.

Interview

Introduction

How did you get involved in the area of data management?

Can you start by explaining what NiFi is?

What is the motivation for building a GUI as the primary interface for the tool when the current trend is to represent everything as code?

How did you get involved with the project?

Where does it sit in the broader landscape of data tools?

Does the data that is processed by NiFi flow through the servers that it is running on (à la Spark/Flink/Kafka), or does it orchestrate actions on other systems (à la Airflow/Oozie)?

How do you manage versioning and backup of data flows, as well as promoting them between environments?

One of the advertised features is tracking provenance for data flows that are managed by NiFi. How is that data collected and managed?

What types of reporting are available across this information?

What are some of the use cases or requirements that lend themselves well to being solved by NiFi?

When is NiFi the wrong choice?

What is involved in deploying and scaling a NiFi installation?

What are some of the system/network parameters that should be considered?

What are the scaling limitations?

What have you found to be some of the most interesting, unexpected, and/or challenging aspects of building and maintaining the NiFi project and community?

What do you have planned for the future of NiFi?

Contact Info

Kevin Doran

@kevdoran on Twitter Email

Andy LoPresto

@yolopey on Twitter Email

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

NiFi HortonWorks DataFlow HortonWorks Apache Software Foundation Apple CSV XML JSON Perl Python Internet Scale Asset Management Documentum DataFlow NSA (National Security Agency) 24 (TV Show) Technology Transfer Program Agile Software Development Waterfall Spark Flink Kafka Oozie Luigi Airflow FluentD ETL (Extract, Transform, and Load) ESB (Enterprise Service Bus) MiNiFi Java C++ Provenance Kubernetes Apache Atlas Data Governance Kibana K-Nearest Neighbors DevOps DSL (Domain Specific Language) NiFi Registry Artifact Repository Nexus NiFi CLI Maven Archetype IoT Docker Backpressure NiFi Wiki TLS (Transport Layer Security) Mozilla TLS Observatory NiFi Flow Design System Data Lineage GDPR (General Data Protection Regulation)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

MarkLogic Cookbook

Learn how to get the most out of MarkLogic with recipes from people who understand this powerful multi-model database platform from the inside out. MarkLogic comes with a broad set of capabilities to help you quickly integrate data from silos, but it takes time to learn how to harness that power. In this three-part series, key members of the MarkLogic team—including engineers who built the database—provide targeted recipes to get you up to speed. In Part 1, you’ll learn how to solve real-world problems with XQuery, the functional language for working with hierarchical data structures such as XML. Part 2 helps you solve common search-related problems with recipes that work with MarkLogic 9 as well as with older versions. With recipes in Part 3, you’ll explore the multiple ways MarkLogic represents data. XQuery: Gain XQuery peak performance, and explore its use in maps, documents, document security, the task server, and administration Search-related problems: Conduct document searches, score search results, understand how data is used, and search with the Optic API MarkLogic and data: Work with input transformations, tokenization, template-driven extraction, and redaction

Camel in Action, Second Edition

Camel in Action, Second Edition is the most complete Camel book on the market. Written by core developers of Camel and the authors of the highly acclaimed first edition, this book distills their experience and practical insights so that you can tackle integration tasks like a pro. About the Technology Apache Camel is a Java framework that implements enterprise integration patterns (EIPs) and comes with over 200 adapters to third-party systems. A concise DSL lets you build integration logic into your app with just a few lines of Java or XML. By using Camel, you benefit from the testing and experience of a large and vibrant open source community. About the Book Camel in Action, Second Edition is the definitive guide to the Camel framework. It starts with core concepts like sending, receiving, routing, and transforming data. It then goes in depth on many topics such as how to develop, debug, test, deal with errors, secure, scale, cluster, deploy, and monitor your Camel applications. The book also discusses how to run Camel with microservices, reactive systems, containers, and in the cloud. What's Inside Coverage of all relevant EIPs Camel microservices with Spring Boot Camel on Docker and Kubernetes Error handling, testing, security, clustering, monitoring, and deployment Hundreds of examples in Java and XML About the Reader Readers should be familiar with Java. This book is accessible to beginners and invaluable to experts. About the Authors Claus Ibsen is a senior principal engineer working for Red Hat specializing in cloud and integration. He has worked on Apache Camel for the last nine years where he heads the project. Claus lives in Denmark. Jonathan Anstey is an engineering manager at Red Hat and a core Camel contributor. He lives in Newfoundland, Canada. Quotes I highly recommend this book to anyone with even a passing interest in Apache Camel. Do take Camel for a ride...and don't get the hump! - From the Foreword by James Strachan, Creator of Apache Camel Claus and Jon are great writers, relying on figures and diagrams where needed and presenting lots of code snippets and worked examples. - From the Foreword by Dr. Mark Little, Technical Director of JBoss The second edition of this all-time classic is an indispensable companion for your Apache Camel rides. - Gregor Zurowski, Apache Camel Committer The absolute best way to learn and use Camel - top to bottom, front to back, and all the way through. Camel is a fantastic tool - every Java coder should have a copy of this book. - Rick Wagner, Red Hat An excellent book and the definitive reference for experienced engineers. - Yan Guo, EventBrite
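
For a flavor of the Java DSL described above, here is a minimal sketch of a Camel 2.x route that polls a directory and forwards only files whose root element matches an XPath predicate. The directory names and the XPath expression are illustrative, and camel-core plus its XPath support are assumed on the classpath.

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class FilterOrdersRoute {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Poll data/inbox, keep XML orders only, and copy them to data/outbox.
                    from("file:data/inbox?noop=true")
                        .filter(xpath("/order[@type = 'xml']"))
                        .to("file:data/outbox");
                }
            });
            context.start();
            Thread.sleep(5000); // let the route poll a few times before shutting down
            context.stop();
        }
    }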

XML and JSON Recipes for SQL Server: A Problem-Solution Approach

Quickly find solutions to dozens of common problems encountered while using XML and JSON features that are built into SQL Server. Content is presented in the popular problem-solution format. Look up the problem that you want to solve. Read the solution. Apply the solution directly in your own code. Problem solved! This book shows how to take advantage of XML and JSON to share data and automate tasks. JSON is commonly used to move data back and forth between the database and front-end applications, often running in a browser. This book shows all you need to know about transforming query results into JSON format, and back again. Also covered are the processes and techniques for moving data into and out of XML format for business intelligence and other purposes, such as when transferring data from a reporting system into a data warehouse, or between different database brands such as between SQL Server and Oracle. Microsoft intensively implements XML in SQL Server, and in many related products. Execution plans are generated in XML format, and this book shows you how to parse those plans and automate the detection of performance problems. The relatively new Extended Events feature writes tracing data into XML files, and the recipes in this book help in parsing those files. XML is also used in SQL Server's BI tool set, including in SSIS, SSRS, and SSAS. XML is used in many configuration files, and is even behind the construction of DDL triggers. In reading this book you’ll dive deeply into the features that allow you to build and parse XML, as well as JSON, a lightweight text format used to transmit objects in a web-friendly way between a database and its front-end applications. What You Will Learn Build XML and JSON objects in support of automation and data transfer Import and parse XML and JSON from operating system files Build appropriate indexes on XML objects to improve query performance Move data from query result sets into JSON format, and back again Automate the detection of database performance problems by querying and parsing the database’s own execution plans Replace external and manual JSON processes with SQL Server's internal JSON functionality Who This Book Is For Database administrators, .NET developers, business intelligence developers, and other professionals who want a deep and detailed skill set around working with XML and JSON in a SQL Server database environment. Web developers will particularly find the book useful for its coverage of transforming database result sets into JSON text that can be transmitted to front-end web applications.
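
As one sketch of the "back again" direction mentioned above, the snippet below shreds a JSON string into relational rows with SQL Server's OPENJSON table-valued function, called from Java via JDBC. The connection details and JSON payload are invented for the example, and OPENJSON requires database compatibility level 130 or higher.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class OpenJsonExample {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost;databaseName=Sales;encrypt=false";
            String json = "[{\"id\":1,\"name\":\"Ada\"},{\"id\":2,\"name\":\"Lin\"}]";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                     // The WITH clause maps JSON properties onto typed columns.
                     "SELECT id, name FROM OPENJSON(?) " +
                     "WITH (id int '$.id', name nvarchar(50) '$.name')")) {
                ps.setString(1, json);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
                    }
                }
            }
        }
    }

Going the other way, appending FOR JSON PATH to a SELECT serializes the result set as a JSON array of objects in a single call.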

Beginning XML with C# 7: XML Processing and Data Access for C# Developers

Master the basics of XML as well as the namespaces and objects you need to know in order to work efficiently with XML. You’ll learn about the extensive support for XML in everything from data access to configuration, from raw parsing to code documentation. You will see clear, practical examples that illustrate best practices in implementing XML APIs and services as part of your C#-based Windows 10 applications. Beginning XML with C# 7 is completely revised to cover the XML features of .NET Framework 4.7 using the C# 7 programming language. In this update, you’ll discover the tight integration of XML with ADO.NET and LINQ as well as additional .NET support for today’s RESTful web services and Web API. Written by a Microsoft Most Valuable Professional and developer, this book demystifies everything to do with XML and C# 7. What You Will Learn: Discover how XML works with the .NET Framework Read, write, access, validate, and manipulate XML documents Transform XML with XSLT Use XML serialization and web services Combine XML in ADO.NET and SQL Server Create services using Windows Communication Foundation Work with LINQ Use XML with Web API and more Who This Book Is For: Those with experience in C# and .NET new to the nuances of using XML. Some XML experience is helpful.

Summary

With the wealth of formats for sending and storing data it can be difficult to determine which one to use. In this episode Doug Cutting, creator of Avro, and Julien Le Dem, creator of Parquet, dig into the different classes of serialization formats, what their strengths are, and how to choose one for your workload. They also discuss the role of Arrow as a mechanism for in-memory data sharing and how hardware evolution will influence the state of the art for data formats.
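
For a concrete feel for what a schema-driven format involves, here is a minimal sketch that writes one record to an Avro container file using the Avro Java library's GenericRecord API. The User schema and field values are invented for the example.

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;

    public class AvroWriteExample {
        public static void main(String[] args) throws Exception {
            // Avro schemas are JSON; the schema travels with the file,
            // so records on disk carry values only, not field names.
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "{\"name\":\"name\",\"type\":\"string\"},"
                + "{\"name\":\"age\",\"type\":\"int\"}]}");
            GenericRecord user = new GenericData.Record(schema);
            user.put("name", "Ada");
            user.put("age", 36);
            try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
                writer.create(schema, new File("users.avro"));
                writer.append(user);
            }
        }
    }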

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure. When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show. Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch. You can help support the show by checking out the Patreon page which is linked from the site. To help other people find the show you can leave a review on iTunes, or Google Play Music, and tell your friends and co-workers. This is your host Tobias Macey and today I’m interviewing Julien Le Dem and Doug Cutting about data serialization formats and how to pick the right one for your systems.

Interview

Introduction

How did you first get involved in the area of data management?

What are the main serialization formats used for data storage and analysis?

What are the tradeoffs that are offered by the different formats?

How have the different storage and analysis tools influenced the types of storage formats that are available?

You’ve each developed a new on-disk data format, Avro and Parquet respectively. What were your motivations for investing that time and effort?

Why is it important for data engineers to carefully consider the format in which they transfer their data between systems?

What are the switching costs involved in moving from one format to another after you have started using it in a production system?

What are some of the new or upcoming formats that you are each excited about?

How do you anticipate the evolving hardware, patterns, and tools for processing data to influence the types of storage formats that maintain or grow their popularity?

Contact Information

Doug:

cutting on GitHub Blog @cutting on Twitter

Julien

Email @J_ on Twitter Blog julienledem on GitHub

Links

Apache Avro Apache Parquet Apache Arrow Hadoop Apache Pig Xerox Parc Excite Nutch Vertica Dremel White Paper

Twitter Blog on Release of Parquet

CSV XML Hive Impala Presto Spark SQL Brotli ZStandard Apache Drill Trevni Apache Calcite

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

PostgreSQL: Up and Running, 3rd Edition

Thinking of migrating to PostgreSQL? This clear, fast-paced introduction helps you understand and use this open source database system. Not only will you learn about the enterprise class features in versions 9.5 to 10, you’ll also discover that PostgreSQL is more than a database system—it’s an impressive application platform as well. With examples throughout, this book shows you how to achieve tasks that are difficult or impossible in other databases. This third edition covers new features, such as ANSI-SQL constructs found only in proprietary databases until now; foreign data wrapper (FDW) enhancements; new full text functions and operator syntax introduced in version 9.6; XML constructs new in version 10; query parallelization features introduced in 9.6 and enhanced in 10; and built-in logical replication introduced in version 10. If you’re a current PostgreSQL user, you’ll pick up gems you may have missed before. Learn basic administration tasks such as role management, database creation, backup, and restore Apply the psql command-line utility and the pgAdmin graphical administration tool Explore PostgreSQL tables, constraints, and indexes Learn powerful SQL constructs not generally found in other databases Use several different languages to write database functions Tune your queries to run as fast as your hardware will allow Query external and variegated data sources with foreign data wrappers Learn how to use built-in replication to replicate data

Apache Spark 2.x for Java Developers

Delve into mastering big data processing with 'Apache Spark 2.x for Java Developers.' This book provides a practical guide to implementing Apache Spark using the Java APIs, offering a unique opportunity for Java developers to leverage Spark's powerful framework without transitioning to Scala. What this Book will help me do Learn how to process data from formats like XML, JSON, and CSV using Spark Core. Implement real-time analytics using Spark Streaming and third-party tools like Kafka. Understand data querying with Spark SQL and master SQL schema processing. Apply machine learning techniques with Spark MLlib to real-world scenarios. Explore graph processing and analytics using Spark GraphX. Author(s) Kumar and Gulati, experienced professionals in Java development and big data, bring their wealth of practical experience and passion for teaching to this book. With a clear and concise writing style, they aim to simplify Spark for Java developers, making big data approachable. Who is it for? This book is perfect for Java developers who are eager to expand their skillset into big data processing with Apache Spark. Whether you are a seasoned Spark user or diving into big data concepts for the first time, this book meets you at your level. With practical examples and straightforward explanations, you can unlock the potential of Spark in real-world scenarios.
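
As a small taste of the Spark-from-Java workflow the book describes, this sketch loads a JSON file into a Dataset with the Spark SQL Java API and runs a simple filter. The file name people.json, the age column, and the local-mode master are assumptions for illustration.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ReadJsonExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("read-json")
                .master("local[*]") // local mode, handy for experimenting
                .getOrCreate();
            // Spark infers a schema by sampling the JSON records.
            Dataset<Row> people = spark.read().json("people.json");
            people.printSchema();
            people.filter("age > 30").show();
            spark.stop();
        }
    }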

Exam Ref 70-761 Querying Data with Transact-SQL, 1st Edition

Prepare for Microsoft Exam 70-761–and help demonstrate your real-world mastery of SQL Server 2016 Transact-SQL data management, queries, and database programming. Designed for experienced IT professionals ready to advance their status, Exam Ref focuses on the critical-thinking and decision-making acumen needed for success at the MCSA level. Focus on the expertise measured by these objectives: Filter, sort, join, aggregate, and modify data Use subqueries, table expressions, grouping sets, and pivoting Query temporal and non-relational data, and output XML or JSON Create views, user-defined functions, and stored procedures Implement error handling, transactions, data types, and nulls This Microsoft Exam Ref: Organizes its coverage by exam objectives Features strategic, what-if scenarios to challenge you Assumes you have experience working with SQL Server as a database administrator, system engineer, or developer Includes downloadable sample database and code for SQL Server 2016 SP1 (or later) and Azure SQL Database Querying Data with Transact-SQL About the Exam Exam 70-761 focuses on the skills and knowledge necessary to manage and query data and to program databases with Transact-SQL in SQL Server 2016. About Microsoft Certification Passing this exam earns you credit toward a Microsoft Certified Solutions Associate (MCSA) certification that demonstrates your mastery of essential skills for building and implementing on-premises and cloud-based databases across organizations. Exam 70-762 (Developing SQL Databases) is also required for MCSA: SQL 2016 Database Development certification. See full details at: microsoft.com/learning

R: Predictive Analysis

Master the art of predictive modeling About This Book Load, wrangle, and analyze your data using the world's most powerful statistical programming language Familiarize yourself with the most common data mining tools of R, such as k-means, hierarchical regression, linear regression, Naïve Bayes, decision trees, text mining and so on. We emphasize important concepts, such as the bias-variance trade-off and over-fitting, which are pervasive in predictive modeling Who This Book Is For If you work with data and want to become an expert in predictive analysis and modeling, then this Learning Path will serve you well. It is intended for budding and seasoned practitioners of predictive modeling alike. You should have basic knowledge of the use of R, although it’s not necessary to put this Learning Path to great use. What You Will Learn Get to know the basics of R’s syntax and major data structures Write functions, load data, and install packages Use different data sources in R and know how to interface with databases, and request and load JSON and XML Identify the challenges and apply your knowledge about data analysis in R to imperfect real-world data Predict the future with reasonably simple algorithms Understand key data visualization and predictive analytic skills using R Understand the language of models and the predictive modeling process In Detail Predictive analytics is a field that uses data to build models that predict a future outcome of interest. It can be applied to a range of business strategies and has been a key player in search advertising and recommendation engines. The power and domain-specificity of R allows the user to express complex analytics easily, quickly, and succinctly. R offers a free and open source environment that is perfect for both learning and deploying predictive modeling solutions in the real world. This Learning Path will provide you with all the steps you need to master the art of predictive modeling with R. We start with an introduction to data analysis with R, and then gradually you’ll get your feet wet with predictive modeling. You will get to grips with the fundamentals of applied statistics and build on this knowledge to perform sophisticated and powerful analytics. You will be able to solve the difficulties relating to performing data analysis in practice and find solutions to working with “messy data”, large data, communicating results, and facilitating reproducibility. You will then perform key predictive analytics tasks using R, such as train and test predictive models for classification and regression tasks, score new data sets and so on. By the end of this Learning Path, you will have explored and tested the most popular modeling techniques in use on real-world data sets and mastered a diverse range of techniques in predictive analytics. This Learning Path combines some of the best that Packt has to offer in one complete, curated package. It includes content from the following Packt products: Data Analysis with R, Tony Fischetti Learning Predictive Analytics with R, Eric Mayor Mastering Predictive Analytics with R, Rui Miguel Forte Style and approach Learn data analysis using engaging examples and fun exercises, and with a gentle and friendly but comprehensive "learn-by-doing" approach. This is a practical course, which analyzes compelling data about life, health, and death with the help of tutorials. It offers you a useful way of interpreting the data that’s specific to this course, but that can also be applied to any other data. 
This course is designed to be both a guide and a reference for moving beyond the basics of predictive modeling. Downloading the example code for this book: You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the code files emailed directly to you.

Sams Teach Yourself Microsoft® SQL Server T-SQL in 10 Minutes, Second Edition

Sams Teach Yourself Microsoft SQL Server T-SQL in 10 Minutes offers straightforward, practical answers when you need fast results. By working through the book’s 30 lessons of 10 minutes or less, you’ll learn what you need to know to take advantage of Microsoft SQL Server’s T-SQL language. This handy pocket guide starts with simple data retrieval and moves on to more complex topics, including the use of joins, subqueries, full text-based searches, functions and stored procedures, cursors, triggers, table constraints, XML, JSON, and much more. Learn how to… Use T-SQL in the Microsoft SQL Server environment Construct complex T-SQL statements using multiple clauses and operators Filter data so you get the information you need quickly Retrieve, sort, and format database contents Join two or more related tables Make SQL Server work for you with globalization and localization Create subqueries to pinpoint your data Automate your workload with triggers Create and alter database tables Work with views, stored procedures, and more Contents at a Glance 1 Understanding SQL 2 Introducing SQL Server 3 Working with SQL Server 4 Retrieving Data 5 Sorting Retrieved Data 6 Filtering Data 7 Advanced Data Filtering 8 Using Wildcard Filtering 9 Creating Calculated Fields 10 Using Data Manipulation Functions 11 Summarizing Data 12 Grouping Data 13 Working with Subqueries 14 Joining Tables 15 Creating Advanced Joins 16 Combining Queries 17 Full-Text Searching 18 Inserting Data 19 Updating and Deleting Data 20 Creating and Manipulating Tables 21 Using Views 22 Programming with T-SQL 23 Working with Stored Procedures 24 Using Cursors 25 Using Triggers 26 Managing Transaction Processing 27 Working with XML and JSON 28 Globalization and Localization 29 Managing Security 30 Improving Performance A The Example Tables B T-SQL Statement Syntax C T-SQL Datatypes D T-SQL Reserved Words

Implementing CDISC Using SAS

For decades researchers and programmers have used SAS to analyze, summarize, and report clinical trial data. Now Chris Holland and Jack Shostak have updated their popular Implementing CDISC Using SAS, the first comprehensive book on applying clinical research data and metadata to the Clinical Data Interchange Standards Consortium (CDISC) standards.

Implementing CDISC Using SAS: An End-to-End Guide, Second Edition, is an all-inclusive guide on how to implement and analyze the Study Data Tabulation Model (SDTM) and the Analysis Data Model (ADaM) data and prepare clinical trial data for regulatory submission. Updated to reflect the 2017 FDA mandate for adherence to CDISC standards, this new edition covers creating and using metadata, developing conversion specifications, implementing and validating SDTM and ADaM data, determining solutions for legacy data conversions, and preparing data for regulatory submission. The book covers products such as Base SAS, SAS Clinical Data Integration, and the SAS Clinical Standards Toolkit, as well as JMP Clinical. New topics in this edition include an implementation of the Define-XML 2.0 standard, new SDTM domains, validation with Pinnacle 21 software, event narratives in JMP Clinical, and of course new versions of SAS and JMP software.

Any manager or user of clinical trial data in this day and age is likely to benefit from knowing how to either put data into a CDISC standard or analyze and find data once it is in a CDISC format. If you are one such person--a data manager, clinical and/or statistical programmer, biostatistician, or even a clinician--then this book is for you.

Microsoft SQL Server 2016: A Beginner's Guide, Sixth Edition, 6th Edition

Up-to-date Microsoft SQL Server 2016 skills made easy! Get up and running on Microsoft SQL Server 2016 in no time with help from this thoroughly revised, practical resource. The book offers thorough coverage of SQL management and development and features full details on the newest business intelligence, reporting, and security features. Filled with new real-world examples and hands-on exercises, Microsoft SQL Server 2016: A Beginner's Guide, Sixth Edition, starts by explaining fundamental relational database system concepts. From there, you will learn how to write Transact-SQL statements, execute simple and complex database queries, handle system administration and security, and use the powerful analysis and BI tools. XML, spatial data, and full-text search are also covered in this step-by-step tutorial. · Revised from the ground up to cover the latest version of SQL Server · Ideal both as a self-study guide and a classroom textbook · Written by a prominent professor and best-selling author

Java XML and JSON

Java XML and JSON is your one-stop guide to mastering the XML metalanguage and JSON data format along with significant Java APIs for parsing and creating XML/JSON documents (and more). The first six chapters focus on XML along with the SAX, DOM, StAX, XPath, and XSLT APIs. The remaining four chapters focus on JSON along with the mJson, GSON, and JsonPath APIs. Each chapter ends with select exercises designed to challenge your grasp of the chapter's content. An appendix provides the answers to these exercises. What You'll Learn Master the XML language Learn how to validate XML documents Learn how to parse XML documents with the SAX, DOM, and StAX APIs Learn how to create XML documents with the DOM and StAX APIs Learn how to extract values from XML documents with the XPath API Learn how to transform XML documents with the XSLT API Master the JSON format Learn how to validate JSON documents Learn how to parse and create JSON documents with the mJson and Gson APIs Learn how to extract values from JSON documents with the JsonPath API Who This Book Is For Intermediate or advanced Java programmers/developers.
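
To show the kind of extraction the XPath chapter addresses, here is a minimal sketch that uses the JDK's javax.xml.xpath API to select nodes matching a predicate. The inline XML and the price threshold are invented for the example.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;

    public class XPathExample {
        public static void main(String[] args) throws Exception {
            String xml = "<library>"
                + "<book price='30'><title>XQuery</title></book>"
                + "<book price='50'><title>Camel in Action</title></book>"
                + "</library>";
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
            XPath xpath = XPathFactory.newInstance().newXPath();
            // Select the titles of books priced under 40.
            NodeList cheap = (NodeList) xpath.evaluate(
                "/library/book[@price < 40]/title/text()", doc, XPathConstants.NODESET);
            for (int i = 0; i < cheap.getLength(); i++) {
                System.out.println(cheap.item(i).getNodeValue()); // prints: XQuery
            }
        }
    }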

Learning Pentaho CTools

Learning Pentaho CTools is a comprehensive guide to building sophisticated and custom analytics dashboards using the powerful capabilities of Pentaho CTools. This book walks you through the process of creating interactive dashboards, integrating data sources, and applying data visualization best practices. You'll quickly gain the expertise needed to create impactful dashboards with ease. What this Book will help me do Master installing and configuring CTools for Pentaho to jumpstart dashboard development. Harness diverse data sources and deliver data in formats like CSV, JSON, and XML for customized analytics. Design and implement dynamic, visually stunning dashboards using Community Dashboard Framework (CDF). Deploy and integrate plugins, leverage widgets, and manage dashboards effectively with version control. Enhance interactivity by customizing dashboard components, charts, and filters to suit unique requirements. Author(s) None Gaspar, an expert in Pentaho and its tools, has been a Senior Consultant at Pentaho, where he gained in-depth experience crafting analytics solutions. He brings to this book his teaching passion and field expertise, combining theoretical insights with practical applications. His approachable style ensures readers can follow technical concepts effectively. Who is it for? This book is ideal for developers who are looking to enhance their understanding of Pentaho's CTools portfolio to build advanced dashboards. A working knowledge of JavaScript and CSS will enable readers to get the most out of this guide. Whether you aim to extend your analytics capabilities or learn the tools from scratch, this book bridges the gap between learning and application.

XQuery, 2nd Edition

The W3C XQuery 3.1 standard provides a tool to search, extract, and manipulate content, whether it's in XML, JSON or plain text. With this fully updated, in-depth tutorial, you’ll learn to program with this highly practical query language. Designed for query writers who have some knowledge of XML basics, but not necessarily advanced knowledge of XML-related technologies, this book is ideal as both a tutorial and a reference. You’ll find background information for namespaces, schemas, built-in types, and regular expressions that are relevant to writing XML queries.
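
XQuery is often run from a host program as well as from the command line; as one hedged sketch, the snippet below evaluates a small FLWOR expression from Java using Saxon's s9api, assuming the Saxon-HE jar is on the classpath. The expression itself is illustrative; real queries would typically bind doc(...) input.

    import net.sf.saxon.s9api.Processor;
    import net.sf.saxon.s9api.XQueryCompiler;
    import net.sf.saxon.s9api.XQueryEvaluator;
    import net.sf.saxon.s9api.XQueryExecutable;
    import net.sf.saxon.s9api.XdmItem;
    import net.sf.saxon.s9api.XdmValue;

    public class XQueryFromJava {
        public static void main(String[] args) throws Exception {
            Processor processor = new Processor(false); // false selects Saxon-HE
            XQueryCompiler compiler = processor.newXQueryCompiler();
            // A tiny FLWOR expression: sort an inline sequence and scale it.
            XQueryExecutable exec = compiler.compile(
                "for $n in (3, 1, 2) order by $n return $n * 10");
            XQueryEvaluator evaluator = exec.load();
            XdmValue result = evaluator.evaluate();
            for (XdmItem item : result) {
                System.out.println(item.getStringValue()); // 10, 20, 30
            }
        }
    }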