Once every four years in the United States, there is this thing called a "presidential election." It's a pretty boring affair, in that there is so much harmony amongst the electorate, and the two main candidates are pretty indistinguishable when it comes to their world views, policy ideas, and temperaments. But, despite the blandness of the contest, digging into how the professionals go about forecasting the outcome is an intriguing topic. It turns out that forecasting, be it of the political or the marketing variety, is chock full of considerations like data quality, the quantification of uncertainty, and even the opportunity to run simulations! On this episode, we sat down with G. Elliott Morris, creator of The Crosstab newsletter and a member of the political forecasting team for The Economist, to chat about the ins and outs of predicting the future with a limited set of historical data and a boatload of uncertainty. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
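To make the "run simulations" aside concrete, here is a minimal Monte Carlo sketch of the kind of election simulation forecasters describe; every state name, margin, uncertainty, and electoral-vote count is invented for illustration and comes from neither the episode nor any real model.

```python
# A minimal, illustrative Monte Carlo election simulation: sample a shared
# "national" polling error plus independent state-level noise, tally electoral
# votes, and report how often candidate A wins. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# state: (mean polled margin for candidate A, std dev of error, electoral votes)
states = {
    "State1": (0.04, 0.05, 10),
    "State2": (-0.02, 0.04, 20),
    "State3": (0.01, 0.06, 15),
}

n_sims, needed = 10_000, 23   # 23 of the 45 invented electoral votes to win
wins = 0
for _ in range(n_sims):
    national_shift = rng.normal(0, 0.02)   # error shared across all states
    ev = sum(votes
             for mean, sd, votes in states.values()
             if rng.normal(mean, sd) + national_shift > 0)
    wins += ev >= needed

print(f"Candidate A wins in {wins / n_sims:.1%} of simulations")
```

The shared-error term is the important design choice: treating state outcomes as independent would dramatically understate the uncertainty of the total.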
talk-data.com
Topic: Data Quality (537 items tagged)
The Data Wrangling Workshop is your beginner's guide to the essential techniques and practices of data manipulation using Python. Throughout the book, you will progressively build your skills, learning key concepts such as extracting, cleaning, and transforming data into actionable insights. By the end, you'll be confident in handling a variety of data wrangling tasks efficiently.
What this Book will help me do
Understand and apply the fundamentals of data wrangling using Python. Combine and aggregate data from diverse sources like web data, SQL databases, and spreadsheets. Use descriptive statistics and plotting to examine dataset properties. Handle missing or incorrect data effectively to maintain data quality. Gain hands-on experience with Python's powerful data science libraries like Pandas, NumPy, and Matplotlib.
Author(s)
Brian Lipp, Shubhadeep Roychowdhury, and Dr. Tirthajyoti Sarkar are experienced educators and professionals in the fields of data science and engineering. Their collective expertise spans years of teaching and working with data technologies. They aim to make data wrangling accessible and comprehensible, focusing on practical examples to equip learners with real-world skills.
Who is it for?
The Data Wrangling Workshop is ideal for developers, data analysts, and business analysts aiming to become data scientists or analytics experts. If you're just getting started with Python, this book will guide you step by step. A basic understanding of Python programming, relational databases, and SQL is recommended for smooth learning.
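As a taste of the missing-and-incorrect-data topic above, here is a minimal pandas sketch; the DataFrame, column names, and cleanup rules are invented for illustration and are not taken from the book.

```python
# A minimal data-cleaning sketch with pandas: treat implausible values as
# missing, impute a numeric column, and drop rows lacking a required field.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":   [34, np.nan, 29, 120, 41],                     # 120 is implausible
    "email": ["a@x.com", "b@x.com", None, "d@x.com", "e@x.com"],
})

df.loc[df["age"] > 100, "age"] = np.nan            # flag implausible ages as missing
df["age"] = df["age"].fillna(df["age"].median())   # impute with the median
df = df.dropna(subset=["email"])                   # email is required downstream
print(df)
```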
While Airflow is a central product for data engineering teams, it’s usually one piece of a bigger puzzle. The vast majority of teams use Airflow in combination with other tools like Spark, Snowflake, and BigQuery. Making sure pipelines are reliable, detecting issues that lead to SLA misses, and identifying data quality problems all require deep visibility into DAGs and data flows. Join this session to learn how Databand’s observability system makes it easy to monitor your end-to-end pipeline health and quickly remediate issues. This is a sponsored talk, presented by Databand.
Summary
The landscape of data management and processing is rapidly changing and evolving. There are certain foundational elements that have remained steady, but as the industry matures new trends emerge and gain prominence. In this episode Astasia Myers of Redpoint Ventures shares her perspective as an investor on which categories she is paying particular attention to for the near to medium term. She discusses the work being done to address challenges in the areas of data quality, observability, discovery, and streaming. This is a useful conversation to gain a macro perspective on where businesses are looking to improve their capabilities to work with data.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar to get you up and running in no time. With simple pricing, fast networking, S3 compatible object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
You listen to this show because you love working with data and want to keep your skills up to date. Machine learning is finding its way into every aspect of the data landscape. Springboard has partnered with us to help you take the next step in your career by offering a scholarship to their Machine Learning Engineering career track program. In this online, project-based course every student is paired with a Machine Learning expert who provides unlimited 1:1 mentorship support throughout the program via video conferences. You’ll build up your portfolio of machine learning projects and gain hands-on experience in writing machine learning algorithms, deploying models into production, and managing the lifecycle of a deep learning prototype. Springboard offers a job guarantee, meaning that you don’t have to pay for the program until you get a job in the space. The Data Engineering Podcast is exclusively offering listeners 20 scholarships of $500 to eligible applicants. It only takes 10 minutes and there’s no obligation. Go to dataengineeringpodcast.com/springboard and apply today! Make sure to use the code AISPRINGBOARD when you enroll.
Your host is Tobias Macey and today I’m interviewing Astasia Myers about the trends in the data industry that she sees as an investor at Redpoint Ventures
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of Redpoint Ventures and your role there?
From an investor perspective, what is most appealing about the category of data-oriented businesses?
What are the main sources of information that you rely on to keep up to date with what is happening in the data industry?
What is your personal heuristic for determining the relevance of any given piece of information to decide whether it is worthy of further investigation?
As someone who works closely with a variety of companies across different industry verticals and different areas of focus, what are some of the common trends that you have identified in the data ecosystem?
In your article covering the trends you are keeping an eye on for 2020 you call out four in particular: data quality, data catalogs, observability of what influences critical business indicators, and streaming data. Taking those in turn:
What are the driving factors that influence data quality, and what elements of that problem space are being addressed by the companies you are watching?
What are the unsolved areas that you see as being viable for newcomers?
What are the challenges faced by businesses in establishing and maintaining data catalogs?
What approaches are being taken by the companies who are trying to solve this problem?
What shortcomings do you see in the available products?
For gaining visibility into the forces that impact the key performance indicators (KPIs) of businesses, what is lacking in the current approaches?
What additional information needs to be tracked to provide the context needed for making informed decisions about what actions to take to improve KPIs?
What challenges do businesses in this observability space face in providing useful access to, and analysis of, this collected data?
Streaming is an area that has been growing rapidly over the past few years, with many open source and commercial options. What are the major business opportunities that you see to make streaming more accessible and effective?
What are the main factors that you see as driving this growth in the need for access to streaming data?
With your focus on these trends, how does that influence your investment decisions and where you spend your time?
What are the unaddressed markets or product categories that you see which would be lucrative for new businesses?
In most areas of technology there is now a mix of open source and commercial solutions to any given problem, with varying levels of maturity and polish between them. What are your views on the balance of this relationship in the data ecosystem?
For data in particular, there is a strong potential for vendor lock-in which can cause potential customers to avoid adoption of commercial solutions. What has been your experience in that regard with the companies that you work with?
Contact Info
@AstasiaMyers on Twitter @astasia on Medium LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Redpoint Ventures 4 Data Trends To Watch in 2020 Seagate Western Digital Pure Storage Cisco Cohesity Looker
Podcast Episode
DGraph
Podcast Episode
Dremio
Podcast Episode
SnowflakeDB
Podcast Episode
ThoughtSpot TIBCO Elastic Splunk Informatica Data Council DataCoral Mattermost Bitwarden Snowplow
Podcast Interview Interview About Snowplow Infrastructure
CHAOSSEARCH
Podcast Episode
Kafka Streams Pulsar
Podcast Interview Followup Podcast Interview
Soda Toro Great Expectations Alation Collibra Amundsen DataHub Netflix Metacat Marquez
Podcast Episode
LDAP == Lightweight Directory Access Protocol Anodot Databricks Flink
Summary
Data governance is a complex endeavor, but scaling it to meet the needs of a complex or globally distributed organization requires a well-considered and coherent strategy. In this episode Tim Ward describes an architecture that he has used successfully with multiple organizations to scale compliance. By treating governance as a graph problem, where each hub in the network has localized control and inherits higher-level controls, this approach reduces overhead and provides greater flexibility. Tim provides useful examples for understanding how to adopt this approach in your own organization, including some technology recommendations for making it maintainable and scalable. If you are struggling to scale data quality controls and governance requirements then this interview will provide some useful ideas to incorporate into your roadmap.
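To make the hub-and-inheritance idea concrete, here is a minimal sketch of localized controls that inherit from higher-level hubs over a graph; the class, hub, and control names are invented for illustration and are not from the episode.

```python
# A minimal sketch of governance hubs with inherited controls: each hub owns
# local policies, and its effective policy is the union of everything up the
# chain to the global hub. Names and policies here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Hub:
    name: str
    controls: set = field(default_factory=set)  # locally defined policies
    parent: "Hub | None" = None

    def effective_controls(self) -> set:
        inherited = self.parent.effective_controls() if self.parent else set()
        return inherited | self.controls

global_hub = Hub("global", {"retention-policy"})
emea = Hub("emea", {"gdpr-right-to-erasure"}, parent=global_hub)
berlin = Hub("berlin-office", {"works-council-masking"}, parent=emea)

# The local hub enforces its own rules plus everything inherited from above.
print(berlin.effective_controls())
# e.g. {'retention-policy', 'gdpr-right-to-erasure', 'works-council-masking'}
```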
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, a 40Gbit public network, fast object storage, and a brand new managed Kubernetes platform, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. And for your machine learning workloads, they’ve got dedicated CPU and GPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference in NYC, Strata Data in San Jose, and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
Your host is Tobias Macey and today I’m interviewing Tim Ward about using an architectural pattern called data hub that allows for scaling data management across global businesses
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of the goals of a data hub architecture?
What are the elements of a data hub architecture and how do they contribute to the overall goals?
What are some of the patterns or reference architectures that you drew on to develop this approach?
What are some signs that an organization should implement a data hub architecture?
What is the migration path for an organization that has an existing data platform but needs to scale its governance and localize storage and access?
What are the features or attributes of an individual hub that allow them to be interconnected?
What is the interface presented between hubs to allow for accessing information across these localized repositories?
What is the process for adding a new hub and making it discoverable across the organization?
How is discoverability of data managed within and between hubs?
If someone wishes to access information between hubs or across several of them, how do you prevent data proliferation?
If data is copied between hubs, how are record updates accounted for to ensure that they are replicated to the hubs that hold a copy of that entity?
How are access controls and data masking managed to ensure that various compliance regimes are honored?
In addition to compliance issues, another challenge of distributed data repositories is the …
MDE calculations: a real-data example of how to know whether you can run experiments in a specific location (see the sketch below).
Data quality: real-data examples showing what it means to have 20% more users in your experiment.
Prioritization: how to maximize your chance of success by prioritizing your experimentation roadmap based on data.
Test results: which results in the data are valid and which are not? When is the experiment done?
Business case calculations: how much money is your experimentation program really making?
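For the MDE item above, here is a minimal sketch using statsmodels' power analysis; the baseline rate, traffic, alpha, and power values are all invented for illustration.

```python
# A minimal MDE (minimum detectable effect) calculation for a two-proportion
# A/B test: given the traffic a location can supply, what is the smallest
# lift you could reliably detect? All numbers are illustrative.
import math
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.05        # assumed conversion rate at this location
weekly_users = 20_000       # assumed eligible traffic per week
weeks = 2
n_per_variant = weekly_users * weeks / 2

# Smallest effect size (Cohen's h) detectable at alpha=0.05 and 80% power.
h = NormalIndPower().solve_power(nobs1=n_per_variant, alpha=0.05, power=0.8)

# Invert Cohen's h to recover the treatment rate it implies.
phi_base = 2 * math.asin(math.sqrt(baseline_rate))
treatment_rate = math.sin((phi_base + h) / 2) ** 2

lift = (treatment_rate - baseline_rate) / baseline_rate
print(f"MDE ~ {lift:.1%} relative lift ({baseline_rate:.1%} -> {treatment_rate:.2%})")
```

If the computed MDE is larger than any lift you could plausibly achieve, the location cannot support the experiment at that traffic level.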
It's a never-ending game of whack-a-mole; Peter Swire was already talking about technological escalation wars back in 2012, when he quit the W3C DNT efforts.
Recent changes in how web browsers handle client-side state have unveiled something that many have known for a long time: first-party storage (cookies, localStorage, etc.) is unreliable. Initiatives like Intelligent Tracking Prevention (Safari/WebKit) and Enhanced Tracking Protection (Firefox/Mozilla) seek to erode ad tech’s capability to track users cross-site, at the expense of first-party analytics and advertising.
Summary
Building clean datasets with reliable and reproducible ingestion pipelines is completely useless if it’s not possible to find them and understand their provenance. The solution to discoverability and tracking of data lineage is to incorporate a metadata repository into your data platform. The metadata repository serves as a data catalog and a means of reporting on the health and status of your datasets when it is properly integrated into the rest of your tools. At WeWork they needed a system that would provide visibility into their Airflow pipelines and the outputs produced. In this episode Julien Le Dem and Willy Lulciuc explain how they built Marquez to serve that need, how it is architected, and how it compares to other options that you might be considering. Even if you already have a metadata repository this is worth a listen to learn more about the value that visibility of your data can bring to your organization.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
You work hard to make sure that your data is clean, reliable, and reproducible throughout the ingestion pipeline, but what happens when it gets to the data warehouse? Dataform picks up where your ETL jobs leave off, turning raw data into reliable analytics. Their web-based transformation tool with built-in collaboration features lets your analysts own the full lifecycle of data in your warehouse. Featuring built-in version control integration, real-time error checking for SQL code, data quality tests, scheduling, and a data catalog with annotation capabilities, it’s everything you need to keep your data warehouse in order. Sign up for a free trial today at dataengineeringpodcast.com/dataform and email [email protected] with the subject "Data Engineering Podcast" to get a hands-on demo from one of their data experts.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference, the Strata Data conference, and PyCon US. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
Your host is Tobias Macey and today I’m interviewing Willy Lulciuc and Julien Le Dem about Marquez, an open source platform to collect, aggregate, and visualize a data ecosystem’s metadata
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what Marquez is?
What was missing in existing metadata management platforms that necessitated the creation of Marquez?
How do the capabilities of Marquez compare with tools and services that bill themselves as data catalogs?
How does it compare to the Amundsen platform that Lyft recently released?
What are some of the tools or platforms that are currently integrated with Marquez, and what additional integrations would you like to see?
What are some of the capabilities that are unique to Marquez, and how are you using them at WeWork?
What are the primary resource types that you support in Marquez?
What are some of the lowest common denominator attributes that are necessary and useful to track in a metadata repository?
Can you explain how Marquez is architected and how the design has evolved since you first began working on it?
Many metadata management systems are simply a service layer on top of a separate data storage engine. What are the benefits of using PostgreSQL as the system of record for Marquez?
What are some of the complexities that arise from relying on a relational engine as opposed to a document store or graph database?
How is the metadata itself stored and managed in Marquez?
How much up-front data modeling is necessary and what types of schema representations are supported?
Can you talk through the overall workflow of someone using Marquez in their environment?
What is involved in registering and updating datasets?
How do you define and track the health of a given dataset?
What are some of the interesting questions that can be answered from the information stored in Marquez?
What were your assumptions going into this project and how have they been challenged or updated as you began using it for production use cases?
For someone who is interested in using Marquez, what is involved in deploying and maintaining an installation of it?
What have you found to be the most challenging or unanticipated aspects of building and maintaining a metadata repository and data discovery platform?
When is Marquez the wrong choice for a metadata repository?
What do you have planned for the future of Marquez?
Contact Info
Julien Le Dem
@J_ on Twitter Email julienledem on GitHub
Willy
LinkedIn @wslulciuc on Twitter wslulciuc on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Marquez
DataEngConf Presentation
WeWork Canary Yahoo Dremio Hadoop Pig Parquet
Podcast Episode
Airflow Apache Atlas Amundsen
Podcast Episode
Uber DataBook LinkedIn DataHub Iceberg Table Format
Podcast Episode
Delta Lake
Podcast Episode
Great Expectations data pipeline unit testing framework
Podcast.init Episode
Redshift SnowflakeDB
Podcast Episode
Apache Kafka Schema Registry
Podcast Episode
Open Tracing Jaeger Zipkin DropWizard Java framework Marquez UI Cayley Graph Database Kubernetes Marquez Helm Chart Marquez Docker Container Dagster
Podcast Episode
Luigi DBT
Podcast Episode
Thrift Protocol Buffers
The intro and outro music is from "The Hug" by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug)
Focus on the most important and most often overlooked factor in a successful Tableau project: data. Without a reliable data source, you will not achieve the results you hope for in Tableau. This book does more than teach the mechanics of data preparation. It teaches you how to look at data in a new way, how to recognize the most common issues that hinder analytics, and how to mitigate those factors one by one. Tableau can change the course of business, but the old adage of "garbage in, garbage out" is the hard truth that hides behind every Tableau sales pitch. That amazing sales demo does not work as well with bad data. The unfortunate reality is that almost all data starts out in a less-than-perfect state. Data prep is hard. Traditionally, we were forced into the world of the database, where complex ETL (Extract, Transform, Load) operations created by the data team did all the heavy lifting for us. Fortunately, we have moved past those days. With the introduction of the Tableau Data Prep tool you can now handle most of the common data prep and cleanup tasks on your own, at your desk, and without the help of the data team.
This essential book will guide you through:
The layout and important parts of the Tableau Data Prep tool
Connecting to data
Data quality and consistency
The shape of the data: is the data oriented in columns or rows? How to decide? Why does it matter?
What is the level of detail in the source data? Why is that important?
Combining source data to bring in more fields and rows
Saving the data flow and the results of our data prep work
Common cleanup and setup tasks in Tableau Desktop
What You Will Learn
Recognize data sources that are good candidates for analytics in Tableau
Connect to local, server, and cloud-based data sources
Profile data to better understand its content and structure
Rename fields, adjust data types, group data points, and aggregate numeric data
Pivot data
Join data from local, server, and cloud-based sources for unified analytics
Review the steps and results of each phase of the data prep process
Output new data sources that can be reviewed in Tableau or any other analytics tool
Who This Book Is For
Tableau Desktop users who want to connect to data, profile the data to identify common issues, clean up those issues, join to additional data sources, and save the newly cleaned, joined data so that it can be used more effectively in Tableau
Gain a practical introduction to DataOps, a new discipline for delivering data science at scale inspired by practices at companies such as Facebook, Uber, LinkedIn, Twitter, and eBay. Organizations need more than the latest AI algorithms, hottest tools, and best people to turn data into insight-driven action and useful analytical data products. Processes and thinking employed to manage and use data in the 20th century are a bottleneck for working effectively with the variety of data and advanced analytical use cases that organizations have today. This book provides the approach and methods to ensure continuous rapid use of data to create analytical data products and steer decision making. Practical DataOps shows you how to optimize the data supply chain from diverse raw data sources to the final data product, whether the goal is a machine learning model or other data-oriented output. The book provides an approach to eliminate wasted effort and improve collaboration between data producers, data consumers, and the rest of the organization through the adoption of lean thinking and agile software development principles. This book helps you to improve the speed and accuracy of analytical application development through data management and DevOps practices that securely expand data access, and to rapidly increase the number of reproducible data products through automation, testing, and integration. The book also shows how to collect feedback and monitor performance to manage and continuously improve your processes and output.
What You Will Learn
Develop a data strategy for your organization to help it reach its long-term goals
Recognize and eliminate barriers to delivering data to users at scale
Work on the right things for the right stakeholders through agile collaboration
Create trust in data via rigorous testing and effective data management
Build a culture of learning and continuous improvement through monitoring deployments and measuring outcomes
Create cross-functional self-organizing teams focused on goals, not reporting lines
Build robust, trustworthy data pipelines in support of AI, machine learning, and other analytical data products
Who This Book Is For
Data science and advanced analytics experts, CIOs, CDOs (chief data officers), chief analytics officers, business analysts, business team leaders, and IT professionals (data engineers, developers, architects, and DBAs) supporting data teams who want to dramatically increase the value their organization derives from data. The book is ideal for data professionals who want to overcome challenges of long delivery time, poor data quality, high maintenance costs, and scaling difficulties in getting data science output and machine learning into customer-facing production.
Data quality and leadership trust levels may not seem connected, but they’re inextricably linked. Here’s why ...
Originally published at https://www.eckerson.com/articles/using-data-quality-to-build-trust-in-the-business-leaders
Learn how to achieve the DataOps objectives of improved efficiency and data quality by migrating to a streaming architecture based on Apache Kafka.
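The Kafka-based pattern that article describes can be sketched in a few lines; below is a minimal, hedged example of publishing data-quality check results onto a topic using the confluent-kafka Python client, with the broker address, topic name, and payload shape all invented for illustration.

```python
# A minimal sketch of one piece of a streaming DataOps setup: emit the result
# of a data-quality check onto a Kafka topic so downstream consumers can react
# in near real time. Broker, topic, and payload fields are illustrative.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

check_result = {
    "dataset": "orders",
    "check": "null_rate(customer_id) < 0.01",
    "passed": False,
    "observed": 0.034,
}

producer.produce(
    topic="data-quality-events",
    key=check_result["dataset"],
    value=json.dumps(check_result).encode("utf-8"),
)
producer.flush()  # block until delivery is confirmed
```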
Why is data quality still an issue after all these years? To get at the root causes of data quality and data governance problems, Wayne Eckerson and Jason Beard engage in a dynamic exchange of questions. Using examples from his past projects, Jason shows the value of business process mapping and how it exposes the hidden problems that go undetected under the standard IT lens.
In his most recent role as Vice President of Process & Data Management at Wiley, a book publisher, he was responsible for master data setup and governance, process optimization, business continuity planning, and change management for new and emerging business models. Jason has led business intelligence, data governance, master data management, process improvement, business transformation, and ERP projects in a variety of industries, including scientific and trade publishing, educational technology, consumer goods, banking, investments, and insurance.
Leverage health data into insight! Applied Health Analytics and Informatics Using SAS describes health anamatics, a result of the intersection of data analytics and health informatics. Healthcare systems generate nearly a third of the world’s data, and analytics can help to eliminate medical errors, reduce readmissions, provide evidence-based care, demonstrate quality outcomes, and add cost-efficient care. This comprehensive textbook includes data analytics and health informatics concepts, along with applied experiential learning exercises and case studies using SAS Enterprise Miner™ within the healthcare industry setting. Topics covered include:
Sampling and modeling health data, both structured and unstructured
Exploring health data quality
Developing health administration and health data assessment procedures
Identifying future health trends
Analyzing high-performance health data mining models
Applied Health Analytics and Informatics Using SAS is intended for professionals, lifelong learners, senior-level undergraduates, and graduate-level students in professional development courses, health informatics courses, health analytics courses, and specialized industry track courses. This textbook is accessible to a wide variety of backgrounds and specialty areas, including administrators, clinicians, and executives. This book is part of the SAS Press program.
In this podcast, Carl Gerber and Wayne Eckerson discuss Gerber’s top five data governance best practices: Motivation, Assessment, Data Assets Catalog, CxO Alliance, and Data Quality.
Gerber is a long-time chief data officer and data leader at several large, diverse financial services and manufacturing firms, who is now an independent consultant and an Eckerson Group partner.
He helps large organizations develop data strategies, modernize analytics, and establish enterprise data governance programs that ensure data quality, operational efficiency, regulatory compliance, and business outcomes. He also mentors and coaches Chief Data Officers and fills that role on an interim basis.
Prepare for Microsoft Exam 70-767, and help demonstrate your real-world mastery of skills for managing data warehouses. This exam is intended for Extract, Transform, Load (ETL) data warehouse developers who create business intelligence (BI) solutions. Their responsibilities include data cleansing as well as ETL and data warehouse implementation. The reader should have experience installing and implementing a Master Data Services (MDS) model, using MDS tools, and creating a Master Data Manager database and web application. The reader should understand how to design and implement ETL control flow elements and work with a SQL Server Integration Services package.
Focus on the expertise measured by these objectives:
• Design, implement, and maintain a data warehouse
• Extract, transform, and load data
• Build data quality solutions
This Microsoft Exam Ref:
• Organizes its coverage by exam objectives
• Features strategic, what-if scenarios to challenge you
• Assumes you have working knowledge of relational database technology and incremental database extraction, as well as experience with designing ETL control flows, using and debugging SSIS packages, accessing and importing or exporting data from multiple sources, and managing a SQL data warehouse
Implementing a SQL Data Warehouse
About the Exam
Exam 70-767 focuses on skills and knowledge required for working with relational database technology.
About Microsoft Certification
Passing this exam earns you credit toward a Microsoft Certified Professional (MCP) or Microsoft Certified Solutions Associate (MCSA) certification that demonstrates your mastery of data warehouse management. Passing this exam as well as Exam 70-768 (Developing SQL Data Models) earns you credit toward a Microsoft Certified Solutions Associate (MCSA) SQL 2016 Business Intelligence (BI) Development certification. See full details at: microsoft.com/learning
The definitive guide for statisticians and data scientists who understand the advantages of becoming proficient in both R and Python. The first book of its kind, Python for R Users: A Data Science Approach makes it easy for R programmers to code in Python and for Python users to program in R. Short on theory and long on actionable analytics, it provides readers with a detailed comparative introduction and overview of both languages, and features concise tutorials with command-by-command translations (complete with sample code) of R to Python and Python to R. Following an introduction to both languages, the author cuts to the chase with step-by-step coverage of the full range of pertinent programming features and functions, including data input, data inspection and data quality, data analysis, and data visualization. Statistical modeling, machine learning, and data mining, including supervised and unsupervised methods, are treated in detail, as are time series forecasting, text mining, and natural language processing.
• Features a quick-learning format with concise tutorials and actionable analytics
• Provides command-by-command translations of R to Python and vice versa
• Incorporates Python and R code throughout to make it easier for readers to compare and contrast features in both languages
• Offers numerous comparative examples and applications in both programming languages
• Designed for practitioners and students who know one language and want to learn the other
• Supplies slides useful for teaching and learning either software on a companion website
Python for R Users: A Data Science Approach is a valuable working resource for computer scientists and data scientists who know R and would like to learn Python, or who are familiar with Python and want to learn R. It also functions as a textbook for students of computer science and statistics.
A. Ohri is the founder of Decisionstats.com and currently works as a senior data scientist. He has advised multiple startups in analytics off-shoring, analytics services, and analytics education, as well as on using social media to enhance buzz for analytics products. Mr. Ohri's research interests include spreading open source analytics, analyzing social media manipulation with mechanism design, building simpler interfaces for cloud computing, and investigating climate change and knowledge flows. His other books include R for Business Analytics and R for Cloud Computing.
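To illustrate the command-by-command translation idea (with an invented example, not one taken from the book), each pandas call below is paired with a rough R analogue in a comment; the sales.csv file and revenue column are hypothetical.

```python
# Python (pandas) on the left, an approximate R translation in the comments.
import pandas as pd

df = pd.read_csv("sales.csv")    # R: df <- read.csv("sales.csv")
df.head()                        # R: head(df)
df.describe()                    # R: summary(df)
df["revenue"].mean()             # R: mean(df$revenue)
df.isna().sum()                  # R: colSums(is.na(df))
```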
Dive into 'Big Data Visualization' and uncover how to tackle the challenges of visualizing vast quantities of complex data. With a focus on scalable and dynamic techniques, this guide explores the nuances of effective data analysis. You'll master tools and approaches to display, interpret, and communicate data in impactful ways.
What this Book will help me do
Understand the fundamentals of big data visualization, including unique challenges and solutions. Explore practical techniques for using D3 and Python to visualize and detect anomalies in big data. Learn to leverage dashboards like Tableau to present data insights effectively. Address and improve data quality issues to enhance analysis accuracy. Gain hands-on experience with real-world use cases for tools such as Hadoop and Splunk.
Author(s)
James D. Miller is an IBM-certified expert specializing in data analytics and visualization. With years of experience handling massive datasets and extracting actionable insights, he is dedicated to sharing his expertise. His practical approach is evident in how he combines tool mastery with a clear understanding of data complexities.
Who is it for?
This book is designed for data analysts, data scientists, and others involved in interpreting and presenting big datasets. Whether you are a beginner looking to understand big data visualization or an experienced professional seeking advanced tools and techniques, this guide suits your needs. A foundational knowledge of programming languages like R and big data platforms such as Hadoop is recommended to maximize your learning.
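As a flavor of the visualize-and-detect-anomalies theme, here is a minimal Python sketch (not from the book) that flags points more than three standard deviations from the mean of a synthetic series and highlights them with Matplotlib.

```python
# A minimal anomaly-detection-and-plot sketch: inject a few outliers into a
# synthetic series, flag |z| > 3 points, and highlight them on the line plot.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
series = rng.normal(100, 5, 500)
series[[50, 200, 420]] += 40          # synthetic anomalies

z = np.abs(series - series.mean()) / series.std()
outliers = z > 3

plt.plot(series, lw=0.8, label="metric")
plt.scatter(np.flatnonzero(outliers), series[outliers],
            color="red", zorder=3, label="anomaly (|z| > 3)")
plt.legend()
plt.show()
```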
Featuring a timely presentation of total survey error (TSE), this edited volume introduces valuable tools for understanding and improving survey data quality in the context of evolving large-scale data sets. This book provides an overview of the TSE framework and current TSE research as related to survey design, data collection, estimation, and analysis. It recognizes that survey data affects many public policy and business decisions and thus focuses on the framework for understanding and improving survey data quality. The book also addresses issues with data quality in official statistics and in social, opinion, and market research as these fields continue to evolve, leading to larger and messier data sets. This perspective challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume consists of the most up-to-date research and reporting from over 70 contributors representing the best academics and researchers from a range of fields. The chapters are broken out into five main sections: The Concept of TSE and the TSE Paradigm; Implications for Survey Design; Data Collection and Data Processing Applications; Evaluation and Improvement; and Estimation and Analysis. Each chapter introduces and examines multiple error sources, such as sampling error, measurement error, and nonresponse error, which often pose the greatest risks to data quality, while also encouraging readers not to lose sight of the less commonly studied error sources, such as coverage error, processing error, and specification error. The book also notes the relationships between errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with larger total error.
This book:
• Features various error sources, and the complex relationships between them, in 25 high-quality chapters on the most up-to-date research in the field of TSE
• Provides comprehensive reviews of the literature on error sources as well as data collection approaches and estimation methods to reduce their effects
• Presents examples of recent international events that demonstrate the effects of data error, the importance of survey data quality, and the real-world issues that arise from these errors
• Spans the four pillars of the total survey error paradigm (design, data collection, evaluation, and analysis) to address key data quality issues in official statistics and survey research
Total Survey Error in Practice is a reference for survey researchers and data scientists in research areas that include social science, public opinion, public policy, and business. It can also be used as a textbook or supplementary material for a graduate-level course in survey research methods.
Paul P. Biemer, PhD, is distinguished fellow at RTI International and associate director of Survey Research and Development at the Odum Institute, University of North Carolina, USA. Edith de Leeuw, PhD, is professor of survey methodology in the Department of Methodology and Statistics at Utrecht University, the Netherlands. Stephanie Eckman, PhD, is fellow at RTI International, USA. Brad Edwards is vice president, director of Field Services, and deputy area director at Westat, USA. Frauke Kreuter, PhD, is professor and director of the Joint Program in Survey Methodology, University of Maryland, USA; professor of statistics and methodology at the University of Mannheim, Germany; and head of the Statistical Methods Research Department at the Institute for Employment Research, Germany. Lars E. Lyberg, PhD, is senior advisor at Inizio, Sweden. N. Clyde Tucker, PhD, is principal survey methodologist at the American Institutes for Research, USA. Brady T. West, PhD, is research associate professor in the Survey Research Center at the University of Michigan, USA.
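The TSE framework described above rests on a standard identity worth stating: the mean squared error of a survey estimate decomposes into squared total bias plus total variance, with each error source (coverage, sampling, nonresponse, measurement, processing, specification) contributing to one or both terms. A sketch of that decomposition, in my notation rather than the book's, under the usual simplifying assumption that component biases add:

```latex
\mathrm{MSE}(\hat{\theta})
  = \mathbb{E}\!\left[(\hat{\theta}-\theta)^{2}\right]
  = \Bigl(\sum_{s \in S} B_{s}\Bigr)^{2} + \operatorname{Var}(\hat{\theta}),
\qquad
S = \{\text{cov},\ \text{samp},\ \text{nr},\ \text{meas},\ \text{proc},\ \text{spec}\}
```

This identity also formalizes the trade-off the blurb mentions: an effort that shrinks one bias term (say, nonresponse bias via aggressive follow-up) can inflate another component or the variance, leaving the total error larger.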