talk-data.com

Topic: Python

Tags: programming_language · data_science · web_development

1446 tagged activities

Activity Trend

[Chart: activity per quarter, 2020-Q1 through 2026-Q1, peaking at 185/qtr]

Activities

1446 activities · Newest first

Data Science with SQL Server Quick Start Guide

"Data Science with SQL Server Quick Start Guide" introduces you to leveraging SQL Server's most recent features for data science projects. You will explore the integration of data science techniques using R, Python, and Transact-SQL within SQL Server's environment. What this Book will help me do Use SQL Server's capabilities for data science projects effectively. Understand and preprocess data using SQL queries and statistics. Design, train, and evaluate machine learning models in SQL Server. Visualize data insights through advanced graphing techniques. Deploy and utilize machine learning models within SQL Server environments. Author(s) Dejan Sarka is a data science and SQL Server expert with years of industry experience. He specializes in melding database systems with advanced analytics, offering practical guidance through real-world scenarios. His writing provides clear, step-by-step methods, making complex topics accessible. Who is it for? This book is tailored for professionals familiar with SQL Server who are looking to delve into data science. It is also ideal for data scientists aiming to incorporate SQL Server into their analytics workflows. The content assumes basic exposure to SQL Server, ensuring a straightforward learning curve for its audience.

Summary

The way that you store your data can have a huge impact on the ways that it can be practically used. For a substantial number of use cases, the optimal format for storing and querying that information is as a graph; however, databases architected around that use case have historically been difficult to use at scale or for serving fast, distributed queries. In this episode Manish Jain explains how DGraph is overcoming those limitations, how the project got started, and how you can start using it today. He also discusses the various cases where a graph storage layer is beneficial, and when you would be better off using something else. In addition he talks about the challenges of building a distributed, consistent database and the tradeoffs that were made to make DGraph a reality.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.

If you have ever wished that you could use the same tools for versioning and distributing your data that you use for your software, then you owe it to yourself to check out what the fine folks at Quilt Data have built. Quilt is an open source platform for building a sane workflow around your data that works for your whole team, including version history, metadata management, and flexible hosting. Stop by their booth at JupyterCon in New York City on August 22nd through the 24th to say hi and tell them that the Data Engineering Podcast sent you! After that, keep an eye on the AWS marketplace for a pre-packaged version of Quilt for Teams to deploy into your own environment and stop fighting with your data.

Python has quickly become one of the most widely used languages by both data engineers and data scientists, letting everyone on your team understand each other more easily. However, it can be tough learning it when you’re just starting out. Luckily, there’s an easy way to get involved. Written by MIT lecturer Ana Bell and published by Manning Publications, Get Programming: Learn to code with Python is the perfect way to get started working with Python. Ana’s experience as a teacher of Python really shines through, as you get hands-on with the language without being drowned in confusing jargon or theory. Filled with practical examples and step-by-step lessons to take on, Get Programming is perfect for people who just want to get stuck in with Python. Get your copy of the book with a special 40% discount for Data Engineering Podcast listeners by going to dataengineeringpodcast.com/get-programming and use the discount code PodInit40!

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.

Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Your host is Tobias Macey and today I’m interviewing Manish Jain about DGraph, a low latency, high throughput, native and distributed graph database.

Interview

Introduction

How did you get involved in the area of data management?

What is DGraph and what motivated you to build it?

Graph databases and graph algorithms have been part of the computing landscape for decades. What has changed in recent years to allow for the current proliferation of graph oriented storage systems?

The graph space is becoming crowded in recent years. How does DGraph compare to the current set of offerings?

What are some of the common uses of graph storage systems?

What are some potential uses that are often overlooked?

There are a few ways that graph structures and properties can be implemented, including the ability t

Summary

The theory behind how a tool is supposed to work and the realities of putting it into practice are often at odds with each other. Learning the pitfalls and best practices from someone who has gained that knowledge the hard way can save you from wasted time and frustration. In this episode James Meickle discusses his recent experience building a new installation of Airflow. He points out the strengths, design flaws, and areas of improvement for the framework. He also describes the design patterns and workflows that his team has built to allow them to use Airflow as the basis of their data science platform.
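For readers new to Airflow, a minimal DAG sketch gives a sense of the kind of pipeline being discussed; the task names, callables, and schedule below are illustrative assumptions, not taken from the episode.

```python
# A minimal Airflow DAG sketch (illustrative only): two Python tasks
# chained so that extract runs before load, once per day.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")


def load():
    print("write the prepared data to the warehouse")


with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # declare the dependency between tasks
```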

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.

Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat

Your host is Tobias Macey and today I’m interviewing James Meickle about his experiences building a new Airflow installation.

Interview

Introduction

How did you get involved in the area of data management?

What was your initial project requirement?

What tooling did you consider in addition to Airflow?

What aspects of the Airflow platform led you to choose it as your implementation target?

Can you describe your current deployment architecture?

How many engineers are involved in writing tasks for your Airflow installation?

What resources were the most helpful while learning about Airflow design patterns?

How have you architected your DAGs for deployment and extensibility?

What kinds of tests and automation have you put in place to support the ongoing stability of your deployment?

What are some of the dead-ends or other pitfalls that you encountered during the course of this project?

What aspects of Airflow have you found to be lacking that you would like to see improved?

What did you wish someone had told you before you started work on your Airflow installation?

If you were to start over, would you make the same choice?

If Airflow wasn’t available, what would be your second choice?

What are your next steps for improvements and fixes?

Contact Info

@eronarn on Twitter · Website · eronarn on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Quantopian · Harvard Brain Science Initiative · DevOps Days Boston · Google Maps API · Cron · ETL (Extract, Transform, Load) · Azkaban · Luigi · AWS Glue · Airflow · Pachyderm

Podcast Interview

AirBnB · Python · YAML · Ansible · REST (Representational State Transfer) · SAML (Security Assertion Markup Language) · RBAC (Role-Based Access Control) · Maxime Beauchemin

Medium Blog

Celery · Dask

Podcast Interview

PostgreSQL

Podcast Interview

Redis · Cloudformation · Jupyter Notebook · Qubole · Astronomer

Podcast Interview

Gunicorn · Kubernetes · Airflow Improvement Proposals · Python Enhancement Proposals (PEP)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Healthcare Analytics Made Simple

Navigate the fascinating intersection of healthcare and data science with the book "Healthcare Analytics Made Simple." This comprehensive guide empowers you to use Python and machine learning techniques to analyze and improve real healthcare systems. Demystify intricate concepts with Python code and SQL to gain actionable insights and build predictive models for healthcare.

What this Book will help me do

Understand healthcare incentives, policies, and datasets to ground your analysis in practical knowledge. Master the use of Python libraries and SQL for healthcare data analysis and visualization. Develop skills to apply machine learning for predictive and descriptive analytics in healthcare. Learn to assess quality metrics and evaluate provider performance using robust tools. Get acquainted with upcoming trends and future applications in healthcare analytics.

Author(s)

The authors, Kumar and Khader, are experts in data science and healthcare informatics. They bring years of experience teaching, researching, and applying data analytics in healthcare. Their approach is hands-on and clear, aiming to make complex topics accessible and engaging for their audience.

Who is it for?

This book is perfect for data science professionals eager to specialize in healthcare analytics. Additionally, clinicians aiming to leverage computing and data analytics in improving healthcare processes will find valuable insights. Programming enthusiasts and students keen to enter healthcare analytics will also greatly benefit. Tailored for beginners in this field, it is an educational yet robust resource.
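As an illustration of the kind of predictive modeling the book describes, here is a minimal sketch; the CSV file and column names are hypothetical, not from the book.

```python
# Illustrative sketch: a simple predictive model for a healthcare outcome.
# The file "encounters.csv" and its columns are made up for this example.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("encounters.csv")  # e.g., one row per patient encounter
X = df[["age", "num_prior_visits", "length_of_stay"]]
y = df["readmitted_30d"]  # 1 if readmitted within 30 days, else 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```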

Principles and Practice of Big Data, 2nd Edition

Principles and Practice of Big Data: Preparing, Sharing, and Analyzing Complex Information, Second Edition updates and expands on the first edition, bringing a set of techniques and algorithms that are tailored to Big Data projects. The book stresses the point that most data analyses conducted on large, complex data sets can be achieved without the use of specialized suites of software (e.g., Hadoop) and without expensive hardware (e.g., supercomputers). The core of every algorithm described in the book can be implemented in a few lines of code using just about any popular programming language (Python snippets are provided). Through the use of multiple new examples, this edition demonstrates that if we understand our data, and if we know how to ask the right questions, we can learn a great deal from large and complex data collections. The book will assist students and professionals from all scientific backgrounds who are interested in stepping outside the traditional boundaries of their chosen academic disciplines.

Presents new methodologies that are widely applicable to just about any project involving large and complex datasets. Offers readers informative new case studies across a range of scientific and engineering disciplines. Provides insights into semantics, identification, de-identification, vulnerabilities, and regulatory/legal issues. Utilizes a combination of pseudocode and very short snippets of Python code to show readers how they may develop their own projects without downloading or learning new software.
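In that few-lines-of-code spirit, here is a small sketch (mine, not the book's) of one topic it covers, de-identification, done as one-way pseudonymization of record identifiers using only the standard library; the salt and record layout are assumptions.

```python
# Sketch: replace identifiers with stable, hard-to-reverse tokens.
import hashlib

SALT = "project-specific-secret"  # keep this out of the shared dataset


def pseudonym(record_id: str) -> str:
    """Return a stable token for an identifier via salted hashing."""
    return hashlib.sha256((SALT + record_id).encode()).hexdigest()[:16]


records = [{"id": "A-1001", "value": 7}, {"id": "A-1002", "value": 9}]
deidentified = [{"id": pseudonym(r["id"]), "value": r["value"]} for r in records]
print(deidentified)
```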

Apache Spark Deep Learning Cookbook

Embark on a journey to master distributed deep learning with the "Apache Spark Deep Learning Cookbook". Designed specifically for leveraging the capabilities of Apache Spark, TensorFlow, and Keras, this book offers over 80 problem-solving recipes to efficiently train and deploy state-of-the-art neural networks, addressing real-world AI challenges.

What this Book will help me do

Set up and configure a working Apache Spark environment optimized for deep learning tasks. Implement distributed training practices for deep learning models using TensorFlow and Keras. Develop and test neural networks such as CNNs and RNNs targeting specific big data problems. Apply Spark's built-in libraries and integrations for enhanced NLP and computer vision applications. Effectively manage and preprocess large datasets using Spark DataFrames for machine learning tasks.

Author(s)

Authors Ahmed Sherif and Ravindra bring years of experience in deep learning, Apache Spark use cases, and hands-on practical training. Their collective expertise has contributed to designing this cookbook approach, focusing on clarity and usability for readers tackling challenging machine learning scenarios.

Who is it for?

This book is ideal for IT professionals, data scientists, and software developers with a foundational understanding of machine learning concepts and the Apache Spark framework. If you aim to scale deep learning and integrate efficient computing with Spark's power, this guide is for you. Familiarity with Python will help maximize the book's potential.
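A compressed sketch of the workflow the recipes build toward might look like the following: prepare data with Spark DataFrames, then train a small Keras network. The data, column names, and model architecture are illustrative assumptions, not the book's code.

```python
# Sketch: Spark DataFrame preparation feeding a tiny Keras model.
from pyspark.sql import SparkSession
from tensorflow import keras

spark = SparkSession.builder.appName("dl-prep-sketch").getOrCreate()
df = spark.createDataFrame(
    [(0.1, 0.7, 0), (0.9, 0.2, 1), (0.4, 0.5, 0), (0.8, 0.9, 1)],
    ["x1", "x2", "label"],
)
pdf = df.toPandas()  # collect the (small) prepared set to the driver
X, y = pdf[["x1", "x2"]].to_numpy(), pdf["label"].to_numpy()

model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)  # train the toy binary classifier
```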

Summary

Data integration and routing is a constantly evolving problem and one that is fraught with edge cases and complicated requirements. The Apache NiFi project models this problem as a collection of data flows that are created through a self-service graphical interface. This framework provides a flexible platform for building a wide variety of integrations that can be managed and scaled easily to fit your particular needs. In this episode project members Kevin Doran and Andy LoPresto discuss the ways that NiFi can be used, how to start using it in your environment, and plans for future development. They also explain how it fits into the broader landscape of data tools, the interesting and challenging aspects of the project, and how to build new extensions.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.

Are you struggling to keep up with customer requests and letting errors slip into production? Want to try some of the innovative ideas in this podcast but don’t have time? DataKitchen’s DataOps software allows your team to quickly iterate and deploy pipelines of code, models, and data sets while improving quality. Unlike a patchwork of manual operations, DataKitchen makes your team shine by providing an end-to-end DataOps solution with minimal programming that uses the tools you love. Join the DataOps movement and sign up for the newsletter at datakitchen.io/de today. After that, learn more about why you should be doing DataOps by listening to the Head Chef in the Data Kitchen at dataengineeringpodcast.com/datakitchen

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.

Your host is Tobias Macey and today I’m interviewing Kevin Doran and Andy LoPresto about Apache NiFi.

Interview

Introduction

How did you get involved in the area of data management?

Can you start by explaining what NiFi is?

What is the motivation for building a GUI as the primary interface for the tool when the current trend is to represent everything as code?

How did you get involved with the project?

Where does it sit in the broader landscape of data tools?

Does the data that is processed by NiFi flow through the servers that it is running on (à la Spark/Flink/Kafka), or does it orchestrate actions on other systems (à la Airflow/Oozie)?

How do you manage versioning and backup of data flows, as well as promoting them between environments?

One of the advertised features is tracking provenance for data flows that are managed by NiFi. How is that data collected and managed?

What types of reporting are available across this information?

What are some of the use cases or requirements that lend themselves well to being solved by NiFi?

When is NiFi the wrong choice?

What is involved in deploying and scaling a NiFi installation?

What are some of the system/network parameters that should be considered?

What are the scaling limitations?

What have you found to be some of the most interesting, unexpected, and/or challenging aspects of building and maintaining the NiFi project and community?

What do you have planned for the future of NiFi?

Contact Info

Kevin Doran

@kevdoran on Twitter · Email

Andy LoPresto

@yolopey on Twitter · Email

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

NiFi · HortonWorks DataFlow · HortonWorks · Apache Software Foundation · Apple · CSV · XML · JSON · Perl · Python · Internet Scale Asset Management · Documentum · DataFlow · NSA (National Security Agency) · 24 (TV Show) · Technology Transfer Program · Agile Software Development · Waterfall · Spark · Flink · Kafka · Oozie · Luigi · Airflow · FluentD · ETL (Extract, Transform, and Load) · ESB (Enterprise Service Bus) · MiNiFi · Java · C++ · Provenance · Kubernetes · Apache Atlas · Data Governance · Kibana · K-Nearest Neighbors · DevOps · DSL (Domain Specific Language) · NiFi Registry · Artifact Repository · Nexus · NiFi CLI · Maven Archetype · IoT · Docker · Backpressure · NiFi Wiki · TLS (Transport Layer Security) · Mozilla TLS Observatory · NiFi Flow Design System · Data Lineage · GDPR (General Data Protection Regulation)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Hands-On Data Analysis with NumPy and pandas

Dive into 'Hands-On Data Analysis with NumPy and pandas' to explore the world of Python for data analysis. This book guides you through using these powerful Python libraries to handle and manipulate data efficiently. You will learn hands-on techniques to read, sort, group, and visualize data for impactful analysis.

What this Book will help me do

Learn to set up a Python environment for data analysis with tools like Jupyter notebooks. Master data handling using NumPy, focusing on array creation, slicing, and operations. Understand the functionalities of pandas for managing datasets, including DataFrame operations. Discover techniques for data preparation, such as handling missing data and hierarchical indexing. Explore data visualization using pandas and create impactful plots for data insights.

Author(s)

The book is authored by Miller, a seasoned Python developer and data analyst. With a strong background in leveraging Python for data processing, the author focuses on creating content that is practical and accessible, with a teaching approach that emphasizes hands-on practice and understanding, making technical topics approachable and engaging.

Who is it for?

This book is ideal for Python developers at a beginner to intermediate level looking to venture into data analysis. If you are transitioning from general programming to data-focused work or need to enhance your skills in data manipulation and processing, this book will be a strong foundation. It requires no prior experience with data analysis, so it is accessible to many learners.
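A quick taste of the core operations covered, as a minimal sketch with made-up data:

```python
# Sketch: array creation and slicing with NumPy, then grouping and
# missing-data handling with pandas. All data here is invented.
import numpy as np
import pandas as pd

arr = np.arange(12).reshape(3, 4)  # array creation
second_col = arr[:, 1]             # slicing: the second column
print(second_col)

df = pd.DataFrame({
    "city": ["NYC", "NYC", "LA", "LA"],
    "sales": [10, 12, np.nan, 9],
})
df["sales"] = df["sales"].fillna(df["sales"].mean())  # handle missing data
print(df.groupby("city")["sales"].sum())              # group and aggregate
```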

PySpark Cookbook

Dive into the world of big data processing and analytics with the "PySpark Cookbook". This book provides over 60 hands-on recipes for implementing efficient data-intensive solutions using Apache Spark and Python. By mastering these recipes, you'll be equipped to tackle challenges in large-scale data processing, machine learning, and stream analytics.

What this Book will help me do

Set up and configure PySpark environments effectively, including working with Jupyter for enhanced interactivity. Understand and utilize DataFrames for data manipulation, analysis, and transformation tasks. Develop end-to-end machine learning solutions using the ML and MLlib modules in PySpark. Implement structured streaming and graph-processing solutions to analyze and visualize data streams and relationships. Deploy PySpark applications to the cloud infrastructure efficiently using best practices.

Author(s)

This book is co-authored by Lee and Drabas, experienced professionals in data processing and analytics leveraging Python and Apache Spark. With their deep technical expertise and a passion for teaching through practical examples, they aim to make the complex concepts of PySpark accessible to developers of varied experience levels.

Who is it for?

This book is ideal for Python developers who are keen to delve into the Apache Spark ecosystem. Whether you're just starting with big data or have some experience with Spark, this book provides practical recipes to enhance your skills. Readers looking to solve real-world data-intensive challenges using PySpark will find this resource invaluable.
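A minimal recipe-style sketch (mine, not the book's) of the DataFrame workflow described, with illustrative data:

```python
# Sketch: create a PySpark DataFrame, transform it, and run an action.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cookbook-sketch").getOrCreate()
flights = spark.createDataFrame(
    [("JFK", 30), ("JFK", 5), ("SEA", 12)],
    ["origin", "delay_minutes"],
)

(flights
 .filter(F.col("delay_minutes") > 10)            # transformation (lazy)
 .groupBy("origin")
 .agg(F.avg("delay_minutes").alias("avg_delay"))
 .show())                                        # action triggers execution
```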

Mastering Numerical Computing with NumPy

"Mastering Numerical Computing with NumPy" is a comprehensive guide to becoming proficient in numerical computing using Python's NumPy library. This book will teach you how to perform advanced numerical operations, explore data statistically, and build predictive models effectively. By mastering the provided concepts and exercises, you'll be empowered in your scientific computing projects. What this Book will help me do Perform and optimize vector and matrix operations effectively using NumPy. Analyze data using exploratory data analysis techniques and predictive modeling. Implement unsupervised learning algorithms such as clustering with relevant datasets. Understand advanced benchmarks and select optimal configurations for performance. Write efficient and scalable programs utilizing advanced NumPy features. Author(s) The authors of "Mastering Numerical Computing with NumPy" include domain experts and educators with years of experience in Python programming, numerical computing, and data science. They bring a practical and detailed approach to teaching advanced topics and guide you through every step of mastering NumPy. Who is it for? This book is ideal for Python programmers, data analysts, and data science enthusiasts who aim to deepen their understanding of numerical computing. If you have basic mathematics skills and want to utilize NumPy to solve complex data problems, this book is an excellent resource. Whether you're a beginner or an intermediate user, you will find this content approachable and enriching. Advanced users will benefit from the highly specialized content and real-world examples.

Python Graphics: A Reference for Creating 2D and 3D Images

This book will show you how to use Python to create graphic objects for technical illustrations and data visualization. Often, the function you need to produce the image you want cannot be found in a standard Python library. Knowing how to create your own graphics will free you from the chore of looking for a function that may not exist or be difficult to use. This book will give you the tools to eliminate that process and create and customize your own graphics to satisfy your own unique requirements. Using basic geometry and trigonometry, you will learn how to create math models of 2D and 3D shapes. Using Python, you will then learn how to project these objects onto the screen of your monitor, translate and rotate them in 2D and 3D, remove hidden lines, add shading, view in perspective, view intersections between surfaces, and display shadows cast from one object onto another. You will also learn how to visualize and analyze 2D and 3D data sets, fit lines, splines and functions. The final chapter includes demonstrations from quantum mechanics, astronomy and climate science. Includes Python programs written in a clear and open style with detailed explanation of the code.

What You Will Learn

How to create math and Python models of 2D and 3D shapes. How to rotate, view in perspective, shade, remove hidden lines, display projected shadows, and more. How to analyze and display data sets as curves and surfaces, fit lines and functions.

Who This Book Is For

Python developers, scientists, engineers, and students using Python to produce technical illustrations, display and analyze data sets. Assumes familiarity with vectors, matrices, geometry and trigonometry.
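As a taste of the approach, here is a short sketch (not from the book) that models a 2D rotation directly with a rotation matrix instead of reaching for a plotting helper:

```python
# Sketch: rotate an (N, 2) array of points by theta radians about the
# origin using the standard 2D rotation matrix.
import numpy as np


def rotate2d(points: np.ndarray, theta: float) -> np.ndarray:
    """Return the points rotated counterclockwise by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return points @ R.T


square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(rotate2d(square, np.pi / 4))  # the unit square rotated 45 degrees
```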

Hands-On Data Visualization with Bokeh

Dive into the world of interactive data visualization with the Python library Bokeh. In this book, you will learn to create dynamic, engaging visualizations that communicate your data insights effectively. Starting with the basics of installation and setup, you will be guided through progressively advanced techniques to build visually appealing and interactive plots, concluding with hosting your Bokeh applications.

What this Book will help me do

Install and configure the Bokeh Python library for interactive data visualization projects. Create visually appealing and informative plots using Bokeh's glyph model. Leverage data structures like Pandas and NumPy to efficiently visualize data. Enhance the interactivity and functionality of plots using widgets and layouts in Bokeh. Build and deploy professional-grade data visualization applications using the Bokeh Server.

Author(s)

Jolly is an experienced data visualization expert and Python programmer specializing in creating interactive and insightful visualizations. With a passion for teaching and a knack for simplifying complex concepts, they bring a practical and hands-on approach to technical education. Their work empowers professionals to effectively communicate complex data through visually intuitive designs.

Who is it for?

This book is intended for data professionals like analysts and scientists who seek to add interactivity to their visualizations using Python. Ideal readers will have basic Python knowledge but are new to Bokeh. It's also for anyone curious about building data visualization web applications, moving beyond static charts to impactful interactive tools, and extending their data storytelling skills.
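A minimal sketch of the glyph model using Bokeh's standard plotting interface (this example is mine, not the book's):

```python
# Sketch: an interactive line-and-scatter plot written to a standalone
# HTML page via Bokeh's plotting interface.
from bokeh.plotting import figure, output_file, show

x = [1, 2, 3, 4, 5]
y = [6, 7, 2, 4, 5]

output_file("lines.html")  # write a self-contained interactive page
p = figure(title="simple example", x_axis_label="x", y_axis_label="y")
p.line(x, y, legend_label="trend", line_width=2)  # line glyph
p.scatter(x, y, size=8)                           # marker glyph
show(p)                                           # open in a browser
```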

Python vs. R for Data Science

Python and R are two of the mainstream languages in data science. Fundamentally, Python is a language for programmers, whereas R is a language for statisticians. In a data science context, there is a significant degree of overlap between the capabilities of the two languages in the fields of regression analysis and machine learning. Your choice of language depends largely on the environment in which you are operating. In a production environment, Python integrates with other languages much more seamlessly and is therefore the modus operandi in this context. However, R is much more common in research environments due to its more extensive selection of libraries for statistical analysis.
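To make the overlap concrete, the same regression fit is a few lines in either language; here is the Python side as a sketch with toy data (the R equivalent is essentially lm(y ~ x)):

```python
# Sketch: a linear regression in Python, comparable to R's lm().
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # one predictor
y = np.array([2.1, 3.9, 6.2, 8.1])

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)  # compare with coef(lm(y ~ x)) in R
```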

Data Analytics with Spark Using Python, First edition

Spark for Data Professionals introduces and solidifies the concepts behind Spark 2.x, teaching working developers, architects, and data professionals exactly how to build practical Spark solutions. Jeffrey Aven covers all aspects of Spark development, from basic programming through SparkSQL, SparkR, Spark Streaming, messaging, NoSQL, and Hadoop integration. Each chapter presents practical exercises for deploying Spark to your local or cloud environment, plus programming exercises for building real applications. Unlike other Spark guides, Spark for Data Professionals explains crucial concepts step by step, assuming no extensive background as an open source developer. It provides a complete foundation for quickly progressing to more advanced data science and machine learning topics.

This guide will help you:

Understand Spark basics that will make you a better programmer and cluster “citizen”. Master Spark programming techniques that maximize your productivity. Choose the right approach for each problem. Make the most of built-in platform constructs, including broadcast variables, accumulators, effective partitioning, caching, and checkpointing. Leverage powerful tools for managing streaming, structured, semi-structured, and unstructured data.
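Two of the platform constructs named above, broadcast variables and accumulators, look like this in a minimal PySpark sketch (illustrative, not from the book):

```python
# Sketch: a broadcast variable (read-only data shipped to executors)
# and an accumulator (a counter updated by tasks) in PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("constructs-sketch").getOrCreate()
sc = spark.sparkContext

lookup = sc.broadcast({"a": 1, "b": 2})  # shared read-only lookup table
unknown_rows = sc.accumulator(0)         # counter, readable on the driver


def score(key):
    if key not in lookup.value:
        unknown_rows.add(1)  # record keys missing from the lookup
        return 0
    return lookup.value[key]


total = sc.parallelize(["a", "b", "c"]).map(score).sum()
print(total, "unknown rows:", unknown_rows.value)
```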

Big Data Analytics with Hadoop 3

Big Data Analytics with Hadoop 3 is your comprehensive guide to understanding and leveraging the power of Apache Hadoop for large-scale data processing and analytics. Through practical examples, it introduces the tools and techniques necessary to integrate Hadoop with other popular frameworks, enabling efficient data handling, processing, and visualization.

What this Book will help me do

Understand the foundational components and features of Apache Hadoop 3 such as HDFS, YARN, and MapReduce. Gain the ability to integrate Hadoop with programming languages like Python and R for data analysis. Learn the skills to utilize tools such as Apache Spark and Apache Flink for real-time data analytics within the Hadoop ecosystem. Develop expertise in setting up a Hadoop cluster and performing analytics in cloud environments such as AWS. Master the process of building practical big data analytics pipelines for end-to-end data processing.

Author(s)

Sridhar Alla is a seasoned big data professional with extensive industry experience in building and deploying scalable big data analytics solutions. Known for his expertise in Hadoop and related ecosystems, Sridhar combines technical depth with clear communication in his writing, providing practical insights and hands-on knowledge.

Who is it for?

This book is tailored for data professionals, software engineers, and data scientists looking to expand their expertise in big data analytics using Hadoop 3. Whether you're an experienced developer or new to the big data ecosystem, this book provides the step-by-step guidance and practical examples needed to advance your skills and achieve your analytical goals.
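One of the simplest forms Hadoop/Python integration can take is a Hadoop Streaming job; here is the classic word-count mapper as an illustrative sketch (the streaming jar invocation and a matching reducer are assumed, not shown):

```python
# Sketch: a Hadoop Streaming word-count mapper in pure Python.
# Run under `hadoop jar hadoop-streaming.jar` with a matching reducer;
# the framework feeds input on stdin and sorts emitted keys.
import sys

for line in sys.stdin:
    for word in line.split():
        # emit key<TAB>value pairs for the shuffle-and-sort phase
        print(f"{word}\t1")
```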

Hands-On Data Science with Anaconda

Hands-On Data Science with Anaconda is your guide to harnessing the full potential of Anaconda, a powerful platform for data science and machine learning. With this book, you will learn how to set up Anaconda, manage packages, explore advanced data processing techniques, and create robust machine learning models using Python, R, and Julia.

What this Book will help me do

Master data preprocessing techniques including cleaning, sorting, and classification using Anaconda. Understand and utilize the conda package manager for efficient package management. Learn to explore and visualize data using packages and frameworks supported by Anaconda. Perform advanced operations like clustering, regression, and building predictive models. Implement distributed computing and manage environments effectively with Anaconda Cloud.

Author(s)

Yuxing Yan and his co-author are seasoned data science professionals with extensive experience in utilizing cutting-edge tools like Anaconda to simplify and enhance data science workflows. With a focus on making complex concepts accessible, they offer a practical and systematic approach to mastering tools that power real-world data science projects.

Who is it for?

This book is for data science practitioners, analysts, or developers with a basic understanding of Python, R, and linear algebra who want to scale their skills and learn to utilize the Anaconda platform for their projects. If you're seeking to work more effectively within the Anaconda ecosystem or equip yourself with efficient tools for data analysis and machine learning, this book is for you.

Summary

Building an ETL pipeline is a common need across businesses and industries. It’s easy to get one started but difficult to manage as new requirements are added and greater scalability becomes necessary. Rather than duplicating the efforts of other engineers it might be best to use a hosted service to handle the plumbing so that you can focus on the parts that actually matter for your business. In this episode CTO and co-founder of Alooma, Yair Weinberger, explains how the platform addresses the common needs of data collection, manipulation, and storage while allowing for flexible processing. He describes the motivation for starting the company, how their infrastructure is architected, and the challenges of supporting multi-tenancy and a wide variety of integrations.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.

For complete visibility into the health of your pipeline, including deployment tracking, and powerful alerting driven by machine-learning, DataDog has got you covered. With their monitoring, metrics, and log collection agent, including extensive integrations and distributed tracing, you’ll have everything you need to find and fix performance bottlenecks in no time. Go to dataengineeringpodcast.com/datadog today to start your free 14-day trial and get a sweet new T-Shirt.

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.

Your host is Tobias Macey and today I’m interviewing Yair Weinberger about Alooma, a company providing data pipelines as a service.

Interview

Introduction

How did you get involved in the area of data management?

What is Alooma and what is the origin story?

How is the Alooma platform architected?

I want to go into stream vs. batch here.

What are the most challenging components to scale?

How do you manage the underlying infrastructure to support your SLA of 5 nines?

What are some of the complexities introduced by processing data from multiple customers with various compliance requirements?

How do you sandbox users’ processing code to avoid security exploits?

What are some of the potential pitfalls for automatic schema management in the target database?

Given the large number of integrations, how do you maintain the

What are some challenges when creating integrations? Isn’t it simply a matter of conforming to an external API?

For someone getting started with Alooma, what does the workflow look like?

What are some of the most challenging aspects of building and maintaining Alooma?

What are your plans for the future of Alooma?

Contact Info

LinkedIn · @yairwein on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Alooma · Convert Media · Data Integration · ESB (Enterprise Service Bus) · Tibco · Mulesoft · ETL (Extract, Transform, Load) · Informatica · Microsoft SSIS · OLAP Cube · S3 · Azure Cloud Storage · Snowflake DB · Redshift · BigQuery · Salesforce · Hubspot · Zendesk · Spark · The Log: What every software engineer should know about real-time data’s unifying abstraction by Jay Kreps · RDBMS (Relational Database Management System) · SaaS (Software as a Service) · Change Data Capture · Kafka · Storm · Google Cloud PubSub · Amazon Kinesis · Alooma Code Engine · Zookeeper · Idempotence · Kafka Streams · Kubernetes · SOC2 · Jython · Docker · Python · Javascript · Ruby · Scala · PII (Personally Identifiable Information) · GDPR (General Data Protection Regulation) · Amazon EMR (Elastic Map Reduce) · Sequoia Capital · Lightspeed Investors · Redis · Aerospike · Cassandra · MongoDB

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Data Science Fundamentals for Python and MongoDB

Build the foundational data science skills necessary to work with and better understand complex data science algorithms. This example-driven book provides complete Python coding examples to complement and clarify data science concepts, and enrich the learning experience. Coding examples include visualizations whenever appropriate. The book is a necessary precursor to applying and implementing machine learning algorithms. The book is self-contained. All of the math, statistics, stochastic, and programming skills required to master the content are covered. In-depth knowledge of object-oriented programming isn’t required because complete examples are provided and explained. Data Science Fundamentals for Python and MongoDB is an excellent starting point for those interested in pursuing a career in data science. Like any science, the fundamentals of data science are a prerequisite to competency. Without proficiency in mathematics, statistics, data manipulation, and coding, the path to success is “rocky” at best. The coding examples in this book are concise, accurate, and complete, and perfectly complement the data science concepts introduced.

What You'll Learn

Prepare for a career in data science. Work with complex data structures in Python. Simulate with Monte Carlo and Stochastic algorithms. Apply linear algebra using vectors and matrices. Utilize complex algorithms such as gradient descent and principal component analysis. Wrangle, cleanse, visualize, and problem solve with data. Use MongoDB and JSON to work with data.

Who This Book Is For

The novice yearning to break into the data science world, and the enthusiast looking to enrich, deepen, and develop data science skills through mastering the underlying fundamentals that are sometimes skipped over in the rush to be productive. Some knowledge of object-oriented programming will make learning easier.
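As a sketch of two threads the book combines, Monte Carlo simulation and MongoDB storage, consider the following; the connection URL, database, and collection names are assumed defaults, not from the book.

```python
# Sketch: estimate pi by Monte Carlo sampling, then persist the result
# to MongoDB. Assumes a local MongoDB server on the default port.
import random

from pymongo import MongoClient


def estimate_pi(n: int) -> float:
    """Estimate pi by sampling points in the unit square."""
    hits = sum(
        1 for _ in range(n)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * hits / n


pi_hat = estimate_pi(100_000)
client = MongoClient("mongodb://localhost:27017")
client.test.experiments.insert_one({"name": "mc_pi", "estimate": pi_hat})
print(pi_hat)
```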

Summary

Business Intelligence software is often cumbersome and requires specialized knowledge of the tools and data to be able to ask and answer questions about the state of the organization. Metabase is a tool built with the goal of making the act of discovering information and asking questions of an organization's data easy and self-service for non-technical users. In this episode the CEO of Metabase, Sameer Al-Sakran, discusses how and why the project got started, the ways that it can be used to build and share useful reports, some of the useful features planned for future releases, and how to get it set up to start using it in your environment.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute.

For complete visibility into the health of your pipeline, including deployment tracking, and powerful alerting driven by machine-learning, DataDog has got you covered. With their monitoring, metrics, and log collection agent, including extensive integrations and distributed tracing, you’ll have everything you need to find and fix performance bottlenecks in no time. Go to dataengineeringpodcast.com/datadog today to start your free 14-day trial and get a sweet new T-Shirt.

Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.

Your host is Tobias Macey and today I’m interviewing Sameer Al-Sakran about Metabase, a free and open source tool for self-service business intelligence.

Interview

Introduction

How did you get involved in the area of data management?

The current goal for most companies is to be “data driven”. How would you define that concept?

How does Metabase assist in that endeavor?

What is the ratio of users that take advantage of the GUI query builder as opposed to writing raw SQL?

What level of complexity is possible with the query builder?

What have you found to be the typical use cases for Metabase in the context of an organization?

How do you manage scaling for large or complex queries?

What was the motivation for using Clojure as the language for implementing Metabase?

What is involved in adding support for a new data source?

What are the differentiating features of Metabase that would lead someone to choose it for their organization?

What have been the most challenging aspects of building and growing Metabase, both from a technical and business perspective?

What do you have planned for the future of Metabase?

Contact Info

Sameer

salsakran on GitHub · @sameer_alsakran on Twitter · LinkedIn

Metabase

Website · @metabase on Twitter · metabase on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Expa · Metabase · Blackjet · Hadoop · Imeem · Maslow’s Hierarchy of Data Needs · 2 Sided Marketplace · Honeycomb Interview · Excel · Tableau · Go-JEK · Clojure · React · Python · Scala · JVM · Redash · How To Lie With Data · Stripe · Braintree Payments

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast

Matplotlib for Python Developers - Second Edition

"Matplotlib for Python Developers" is your comprehensive guide to creating interactive and informative data visualizations using the Matplotlib library in Python. This book covers all the essentials-from building static plots to integrating dynamic graphics with web applications. What this Book will help me do Design and customize stunning data visualizations including heatmaps and scatter plots. Integrate Matplotlib visualization seamlessly into GUI applications using GTK3 or Qt. Utilize advanced plotting libraries like Seaborn and GeoPandas for enhanced visual representation. Develop web-based dashboards and plots that dynamically update using Django. Master techniques to prepare your Matplotlib projects for deployment in a cloud-based environment. Author(s) Authors Aldrin Yim, Claire Chung, and Allen Yu are seasoned developers and data scientists with extensive experience in Python and data visualization. They bring a practical touch to technical concepts, aiming to bridge theory with hands-on applications. With such a skilled team behind this book, you'll gain both foundational knowledge and advanced insights into Matplotlib. Who is it for? This book is the ideal resource for Python developers and data analysts looking to enhance their data visualization skills. If you're familiar with Python and want to create engaging, clear, and dynamic visualizations, this book will give you the tools to achieve that. Designed for a range of expertise, from beginners understanding the basics to experienced users diving into complex integrations, this book has something for everyone. You'll be guided through every step, ensuring you build the confidence and skills needed to thrive in this area.