talk-data.com

Topic

Python

programming_language, data_science, web_development

1446 tagged

Activity Trend

185 peak/qtr (2020-Q1 to 2026-Q1)

Activities

1446 activities · Newest first

Adam Weinstein is currently CEO and Co-Founder of Cursor, having worked at LinkedIn as a Senior Manager of Business Development and having founded enGreet, a print-on-demand greeting card company that merged crowd-sourcing with social expressions. In this episode, he describes his data analytics company and provides insight into creating a successful startup.


Shownotes

00:00 - Check us out on YouTube and SoundCloud!   

00:10 - Connect with Producer Steve Moore on LinkedIn & Twitter   

00:15 - Connect with Producer Liam Seston on LinkedIn & Twitter.   

00:20 - Connect with Producer Rachit Sharma on LinkedIn.

00:25 - Connect with Host Al Martin on LinkedIn & Twitter.   

00:55 - Connect with Adam Weinstein on LinkedIn.

03:55 - Find out more about Cursor.

06:45 - Learn more about Cursor's Co-Founder and CEO Adam Weinstein.

13:10 - Learn more about Big Data Analytics.

19:20 - What is Python/Jupyter Notebooks?

26:35 - Learn more about Data Fluency.

35:30 - What is a startup? 

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

podcast_episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Pawel Kapuscinski (Analytics Pros), Moe Kiss (Canva), Michael Helbling (Search Discovery)

WHERE were you the first time you listened to this podcast? Did you feel like you were JOINing a SELECT GROUP BY doing so? Can you COUNT the times you've thought to yourself, "Wow. These guys are sometimes really unFILTERed?" On this episode, Pawel Kapuscinski from Analytics Pros (and the Burnley Football Club) sits down with the group to shout at them in all caps. Or, at least, to talk about SQL: where it fits in the analyst's toolbox, how it is a powerful and necessary complement to Python and R, and who's to blame for the existence of so many different flavors of the language. Give it a listen. That's an ORDER (BY?)! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
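The episode's central point, that SQL complements Python and R in the analyst's toolbox, is easy to demonstrate with Python's built-in sqlite3 module. A minimal sketch using a hypothetical pageviews table: aggregate in SQL, then hand the small result set to Python for further work.

```python
import sqlite3

# In-memory database standing in for a real warehouse (table and data are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pageviews (page TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO pageviews VALUES (?, ?)",
    [("home", 120), ("pricing", 45), ("home", 80), ("blog", 30)],
)

# Aggregate in SQL, then pull the small result set into Python.
rows = conn.execute(
    "SELECT page, SUM(views) AS total "
    "FROM pageviews GROUP BY page ORDER BY total DESC"
).fetchall()

for page, total in rows:
    print(page, total)  # home 200, pricing 45, blog 30
conn.close()
```

Pushing the GROUP BY into the database keeps the heavy lifting close to the data; Python picks up where set-based SQL becomes awkward.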

This episode reboots our podcast with the theme of Natural Language Processing for the next few months. We begin with introductions of Yoshi and Linh Da and then get into a broad discussion about natural language processing: what it is, what some of the classic problems are, and just a bit on approaches. Finishing out the show is an interview with Lucy Park about her work on the KoNLPy library for Korean NLP in Python. If you want to share your NLP project, please join our Slack channel.  We're eager to see what listeners are working on! http://konlpy.org/en/latest/    
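As a taste of the classic problems mentioned above, here is a stdlib-only sketch of the usual first step in an NLP pipeline: tokenization and frequency counting. The naive regex tokenizer below is exactly the baseline that morphological analyzers such as KoNLPy improve on for languages like Korean, where whitespace does not delimit meaningful units.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercased regex tokenization: a crude baseline that works passably
    for English and fails for morphologically rich languages."""
    return re.findall(r"[a-z']+", text.lower())

text = "Natural language processing turns raw text into data: tokens, counts, and models."
tokens = tokenize(text)
freq = Counter(tokens)
print(freq.most_common(3))
```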

Principles of Data Science - Second Edition

Dive into the intricacies of data science with 'Principles of Data Science'. This book takes you on a journey to explore, analyze, and transform data into actionable insights using mathematical models, Python programming, and machine learning concepts. With a clear and engaging style, you will progress from understanding theoretical foundations to implementing advanced techniques in real-world scenarios. What this Book will help me do Master the five critical steps in a practical data science workflow. Clean and prepare raw datasets for accurate machine learning models. Understand and apply statistical models and mathematical principles for data analysis. Build and evaluate predictive models using Python and effective metrics. Create impactful visualizations that clearly convey data insights. Author(s) Sinan Ozdemir is an expert in data science, with a background in developing and teaching advanced courses in machine learning and predictive analytics. With co-authors Kakade and Tibaldeschi, they bring years of hands-on experience in data science to this comprehensive guide. Their approach simplifies complex concepts, making them accessible without sacrificing depth, to empower readers to make data-driven decisions confidently. Who is it for? This book is ideal for aspiring data scientists seeking a practical introduction to the field. It's perfect for those with basic math skills looking to apply them to data science or experienced programmers who want to explore the mathematical foundation of data science. A basic understanding of Python programming will be invaluable, but the book builds up core concepts step-by-step, making it accessible to both beginners and experienced professionals.

Numerical Python: Scientific Computing and Data Science Applications with Numpy, SciPy and Matplotlib

Leverage the numerical and mathematical modules in Python and its standard library as well as popular open source numerical Python packages like NumPy, SciPy, FiPy, matplotlib and more. This fully revised edition, updated with the latest details of each package and changes to Jupyter projects, demonstrates how to numerically compute solutions and mathematically model applications in big data, cloud computing, financial engineering, business management and more. Numerical Python, Second Edition, presents many brand-new case study examples of applications in data science and statistics using Python, along with extensions to many previous examples. Each of these demonstrates the power of Python for rapid development and exploratory computing due to its simple and high-level syntax and multiple options for data analysis. After reading this book, readers will be familiar with many computing techniques including array-based and symbolic computing, visualization and numerical file I/O, equation solving, optimization, interpolation and integration, and domain-specific computational problems, such as differential equation solving, data analysis, statistical modeling and machine learning. What You'll Learn Work with vectors and matrices using NumPy Plot and visualize data with Matplotlib Perform data analysis tasks with Pandas and SciPy Review statistical modeling and machine learning with statsmodels and scikit-learn Optimize Python code using Numba and Cython Who This Book Is For Developers who want to understand how to use Python and its related ecosystem for numerical computing.
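To illustrate the kind of numerical computing the book covers, here is a stdlib sketch of the composite trapezoid rule; scipy.integrate.quad does the same job with adaptive quadrature and error estimates.

```python
import math

def trapezoid(f, a, b, n=10_000):
    """Composite trapezoid rule: approximate the integral of f over [a, b]
    with n equal subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# The integral of sin(x) from 0 to pi is exactly 2.
approx = trapezoid(math.sin, 0, math.pi)
print(round(approx, 6))  # 2.0
```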

Bioinformatics with Python Cookbook - Second Edition

"Bioinformatics with Python Cookbook" offers a detailed exploration into the modern approaches to computational biology using the Python programming language. Through hands-on recipes, you will master the practical applications of bioinformatics, enabling you to analyze vast biological data effectively using Python libraries and tools. What this Book will help me do Master processing and analyzing genomic datasets in Python to enable accurate bioinformatics discoveries. Understand and apply next-generation sequencing techniques for advanced biological research. Learn to utilize machine learning approaches such as PCA and decision trees for insightful data analysis in biology. Gain proficiency in using high-performance computing frameworks like Dask and Spark for scalable bioinformatics workflows. Develop capabilities to visually represent biological data interactions and insights for presentation and analysis. Author(s) Tiago Antao is a computational scientist specializing in bioinformatics with extensive experience in Python programming applied to biological sciences. He has worked on numerous bioinformatics projects and has a special interest in using Python to bridge biology and data science. Tiago's approachable writing style ensures that both newcomers and experts benefit from his insights. Who is it for? This book is designed for bioinformatics professionals, researchers, and data scientists who are eager to harness the power of Python programming for their biological data analysis needs. If you are familiar with Python and are looking to tackle intermediate to advanced bioinformatics challenges using practical recipes, this book is ideal for you. It is suitable for those seeking to expand their knowledge in computational biology and data visualization techniques. Whether you are working on next-generation sequencing or population genetics, this resource will guide you effectively.
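A flavor of the genomic analyses the book covers: GC content, one of the first statistics computed over a DNA sequence. This is a plain-Python sketch; libraries such as Biopython provide optimized equivalents, and the book's recipes go far beyond it.

```python
def gc_content(seq: str) -> float:
    """Fraction of bases in a DNA sequence that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGCGCGTTA"))  # 0.5
```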

Hands-On Big Data Modeling

This book, Hands-On Big Data Modeling, provides you with practical guidance on data modeling techniques, focusing particularly on the challenges of big data. You will learn the concepts behind various data models, explore tools and platforms for efficient data management, and gain hands-on experience with structured and unstructured data. What this Book will help me do Master the fundamental concepts of big data and its challenges. Explore advanced data modeling techniques using SQL, Python, and R. Design effective models for structured, semi-structured, and unstructured data types. Apply data modeling to real-world datasets like social media and sensor data. Optimize data models for performance and scalability in various big data platforms. Author(s) The authors of this book are experienced data architects and engineers with a strong background in developing scalable data solutions. They bring their collective expertise to simplify complex concepts in big data modeling, ensuring readers can effectively apply these techniques in their projects. Who is it for? This book is intended for data architects, business intelligence professionals, and any programmer interested in understanding and applying big data modeling concepts. If you are already familiar with basic data management principles and want to enhance your skills, this book is perfect for you. You will learn to tackle real-world datasets and create scalable models. Additionally, it is suitable for professionals transitioning to working with big data frameworks.
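The structured-versus-semi-structured distinction above can be made concrete in a few lines: a semi-structured JSON record (the sensor payload here is hypothetical) is validated into a structured model that downstream code can rely on.

```python
import json
from dataclasses import dataclass

# Semi-structured input, as it might arrive from a sensor feed (hypothetical shape).
record_json = '{"sensor_id": "t-17", "reading": 21.4, "meta": {"unit": "C"}}'

# A structured model imposes a schema the rest of the pipeline can depend on.
@dataclass
class Reading:
    sensor_id: str
    value: float
    unit: str

raw = json.loads(record_json)
reading = Reading(raw["sensor_id"], raw["reading"], raw["meta"].get("unit", "?"))
print(reading)
```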

Hands-On Data Science with SQL Server 2017

In "Hands-On Data Science with SQL Server 2017," you will discover how to implement end-to-end data analysis workflows, leveraging SQL Server's robust capabilities. This book guides you through collecting, cleaning, and transforming data, querying for insights, creating compelling visualizations, and even constructing predictive models for sophisticated analytics. What this Book will help me do Grasp the essential data science processes and how SQL Server supports them. Conduct data analysis and create interactive visualizations using Power BI. Build, train, and assess predictive models using SQL Server tools. Integrate SQL Server with R, Python, and Azure for enhanced functionality. Apply best practices for managing and transforming big data with SQL Server. Author(s) Marek Chmel and Vladimír Mužný bring their extensive experience in data science and database management to this book. Marek is a seasoned database specialist with a strong background in SQL, while Vladimír is known for his instructional expertise in analytics and data manipulation. Together, they focus on providing actionable insights and practical examples tailored for data professionals. Who is it for? This book is an ideal resource for aspiring and seasoned data scientists, data analysts, and database professionals aiming to deepen their expertise in SQL Server for data science workflows. Beginners with fundamental SQL knowledge will find it a guided entry into data science applications. It is especially suited for those who aim to implement data-driven solutions in their roles while leveraging SQL's capabilities.

Mastering Matplotlib 2.x

Mastering Matplotlib 2.x guides you through the art and science of creating sophisticated data visualizations with Python's powerful Matplotlib library. You will start by learning the basics of plotting and customizing your charts, progressing to more advanced topics such as 3D visualization, geospatial data display, and creating interactive plots using Jupyter Notebook. What this Book will help me do Create complex and highly customizable data plots using Matplotlib. Effectively visualize data in three dimensions, including geospatial data. Use advanced matplotlib features to represent non-Cartesian and vector data. Build interactive visualizations using Jupyter Notebook and Python. Develop special-purpose and movie-style plots to enhance data representation. Author(s) Keller is a seasoned software engineer and data visualization enthusiast with years of experience using Python for data analysis. Their practical and hands-on approach ensures that readers can directly apply the concepts taught in their projects. The author aims to make advanced visualization techniques accessible to all. Who is it for? This book is perfect for developers, scientists, and analysts who need sophisticated visualization tools for their projects. Prior experience with Python and basic familiarity with Matplotlib will help you get the most out of the book. If you're looking to deepen your understanding of data visualization or to create interactive and advanced visualizations, this book is for you.

Learn QGIS - Fourth Edition

Unlock the world of geospatial analysis and mapping with 'Learn QGIS.' This comprehensive guide takes you through the capabilities of QGIS 3.4, covering everything from data loading and styling to spatial analysis and plugin development. Geared towards beginners and seasoned GIS users alike, you'll gain hands-on expertise to master QGIS effectively and confidently. What this Book will help me do Load, edit, and manage geospatial data efficiently in QGIS 3.4 for impactful analysis. Create professional-grade maps with custom styling and data visualization techniques. Delve into the QGIS 3.4 processing toolbox, enhancing analysis workflows. Build bespoke QGIS plugins using Python and QT Designer for tailored solutions. Use QGIS 3.4's advanced features like 3D views and GeoPackage efficiently. Author(s) Cutts and Anita Graser bring their extensive technical expertise to 'Learn QGIS.' Cutts has a background in geospatial technologies and a focus on practical GIS applications. Anita Graser is a recognized QGIS expert, experienced in both software development and geospatial analysis. Together, they share their knowledge in an accessible style, ensuring readers of different levels can benefit. Who is it for? This book is ideal for developers, consultants, or GIS enthusiasts who want to expand their skills in using QGIS 3.4 for geospatial data analysis and mapping. Beginners looking to understand core QGIS capabilities will also find value. If you're aiming to develop professional maps and customize QGIS, this is the resource for you.

Data Analysis and Visualization Using Python: Analyze Data to Create Visualizations for BI Systems

Look at Python from a data science point of view and learn proven techniques for data visualization as used in making critical business decisions. Starting with an introduction to data science with Python, you will take a closer look at the Python environment and get acquainted with editors such as Jupyter Notebook and Spyder. After going through a primer on Python programming, you will grasp fundamental Python programming techniques used in data science. Moving on to data visualization, you will see how it caters to modern business needs and forms a key factor in decision-making. You will also take a look at some popular data visualization libraries in Python. Shifting focus to data structures, you will learn the various aspects of data structures from a data science perspective. You will then work with file I/O and regular expressions in Python, followed by gathering and cleaning data. Moving on to exploring and analyzing data, you will look at advanced data structures in Python. Then, you will take a deep dive into data visualization techniques, going through a number of plotting systems in Python. In conclusion, you will complete a detailed case study, where you’ll get a chance to revisit the concepts you’ve covered so far. What You Will Learn Use Python programming techniques for data science Master data collections in Python Create engaging visualizations for BI systems Deploy effective strategies for gathering and cleaning data Integrate the Seaborn and Matplotlib plotting systems Who This Book Is For Developers with basic Python programming knowledge looking to adopt key strategies for data analysis and visualizations using Python.
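As a small example of the gathering-and-cleaning step the book describes, here is a regex-based cleaner for messy price strings; the data is invented for illustration.

```python
import re

# Messy free-text entries, as might come from a web form (hypothetical data).
raw_prices = ["$1,200.50", " 300 USD", "n/a", "$45"]

def parse_price(text: str):
    """Extract a numeric price from free text, returning None for unparseable entries."""
    m = re.search(r"(\d[\d,]*(?:\.\d+)?)", text)
    return float(m.group(1).replace(",", "")) if m else None

cleaned = [p for p in map(parse_price, raw_prices) if p is not None]
print(cleaned)  # [1200.5, 300.0, 45.0]
```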

Summary

Jupyter notebooks have gained popularity among data scientists as an easy way to do exploratory analysis and build interactive reports. However, this can cause difficulties when trying to move the work of the data scientist into a more standard production environment, due to the translation efforts that are necessary. At Netflix they had the crazy idea that perhaps that last step isn’t necessary, and the production workflows can just run the notebooks directly. Matthew Seal is one of the primary engineers who has been tasked with building the tools and practices that allow the various data oriented roles to unify their work around notebooks. In this episode he explains the rationale for the effort, the challenges that it has posed, the development that has been done to make it work, and the benefits that it provides to the Netflix data platform teams.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat. Your host is Tobias Macey and today I’m interviewing Matthew Seal about the ways that Netflix is using Jupyter notebooks to bridge the gap between data roles.

Interview

Introduction

How did you get involved in the area of data management?

Can you start by outlining the motivation for choosing Jupyter notebooks as the core interface for your data teams?

Where are you using notebooks and where are you not?

What is the technical infrastructure that you have built to support that design choice?

Which team was driving the effort?

Was it difficult to get buy in across teams?

How much shared code have you been able to consolidate or reuse across teams/roles?

Have you investigated the use of any of the other notebook platforms for similar workflows?

What are some of the notebook anti-patterns that you have encountered and what conventions or tooling have you established to discourage them?

What are some of the limitations of the notebook environment for the work that you are doing?

What have been some of the most challenging aspects of building production workflows on top of Jupyter notebooks?

What are some of the projects that are ongoing or planned for the future that you are most excited by?

Contact Info

Matthew Seal

Email LinkedIn @codeseal on Twitter MSeal on GitHub

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Netflix Notebook Blog Posts Nteract Tooling OpenGov Project Jupyter Zeppelin Notebooks Papermill Titus Commuter Scala Python R Emacs NBDime

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA. Support Data Engineering Podcast.

Matplotlib 3.0 Cookbook

Matplotlib 3.0 Cookbook is your go-to guide for mastering the Matplotlib library in Python for creating a wide range of data visualizations. Through 150+ practical recipes, you will learn how to design intuitive and detailed charts, graphs, and dashboards, navigating from simple plots to advanced interactive and 3D visualizations. What this Book will help me do Develop professional-quality data visualizations using Matplotlib. Leverage Matplotlib's API for both quick plotting and advanced customization. Create interactive and animative plots for engaging data representation. Extend Matplotlib functionalities with toolkits like cartopy and axisartist. Integrate Matplotlib figures into GUI applications for broader usage. Author(s) Poladi and Borkar are experienced Python developers and enthusiasts who have collaborated in creating a resourceful guide to Matplotlib. They bring extensive experience in data science visualization and Python programming. Their collaborative effort ensures clarity and an approachable learning curve for anyone delving into graphical data representation using Matplotlib. Who is it for? This book is ideal for data scientists, Python developers, and visualization enthusiasts eager to enhance their technical plotting skills. The content covers both fundamentals and advanced topics, suitable for users ranging from beginners curious about Python visualization to experts seeking streamlined workflows and advanced techniques.

Summary

As data science becomes more widespread and has a bigger impact on the lives of people, it is important that those projects and products are built with a conscious consideration of ethics. Keeping ethical principles in mind throughout the lifecycle of a data project helps to reduce the overall effort of preventing negative outcomes from the use of the final product. Emily Miller and Peter Bull of Driven Data have created Deon to improve the communication and conversation around ethics among and between data teams. It is a Python project that generates a checklist of common concerns for data oriented projects at the various stages of the lifecycle where they should be considered. In this episode they discuss their motivation for creating the project, the challenges and benefits of maintaining such a checklist, and how you can start using it today.

Preamble

Hello and welcome to the Data Engineering Podcast, the show about modern data management. When you’re ready to build your next pipeline you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API, you’ve got everything you need to run a bullet-proof data platform. Go to dataengineeringpodcast.com/linode to get a $20 credit and launch a new server in under a minute. Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch. Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat. This is your host Tobias Macey and this week I am sharing an episode from my other show, Podcast.init, about a project from Driven Data called Deon. It is a simple tool that generates a checklist of ethical considerations for the various stages of the lifecycle for data oriented projects. This is an important topic for all of the teams involved in the management and creation of projects that leverage data. So give it a listen, and if you like what you hear, be sure to check out the other episodes at pythonpodcast.com.

Interview

Introductions

How did you get introduced to Python?

Can you start by describing what Deon is and your motivation for creating it?

Why a checklist, specifically? What’s the advantage of this over an oath, for example?

What is unique to data science in terms of the ethical concerns, as compared to traditional software engineering?

What is the typical workflow for a team that is using Deon in their projects?

Deon ships with a default checklist but allows for customization. What are some common addendums that you have seen?

Have you received pushback on any of the default items?

How does Deon simplify communication around ethics across team boundaries?

What are some of the most often overlooked items?

What are some of the most difficult ethical concerns to comply with for a typical data science project?

How has Deon helped you at Driven Data?

What are the customer facing impacts of embedding a discussion of ethics in the product development process?

Some of the items on the default checklist coincide with regulatory requirements. Are there any cases where regulation is in conflict with an ethical concern that you would like to see practiced?

What are your hopes for the future of the Deon project?

Keep In Touch

Emily

LinkedIn ejm714 on GitHub

Peter

LinkedIn @pjbull on Twitter pjbull on GitHub

Driven Data

@drivendataorg on Twitter drivendataorg on GitHub Website

Picks

Tobias

Richard Bond Glass Art

Emily

Tandem Coffee in Portland, Maine

Peter

The Model Bakery in Saint Helena and Napa, California

Links

Deon Driven Data International Development Brookings Institution Stata Econometrics Metis Bootcamp Pandas

Podcast Episode

C# .NET Podcast.init Episode On Software Ethics Jupyter Notebook

Podcast Episode

Word2Vec cookiecutter data science Logistic Regression


Data Analytics for IT Networks: Developing Innovative Use Cases, First Edition

Use data analytics to drive innovation and value throughout your network infrastructure Network and IT professionals capture immense amounts of data from their networks. Buried in this data are multiple opportunities to solve and avoid problems, strengthen security, and improve network performance. To achieve these goals, IT networking experts need a solid understanding of data science, and data scientists need a firm grasp of modern networking concepts. Data Analytics for IT Networks fills these knowledge gaps, allowing both groups to drive unprecedented value from telemetry, event analytics, network infrastructure metadata, and other network data sources. Drawing on his pioneering experience applying data science to large-scale Cisco networks, John Garrett introduces the specific data science methodologies and algorithms network and IT professionals need, and helps data scientists understand contemporary network technologies, applications, and data sources. After establishing this shared understanding, Garrett shows how to uncover innovative use cases that integrate data science algorithms with network data. He concludes with several hands-on, Python-based case studies reflecting Cisco Customer Experience (CX) engineers’ work supporting its largest customers. These are designed to serve as templates for developing custom solutions ranging from advanced troubleshooting to service assurance.
Understand the data analytics landscape and its opportunities in networking

See how elements of an analytics solution come together in the practical use cases

Explore and access network data sources, and choose the right data for your problem

Innovate more successfully by understanding mental models and cognitive biases

Walk through common analytics use cases from many industries, and adapt them to your environment

Uncover new data science use cases for optimizing large networks

Master proven algorithms, models, and methodologies for solving network problems

Adapt use cases built with traditional statistical methods

Use data science to improve network infrastructure analysis

Analyze control and data planes with greater sophistication

Fully leverage your existing Cisco tools to collect, analyze, and visualize data
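A minimal sketch of the kind of telemetry analysis described above: flagging outliers in interface-utilization samples with a z-score test. The data and threshold are invented for illustration; the book's case studies are considerably more sophisticated.

```python
import statistics

# Hypothetical interface-utilization telemetry (percent, one sample per minute).
samples = [42, 44, 41, 43, 45, 44, 42, 95, 43, 44]

mean = statistics.fmean(samples)
stdev = statistics.stdev(samples)

# Flag samples more than 2 standard deviations from the mean.
anomalies = [x for x in samples if abs(x - mean) > 2 * stdev]
print(anomalies)  # [95]
```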

podcast_episode
by Val Kroll, Julie Hoyer, Simo Ahava (NetBooster, Helsinki - Finland), Tim Wilson (Analytics Power Hour - Columbus (OH)), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Are you deeply knowledgeable in JavaScript, R, the DOM, Python, AWS, jQuery, Google Cloud Platform, and SQL? Good for you! If you're not, should you be? What does "technical" mean, anyway? And, is it even possible for an analyst to dive into all of these different areas? English philosophy expert The Notorious C.M.O. (aka, Simo Ahava) returns to the show to share his thoughts on the subject in this episode. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Python Data Science Essentials - Third Edition

Learn the essentials of data science with Python through this comprehensive guide. By the end of this book, you'll have an in-depth understanding of core data science workflows, tools, and techniques. What this Book will help me do Understand and apply data manipulation techniques with pandas and NumPy. Build and optimize machine learning models with scikit-learn. Analyze and visualize complex datasets for derived insights. Implement exploratory data analysis to uncover trends in data. Leverage advanced techniques like graph analysis and deep learning for sophisticated projects. Author(s) Alberto Boschetti and Luca Massaron combine their extensive expertise in data science and Python programming to guide readers effectively. With hands-on knowledge and a passion for teaching, they provide practical insights across the data science lifecycle. Who is it for? This book is ideal for aspiring data scientists, data analysts, and software developers aiming to enhance their data analysis skills. Suited for beginners familiar with Python and basic statistics, this guide bridges the gap to real-world applications. Advance your career by unlocking crucial data science expertise.

Python Data Analytics: With Pandas, NumPy, and Matplotlib

Explore the latest Python tools and techniques to help you tackle the world of data acquisition and analysis. You'll review scientific computing with NumPy, visualization with matplotlib, and machine learning with scikit-learn. This revision is fully updated with new content on social media data analysis, image analysis with OpenCV, and deep learning libraries. Each chapter includes multiple examples demonstrating how to work with each library. At its heart lies the coverage of pandas, for high-performance, easy-to-use data structures and tools for data manipulation. Author Fabio Nelli expertly demonstrates using Python for data processing, management, and information retrieval. Later chapters apply what you've learned to handwriting recognition and extending graphical capabilities with the JavaScript D3 library. Whether you are dealing with sales data, investment data, medical data, web page usage, or other data sets, Python Data Analytics, Second Edition is an invaluable reference with its examples of storing, accessing, and analyzing data. What You'll Learn Understand the core concepts of data analysis and the Python ecosystem Go in depth with pandas for reading, writing, and processing data Use tools and techniques for data visualization and image analysis Examine popular deep learning libraries Keras, Theano, TensorFlow, and PyTorch Who This Book Is For Experienced Python developers who need to learn about Pythonic tools for data analysis

podcast_episode
by Val Kroll, Julie Hoyer, Tim Wilson (Analytics Power Hour - Columbus (OH)), Simon Rumble (Snowflake Analytics), Moe Kiss (Canva), Michael Helbling (Search Discovery)

Tell me about a time you produced an amazing analysis. Please provide your response in the form of a Jupyter notebook that uses Python or R (or both!) to pull words from a corpus that contains all words in the OED stored in a BigQuery table. I mean, that's a fair question to ask, right? No? Well, what questions and techniques are effective for assessing an analyst's likelihood of succeeding in your organization? How should those techniques differ when looking for a technical analyst as opposed to a more business-oriented one? On this episode of the show -- recorded while our recording service clearly thought it was in a job interview that it needed to deliberately tank -- Simon Rumble from Snowflake Analytics joined the gang to share ideas on the topic. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.

Nonlinear Digital Filtering with Python

This book discusses important structural filter classes including the median filter and a number of its extensions (e.g., weighted and recursive median filters), and Volterra filters based on polynomial nonlinearities. Using results from algebra and the theory of functional equations to construct and characterize behaviorally defined nonlinear filter classes, the text first introduces Python programming, and then proposes practical, bottom-up strategies for designing more complex and capable filters from simpler components in a way that preserves the key properties of these components.