talk-data.com

Topic: Python

Tags: programming_language, data_science, web_development

1446 activities tagged

Activity Trend: peak of 185 activities per quarter, 2020-Q1 to 2026-Q1

Activities

1446 activities · Newest first

A 90-minute hands-on workshop to learn how to leverage the FiftyOne computer vision toolset. Topics include FiftyOne basics (terms, architecture, installation), an overview of useful workflows to explore, understand, and curate data, and how FiftyOne represents and semantically slices unstructured CV data. The second half provides a hands-on introduction to FiftyOne: loading datasets from the FiftyOne Dataset Zoo, navigating the FiftyOne App, programmatically inspecting dataset attributes, adding new samples and custom attributes, generating and evaluating model predictions, and saving insightful views.
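
For a taste of the hands-on half, here is a minimal sketch of the zoo-loading and App workflow described above (assuming the fiftyone package is installed; the image path and the "reviewed" field are hypothetical illustrations):

    import fiftyone as fo
    import fiftyone.zoo as foz

    # Load a small sample dataset from the FiftyOne Dataset Zoo
    dataset = foz.load_zoo_dataset("quickstart")

    # Programmatically inspect dataset attributes
    print(dataset.name, len(dataset))
    print(dataset.first())

    # Add a new sample with a custom attribute (hypothetical local image path)
    sample = fo.Sample(filepath="/path/to/image.jpg")
    sample["reviewed"] = False
    dataset.add_sample(sample)

    # Launch the App, then save an insightful view for later
    session = fo.launch_app(dataset)
    view = dataset.match(fo.ViewField("reviewed") == False)
    dataset.save_view("needs_review", view)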

Principles of Data Science - Third Edition

Principles of Data Science offers an end-to-end introduction to data science fundamentals, blending key mathematical concepts with practical programming. You'll learn how to clean and prepare data, construct predictive models, and leverage modern tools like pre-trained models for NLP and computer vision. By integrating theory and practice, this book sets the foundation for impactful data-driven decision-making.

What this Book will help me do

Develop a solid understanding of foundational statistics and machine learning. Learn how to clean, transform, and visualize data for impactful analysis. Explore transfer learning and pre-trained models for advanced AI tasks. Understand ethical implications, biases, and governance in AI and ML. Gain the knowledge to implement complete data pipelines effectively.

Author(s)

Sinan Ozdemir is an experienced data scientist, educator, and author with a deep passion for making complex topics accessible. With a background in computer science and applied statistics, Sinan has taught data science at leading institutions and authored multiple books on the topic. His practical approach to teaching combines real-world examples with insightful explanations, ensuring learners gain both competence and confidence.

Who is it for?

This book is ideal for beginners in data science who want to gain a comprehensive understanding of the field. If you have a background in programming or mathematics and are eager to combine these skills to analyze and extract insights from data, this book will guide you. Individuals working with machine learning or AI who need to solidify their foundational knowledge will find it invaluable. Some familiarity with Python is recommended to follow along seamlessly.
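
As a flavor of the clean-and-prepare workflow the book covers, a minimal pandas sketch (my illustration, not code from the book; the columns and values are made up):

    import pandas as pd

    # Hypothetical raw data with typical problems: numbers stored as strings, missing values
    df = pd.DataFrame({
        "age": ["34", None, "29", "n/a"],
        "signup_date": ["2024-01-05", "2024-02-17", None, "2024-03-02"],
    })

    df["age"] = pd.to_numeric(df["age"], errors="coerce")  # non-numeric entries become NaN
    df["age"] = df["age"].fillna(df["age"].median())       # impute missing ages
    df["signup_date"] = pd.to_datetime(df["signup_date"])  # parse timestamps
    df = df.dropna(subset=["signup_date"])                 # drop rows missing a key field
    print(df.dtypes)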

Summary

Monitoring and auditing IT systems for security events requires the ability to quickly analyze massive volumes of unstructured log data. The majority of products that are available either require too much effort to structure the logs or aren't fast enough for interactive use cases. Cliff Crosland co-founded Scanner to provide fast querying of high-scale log data for security auditing. In this episode he shares the story of how it got started, how it works, and how you can get started with it.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Cliff Crosland about Scanner, a security data lake platform for analyzing security logs and identifying issues quickly and cost-effectively.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what Scanner is and the story behind it?

What were the shortcomings of other tools that are available in the ecosystem?

What is Scanner explicitly not trying to solve for in the security space? (e.g. SIEM)

A query engine is useless without data to analyze. What are the data acquisition paths/sources that you are designed to work with? (e.g. CloudTrail logs, app logs, etc.)

What are some of the other sources of signal for security monitoring that would be valuable to incorporate or integrate with through Scanner?

Log data is notoriously messy, with no strictly defined format. How do you handle introspection and querying across loosely structured records that might span multiple sources and inconsistent labelling strategies?

Can you describe the architecture of the Scanner platform?

What were the motivating constraints that led you to your current implementation?

How have the design and goals of the product changed since you first started working on it?

Given the security-oriented customer base that you are targeting, how do you address trust/network boundaries for compliance with regulatory/organizational policies?

What are the personas of the end-users for Scanner?

How has that influenced the way that you think about the query formats, APIs, user experience, etc. for the product?

For teams who are working with Scanner can you describe how it fits into their workflow?

What are the most interesting, innovative, or unexpected ways that you have seen Scanner used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on Scanner?

When is Scanner the wrong choice?

What do you have planned for the future of Scanner?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

We talked about:

Ivan’s background
How Ivan became interested in investing
Getting financial data to run simulations
Open, High, Low, Close, Volume
Risk management strategy
Testing your trading strategies
Sticking to your strategy
Important metrics and remembering about trading fees
Important features
Deployment
How DataTalks.Club courses helped Ivan
Ivan’s site and course sign-up
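
To ground the OHLCV, strategy-testing, and trading-fee topics above, here is a minimal moving-average crossover backtest on synthetic daily closes (my illustration; the strategy and parameters are assumptions, not Ivan's actual approach):

    import numpy as np
    import pandas as pd

    # Synthetic daily close prices standing in for real OHLCV data
    rng = np.random.default_rng(0)
    close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 500))))

    fast = close.rolling(10).mean()
    slow = close.rolling(50).mean()
    position = (fast > slow).astype(int).shift(1).fillna(0)  # trade on yesterday's signal

    daily_ret = close.pct_change().fillna(0)
    fees = 0.001 * position.diff().abs().fillna(0)  # pay 10 bps whenever the position changes
    strategy_ret = position * daily_ret - fees
    print("total return:", (1 + strategy_ret).prod() - 1)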

Links:

Exploring Finance APIs: https://pythoninvest.com/long-read/exploring-finance-apis
Python Invest Blog Articles: https://pythoninvest.com/blog

Free ML Engineering course: http://mlzoomcamp.com
Join DataTalks.Club: https://datatalks.club/slack.html
Our events: https://datatalks.club/events.html

In January 2024, six activists were identified by British police in London, suspected of planning to disrupt the London Stock Exchange through a lock-in, in an attempt to prevent the building from opening for trading. Despite the foiled attempt, the strategy for this protest was inherently flawed: trading no longer requires a busy exchange with raucous shouting and phone calls to facilitate the flow of investment around the world. Nowadays, machines can trade in fractions of a second, ingesting huge amounts of real-time data to execute finely tuned trading strategies. But who programs these trading machines, and how do we assess risk when trading at such high volume and in such short periods of time? Anthony Markham is Vice President, Quantitative Developer at Deutsche Bank. With a background in aerospace and software engineering, Anthony has experience in data science, facial recognition research, tertiary education, and quantitative finance, developing mostly in Python, Julia, and C++. When not working, Anthony enjoys working on personal projects, flying aircraft, and playing sports. In the episode, Richie and Anthony cover what algorithmic trading is, the use of machine learning techniques in trading strategies, the challenges of handling large datasets with low latency, risk management in algorithmic trading, data analysis techniques for handling time series data, the challenges of deep neural networks in trading, the diverse roles and skills of those who work in algorithmic trading, and much more.

Links Mentioned in the Show:

Flash crash of 2010
KDB+ Q query language
[Course] Quantitative Risk Management in Python
Understanding Value at Risk (VaR)
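
Since Value at Risk appears in the show links, a minimal historical-VaR sketch on synthetic returns (my illustration, not material from the episode):

    import numpy as np

    # Synthetic daily returns standing in for a real P&L history
    rng = np.random.default_rng(42)
    daily_returns = rng.normal(0.0003, 0.012, 1000)

    # Historical 95% VaR: the loss threshold exceeded on only 5% of days
    var_95 = -np.percentile(daily_returns, 5)
    print(f"1-day 95% VaR: {var_95:.2%} of portfolio value")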

Summary

Databases and analytics architectures have gone through several generational shifts. A substantial amount of the data that is being managed in these systems is related to customers and their interactions with an organization. In this episode Tasso Argyros, CEO of ActionIQ, gives a summary of the major epochs in database technologies and how he is applying the capabilities of cloud data warehouses to the challenge of building more comprehensive experiences for end-users through a modern customer data platform (CDP).

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Data projects are notoriously complex. With multiple stakeholders to manage across varying backgrounds and toolchains, even simple reports can become unwieldy to maintain. Miro is your single pane of glass where everyone can discover, track, and collaborate on your organization's data. I especially like the ability to combine your technical diagrams with data documentation and dependency mapping, allowing your data engineers and data consumers to communicate seamlessly about your projects. Find simplicity in your most complex projects with Miro. Your first three Miro boards are free when you sign up today at dataengineeringpodcast.com/miro. That’s three free boards at dataengineeringpodcast.com/miro.

Your host is Tobias Macey and today I'm interviewing Tasso Argyros about the role of a customer data platform in the context of the modern data stack.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what the role of the CDP is in the context of a business's data ecosystem?

What are the core technical challenges associated with building and maintaining a CDP?

What are the organizational/business factors that contribute to the complexity of these systems?

The early days of CDPs came with the promise of "Customer 360". Can you unpack that concept and how it has changed over the past ~5 years?

Recent years have seen the adoption of reverse ETL, cloud data warehouses, and sophisticated product analytics suites. How has that changed the architectural approach to CDPs?

How have the architectural shifts changed the ways that organizations interact with their customer data?

How have the responsibilities shifted across different roles?

What are the governance policy and enforcement challenges that are added with the expansion of access and responsibility?

What are the most interesting, innovative, or unexpected ways that you have seen CDPs built/used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on CDPs?

When is a CDP the wrong choice?

What do you have planned for the future of ActionIQ?

Contact Info

LinkedIn
@Tasso on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

We’ve never been more aware of the word ‘hallucinate’ in a professional setting. Generative AI has taught us that we need to work in tandem with personal AI tools when we want accurate and reliable information. We’ve also seen the impacts of bias in AI systems, and why trusting outputs at face value can be a dangerous game, even for the largest tech organizations in the world. It seems we could be both very close and very far away from being able to fully trust AI in a work setting. To really find out what trustworthy AI is, and what causes us to lose trust in an AI system, we need to hear from someone who’s been at the forefront of the policy and tech around the issue. Alexandra Ebert is an expert in data privacy and responsible AI. She works on public policy issues in the emerging field of synthetic data and ethical AI. Alexandra is on the Forbes ‘30 Under 30’ list and has an upcoming course on DataCamp! In addition to her role as Chief Trust Officer at MOSTLY AI, Alexandra is the chair of the IEEE Synthetic Data IC expert group and the host of the Data Democratization podcast. In the episode, Richie and Alexandra explore the importance of trust in AI, what causes us to lose trust in AI systems and the impacts of a lack of trust, AI regulation and adoption, AI decision accuracy and fairness, privacy concerns in AI, handling sensitive data in AI systems, the benefits of synthetic data, explainability and transparency in AI, skills for using AI in a trustworthy fashion, and much more.

Links Mentioned in the Show:

MOSTLY.AI
Microsoft Research on AI Fairness
Using Synthetic Data for Machine Learning & AI in Python
[Course] AI Ethics

Summary

Data processing technologies have dramatically improved in their sophistication and raw throughput. Unfortunately, the volumes of data that are being generated continue to double, requiring further advancements in the platform capabilities to keep up. As the sophistication increases, so does the complexity, leading to challenges for user experience. Jignesh Patel has been researching these areas for several years in his work as a professor at Carnegie Mellon University. In this episode he illuminates the landscape of problems that we are faced with and how his research is aimed at helping to solve these problems.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Jignesh Patel about the research that he is conducting on technical scalability and user experience improvements around data management.

Interview

Introduction

How did you get involved in the area of data management?

Can you start by summarizing your current areas of research and the motivations behind them?

What are the open questions today in technical scalability of data engines?

What are the experimental methods that you are using to gain an understanding of the opportunities and practical limits of those systems?

As you strive to push the limits of technical capacity in data systems, how does that impact the usability of the resulting systems?

When performing research and building prototypes of the projects, what is your process for incorporating user experience into the implementation of the product?

What are the main sources of tension between technical scalability and user experience/ease of comprehension?

What are some of the positive synergies that you have been able to realize between your teaching, research, and corporate activities?

In what ways do they produce conflict, whether personally or technically?

What are the most interesting, innovative, or unexpected ways that you have seen your research used?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on research of the scalability limits of data systems?

What is your heuristic for when a given research project needs to be terminated or productionized?

What do you have planned for the future of your academic research?

Contact Info

Website
LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Summary

Working with financial data requires a high degree of rigor due to the numerous regulations and the risks involved in security breaches. In this episode Andrey Korchak, CTO of fintech startup Monite, discusses the complexities of designing and implementing a data platform in that sector.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack.

You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It’s the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it’s real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize today to get 2 weeks free!

Your host is Tobias Macey and today I'm interviewing Andrey Korchak about how to manage data in a fintech environment.

Interview

Introduction

How did you get involved in the area of data management?

Can you start by summarizing the data challenges that are particular to the fintech ecosystem?

What are the primary sources and types of data that fintech organizations are working with?

What are the business-level capabilities that are dependent on this data?

How do the regulatory and business requirements influence the technology landscape in fintech organizations?

What does a typical build vs. buy decision process look like?

Fraud prediction in banking is one of the most well-established applications of machine learning in industry. What are some of the other ways that ML plays a part in fintech?

How does that influence the architectural design/capabilities for data platforms in those organizations?

Data governance is a notoriously challenging problem. What are some of the strategies that fintech companies are able to apply to this problem given their regulatory burdens?

What are the most interesting, innovative, or unexpected approaches to data management that you have seen in the fintech sector?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on data in fintech?

What do you have planned for the future of your data capabilities at Monite?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Monite
ISO 27001
Tesseract
GitOps
SWIFT Protocol

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Sponsored By:

Starburst:

This episode is brought to you by Starburst - a data lake analytics platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, Starburst runs petabyte-scale SQL analytics fast at a fraction of the cost of traditional methods, helping you meet all your data needs ranging from AI/ML workloads to data applications to complete analytics.

Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance, allowing you to discover, transform, govern, and secure all in one place. Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free: dataengineeringpodcast.com/starburst

RudderStack:

Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack

Materialize:

You shouldn't have to throw away the database to build with fast-changing data. Keep the familiar SQL, keep the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date.

That is Materialize, the only true SQL streaming database built from the ground up to meet the needs of modern data products: Fresh, Correct, Scalable — all in a familiar SQL UI. Built on Timely Dataflow and Differential Dataflow, open source frameworks created by cofounder Frank McSherry at Microsoft Research, Materialize is trusted by data and engineering teams at Ramp, Pluralsight, Onward and more to build real-time data products without the cost, complexity, and development time of stream processing.

Go to materialize.com today and get 2 weeks free!

Support Data Engineering Podcast

Data Observability for Data Engineering

"Data Observability for Data Engineering" introduces you to the foundational concepts of observing and validating data pipeline health. With real-world projects and Python code examples, you'll gain hands-on experience in improving data quality and minimizing risks, enabling you to implement strategies that ensure accuracy and reliability in your data systems. What this Book will help me do Master data observability techniques to monitor and validate data pipelines effectively. Learn to collect and analyze meaningful metrics to gauge and improve data quality. Develop skills in Python programming specific to applying data concepts such as observable data state. Address scalability challenges using state-of-the-art observability frameworks and practices. Enhance your ability to manage and optimize data workflows ensuring seamless operation from start to end. Author(s) Authors Michele Pinto and Sammy El Khammal bring a wealth of experience in data engineering and observing scalable data systems. Pinto specializes in constructing robust analytics platforms while Khammal offers insights into integrating software observability into massive pipelines. Their collaborative writing style ensures readers find both practical advice and theoretical foundations. Who is it for? This book is geared toward data engineers, architects, and scientists who seek to confidently handle pipeline challenges. Whether you're addressing specific issues or wish to introduce proactive measures in your team, this guide meets the needs of those ready to leverage observability as a key practice.

Data Science for Web3

Discover how to navigate the world of Web3 data with 'Data Science for Web3,' an expertly crafted guide by Gabriela Castillo Areco. Through practical examples, industry insights, and real-world use cases, you will learn the skills needed to analyze blockchain data and extract actionable business insights.

What this Book will help me do

Understand blockchain transactions and data structures to build robust datasets. Leverage on-chain and off-chain data for valuable Web3 business insights. Create DeFi- and NFT-specific datasets for targeted analysis. Develop machine learning models tailored for blockchain use cases. Apply data science techniques to innovate in the Web3 ecosystem.

Author(s)

Gabriela Castillo Areco is a seasoned data scientist and an expert in blockchain analytics. With years of experience in the technology and finance sectors, Gabriela brings a practical perspective to understanding intricate data within the emerging Web3 paradigm. Her engaging approach makes technical concepts accessible and actionable.

Who is it for?

This book is ideal for data professionals such as analysts, scientists, or engineers aiming to harness the potential of blockchain analytics. It's also suitable for business professionals exploring data-driven opportunities within Web3. Whether you're a beginner or an experienced learner with some Python background, this book will meet you at your level.
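
As a taste of on-chain data access, a minimal web3.py sketch (my illustration, not from the book; the RPC endpoint URL is a placeholder for your own provider):

    from web3 import Web3

    # Connect to an Ethereum JSON-RPC endpoint (placeholder URL; use your own provider)
    w3 = Web3(Web3.HTTPProvider("https://YOUR-RPC-ENDPOINT"))

    # Pull the latest block and a few fields worth landing in a dataset
    block = w3.eth.get_block("latest")
    print(block["number"], block["timestamp"], len(block["transactions"]))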

Redis Stack for Application Modernization

In "Redis Stack for Application Modernization," you will explore how the Redis Stack extends traditional Redis capabilities, allowing you to innovate in building real-time, scalable, multi-model applications. Through practical examples and hands-on sessions, this book equips you with skills to manage, implement, and optimize data flows and database features. What this Book will help me do Learn how to use Redis Stack for handling real-time data with JSON, hash, and other document types. Discover modern techniques for performing vector similarity searches and hybrid workflows. Become proficient in integrating Redis Stack with programming languages like Java, Python, and Node.js. Gain skills to configure Redis Stack server for scalability, security, and high availability. Master RedisInsight for data visualization, analysis, and efficient database management. Author(s) Luigi Fugaro and None Ortensi are experienced software professionals with deep expertise in database systems and application architecture. They bring years of experience working with Redis and developing real-world applications. Their hands-on approach to teaching and real-world examples make this book a valuable resource for professionals in the field. Who is it for? This book is ideal for database administrators, developers, and architects looking to leverage Redis Stack for real-time multi-model applications. It requires a basic understanding of Redis and any programming language such as Python or Java. If you wish to modernize your applications and efficiently manage databases, this book is for you.

We talked about:

Atita’s background
How NLP relates to search
Atita’s experience with Lucidworks and OpenSource Connections
Atita’s experience with Qdrant and vector databases
Utilizing vector search
Major changes to search Atita has noticed throughout her career
RAG (Retrieval-Augmented Generation)
Building a chatbot out of transcripts with LLMs
Ingesting the data and evaluating the results
Keeping humans in the loop
Application of vector databases for machine learning
Collaborative filtering
Atita’s resource recommendations
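
To make the vector-database discussion concrete, a minimal qdrant-client sketch using its local in-memory mode (my illustration; the toy vectors stand in for real embeddings):

    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams

    client = QdrantClient(":memory:")  # local in-process mode, no server needed
    client.create_collection(
        collection_name="transcripts",
        vectors_config=VectorParams(size=4, distance=Distance.COSINE),
    )
    client.upsert(
        collection_name="transcripts",
        points=[PointStruct(id=1, vector=[0.1, 0.9, 0.2, 0.3], payload={"text": "vector search intro"})],
    )
    hits = client.search(collection_name="transcripts", query_vector=[0.1, 0.8, 0.2, 0.3], limit=1)
    print(hits[0].payload["text"])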

Links:

LinkedIn: https://www.linkedin.com/in/atitaarora/
Twitter: https://x.com/atitaarora
Github: https://github.com/atarora
Human-in-the-Loop Machine Learning: https://www.manning.com/books/human-in-the-loop-machine-learning
Relevant Search: https://www.manning.com/books/relevant-search
Let's learn about Vectors: https://hub.superlinked.com/
Langchain: https://python.langchain.com/docs/get_started/introduction
Qdrant blog: https://blog.qdrant.tech/
OpenSource Connections Blog: https://opensourceconnections.com/blog/

Free ML Engineering course: http://mlzoomcamp.com
Join DataTalks.Club: https://datatalks.club/slack.html
Our events: https://datatalks.club/events.html

Bayesian Optimization in Action

Bayesian optimization helps pinpoint the best configuration for your machine learning models with speed and accuracy. Put its advanced techniques into practice with this hands-on guide.

In Bayesian Optimization in Action you will learn how to:

Train Gaussian processes on both sparse and large data sets
Combine Gaussian processes with deep neural networks to make them flexible and expressive
Find the most successful strategies for hyperparameter tuning
Navigate a search space and identify high-performing regions
Apply Bayesian optimization to cost-constrained, multi-objective, and preference optimization
Implement Bayesian optimization with PyTorch, GPyTorch, and BoTorch

Bayesian Optimization in Action shows you how to optimize hyperparameter tuning, A/B testing, and other aspects of the machine learning process by applying cutting-edge Bayesian techniques. Using clear language, illustrations, and concrete examples, this book proves that Bayesian optimization doesn’t have to be difficult! You’ll get in-depth insights into how Bayesian optimization works and learn how to implement it with cutting-edge Python libraries. The book’s easy-to-reuse code samples let you hit the ground running by plugging them straight into your own projects.

About the Technology

In machine learning, optimization is about achieving the best predictions—shortest delivery routes, perfect price points, most accurate recommendations—in the fewest number of steps. Bayesian optimization uses the mathematics of probability to fine-tune ML functions, algorithms, and hyperparameters efficiently when traditional methods are too slow or expensive.

About the Book

Bayesian Optimization in Action teaches you how to create efficient machine learning processes using a Bayesian approach. In it, you’ll explore practical techniques for training large datasets, hyperparameter tuning, and navigating complex search spaces. This interesting book includes engaging illustrations and fun examples like perfecting coffee sweetness, predicting weather, and even debunking psychic claims. You’ll learn how to navigate multi-objective scenarios, account for decision costs, and tackle pairwise comparisons.

What's Inside

Gaussian processes for sparse and large datasets
Strategies for hyperparameter tuning
Identifying high-performing regions
Examples in PyTorch, GPyTorch, and BoTorch

About the Reader

For machine learning practitioners who are confident in math and statistics.

About the Author

Quan Nguyen is a research assistant at Washington University in St. Louis. He writes for the Python Software Foundation and has authored several books on Python programming.

Quotes

"Using a hands-on approach, clear diagrams, and real-world examples, Quan lifts the veil off the complexities of Bayesian optimization." - From the Foreword by Luis Serrano, author of Grokking Machine Learning

"This book teaches Bayesian optimization, starting from its most basic components. You’ll find enough depth to make you comfortable with the tools and methods and enough code to do real work very quickly." - From the Foreword by David Sweet, author of Experimentation for Engineers

"Combines modern computational frameworks with visualizations and infographics you won’t find anywhere else. It gives readers the confidence to apply Bayesian optimization to real world problems!" - Ravin Kumar, Google
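
For a feel of the PyTorch/GPyTorch/BoTorch stack the book teaches, a minimal one-step Bayesian optimization sketch on a toy 1-D objective (my illustration under recent BoTorch APIs, not an excerpt from the book):

    import torch
    from botorch.acquisition import UpperConfidenceBound
    from botorch.fit import fit_gpytorch_mll
    from botorch.models import SingleTaskGP
    from botorch.optim import optimize_acqf
    from gpytorch.mlls import ExactMarginalLogLikelihood

    # Toy objective: maximize -(x - 0.6)^2 over [0, 1]
    train_X = torch.rand(10, 1, dtype=torch.double)
    train_Y = -(train_X - 0.6) ** 2

    # Fit a Gaussian process surrogate to the observations
    gp = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))

    # Maximize an acquisition function to choose the next point to evaluate
    ucb = UpperConfidenceBound(gp, beta=2.0)
    bounds = torch.tensor([[0.0], [1.0]], dtype=torch.double)
    candidate, _ = optimize_acqf(ucb, bounds=bounds, q=1, num_restarts=5, raw_samples=32)
    print("next query point:", candidate.item())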

Python 3 and Data Visualization Using ChatGPT/GPT-4

This book is designed to show readers the concepts of Python 3 programming and the art of data visualization. It also explores cutting-edge techniques using ChatGPT/GPT-4 in harmony with Python for generating visuals that tell more compelling data stories.

Chapter 1 introduces the essentials of Python, covering a vast array of topics from basic data types, loops, and functions to more advanced constructs like dictionaries, sets, and matrices. In Chapter 2, the focus shifts to NumPy and its powerful array operations, leading into data visualization using prominent libraries such as Matplotlib. Chapter 6 covers Seaborn's rich visualization tools, offering insights into datasets like Iris and Titanic. Further, the book covers other visualization tools and techniques, including SVG graphics, D3 for dynamic visualizations, and more. Chapter 7 covers the main features of ChatGPT and GPT-4, as well as some of their competitors. Chapter 8 contains examples of using ChatGPT to perform data visualization, such as charts and graphs based on datasets (e.g., the Titanic dataset). Companion files with code, datasets, and figures are available for downloading.

From foundational Python concepts to the intricacies of data visualization, this book is ideal for Python practitioners, data scientists, and anyone in the field of data analytics looking to enhance their storytelling with data through visuals. It's also perfect for educators seeking material for teaching advanced data visualization techniques.
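
As a small example in the spirit of the Seaborn chapter, a minimal sketch plotting Seaborn's built-in Titanic dataset (my illustration, not code from the book):

    import matplotlib.pyplot as plt
    import seaborn as sns

    # Seaborn ships sample datasets, including the Titanic passenger data
    titanic = sns.load_dataset("titanic")
    sns.countplot(data=titanic, x="class", hue="survived")
    plt.title("Titanic survival by passenger class")
    plt.show()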