In this session, learn about performance optimizations for PyTorch on Google Cloud accelerators using OpenXLA. Large-scale training jobs are powerful but can be disrupted by resource failures. This talk also explores strategies for achieving greater resiliency when running PyTorch on GPUs, focusing on fault tolerance, checkpointing, and distributed training, and shows how to leverage open source tools to minimize downtime and keep your deep learning workloads running smoothly.
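The checkpointing pattern at the heart of such resiliency strategies can be sketched framework-agnostically. The sketch below is illustrative only; the file names and the toy training loop are assumptions, not a specific PyTorch API (in real PyTorch code the state would be a `model.state_dict()` saved with `torch.save`):

```python
import os
import pickle
import tempfile

def save_checkpoint(path, step, state):
    """Atomically persist training state so a restarted job can resume."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)  # atomic rename: a crash never leaves a partial file

def load_checkpoint(path):
    """Return (step, state) from the last checkpoint, or (0, None) if absent."""
    if not os.path.exists(path):
        return 0, None
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["step"], ckpt["state"]

# Resumable loop: checkpoint every 10 steps; a restarted job resumes
# from the last saved step instead of from scratch.
ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
start, state = load_checkpoint(ckpt_path)
for step in range(start, 25):
    state = {"weights": step * 0.1}  # stand-in for a real optimizer step
    if (step + 1) % 10 == 0:
        save_checkpoint(ckpt_path, step + 1, state)

resumed_step, resumed_state = load_checkpoint(ckpt_path)
```

The atomic rename is the important design choice: a failure during the write leaves the previous checkpoint intact rather than a corrupt one.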
talk-data.com · Topic: PyTorch (80 tagged)
Top Events
Day 1 focuses on building and training neural networks with PyTorch: learn to implement neural networks for image classification from scratch.
Focus on visual dataset curation with FiftyOne and iterative improvement of image classification models.
A short recap on CUDA, PyTorch, and Lightning.
Session on 2025-02-23 focusing on fine-tuning techniques for a pre-trained model, continuing the series described above.
Business runs on tabular data in databases, spreadsheets, and logs. Crunch that data using deep learning, gradient boosting, and other machine learning techniques. Machine Learning for Tabular Data teaches you to train insightful machine learning models on common tabular business data sources such as spreadsheets, databases, and logs. You’ll discover how to use XGBoost and LightGBM on tabular data, optimize deep learning libraries like TensorFlow and PyTorch for tabular data, and use cloud tools like Vertex AI to create an automated MLOps pipeline. Machine Learning for Tabular Data will teach you how to: Pick the right machine learning approach for your data Apply deep learning to tabular data Deploy tabular machine learning locally and in the cloud Build pipelines to automatically train and maintain models Machine Learning for Tabular Data covers classic machine learning techniques like gradient boosting, and more contemporary deep learning approaches. By the time you’re finished, you’ll be equipped with the skills to apply machine learning to the kinds of data you work with every day. About the Technology Machine learning can accelerate everyday business chores like account reconciliation, demand forecasting, and customer service automation—not to mention more exotic challenges like fraud detection, predictive maintenance, and personalized marketing. This book shows you how to unlock the vital information stored in spreadsheets, ledgers, databases and other tabular data sources using gradient boosting, deep learning, and generative AI. About the Book Machine Learning for Tabular Data delivers practical ML techniques to upgrade every stage of the business data analysis pipeline. In it, you’ll explore examples like using XGBoost and Keras to predict short-term rental prices, deploying a local ML model with Python and Flask, and streamlining workflows using large language models (LLMs). 
Along the way, you’ll learn to make your models both more powerful and more explainable. What's Inside Master XGBoost Apply deep learning to tabular data Deploy models locally and in the cloud Build pipelines to train and maintain models About the Reader For readers experienced with Python and the basics of machine learning. About the Authors Mark Ryan is the AI Lead of the Developer Knowledge Platform at Google. A three-time Kaggle Grandmaster, Luca Massaron is a Google Developer Expert (GDE) in machine learning and AI. He has published 17 other books.
"Deep Learning and AI Superhero" is an extensive resource for mastering the core concepts and advanced techniques in AI and deep learning using TensorFlow, Keras, and PyTorch. This comprehensive guide walks you through topics from foundational neural network concepts to implementing real-world machine learning solutions. You will gain hands-on experience and theoretical knowledge to elevate your AI development skills. What this Book will help me do Develop a solid foundation in neural networks, their structure, and their training methodologies. Understand and implement deep learning models using TensorFlow and Keras effectively. Gain experience using PyTorch for creating, training, and optimizing advanced machine learning models. Learn advanced applications such as CNNs for computer vision, RNNs for sequential data, and Transformers for natural language processing. Deploy AI models on cloud and edge platforms through practical examples and optimized workflows. Author(s) Cuantum Technologies LLC has established itself as a pioneer in creating educational resources for advanced AI technologies. Their team consists of experts and practitioners in the field, combining years of industry and academic experience. Their books are crafted to ensure readers can practically apply cutting-edge AI techniques with clarity and confidence. Who is it for? This book is ideally suited for software developers, AI enthusiasts, and data scientists who have a basic understanding of programming and machine learning concepts. It's perfect for those seeking to enhance their skills and tackle real-world AI challenges. Whether your goals are professional development, research, or personal learning, you'll find practical and detailed guidance throughout this book.
MAPIE (Model Agnostic Prediction Interval Estimator) is your go-to solution for managing uncertainties and risks in machine learning models. This Python library, nestled within scikit-learn-contrib, offers a way to calculate prediction intervals with controlled coverage rates for regression, classification, and even time series analysis. But it doesn't stop there - MAPIE can also be used to handle more complex tasks like multi-label classification and semantic segmentation in computer vision, ensuring probabilistic guarantees on crucial metrics like recall and precision. MAPIE can be integrated with any model - whether it's scikit-learn, TensorFlow, or PyTorch. Join us as we delve into the world of conformal predictions and how to quickly manage your uncertainties using MAPIE.
Link to Github: https://github.com/scikit-learn-contrib/MAPIE
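The core idea behind MAPIE, split conformal prediction, can be sketched in plain NumPy. The toy linear model and all names below are illustrative assumptions, not MAPIE's actual API (which wraps scikit-learn-style estimators):

```python
# Split conformal regression: fit on one half of the data, use the other
# half's residuals to calibrate a distribution-free prediction interval.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=300)
y = 2.0 * X + rng.normal(0.0, 0.3, size=300)

X_fit, y_fit = X[:150], y[:150]
X_cal, y_cal = X[150:], y[150:]
slope = np.sum(X_fit * y_fit) / np.sum(X_fit * X_fit)  # least squares, no intercept

alpha = 0.1  # target: ~90% coverage
resid = np.abs(y_cal - slope * X_cal)
# Conformal quantile with the finite-sample (n + 1) correction.
level = min(1.0, np.ceil((len(resid) + 1) * (1 - alpha)) / len(resid))
q = np.quantile(resid, level)

def predict_interval(x):
    """Point prediction with a distribution-free interval around it."""
    center = slope * x
    return center - q, center + q

lo, hi = predict_interval(0.5)
```

The guarantee holds regardless of the underlying model, which is exactly why MAPIE can wrap scikit-learn, TensorFlow, or PyTorch estimators interchangeably.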
Lots of AI use cases start with big ideas and exciting possibilities, but turning those ideas into real results is where the challenge lies. How do you take a powerful model and make it work effectively in a specific business context? What steps are necessary to fine-tune and optimize your AI tools to deliver both performance and cost efficiency? And as AI continues to evolve, how do you stay ahead of the curve while ensuring that your solutions are scalable and sustainable? Lin Qiao is the CEO and Co-Founder of Fireworks AI. She previously worked at Meta as a Senior Director of Engineering and head of Meta's PyTorch team, served as a Tech Lead at LinkedIn, and worked as a Researcher and Software Engineer at IBM. In the episode, Richie and Lin explore generative AI use cases, getting AI into products, foundational models, the effort required and benefits of fine-tuning models, trade-offs between model sizes, use cases for smaller models, cost-effective AI deployment, the infrastructure and team required for AI product development, metrics for AI success, open vs closed-source models, excitement for the future of AI development, and much more. Links Mentioned in the Show: Fireworks.ai; Hugging Face - Preference Tuning LLMs with Direct Preference Optimization Methods; Connect with Lin; Course - Artificial Intelligence (AI) Strategy; Related Episode: Creating Custom LLMs with Vincent Granville, Founder, CEO & Chief AI Scientist at GenAItechLab.com; Rewatch sessions from RADAR: AI Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
In this session, we will explore the architecture of Diffusers models and discuss components such as VAE and UNet. An example will be presented of how to combine text-to-image and image-to-image into one data pipeline with the Cloudera Data Platform (CDP). Specific emphasis will be placed on using ControlNet, PyTorch, and metadata persistence within CDP for editing images.
Shine a spotlight into the deep learning “black box”. This comprehensive and detailed guide reveals the mathematical and architectural concepts behind deep learning models, so you can customize, maintain, and explain them more effectively. Inside Math and Architectures of Deep Learning you will find: Math, theory, and programming principles side by side Linear algebra, vector calculus and multivariate statistics for deep learning The structure of neural networks Implementing deep learning architectures with Python and PyTorch Troubleshooting underperforming models Working code samples in downloadable Jupyter notebooks The mathematical paradigms behind deep learning models typically begin as hard-to-read academic papers that leave engineers in the dark about how those models actually function. Math and Architectures of Deep Learning bridges the gap between theory and practice, laying out the math of deep learning side by side with practical implementations in Python and PyTorch. Written by deep learning expert Krishnendu Chaudhury, you’ll peer inside the “black box” to understand how your code is working, and learn to comprehend cutting-edge research you can turn into practical applications. About the Technology Discover what’s going on inside the black box! To work with deep learning you’ll have to choose the right model, train it, preprocess your data, evaluate performance and accuracy, and deal with uncertainty and variability in the outputs of a deployed solution. This book takes you systematically through the core mathematical concepts you’ll need as a working data scientist: vector calculus, linear algebra, and Bayesian inference, all from a deep learning perspective. About the Book Math and Architectures of Deep Learning teaches the math, theory, and programming principles of deep learning models laid out side by side, and then puts them into practice with well-annotated Python code. 
You’ll progress from algebra, calculus, and statistics all the way to state-of-the-art DL architectures taken from the latest research. What's Inside The core design principles of neural networks Implementing deep learning with Python and PyTorch Regularizing and optimizing underperforming models About the Reader Readers need to know Python and the basics of algebra and calculus. About the Author Krishnendu Chaudhury is co-founder and CTO of the AI startup Drishti Technologies. He previously spent a decade each at Google and Adobe. Quotes Machine learning uses a cocktail of linear algebra, vector calculus, statistical analysis, and topology to represent, visualize, and manipulate points in high dimensional spaces. This book builds that foundation in an intuitive way–along with the PyTorch code you need to be a successful deep learning practitioner. - Vineet Gupta, Google Research A thorough explanation of the mathematics behind deep learning! - Grigory Sapunov, Intento Deep learning in its full glory, with all its mathematical details. This is the book! - Atul Saurav, Genworth Financial
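As a flavor of the "math next to code" approach such a book takes, here is a two-layer network trained by hand-derived backpropagation. This is an illustrative sketch in plain NumPy rather than PyTorch, so every chain-rule step is visible; the data and hyperparameters are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like labels

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(500):
    # Forward: h = tanh(X W1 + b1), p = sigmoid(h W2 + b2)
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    p_c = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0) in the loss
    losses.append(-np.mean(y * np.log(p_c) + (1 - y) * np.log(1 - p_c)))
    # Backward: chain rule applied layer by layer.
    dlogits = (p - y) / len(X)             # dL/d(pre-sigmoid logits)
    dW2 = h.T @ dlogits; db2 = dlogits.sum(0, keepdims=True)
    dh = dlogits @ W2.T
    dz1 = dh * (1 - h**2)                  # tanh derivative
    dW1 = X.T @ dz1; db1 = dz1.sum(0, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
```

In PyTorch the backward half would be replaced by `loss.backward()`; writing it out once is what demystifies the autograd "black box".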
In this presentation, we delve into the cutting-edge realm of large-scale AI training and inference, focusing on the open models and their deployment on Google Cloud Accelerators. Open models such as the Llama family of LLMs and Gemma are state-of-the-art language models that demand robust computational resources and efficient strategies for training and inference at scale. This session aims to provide a comprehensive guide on harnessing the power of PyTorch on Google Cloud Accelerators, specifically designed to meet the high-performance requirements of such models.
Click the blue “Learn more” button above to tap into special offers designed to help you implement what you are learning at Google Cloud Next 25.
Large Language Models like the GPT, Gemini, Gemma, and Llama series are rapidly transforming the world in general and the field of data science in particular. This talk introduces deep-learning transformer architectures, including LLMs. Critically, it also demonstrates the breadth of capabilities state-of-the-art LLMs can deliver, including dramatically revolutionizing the development of machine learning models and commercially successful AI products. This talk provides an overview of the full lifecycle of LLM development, from training to production deployment, with an emphasis on leveraging open-source Python libraries like Hugging Face Transformers and PyTorch Lightning.
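The computational core of the transformer architectures such talks introduce is scaled dot-product attention. A minimal single-head sketch in NumPy (the shapes and weight matrices are illustrative, not taken from any particular library):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """softmax(Q K^T / sqrt(d_k)) V for one head, no masking."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax (subtract the max for numerical stability).
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8
X = rng.normal(size=(seq_len, d_model))   # one token embedding per row
Wq = rng.normal(size=(d_model, d_k))
Wk = rng.normal(size=(d_model, d_k))
Wv = rng.normal(size=(d_model, d_k))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each row of `attn` is a probability distribution over the sequence, so every output token is a learned weighted mixture of all token values.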
More generative AI models are built on PyTorch than on any other framework. We partner with Lightricks to share how PyTorch/XLA offers a performant, automatic compiler experience with all the ease-of-use and ecosystem benefits of PyTorch. Learn from Hugging Face as they share more about the latest features that improve PyTorch/XLA performance and usability on GPUs and TPUs.
Deploying AI to production can be bafflingly complex. Learn how Google Cloud is bringing its over two decades of expertise in productionizing planet-scale AI to our cloud customers with the AI Hypercomputer architecture. It’s a groundbreaking supercomputing architecture built on performance-optimized hardware (TPUs, GPUs), open software (PyTorch, JAX, Kubernetes), and tailored consumption models that optimize efficiency and productivity across AI training, tuning, and serving. Plus, gain valuable insights from our customers Kakao Brain and Nuro on their journey to deploying large-scale AI on Google Cloud.
Bayesian optimization helps pinpoint the best configuration for your machine learning models with speed and accuracy. Put its advanced techniques into practice with this hands-on guide. In Bayesian Optimization in Action you will learn how to: Train Gaussian processes on both sparse and large data sets Combine Gaussian processes with deep neural networks to make them flexible and expressive Find the most successful strategies for hyperparameter tuning Navigate a search space and identify high-performing regions Apply Bayesian optimization to cost-constrained, multi-objective, and preference optimization Implement Bayesian optimization with PyTorch, GPyTorch, and BoTorch Bayesian Optimization in Action shows you how to optimize hyperparameter tuning, A/B testing, and other aspects of the machine learning process by applying cutting-edge Bayesian techniques. Using clear language, illustrations, and concrete examples, this book proves that Bayesian optimization doesn’t have to be difficult! You’ll get in-depth insights into how Bayesian optimization works and learn how to implement it with cutting-edge Python libraries. The book’s easy-to-reuse code samples let you hit the ground running by plugging them straight into your own projects. About the Technology In machine learning, optimization is about achieving the best predictions—shortest delivery routes, perfect price points, most accurate recommendations—in the fewest number of steps. Bayesian optimization uses the mathematics of probability to fine-tune ML functions, algorithms, and hyperparameters efficiently when traditional methods are too slow or expensive. About the Book Bayesian Optimization in Action teaches you how to create efficient machine learning processes using a Bayesian approach. In it, you’ll explore practical techniques for training large datasets, hyperparameter tuning, and navigating complex search spaces. 
This interesting book includes engaging illustrations and fun examples like perfecting coffee sweetness, predicting weather, and even debunking psychic claims. You’ll learn how to navigate multi-objective scenarios, account for decision costs, and tackle pairwise comparisons. What's Inside Gaussian processes for sparse and large datasets Strategies for hyperparameter tuning Identify high-performing regions Examples in PyTorch, GPyTorch, and BoTorch About the Reader For machine learning practitioners who are confident in math and statistics. About the Author Quan Nguyen is a research assistant at Washington University in St. Louis. He writes for the Python Software Foundation and has authored several books on Python programming. Quotes Using a hands-on approach, clear diagrams, and real-world examples, Quan lifts the veil off the complexities of Bayesian optimization. - From the Foreword by Luis Serrano, Author of Grokking Machine Learning This book teaches Bayesian optimization, starting from its most basic components. You’ll find enough depth to make you comfortable with the tools and methods and enough code to do real work very quickly. - From the Foreword by David Sweet, Author of Experimentation for Engineers Combines modern computational frameworks with visualizations and infographics you won’t find anywhere else. It gives readers the confidence to apply Bayesian optimization to real world problems! - Ravin Kumar, Google
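The engine inside Bayesian optimization is the Gaussian-process posterior, which libraries like GPyTorch and BoTorch compute at scale. A minimal RBF-kernel version in NumPy, offered only as a sketch of the underlying math (the kernel, length scale, and test points are illustrative):

```python
import numpy as np

def rbf(A, B, length_scale=0.5):
    """Squared-exponential kernel matrix between 1-D point sets A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at X_test."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf(X_train, X_test)
    K_ss = rbf(X_test, X_test)
    solve = np.linalg.solve(K, K_s)
    mean = solve.T @ y_train
    var = np.diag(K_ss - K_s.T @ solve)
    return mean, var

# Observe f(x) = sin(3x) at three points. The posterior is confident near
# observations and uncertain far from them, which is exactly the signal
# acquisition functions (e.g. expected improvement) exploit when choosing
# the next point to evaluate.
X_train = np.array([0.0, 0.5, 1.0])
y_train = np.sin(3 * X_train)
X_test = np.array([0.5, 2.0])
mean, var = gp_posterior(X_train, y_train, X_test)
```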
We are embarking on the creation of a specialized programming assistant, meticulously fine-tuned for libraries and frameworks such as PyTorch, TensorFlow, Dart, or FastAI. This intelligent assistant, accessible through a chat-like interface, is designed to offer tailored guidance, provide access to the latest documentation, and suggest learning resources to users. It will comprehend contextual queries, ensuring deep library expertise, and offer direct links to official documentation, facilitating efficient problem-solving and learning. With continuous updates, personalization options, and a commitment to privacy, this coding assistant aims to significantly enhance the development experience for programmers and serve as an invaluable resource in the ever-evolving landscape of software development.