talk-data.com

Topic

Python

Tags: programming_language, data_science, web_development

1446 activities tagged

Activity Trend

185 activities at quarterly peak (2020-Q1 to 2026-Q1)

Activities

1446 activities · Newest first

Summary

Modern businesses aspire to be data driven, and technologists enjoy working through the challenge of building data systems to support that goal. Data governance is the binding force between these two parts of the organization. Nicola Askham found her way into data governance by accident, and stayed because of the benefit she was able to provide by serving as a bridge between technology and the business. In this episode she shares the practical steps to implementing a data governance practice in your organization, and the pitfalls to avoid.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data lakes are notoriously complex. For data engineers who battle to build and scale high-quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. It is trusted by teams of all sizes, including Comcast and DoorDash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.

Your host is Tobias Macey and today I'm interviewing Nicola Askham about the practical steps of building out a data governance practice in your organization.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of the scope and boundaries of data governance in an organization?

At what point does a lack of an explicit governance policy become a liability?

What are some of the misconceptions that you encounter about data governance?
What impact has the evolution of data technologies had on the implementation of governance practices? (e.g. number/scale of systems, types of data, AI)
Data governance can often become an exercise in boiling the ocean. What are the concrete first steps that will increase the success rate of a governance practice?

Once a data governance project is underway, what are some of the common roadblocks that might derail progress?

What are the net benefits to the data team and the organization when a data governance practice is established, active, and healthy?
What are the most interesting, innovative, or unexpected ways that you have seen data governance applied?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on data governance/training/coaching?
What are some of the pitfalls in data governance?
What are some of the future trends in data governance that you are excited by?

Are there any trends that concern you?

Contact Info

Website LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used.

Data Engineering with Databricks Cookbook

In "Data Engineering with Databricks Cookbook," you'll learn how to efficiently build and manage data pipelines using Apache Spark, Delta Lake, and Databricks. This recipe-based guide offers techniques to transform, optimize, and orchestrate your data workflows.

What this book will help me do:

Master Apache Spark for data ingestion, transformation, and analysis.
Learn to optimize data processing and improve query performance with Delta Lake.
Manage streaming data processing with Spark Structured Streaming capabilities.
Implement DataOps and DevOps workflows tailored for Databricks.
Enforce data governance policies using Unity Catalog for scalable solutions.

Author(s)

Pulkit Chadha, the author of this book, is a Senior Solutions Architect at Databricks. With extensive experience in data engineering and big data applications, he brings practical insights into implementing modern data solutions. His educational writings focus on empowering data professionals with actionable knowledge.

Who is it for?

This book is ideal for data engineers, data scientists, and analysts who want to deepen their knowledge in managing and transforming large datasets. Readers should have an intermediate understanding of SQL, Python programming, and basic data architecture concepts. It is especially well-suited for professionals working with Databricks or similar cloud-based data platforms.

The Ultimate Guide to Snowpark

The Ultimate Guide to Snowpark serves as a comprehensive resource to help you master the Snowflake Snowpark framework using Python. You'll learn how to manage data engineering, data science, and data applications in Snowpark, coupled with practical implementations and examples. By following this guide, you'll gain the skills needed to efficiently process and analyze data in the Snowflake Data Cloud.

What this book will help me do:

Master Snowpark with Python for data engineering, data science, and data application workloads.
Develop and deploy robust data pipelines using Snowpark in Python.
Design, implement, and produce machine learning models using Snowpark.
Learn to monetize and operationalize Snowflake-native applications.
Effectively adopt Snowpark in production for scalable, efficient data solutions.

Author(s)

Shankar Narayanan SGS and Vivekanandan SS are experienced professionals in data engineering and Snowflake technologies. Shankar has extensive experience in utilizing Snowflake Snowpark to manage and enhance data solutions. Vivekanandan brings expertise in the intersection of Python programming and cloud-based data processing. Together, their combined knowledge and approachable writing style make this book an invaluable resource to readers.

Who is it for?

This book is designed for data engineers, data scientists, developers, and seasoned data practitioners. Ideal candidates are those looking to expand their skills in implementing Snowpark solutions using Python. A prior understanding of SQL, Python programming, and familiarity with Snowflake is beneficial for readers to fully leverage the techniques presented.

Hands-on workshop to learn how to leverage the FiftyOne computer vision toolset. The session covers FiftyOne basics, useful workflows to explore, understand, and curate data, and a hands-on introduction to loading datasets, navigating the FiftyOne App, inspecting attributes, adding samples and custom attributes, generating model predictions, and saving insightful views.

Hands-on workshop to learn how to leverage the open source FiftyOne computer vision toolset. Part 1 covers FiftyOne basics (terms, architecture, installation, and general usage), an overview of useful workflows to explore, understand, and curate data, and how FiftyOne represents and semantically slices unstructured computer vision data. Part 2 is a hands-on introduction to FiftyOne: load datasets from the FiftyOne Dataset Zoo, navigate the FiftyOne App, programmatically inspect attributes of a dataset, add new sample and custom attributes to a dataset, generate and evaluate model predictions, and save insightful views into the data.

Visualize This, 2nd Edition

One of the most influential data visualization books—updated with new techniques, technologies, and examples Visualize This demonstrates how to explain data visually, so that you can present and communicate information in a way that is appealing and easy to understand. Today, there is a continuous flow of data available to answer almost any question. Thoughtful charts, maps, and analysis can help us make sense of this data. But the data does not speak for itself. As leading data expert Nathan Yau explains in this book, graphics provide little value unless they are built upon a firm understanding of the data behind them. Visualize This teaches you a data-first approach from a practical point of view. You'll start by exploring what your data has to say, and then you'll design visualizations that are both remarkable and meaningful. With this book, you'll discover what tools are available to you without becoming overwhelmed with options. You'll be exposed to a variety of software and code and jump right into real-world datasets so that you can learn visualization by doing. You'll learn to ask and answer questions with data, so that you can make charts that are both beautiful and useful. Visualize This also provides you with opportunities to apply what you learn to your own data. 
This completely updated, full-color second edition:

Presents a unique approach to visualizing and telling stories with data, from data visualization expert Nathan Yau
Offers step-by-step tutorials and practical design tips for creating statistical graphics, geographical maps, and information design
Details tools that can be used to visualize data graphics for reports, presentations, and stories, for the web or for print, with major updates for the latest R packages, Python libraries, JavaScript libraries, illustration software, and point-and-click applications
Contains numerous examples and descriptions of patterns and outliers and explains how to show them

Information designers, analysts, journalists, statisticians, data scientists, as well as anyone studying for careers in these fields, will gain a valuable background in the concepts and techniques of data visualization, thanks to this legendary book.

Hands-on workshop on building a search engine from scratch, focusing on text search and vector search. Topics include in-memory text search, tokenization and preprocessing, inverted index construction, embeddings, converting text to vectors, cosine similarity, and strategies to combine text and vector search. The session includes practical coding in a Jupyter Notebook using Python to implement both text and vector search approaches.
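The two retrieval styles the workshop combines can be sketched in plain Python. The corpus and scoring below are illustrative stand-ins, not the workshop's actual notebook code:

```python
import math
from collections import Counter, defaultdict

docs = {
    1: "python text search with an inverted index",
    2: "vector search uses embeddings and cosine similarity",
    3: "combining text and vector search in python",
}

def tokenize(text):
    # lowercase and split on whitespace; real preprocessing would also strip punctuation
    return text.lower().split()

# build the inverted index: term -> set of doc ids containing it
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in tokenize(text):
        index[term].add(doc_id)

def text_search(query):
    # boolean AND: return ids of docs containing every query term
    sets = [index.get(t, set()) for t in tokenize(query)]
    return set.intersection(*sets) if sets else set()

def cosine(a, b):
    # cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def vector_search(query, k=2):
    # rank all docs by cosine similarity to the query's bag-of-words vector;
    # a real system would use learned embeddings instead of raw term counts
    q = Counter(tokenize(query))
    scored = [(cosine(q, Counter(tokenize(t))), i) for i, t in docs.items()]
    return [i for _, i in sorted(scored, reverse=True)[:k]]

print(text_search("vector search"))            # docs containing both terms
print(vector_search("text search in python"))  # top-k by similarity
```

A hybrid strategy, as covered in the session, would merge the two result lists, for example by re-ranking the boolean matches with the cosine scores.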

Fundamentals of infrastructure as code through guided exercises. Introduction to Pulumi and using programming languages (e.g., Python) to provision modern cloud infrastructure on AWS. Learn the Pulumi programming model and how to provision, update, and destroy AWS resources.

Predictive Analytics for the Modern Enterprise

The surging predictive analytics market is expected to grow from $10.5 billion today to $28 billion by 2026. With the rise in automation across industries, the increase in data-driven decision-making, and the proliferation of IoT devices, predictive analytics has become an operational necessity in today's forward-thinking companies. If you're a data professional, you need to be aligned with your company's business activities more than ever before. This practical book provides the background, tools, and best practices necessary to help you design, implement, and operationalize predictive analytics on-premises or in the cloud.

Explore ways that predictive analytics can provide direct input back to your business
Understand mathematical tools commonly used in predictive analytics
Learn the development frameworks used in predictive analytics applications
Appreciate the role of predictive analytics in the machine learning process
Examine industry implementations of predictive analytics
Build, train, and retrain predictive models using Python and TensorFlow

Send us a text Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that flow like your morning coffee, where industry insights meet laid-back banter. Whether you're a data aficionado or just curious about the digital age, pull up a chair and let's explore the heart of data, unplugged style!

Stack Overflow and OpenAI Deal Controversy: Discussing the partnership controversy, with users protesting the lack of an opt-out option and how this could reshape the platform. Look into Phind here.
Apple and OpenAI Rumors: Could ChatGPT be the new Siri? Examining rumors of ChatGPT potentially replacing Siri, and Apple's AI strategy compared to Microsoft's MAI-1. Check out more community opinions here.
Hello GPT-4o: Exploring the new era with OpenAI's GPT-4o that blends video, text, and audio for more dynamic human-AI interactions. Discussing AI's challenges under the European AI Act and ChatGPT's use in daily life and dating apps like Bumble.
Claude Takes Europe: Claude 3 is now available in the EU. How does it compare to ChatGPT in coding and conversation?
ElevenLabs' Music Generation AI: A look at ElevenLabs' AI for generating music and the broader AI music landscape. How are these algorithms transforming music creation? Check out the AI Song Contest here.
Google Cloud's Big Oops with UniSuper: Unpacking the shocking story of how Google Cloud accidentally wiped out UniSuper's account. What does this mean for data security and redundancy strategies?
The Great CLI Debate: Is Python really the right choice for CLI tools? We spark the debate over Python vs. Go and Rust in building efficient CLI tools.

Passing metadata such as sample_weight and groups through a scikit-learn cross_validate, GridSearchCV, or a Pipeline to the right estimators, scorers, and CV splitters has been either cumbersome, hacky, or impossible. The new metadata routing mechanism in scikit-learn enables you to pass metadata through these objects. As a use-case, we study how you can implement a revenue sensitive scoring while doing a hyperparameter search within a GridSearchCV object.

In this talk, V will give an overview of what scope is, why it's important (and actually quite cool), and go over the order in which Python looks up variable names. I'll show you a couple of examples, including a code snippet which we'll play around with together (interactively and anonymously), all while using the multi-layered image of a nesting doll (also known as a Russian or Matryoshka doll), which has been a helpful companion on my way to understanding this topic.
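The lookup order the talk covers (Local, Enclosing, Global, Built-in, often abbreviated LEGB) can be seen in a small snippet like this one, which is my own illustration rather than the speaker's:

```python
x = "global"          # outermost layer of the nesting doll: module scope

def outer():
    x = "enclosing"   # middle layer: wraps inner()

    def inner():
        x = "local"   # innermost layer wins the lookup
        return x

    return inner(), x

# Python checks Local, then Enclosing, then Global, then Built-ins
print(outer())        # ('local', 'enclosing')
print(x)              # 'global' (the module-level x is untouched)

def shadow_free():
    # no local or enclosing x here, so lookup falls through to the global one
    return x

print(shadow_free())  # 'global'

def builtin_layer():
    # `len` is not defined in any of our scopes, so it comes from builtins
    return len("doll")

print(builtin_layer())  # 4
```

The `global` and `nonlocal` statements let a function rebind names in the outer layers instead of creating a new local one.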

Summary

Building a data platform is a substantial engineering endeavor. Once it is running, the next challenge is figuring out how to address release management for all of the different component parts. The services and systems need to be kept up to date, but so does the code that controls their behavior. In this episode your host Tobias Macey reflects on his current challenges in this area and some of the factors that contribute to the complexity of the problem.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.

Data lakes are notoriously complex. For data engineers who battle to build and scale high-quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. It is trusted by teams of all sizes, including Comcast and DoorDash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I want to talk about my experiences managing the QA and release management process of my data platform.

Interview

Introduction

As a team, our overall goal is to ensure that the production environment for our data platform is highly stable and reliable. This is the foundational element of establishing and maintaining trust with the consumers of our data. In order to support this effort, we need to ensure that only changes that have been tested and verified are promoted to production. Our current challenge is one that plagues all data teams. We want to have an environment that mirrors our production environment that is available for testing, but it's not feasible to maintain a complete duplicate of all of the production data. Compounding that challenge is the fact that each of the components of our data platform interact with data in slightly different ways and need different processes for ensuring that changes are being promoted safely.

Contact Info

LinkedIn Website

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

Data Platforms and Leaky Abstractions Episode
Building A Data Platform From Scratch
Airbyte

Podcast Episode

Trino
dbt
Starburst Galaxy
Superset
Dagster
LakeFS

Podcast Episode

Nessie

Podcast Episode

Iceberg
Snowflake
LocalStack
DSL == Domain Specific Language

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Send us a text Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style! In this episode, we're joined by special guest Maryam Ilyas as we delve into a variety of topics that shape our digital world:

Women's Healthcare Insights: Exploring the Oura ring's commitment during Women's Health Awareness Month and its role in addressing the underrepresentation of female health conditions in research.
A Deep Dive into the EU AI Act: Examining the AI Act's implications, including its classification of AI systems (prohibited, high-risk, limited-risk, and minimal-risk), ethical concerns, regulatory challenges, and the act's impact on AI usage, particularly regarding mass surveillance at the Paris Olympics.
The Evolution of Music and AI: Reviewing the AI-generated music video for "The Hardest Part" by Washed Out, directed by Paul Trillo, showcasing AI's growing role in the arts.
Hot Takes on Data Tools: Is combining SQL, PySpark (and Python) in Databricks the most powerful tool in the data space? Let's dissect the possibilities and limitations.

Don't forget to check us out on YouTube too, where you can find a lot more content beyond the podcast!

Workshop led by Alexey Grigorev on building a chatbot using large language models with Python. Topics include data extraction from FAQs, knowledge base indexing, chatbot setup in a Jupyter Notebook, interfacing with LLMs, and implementing Retrieval-Augmented Generation (RAG).
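The retrieval half of that pipeline can be sketched in plain Python. The FAQ entries and the overlap-based scoring here are made up for illustration, and the final LLM call is deliberately left as a stub:

```python
def tokenize(text):
    return text.lower().split()

# a toy "knowledge base" of FAQ entries (illustrative, not the workshop's data)
faq = [
    "How do I install the course tools? Use pip to install the requirements.",
    "When does the course start? The course starts in September.",
    "Can I submit homework late? Late homework is not graded.",
]

def retrieve(question, k=2):
    # score each document by how many query terms it shares with the question;
    # a production system would use a proper index or vector embeddings
    q_terms = set(tokenize(question))
    scored = sorted(faq, key=lambda d: len(q_terms & set(tokenize(d))), reverse=True)
    return scored[:k]

def build_prompt(question):
    # Retrieval-Augmented Generation: stuff the retrieved context into the prompt
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("When does the course start?")
# a real chatbot would now send `prompt` to an LLM API; that call is omitted here
print(prompt)
```

Swapping the keyword-overlap scorer for an embedding index and wiring the prompt to an LLM client yields the full RAG loop the workshop builds.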

Summary

Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and DoorDash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human"

Interview

Introduction
How did you get involved in machine learning?
Can you start by unpacking the idea of "human-like" AI? How does that contrast with the conception of "AGI"?
The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models?
What are the opportunities and limitations of causal modeling techniques for generalized AI models?
As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
What are the practical/architectural methods necessary to build more cognitive AI systems?
How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?
What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems?
When is cognitive AI the wrong choice?
What do you have planned for the future of cognitive AI applications at Aigo?

Contact Info

LinkedIn
Website

Parting Question

From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

Aigo.ai
Artificial General Intelligence
Cognitive AI
Knowledge Graph
Causal Modeling
Bayesian Statistics
Thinking Fast & Slow by Daniel Kahneman (affiliate link)
Agent-Based Modeling
Reinforcement Learning
DARPA 3 Waves of AI presentation
Why Don't We Have AGI Yet? whitepaper
Concepts Is All You Need whitepaper
Helen Keller
Stephen Hawking

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Math and Architectures of Deep Learning

Shine a spotlight into the deep learning "black box". This comprehensive and detailed guide reveals the mathematical and architectural concepts behind deep learning models, so you can customize, maintain, and explain them more effectively.

Inside Math and Architectures of Deep Learning you will find:

Math, theory, and programming principles side by side
Linear algebra, vector calculus and multivariate statistics for deep learning
The structure of neural networks
Implementing deep learning architectures with Python and PyTorch
Troubleshooting underperforming models
Working code samples in downloadable Jupyter notebooks

The mathematical paradigms behind deep learning models typically begin as hard-to-read academic papers that leave engineers in the dark about how those models actually function. Math and Architectures of Deep Learning bridges the gap between theory and practice, laying out the math of deep learning side by side with practical implementations in Python and PyTorch. Written by deep learning expert Krishnendu Chaudhury, you'll peer inside the "black box" to understand how your code is working, and learn to comprehend cutting-edge research you can turn into practical applications.

About the Technology

Discover what's going on inside the black box! To work with deep learning you'll have to choose the right model, train it, preprocess your data, evaluate performance and accuracy, and deal with uncertainty and variability in the outputs of a deployed solution. This book takes you systematically through the core mathematical concepts you'll need as a working data scientist: vector calculus, linear algebra, and Bayesian inference, all from a deep learning perspective.

About the Book

Math and Architectures of Deep Learning teaches the math, theory, and programming principles of deep learning models laid out side by side, and then puts them into practice with well-annotated Python code. You'll progress from algebra, calculus, and statistics all the way to state-of-the-art DL architectures taken from the latest research.

What's Inside

The core design principles of neural networks
Implementing deep learning with Python and PyTorch
Regularizing and optimizing underperforming models

About the Reader

Readers need to know Python and the basics of algebra and calculus.

About the Author

Krishnendu Chaudhury is co-founder and CTO of the AI startup Drishti Technologies. He previously spent a decade each at Google and Adobe.

Quotes

"Machine learning uses a cocktail of linear algebra, vector calculus, statistical analysis, and topology to represent, visualize, and manipulate points in high dimensional spaces. This book builds that foundation in an intuitive way, along with the PyTorch code you need to be a successful deep learning practitioner." - Vineet Gupta, Google Research

"A thorough explanation of the mathematics behind deep learning!" - Grigory Sapunov, Intento

"Deep learning in its full glory, with all its mathematical details. This is the book!" - Atul Saurav, Genworth Financial

Data Hackers News is on the air!! The hottest topics of the week, with the top news from the world of Data, AI, and Technology, which you can also find in our weekly newsletter, now on the Data Hackers podcast!!

Press play and listen to this week's Data Hackers News now!

To stay on top of everything happening in the data world, subscribe to the weekly newsletter:

https://www.datahackers.news/

Download the full State of Data Brazil report and the survey highlights:

https://stateofdata.datahackers.com.br/

Meet our Data Hackers News commentators:

Monique Femme
Paulo Vasconcellos

Other Data Hackers channels:

Site
LinkedIn
Instagram
TikTok
YouTube

Stories/topics discussed:

Data Hackers + Thoughtworks webinar (May 8, 7 p.m.): GenAI to drive results in the financial market
Ray-Ban Meta glasses now have multimodal AI
Google lays off its entire Python team

And while you're here, follow us on Spotify, Apple Podcasts, or your favorite podcast player!