talk-data.com

Topic

Python

Tags: programming_language, data_science, web_development

1446 tagged activities

Activity Trend

Peak of 185 activities per quarter (2020-Q1 to 2026-Q1)

Activities

1446 activities · Newest first

NetworkX is arguably the most popular graph analytics library available today, but one of its greatest strengths - the pure-Python implementation - is also possibly its biggest weakness. Whether you're a seasoned data scientist or a new student of the fascinating field of graph analytics, you're probably familiar with NetworkX and interested in how to make this extremely easy-to-use library powerful enough to handle realistically large graph workflows that often exceed the limitations of its pure-Python implementation.

This talk will describe a relatively new capability of NetworkX: support for accelerated backends, and how backends benefit NetworkX users by letting the library finally be both easy to use and fast. Through the use of backends, NetworkX can also be incorporated into workflows that take advantage of similar accelerators, such as accelerated pandas (cudf.pandas), finally making these easy-to-use solutions scale to larger problems.

Attend this talk to learn how you can leverage the various backends available to NetworkX today to seamlessly run graph analytics on GPUs, use GraphBLAS implementations, and more, all without leaving the comfort and convenience of the most popular graph analytics library available.
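As a concrete (and hedged) sketch of what backend dispatching looks like, assuming the nx-cugraph backend package is installed alongside a recent NetworkX release that supports the backend keyword:

    import networkx as nx

    G = nx.karate_club_graph()

    # Default: runs on the pure-Python implementation
    bc_cpu = nx.betweenness_centrality(G)

    # Same call, dispatched to a GPU-accelerated backend (if installed)
    bc_gpu = nx.betweenness_centrality(G, backend="cugraph")

The application code stays identical; only the backend argument (or an equivalent configuration setting) changes where the work runs.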

One of the more mundane tasks in the business analytics world is to measure KPIs: averages, sums, ratios, etc. Typically, these are measured period over period, to see how they trend. If you're a data analyst, you've likely been asked to debug/explain a metric, because a stakeholder wants to understand why a number has changed.

This topic isn't well grounded in theory, and the answers we come up with can be lacklustre. In this talk, we discuss solutions to this very common problem. We will look at a methodology we have developed at Carbonfact, and the open-source Python tool we are sharing.
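To make the idea concrete, here is a minimal, illustrative sketch (not necessarily the methodology or tool presented in the talk) that decomposes a period-over-period change in a weighted-average KPI into per-segment "rate" effects (the segment's value moved) and "mix" effects (the segment's share of volume moved):

    import pandas as pd

    before = pd.DataFrame({"segment": ["A", "B"], "weight": [100, 300], "value": [2.0, 4.0]})
    after = pd.DataFrame({"segment": ["A", "B"], "weight": [200, 250], "value": [2.5, 4.0]})

    def kpi(df):
        # Weighted-average KPI for one period
        return (df["weight"] * df["value"]).sum() / df["weight"].sum()

    m = before.merge(after, on="segment", suffixes=("_0", "_1"))
    share_0 = m["weight_0"] / m["weight_0"].sum()
    share_1 = m["weight_1"] / m["weight_1"].sum()

    rate_effect = share_1 * (m["value_1"] - m["value_0"])
    mix_effect = (share_1 - share_0) * (m["value_0"] - kpi(before))
    m["contribution"] = rate_effect + mix_effect

    # Per-segment contributions sum exactly to the total KPI change
    assert abs(m["contribution"].sum() - (kpi(after) - kpi(before))) < 1e-9

Each segment's contribution sums to the total change, which turns "the number moved" into a stakeholder-friendly explanation of where and why.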

Did you know that all top PyPI packages declare their third-party dependencies? In contrast, only about 53% of scientific projects do the same. The question arises: how can we reproduce Python-based scientific experiments if we're unaware of the libraries necessary for our environment? In this talk, we delve into the Python packaging ecosystem and employ a data-driven approach to analyze the structure and reproducibility of packages. We compare two distinct groups of Python packages: the most popular ones on PyPI, which we anticipate to adhere more closely to best practices, and a selection from biomedical experiments. Through our analysis, we uncover common development patterns in Python projects and use our open-source library, FawltyDeps, to identify undeclared dependencies and assess the reproducibility of these projects. This discussion is especially valuable for enthusiasts of clean Python code, as well as for data scientists and engineers eager to adopt best practices and enhance reproducibility. Attendees will depart with actionable insights on improving the transparency and reliability of their Python projects, thereby advancing the cause of reproducible scientific research.
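For reference, "declaring dependencies" means listing third-party imports in the project's metadata. A minimal PEP 621 pyproject.toml sketch (the project and package names are placeholders):

    [project]
    name = "my-experiment"
    version = "0.1.0"
    dependencies = [
        "numpy>=1.24",
        "pandas>=2.0",
    ]

FawltyDeps works by comparing the imports actually used in the code against declarations like these, flagging undeclared dependencies (imported but never declared) and unused ones (declared but never imported).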

Polars is a dataframe library taking the world by storm. It is highly efficient in both runtime and memory use and comes with a clean and expressive API. Sometimes, however, the built-in API isn't enough. And that's where its killer feature comes in: plugins. You can extend Polars and solve practically any problem.

No prior Rust experience is required, though intermediate Python and general programming experience is assumed. By the end of the talk, you will know how to write your own Polars plugin! This talk is aimed at data practitioners.
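As a hedged preview of the Python side of a plugin, assuming a Rust extension compiled into the package with maturin/pyo3-polars that exports an elementwise string function (the pig_latinnify name follows the common tutorial example):

    from pathlib import Path

    import polars as pl
    from polars.plugins import register_plugin_function

    def pig_latinnify(expr: pl.Expr) -> pl.Expr:
        # Dispatches to the function exported by the compiled Rust library
        return register_plugin_function(
            plugin_path=Path(__file__).parent,
            function_name="pig_latinnify",
            args=expr,
            is_elementwise=True,
        )

    df = pl.DataFrame({"words": ["polars", "plugin"]})
    print(df.with_columns(pig_latinnify(pl.col("words")).alias("pig_latin")))

The Rust side implements the actual computation; the Python wrapper only registers it as a native Polars expression, so it composes with the rest of the API like any built-in.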

Many Python frameworks are suitable for creating basic dashboards or prototypes but struggle with more complex ones. Taking lessons from the JavaScript community, the experts in building UIs, we created a new framework called Solara. Solara scales to much more complex apps and compute-intensive dashboards. Built on the Jupyter stack, Solara apps and their reusable components run in the Jupyter notebook and on Solara's own production-quality server based on Starlette/FastAPI.

Solara has a declarative API that is designed for dynamic and complex UIs yet is easy to write. Reactive variables power our state management and automatically trigger rerenders. Our component-centric architecture encourages code reuse, and hot reloading promotes an efficient workflow. With our rich set of UI and data-focused components, Solara spans the entire spectrum from rapid prototyping to robust, complex dashboards.
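A minimal sketch, assuming Solara's documented reactive API (a module-level reactive variable plus a Page component):

    import solara

    clicks = solara.reactive(0)

    @solara.component
    def Page():
        # Reading clicks.value subscribes the component; set() triggers a rerender
        solara.Button(
            label=f"Clicked {clicks.value} times",
            on_click=lambda: clicks.set(clicks.value + 1),
        )

The same file can be displayed in a Jupyter notebook or served with "solara run app.py".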

A 90-minute hands-on workshop on the FiftyOne computer vision toolset. Part 1 covers FiftyOne basics (terms, architecture, installation, and general usage); an overview of useful workflows to explore, understand, and curate data; and how FiftyOne represents and semantically slices unstructured computer vision data. Part 2 is a hands-on introduction to FiftyOne: loading datasets from the FiftyOne Dataset Zoo, navigating the FiftyOne App, programmatically inspecting attributes, adding new samples and custom attributes, generating and evaluating model predictions, and saving insightful views into the data.
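A hedged sketch of the Part 2 starting point (the "quickstart" dataset ships in the FiftyOne Dataset Zoo):

    import fiftyone as fo
    import fiftyone.zoo as foz

    # Load a small sample dataset and open it in the FiftyOne App
    dataset = foz.load_zoo_dataset("quickstart")
    session = fo.launch_app(dataset)

    # Programmatically inspect attributes and save a semantic slice of the data
    print(dataset.first())
    view = dataset.filter_labels("predictions", fo.ViewField("confidence") > 0.9)
    dataset.save_view("high-conf", view)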

Summary
Airbyte is one of the most prominent platforms for data movement. Over the past 4 years they have invested heavily in solutions for scaling the self-hosted and cloud operations, as well as the quality and stability of their connectors. As a result of that hard work, they have declared their commitment to the future of the platform with a 1.0 release. In this episode Michel Tricot shares the highlights of their journey and the exciting new capabilities that are coming next.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Your host is Tobias Macey and today I'm interviewing Michel Tricot about the journey to the 1.0 launch of Airbyte and what that means for the project

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Airbyte is and the story behind it?
What are some of the notable milestones that you have traversed on your path to the 1.0 release?
The ecosystem has gone through some significant shifts since you first launched Airbyte. How have trends such as generative AI, the rise and fall of the "modern data stack", and the shifts in investment impacted your overall product and business strategies?
What are some of the hard-won lessons that you have learned about the realities of data movement and integration?
What are some of the most interesting/challenging/surprising edge cases or performance bottlenecks that you have had to address?
What are the core architectural decisions that have proven to be effective?
How has the architecture had to change as you progressed to the 1.0 release?
A 1.0 version signals a degree of stability and commitment. Can you describe the decision process that you went through in committing to a 1.0 version?
What are the most interesting, innovative, or unexpected ways that you have seen Airbyte used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Airbyte?
When is Airbyte the wrong choice?
What do you have planned for the future of Airbyte after the 1.0 launch?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
Airbyte
Podcast Episode
Airbyte Cloud
Airbyte Connector Builder
Singer Protocol
Airbyte Protocol
Airbyte CDK
Modern Data Stack
ELT
Vector Database
dbt
Fivetran
Podcast Episode
Meltano
Podcast Episode
dlt
Reverse ETL
GraphRAG
AI Engineering Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

In the era of AI-driven applications, personalization is paramount. This talk explores the concept of Full RAG (Retrieval-Augmented Generation) and its potential to revolutionize user experiences across industries. We examine four levels of context personalization, from basic recommendations to highly tailored, real-time interactions.

The presentation demonstrates how increasing levels of context - from batch data to streaming and real-time inputs - can dramatically improve AI model outputs. We discuss the challenges of implementing sophisticated context personalization, including data engineering complexities and the need for efficient, scalable solutions.

Introducing the concept of a Context Platform, we showcase how tools like Tecton can simplify the process of building, deploying, and managing personalized context at scale. Through practical examples in travel recommendations, we illustrate how developers can easily create and integrate batch, streaming, and real-time context using simple Python code, enabling more engaging and valuable AI-powered experiences.
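As a library-agnostic illustration of those context levels (deliberately not Tecton's actual API), the pattern boils down to merging batch, streaming, and real-time signals into the context passed to the model:

    # Illustrative only: the feature values would come from a feature/context
    # platform rather than being hard-coded.
    def build_context(user_id: str, session_events: list[str], query: str) -> str:
        batch_ctx = {"home_airport": "SFO", "past_trips": ["Lisbon", "Kyoto"]}  # nightly batch features
        stream_ctx = {"recently_viewed": session_events[-3:]}                   # streaming aggregates
        realtime_ctx = {"current_query": query}                                 # computed at request time
        return (
            f"User profile: {batch_ctx}\n"
            f"Recent activity: {stream_ctx}\n"
            f"Request: {realtime_ctx}"
        )

    prompt_context = build_context(
        "u42", ["Paris hotels", "flights to Paris", "Louvre tickets"], "plan a weekend in Paris"
    )
    # prompt_context is then combined with retrieved documents and sent to the LLM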

Data Storytelling with Altair and AI

Great data presentations tell a story. Learn how to organize, visualize, and present data using Python, generative AI, and the cutting-edge Altair data visualization toolkit. Take the fast track to amazing data presentations!

Data Storytelling with Altair and AI introduces a stack of useful tools and tried-and-tested methodologies that will rapidly increase your productivity, streamline the visualization process, and leave your audience inspired. In Data Storytelling with Altair and AI you'll discover:
Using Python Altair for data visualization
Using generative AI tools for data storytelling
The main concepts of data storytelling
Building data stories with the DIKW pyramid approach
Transforming raw data into a data story

Data Storytelling with Altair and AI teaches you how to turn raw data into effective, insightful data stories. You'll learn exactly what goes into an effective data story, then combine your Python data skills with the Altair library and AI tools to rapidly create amazing visualizations. Your bosses and decision-makers will love your new presentations, and you'll love how quick the whole process is with generative AI!

About the Technology
Every dataset tells a story. After you've cleaned, crunched, and organized the raw data, it's your job to share its story in a way that connects with your audience. Python's Altair data visualization library, combined with generative AI tools like Copilot and ChatGPT, provides an amazing toolbox for transforming numbers, code, text, and graphics into intuitive data presentations.

About the Book
Data Storytelling with Altair and AI teaches you how to build enhanced data visualizations using these tools. The book uses hands-on examples to build powerful narratives that can inform, inspire, and motivate. It covers the Altair data visualization library, along with AI techniques like generating text with ChatGPT, creating images with DALL-E, and Python coding with Copilot. You'll learn by practicing with each interesting data story, from tourist arrivals in Portugal to population growth in the USA to fake news, salmon aquaculture, and more.

What's Inside
The Data-Information-Knowledge-Wisdom (DIKW) pyramid
Publishing data stories using Streamlit, Tableau, and Comet
The Vega and Vega-Lite visualization grammar

About the Reader
For data analysts and data scientists experienced with Python. No previous knowledge of Altair or generative AI required.

About the Author
Angelica Lo Duca is a researcher at the Institute of Informatics and Telematics of the National Research Council, Italy. The technical editor on this book was Ninoslav Cerkez.

Quotes
This book's step-by-step approach, illustrated through real-world examples, makes complex data accessible and actionable. - Alexey Grigorev, DataTalks.Club
A clear and concise guide to data storytelling. Highly recommended. - Andrew Madson, Insights x Design
Data storytelling in a way that anyone can do! This book feels ahead of its time. - Avery Smith, Data Career Jumpstart
Excellent hands-on exercises that combine two of my favorite tools: AI and the Altair library. - Jose Berengueres, Author of DataViz and Storytelling
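For readers new to the library, a minimal taste of Altair's declarative style (toy data, not an example from the book):

    import altair as alt
    import pandas as pd

    df = pd.DataFrame({"year": [2020, 2021, 2022, 2023], "value": [10, 15, 13, 18]})

    chart = (
        alt.Chart(df)
        .mark_line(point=True)
        .encode(x="year:O", y="value:Q")
    )
    chart.save("chart.html")  # or display the chart inline in a notebook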

Statistics for Data Science and Analytics

An introductory statistics textbook with a focus on data science topics such as prediction, correlation, and data exploration.

Statistics for Data Science and Analytics is a comprehensive guide to statistical analysis using Python, presenting important topics useful for data science such as prediction, correlation, and data exploration. The authors provide an introduction to statistical science and big data, as well as an overview of Python data structures and operations. A range of statistical techniques are presented with their implementation in Python, including hypothesis testing, probability, exploratory data analysis, categorical variables, surveys and sampling, A/B testing, and correlation. The text introduces binary classification, a foundational element of machine learning; validation of statistical models by applying them to holdout data; and probability and inference via the easy-to-understand method of resampling and the bootstrap instead of a myriad of "kitchen sink" formulas. Regression is taught both as a tool for explanation and for prediction. This book is informed by the authors' experience designing and teaching both introductory statistics and machine learning at Statistics.com. Each chapter includes practical examples, explanations of the underlying concepts, and Python code snippets to help readers apply the techniques themselves.

Statistics for Data Science and Analytics includes information on sample topics such as:
Int, float, and string data types; numerical operations; manipulating strings; converting data types; and advanced data structures like lists, dictionaries, and sets
Experiment design via randomizing, blinding, and before-after pairing, as well as proportions and percents when handling binary data
Specialized Python packages like numpy, scipy, pandas, scikit-learn, and statsmodels, the workhorses of data science, and how to get the most value from them
Statistical versus practical significance, random number generators, functions for code reuse, and binomial and normal probability distributions

Written by and for data science instructors, Statistics for Data Science and Analytics is an excellent learning resource for data science instructors prescribing a required intro stats course for their programs, as well as other students and professionals seeking to transition to the data science field.
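As a flavor of the resampling approach the book favors over formula-based inference, a minimal bootstrap confidence interval in Python (toy data):

    import numpy as np

    rng = np.random.default_rng(42)
    sample = rng.normal(loc=100, scale=15, size=50)  # stand-in for observed data

    # Resample with replacement many times and collect the statistic of interest
    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(10_000)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"95% bootstrap CI for the mean: ({lo:.1f}, {hi:.1f})")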

Summary
As data architectures become more elaborate and the number of applications of data increases, it becomes increasingly challenging to locate and access the underlying data. Gravitino was created to provide a single interface to locate and query your data. In this episode Junping Du explains how Gravitino works, the capabilities that it unlocks, and how it fits into your data platform.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Your host is Tobias Macey and today I'm interviewing Junping Du about Gravitino, an open source metadata service for a unified view of all of your schemas

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Gravitino is and the story behind it?
What problems are you solving with Gravitino?
What are the methods that teams have relied on in the absence of Gravitino to address those use cases?
What led to the Hive Metastore being the default for so long?
What are the opportunities for innovation and new functionality in the metadata service?
The documentation suggests that Gravitino has overlap with a number of tool categories such as table schema (Hive metastore), metadata repository (Open Metadata), data federation (Trino/Alluxio). What are the capabilities that it can completely replace, and which will require other systems for more comprehensive functionality?
What are the capabilities that you are explicitly keeping out of scope for Gravitino?
Can you describe the technical architecture of Gravitino?
How have the design and scope evolved from when you first started working on it?
Can you describe how Gravitino integrates into an overall data platform?
In a typical day, what are the different ways that a data engineer or data analyst might interact with Gravitino?
One of the features that you highlight is centralized permissions management. Can you describe the access control model that you use for unifying across underlying sources?
What are the most interesting, innovative, or unexpected ways that you have seen Gravitino used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Gravitino?
When is Gravitino the wrong choice?
What do you have planned for the future of Gravitino?

Contact Info
LinkedIn
GitHub

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.init covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
Gravitino
Hadoop
Datastrato
PyTorch
Ray
Data Fabric
Hive
Iceberg
Podcast Episode
Hive Metastore
Trino
OpenMetadata
Podcast Episode
Alluxio
Atlan
Podcast Episode
Spark
Thrift

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Contents: Piwik PRO offers plenty of APIs for every conceivable purpose, including the targeted retrieval of consolidated figures, raw data, and reports. We will look at what you need in order to use the API, how to retrieve data, and how to put that data to use for different purposes. We will use Python and Google Colab notebooks as our foundation and start from the very beginning, so that anyone who wants to can follow along with the individual steps - in parallel or later - using their own data and build on them further. Programming skills are not strictly required; that too is an advantage of the tool stack we will examine more closely in this training.
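A hedged sketch of the kind of API call the training builds up to, runnable in Google Colab; the endpoint path, payload shape, and token handling here are placeholders, not Piwik PRO's documented API:

    import requests

    BASE_URL = "https://example.piwik.pro"   # your account URL (placeholder)
    TOKEN = "..."                            # API access token (placeholder)

    resp = requests.post(
        f"{BASE_URL}/api/analytics/v1/query/",              # hypothetical endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"columns": [{"column_id": "sessions"}]},      # hypothetical payload
    )
    resp.raise_for_status()
    print(resp.json())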

Full Stack FastAPI, React, and MongoDB - Second Edition

Full Stack FastAPI, React, and MongoDB guides you step-by-step through creating web applications using the FARM stack. This hands-on resource teaches you how to integrate FastAPI, a modern Python framework, React for front-end development, and MongoDB for data storage to build and deploy powerful, scalable web applications.

What this Book will help me do
Master the essentials of MongoDB, including creating and managing document-based databases.
Gain proficiency in building APIs using FastAPI and Python for robust backend systems.
Develop dynamic frontends using React, integrating seamlessly with a FastAPI backend.
Securely authenticate and authorize users using JSON Web Tokens in your applications.
Explore advanced features like integrating AI models and building with Next.js for production-ready development.

Author(s)
Marko Aleksendrić, Shrey Batra, Rachelle Palmer, and Shubham Ranjan combine their expertise in web development and software engineering in this book. Together, they bring years of professional experience and a passion for teaching developers to create modern web applications effectively using cutting-edge tools.

Who is it for?
Intermediate web developers who possess foundational JavaScript and Python skills are the ideal audience for this book. If you want to advance your skills by mastering modern web application development with the FARM stack, this book will guide you comprehensively. With practical, real-world examples, it is designed for developers aiming to build production-grade applications.
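A minimal sketch of the FARM backend pattern the book teaches: FastAPI serving data from MongoDB via the async motor driver (the connection string and collection name are placeholders):

    from fastapi import FastAPI
    from motor.motor_asyncio import AsyncIOMotorClient

    app = FastAPI()
    client = AsyncIOMotorClient("mongodb://localhost:27017")  # placeholder URI
    db = client["farm_demo"]

    @app.get("/items")
    async def list_items():
        # Exclude MongoDB's ObjectId so the documents are JSON-serializable
        return await db["items"].find({}, {"_id": 0}).to_list(length=100)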

Polars Cookbook

Dive into the world of data analysis with the Polars Cookbook. This book, ideal for data professionals, covers practical recipes to manipulate, transform, and analyze data using the Python Polars library. You'll learn both the fundamentals and advanced techniques to build efficient and scalable data workflows.

What this Book will help me do
Master the basics of Python Polars, including installation and setup.
Perform complex data manipulation like pivoting, grouping, and joining.
Handle large-scale time series data for accurate analysis.
Understand data integration with libraries like pandas and numpy.
Optimize workflows for both on-premise and cloud environments.

Author(s)
Yuki Kakegawa is an experienced data analytics consultant who has collaborated with companies such as Microsoft and Stanford Health Care. His passion for data led him to create this detailed guide on Polars. His expertise ensures you gain real-world, actionable insights from every chapter.

Who is it for?
This book is perfect for data analysts, engineers, and scientists eager to enhance their efficiency with Python Polars. If you are familiar with Python and tools like pandas but are new to Polars, this book will upskill you. Whether handling big data or optimizing code for performance, the Polars Cookbook has the guidance you need to succeed.
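A small taste of the Polars API the recipes build on: lazy evaluation with grouping and aggregation (toy data; recent Polars versions name the method group_by):

    import polars as pl

    df = pl.DataFrame({
        "store": ["A", "A", "B", "B"],
        "sales": [10, 20, 5, 15],
    })

    result = (
        df.lazy()
        .group_by("store")
        .agg(pl.col("sales").sum().alias("total_sales"))
        .sort("store")
        .collect()
    )
    print(result)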

Hands-on 90-minute workshop to learn how to leverage the FiftyOne open source computer vision toolset. Part 1 covers FiftyOne basics (terms, architecture, installation, and general usage); an overview of useful workflows to explore, understand, and curate data; and how FiftyOne represents and semantically slices unstructured computer vision data. Part 2 is a hands-on introduction to FiftyOne: loading datasets from the FiftyOne Dataset Zoo, navigating the FiftyOne App, programmatically inspecting attributes, adding new samples and custom attributes, generating and evaluating model predictions, and saving insightful views into the data.

Data Hackers News is on the air!! The hottest topics of the week, with the top news from the world of Data, AI, and Technology that you can also find in our weekly newsletter, now on the Data Hackers podcast!!

Press play and listen to this week's Data Hackers News now!

To keep up with everything happening in the data world, subscribe to the weekly newsletter:

https://www.datahackers.news/

Meet the Data Hackers News commentators:

Monique Femme

Paulo Vasconcellos

Stories/topics discussed:

Anthropic launches Claude in Brazil with a focus on corporate use;

Data leak exposes 39 million records of Brazilians.

Download the full State of Data Brazil report and the survey highlights.

Also discussed:
Elon Musk launches a controversial new version of his image-generation AI;
Waymo taxis start honking in the middle of the night;
Former Google CEO apologizes after saying Google was losing the AI battle because of remote work.

Python Brasil event: use the coupon code DATAHACKERS_NA_PYBR24 for 10% off.

Other Data Hackers channels:

Site

LinkedIn

Instagram

TikTok

YouTube