talk-data.com

Event

SciPy 2025

2025-07-07 – 2025-07-13 PyData

Activities tracked

142

Sessions & talks

Showing 1–25 of 142 · Newest first


Agentic AI and latency implications

2025-07-12
talk

Since agent processing takes significant time, what happens to latency when agentic AI is added to an existing workflow? What are the latency challenges? What are key strategies for overcoming them? How should we reset user expectations? What should be done to maintain or enhance the user experience? And what trade-offs should be considered between performance, latency, and cost?

SciPy 2025 Sprint Prep BoF

2025-07-12
talk

Come join the BoF to do a practice run on contributing to a GitHub project. We will walk through how to open a Pull Request for a bugfix, using the workflow most libraries participating in the weekend sprints use (hosted by the sprint chairs).

Towards Robust Security in Scientific Open Source Projects

2025-07-12
talk

In the open-source community, the security of software packages is a critical concern since it constitutes a significant portion of the global digital infrastructure. This BoF session will focus on the supply chain security of open-source software in scientific computing. We aim to bring together maintainers and contributors of scientific Python packages to discuss current security practices, identify common vulnerabilities, and explore tools and strategies to enhance the security of the ecosystem. Join us to share your experiences, challenges, and ideas on fortifying our open-source projects against potential threats and ensuring the integrity of scientific research.

GPU Accelerated Python

2025-07-11
talk

If you have interest in NumPy, SciPy, Signal Processing, Simulation, DataFrames, Linear Programming (LP), Vehicle Routing Problems (VRP), or Graph Analysis, we'd love to hear what performance you're seeing and how you're measuring.

Real-world Impacts of Generative AI in the Research Software Engineer and Data Scientist Workplace

2025-07-11
talk

Recent breakthroughs in large language model-based artificial intelligence (AI) have captured the public’s interest in AI more broadly. With the growing adoption of these technologies in professional and educational settings, public dialog about their potential impacts on the workforce has been ubiquitous. It is, however, difficult to separate the public dialog about the potential impact of the technology from the experienced impact of the technology in the research software engineer and data science workplace. Likewise, it is challenging to separate the generalized anxiety about AI from its specific impacts on individuals working in specialized work settings.

As research software engineers (RSEs) and those in adjacent computational fields engage with AI in the workplace, the realities of the impacts of this technology are becoming clearer. However, much of the dialog has been limited to high-level discussion around general intra-institutional impacts, and lacks the nuance required to provide helpful guidance to RSE practitioners in research settings, specifically. Surprisingly, many RSEs are not involved in career discussions on what the rise of AI means for their professions.

During this BoF, we will hold a structured, interactive discussion session with the goal of identifying critical areas of engagement with AI in the workplace including: current use of AI, AI assistance and automation, AI skills and workforce development, AI and open science, and AI futures. This BoF will represent the first of a series of discussions held jointly by the Academic Data Science Alliance and the US Research Software Engineer Association over the coming year, with support from Schmidt Sciences. The insights gathered from these sessions will inform the development of guidance resources on these topic areas for the broader RSE and computational data practitioner communities.

SciPy 2026

2025-07-11
talk

Come share your ideas for next year's SciPy. Participants will have an opportunity to sign up to be on next year's organizing committee.

Lightning Talks

2025-07-11
talk

Lightning talks are 5-minute talks on any topic of interest for the SciPy community. We encourage spontaneous and prepared talks from everyone, but we can’t guarantee spots. Sign ups are at the NumFOCUS booth during the conference.

Break

2025-07-11
talk

Accelerating scientific data releases: Automated metadata generation with LLM agents

2025-07-11
talk

The rapid growth of scientific data repositories demands innovative solutions for efficient metadata creation. In this talk, we present our open-source project that leverages large language models to automate the generation of standard-compliant metadata files from raw scientific datasets. Our approach harnesses the capabilities of pre-trained open source models, finetuned with domain-specific data, and integrated with Langgraph to orchestrate a modular, end-to-end pipeline capable of ingesting heterogeneous raw data files and outputting metadata conforming to specific standards.

The methodology involves a multi-stage process where raw data is first parsed and analyzed by the LLM to extract relevant scientific and contextual information. This information is then structured into metadata templates that adhere strictly to recognized standards, thereby reducing human error and accelerating the data release cycle. We demonstrate the effectiveness of our approach using the USGS ScienceBase repository, where we have successfully generated metadata for a variety of scientific datasets, including images, time series, and text data.
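As a rough illustration of the template-filling stage described above, the sketch below parses a raw CSV and emits a minimal metadata dictionary. This is a hypothetical simplification: the actual project delegates extraction to an LLM and targets real standards (e.g. USGS/FGDC schemas), and the `build_metadata` function and `"FGDC-lite"` label here are invented for illustration.

```python
import csv
import io

def build_metadata(raw_csv: str, standard: str = "FGDC-lite") -> dict:
    """Parse a raw CSV and fill a minimal metadata template.

    Stands in for the LLM-driven extraction stage: the real pipeline
    infers scientific context, not just structural facts.
    """
    reader = csv.reader(io.StringIO(raw_csv))
    header = next(reader)          # column names -> entity attributes
    rows = list(reader)
    return {
        "standard": standard,
        "entity_attributes": [{"label": h} for h in header],
        "record_count": len(rows),
    }

sample = "site_id,temp_c\nA1,12.5\nA2,13.1\n"
meta = build_metadata(sample)
```

The LLM stage would replace the direct header parse, but the output contract — a dictionary conforming to a fixed template — is the part that keeps downstream validation tractable.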

Beyond its immediate application to the USGS ScienceBase repository, our open-source framework is designed to be extensible, allowing adaptation to other data release processes across various scientific domains. We will discuss the technical challenges encountered, such as managing diverse data formats and ensuring metadata quality, and outline strategies for community-driven enhancements. This work not only streamlines the metadata creation workflow but also sets the stage for broader adoption of generative AI in scientific data management.

Additional Material: - Project supported by USGS and ORNL - Codebase will be available on GitHub after paper publication - Fine-tuned LLM models will be available on Hugging Face after paper publication

Dive into Flytekit's Internals: A Python SDK to Quickly Bring your Code Into Production

2025-07-11
talk

Flyte is a Linux Foundation OSS orchestrator built for Data and Machine Learning workflows focused on scalability, reliability, and developer productivity. Flyte’s Python SDK, Flytekit, empowers developers by shipping their code from their local environments onto a cluster with one simple CLI command. In this talk, you will learn about the design and implementation details that power Flytekit’s core features, such as “fast registration” and “type transformers”, and a plugin system that enables Dask, Ray, or distributed GPU workflows.
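To give a feel for the "type transformer" idea mentioned above, here is a toy registry that converts native Python values into a serializable literal form, so values can cross the local/cluster boundary. This is not Flytekit's actual API; the registry, decorator, and literal shapes are invented for illustration.

```python
from typing import Any, Callable, Dict, Type

# Maps a Python type to a function that serializes it into a "literal".
_TRANSFORMERS: Dict[Type, Callable[[Any], dict]] = {}

def register(py_type: Type):
    """Decorator that registers a transformer for one Python type."""
    def wrap(fn):
        _TRANSFORMERS[py_type] = fn
        return fn
    return wrap

@register(int)
def int_to_literal(v: int) -> dict:
    return {"scalar": {"integer": v}}

@register(str)
def str_to_literal(v: str) -> dict:
    return {"scalar": {"string": v}}

def to_literal(v: Any) -> dict:
    """Dispatch on the runtime type to serialize a value."""
    return _TRANSFORMERS[type(v)](v)
```

The plugin angle follows naturally: third-party packages can register transformers for their own types at import time, without the core dispatcher knowing about them.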

Remote development for students and indie researchers with Spyder

2025-07-11
talk

PhD students, postdocs and independent researchers often struggle when trying to execute code developed locally in the cloud or on HPC clusters for better performance. This is even more difficult if they can't count on IT staff to set up the necessary infrastructure for them on the remote machine, which is common in developing countries. Spyder 6.1 will come with a whole set of improvements to address that limitation, from automatically setting up a server to run code remotely on users' behalf, to managing remote Conda environments and the remote file system from the comfort of a local Spyder installation.

From Model to Trust: Building upon tamper-proof ML metadata records

2025-07-11
talk

The increasing prevalence of AI models necessitates robust mechanisms to ensure their trustworthiness. This talk introduces a standardized, PKI-agnostic approach to verifying the origins and integrity of machine learning models, as built by the OpenSSF Model Signing project. We extend this methodology beyond models to encompass datasets and other associated files, offering a holistic solution for maintaining data provenance and integrity.
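The core of provenance checking — record a digest per artifact at signing time, re-hash and compare before use — can be sketched in a few lines. This is a conceptual illustration only, not the OpenSSF Model Signing project's API, which additionally binds the manifest to a PKI-agnostic signature.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_manifest(artifacts: dict) -> dict:
    """Record one digest per artifact (model weights, datasets, ...)."""
    return {name: digest(blob) for name, blob in artifacts.items()}

def verify(artifacts: dict, manifest: dict) -> bool:
    """Re-hash each artifact and compare against the recorded digest."""
    return all(digest(blob) == manifest.get(name)
               for name, blob in artifacts.items())

arts = {"weights.bin": b"\x00\x01", "dataset.csv": b"a,b\n1,2\n"}
manifest = make_manifest(arts)
assert verify(arts, manifest)
assert not verify({"weights.bin": b"tampered"}, manifest)
```

In the real system the manifest itself is signed, so an attacker cannot simply regenerate digests after tampering.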

From the outside, in: How the napari community supports users and empowers transition to contribution

2025-07-11
talk

Napari, an open-source viewer for scientific data, has an inviting and well-established community that encourages contribution to its own project and the broader bioimage analysis community. This talk will explore how napari supports non-traditional contributors—especially those without formal software development experience—through its welcoming community, human-centered documentation, and rich plugin ecosystem.
As someone with a pure biology background, I will share my journey into computational bioimage analysis, the scientific Python world, and napari's community. By sharing my experience writing a plugin and contributing to the core project, I will show how community-driven projects like napari lower barriers to entry, empower scientists, and cultivate a diverse, engaged research and developer community.

marimo: an open-source reactive Python notebook

2025-07-11
talk
Akshay Agrawal (Marimo)

Python notebooks are a workhorse of scientific computing. But traditional notebooks have problems — they suffer from a reproducibility crisis; they are difficult to use with interactive widgets; their file format does not play well with Git; and they aren't reusable like regular Python scripts or modules.

This talk presents marimo, an open-source reactive Python notebook that addresses these concerns by modeling notebooks as dataflow graphs and storing them as Python files. We discuss design decisions and their tradeoffs, and show how these decisions make marimo notebooks reproducible in execution and packaging, Git-friendly, executable as scripts, and shareable as apps.
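The dataflow-graph model behind reactive notebooks can be illustrated with a toy, which is emphatically not marimo's API: each cell declares the variables it depends on, and execution order is determined by data dependencies, not by the order cells appear in the file.

```python
# Each cell: (function computing its value, list of cells it depends on).
cells = {
    "a": (lambda env: 2, []),                 # a = 2
    "b": (lambda env: env["a"] * 10, ["a"]),  # b = a * 10
    "c": (lambda env: env["b"] + 1, ["b"]),   # c = b + 1
}

def run_all(cells):
    env, done = {}, set()
    def run(name):
        if name in done:
            return
        fn, deps = cells[name]
        for dep in deps:
            run(dep)          # resolve dependencies first
        env[name] = fn(env)
        done.add(name)
    for name in cells:
        run(name)
    return env

env = run_all(cells)  # topological execution over the dependency DAG
```

The same graph is what makes reactivity possible: when a cell changes, only its transitive dependents need to be re-run, which is how stale-output bugs of traditional notebooks are avoided.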

From One Notebook to Many Reports: Automating with Quarto

2025-07-11
talk

Would you rather read a “Climate summary” or a “Climate summary for exactly where you live”? Producing documents that tailor your scientific results to an individual or their situation increases understanding, engagement, and connection. But producing many reports can be onerous.

If you are looking for a way to automate producing many reports, or you produce reports like this but find yourself in copy-and-paste hell, come along to learn how Quarto solves this problem with parameterized reports - you create a single Python notebook, but you generate many beautiful customized PDFs.
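As a sketch of what a parameterized Quarto document can look like (a hypothetical `report.qmd`; the city/year parameters are invented for illustration), defaults live in a cell tagged `parameters` and are overridden at render time:

````markdown
---
title: "Climate summary"
format: pdf
---

```{python}
#| tags: [parameters]
# Default values; overridden per render
city = "Austin"
year = 2024
```

## Summary for `{python} city`, `{python} year`
````

Rendering many reports then becomes a loop over parameter values, e.g. `quarto render report.qmd -P city:Portland`, one invocation per customized PDF.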

Slides

Learning the art of fostering open-source communities

2025-07-11
talk

Open-source projects are intricate ecosystems that consist of humans contributing in a diverse manner. These contributions are one of the essential elements driving the projects and must be encouraged. The humans behind these contributions play a vital role in constituting the lively and diverse community of the project. Both the humans and their contributions must be preserved and handled with utmost care for the success and evolution of the project.

As with every community, certain best practices should be followed to maintain its health, and certain pitfalls should be avoided. In this talk, I’ll share what I have learned from maintaining the vibrant and wonderful Zarr project and its community over the years.

Real-time ML: Accelerating Python for inference (< 10ms) at scale

2025-07-11
talk

Real-time machine learning depends on features and data that by definition can’t be pre-computed. Detecting fraud or acute diseases like sepsis requires processing events that emerged seconds ago. How do we build an infrastructure platform that executes complex data pipelines (< 10ms) end-to-end and on-demand? All while meeting data teams where they are, in Python, the language of ML! Learn how we built a symbolic interpreter that accelerates ML pipelines by transpiling Python into DAGs of static expressions. These expressions are optimized in C++ and eventually run in production workloads at scale with Velox, an OSS (~4k stars) unified query engine (C++) from Meta.
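The symbolic-interpretation step described above can be illustrated with a toy tracer (not the speakers' system, which targets Velox): ordinary-looking Python runs against symbolic inputs, and operator overloading records a DAG of static expressions instead of computing values.

```python
class Expr:
    """A node in a static expression DAG."""
    def __init__(self, op, args):
        self.op, self.args = op, args
    def __add__(self, other):
        return Expr("add", [self, to_expr(other)])
    def __mul__(self, other):
        return Expr("mul", [self, to_expr(other)])

def to_expr(v):
    return v if isinstance(v, Expr) else Expr("const", [v])

class Col(Expr):
    """A symbolic column reference."""
    def __init__(self, name):
        super().__init__("col", [name])

def risk_score(amount, velocity):
    # Plain Python; with Expr inputs it builds a DAG instead of a number
    return amount * 2 + velocity

dag = risk_score(Col("amount"), Col("velocity"))
assert dag.op == "add"
assert dag.args[0].op == "mul"
```

Once the pipeline exists as a DAG of pure expressions, it can be handed to an optimizing C++ engine and evaluated on-demand with no Python in the hot path.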

(Exclusively on Zoom) Not Remotely Fun: Virtual Lightning Talks

2025-07-11
talk

Sign up for the CHANCE to give a 5-minute lightning talk by messaging David Nicholson or Rebecca BurWei on Slack. Or, show up to the Zoom on time and we'll take names for the first 5 minutes. Talks will be randomly selected. Virtual surprises await! Virtual and in-person conference attendees welcome!

Zoom: https://numfocus-org.zoom.us/j/82704423021?pwd=rJSUmdWwGaqIL8WKY4s6l7B6049rBM.1 2025-07-11 12:00 until 2025-07-11 13:00

Lunch

2025-07-11
talk

From Legacy to Leading-Edge: Revamping NCEI Software for the Cloud Era

2025-07-11
talk

Extreme weather events threaten industries and economic stability. NOAA’s National Centers for Environmental Information (NCEI) addresses this through the Industry Proving Grounds (IPG), which modernizes data delivery by collaborating with sectors like re/insurance and retail to develop practical, data-driven solutions. This presentation explores IPG’s technical innovations, including implementing Polars for efficient data processing, AWS for scalability, and CI/CD pipelines for streamlined deployment. These tools enhance data accessibility, reduce latency, and support real-time decision-making. By integrating scientific computing, cloud technology, and DevOps, NCEI improves climate resilience and provides a model for leveraging open-source tools to address global challenges.

Processing Cloud-optimized data in Python (Dataplug)

2025-07-11
talk

The elasticity of the Cloud is very appealing for processing large scientific data. However, enormous volumes of unstructured research data, totaling petabytes, remain untapped in data repositories due to the lack of efficient parallel data access. Even-sized partitioning of these data to enable parallel processing requires a complete re-write to storage, becoming prohibitively expensive at high volumes. In this talk we present Dataplug, an extensible framework that enables fine-grained parallel data access to unstructured scientific data in object storage. Dataplug employs read-only, format-aware indexing, allowing users to define dynamically sized partitions using various partitioning strategies. This approach avoids writing the partitioned dataset back to storage, enabling distributed workers to fetch data partitions on the fly directly from large data blobs, efficiently leveraging the high-bandwidth capability of object storage. Validations on genomic (FASTQGZip) and geospatial (LiDAR) data formats demonstrate that Dataplug considerably lowers pre-processing compute costs (by 65.5%–71.31%) without imposing significant overheads.
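The format-aware indexing idea can be sketched with a toy over newline-delimited records (this is not Dataplug's API; real formats like FASTQGZip need far more involved indexes): one read-only scan builds a byte-offset index, from which record-aligned partitions are derived as byte ranges that workers could fetch independently, e.g. via object-storage range reads.

```python
def build_index(blob: bytes) -> list:
    """One pass over the blob recording record start offsets."""
    offsets = [0]
    for i, b in enumerate(blob):
        if b == 0x0A and i + 1 < len(blob):  # newline ends a record
            offsets.append(i + 1)
    return offsets

def partitions(index: list, blob_len: int, n_parts: int) -> list:
    """Derive (start, end) byte ranges aligned to record boundaries."""
    per = max(1, len(index) // n_parts)
    bounds = [index[i] for i in range(0, len(index), per)]
    return list(zip(bounds, bounds[1:] + [blob_len]))

blob = b"rec1\nrec2\nrec3\nrec4\n"
idx = build_index(blob)
parts = partitions(idx, len(blob), 2)  # two record-aligned byte ranges
```

The key property is that nothing is rewritten: the blob stays put, only the small index is stored, and partition sizes can be chosen per job rather than fixed at ingest time.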

SciPy’s New Infrastructure for Probability Distributions and Random Variables

2025-07-11
talk

The SciPy library provides objects representing well over 100 univariate probability distributions. These have served the scientific Python ecosystem for decades, but they are built upon an infrastructure that has not kept up with the demands of today’s users. To address its shortcomings, SciPy 1.15 includes a new infrastructure for working with probability distributions. This talk will introduce users to the new infrastructure and demonstrate its many advantages in terms of usability, flexibility, accuracy, and performance.

Lessons Learned from Adding Backend Dispatching to NetworkX and scikit-image

2025-07-11
talk

As scientific computing increasingly relies on diverse hardware (CPUs, GPUs, etc) and data structures, libraries face pressure to support multiple backends while maintaining a consistent API. This talk presents practical considerations for adding dispatching to existing libraries, enabling seamless integration with external backends. Using NetworkX and scikit-image as case studies, we demonstrate how they evolved to become a common API with multiple implementations, handle backend-specific behaviors, and ensure robustness through testing and documentation. We also discuss technical challenges, differences in approaches, community adoption strategies, and the broader implications for the SciPy ecosystem.
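The dispatch pattern at the heart of this — one public API, multiple registered implementations, selected by a `backend=` keyword with a reference fallback — can be sketched generically. This is a conceptual toy, not NetworkX's or scikit-image's actual machinery (which discovers backends via package entry points).

```python
_BACKENDS = {}

def register_backend(name):
    """Register an implementation under a backend name."""
    def wrap(fn):
        _BACKENDS[name] = fn
        return fn
    return wrap

@register_backend("reference")
def _degree_reference(adj):
    # Pure-Python reference implementation on an adjacency dict
    return {node: len(nbrs) for node, nbrs in adj.items()}

def degree(adj, backend="reference"):
    """Public API: stable signature, pluggable implementation."""
    if backend not in _BACKENDS:
        raise ValueError(f"unknown backend {backend!r}")
    return _BACKENDS[backend](adj)

adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
assert degree(adj) == {"a": 2, "b": 1, "c": 1}
```

A GPU backend would register under another name and be selected per call, while the test suite runs the same assertions against every registered implementation — one of the robustness strategies the talk covers.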

SciPy Proceedings: An Exemplar for Publishing Computational Open Science

2025-07-11
talk

The SciPy Proceedings (https://proceedings.scipy.org) have long served as a cornerstone for publishing research in the scientific Python community, with over 330 peer-reviewed articles published over the last 17 years. In 2024, the SciPy Proceedings underwent a significant transformation, adopting MyST Markdown (https://mystmd.org) and Curvenote (https://curvenote.com) to enhance accessibility, interactivity, and reproducibility — including publishing of Jupyter Notebooks. The new proceedings articles are web-first, providing features such as deep-dive links for cross-references and previews of GitHub content, interactive 3D visualizations, and rich rendering of Jupyter Notebooks. In this talk, we will (1) present the new authoring & reading capabilities introduced in 2024; (2) highlight connections to prominent open-science initiatives and their impact on advancing computational research publishing; and (3) demonstrate the underlying technologies, how they enhance integrations with SciPy packages, and how to use these tools in your own communication workflows.

Our presentation will give an overview of the revised authoring process for SciPy Proceedings; how we improve metadata standards in a similar way to code-linting and continuous integration; and the integration of live previews of the articles, including auto-generated PDFs and JATS XML (a standard used in scientific publishing). The peer-review process for the proceedings currently happens using GitHub’s peer-review commenting in a similar fashion to the Journal of Open Source Software; we will demonstrate this process as well as showcase opportunities for working with distributed review services such as PREreview (https://prereview.org).

The open publishing pipeline has streamlined the submission, review, and revision processes while maintaining high scientific quality and improving the completeness of scholarly metadata. Finally, we will present how this work connects into other high-profile scientific publishing initiatives that have incorporated Jupyter Notebooks and live computational figures as well as interactive displays of large-scale data. These initiatives include Notebooks Now! by the American Geophysical Union, which is focusing on ensuring that Jupyter Notebooks can be properly integrated into the scholarly record; and the Microscopy Society of America’s work on interactive publishing and publishing of large-scale microscopy data with interactive visualizations.

These initiatives and the SciPy Proceedings are enabled by recent improvements in open-source tools including MyST Markdown, JupyterLab, BinderHub, and Curvenote, which enable new ways to share executable research content. These initiatives collectively aim to improve the reproducibility, interactivity, and accessibility of research by providing improved connections between data, software and narrative research articles.

By embracing open science principles and modern technologies, the SciPy Proceedings exemplify how computational research can be more transparent, reproducible, and accessible. The shift to computational publishing, especially in the context of the scientific python community, opens new opportunities for researchers to publish not only their final results but also the computational workflows, datasets, and interactive visualizations that underpin them. This transformation aligns with broader efforts in open science infrastructure, such as integrating persistent identifiers (DOIs, ORCID, ROR), and adopting FAIR (Findable, Accessible, Interoperable, Reusable) principles for computational content. Building on these foundations, as well as open tools like MyST Markdown and Curvenote, provides a scalable model for open scientific publishing that bridges the gap between computational research and scholarly communication, fostering a more collaborative, iterative, and continuous approach to scientific knowledge dissemination.

SpikeInterface: Streamlining End-to-End Spike Sorting Workflows

2025-07-11
talk

Neuroscientists record brain activity using probes that capture rapid voltage changes ('spikes') from neurons. Spike sorting, the process of isolating these signals and attributing them to specific neurons, faces significant challenges: incompatible file formats, diverse algorithms, and inconsistent quality control. SpikeInterface provides a unified Python framework that standardizes data handling across technologies and enables reproducibility. In this talk, we will discuss: 1) SpikeInterface's modular components for I/O, processing, and sorting; 2) containerized dependency management that eliminates complex installation conflicts between diverse spike sorters; and 3) parallelization tools optimized for the memory-intensive nature of large-scale electrophysiology recordings.
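For readers new to the domain, the very first step of spike sorting can be illustrated with a toy negative-threshold crossing detector. This is illustrative only; SpikeInterface's actual pipeline involves filtering, whitening, clustering, and curation, and the function below is invented for this sketch.

```python
def detect_spikes(trace, threshold):
    """Return sample indices where the voltage crosses below threshold.

    A crossing is counted only on the downward transition, so one
    spike waveform yields one event.
    """
    return [i for i in range(1, len(trace))
            if trace[i] <= threshold < trace[i - 1]]

# Synthetic voltage trace with two spike-like deflections
trace = [0.0, -0.1, -6.0, -2.0, 0.2, -5.5, -0.3]
spikes = detect_spikes(trace, -5.0)
```

Attributing each detected event to a specific neuron — the hard part — is what the diverse sorting algorithms wrapped by SpikeInterface actually do.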