talk-data.com
Activities & events
The S in SRE is for...Sustainable! + Pub Quiz
2026-01-13 · 17:00
Hello everyone! Come and join us for our first meetup of 2026 at AWS, Mr. Treublaan 7 (4th floor), 1097 DP Amsterdam, where we dive into the topic of sustainability with featured talks from Nati Cohen and Nourolhoda Alemi, followed by an SRE-themed pub quiz 🥳

Agenda:
- 18:00 Doors open, food and socialising
- 19:00 Opening by SRE NL Meetup Host
- 19:05 ARM Migration Made Practical by Nati Cohen (AWS)
- 19:35 Building a Greener Digital Landscape: Core Principles of Sustainable IT by Nourolhoda Alemi (ING)
- 20:10 Pub Quiz, Networking & Drinks
- 21:00 End

Abstracts:

ARM Migration Made Practical by Nati Cohen
ARM processors are shaking up the cloud by delivering faster performance, lower costs, and greener computing. But with all these benefits, why do so many teams still hesitate to make the leap? This session makes ARM migration practical: we'll clarify the architecture, identify easy-to-migrate workloads, and share proven steps for evaluation and transition. Learn why real-world testing matters, discover essential tools, and build multi-arch containers without increasing CI time. Whether you're starting a new project or updating legacy apps, get actionable insights for a smooth, successful migration.

Nati is a Solutions Architect with AWS. He delights in helping customers simplify complex systems, teaching them about the inner workings of cloud services, and debugging annoying technical oddities. When he is not at his computer he is soldering electronic kits, tinkering with smaller computers, and drumming on a taiko.

Building a Greener Digital Landscape: Core Principles of Sustainable IT by Nourolhoda Alemi
This talk delves into the foundational concepts of Green IT alongside ING's best practices, offering a comprehensive perspective on how technology can drive both innovation and sustainability. Join me to explore:
- The challenges of assessing sustainability in IT systems
- Strategic approaches to reduce digital environmental footprints
- Principles of sustainable software design and development
- Best practices of sustainable IT within the ING organization

Whether you're an IT leader, developer, or tech enthusiast, this talk will provide you with practical insights to align your digital strategies with sustainability goals. Let's think green and build a cleaner digital world.

Nourolhoda is an Engineering Manager within the Cash & Payments domain at ING. With 13 years of solid experience in backend engineering and an academic background in artificial intelligence, she is passionate about integrating Green IT best practices into infrastructure, codebases, and architectural design. Beyond her professional expertise, she is a watercolor artist as well.
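As a rough companion to the ARM-migration abstract above, here is a minimal sketch of the multi-arch container build it mentions, driving Docker Buildx from Python's standard library. The registry tag is a hypothetical placeholder, and the snippet assumes Docker with a buildx builder configured for both platforms; it is an illustration of the technique, not the speaker's exact workflow.

```python
import subprocess

# Hypothetical image tag; replace with your registry/repository.
IMAGE = "registry.example.com/myapp:latest"

# Build one image manifest covering both x86-64 and ARM64 and push it,
# so the same tag runs on either architecture.
subprocess.run(
    [
        "docker", "buildx", "build",
        "--platform", "linux/amd64,linux/arm64",  # two architectures, one manifest
        "-t", IMAGE,
        "--push",  # push the combined multi-arch manifest to the registry
        ".",
    ],
    check=True,  # raise if the build fails
)
```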
Event: The S in SRE is for...Sustainable! + Pub Quiz
AWS re:Invent 2025 - Sustainable and cost-efficient generative AI with agentic workflows (AIM333)
2025-12-03 · 21:21
Building sustainable, cost-effective generative AI on AWS requires integrating agentic AI, efficient architecture, and cloud-native optimization. Agentic systems using Amazon Bedrock AgentCore employ contextual memory, asynchronous execution, and on-demand tool invocation to minimize compute waste. MCP enables secure connections between AI agents, AWS services, and custom tools. Efficiency increases through AWS's Trainium and Inferentia2 silicon (50% better performance per watt), Amazon SageMaker for scalable development, and optimization techniques like quantization and speculative decoding. Auto-scaling, batch processing, and spot instances prevent over-provisioning. Combined with CloudWatch and Cost Explorer monitoring, this approach delivers high-performance, low-carbon generative AI solutions.

Learn more:
More AWS events: https://go.aws/3kss9CP
More AWS videos: http://bit.ly/2O3zS75
More AWS events videos: http://bit.ly/316g9t4

ABOUT AWS: Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, are using AWS to lower costs, become more agile, and innovate faster.

#AWSreInvent #AWSreInvent2025 #AWS
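The abstract above names quantization among its optimization techniques. As a hedged illustration of the underlying idea (not AWS's implementation), the sketch below applies symmetric int8 post-training quantization to a small weight matrix with NumPy: weights are mapped to 8-bit integers with a per-tensor scale, shrinking memory and compute at a small accuracy cost.

```python
import numpy as np

# Toy float32 "weights"; real stacks quantize whole model layers,
# often with per-channel scales and calibration data.
weights = np.random.randn(4, 4).astype(np.float32)

scale = np.abs(weights).max() / 127.0                       # symmetric per-tensor scale
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)  # int8 representation
dequant = q.astype(np.float32) * scale                      # approximate reconstruction

# The reconstruction error is the price paid for 4x smaller weights.
print("max abs error:", np.abs(weights - dequant).max())
```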
Event: AWS re:Invent 2025
From Full-Time Mom to Head of Data and Cloud
2025-11-17 · 11:30
How resilience and inclusion shape better teams – Xia He-Bleinagel

We will explore what it means to rebuild a career in technology and grow into leadership after a major life change. The session will look at how resilience, curiosity, and continuous learning can open new paths in data and cloud, especially when re-entering the workforce after time away. Drawing from practical experiences, it will touch on the mindset shifts, support systems, and habits that make career reinvention possible in a fast-moving field. The conversation will also highlight the human side of leadership, showing how inclusive practices, empathy, and mentorship can help teams perform better and stay connected. It will examine how organizations can better support women and parents in tech, why visibility and representation matter, and what it takes to build confidence and belonging in technical environments.

About the Speaker: Xia He-Bleinagel is Head of Data and Cloud at NOW GmbH, a German federal organization advancing zero-emission mobility and sustainable technology. After taking time off to raise her children, she returned to the workforce and built a successful career in data and cloud engineering. Through continuous learning and community involvement, she rose to a leadership role where she now focuses on building inclusive teams, empowering women in tech, and leading with empathy.

Join our Slack: https://datatalks.club/slack.html
Event: From Full-Time Mom to Head of Data and Cloud
Beyond the Perimeter: Practical Patterns for Fine‑Grained Data Access
2025-10-27 · 01:32
Matt Topper – President @ UberEther, Tobias Macey – Host
Summary
In this episode of the Data Engineering Podcast, Matt Topper, president of UberEther, talks about the complex challenge of identity, credentials, and access control in modern data platforms. With the shift to composable ecosystems, integration burdens have exploded, fracturing governance and auditability across warehouses, lakes, files, vector stores, and streaming systems. Matt shares practical solutions, including propagating user identity via JWTs, externalizing policy with engines like OPA/Rego and Cedar, and using database proxies for native row/column security. He also explores catalog-driven governance, lineage-based label propagation, and OpenTDF for binding policies to data objects. The conversation covers machine-to-machine access, short-lived credentials, workload identity, and constraining access by interface choke points, as well as lessons from Zanzibar-style policy models and the human side of enforcement. Matt emphasizes the need for trust composition, unifying provenance, policy, and identity context, to answer questions about data access, usage, and intent across the entire data path.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management.
- Data teams everywhere face the same problem: they're forcing ML models, streaming data, and real-time processing through orchestration tools built for simple ETL. The result? Inflexible infrastructure that can't adapt to different workloads. That's why Cash App and Cisco rely on Prefect. Cash App's fraud detection team got what they needed: flexible compute options, isolated environments for custom packages, and seamless data exchange between workflows. Each model runs on the right infrastructure, whether that's high-memory machines or distributed compute. Orchestration is the foundation that determines whether your data team ships or struggles. ETL, ML model training, AI engineering, streaming: Prefect runs it all from ingestion to activation in one platform. Whoop and 1Password also trust Prefect for their data operations. If these industry leaders use Prefect for critical workflows, see what it can do for you at dataengineeringpodcast.com/prefect.
- Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Composable data infrastructure is great, until you spend all of your time gluing it together. Bruin is an open source framework, driven from the command line, that makes integration a breeze. Write Python and SQL to handle the business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. Bruin allows you to build end-to-end data workflows using AI, has connectors for hundreds of platforms, and helps data teams deliver faster. Teams that use Bruin need less engineering effort to process data and benefit from a fully integrated data platform. Go to dataengineeringpodcast.com/bruin today to get started. And for dbt Cloud customers, they'll give you $1,000 credit to migrate to Bruin Cloud.

Your host is Tobias Macey, and today I'm interviewing Matt Topper about the challenges of managing identity and access controls in the context of data systems.

Interview
- Introduction
- How did you get involved in the area of data management?
- The data ecosystem is a uniquely challenging space for creating and enforcing technical controls for identity and access control. What are the key considerations for designing a strategy for addressing those challenges?
- For data access, the off-the-shelf options are typically on either extreme of too coarse or too granular in their capabilities. What do you see as the major factors that contribute to that situation?
- Data governance policies are often used as the primary means of identifying what data can be accessed by whom, but translating that into enforceable constraints is often left as a secondary exercise. How can we as an industry make that a more manageable and sustainable practice?
- How can the audit trails that are generated by data systems be used to inform the technical controls for identity and access?
- How can the foundational technologies of our data platforms be improved to make identity and authz a more composable primitive?
- How does the introduction of streaming/real-time data ingest and delivery complicate the challenges of security controls?
- What are the most interesting, innovative, or unexpected ways that you have seen data teams address ICAM?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on ICAM?
- What are the aspects of ICAM in data systems that you are paying close attention to?
- What are your predictions for the industry adoption or enforcement of those controls?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- UberEther
- JWT == JSON Web Token
- OPA == Open Policy Agent
- Rego
- PingIdentity
- Okta
- Microsoft Entra
- SAML == Security Assertion Markup Language
- OAuth
- OIDC == OpenID Connect
- IDP == Identity Provider
- Kubernetes
- Istio
- Amazon Cedar policy language
- AWS IAM
- PII == Personally Identifiable Information
- CISO == Chief Information Security Officer
- OpenTDF
- OpenFGA
- Google Zanzibar
- Risk Management Framework
- Model Context Protocol
- Google Data Project
- TPM == Trusted Platform Module
- PKI == Public Key Infrastructure
- Passkeys
- DuckLake
- Podcast Episode
- Accumulo
- JDBC
- OpenBao
- HashiCorp Vault
- LDAP

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
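To make the access-control patterns from the episode concrete, here is a minimal, hypothetical sketch of the approach it describes: identity propagated as JWT-style claims, with column- and row-level decisions externalized into one policy layer (standing in for an engine like OPA/Rego or Cedar). All names, roles, and rules below are illustrative, not Matt's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Claims:
    """Identity context, as would be carried in a verified JWT."""
    sub: str
    roles: list[str] = field(default_factory=list)
    region: str = ""

def allowed_columns(claims: Claims, table: str) -> set[str]:
    """Column-level policy: only PII-cleared roles see PII columns."""
    base = {"order_id", "amount", "created_at"}
    if "pii_reader" in claims.roles:
        base |= {"customer_name", "email"}
    return base

def row_filter(claims: Claims, table: str) -> str:
    """Row-level policy: constrain rows to the caller's region.
    A database proxy would append this predicate to the user's query."""
    return f"region = '{claims.region}'"  # illustrative; real proxies bind parameters

claims = Claims(sub="analyst-42", roles=["analyst"], region="us-east")
print(allowed_columns(claims, "orders"))  # {'order_id', 'amount', 'created_at'}
print(row_filter(claims, "orders"))       # region = 'us-east'
```

The point of the pattern is that the warehouse, lake, and streaming layers all call the same two functions (or the same external policy engine), so governance decisions live in one auditable place instead of being re-implemented per system.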
Event: Data Engineering Podcast
Oct 16 - Visual AI in Agriculture (Day 2)
2025-10-16 · 16:00
Join us for day two of a series of virtual events to hear talks from experts on the latest developments at the intersection of visual AI and agriculture.

Date and Time: Oct 16 at 9 AM Pacific
Location: Virtual. Register for the Zoom.

Field-Ready Vision: Building the Agricultural Image Repository (AgIR) for Sustainable Farming
Data, not models, is the bottleneck in agricultural computer vision. This talk shares how Precision Sustainable Agriculture (PSA) is tackling that gap with the Agricultural Image Repository (AgIR): a cloud bank of high-resolution, labeled images spanning weeds (40+ species), cover crops, and cash crops across regions, seasons, and sensors. We'll show how AgIR blends two complementary streams: (1) semi-field, high-throughput data captured by BenchBot, our open-source, modular gantry that autonomously images plants and feeds a semi-automated annotation pipeline; and (2) true field images that capture real environmental variability. Together, they cut labeling cost, accelerate pretraining, and improve robustness in production. On top of AgIR, we've built a data-centric training stack: hierarchical augmentation groups, batch mixers, a stand-alone visualizer for rapid iteration, and a reproducible PyTorch Lightning pipeline. We'll cover practical lessons from segmentation (crop/weed/residue/water/soil), handling domain shift between semi-field and field scenes, and designing metadata schemas that actually pay off at model time.

About the Speaker: Sina Baghbanijam is a Ph.D. candidate in Electrical and Computer Engineering at North Carolina State University, where his research centers on generative AI, computer vision, and machine learning. His work bridges advanced AI methods with real-world applications across agriculture, medicine, and the social sciences, with a focus on large-scale image segmentation, bias-aware modeling, and data-driven analysis. In addition to his academic research, Sina is currently serving as an Agricultural Image Repository Software Engineering Intern with Precision Sustainable Agriculture, where he develops scalable pipelines and metadata systems to support AI-driven analysis of crop, soil, and field imagery.

Beyond Manual Measurements: How AI is Accelerating Plant Breeding
Traditional plant breeding relies on manual phenotypic measurements that are time-intensive, subjective, and create bottlenecks in variety development. This presentation demonstrates how computer vision and artificial intelligence are revolutionizing plant selection processes by automating trait extraction from simple photographs. Our cloud-based platform transforms images captured with smartphones, drones, or laboratory cameras into instant, quantitative phenotypic data, including fruit count, size measurements, and weight estimations. The system integrates phenotypic data with genotypic, pedigree, and environmental information in a unified database, enabling real-time analytics and decision support through intuitive dashboards. Unlike expensive hardware-dependent solutions, our software-focused approach works with existing camera equipment and standard breeding workflows, making advanced phenotyping accessible to organizations of all sizes.

About the Speaker: Dr. Sharon Inch is a botanist with a PhD in Plant Pathology and over 20 years of experience in horticulture and agricultural research. Throughout her career, she has witnessed firsthand the inefficiencies of traditional breeding methods, inspiring her to found AgriVision Analytics. As CEO, she leads the development of cloud-based computer vision platforms that transform plant breeding workflows through AI-powered phenotyping. Her work focuses on accelerating variety development and improving breeding decision-making through automated trait extraction and data integration. Dr. Inch is passionate about bridging the gap between advanced technology and practical agricultural applications to address global food security challenges.

AI-Assisted Sweetpotato Yield Estimation Pipelines Using Optical Sensor Data
In this presentation, we will introduce the sensor systems and AI-powered analysis algorithms used in high-throughput sweetpotato post-harvest packing pipelines, developed by the Optical Sensing Lab at NC State University. By collecting image data from sweetpotato fields and packing lines, we aim to quantitatively optimize grading and yield estimation, as well as planning for storage and inventory-order matching. We built two customized sensor devices to collect data: one from the top bins when receiving sweetpotatoes from farmers, and one from the eliminator table before the grading and packing process. We also developed a compact instance segmentation pipeline that can run on smartphones for rapid in-field yield estimation under resource limitations. To minimize data privacy concerns and Internet connectivity issues, we keep all the analysis pipelines on the edge, which results in a design tradeoff between resource availability and environmental constraints. We will also introduce sensor building with these considerations. The analysis results and real-time production information are then integrated into an interactive online dashboard, which stakeholders can leverage for inventory-order management and operational decisions.

About the Speaker: Yifan Wu is a Ph.D. candidate at NC State University working in the Optical Sensing Lab (OSL), supervised by Dr. Michael Kudenov. His research focuses on developing sensor systems and machine learning platforms for business intelligence applications.

An End-to-End AgTech Use Case in FiftyOne
The agricultural sector is increasingly turning to computer vision to tackle challenges in crop monitoring, pest detection, and yield optimization. Yet developing robust models in this space often requires careful data exploration, curation, and evaluation, steps that are just as critical as model training itself. In this talk, we will walk through an end-to-end AgTech use case using FiftyOne, an open-source tool for dataset visualization, curation, and model evaluation. Starting with a pest detection dataset, we will explore the samples and annotations to understand dataset quality and potential pitfalls. From there, we will curate the dataset by filtering, tagging, and identifying edge cases that could impact downstream performance. Next, we'll train a computer vision model to detect different pest species and demonstrate how FiftyOne can be used to rigorously evaluate the results. Along the way, we'll highlight how dataset-centric workflows can accelerate experimentation, improve model reliability, and surface actionable insights specific to agricultural applications.

By the end of the session, attendees will gain a practical understanding of how to:
- Explore and diagnose real-world agricultural datasets
- Curate training data for improved performance
- Train and evaluate pest detection models
- Use FiftyOne to close the loop between data and models

This talk will be valuable for anyone working at the intersection of agriculture and computer vision, whether you're building production models or just beginning to explore AgTech use cases.

About the Speaker: Prerna Dhareshwar is a Machine Learning Engineer at Voxel51, where she helps customers leverage FiftyOne to accelerate dataset curation, model development, and evaluation in real-world AI workflows. She brings extensive experience building and deploying computer vision and machine learning systems across industries. Prior to Voxel51, Prerna was a Senior Machine Learning Engineer at Instrumental Inc., where she developed models for defect detection in manufacturing, and a Machine Learning Software Engineer at Pure Storage, focusing on predictive analytics and automation.
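As a hedged sketch of the curate-then-evaluate loop the FiftyOne talk describes, the snippet below uses FiftyOne's bundled "quickstart" dataset (which ships with ground-truth labels and model predictions) in place of a real pest-detection dataset; the confidence threshold and eval key are arbitrary illustrative choices.

```python
import fiftyone as fo
import fiftyone.zoo as foz
from fiftyone import ViewField as F

# Stand-in dataset with "ground_truth" and "predictions" detection fields
dataset = foz.load_zoo_dataset("quickstart")

# Curate: keep only reasonably confident predictions for review
view = dataset.filter_labels("predictions", F("confidence") > 0.5)

# Evaluate: COCO-style detection evaluation against ground truth
results = view.evaluate_detections(
    "predictions", gt_field="ground_truth", eval_key="eval"
)
results.print_report()

# Explore: open the App to inspect the mistakes the eval surfaced
session = fo.launch_app(view)
```

The same three steps (curate a view, run an evaluation, inspect failures in the App) generalize to an actual pest-detection dataset once it is loaded into FiftyOne.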
Event: Oct 16 - Visual AI in Agriculture (Day 2)
|
Oct 16 - Visual AI in Agriculture (Day 2)
2025-10-16 · 16:00
Join us for day two of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture. Date and Time Oct 16 at 9 AM Pacific Location Virtual. Register for the Zoom. Field-Ready Vision: Building the Agricultural Image Repository (AgIR) for Sustainable Farming Data—not models—is the bottleneck in agricultural computer vision. This talk shares how Precision Sustainable Agriculture (PSA) is tackling that gap with the Agricultural Image Repository (AgIR): a cloud bank of high-resolution, labeled images spanning weeds (40+ species), cover crops, and cash crops across regions, seasons, and sensors. We’ll show how AgIR blends two complementary streams: (1) semi-field, high-throughput data captured by BenchBot, our open-source, modular gantry that autonomously images plants and feeds a semi-automated annotation pipeline; (2) true field images that capture real environmental variability. Together, they cut labeling cost, accelerate pretraining, and improve robustness in production. On top of AgIR, we’ve built a data-centric training stack: hierarchical augmentation groups, batch mixers, a stand-alone visualizer for rapid iteration, and a reproducible PyTorch Lightning pipeline. We’ll cover practical lessons from segmentation (crop/weed/residue/water/soil), handling domain shift between semi-field and field scenes, and designing metadata schemas that actually pay off at model time. About the Speaker Sina Baghbanijam is a Ph.D. candidate in Electrical and Computer Engineering at North Carolina State University, where his research centers on generative AI, computer vision, and machine learning. His work bridges advanced AI methods with real-world applications across agriculture, medicine, and the social sciences, with a focus on large-scale image segmentation, bias-aware modeling, and data-driven analysis. In addition to his academic research, Sina is currently serving as an Agricultural Image Repository Software Engineering Intern with Precision Sustainable Agriculture, where he develops scalable pipelines and metadata systems to support AI-driven analysis of crop, soil, and field imagery. Beyond Manual Measurements: How AI is Accelerating Plant Breeding Traditional plant breeding relies on manual phenotypic measurements that are time-intensive, subjective, and create bottlenecks in variety development. This presentation demonstrates how computer vision and artificial intelligence are revolutionizing plant selection processes by automating trait extraction from simple photographs. Our cloud-based platform transforms images captured with smartphones, drones, or laboratory cameras into instant, quantitative phenotypic data including fruit count, size measurements, and weight estimations. The system integrates phenotypic data with genotypic, pedigree, and environmental information in a unified database, enabling real-time analytics and decision support through intuitive dashboards. Unlike expensive hardware-dependent solutions, our software-focused approach works with existing camera equipment and standard breeding workflows, making advanced phenotyping accessible to organizations of all sizes. About the Speaker Dr. Sharon Inch is a botanist with a PhD in Plant Pathology and over 20 years of experience in horticulture and agricultural research. Throughout her career, she has witnessed firsthand the inefficiencies of traditional breeding methods, inspiring her to found AgriVision Analytics. 
As CEO, she leads the development of cloud-based computer vision platforms that transform plant breeding workflows through AI-powered phenotyping. Her work focuses on accelerating variety development and improving breeding decision-making through automated trait extraction and data integration. Dr. Sharon Inch is passionate about bridging the gap between advanced technology and practical agricultural applications to address global food security challenges. AI-assisted sweetpotato yield estimation pipelines using optical sensor data In this presentation, we will introduce the sensor systems and AI-powered analysis algorithms used in high-throughput sweetpotato post-harvest packing pipelines (developed by the Optical Sensing Lab at NC State University). By collecting image data from sweetpotato fields and packing lines respectively, we aim to quantitatively optimize the grading and yield estimation process, and the planning on storage and inventory-order matching. We built two customized sensor devices to collect data respectively from the top bins when receiving sweetpotatoes from farmers, and eliminator table before grading and packing process. We also developed a compact instance segmentation pipeline that can run on smart phones for rapid yield estimation in-field with resource limitations. To minimize data privacy concerns and Internet connectivity issues, we try to keep all the analysis pipelines on the edge, which results in a design tradeoff between resource availability and environmental constraints. We will also introduce sensor building with these considerations. The analysis results and real time production information are then integrated into an interactive online dashboard, where stakeholders can leverage to help with inventory-order management and making operational decisions. About the Speaker Yifan Wu is a current Ph.D candidate at NC State University working in the Optical Sensing Lab (OSL) supervised by Dr. Michael Kudenov. Research focuses on developing sensor systems and machine learning platforms for business intelligence applications. An End-to-End AgTech Use Case in FiftyOne The agricultural sector is increasingly turning to computer vision to tackle challenges in crop monitoring, pest detection, and yield optimization. Yet, developing robust models in this space often requires careful data exploration, curation, and evaluation—steps that are just as critical as model training itself. In this talk, we will walk through an end-to-end AgTech use case using FiftyOne, an open-source tool for dataset visualization, curation, and model evaluation. Starting with a pest detection dataset, we will explore the samples and annotations to understand dataset quality and potential pitfalls. From there, we will curate the dataset by filtering, tagging, and identifying edge cases that could impact downstream performance. Next, we’ll train a computer vision model to detect different pest species and demonstrate how FiftyOne can be used to rigorously evaluate the results. Along the way, we’ll highlight how dataset-centric workflows can accelerate experimentation, improve model reliability, and surface actionable insights specific to agricultural applications. 
By the end of the session, attendees will gain a practical understanding of how to: - Explore and diagnose real-world agricultural datasets - Curate training data for improved performance - Train and evaluate pest detection models - Use FiftyOne to close the loop between data and models This talk will be valuable for anyone working at the intersection of agriculture and computer vision, whether you’re building production models or just beginning to explore AgTech use cases. About the Speaker Prerna Dhareshwar is a Machine Learning Engineer at Voxel51, where she helps customers leverage FiftyOne to accelerate dataset curation, model development, and evaluation in real-world AI workflows. She brings extensive experience building and deploying computer vision and machine learning systems across industries. Prior to Voxel51, Prerna was a Senior Machine Learning Engineer at Instrumental Inc., where she developed models for defect detection in manufacturing, and a Machine Learning Software Engineer at Pure Storage, focusing on predictive analytics and automation. |
Oct 16 - Visual AI in Agriculture (Day 2)
|
|
Oct 16 - Visual AI in Agriculture (Day 2)
2025-10-16 · 16:00
Join us for day two of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture. Date and Time Oct 16 at 9 AM Pacific Location Virtual. Register for the Zoom. Field-Ready Vision: Building the Agricultural Image Repository (AgIR) for Sustainable Farming Data—not models—is the bottleneck in agricultural computer vision. This talk shares how Precision Sustainable Agriculture (PSA) is tackling that gap with the Agricultural Image Repository (AgIR): a cloud bank of high-resolution, labeled images spanning weeds (40+ species), cover crops, and cash crops across regions, seasons, and sensors. We’ll show how AgIR blends two complementary streams: (1) semi-field, high-throughput data captured by BenchBot, our open-source, modular gantry that autonomously images plants and feeds a semi-automated annotation pipeline; (2) true field images that capture real environmental variability. Together, they cut labeling cost, accelerate pretraining, and improve robustness in production. On top of AgIR, we’ve built a data-centric training stack: hierarchical augmentation groups, batch mixers, a stand-alone visualizer for rapid iteration, and a reproducible PyTorch Lightning pipeline. We’ll cover practical lessons from segmentation (crop/weed/residue/water/soil), handling domain shift between semi-field and field scenes, and designing metadata schemas that actually pay off at model time. About the Speaker Sina Baghbanijam is a Ph.D. candidate in Electrical and Computer Engineering at North Carolina State University, where his research centers on generative AI, computer vision, and machine learning. His work bridges advanced AI methods with real-world applications across agriculture, medicine, and the social sciences, with a focus on large-scale image segmentation, bias-aware modeling, and data-driven analysis. In addition to his academic research, Sina is currently serving as an Agricultural Image Repository Software Engineering Intern with Precision Sustainable Agriculture, where he develops scalable pipelines and metadata systems to support AI-driven analysis of crop, soil, and field imagery. Beyond Manual Measurements: How AI is Accelerating Plant Breeding Traditional plant breeding relies on manual phenotypic measurements that are time-intensive, subjective, and create bottlenecks in variety development. This presentation demonstrates how computer vision and artificial intelligence are revolutionizing plant selection processes by automating trait extraction from simple photographs. Our cloud-based platform transforms images captured with smartphones, drones, or laboratory cameras into instant, quantitative phenotypic data including fruit count, size measurements, and weight estimations. The system integrates phenotypic data with genotypic, pedigree, and environmental information in a unified database, enabling real-time analytics and decision support through intuitive dashboards. Unlike expensive hardware-dependent solutions, our software-focused approach works with existing camera equipment and standard breeding workflows, making advanced phenotyping accessible to organizations of all sizes. About the Speaker Dr. Sharon Inch is a botanist with a PhD in Plant Pathology and over 20 years of experience in horticulture and agricultural research. Throughout her career, she has witnessed firsthand the inefficiencies of traditional breeding methods, inspiring her to found AgriVision Analytics. 
As CEO, she leads the development of cloud-based computer vision platforms that transform plant breeding workflows through AI-powered phenotyping. Her work focuses on accelerating variety development and improving breeding decision-making through automated trait extraction and data integration. Dr. Sharon Inch is passionate about bridging the gap between advanced technology and practical agricultural applications to address global food security challenges. AI-assisted sweetpotato yield estimation pipelines using optical sensor data In this presentation, we will introduce the sensor systems and AI-powered analysis algorithms used in high-throughput sweetpotato post-harvest packing pipelines (developed by the Optical Sensing Lab at NC State University). By collecting image data from sweetpotato fields and packing lines respectively, we aim to quantitatively optimize the grading and yield estimation process, and the planning on storage and inventory-order matching. We built two customized sensor devices to collect data respectively from the top bins when receiving sweetpotatoes from farmers, and eliminator table before grading and packing process. We also developed a compact instance segmentation pipeline that can run on smart phones for rapid yield estimation in-field with resource limitations. To minimize data privacy concerns and Internet connectivity issues, we try to keep all the analysis pipelines on the edge, which results in a design tradeoff between resource availability and environmental constraints. We will also introduce sensor building with these considerations. The analysis results and real time production information are then integrated into an interactive online dashboard, where stakeholders can leverage to help with inventory-order management and making operational decisions. About the Speaker Yifan Wu is a current Ph.D candidate at NC State University working in the Optical Sensing Lab (OSL) supervised by Dr. Michael Kudenov. Research focuses on developing sensor systems and machine learning platforms for business intelligence applications. An End-to-End AgTech Use Case in FiftyOne The agricultural sector is increasingly turning to computer vision to tackle challenges in crop monitoring, pest detection, and yield optimization. Yet, developing robust models in this space often requires careful data exploration, curation, and evaluation—steps that are just as critical as model training itself. In this talk, we will walk through an end-to-end AgTech use case using FiftyOne, an open-source tool for dataset visualization, curation, and model evaluation. Starting with a pest detection dataset, we will explore the samples and annotations to understand dataset quality and potential pitfalls. From there, we will curate the dataset by filtering, tagging, and identifying edge cases that could impact downstream performance. Next, we’ll train a computer vision model to detect different pest species and demonstrate how FiftyOne can be used to rigorously evaluate the results. Along the way, we’ll highlight how dataset-centric workflows can accelerate experimentation, improve model reliability, and surface actionable insights specific to agricultural applications. 
By the end of the session, attendees will gain a practical understanding of how to: - Explore and diagnose real-world agricultural datasets - Curate training data for improved performance - Train and evaluate pest detection models - Use FiftyOne to close the loop between data and models This talk will be valuable for anyone working at the intersection of agriculture and computer vision, whether you’re building production models or just beginning to explore AgTech use cases. About the Speaker Prerna Dhareshwar is a Machine Learning Engineer at Voxel51, where she helps customers leverage FiftyOne to accelerate dataset curation, model development, and evaluation in real-world AI workflows. She brings extensive experience building and deploying computer vision and machine learning systems across industries. Prior to Voxel51, Prerna was a Senior Machine Learning Engineer at Instrumental Inc., where she developed models for defect detection in manufacturing, and a Machine Learning Software Engineer at Pure Storage, focusing on predictive analytics and automation. |
Oct 16 - Visual AI in Agriculture (Day 2)
|
|
Oct 16 - Visual AI in Agriculture (Day 2)
2025-10-16 · 16:00
Join us for day two of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture. Date and Time Oct 16 at 9 AM Pacific Location Virtual. Register for the Zoom. Field-Ready Vision: Building the Agricultural Image Repository (AgIR) for Sustainable Farming Data—not models—is the bottleneck in agricultural computer vision. This talk shares how Precision Sustainable Agriculture (PSA) is tackling that gap with the Agricultural Image Repository (AgIR): a cloud bank of high-resolution, labeled images spanning weeds (40+ species), cover crops, and cash crops across regions, seasons, and sensors. We’ll show how AgIR blends two complementary streams: (1) semi-field, high-throughput data captured by BenchBot, our open-source, modular gantry that autonomously images plants and feeds a semi-automated annotation pipeline; (2) true field images that capture real environmental variability. Together, they cut labeling cost, accelerate pretraining, and improve robustness in production. On top of AgIR, we’ve built a data-centric training stack: hierarchical augmentation groups, batch mixers, a stand-alone visualizer for rapid iteration, and a reproducible PyTorch Lightning pipeline. We’ll cover practical lessons from segmentation (crop/weed/residue/water/soil), handling domain shift between semi-field and field scenes, and designing metadata schemas that actually pay off at model time. About the Speaker Sina Baghbanijam is a Ph.D. candidate in Electrical and Computer Engineering at North Carolina State University, where his research centers on generative AI, computer vision, and machine learning. His work bridges advanced AI methods with real-world applications across agriculture, medicine, and the social sciences, with a focus on large-scale image segmentation, bias-aware modeling, and data-driven analysis. In addition to his academic research, Sina is currently serving as an Agricultural Image Repository Software Engineering Intern with Precision Sustainable Agriculture, where he develops scalable pipelines and metadata systems to support AI-driven analysis of crop, soil, and field imagery. Beyond Manual Measurements: How AI is Accelerating Plant Breeding Traditional plant breeding relies on manual phenotypic measurements that are time-intensive, subjective, and create bottlenecks in variety development. This presentation demonstrates how computer vision and artificial intelligence are revolutionizing plant selection processes by automating trait extraction from simple photographs. Our cloud-based platform transforms images captured with smartphones, drones, or laboratory cameras into instant, quantitative phenotypic data including fruit count, size measurements, and weight estimations. The system integrates phenotypic data with genotypic, pedigree, and environmental information in a unified database, enabling real-time analytics and decision support through intuitive dashboards. Unlike expensive hardware-dependent solutions, our software-focused approach works with existing camera equipment and standard breeding workflows, making advanced phenotyping accessible to organizations of all sizes. About the Speaker Dr. Sharon Inch is a botanist with a PhD in Plant Pathology and over 20 years of experience in horticulture and agricultural research. Throughout her career, she has witnessed firsthand the inefficiencies of traditional breeding methods, inspiring her to found AgriVision Analytics. 
As CEO, she leads the development of cloud-based computer vision platforms that transform plant breeding workflows through AI-powered phenotyping. Her work focuses on accelerating variety development and improving breeding decision-making through automated trait extraction and data integration. Dr. Sharon Inch is passionate about bridging the gap between advanced technology and practical agricultural applications to address global food security challenges. AI-assisted sweetpotato yield estimation pipelines using optical sensor data In this presentation, we will introduce the sensor systems and AI-powered analysis algorithms used in high-throughput sweetpotato post-harvest packing pipelines (developed by the Optical Sensing Lab at NC State University). By collecting image data from sweetpotato fields and packing lines respectively, we aim to quantitatively optimize the grading and yield estimation process, and the planning on storage and inventory-order matching. We built two customized sensor devices to collect data respectively from the top bins when receiving sweetpotatoes from farmers, and eliminator table before grading and packing process. We also developed a compact instance segmentation pipeline that can run on smart phones for rapid yield estimation in-field with resource limitations. To minimize data privacy concerns and Internet connectivity issues, we try to keep all the analysis pipelines on the edge, which results in a design tradeoff between resource availability and environmental constraints. We will also introduce sensor building with these considerations. The analysis results and real time production information are then integrated into an interactive online dashboard, where stakeholders can leverage to help with inventory-order management and making operational decisions. About the Speaker Yifan Wu is a current Ph.D candidate at NC State University working in the Optical Sensing Lab (OSL) supervised by Dr. Michael Kudenov. Research focuses on developing sensor systems and machine learning platforms for business intelligence applications. An End-to-End AgTech Use Case in FiftyOne The agricultural sector is increasingly turning to computer vision to tackle challenges in crop monitoring, pest detection, and yield optimization. Yet, developing robust models in this space often requires careful data exploration, curation, and evaluation—steps that are just as critical as model training itself. In this talk, we will walk through an end-to-end AgTech use case using FiftyOne, an open-source tool for dataset visualization, curation, and model evaluation. Starting with a pest detection dataset, we will explore the samples and annotations to understand dataset quality and potential pitfalls. From there, we will curate the dataset by filtering, tagging, and identifying edge cases that could impact downstream performance. Next, we’ll train a computer vision model to detect different pest species and demonstrate how FiftyOne can be used to rigorously evaluate the results. Along the way, we’ll highlight how dataset-centric workflows can accelerate experimentation, improve model reliability, and surface actionable insights specific to agricultural applications. 
By the end of the session, attendees will gain a practical understanding of how to: - Explore and diagnose real-world agricultural datasets - Curate training data for improved performance - Train and evaluate pest detection models - Use FiftyOne to close the loop between data and models This talk will be valuable for anyone working at the intersection of agriculture and computer vision, whether you’re building production models or just beginning to explore AgTech use cases. About the Speaker Prerna Dhareshwar is a Machine Learning Engineer at Voxel51, where she helps customers leverage FiftyOne to accelerate dataset curation, model development, and evaluation in real-world AI workflows. She brings extensive experience building and deploying computer vision and machine learning systems across industries. Prior to Voxel51, Prerna was a Senior Machine Learning Engineer at Instrumental Inc., where she developed models for defect detection in manufacturing, and a Machine Learning Software Engineer at Pure Storage, focusing on predictive analytics and automation. |
Oct 16 - Visual AI in Agriculture (Day 2)
|
|
Oct 16 - Visual AI in Agriculture (Day 2)
2025-10-16 · 16:00
Join us for day two of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture. Date and Time Oct 16 at 9 AM Pacific Location Virtual. Register for the Zoom. Field-Ready Vision: Building the Agricultural Image Repository (AgIR) for Sustainable Farming Data—not models—is the bottleneck in agricultural computer vision. This talk shares how Precision Sustainable Agriculture (PSA) is tackling that gap with the Agricultural Image Repository (AgIR): a cloud bank of high-resolution, labeled images spanning weeds (40+ species), cover crops, and cash crops across regions, seasons, and sensors. We’ll show how AgIR blends two complementary streams: (1) semi-field, high-throughput data captured by BenchBot, our open-source, modular gantry that autonomously images plants and feeds a semi-automated annotation pipeline; (2) true field images that capture real environmental variability. Together, they cut labeling cost, accelerate pretraining, and improve robustness in production. On top of AgIR, we’ve built a data-centric training stack: hierarchical augmentation groups, batch mixers, a stand-alone visualizer for rapid iteration, and a reproducible PyTorch Lightning pipeline. We’ll cover practical lessons from segmentation (crop/weed/residue/water/soil), handling domain shift between semi-field and field scenes, and designing metadata schemas that actually pay off at model time. About the Speaker Sina Baghbanijam is a Ph.D. candidate in Electrical and Computer Engineering at North Carolina State University, where his research centers on generative AI, computer vision, and machine learning. His work bridges advanced AI methods with real-world applications across agriculture, medicine, and the social sciences, with a focus on large-scale image segmentation, bias-aware modeling, and data-driven analysis. In addition to his academic research, Sina is currently serving as an Agricultural Image Repository Software Engineering Intern with Precision Sustainable Agriculture, where he develops scalable pipelines and metadata systems to support AI-driven analysis of crop, soil, and field imagery. Beyond Manual Measurements: How AI is Accelerating Plant Breeding Traditional plant breeding relies on manual phenotypic measurements that are time-intensive, subjective, and create bottlenecks in variety development. This presentation demonstrates how computer vision and artificial intelligence are revolutionizing plant selection processes by automating trait extraction from simple photographs. Our cloud-based platform transforms images captured with smartphones, drones, or laboratory cameras into instant, quantitative phenotypic data including fruit count, size measurements, and weight estimations. The system integrates phenotypic data with genotypic, pedigree, and environmental information in a unified database, enabling real-time analytics and decision support through intuitive dashboards. Unlike expensive hardware-dependent solutions, our software-focused approach works with existing camera equipment and standard breeding workflows, making advanced phenotyping accessible to organizations of all sizes. About the Speaker Dr. Sharon Inch is a botanist with a PhD in Plant Pathology and over 20 years of experience in horticulture and agricultural research. Throughout her career, she has witnessed firsthand the inefficiencies of traditional breeding methods, inspiring her to found AgriVision Analytics. 
As CEO, she leads the development of cloud-based computer vision platforms that transform plant breeding workflows through AI-powered phenotyping. Her work focuses on accelerating variety development and improving breeding decision-making through automated trait extraction and data integration. Dr. Sharon Inch is passionate about bridging the gap between advanced technology and practical agricultural applications to address global food security challenges. AI-assisted sweetpotato yield estimation pipelines using optical sensor data In this presentation, we will introduce the sensor systems and AI-powered analysis algorithms used in high-throughput sweetpotato post-harvest packing pipelines (developed by the Optical Sensing Lab at NC State University). By collecting image data from sweetpotato fields and packing lines respectively, we aim to quantitatively optimize the grading and yield estimation process, and the planning on storage and inventory-order matching. We built two customized sensor devices to collect data respectively from the top bins when receiving sweetpotatoes from farmers, and eliminator table before grading and packing process. We also developed a compact instance segmentation pipeline that can run on smart phones for rapid yield estimation in-field with resource limitations. To minimize data privacy concerns and Internet connectivity issues, we try to keep all the analysis pipelines on the edge, which results in a design tradeoff between resource availability and environmental constraints. We will also introduce sensor building with these considerations. The analysis results and real time production information are then integrated into an interactive online dashboard, where stakeholders can leverage to help with inventory-order management and making operational decisions. About the Speaker Yifan Wu is a current Ph.D candidate at NC State University working in the Optical Sensing Lab (OSL) supervised by Dr. Michael Kudenov. Research focuses on developing sensor systems and machine learning platforms for business intelligence applications. An End-to-End AgTech Use Case in FiftyOne The agricultural sector is increasingly turning to computer vision to tackle challenges in crop monitoring, pest detection, and yield optimization. Yet, developing robust models in this space often requires careful data exploration, curation, and evaluation—steps that are just as critical as model training itself. In this talk, we will walk through an end-to-end AgTech use case using FiftyOne, an open-source tool for dataset visualization, curation, and model evaluation. Starting with a pest detection dataset, we will explore the samples and annotations to understand dataset quality and potential pitfalls. From there, we will curate the dataset by filtering, tagging, and identifying edge cases that could impact downstream performance. Next, we’ll train a computer vision model to detect different pest species and demonstrate how FiftyOne can be used to rigorously evaluate the results. Along the way, we’ll highlight how dataset-centric workflows can accelerate experimentation, improve model reliability, and surface actionable insights specific to agricultural applications. 
By the end of the session, attendees will gain a practical understanding of how to: - Explore and diagnose real-world agricultural datasets - Curate training data for improved performance - Train and evaluate pest detection models - Use FiftyOne to close the loop between data and models This talk will be valuable for anyone working at the intersection of agriculture and computer vision, whether you’re building production models or just beginning to explore AgTech use cases. About the Speaker Prerna Dhareshwar is a Machine Learning Engineer at Voxel51, where she helps customers leverage FiftyOne to accelerate dataset curation, model development, and evaluation in real-world AI workflows. She brings extensive experience building and deploying computer vision and machine learning systems across industries. Prior to Voxel51, Prerna was a Senior Machine Learning Engineer at Instrumental Inc., where she developed models for defect detection in manufacturing, and a Machine Learning Software Engineer at Pure Storage, focusing on predictive analytics and automation. |
Oct 16 - Visual AI in Agriculture (Day 2)
|
|
Oct 16 - Visual AI in Agriculture (Day 2)
2025-10-16 · 16:00
Join us for day two of a series of virtual events to hear talks from experts on the latest developments at the intersection of Visual AI in Agriculture. Date and Time Oct 16 at 9 AM Pacific Location Virtual. Register for the Zoom. Field-Ready Vision: Building the Agricultural Image Repository (AgIR) for Sustainable Farming Data—not models—is the bottleneck in agricultural computer vision. This talk shares how Precision Sustainable Agriculture (PSA) is tackling that gap with the Agricultural Image Repository (AgIR): a cloud bank of high-resolution, labeled images spanning weeds (40+ species), cover crops, and cash crops across regions, seasons, and sensors. We’ll show how AgIR blends two complementary streams: (1) semi-field, high-throughput data captured by BenchBot, our open-source, modular gantry that autonomously images plants and feeds a semi-automated annotation pipeline; (2) true field images that capture real environmental variability. Together, they cut labeling cost, accelerate pretraining, and improve robustness in production. On top of AgIR, we’ve built a data-centric training stack: hierarchical augmentation groups, batch mixers, a stand-alone visualizer for rapid iteration, and a reproducible PyTorch Lightning pipeline. We’ll cover practical lessons from segmentation (crop/weed/residue/water/soil), handling domain shift between semi-field and field scenes, and designing metadata schemas that actually pay off at model time. About the Speaker Sina Baghbanijam is a Ph.D. candidate in Electrical and Computer Engineering at North Carolina State University, where his research centers on generative AI, computer vision, and machine learning. His work bridges advanced AI methods with real-world applications across agriculture, medicine, and the social sciences, with a focus on large-scale image segmentation, bias-aware modeling, and data-driven analysis. In addition to his academic research, Sina is currently serving as an Agricultural Image Repository Software Engineering Intern with Precision Sustainable Agriculture, where he develops scalable pipelines and metadata systems to support AI-driven analysis of crop, soil, and field imagery. Beyond Manual Measurements: How AI is Accelerating Plant Breeding Traditional plant breeding relies on manual phenotypic measurements that are time-intensive, subjective, and create bottlenecks in variety development. This presentation demonstrates how computer vision and artificial intelligence are revolutionizing plant selection processes by automating trait extraction from simple photographs. Our cloud-based platform transforms images captured with smartphones, drones, or laboratory cameras into instant, quantitative phenotypic data including fruit count, size measurements, and weight estimations. The system integrates phenotypic data with genotypic, pedigree, and environmental information in a unified database, enabling real-time analytics and decision support through intuitive dashboards. Unlike expensive hardware-dependent solutions, our software-focused approach works with existing camera equipment and standard breeding workflows, making advanced phenotyping accessible to organizations of all sizes. About the Speaker Dr. Sharon Inch is a botanist with a PhD in Plant Pathology and over 20 years of experience in horticulture and agricultural research. Throughout her career, she has witnessed firsthand the inefficiencies of traditional breeding methods, inspiring her to found AgriVision Analytics. 
As CEO, she leads the development of cloud-based computer vision platforms that transform plant breeding workflows through AI-powered phenotyping. Her work focuses on accelerating variety development and improving breeding decision-making through automated trait extraction and data integration. Dr. Sharon Inch is passionate about bridging the gap between advanced technology and practical agricultural applications to address global food security challenges. AI-assisted sweetpotato yield estimation pipelines using optical sensor data In this presentation, we will introduce the sensor systems and AI-powered analysis algorithms used in high-throughput sweetpotato post-harvest packing pipelines (developed by the Optical Sensing Lab at NC State University). By collecting image data from sweetpotato fields and packing lines respectively, we aim to quantitatively optimize the grading and yield estimation process, and the planning on storage and inventory-order matching. We built two customized sensor devices to collect data respectively from the top bins when receiving sweetpotatoes from farmers, and eliminator table before grading and packing process. We also developed a compact instance segmentation pipeline that can run on smart phones for rapid yield estimation in-field with resource limitations. To minimize data privacy concerns and Internet connectivity issues, we try to keep all the analysis pipelines on the edge, which results in a design tradeoff between resource availability and environmental constraints. We will also introduce sensor building with these considerations. The analysis results and real time production information are then integrated into an interactive online dashboard, where stakeholders can leverage to help with inventory-order management and making operational decisions. About the Speaker Yifan Wu is a current Ph.D candidate at NC State University working in the Optical Sensing Lab (OSL) supervised by Dr. Michael Kudenov. Research focuses on developing sensor systems and machine learning platforms for business intelligence applications. An End-to-End AgTech Use Case in FiftyOne The agricultural sector is increasingly turning to computer vision to tackle challenges in crop monitoring, pest detection, and yield optimization. Yet, developing robust models in this space often requires careful data exploration, curation, and evaluation—steps that are just as critical as model training itself. In this talk, we will walk through an end-to-end AgTech use case using FiftyOne, an open-source tool for dataset visualization, curation, and model evaluation. Starting with a pest detection dataset, we will explore the samples and annotations to understand dataset quality and potential pitfalls. From there, we will curate the dataset by filtering, tagging, and identifying edge cases that could impact downstream performance. Next, we’ll train a computer vision model to detect different pest species and demonstrate how FiftyOne can be used to rigorously evaluate the results. Along the way, we’ll highlight how dataset-centric workflows can accelerate experimentation, improve model reliability, and surface actionable insights specific to agricultural applications. 
By the end of the session, attendees will gain a practical understanding of how to:
- Explore and diagnose real-world agricultural datasets
- Curate training data for improved performance
- Train and evaluate pest detection models
- Use FiftyOne to close the loop between data and models
This talk will be valuable for anyone working at the intersection of agriculture and computer vision, whether you’re building production models or just beginning to explore AgTech use cases. About the Speaker Prerna Dhareshwar is a Machine Learning Engineer at Voxel51, where she helps customers leverage FiftyOne to accelerate dataset curation, model development, and evaluation in real-world AI workflows. She brings extensive experience building and deploying computer vision and machine learning systems across industries. Prior to Voxel51, Prerna was a Senior Machine Learning Engineer at Instrumental Inc., where she developed models for defect detection in manufacturing, and a Machine Learning Software Engineer at Pure Storage, focusing on predictive analytics and automation. |
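For readers who want to try the loop the final talk above describes, here is a minimal FiftyOne sketch of the explore–curate–evaluate workflow, assuming a detection dataset whose samples carry ground_truth and predictions label fields (the dataset name and confidence threshold are illustrative):

```python
import fiftyone as fo
from fiftyone import ViewField as F

# Load an existing detection dataset (name is illustrative)
dataset = fo.load_dataset("pest-detection")

# Explore: open the FiftyOne App to browse samples and annotations
session = fo.launch_app(dataset)

# Curate: tag low-confidence predictions as edge cases for review
edge_cases = dataset.filter_labels("predictions", F("confidence") < 0.3)
edge_cases.tag_samples("review")

# Evaluate: compare predictions against ground truth (COCO-style)
results = dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
    compute_mAP=True,
)
print(results.mAP())
results.print_report()
```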
Oct 16 - Visual AI in Agriculture (Day 2)
|
|
July 16 - Paris AI, ML and Computer Vision Meetup
2025-07-16 · 15:30
Hear talks from experts on cutting-edge topics in AI, ML, and computer vision! Register for the event to reserve your seat. When and Where July 16 5:30-8:30 PM Paris Marriott Opera Ambassador Vendôme Meeting Room 16 Bd Haussmann Building and working with Small Language Models This session focuses on practical techniques for using small open-source language models (SLMs) in enterprise settings. We'll explore modern workflows for adapting SLMs with domain-specific pre-training, instruction fine-tuning, and alignment. Along the way, we will introduce and demonstrate open-source tools such as DistillKit, Spectrum, and MergeKit, which implement advanced techniques crucial for achieving task-specific accuracy while optimizing computational costs. We'll also discuss some of the models and solutions built by Arcee AI. Join us to learn how small, efficient, and adaptable models can transform your AI applications. About the Speaker Julien Simon, the Chief Evangelist at Arcee.ai, is dedicated to helping enterprise clients develop top-notch and cost-efficient AI solutions using small language models. With over 30 years of tech experience, including more than a decade in cloud computing and machine learning, Julien is committed to daily learning and is passionate about sharing his expertise through code demos, blogs, and YouTube videos. Before joining Arcee.ai, he was Chief Evangelist at Hugging Face and Global AI Evangelist at Amazon Web Services. He also served as a CTO at prominent startups. Accelerating sustainable inference with Pruna AI This talk explores how to make AI faster and more sustainable. We’ll look at the high costs and carbon impact of fine-tuning and self-deploying models, and show how optimization techniques available in the Pruna library can reduce size and latency with little to no quality loss. About the Speaker Gabriel Tregoat is the software lead at Pruna.ai, a company specialising in model inference optimisation with an open-source library called “pruna”. He previously led AI in production at Ekimetrics and began his career as a data scientist and ML engineer at Shell Energy. He’s passionate about tech, code, and new technologies. Visual Agents: What it takes to build an agent that can navigate GUIs like humans We’ll examine conceptual frameworks, potential applications, and future directions of technologies that can “see” and “act” with increasing independence. The discussion will touch on both current limitations and promising horizons in this evolving field. About the Speaker Harpreet Sahota is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He’s got a deep interest in RAG, Agents, and Multimodal AI. What I Learned About Systematic AI Improvement Most AI teams go through the same story: fast early progress, and then suddenly things slow down. The AI isn’t broken, but new changes don’t seem to help, and it’s not even clear how to tell if things are getting better. I’ve faced this plateau myself—both in my own work and while helping other teams. In this talk, I’ll share what I’ve learned about getting unstuck: how to build genuine confidence in your AI, what “trust” really means in practice, and practical steps to move from “it kind of works” to “this is actually improving.” My goal is to give you real-world ideas you can use when you hit the same wall. About the Speaker Louis Dupont is an AI engineer with over eight years of experience developing AI solutions across multiple industries. 
Currently, he works directly with companies to build and deploy AI internally, and consults with teams to help them overcome common roadblocks in AI development. |
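The Pruna library’s own calls are not shown in this listing, so as a stand-in here is a minimal sketch of one optimization from the same family the Pruna session covers: post-training dynamic quantization in plain PyTorch, which stores linear-layer weights as int8 to cut model size and latency, often with little quality loss (the toy model is illustrative):

```python
import torch
import torch.nn as nn

# Toy model standing in for a fine-tuned transformer block
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.ReLU(),
    nn.Linear(3072, 768),
).eval()

# Dynamic quantization: int8 weights, activations quantized at runtime
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 768)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 768])
```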
July 16 - Paris AI, ML and Computer Vision Meetup
|
|
Building a sustainable AI ecosystem with Google Cloud
2025-04-11 · 18:00
Jill Higgins
– Global ISV Partnerships - Sustainability
@ Google Cloud
,
Nicole Daignault
– Technology/ISV Partner Marketing Manager
@ Google Cloud
Grow your business and help customers build a greener future with AI. The Google Cloud Ready Sustainability program empowers partners with the tools and resources to deliver sustainable AI solutions. Learn how to leverage AI-optimized infrastructure, measure and optimize energy consumption, and reduce your carbon footprint, while helping your customers do the same. |
|
|
Building efficient and sustainable AI on Google Cloud
2025-04-10 · 21:45
Antoine Castex
– Data & AI Enterprise Architect
@ L'Oréal
,
Cynthia Wu
– Senior Product Manager
@ Google Cloud
,
Alfonso Hernandez
– Product Manager, Google Cloud FinOps
@ Google Cloud
Want to run AI workloads that are both performant and sustainable? Join L’Oréal and Google Cloud as they reveal how L’Oréal leverages Google Cloud’s powerful suite of tools to gain unprecedented insights into the cost, performance, and carbon footprint of their AI models. This session showcases how L’Oréal measures, reports, and reduces their environmental impact on Google Cloud, offering practical strategies and actionable takeaways for anyone looking to optimize their cloud efficiency and environmental responsibility. |
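As one concrete flavor of what measuring and reporting can look like in practice, here is a hedged sketch that aggregates Google Cloud Carbon Footprint data exported to BigQuery using the official Python client; the project, dataset, and column names below are assumptions for illustration and should be checked against the actual export schema:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Table and column names are assumed; Carbon Footprint data must first
# be exported to a BigQuery dataset in your own project.
query = """
    SELECT
      service.description AS service,
      SUM(carbon_footprint_total_kgCO2e.amount) AS kg_co2e
    FROM `my-project.my_dataset.carbon_footprint`
    WHERE usage_month = '2025-03-01'
    GROUP BY service
    ORDER BY kg_co2e DESC
"""

for row in client.query(query).result():
    print(f"{row.service}: {row.kg_co2e:.1f} kgCO2e")
```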
|
|
Building the future of AI: Intelligent infrastructure and sustainable energy
2025-04-10 · 15:00
Urs Hölzle
– Google Fellow
,
Parthasarathy Ranganathan
– VP, Google Fellow
@ Google Cloud
The explosive growth of artificial intelligence demands a paradigm shift in how we build and power its infrastructure. Join Urs Hölzle, Google Fellow, and Parthasarathy Ranganathan, VP and Engineering Fellow at Google Cloud, for a fireside chat exploring the critical intersection of intelligent infrastructure and sustainable energy. They will delve into the challenges and opportunities of scaling AI, from the hardware powering massive models to the energy sources that fuel them. Discover how innovative approaches to infrastructure design and energy efficiency are shaping the future of AI, look back on 25 years of cloud infrastructure design at Google, and hear what is coming in the next 25 years. |
|
|
Google /dev/cloud day London
2025-03-06 · 09:00
Register at https://cloud.google.com/events/google-dev-cloud-day-london 🚀 Elevate Your Cloud & GenAI Skills at Google /dev/cloud day London! Join us on March 6th, 2025, for Google /dev/cloud day London, a one-day event designed for developers, engineers, and tech enthusiasts eager to explore the latest in AI, cloud, and application development. 📍 Venue: Sustainable Ventures, County Hall, Belvedere Rd, London SE1 7PB 📅 Date: March 6th, 2025 🕜 Time: 9:00 AM–3:30 PM 💻 Don’t forget to bring your laptop! 🌟 What will you learn? ✅ How to get started with building applications on Cloud Run and dive deep into its advanced features. ✅ Use Gemini 2.0 to build real-time voice and video apps, integrate Google Search for advanced workflows, and detect objects in images and video. ✅ Use evaluation frameworks to ensure your LLM apps are safe, and overcome challenges like hallucinations, outdated information, and chaotic output formats. Spots are limited—register now to secure your place! Looking forward to seeing you there! |
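To give a flavor of the Gemini 2.0 workshop material, here is a minimal sketch that asks a Gemini model to enumerate objects in an image via the google-generativeai Python SDK; the API key placeholder, model name, and filename are illustrative:

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

model = genai.GenerativeModel("gemini-2.0-flash")

# Multimodal prompt: instruction text plus a local image
image = Image.open("street_scene.jpg")  # illustrative filename
response = model.generate_content(
    ["List every distinct object you can see in this image.", image]
)
print(response.text)
```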
Google /dev/cloud day London
|
|
Swimming Upstream: Using Machine Vision to Create Sustainable Practices in Fisheries of the Future
2025-01-30 · 18:00
Orvis Evans
– Software Engineer
@ AI.Fish
Fishing vessels are on track to generate 10 million hours of video footage annually, creating a massive MLOps challenge. At AI.Fish, we are building an end-to-end system enabling non-technical users to harness AI for catch monitoring and classification both on-board and in the cloud. This talk explores our journey in building these approachable systems and working toward answering an old question: How many fish are in the ocean? |
Jan 30 - AI, Machine Learning and Computer Vision Meetup
|
|
Jan 30 - AI, Machine Learning and Computer Vision Meetup
2025-01-30 · 18:00
Date and Time Jan 30, 2025 at 10 AM Pacific Swimming Upstream: Using Machine Vision to Create Sustainable Practices in Fisheries of the Future Fishing vessels are on track to generate 10 million hours of video footage annually, creating a massive machine learning operations challenge. At AI.Fish, we are building an end-to-end system enabling non-technical users to harness AI for catch monitoring and classification both on-board and in the cloud. This talk explores our journey in building these approachable systems and working toward answering an old question: How many fish are in the ocean? About the Speaker Orvis Evans is a Software Engineer at AI.Fish, where he co-architects MLOps pipelines and develops intuitive interfaces that make machine vision accessible to non-technical users. Drawing from his background in building interactive systems, he builds front-end applications and APIs that enable fisheries to process thousands of hours of footage without machine learning expertise. Scaling Semantic Segmentation with Blender Generating datasets for semantic segmentation can be time-intensive. Learn how to use Blender’s Python API to create diverse and realistic synthetic data with automated labels, saving time and improving model performance. Preview the topics to be discussed in this Medium post. About the Speaker Vincent Vandenbussche holds a PhD in Physics and is an author and machine learning engineer with 10 years of experience in software engineering and machine learning. WACV 2025 - Elderly Action Recognition Challenge Join us for a quick update on the Elderly Action Recognition (EAR) Challenge, part of the Computer Vision for Smalls (CV4Smalls) Workshop at the WACV 2025 conference! This challenge focuses on advancing research in Activity of Daily Living (ADL) recognition, particularly within the elderly population, a domain with profound societal implications. Participants will employ transfer learning techniques with any architecture or model they want to use; for example, starting with a general human action recognition benchmark and fine-tuning models on a subset of data tailored to elderly-specific activities. Sign up for the EAR challenge and learn more. About the Speaker Paula Ramos, PhD, is a Senior DevRel and Applied AI Research Advocate at Voxel51. Transforming Programming Ed: An AI-Powered Teaching Assistant for Scalable and Adaptive Learning The future of education lies in personalized and scalable solutions, especially in fields like computer engineering where complex concepts often challenge students. This talk introduces Lumina (AI Teaching Assistant), a cutting-edge agentic system designed to revolutionize programming education through its innovative architecture and teaching strategies. Built using the OpenAI API, LangChain, RAG, and ChromaDB, Lumina employs an agentic, multi-modal framework that dynamically integrates course materials, technical documentation, and pedagogical strategies into an adaptive knowledge-driven system. Its unique “Knowledge Components” approach decomposes programming concepts into interconnected teachable units, enabling proficiency-based learning and dynamic problem-solving guidance. Attendees will discover how Lumina’s agentic architecture enhances engagement, fosters critical thinking, and improves concept mastery, paving the way for scalable AI-driven educational solutions. 
About the Speaker Nittin Murthi Dhekshinamoorthy is a computer engineering student and researcher at the University of Illinois Urbana-Champaign with a strong focus on advancing artificial intelligence to solve real-world challenges in education and technology. He is the creator of an AI agent-based Teaching Assistant, leveraging cutting-edge frameworks to provide scalable, adaptive learning solutions, and has contributed to diverse, impactful projects, including natural language-to-SQL systems and deep learning models for clinical image segmentation. |
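To make the Blender talk’s premise concrete, here is a minimal sketch of scripted synthetic-scene generation with Blender’s Python API (run inside Blender); the scene contents and output path are illustrative, and a real pipeline would additionally emit segmentation masks, for example by wiring the object pass indices set below into the compositor:

```python
import random
import bpy

# Start from an empty scene
bpy.ops.object.select_all(action="SELECT")
bpy.ops.object.delete()

# Scatter simple stand-in objects at random positions and sizes
for i in range(10):
    bpy.ops.mesh.primitive_cube_add(
        size=random.uniform(0.2, 1.0),
        location=(
            random.uniform(-4, 4),
            random.uniform(-4, 4),
            random.uniform(0, 2),
        ),
    )
    # Pass indices can drive per-object segmentation masks later
    bpy.context.active_object.pass_index = i + 1

# Camera and light, then render one frame to disk
bpy.ops.object.camera_add(location=(0, -12, 6), rotation=(1.1, 0, 0))
bpy.context.scene.camera = bpy.context.active_object
bpy.ops.object.light_add(type="SUN", location=(0, 0, 10))

bpy.context.scene.render.filepath = "/tmp/synthetic_0001.png"
bpy.ops.render.render(write_still=True)
```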
Jan 30 - AI, Machine Learning and Computer Vision Meetup
|
|