talk-data.com
People (116 results)
See all 116 →
Activities & events
| Title & Speakers | Event |
|---|---|
|
Topic 2: Newcomers, experiencing the power of Trae’s AI Coding firsthand
2025-11-15 · 19:30
Andrew Zhang
– CEO & Co-Founder
@ Azure Partners
Andrew Zhang, CEO & Co-Founder of Azure Partners, demonstrates Trae’s AI Coding. |
KSUG.AI EMEA Online Meetup - AI Vibe Coding on Nov 15 2025
|
|
Cross-Network Interoperability with Hyperledger Cacti Workshop
2025-11-12 · 16:00
In this workshop, participants will gain a comprehensive understanding of the Hyperledger Cacti project—its current state, opportunities for contribution, and the end-to-end solutions it enables, with a special focus on the Secure Asset Transfer Protocol (SATP), soon to be an IETF standard. We will discuss how developers can contribute and extend the project by creating new plugins and connectors for unsupported distributed ledger technologies (DLTs). We will also showcase, use, and enhance existing end-to-end solutions already available within Hyperledger Cacti, with SATP-based solutions as example use cases.
Key Topics Covered
Target Audience: The workshop is meant for developers, blockchain enthusiasts, and enterprise stakeholders who want to enhance their knowledge of DLT interoperability and standardized asset transfer protocols. While prior experience with blockchain fundamentals and some familiarity with Hyperledger Cacti can be helpful, it is not a strict requirement.
Speakers: Carlos Amaro, Ph.D. Student; André Augusto, Ph.D. Candidate and Blockchain Researcher; Venkatraman Ramakrishna, Senior Researcher at IBM; Weijia Zhang, Vice President of Engineering at Wanchain |
Cross-Network Interoperability with Hyperledger Cacti Workshop
|
|
How can a worm’s intestine influence its descendants’ lifespan? This episode explores how lysosomes send metabolic signals through the epigenome to extend longevity across generations. Researchers found that activating lysosomal lipid metabolism triggers transcriptional up-regulation of a histone variant, H3.3 (his-71), in the intestine. This histone is transported to the germ line, where it’s methylated at K79 by the methyltransferase DOT-1.3. The result is a heritable epigenetic state that promotes longer life across multiple generations of C. elegans. The work reveals how metabolic signalling through lysosomes interacts with chromatin to link soma and germ line, showing how environmental changes like starvation can shape longevity inheritance.
📖 Based on: Zhang Q., Dang W., Wang M.C. Science (2025). “Lysosomes signal through the epigenome to regulate longevity across generations.” https://doi.org/10.1126/science.adn8754
🎧 Subscribe to the WOrM Podcast for more deep dives into the molecular lives of worms. This podcast is generated with artificial intelligence and curated by Veeren. If you’d like your publication featured on the show, please get in touch.
📩 More info:
🔗 www.veerenchauhan.com
📧 [email protected] |
WOrM Podcast: Whole Organism Analytics Podcast |
|
Uncover the power of Graph Query Language (GQL) with 'Getting Started with the Graph Query Language'. This book is your comprehensive guide to mastering GQL, the cornerstone of managing and analyzing complex graph data. Dive into foundational concepts, explore advanced capabilities, and apply them using real-world examples.
What this book will help me do: Understand and use GQL syntax effectively, including commands like MATCH, RETURN, INSERT, and DELETE. Master operations with graph patterns, variables, and functions to manipulate and query graph data. Apply advanced GQL techniques such as path matching modes, shortest paths, and transaction commands. Optimize graph database performance using indexing or caching strategies. Apply GQL to a practical application, such as analyzing money transaction data for behavior and risk insights.
Author(s): Ricky Sun, Jason Zhang, and Yuri Simione are seasoned experts in graph database technologies and standards. With years of professional experience and a collaborative spirit, they bring clarity and practice-oriented guidance to understanding GQL. Their passion for teaching and simplifying complex ideas shines through this well-crafted book.
Who is it for? This book is ideal for graph database developers, database administrators, and data engineers looking to grasp GQL's fundamentals and advanced features. Beginners familiar with databases and programming fundamentals can follow along seamlessly. It also appeals to analysts and programmers seeking to enhance their graph data handling skills. Prior knowledge of graph theory concepts like nodes and edges is helpful but not mandatory, ensuring accessibility for learners of diverse levels. |
O'Reilly Data Engineering Books
|
|
Tech Pulse 2030
2025-07-22 · 21:30
Tech Pulse 2030 is more than an event series—it's a global movement. With 12 events last year, it fuels collaboration, sparks bold ideas, and drives real-world impact across AI, FinTech, Web3, and beyond. Why Attend:
Host: Xiaochen Zhang, Executive Director & Chief AI Officer, AI 2030
Moderator: Uvika Sharma, Founder & Managing Partner, INTLDA
Speakers:
About AI 2030: AI 2030 is a global initiative committed to harnessing AI’s transformative power to benefit humanity. Focused on Responsible AI, AI for All, and AI for Good, we empower individuals and organizations with the knowledge, tools, and networks needed to lead in Responsible AI, closing key gaps in awareness, talent, and solutions to enable responsible AI adoption across public and private sectors globally. www.ai2030.org
About FinTech4Good: FinTech4Good is a global network focused on emerging technologies. We collaborate with startups, industrial leaders, NPOs, and investors to develop solutions for a better world. https://www.fintech4good.co/
About 1871: The Chicagoland Entrepreneurial Center, dba 1871, is a 501(c)(3) organization that exists to inspire, equip, and support people from all backgrounds to build and innovate extraordinary businesses. https://1871.com/ |
Tech Pulse 2030
|
|
172 - Building AI Assistants, Not Autopilots: What Tony Zhang’s Research Shows About Automation Blindness
2025-06-24 · 21:04
Tony Zhang
– guest
,
Brian T. O’Neill
– host
Today on the podcast, I interview AI researcher Tony Zhang about some of his recent findings about the effects that fully automated AI has on user decision-making. Tony shares lessons from his recent research study comparing typical recommendation AIs with a “forward-reasoning” approach that nudges users to contribute their own reasoning with process-oriented support that may lead to better outcomes. We’ll look at his two study examples: one provided an AI-enabled interface for pilots tasked with deciding mid-flight the next-best alternate airport to land at, and another asked investors to rebalance an ETF portfolio. The takeaway, taken right from Tony’s research, is that “going forward, we suggest that process-oriented support can be an effective framework to inform the design of both 'traditional' AI-assisted decision-making tools but also GenAI-based tools for thought.”
Highlights / Skip to: Tony Zhang’s background (0:46); Context for the study (4:12); Zhang’s metrics for measuring over-reliance on AI (5:06); Understanding the differences between the two design options that study participants were given (15:39); How AI-enabled hints appeared for pilots in each version of the UI (17:49); Using AI to help pilots make good decisions faster (20:15); We look at the ETF portfolio rebalancing use case in the study (27:46); Strategic and tactical findings that Tony took away from his study (30:47); The possibility of commercially viable recommendations based on Tony’s findings (35:40); Closing thoughts (39:04)
Quotes from Today’s Episode
“I wanted to keep the difference between the [recommendation & forward-reasoning versions] very minimal to isolate the effect of the recommendation coming in. So, if I showed you screenshots of those two versions, they would look very, very similar. The only difference that you would immediately see is that the recommendation version is showing numbers 1, 2, and 3 for the recommended airports. These [rankings] are not present in the forward-reasoning one [airports are default sorted nearest to furthest]. This actually is a pretty profound difference in terms of the interaction or the decision-making impact that the AI has. There is this normal flight mode and forward reasoning, so that pilots are already immersed in the system and thinking with the system during normal flight. It changes the process that they are going through while they are working with the AI.” – Tony (18:50 - 19:42)
“You would imagine that giving the recommendation makes your decision faster, but actually, the recommendations were not faster than the forward-reasoning one. In the forward-reasoning one, during normal flight, pilots could already prepare and have a good overview of their surroundings, giving them time to adjust to the new situation. Now, in normal flight, they don’t know what might be happening, and then suddenly, a passenger emergency happens. While for the recommendation version, the AI just comes into the situation once you have the emergency, and then you need to do this backward reasoning that we talked about initially.” – Tony (21:12 - 21:58)
“Imagine reviewing code written by other people. It’s always hard because you had no idea what was going on when it was written. That was the idea behind the forward reasoning. You need to look at how people are working and how you can insert AI in a way that it seamlessly fits and provides some benefit to you while keeping you in your usual thought process. So, the way that I see it is you need to identify where the key pain points actually are in your current decision-making process and try to address those instead of just trying to solve the task entirely for users.” – Tony (25:40 - 26:19)
Links
LinkedIn: https://www.linkedin.com/in/zelun-tony-zhang/
Augmenting Human Cognition With Generative AI: Lessons From AI-Assisted Decision-Making: https://arxiv.org/html/2504.03207v1 |
Experiencing Data w/ Brian T. O’Neill (AI & data product management leadership—powered by UX design) |
|
Open Source AI in NYC
2025-06-11 · 21:30
Please RSVP here on Luma.
5:30pm - Networking
6:00pm - TechXChange Dev Day Meetup - Open Source AI in NYC for Developers. The agenda may include as many of these talks and demos as possible:
7:30pm - Networking |
Open Source AI in NYC
|
|
Open Source AI in NYC
2025-06-11 · 21:00
To attend: Please enroll here https://lu.ma/muqvquop
5:30pm - Networking
6:00pm - TechXChange Dev Day Meetup - Open Source AI in NYC for Developers. The agenda may include as many of these talks and demos as possible:
Granite - Open Source Models - BJ Hargrave & Ming Zhang
Docling - Document processing, parsing diverse formats - Michele Dolfi
LM Studio - Your local AI toolkit & bringing it all together - Yagil Burowski
Bee AI - Discover, run, and compose AI agents from any framework - Shereen Bellamy
LocalStack - Develop and test your AI-powered cloud apps locally - Waldemar Hummer
Panel for Q&A
7:30pm - Networking |
Open Source AI in NYC
|
|
Best of NeurIPS - Feb 6
2025-02-06 · 17:00
Date and Time: Feb 6, 2025 at 9 AM Pacific
Welcome to the Best of NeurIPS virtual series that highlights some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.
Intrinsic Self-Supervision for Data Quality Audits
Benchmark datasets in computer vision often contain issues such as off-topic samples, near-duplicates, and label errors, compromising model evaluation accuracy. This talk will discuss SelfClean, a data-cleaning framework that leverages self-supervised representation learning and distance-based indicators to detect these issues effectively. By framing the task as a ranking or scoring problem, SelfClean minimizes human effort while outperforming competing methods in identifying synthetic and natural contamination across natural and medical domains. With this methodology, we identified up to 16% of problematic samples in current benchmark datasets and enhanced the reliability of model performance evaluation. Read the paper, “Intrinsic Self-Supervision for Data Quality Audits”.
About the Speaker: Fabian Gröger is a second-year PhD student supervised by Alexander A. Navarini and Marc Pouly at the University of Basel. His research interests include self-supervised learning, data-centric machine learning research, and medical imaging.
CLIP: Insights into Zero-Shot Image Classification with Mutual Knowledge
We interpret CLIP’s zero-shot image classification by examining shared textual concepts learned by its vision and language encoders. We analyze 13 CLIP models across various architectures, sizes, and datasets. The approach highlights a human-friendly way to understand CLIP’s classification decisions. Read the paper, “Interpreting and Analysing CLIP’s Zero-Shot Image Classification via Mutual Knowledge”.
About the Speaker: Fawaz Sammani is a second-year PhD student at the Vrije Universiteit Brussel. His research focuses on human-friendly interpretability and explainability of deep neural networks.
Multiview Scene Graph
Motivated by how humans perceive scenes, we propose the Multiview Scene Graph (MSG) as a general topological scene representation. MSG constructs a place+object graph from unposed RGB images, and we provide novel metrics to evaluate the graph quality. We combine visual place recognition and object association to build MSG in one Transformer decoder model. We believe MSG can connect dots across classic vision tasks to promote spatial intelligence and open new doors for topological 3D scene understanding. Read the paper, “Multiview Scene Graph”.
About the Speaker: Juexiao Zhang is a second-year PhD student in computer science at NYU Courant, advised by Professor Chen Feng. He is interested in learning scene representations that are useful for robots to understand the world and interact with it.
A Simple and Scalable Approach to Improve Vision Model Robustness to Corruptions
Deep neural networks perform exceptionally on clean images but face significant challenges with corrupted ones. While data augmentation with specific corruptions during training can improve model robustness to those particular distortions, this approach typically degrades performance on both clean images and corruptions not encountered during training. In this talk, we present a novel approach that improves DNN robustness across diverse corruptions while maintaining clean-image accuracy. Our key insight reveals that input perturbations can be effectively simulated through multiplicative perturbations in the weight space. Building on this finding, we introduce Data Augmentation via Multiplicative Perturbation (DAMP), a training methodology that optimizes DNNs under random multiplicative weight perturbations. Comprehensive experiments across multiple image classification datasets (CIFAR-10/100, TinyImageNet, and ImageNet) and architectures (ResNet50, ViT-S/16, ViT-B/16) demonstrate that DAMP enhances model generalization under corruptions while maintaining computational efficiency comparable to standard SGD. Notably, DAMP successfully trains a ViT-S/16 on ImageNet from scratch without extensive data augmentations and achieves a top-1 error of 23.7%, which is comparable to a ResNet50. Read the paper, “Improving robustness to corruptions with multiplicative weight perturbations”.
About the Speaker: Trung Trinh is a final-year PhD student in the Probabilistic Machine Learning group at Aalto University, Finland, supervised by Prof. Samuel Kaski. His research focuses on improving neural network robustness under data distribution shifts and enhancing model calibration to increase reliability in production environments. His work has been published in leading AI/ML conferences, including NeurIPS, ICLR, and ICML. |
Best of NeurIPS - Feb 6
|
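The core idea behind DAMP described in the listing above, training under random multiplicative weight perturbations, can be illustrated with a toy sketch. This is not the authors' implementation; the model, noise scale, and learning rate below are illustrative assumptions, and only a single SGD step on a linear least-squares problem is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear model, a mini-batch, and least-squares targets.
w = np.zeros(3)                        # model weights
X = rng.normal(size=(8, 3))            # mini-batch of inputs
y = X @ np.array([1.0, -2.0, 0.5])     # targets from a "true" linear map

# Sample a multiplicative perturbation centered at 1 (scale is an assumption).
sigma = 0.1
noise = rng.normal(loc=1.0, scale=sigma, size=w.shape)

# Forward pass uses the perturbed weights, as in the DAMP idea.
w_pert = w * noise
pred = X @ w_pert

# Gradient of the mean-squared error w.r.t. the perturbed weights,
# then chain rule back to the clean weights (d w_pert / d w = noise).
grad_pert = 2 * X.T @ (pred - y) / len(y)
grad = grad_pert * noise

# Standard SGD update applied to the unperturbed weights.
lr = 0.05
w = w - lr * grad
```

A fresh perturbation would be drawn every step during training, while evaluation uses the clean weights.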
|
Best of NeurIPS - Feb 6
2025-02-06 · 17:00
Date and Time Feb 6, 2025 at 9 AM Pacific Welcome to the Best of NeurIPS virtual series that highlights some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you. Intrinsic Self-Supervision for Data Quality Audits Benchmark datasets in computer vision often contain issues such as off-topic samples, near-duplicates, and label errors, compromising model evaluation accuracy. This talk will discuss SelfClean, a data-cleaning framework that leverages self-supervised representation learning and distance-based indicators to detect these issues effectively. By framing the task as a ranking or scoring problem, SelfClean minimizes human effort while outperforming competing methods in identifying synthetic and natural contamination across natural and medical domains. With this methodology, we identified up to 16% of problematic samples in current benchmark datasets and enhanced the reliability of model performance evaluation. Read the paper, “Intrinsic Self-Supervision for Data Quality Audits” About the Speaker Fabian Gröger is a second-year PhD Student supervised by Alexander A. Navarini and Marc Pouly at the University of Basel. His research interests include self-supervised learning, data-centric machine learning research, and medical imaging. CLIP: Insights into Zero-Shot Image Classification with Mutual Knowledge We interpret CLIP’s zero-shot image classification by examining shared textual concepts learned by its vision and language encoders. We analyzes 13 CLIP models across various architectures, sizes, and datasets. The approach highlights a human-friendly way to understand CLIP’s classification decisions. Read the paper, “Interpreting and Analysing CLIP’s Zero-Shot Image Classification via Mutual Knowledge” About the Speaker Fawaz Sammani is a 2nd year PhD student at the Vrije Universiteit Brussel. 
His research focuses on Human-Friendly Interpretability and Explainability of deep neural networks. Multiview Scene Graph Motivated by how humans perceive scenes, we propose the Multiview Scene Graph (MSG) as a general topological scene representation. MSG constructs a place+object graph from unposed RGB images and we provide novel metrics to evaluate the graph quality. We combine visual place recognition and object association to build MSG in one Transformer decoder model. We believe MSG can connect dots across classic vision tasks to promote spatial intelligence and open new doors for topological 3D scene understanding. Read the paper, “Multiview Scene Graph” About the Speaker Juexiao Zhang is a second-year PhD student in computer science at NYU Courant, advised by Professor Chen Feng. He is interested in learning scene representations that are useful for robots to understand the world and interact with it. A Simple and Scalable Approach to Improve Vision Model Robustness to Corruptions Deep neural networks perform exceptionally on clean images but face significant challenges with corrupted ones. While data augmentation with specific corruptions during training can improve model robustness to those particular distortions, this approach typically degrades performance on both clean images and corruptions not encountered during training. In this talk, we present a novel approach that improves DNN robustness across diverse corruptions while maintaining clean image accuracy. Our key insight reveals that input perturbations can be effectively simulated through multiplicative perturbations in the weight space. Building on this finding, we introduce Data Augmentation via Multiplicative Perturbation (DAMP), a training methodology that optimizes DNNs under random multiplicative weight perturbations. 
Comprehensive experiments across multiple image classification datasets (CIFAR-10/100, TinyImageNet, and ImageNet) and architectures (ResNet50, ViT-S/16, ViT-B/16) demonstrate that DAMP enhances model generalization under corruptions while maintaining computational efficiency comparable to standard SGD. Notably, DAMP successfully trains a ViT-S/16 on ImageNet from scratch without extensive data augmentations and achieves a top-1 error of 23.7%, which is comparable to a ResNet50. Read the paper, “Improving robustness to corruptions with multiplicative weight perturbations” About the Speaker Trung Trinh is a final year PhD student in the Probabilistic Machine Learning group at Aalto University, Finland, supervised by Prof. Samuel Kaski. His research focuses on improving neural network robustness under data distribution shifts and enhancing model calibration to increase reliability in production environments. His work has been published in leading AI/ML conferences, including NeurIPS, ICLR, and ICML. Talk title: A simple and scalable approach to improve vision model robustness to corruptions. |
Best of NeurIPS - Feb 6
|
|
Best of NeurIPS - Feb 6
2025-02-06 · 17:00
Date and Time Feb 6, 2025 at 9 AM Pacific Welcome to the Best of NeurIPS virtual series that highlights some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you. Intrinsic Self-Supervision for Data Quality Audits Benchmark datasets in computer vision often contain issues such as off-topic samples, near-duplicates, and label errors, compromising model evaluation accuracy. This talk will discuss SelfClean, a data-cleaning framework that leverages self-supervised representation learning and distance-based indicators to detect these issues effectively. By framing the task as a ranking or scoring problem, SelfClean minimizes human effort while outperforming competing methods in identifying synthetic and natural contamination across natural and medical domains. With this methodology, we identified up to 16% of problematic samples in current benchmark datasets and enhanced the reliability of model performance evaluation. Read the paper, “Intrinsic Self-Supervision for Data Quality Audits” About the Speaker Fabian Gröger is a second-year PhD Student supervised by Alexander A. Navarini and Marc Pouly at the University of Basel. His research interests include self-supervised learning, data-centric machine learning research, and medical imaging. CLIP: Insights into Zero-Shot Image Classification with Mutual Knowledge We interpret CLIP’s zero-shot image classification by examining shared textual concepts learned by its vision and language encoders. We analyzes 13 CLIP models across various architectures, sizes, and datasets. The approach highlights a human-friendly way to understand CLIP’s classification decisions. Read the paper, “Interpreting and Analysing CLIP’s Zero-Shot Image Classification via Mutual Knowledge” About the Speaker Fawaz Sammani is a 2nd year PhD student at the Vrije Universiteit Brussel. 
His research focuses on Human-Friendly Interpretability and Explainability of deep neural networks. Multiview Scene Graph Motivated by how humans perceive scenes, we propose the Multiview Scene Graph (MSG) as a general topological scene representation. MSG constructs a place+object graph from unposed RGB images and we provide novel metrics to evaluate the graph quality. We combine visual place recognition and object association to build MSG in one Transformer decoder model. We believe MSG can connect dots across classic vision tasks to promote spatial intelligence and open new doors for topological 3D scene understanding. Read the paper, “Multiview Scene Graph” About the Speaker Juexiao Zhang is a second-year PhD student in computer science at NYU Courant, advised by Professor Chen Feng. He is interested in learning scene representations that are useful for robots to understand the world and interact with it. A Simple and Scalable Approach to Improve Vision Model Robustness to Corruptions Deep neural networks perform exceptionally on clean images but face significant challenges with corrupted ones. While data augmentation with specific corruptions during training can improve model robustness to those particular distortions, this approach typically degrades performance on both clean images and corruptions not encountered during training. In this talk, we present a novel approach that improves DNN robustness across diverse corruptions while maintaining clean image accuracy. Our key insight reveals that input perturbations can be effectively simulated through multiplicative perturbations in the weight space. Building on this finding, we introduce Data Augmentation via Multiplicative Perturbation (DAMP), a training methodology that optimizes DNNs under random multiplicative weight perturbations. 
Comprehensive experiments across multiple image classification datasets (CIFAR-10/100, TinyImageNet, and ImageNet) and architectures (ResNet50, ViT-S/16, ViT-B/16) demonstrate that DAMP enhances model generalization under corruptions while maintaining computational efficiency comparable to standard SGD. Notably, DAMP successfully trains a ViT-S/16 on ImageNet from scratch without extensive data augmentations and achieves a top-1 error of 23.7%, which is comparable to a ResNet50. Read the paper, “Improving robustness to corruptions with multiplicative weight perturbations” About the Speaker Trung Trinh is a final year PhD student in the Probabilistic Machine Learning group at Aalto University, Finland, supervised by Prof. Samuel Kaski. His research focuses on improving neural network robustness under data distribution shifts and enhancing model calibration to increase reliability in production environments. His work has been published in leading AI/ML conferences, including NeurIPS, ICLR, and ICML. Talk title: A simple and scalable approach to improve vision model robustness to corruptions. |
Best of NeurIPS - Feb 6
|
|
Best of NeurIPS - Feb 6
2025-02-06 · 17:00
Date and Time Feb 6, 2025 at 9 AM Pacific

Welcome to the Best of NeurIPS virtual series, which highlights some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Intrinsic Self-Supervision for Data Quality Audits
Benchmark datasets in computer vision often contain issues such as off-topic samples, near-duplicates, and label errors, compromising model evaluation accuracy. This talk discusses SelfClean, a data-cleaning framework that leverages self-supervised representation learning and distance-based indicators to detect these issues effectively. By framing the task as a ranking or scoring problem, SelfClean minimizes human effort while outperforming competing methods at identifying synthetic and natural contamination across natural and medical domains. With this methodology, we identified up to 16% of problematic samples in current benchmark datasets and enhanced the reliability of model performance evaluation. Read the paper, “Intrinsic Self-Supervision for Data Quality Audits”.
About the Speaker: Fabian Gröger is a second-year PhD student supervised by Alexander A. Navarini and Marc Pouly at the University of Basel. His research interests include self-supervised learning, data-centric machine learning research, and medical imaging.

CLIP: Insights into Zero-Shot Image Classification with Mutual Knowledge
We interpret CLIP’s zero-shot image classification by examining shared textual concepts learned by its vision and language encoders. We analyze 13 CLIP models across various architectures, sizes, and datasets. The approach highlights a human-friendly way to understand CLIP’s classification decisions. Read the paper, “Interpreting and Analysing CLIP’s Zero-Shot Image Classification via Mutual Knowledge”.
About the Speaker: Fawaz Sammani is a second-year PhD student at the Vrije Universiteit Brussel. His research focuses on human-friendly interpretability and explainability of deep neural networks.

Multiview Scene Graph
Motivated by how humans perceive scenes, we propose the Multiview Scene Graph (MSG) as a general topological scene representation. MSG constructs a place+object graph from unposed RGB images, and we provide novel metrics to evaluate graph quality. We combine visual place recognition and object association to build the MSG in a single Transformer decoder model. We believe MSG can connect the dots across classic vision tasks to promote spatial intelligence and open new doors for topological 3D scene understanding. Read the paper, “Multiview Scene Graph”.
About the Speaker: Juexiao Zhang is a second-year PhD student in computer science at NYU Courant, advised by Professor Chen Feng. He is interested in learning scene representations that help robots understand and interact with the world.

A Simple and Scalable Approach to Improve Vision Model Robustness to Corruptions
Deep neural networks perform exceptionally well on clean images but struggle with corrupted ones. While data augmentation with specific corruptions during training can improve robustness to those particular distortions, this approach typically degrades performance on both clean images and corruptions not encountered during training. This talk presents an approach that improves DNN robustness across diverse corruptions while maintaining clean-image accuracy. The key insight is that input perturbations can be effectively simulated through multiplicative perturbations in the weight space. Building on this finding, we introduce Data Augmentation via Multiplicative Perturbation (DAMP), a training method that optimizes DNNs under random multiplicative weight perturbations. Comprehensive experiments across multiple image classification datasets (CIFAR-10/100, TinyImageNet, and ImageNet) and architectures (ResNet50, ViT-S/16, ViT-B/16) demonstrate that DAMP enhances model generalization under corruptions while maintaining computational efficiency comparable to standard SGD. Notably, DAMP trains a ViT-S/16 on ImageNet from scratch without extensive data augmentations and achieves a top-1 error of 23.7%, comparable to a ResNet50. Read the paper, “Improving robustness to corruptions with multiplicative weight perturbations”.
About the Speaker: Trung Trinh is a final-year PhD student in the Probabilistic Machine Learning group at Aalto University, Finland, supervised by Prof. Samuel Kaski. His research focuses on improving neural network robustness under data distribution shifts and enhancing model calibration to increase reliability in production environments. His work has been published in leading AI/ML conferences, including NeurIPS, ICLR, and ICML. |
Best of NeurIPS - Feb 6
|
|
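The SelfClean abstract above frames data cleaning as a ranking problem over distance-based indicators. The following is a minimal sketch of that idea, assuming embeddings are already computed by some self-supervised encoder; the function names, the choice of plain Euclidean distance, and the neighbor index `k` are illustrative, not taken from the actual framework:

```python
import numpy as np

def near_duplicate_ranking(emb):
    """Rank unordered sample pairs by embedding distance, closest first.

    Hypothetical sketch of a distance-based near-duplicate indicator:
    pairs at the top of the ranking are candidate near-duplicates.
    """
    # L2-normalize so Euclidean distance is monotone in cosine distance
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    n = len(emb)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)       # each unordered pair once
    order = np.argsort(dists[iu])      # smallest distance first
    return [(iu[0][k], iu[1][k]) for k in order]

def off_topic_ranking(emb, k=2):
    """Rank samples by distance to their k-th nearest neighbor,
    most isolated (candidate off-topic) samples first."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    kth = np.sort(dists, axis=1)[:, k]  # column 0 is the self-distance 0
    return list(np.argsort(-kth))
```

A human reviewer would then inspect only the top of each ranking, which is how framing cleaning as ranking keeps the annotation effort small.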
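The core mechanic behind DAMP, evaluating the gradient at multiplicatively perturbed weights, can be sketched on a toy least-squares problem. This is a conceptual sketch only: the regression setup, learning rate, and noise scale `sigma` are illustrative and not taken from the paper, which applies the idea to deep networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def damp_step(w, x, y, sigma=0.1, lr=0.01):
    """One DAMP-style SGD step for least-squares regression.

    Evaluate the loss gradient at multiplicatively perturbed weights
    w * xi, with xi ~ N(1, sigma^2), then update the unperturbed w.
    """
    xi = rng.normal(1.0, sigma, size=w.shape)  # multiplicative noise
    wp = w * xi                                # perturbed weights
    grad = 2 * x.T @ (x @ wp - y) / len(y)     # MSE gradient at wp
    # chain rule back to w: d(wp)/dw = xi
    return w - lr * grad * xi

# fit y = 2*x0 - 1*x1 on toy data
x = rng.normal(size=(256, 2))
y = x @ np.array([2.0, -1.0])
w = np.zeros(2)
for _ in range(500):
    w = damp_step(w, x, y)
```

Because only the point where the gradient is evaluated changes, the per-step cost stays comparable to plain SGD, which matches the efficiency claim in the abstract.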
Migrating to Swift Package Manager
2024-11-12 · 19:20
In this talk I will share some ideas and tips on how a team with a large codebase and an enterprise-level repo could migrate from CocoaPods to Swift Package Manager, and the benefits it would bring to the developer experience. Sean Zhang is a Senior iOS Engineer on the platform team at Capital One. |
@DoorDash: Building Reusable Components && Migrating to Swift Package Manager
|
|
.NET Conf: Focus on AI
2024-08-20 · 16:00
.NET Conf: Focus on AI is a free, one-day livestream event that features speakers from the community and Microsoft teams working on integrating AI into .NET applications. Learn how to build intelligent applications with .NET using the latest AI libraries and tools, enhance existing applications with AI features, and see real-world examples of AI in action. Tune into focus.dotnetconf.net on August 20, 2024. Ask questions live and learn how to make your .NET applications smarter.

| Session | Speaker |
| :--- | :--- |
| Keynote - State of .NET + AI | Scott Hanselman, Maria Naggaga and guests |
| Get started incorporating AI into your .NET applications and services | Stephen Toub |
| Better together: .NET Aspire and Semantic Kernel | Steve Sanderson and Matthew Bolanos |
| Build interactive AI-powered web apps with Blazor and .NET | Daniel Roth |
| Navigating the world of AI models in .NET: From local development to the cloud | Bruno Capuano |
| OpenAI and Azure OpenAI: A .NET SDK convergence story | Matt Soucoup and Roger Pincombe |
| Agents: Patterns and practices for automating business workflows | Kosta Petan and XiaoYun Zhang |
| RAG on your data with .NET, AI and Azure SQL | Davide Mauri |
| Building Generative AI apps with your data in Azure Cosmos DB | James Codella |
| Integrating Semantic Search Capabilities with .NET and Azure: Milvus Vector Database | Tim Spann |
| H&R Block: Lessons learnt from applying Generative AI to apps with .NET and Azure | Vin Kamat |
| Add Generative AI capabilities to your .NET Web app for Azure App Service | Gaurav Seth |
| Observing AI applications from dev to production with .NET Aspire | Anthony Shaw |
| Infuse AI in your Windows apps with Windows Copilot Runtime and .NET | Nikola Metulev |
| Build your own copilot with Teams AI library and .NET | Ayca Bas and John Miller |
| RAG with AI Search and .NET | Matt Gotteiner |
|
.NET Conf: Focus on AI
|
|
Yunong Zhang
– author
,
Jinjin Guo
– author
The book addresses the discrete-time implementation of continuous-time neural network models, using various Zhang Time Discretization (ZTD) formulas to improve neural network performance. |
O'Reilly Data Science Books
|
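The ZTD book entry above concerns turning continuous-time neural dynamics into discrete-time algorithms. As background only, here is the plain forward-Euler baseline that such discretization formulas refine, applied to a toy continuous tracking model; the dynamics, gain `lam`, and step size `h` are illustrative and are not the book's ZTD formulas:

```python
import numpy as np

# Forward-Euler discretization of a continuous-time tracking model
#   xdot(t) = -lam * (x(t) - a(t)),
# a toy stand-in for continuous neural dynamics. Each step applies
#   x_{k+1} = x_k + h * xdot(t_k).
lam, h = 10.0, 0.01
a = lambda t: np.sin(t)                  # time-varying target signal
x, xs = 0.0, []
for k in range(1000):
    t = k * h
    x = x + h * (-lam * (x - a(t)))      # one Euler step
    xs.append(x)
# after an initial transient, x tracks a(t) with a small lag
```

Higher-order one-step formulas reduce the tracking lag and truncation error of this baseline at the same step size, which is the kind of improvement the book's ZTD formulas target.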