talk-data.com


Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026, 9-11 AM Pacific, online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.
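The CLIP-based zero-shot route the abstract contrasts can be sketched in a few lines: score an image by comparing its embedding against "normal" and "anomalous" text-prompt embeddings. The vectors below are random stand-ins for a real CLIP encoder's outputs, and the temperature value is illustrative, not taken from any specific method.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def anomaly_score(image_emb, normal_embs, anomalous_embs, tau=0.07):
    """CLIP-style zero-shot anomaly score: take the image's best match
    on each side ("normal" vs "anomalous" prompts) and apply a
    temperature-scaled two-way softmax over the two similarities."""
    s_norm = max(cosine(image_emb, t) for t in normal_embs)
    s_anom = max(cosine(image_emb, t) for t in anomalous_embs)
    e_n, e_a = np.exp(s_norm / tau), np.exp(s_anom / tau)
    return e_a / (e_n + e_a)  # in [0, 1]; higher means more anomalous

# Random stand-ins for a real CLIP image/text encoder's embeddings.
rng = np.random.default_rng(0)
normal_prompts = [rng.normal(size=512) for _ in range(2)]
anomalous_prompts = [rng.normal(size=512) for _ in range(2)]
image_emb = normal_prompts[0] + 0.1 * rng.normal(size=512)  # near "normal"

score = anomaly_score(image_emb, normal_prompts, anomalous_prompts)
print(f"anomaly score: {score:.4f}")  # close to 0 for a normal-looking image
```

This is also why such methods are "threshold-free" only after an extra step: the softmax yields a calibrated-looking score, but deciding pass/fail still means choosing a cutoff, which is exactly what the MLLM-driven reasoning approaches try to avoid.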

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.
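The third question, interleaving (text, audio) segments into one training sequence, can be illustrated with a toy sequence builder. The boundary-marker token IDs here are invented for illustration; the actual tokenization scheme is part of what the talk's ablations explore.

```python
def interleave(segments, boa=1, eoa=2):
    """Flatten alternating (modality, token_ids) segments into a single
    training sequence, wrapping each audio span in begin/end-of-audio
    markers (`boa`/`eoa`, illustrative special tokens) so the model can
    tell which positions came from which modality."""
    seq = []
    for modality, tokens in segments:
        if modality == "audio":
            seq.append(boa)
            seq.extend(tokens)
            seq.append(eoa)
        elif modality == "text":
            seq.extend(tokens)
        else:
            raise ValueError(f"unknown modality: {modality}")
    return seq

example = [("text", [10, 11]), ("audio", [90, 91, 92]), ("text", [12])]
print(interleave(example))  # → [10, 11, 1, 90, 91, 92, 2, 12]
```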

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, SpeLangy, which outperforms models up to 3x its size by 10.2 percentage points of absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration of SpeechLMs.

About the Speaker

Vishaal Udandarao is a third-year ELLIS PhD student, jointly working with Matthias Bethge at the University of Tübingen and Samuel Albanie at the University of Cambridge/Google DeepMind. He is also part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from rare cases: the dark corners, odd combinations, and edge conditions we never capture enough of in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.
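One step in that pipeline, translating dataset gaps into generation prompts, might look like the sketch below. The gap schema and prompt template are invented here for illustration; the actual calls to Nano Banana Pro and FiftyOne are out of scope, but carrying the attribute combination along as metadata is what later makes filtering and validation in FiftyOne possible.

```python
from itertools import product

def gaps_to_prompts(template, gaps):
    """Expand a dict of underrepresented attribute values into the
    cross-product of generation prompts, attaching each attribute
    combination as metadata for downstream filtering and validation."""
    keys = list(gaps)
    prompts = []
    for combo in product(*(gaps[k] for k in keys)):
        meta = dict(zip(keys, combo))
        prompts.append({"prompt": template.format(**meta), "metadata": meta})
    return prompts

# Hypothetical gap analysis: conditions underrepresented in the real data.
gaps = {
    "weather": ["heavy rain", "dense fog"],
    "time": ["night"],
    "subject": ["cyclist", "delivery truck"],
}
template = "A photo of a {subject} on a city street at {time} in {weather}"

batch = gaps_to_prompts(template, gaps)
print(len(batch))            # → 4 (2 weather x 1 time x 2 subjects)
print(batch[0]["prompt"])
```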

About the Speaker

Adonai Vera is a Machine Learning Engineer & DevRel at Voxel51, with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.
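Running TensorRT itself requires an NVIDIA GPU, but the measure-and-benchmark step the talk mentions reduces to a small timing harness like this one. The dummy workloads are stand-ins for a baseline model and an optimized engine; the harness pattern (warmup, then percentile latencies) is the generic part worth reusing.

```python
import statistics
import time

def benchmark(fn, warmup=5, iters=50):
    """Measure median and p95 latency of `fn` in milliseconds.
    Warmup iterations are discarded so one-time costs (allocation,
    JIT, engine initialization) don't skew the numbers."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
    }

# Dummy workloads standing in for a baseline model and a faster engine.
baseline = lambda: sum(i * i for i in range(20000))
optimized = lambda: sum(i * i for i in range(5000))

b, o = benchmark(baseline), benchmark(optimized)
speedup = b["median_ms"] / o["median_ms"]
print(f"median speedup: {speedup:.1f}x")
```

Reporting a percentile alongside the median matters for the real-time claims above: a fused, calibrated engine that is fast on average but has long tail latencies can still miss a frame deadline.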

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. His experience spans academic research as a PhD holder and industry work, where he has contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.

About the Speaker

Adonai Vera - Machine Learning Engineer & DevRel at Voxel51. With over 7 years of experience building computer vision and machine learning models using TensorFlow\, Docker\, and OpenCV. I started as a software developer\, moved into AI\, led teams\, and served as CTO. Today\, I connect code and community to build open\, production-ready AI\, making technology simple\, accessible\, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. My experience spans academic research as a PhD holder and industry work, where I have contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.

About the Speaker

Adonai Vera - Machine Learning Engineer & DevRel at Voxel51. With over 7 years of experience building computer vision and machine learning models using TensorFlow\, Docker\, and OpenCV. I started as a software developer\, moved into AI\, led teams\, and served as CTO. Today\, I connect code and community to build open\, production-ready AI\, making technology simple\, accessible\, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. My experience spans academic research as a PhD holder and industry work, where I have contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.

About the Speaker

Adonai Vera - Machine Learning Engineer & DevRel at Voxel51. With over 7 years of experience building computer vision and machine learning models using TensorFlow\, Docker\, and OpenCV. I started as a software developer\, moved into AI\, led teams\, and served as CTO. Today\, I connect code and community to build open\, production-ready AI\, making technology simple\, accessible\, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. My experience spans academic research as a PhD holder and industry work, where I have contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.

About the Speaker

Adonai Vera - Machine Learning Engineer & DevRel at Voxel51. With over 7 years of experience building computer vision and machine learning models using TensorFlow\, Docker\, and OpenCV. I started as a software developer\, moved into AI\, led teams\, and served as CTO. Today\, I connect code and community to build open\, production-ready AI\, making technology simple\, accessible\, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. My experience spans academic research as a PhD holder and industry work, where I have contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

Join our virtual Meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.

Feb 5, 2026 9 - 11 AM Pacific Online. Register for the Zoom!

Unlocking Visual Anomaly Detection: Navigating Challenges and Pioneering with Vision-Language Models

Visual anomaly detection (VAD) is pivotal for ensuring quality in manufacturing, medical imaging, and safety inspections, yet it continues to face challenges such as data scarcity, domain shifts, and the need for precise localization and reasoning. This seminar explores VAD fundamentals, core challenges, and recent advancements leveraging vision-language models and multimodal large language models (MLLMs). We contrast CLIP-based methods for efficient zero/few-shot detection with MLLM-driven reasoning for explainable, threshold-free outcomes. Drawing from recent studies, we highlight emerging trends, benchmarks, and future directions toward building adaptable, real-world VAD systems. This talk is designed for researchers and practitioners interested in AI-driven inspection and next-generation multimodal approaches.

About the Speaker

Hossein Kashiani is a fourth-year Ph.D. student at Clemson University. His research focuses on developing generalizable and trustworthy AI systems, with publications in top venues such as CVPR, WACV, ICIP, IJCB, and TBIOM. His work spans diverse applications, including anomaly detection, media forensics, biometrics, healthcare, and visual perception.

Data-Centric Lessons To Improve Speech-Language Pretraining

Spoken Question-Answering (SQA) is a core capability for useful and interactive artificial intelligence systems. Recently, several speech-language models (SpeechLMs) have been released with a specific focus on improving their SQA performance. However, a lack of controlled ablations of pretraining data processing and curation makes it challenging to understand what factors account for performance, despite substantial gains from similar studies in other data modalities. In this work, we address this gap by conducting a data-centric exploration for pretraining SpeechLMs.

We focus on three research questions fundamental to speech-language pretraining data:

  • How to process raw web-crawled audio content for speech-text pretraining;
  • How to construct synthetic pretraining datasets to augment web-crawled data;
  • How to interleave (text, audio) segments into training sequences.

We apply the insights from our controlled data-centric ablations to pretrain a 3.8B-parameter SpeechLM, called SpeLangy, that outperforms models that are up to 3x larger by 10.2% absolute performance. We hope our findings highlight the impact of effective data curation for speech-language pretraining and guide future data-centric exploration in SpeechLMs.

About the Speaker

Vishaal Udandarao is a third year ELLIS PhD student, jointly working with Matthias Bethge at The University of Tuebingen and Samuel Albanie at The University of Cambridge/Google Deepmind. He is also a part of the International Max Planck Research School for Intelligent Systems. He is mainly interested in understanding the generalisation properties of foundation models, both vision-language models (VLMs) and large multi-modal models (LMMs), through the lens of their pre-training and test data distributions. His research is funded by a Google PhD Fellowship in Machine Intelligence.

A Practical Pipeline for Synthetic Data with Nano Banana Pro + FiftyOne

Most computer-vision failures come from the rare cases, the dark corners, odd combinations, and edge conditions we never capture enough in real datasets. In this session, we walk through a practical end-to-end pipeline for generating targeted synthetic data using Google’s Nano Banana Pro and managing it with FiftyOne. We’ll explore how to translate dataset gaps into generation prompts, create thousands of high-quality synthetic images, automatically enrich them with metadata, and bring everything into FiftyOne for inspection, filtering, and validation. By the end, you’ll understand how to build a repeatable synthetic-first workflow that closes real vision gaps and improves model performance on the scenarios that matter most.
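The gap-to-prompt step described above can be sketched in plain Python. The gap attributes and prompt template below are hypothetical; the actual pipeline would feed these prompts to Nano Banana Pro and load the results into FiftyOne rather than just building strings:

```python
# Hypothetical sketch: turning identified dataset gaps into targeted
# generation prompts. The attribute names and the prompt template are
# illustrative assumptions, not the session's actual pipeline.
from itertools import product

def gaps_to_prompts(scenes, conditions, objects):
    """Enumerate underrepresented (scene, condition, object) combos
    and render one generation prompt per combination."""
    prompts = []
    for scene, cond, obj in product(scenes, conditions, objects):
        prompts.append(
            f"A photorealistic image of a {obj} in a {scene}, {cond}, "
            f"high detail, realistic lighting"
        )
    return prompts

prompts = gaps_to_prompts(
    scenes=["construction site", "parking garage"],
    conditions=["night", "heavy rain"],
    objects=["forklift"],
)
# 2 scenes x 2 conditions x 1 object -> 4 targeted prompts
```

Enumerating the cross-product of gap attributes is what makes the workflow repeatable: when validation in FiftyOne reveals a new weak combination, it becomes one more entry in these lists.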

About the Speaker

Adonai Vera is a Machine Learning Engineer & DevRel at Voxel51, with over 7 years of experience building computer vision and machine learning models using TensorFlow, Docker, and OpenCV. He started as a software developer, moved into AI, led teams, and served as CTO. Today, he connects code and community to build open, production-ready AI, making technology simple, accessible, and reliable.

Making Computer Vision Models Faster: An Introduction to TensorRT Optimization

Modern computer vision applications demand real-time performance, yet many deep learning models struggle with high latency during deployment. This talk introduces how TensorRT can significantly accelerate inference by applying optimizations such as layer fusion, precision calibration, and efficient memory management. Attendees will learn the core concepts behind TensorRT, how it integrates into existing CV pipelines, and how to measure and benchmark improvements. Through practical examples and performance comparisons, the session will demonstrate how substantial speedups can be achieved with minimal model-accuracy loss. By the end, participants will understand when and how to apply TensorRT to make their CV models production-ready.
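The measurement step the abstract mentions can be illustrated with a framework-agnostic latency harness. The dummy workload below stands in for a real inference call; in practice you would time the original model and the TensorRT-optimized engine with the same harness and compare:

```python
# Minimal latency-benchmark harness (framework-agnostic sketch).
# Swap dummy_model for real inference calls to compare an original
# model against its TensorRT-optimized counterpart.
import statistics
import time

def benchmark(infer_fn, n_warmup=5, n_runs=50):
    """Return median and p95 latency in milliseconds for infer_fn."""
    for _ in range(n_warmup):  # warm up caches / lazy initialization
        infer_fn()
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer_fn()
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "median_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1],
    }

def dummy_model():
    # Stand-in workload; replace with an actual inference call.
    sum(i * i for i in range(10_000))

stats = benchmark(dummy_model)
```

Reporting tail latency (p95) alongside the median matters for real-time deployments, since occasional slow runs are what break frame-rate budgets.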

About the Speaker

Tushar Gadhiya is a Technical Lead at Infocusp Innovations, specialising in deep learning, computer vision, graph learning, and agentic AI. His experience spans academic research as a PhD holder and industry work, where he has contributed to multiple patents.

Feb 5 - AI, ML and Computer Vision Meetup

⚠️ IMPORTANT: REGISTRATION & APPROVAL PROCESS ⚠️ To ensure a high-quality, hands-on experience for everyone, we are curating the guest list for this session.

Please follow these 2 steps to apply for a spot: 👉 Step 1: RSVP 'Yes' on this Meetup page. 👉 Step 2: Submit your official registration on our website.

Please Note: Due to the interactive nature of the workshop and limited capacity, registration does not automatically guarantee entry. We review every application to ensure the right mix of attendees and will send you a final confirmation email shortly if your spot is secured.

---------------------------------------------------------------------------

Description The technology landscape is in constant motion. Just as your organization masters the currents of the Cloud Native paradigm, the next great wave—AI Native—is already on the horizon. Successfully navigating these continuous shifts is the defining challenge for modern technology leaders. How do you move from abstract strategic ideas to a concrete, co-created plan that your entire team can get behind?

This hands-on workshop provides the tools and a structured process to do just that. At the heart of this session is a unique and powerful "pattern language"—a physical deck of Paradigm, Antipattern, and Transformation cards. These tangible tools provide a shared vocabulary to diagnose your current state with unflinching honesty and design a future that is both ambitious and achievable.

Join us to turn complex strategic conversations into a dynamic, engaging, and visual planning session. You will leave not with a document that sits on a shelf, but with a collaboratively built roadmap and a clear, unified vision for your transformation journey.

How it Works We will guide your team through a structured, 3-hour collaborative process:

1. Diagnose Your Digital DNA (Current State Analysis) We begin by defining your "North Star"—the single most important objective for your transformation. Using Paradigm Cards, we will visually map your organization's maturity across the three great waves of innovation: Legacy (Waterfall), Cloud Native, and AI Native. Crucially, we will also use Antipattern Cards to expose the hidden traps and cultural pitfalls currently slowing you down.

2. The Strategic Pause Before planning the future, we introduce three powerful mental models: The Product Lifecycle, The Waves of Innovation, and The Six Modes of Operation. This brief "teaching moment" ensures everyone in the room has the strategic context needed to build a robust plan.

3. Build Your Actionable Roadmap This is where strategy becomes action. Using Transformation Pattern Cards, you will co-design a multi-wave roadmap. You will sequence the key moves needed to mature your current practices, responsibly "strangle" legacy systems, and prepare for the AI-native future. Finally, we pressure-test this plan against your identified antipatterns to ensure it solves your most critical problems.

Who Should Attend? This workshop is designed for technology leaders, architects, product managers, and cross-functional teams who are:

  • Kicking off a new transformation initiative.
  • Feeling "stuck" in their current Cloud Native or digital transformation.
  • Seeking to align leadership and delivery teams on a shared strategic vision.
  • Looking for a structured, repeatable process to build actionable technology roadmaps.

What You Will Achieve By the end of this interactive session, your team will have:

  • A Shared Language: A common vocabulary of patterns to make strategic conversations more effective and inclusive.
  • A Clear Diagnosis: A visual, honest snapshot of your organization's current state, including your maturity across the waves of innovation (e.g., Waterfall, Cloud Native, AI Native) and the hidden antipatterns holding you back.
  • An Actionable Roadmap: A collaboratively built, multi-wave transformation plan with clear, prioritized next steps to implement immediately.

Pattern Cards Workshop: Diagnosing & Planning Your Transformation
Molly Presley – host , Carl Watts – guest @ Library of Congress

In this episode of Data Unchained, host Molly Presley is joined by Carl Watts of the Library of Congress for a deep dive into what it takes to manage and preserve one of the largest and most complex data environments in the world. Carl shares firsthand insight into overseeing more than 150 petabytes of historical data, navigating large-scale tape migrations, and confronting the governance, copyright, and operational challenges that come with applying AI to national archives. The conversation explores whether artificial intelligence can responsibly unlock siloed collections across text, audio, video, and web archives, and what it truly costs to move, protect, and future-proof America's digital memory at petabyte scale.

Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US Hosted on Acast. See acast.com/privacy for more information.

AI/ML
Data Unchained
Podcast
Jozef de Vries – author , Tom Taulli – author , Benjamin Anderson – author

In a world where data sovereignty, scalability, and AI innovation are at the forefront of enterprise strategy, PostgreSQL is emerging as the key to unlocking transformative business value. This new guide serves as your beacon for navigating the convergence of AI, open source technologies, and intelligent data platforms. Authors Tom Taulli, Benjamin Anderson, and Jozef de Vries offer a strategic and practical approach to building AI and data platforms that balance innovation with governance, empowering organizations to take control of their data future. Whether you're designing frameworks for advanced AI applications, modernizing legacy infrastructures, or solving data challenges at scale, you can use this guide to bridge the gap between technical complexity and actionable strategy. Written for IT executives, data leaders, and practitioners alike, it will equip you with the tools and insights to harness PostgreSQL's unique capabilities (extensibility, unstructured data management, and hybrid workloads) for long-term success in an AI-driven world.

  • Learn how to build an AI and data platform using PostgreSQL
  • Overcome data challenges like modernization, integration, and governance
  • Optimize AI performance with model fine-tuning and retrieval-augmented generation (RAG) best practices
  • Discover use cases that align data strategy with business goals
  • Take charge of your data and AI future with this comprehensive and accessible roadmap

data data-engineering relational-databases postgresql AI/ML Data Management RAG
O'Reilly Data Engineering Books

AI dominates headlines—but how do we cut through the noise? In this candid session, tech leaders share how they’re navigating AI’s biggest promises and toughest realities. Learn how they balance innovation with responsibility, turn hype into action, and shape the future of work. Expect real challenges, lessons, and defining moments—not just theory.

AI/ML
Microsoft Ignite 2025

Unlock your potential and lead the future with this essential guide to thriving, creating, and innovating with confidence in the age of intelligence. Embark on a transformative journey with Becoming An AI Orchestrator: A Business Professional's Guide to Leading, Creating, and Thriving in the Age of Intelligence. This book is your essential guide to navigating the age of intelligence, where technology and creativity converge. Whether you're a creator, knowledge worker, or leader, you'll find invaluable insights and practical advice to help you thrive in this new era. From understanding the forces that have shaped our technological landscape to embracing the opportunities and challenges of AI, this book empowers you to lead, create, and innovate with confidence. Discover the power of AI and unlock your potential. Through engaging stories and expert guidance, you'll learn how to harness AI to enhance your work and life. This book is not just about technology; it's about empowering you to bring your visions to life and make a meaningful impact. With a focus on creativity, adaptability, and collaboration, Becoming An AI Orchestrator is your roadmap to success in a rapidly evolving world. Join the ranks of those who are not just adapting to change but leading it.

data data-science business-intelligence AI/ML
O'Reilly Business Intelligence Books
Sheamus McGovern – Founder | Engineer @ ODSC AI

Join Sheamus for an in-depth webinar on the exciting intersection of artificial intelligence and robotics. This session will provide a foundational understanding of how AI is revolutionizing the field of robotics, moving beyond traditional, pre-programmed systems to create intelligent, autonomous machines. Sheamus will explore the core concepts of AI that are most relevant to robotics, including machine learning, computer vision, and natural language processing. The webinar will cover practical applications and case studies, from self-navigating drones to collaborative industrial robots. Attendees will gain insight into the challenges and opportunities in this rapidly evolving field, and learn about the key technologies and skills needed to design and build the next generation of intelligent robots. Whether you are a student, an engineer, or simply curious about the future of automation, this session will provide a comprehensive and accessible introduction.

artificial intelligence robotics machine learning computer vision natural language processing drones industrial robotics
WEBINAR "Introduction to AI in Robotics"

Jaja Finance is on a mission to empower customers to buy, borrow, and build, driven by technology, fuelled by data, and built for the future. But internally, the data team faced fragmented ways of working: non-standard modelling, limited transparency across teams, slow time-to-serve, all while navigating governance needs. In just one year, the team built a resilient, transparent, scalable data foundation by consolidating all data on Snowflake and standardizing development in Coalesce. 

In this session, Sarah Tolfrey, Head of Data Operations shares Jaja’s foundation-first playbook, from templating and data quality to iterative feedback loops that helped unlock:

  • 5x faster delivery on complex and unstructured data
  • Same-day turnarounds for change requests with downstream impact checks
  • 30% faster development on complex projects using Coalesce's AI-powered Copilot
  • 47% reduction in model compute costs
  • Improved onboarding and cross-team visibility

This transformation opened the door to cutting-edge AI projects and broader analytics use across the business, accelerating Jaja's mission to serve customers with speed, intelligence, and confidence.

AI/ML Analytics Data Quality Snowflake

Navigating an AI-powered, data-driven financial services future can be challenging, particularly as the industry faces greater pressure to demonstrate tangible returns on their AI investments. Join the Financial Services keynote at London Snowflake World Tour and hear directly from industry leaders about their partnerships with Snowflake, the business and technology challenges they’re looking to solve and the key use cases they’re implementing. And learn what the latest Snowflake announcements mean for the industry. Whether you’re a business executive, a technology leader or a Snowflake user, this session will provide actionable insights on how to architect for data and AI ROI.

AI/ML Snowflake

Important: This is a paid conference. Purchasing tickets on the event website is required for admission.

The conference organizer, an AICamp partner, generously offers free and discounted tickets to the AICamp community.

  • 25 complimentary tickets with code NAVCOMPAICAMP
  • A special £25 discounted ticket (reg. £600) once the free tickets are gone, with code 25TAICAMP via this link

All codes are first come first serve.

Description: Where the future of cloud and AI takes shape. Civo Navigate: Sovereignty and AI Edition is the definitive event for leaders and builders navigating the next era of cloud technology. Join top minds from across the industry to explore transformative ideas, cutting-edge tools, and real-world solutions. You'll gain:

  • Insights on cutting-edge developments and breakthroughs in cloud, AI, and data sovereignty
  • Expert guidance and practical advice to inform and optimize your technology strategy
  • Opportunities to connect with industry leaders, share experiences, and build meaningful relationships

Join us in London for in-person access to expert-led sessions on AI, cloud innovation, and data sovereignty — and be part of the conversation shaping the future of tech.

(External RSVP) AI Conference: Civo Navigate London
Jez Clark – Co-Founder & CEO @ Eden Smith Group

In the race to unlock value from AI and data, technology isn’t the bottleneck - PEOPLE are. Organisations are pouring millions into data platforms and tooling yet still struggle to deliver measurable impact. Why? Because the real challenge lies in attracting, developing, and retaining the right talent - and empowering them to drive change without burning out.

In this session, Jez Clark, CEO of Eden Smith Group, shares a strategic blueprint for building high-impact data teams that don’t just deliver dashboards, but drive transformation. Drawing on 25 years of experience and partnerships across public and private sectors, Jez will explore how to future-proof your workforce for AI, embed continuous capability building, and bridge the gap between talent strategy and business outcomes.

Discussion points:

• Build data-centric talent ecosystems that scale

• Plan your workforce for an AI-first future

• Tackle change fatigue and boost team resilience

• Align learning and culture to performance

Whether you’re leading a data function, navigating transformation, or struggling to activate the full potential of your teams, this session will offer practical insights and frameworks to help you build talented data teams for business impact, and in turn create a lasting competitive advantage.

AI/ML
Big Data LDN 2025
Chris Mohr – President @ Software & Information Industry Association

Chris Mohr, President of the Software & Information Industry Association (SIIA), joins us to unpack the legal and policy challenges shaping the future of data, AI, and digital information. Discover how companies, policymakers, and innovators can prepare for an era where AI regulation, copyright liability, and privacy standards are evolving faster than ever. If you’re a CIO, CTO, or business leader navigating decentralized data, compliance, and digital transformation, this episode will give you the insights you need to stay ahead of the curve. Be sure to check out Chris's podcast The Business of Information: https://www.siia.net/the-business-of-information/ You can find out more about Chris and SIIA by visiting their website: https://www.siia.net/ Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US

AI #ArtificialIntelligence #Copyright #DataPrivacy #AIRegulation #TechnologyPolicy #DigitalTransformation #Section230 #DataUnchained #TechPodcast #CloudData #DecentralizedData #CIO #CTO #SIIA

Hosted on Acast. See acast.com/privacy for more information.

AI/ML C#/.NET
Data Unchained
Podcast