talk-data.com

Activities & events

Zoom Link

https://voxel51.com/computer-vision-events/ai-ml-data-science-meetup-sept-7/

Monitoring Large Language Models (LLMs) in Production

Just like with all machine learning models, once you put an LLM in production you’ll probably want to keep an eye on how it’s performing. Observing key language metrics about user interactions and responses can help you craft better prompt templates and guardrails for your applications. This talk looks at what you might want to monitor once you deploy your LLMs.

Sage Elliott is a Technical Evangelist – Machine Learning & MLOps at WhyLabs. He enjoys breaking down barriers to AI observability and talking to amazing people in the AI community.
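
For a rough sense of what such monitoring can look like, here is a minimal Python sketch that computes a few per-interaction language metrics. The metric choices and refusal markers are illustrative assumptions, not the speaker's or WhyLabs' actual tooling.

```python
# Illustrative sketch: per-interaction language metrics for LLM monitoring.
# The metric choices and REFUSAL_MARKERS list are assumptions for illustration.

REFUSAL_MARKERS = ("i can't", "i cannot", "as an ai")  # hypothetical list

def response_metrics(prompt: str, response: str) -> dict:
    """Compute a few per-response numbers worth charting over time."""
    tokens = response.split()
    lower = response.lower()
    return {
        "prompt_chars": len(prompt),
        "response_tokens": len(tokens),
        "refused": any(marker in lower for marker in REFUSAL_MARKERS),
        # Lexical diversity: a sudden drop can flag degenerate, repetitive output.
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
    }

if __name__ == "__main__":
    print(response_metrics("Summarize our refund policy.",
                           "Refunds are issued within 14 days of purchase."))
```

Logging metrics like these over time, rather than inspecting individual responses, is what makes drift in prompt quality or refusal rates visible.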

Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

The success of Neural Radiance Fields (NeRFs) for modeling and free-view rendering of static objects has inspired numerous attempts on dynamic scenes. Current techniques that utilize neural rendering for facilitating free-view videos (FVVs) are either restricted to offline rendering or capable of processing only brief sequences with minimal motion. In this paper, we present a novel technique, the Residual Radiance Field (ReRF), as a highly compact neural representation to achieve real-time FVV rendering of long-duration dynamic scenes.

Minye Wu – Postdoctoral researcher, KU Leuven
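
As a toy illustration of the residual idea the abstract describes (not the paper's actual architecture), the sketch below stores a full feature grid only for a base frame and a sparsified residual for each later frame; the grid size, threshold, and density of the toy data are assumptions.

```python
# Toy sketch of a residual representation: keep one dense base-frame feature
# grid, then store only thresholded per-frame residuals, which compress well.
# Grid shape and threshold are illustrative assumptions, not ReRF's design.
import numpy as np

rng = np.random.default_rng(0)
base = rng.standard_normal((32, 32, 32, 8)).astype(np.float32)  # frame-0 grid

def encode_residual(frame_grid: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Keep only residual entries above a threshold; the rest are zeroed."""
    residual = frame_grid - base
    residual[np.abs(residual) < threshold] = 0.0  # sparsify for compact storage
    return residual

def decode_frame(residual: np.ndarray) -> np.ndarray:
    """Reconstruct a frame's feature grid from the shared base plus residual."""
    return base + residual

# A frame that deviates only slightly from the base, as in low-motion video.
frame_t = base + 0.1 * rng.standard_normal(base.shape).astype(np.float32)
res = encode_residual(frame_t)
print("nonzero residual fraction:", np.count_nonzero(res) / res.size)
print("max reconstruction error:", np.abs(decode_frame(res) - frame_t).max())
```

The point of the exercise: when consecutive frames change little, most of the residual falls below the threshold, so long sequences can be stored and streamed far more compactly than one dense grid per frame.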

EgoSchema: A Dataset for Truly Long-Form Video Understanding

Introducing EgoSchema, a very long-form video question-answering dataset and benchmark for evaluating the long-form video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over 5000 human-curated multiple-choice question-answer pairs, spanning over 250 hours of real video data and covering a very broad range of natural human activity and behavior.

Karttikeya is a PhD student in Computer Science in the Department of Electrical Engineering & Computer Sciences (EECS) at the University of California, Berkeley, advised by Prof. Jitendra Malik. Earlier, he held a visiting researcher position at Meta AI, where he collaborated with Dr. Christoph Feichtenhofer and his team.
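
As a hypothetical illustration of how a multiple-choice benchmark like this is scored, the sketch below computes accuracy over a few toy records; the record format and the random-baseline predictor are assumptions, not EgoSchema's official schema.

```python
# Minimal sketch of scoring a model on multiple-choice video QA.
# The record layout below is a hypothetical stand-in for the real dataset.
import random

examples = [
    {"question": "What is the person assembling?",
     "options": ["a shelf", "a bicycle", "a tent", "a drone", "a chair"],
     "answer": 0},
    {"question": "Why does the person pause midway?",
     "options": ["to read instructions", "to answer a call", "to eat",
                 "to find a tool", "to rest"],
     "answer": 3},
]

def accuracy(predict) -> float:
    """Score a predictor that maps a record to an option index."""
    correct = sum(predict(ex) == ex["answer"] for ex in examples)
    return correct / len(examples)

random.seed(0)
print("random baseline:", accuracy(lambda ex: random.randrange(len(ex["options"]))))
```

With five options per question, random guessing sits near 20% accuracy, which is the floor against which long-form video models are compared.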

September AI, Machine Learning & Data Science Meetup

Marc Andreessen – Co-founder & General Partner @ Andreessen Horowitz
Arsalan
Lin Qiao
Jitendra Malik – Professor @ University of California, Berkeley
Eric Schmidt – Former CEO @ Google (Alphabet)
Ali Ghodsi – CEO @ Databricks
Reynold Xin – Co-founder and Chief Architect @ Databricks
Hannes Muhleisen
Matei Zaharia – Chief Technologist @ Databricks
Michael Armbrust @ Databricks
Harrison Chase – CEO @ LangChain

0:00 Open
6:08 Ali Ghodsi & Marc Andreessen
32:06 Reynold Xin
48:09 Michael Armbrust
1:00:00 Matei Zaharia & Panel
1:27:10 Hannes Muhleisen
1:37:43 Harrison Chase
1:49:15 Lin Qiao
2:05:03 Jitendra Malik
2:21:15 Arsalan & Eric Schmidt

AI/ML
Databricks DATA + AI Summit 2023