talk-data.com

Activities & events

Join us for a virtual event to hear talks from experts on the latest developments in Visual Document AI.

Date and Location

Nov 6, 2025, 9-11 AM Pacific, online. Register for the Zoom!

Document AI: A Review of the Latest Models, Tasks and Tools

In this talk, we'll go through everything document AI: trends, models, tasks, and tools. By the end of this talk, you'll be ready to start building apps based on document models.

About the Speaker

Merve Noyan works on multimodal AI and computer vision at Hugging Face, and she's the author of the book Vision Language Models on O'Reilly.

Run Document VLMs in Voxel51 with the VLM Run Plugin — PDF to JSON in Seconds

The new VLM Run Plugin for Voxel51 enables seamless execution of document vision-language models directly within the Voxel51 environment. This integration transforms complex document workflows — from PDFs and scanned forms to reports — into structured JSON outputs in seconds. By treating documents as images, our approach remains general, scalable, and compatible with any visual model architecture. The plugin connects visual data curation with model inference, empowering teams to run, visualize, and evaluate document understanding models effortlessly. Document AI is now faster, reproducible, and natively integrated into your Voxel51 workflows.

About the Speaker

Dinesh Reddy is a founding team member of VLM Run, where he is helping nurture the platform from a sapling into a robust ecosystem for running and evaluating vision-language models across modalities. Previously, he was a scientist at Amazon AWS AI, working on large-scale machine learning systems for intelligent document understanding and visual AI. He completed his Ph.D. at the Robotics Institute, Carnegie Mellon University, focusing on combining learning-based methods with 3D computer vision for in-the-wild data. His research has been recognized with the Best Paper Award at IEEE IVS 2021 and fellowships from Amazon Go and Qualcomm.

CommonForms: Automatically Making PDFs Fillable

Converting static PDFs into fillable forms remains a surprisingly difficult task, even with the best commercial tools available today. We show that with careful dataset curation and model tuning, it is possible to train high-quality form field detectors for under $500. As part of this effort, we introduce CommonForms, a large-scale dataset of nearly half a million curated form images. We also release a family of highly accurate form field detectors, FFDNet-S and FFDNet-L.

About the Speaker

Joe Barrow is a researcher at Pattern Data, specializing in document AI and information extraction. He previously worked at the Adobe Document Intelligence Lab after receiving his PhD from the University of Maryland in 2022.

Visual Document Retrieval: How to Cluster, Search and Uncover Biases in Document Image Datasets Using Embeddings

In this talk you'll learn about the task of visual document retrieval and the models widely used by the community, and see them in action in the open-source FiftyOne App. You'll learn how to use these models to identify groups and clusters of documents, find unique documents, uncover biases in your visual document dataset, and search over your document corpus using natural language.
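The embedding-based retrieval the talk describes can be illustrated with a minimal, library-free sketch (the toy vectors and function names below are ours for illustration, not the FiftyOne API): each document image is mapped to an embedding vector by some model, and a natural-language query is answered by ranking documents by cosine similarity to the query's embedding.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, doc_embeddings, top_k=2):
    """Rank documents by similarity to the query embedding."""
    scored = [(name, cosine_similarity(query_embedding, emb))
              for name, emb in doc_embeddings.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

# Toy 3-d embeddings standing in for real model outputs
docs = {
    "invoice.png": [0.9, 0.1, 0.0],
    "contract.png": [0.1, 0.9, 0.1],
    "receipt.png": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]  # hypothetical embedding of a query like "billing documents"
print(search(query, docs))
```

In practice the embeddings come from a visual document retrieval model and the index holds thousands of vectors, but the ranking logic is the same idea.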

About the Speaker

Harpreet Sahota is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He has a deep interest in VLMs, Visual Agents, Document AI, and Physical AI.

Nov 6 - Visual Document AI: Because a Pixel is Worth a Thousand Tokens

The PyData Paris conference will take place at the Cité des Sciences on September 25-26, 2024.

📢 Tickets are for sale at https://pydata.org/paris2024.

Hosted by QuantStack and NumFOCUS, this event promises to bring together open-source maintainers and enthusiasts, as well as experts from across the globe, all united by their passion for open-source technologies.

PyData Paris 2024 is a celebration of the thriving Parisian open-source scientific computing and AI/ML community, showcasing the blossoming ecosystem that includes key players such as Hugging Face and Mistral AI, open-source projects like scikit-learn and Jupyter, as well as open-source software corporations like QuantStack and :probabl.

Our conference is honored to feature an impressive lineup of keynote speakers who will share their invaluable insights:

  • Arthur Mensch, co-founder and CEO of Mistral AI
  • Katharine Jarmul, privacy activist, author, and co-founder of PyLadies
  • Olivier Grisel, engineer at :probabl. and developer of scikit-learn
  • Merve Noyan, machine learning advocate engineer at Hugging Face

PyData Paris - 2024 conference at Cité des Sciences

PyData Amsterdam 2024 is a 3-day event for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.

Over the span of 3 days, attendees will have the opportunity to participate in workshops, attend live keynote sessions and talks, as well as get to know fellow members of the PyData Community.

Note: This is a paid event. To attend, get your tickets here >

AGENDA Full Program >

Brief agenda below:

Wednesday, September 18
8:30 - 17:00 | Tutorial day | 8 tutorials | Full program HERE

Thursday, September 19
8:00 - 9:00 | Registration
9:00 - 10:20 | Keynote | Open-Source Multimodal AI by Merve Noyan, Hugging Face 🤗
10:35 - 17:00 | Talks by 21 speakers | Check out the full program HERE
17:00 - 17:50 | Lightning talks
17:50 - 18:00 | Closing notes
18:00 - 20:00 | Social events with Snowflake ❄️

Friday, September 20
8:00 - 9:00 | Registration
9:00 - 10:20 | Keynote | Applied NLP in the age of Generative AI by Ines Montani, Explosion
10:35 - 16:20 | Talks by 23 speakers | Check out the full program HERE
16:30 - 17:20 |
17:20 - 17:30 | Closing notes
18:00 - 18:30 | Standup comedy

DIRECTIONS
Location: Gedempt Hamerkanaal 231, 1021 KP Amsterdam

PyData Amsterdam 2024 Conference September 18 - 20

Important: RSVP at https://www.aicamp.ai/event/eventdetails/W2024022909 (due to limited room capacity, all guests must pre-register via the link above for admission).

Description: Welcome to the monthly in-person AI meetup in Paris. Join us for deep-dive tech talks on AI, GenAI, LLMs, and machine learning, food/drink, and networking with speakers and fellow developers.

This time we are joining forces with our friends from Hugging Face and Weaviate to bring the full power of LLMs in production to you.

Agenda:

  • 5:30pm-6:15pm: Check-in, networking & welcome
  • 6:15pm-7:30pm: Tech talks and Q&A
  • 7:30pm-9:00pm: Food, drinks & more networking

Tech Talk: Cutting-edge open-source LLM ecosystem at Hugging Face
Speaker: Merve Noyan @Hugging Face

Tech Talk: Some things about Multimodal Search
Speaker: Daniel Phiri @Weaviate

Tech Talk: RAG in Production - a legal data story
Speaker: Baudouin Arbarétier @Ordalie.ai
Abstract: This talk covers the challenges of RAG on legal data, fine-tuning embedding models for language/domain adaptation, and how to build production-ready AI-native applications on top of a modern tech stack.

Stay tuned as we are updating speakers and schedules. If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics

Venue: Hugging Face Paris HQ, 124 Rue Réaumur, 75002 Paris

Sponsors: Hugging Face, Weaviate. We are actively seeking sponsors to support the AI developer community, whether by offering venue space, providing food, or cash sponsorship. Sponsors will have the chance to speak at the meetups, receive prominent recognition, and gain exposure to our extensive membership base of 10,000+ local or 300K+ developers worldwide.

Community on Slack/Discord

  • Event chat: chat and connect with speakers and attendees
  • Sharing blogs, events, job openings, project collaborations

Join Slack (search and join the #paris channel) | Join Discord

AI meetup: LLMs in Production with Hugging Face and Weaviate

Merve Noyan – Developer Advocacy Engineer for Open-Source @ Hugging Face

We talked about:

  • Merve's background
  • Merve's first contributions to open source
  • What Merve currently does at Hugging Face (Hub, Spaces)
  • What it means to be a developer advocacy engineer at Hugging Face
  • The best way to get open source experience (Google Summer of Code, Hacktoberfest, and sprints)
  • The peculiarities of hiring as it relates to code contributions
  • Best resources to learn about NLP besides Hugging Face
  • Good first projects for NLP
  • The most important topics in NLP right now
  • NLP ML Engineer vs NLP Data Scientist
  • Project recommendations and other advice to catch the eye of recruiters
  • Merve on Twitch and her podcast
  • Finding Merve online
  • Merve and Mario Kart

Links:

  • Hugging Face Course: https://hf.co/course
  • Natural Language Processing in TensorFlow: https://www.coursera.org/learn/natural-language-processing-tensorflow
  • GitHub ML Poetry: https://github.com/merveenoyan/ML-poetry
  • Tackling multiple tasks with a single visual language model: https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model
  • Hugging Face BigScience T0pp: https://huggingface.co/bigscience/T0pp
  • Pathways Language Model (PaLM) blog: https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html

MLOps Zoomcamp: https://github.com/DataTalksClub/mlops-zoomcamp

Join DataTalks.Club: https://datatalks.club/slack.html

Our events: https://datatalks.club/events.html

DataTalks.Club