talk-data.com


Activities & events


LLMs like GPT can give useful answers to many questions, but there are also well-known issues with their output: the responses may be outdated, inaccurate, or outright hallucinations, and it’s hard to know when you can trust them. And they don’t know anything about you or your organization’s private data (we hope). RAG can help reduce the problems with “hallucinated” answers and make the responses more up-to-date, accurate, and personalized by injecting related knowledge, including non-public data. In this talk, we’ll go through what RAG means, demo some ways you can implement it, and warn of some traps you still have to watch out for.

RAG, LLMs, retrieval-augmented generation
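As an illustrative aside (not part of the talk materials): the retrieve-then-inject pattern the abstract describes can be sketched in a few lines of Python. The document list, the toy keyword retriever, and the llm_complete placeholder below are all assumptions for illustration, not any particular vendor's API.

    # Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant
    # passages, inject them into the prompt, then call a model.
    DOCUMENTS = [
        "Our return policy allows refunds within 30 days of purchase.",
        "Support is available on weekdays between 09:00 and 17:00 CET.",
        "The 2024 roadmap focuses on multilingual search.",
    ]

    def score(query: str, doc: str) -> int:
        # Toy relevance score: number of shared lowercase words.
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query: str, k: int = 2) -> list:
        return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

    def llm_complete(prompt: str) -> str:
        # Placeholder for a real LLM call (e.g. a chat/completions client).
        return f"[model answer grounded in a prompt of {len(prompt)} characters]"

    def answer(question: str) -> str:
        context = "\n".join(retrieve(question))
        prompt = (
            "Answer using only the context below. "
            "If the context is insufficient, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return llm_complete(prompt)

    print(answer("When can I get a refund?"))

Grounding the prompt in retrieved passages is also what makes answers checkable: the model can cite the injected context instead of relying only on what it memorised during training.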

In this talk, I will present some of the latest advances in retrieval-augmented generation (RAG) techniques, which combine the strengths of both retrieval-based and generative approaches for chatbot development. Retrieval-based methods can leverage existing text documents to provide informative and coherent responses, while generative methods can produce novel and engaging conversations personalized to the user.

RAG, retrieval-augmented generation

Important: RSVP here (due to room capacity and building security, you must pre-register at the link for admission).

Description: Welcome to our in-person AI meetup in New York. Join us for deep-dive tech talks on AI, GenAI, LLMs, and ML, hands-on workshops, food/drink, and networking with speakers and fellow developers.

Tech Talk: Doing RAG the right way! - Advanced RAG Techniques
Speaker: Zain Hasan (Weaviate)
Abstract: In this talk, I will present some of the latest advances in retrieval-augmented generation (RAG) techniques, which combine the strengths of both retrieval-based and generative approaches for chatbot development. Retrieval-based methods can leverage existing text documents to provide informative and coherent responses, while generative methods can produce novel and engaging conversations personalized to the user.

Speaker: Kristian Aune (vespa.ai)
Abstract: LLMs like GPT can give useful answers to many questions, but there are also well-known issues with their output: the responses may be outdated, inaccurate, or outright hallucinations, and it’s hard to know when you can trust them. And they don’t know anything about you or your organization’s private data (we hope). RAG can help reduce the problems with “hallucinated” answers and make the responses more up-to-date, accurate, and personalized by injecting related knowledge, including non-public data. In this talk, we’ll go through what RAG means, demo some ways you can implement it, and warn of some traps you still have to watch out for.

Speakers/Topics: Stay tuned as we are updating speakers and schedules. If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics

Sponsors: We are actively seeking sponsors to support our community, whether by offering venue space, providing food/drink, or contributing cash sponsorship. Sponsors not only speak at the meetups and receive prominent recognition, but also gain exposure to our extensive membership base of 20,000+ AI developers in New York and 350K+ worldwide.

Community on Slack/Discord

  • Event chat: chat and connect with speakers and attendees
  • Share blogs, events, job openings, and project collaborations
  • Join Slack/Discord (scroll down to the bottom)
AI Meetup (May): Generative AI, LLMs and ML

Join us at our May meetup, hosted in collaboration with Schiphol Group! This meetup will be all about the challenges and opportunities of using Large Language Models (LLMs) in production environments. First, Schiphol Group will demonstrate how they use LLMs externally to enhance customer care support. Afterwards, Albert Heijn will show the power of LLMs by showcasing their internal LLM "Samurai", which is used for efficient information retrieval. Excited?! Join us on the 2nd of May 2024!

SCHEDULE

  • 18:00-19:00: Welcome with food and drinks! (🍕 / 🍺)
  • 19:00-19:45: Talk 1 - "How Generative AI will bring Schiphol's customer care to new heights"
  • 19:45-20:00: Break
  • 20:00-20:45: Talk 2 - "Unleashing Samurai: How Albert Heijn is Making Content Retrieval Hassle-free for Employees"
  • 20:45-22:00: Networking / drinks!

TALKS

[Talk 1]: “How Generative AI will bring Schiphol's customer care to new heights” by Sebastiaan de Vries & Justin van Dongen
In this talk, Sebastiaan de Vries, Senior Data Scientist, and Justin van Dongen, Data Engineer at Schiphol Group, will elaborate on how Schiphol is enhancing customer care support through its online channels by implementing domain-knowledgeable Large Language Models (LLMs) with adequate guardrails. Justin will present the centralised platform currently being realised as the base for their future LLM applications, and the vision they have for this foundation. Sebastiaan will delve into the challenges faced in developing a customer-facing model, the opportunities presented by leveraging LLMs, and the importance of implementing guardrails to ensure responsible AI usage. Furthermore, he will discuss the technical implementation, validation methods, and measurable impact of integrating LLMs in Schiphol's day-to-day operations, along with the desirable next steps for their newest digital agent.
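As a hedged illustration only (the abstract does not describe Schiphol's actual implementation): customer-facing guardrails are often a thin wrapper around the model call that screens the input and redacts the output, along these lines. Every name and rule below is hypothetical.

    import re

    # Hypothetical guardrail wrapper around an LLM call; the blocklist, the
    # regex, and llm_complete are illustrative stand-ins.
    BLOCKED_TOPICS = ("weapon", "explosive")
    EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

    def llm_complete(prompt: str) -> str:
        return "You can reach our care team at care@example.com."  # stand-in reply

    def guarded_answer(user_message: str) -> str:
        # Input guardrail: refuse clearly out-of-scope or unsafe requests.
        if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
            return "Sorry, I can't help with that."
        reply = llm_complete(user_message)
        # Output guardrail: redact anything that looks like an e-mail address.
        return EMAIL_RE.sub("[redacted]", reply)

    print(guarded_answer("How early should I arrive for my flight?"))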

[Talk 2]: “Unleashing Samurai: How Albert Heijn is Making Content Retrieval Hassle-free for Employees” by Samuel Oyediran & Christiaan Rademan
Albert Heijn is transforming how employees search for information through the Samurai (SAM-your-AI) RAG application. This application was conceived during an AH Technology Hackathon and developed further by AH Gen AI Labs. Still in its pilot phase, Samurai has helped Dutch and non-Dutch-speaking colleagues across Albert Heijn efficiently access internal documentation using semantic search.

In the upcoming presentation, Samuel Oyediran, ML Engineer (Commerce) and Christiaan Rademan, ML Engineer (Digital), will provide insightful details on how Samurai was built.
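For readers unfamiliar with the term, "semantic search" here means ranking documents by vector similarity to the query rather than by exact keyword match. A self-contained toy sketch follows; the bag-of-words "embedding" stands in for a real embedding model and vector store, and the documents are invented.

    import math
    from collections import Counter

    DOCS = {
        "expenses.md": "How to submit travel expenses and get them reimbursed.",
        "onboarding.md": "Checklist for onboarding new store employees.",
        "vpn.md": "Connecting to the corporate VPN from home.",
    }

    def embed(text: str) -> Counter:
        # Toy embedding: word counts. A real system would use a trained model.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def search(query: str, k: int = 1) -> list:
        q = embed(query)
        ranked = sorted(DOCS, key=lambda name: cosine(q, embed(DOCS[name])), reverse=True)
        return ranked[:k]

    print(search("how do I claim travel expenses"))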

DIRECTIONS
Hosted by Schiphol Group at their head office building at Evert van de Beekstraat 202, 1118 CP Schiphol. To arrive at the Schiphol Head Office building (see photo attachments for directions):

Public transport
From the railway station/Schiphol Plaza: exit the building and head for the buses. Take one of the Schiphol Sternet regional buses in the direction of Zuid-P30 from Platform B10 (lines 191, 195, 198) or B12 (lines 190, 192, 199). These buses run every 10 minutes throughout the day. Get off the bus at the Schipholgebouw [Schiphol Head Office building] stop (fourth stop). Journey time: 10 minutes.

Walking
From the railway station/Schiphol Plaza: walk along Schiphol Boulevard towards the Hilton Hotel and past The Base A,B,C office building. Walking time: 10 minutes.

Car
You can check in at the reception and you'll receive a free parking ticket. NOTE: parking is not directly next to the office. You can park your car at P22 (see photo attachments for directions).

On the fly with AI: LLM action for customer care support & information retrieval

Join us for the upcoming PyData Amsterdam meetup, hosted in collaboration with Adyen.

Schedule

  • 18:00-19:00: Walk in with drinks and food (🍕 / 🍺)
  • 19:00-19:45: Talk 1 - "Fraud or no Fraud: sounds simple, right?"
  • 19:45-20:00: Short break
  • 20:00-20:45: Talk 2 - "Building GenAI and ML systems with OSS Metaflow"
  • 20:45-21:30: Networking + drinks and bites

[Talk 1]: “Fraud or no Fraud: sounds simple, right?” by Sophie van den Berg
The surge in online payments has brought a surge in fraudsters looking to exploit the system. To combat this, we're leveraging machine learning (ML) models to identify and block fraudulent transactions. While this may seem like a straightforward supervised learning task, there's a key challenge: how do we confirm whether a blocked transaction was truly fraudulent? This talk delves into counterfactual evaluation and other obstacles encountered when building an ML model for fraud detection at Adyen.
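A brief aside on what counterfactual evaluation means here: once a model blocks a transaction, its true label is never observed, so a new policy cannot be scored on blocked traffic directly. One common workaround is to let a small random slice of would-be-blocked traffic through, log the probability with which that happened, and reweight by it. The sketch below shows a self-normalised inverse-propensity estimate on made-up data; it illustrates the general idea, not Adyen's method.

    # Logged transactions: (risk_score, allowed?, propensity of being allowed,
    # fraud label if observed). All values are invented for illustration.
    logged = [
        (0.10, True, 1.00, 0),
        (0.35, True, 1.00, 0),
        (0.80, True, 0.05, 1),     # exploration traffic: normally blocked, allowed 5% of the time
        (0.90, True, 0.05, 1),
        (0.95, False, 0.05, None), # blocked, so the true label was never observed
    ]

    def estimated_fraud_rate(new_threshold: float) -> float:
        # Self-normalised inverse-propensity estimate of the fraud rate among
        # transactions that a new threshold policy would approve.
        num = den = 0.0
        for risk, allowed, propensity, fraud in logged:
            if not allowed or risk >= new_threshold:
                continue  # label unobserved, or the new policy would block it anyway
            num += fraud / propensity
            den += 1.0 / propensity
        return num / den if den else float("nan")

    print(estimated_fraud_rate(new_threshold=0.85))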

[Talk 2]: “Building GenAI and ML systems with OSS Metaflow” by Hugo Bowne-Anderson
This talk explores a framework for how data scientists can deliver value with Generative AI: How can you embed LLMs and foundation models into your pre-existing software stack? How can you do so using open-source Python? What changes about the production machine learning stack, and what remains the same?

We motivate the concepts through generative AI examples in domains such as text-to-image (Stable Diffusion) and speech-to-text (Whisper) applications. Moreover, we’ll demonstrate how workflow orchestration provides a common scaffolding to ensure that your Generative AI and classical Machine Learning workflows alike are robust and ready to move safely into production systems.

This talk is aimed squarely at (data) scientists and ML engineers who want to focus on the science, data, and modeling, but want to be able to access all their infrastructural, platform, and software needs with ease!
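To make the orchestration point above concrete, here is a minimal Metaflow flow using the open-source metaflow package. The step contents are placeholders, not the speaker's demo; only the FlowSpec / @step / self.next structure is the standard Metaflow idiom.

    from metaflow import FlowSpec, step

    class GenAIDemoFlow(FlowSpec):
        # Each @step runs as an isolated task; values assigned to self are
        # versioned as artifacts and passed between steps by Metaflow.

        @step
        def start(self):
            self.prompts = ["a lighthouse at dusk", "a foggy harbour"]
            self.next(self.generate)

        @step
        def generate(self):
            # A real flow would call a model (e.g. Stable Diffusion or Whisper) here.
            self.outputs = [f"<image for: {p}>" for p in self.prompts]
            self.next(self.end)

        @step
        def end(self):
            print(f"Generated {len(self.outputs)} artifacts")

    if __name__ == "__main__":
        GenAIDemoFlow()

Saved as genai_demo_flow.py, it runs with "python genai_demo_flow.py run"; the same file can later be scheduled or moved to remote compute without changing the step code.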

Combating online payment fraud & putting LLMs in open-source production systems