talk-data.com
AI Meetup (February): AI and Generative AI and LLMs
*** RSVP: https://www.aicamp.ai/event/eventdetails/W2024022210 (Due to limited room capacity, you must pre-register at the link for admission).
Welcome to the monthly in-person AI meetup in London. Join us for deep dive tech talks on AI, GenAI, LLMs and machine learning, food/drink, networking with speakers and fellow developers.
Agenda:
* 6:00pm–7:00pm: Check-in, food/drink and networking
* 7:00pm–9:00pm: Tech talks and Q&A
* 9:00pm: Open discussion and mixer
Tech Talk: Deploy self-hosted open-source AI solutions
Speaker: Dmitri Evseev @Arbitration City
Abstract: I will share practical insights from my journey from law firm partner to AI startup founder, focusing on deploying self-hosted, open-source AI solutions in the legal sector and beyond. I will discuss the benefits of self-hosting over third-party APIs, the challenges of implementing these systems for production use, and methods to optimise GPU usage with open-source tools. The talk will also cover approaches to integrating containerised architectures and encryption for secure, scalable AI deployment, aiming to assist the LLMOps community and others exploring self-hosted AI and retrieval-augmented generation (RAG).
Tech Talk: Falcon OS - An open source LLM Operating System
Speaker: Heiko Hotz (Google)
Abstract: In this talk I will introduce the Falcon OS project, a collaboration with the Technology Innovation Institute and Weights & Biases. Falcon OS is a new operating system project centred around the open-source Falcon 40B LLM. It aims to simplify complex tasks through natural language, bridging the gap between users and computers. This talk will explore its potential to transform AI applications and what it takes for an LLM to reason and act, a key capability for such a system.
Tech Talk: Navigating LLM Deployment: Tips, Tricks, and Techniques
Speaker: Meryem Arik (TitanML)
Abstract: Self-hosted language models will power the next generation of applications in critical industries such as financial services, healthcare, and defence. Self-hosting LLMs, as opposed to using API-based models, brings its own set of challenges: beyond solving business problems, engineers must wrestle with the intricacies of model inference, deployment, and infrastructure. In this talk we will discuss best practices in model optimisation, serving, and monitoring, with practical tips and real case studies.
Speakers/Topics: Stay tuned as we update speakers and the schedule. If you are interested in speaking to our community, we invite you to submit topics for consideration: Submit Topics
Sponsors: We are actively seeking sponsors to support the AI developer community, whether by offering venue space, providing food, or contributing cash sponsorship. Sponsors will have the chance to speak at meetups, receive prominent recognition, and gain exposure to our extensive membership base of 10,000+ local members and 300K+ developers worldwide.
Community on Slack/Discord:
* Event chat: chat and connect with speakers and attendees
* Share blogs, events, job openings, and project collaborations
Join Slack (search for and join the #london channel) | Join Discord
Sessions & talks
No individual activities are attached to this event yet.