talk-data.com


Showing 5 results

Activities & events


What is AGI, and are we close to achieving it? In this talk, I explore the most consequential question in AI today: the path to Artificial General Intelligence. Drawing from the perspectives of leading researchers—Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Fei-Fei Li, and Richard Sutton—I examine competing definitions of AGI, the major architectural approaches being pursued, and what technical breakthroughs remain unsolved. I discuss where we are now (remarkable capabilities alongside surprising limitations), where experts believe we're heading (timelines ranging from years to decades), and what keeps AI safety researchers up at night (emergent deception, self-preservation, and the control problem). Finally, I share my own synthesis from my new book Foundations of Artificial Intelligence Agents Vol. 2: AI Agents. Whether AGI arrives in five years or fifty, the choices we make now will shape what that future looks like.

The Path to Artificial General Intelligence

As AI accelerates, so too does the hype. But are we truly edging closer to Artificial General Intelligence – or are we hurtling down a path that demands a serious course correction?

Join us for a lively and thought-provoking panel exploring where AI is really heading. We’ll unpack the bold claims around AGI, question the assumptions driving current investment and research, and ask whether the AI community is prioritising speed over safety, depth over direction. Expect insight, disagreement, and maybe even a few uncomfortable truths – from experts at the cutting edge of AI research, ethics, and innovation. Whether you’re an optimist, sceptic, or somewhere in between, this is the conversation you don’t want to miss.

The format:

  • Expert Panel – No fluff. Just powerful, practical, future-forward insight
  • Live Q&A – Ask the questions everyone else is afraid to
  • Networking – Meet other brilliant humans building the future

AI/ML
The Future of AI: Are We on the Path to AGI or Is a Quick Turnaround Needed

As spring blossoms come to a close, join us for an exhilarating and insightful meetup featuring two captivating sessions designed to ignite your passion for cutting-edge AI and network analysis. Let’s celebrate the season of growth and renewal with exciting new knowledge and hands-on experiences!

Join us for a social and informative evening!

----------------------------------------------------------------------------------------------------

"From climbing a mountain to Large Language Models: A path of Reinforcement Learning" by Imre Mali

In machine learning, supervised learning often takes centre stage almost exclusively, and more often than not, unjustly so. A less popular approach, reinforcement learning, has long been used to solve control problems: balancing a pole on a cart, playing Atari games, defeating the world champion in chess and Go, and nowadays fine-tuning Large Language Models. In this talk we will take a hands-on approach, walk through the basics of this hidden force behind the most spectacular AI achievements, and debate why reinforcement learning might be the key to Artificial General Intelligence.
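
For a concrete feel for those basics, here is a minimal, self-contained sketch of tabular Q-learning on a toy five-state chain. It is not material from the talk; the environment, hyperparameters, and variable names are purely illustrative. The agent learns, from trial and error alone, that stepping right in every state is what reaches the reward.

```python
import random

# Toy "chain" environment: states 0..4, start at state 0, reward 1 for reaching state 4.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Action 0 moves left, action 1 moves right; reaching the goal ends the episode."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

alpha, gamma, epsilon = 0.1, 0.95, 0.1      # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action], learned purely from interaction

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection (random tie-breaking keeps early exploration honest).
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next, r, done = step(s, a)
        # Q-learning update: nudge Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a').
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# After training, the greedy policy should step right in every non-terminal state.
print([0 if Q[s][0] > Q[s][1] else 1 for s in range(N_STATES)])
```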

----------------------------------------------------------------------------------------------------

"Exploring Connections: Modeling and Visualizing Complex Networks" by Bogdan Mursa

The presentation is designed to equip participants with the skills to model, analyze, and visualize complex networks using state-of-the-art tools. This session will begin with an introduction to complex network theory, focusing on its significance in social analysis and its application in understanding structural patterns and behaviors within various systems.

Participants will actively engage in sourcing complex network data from open-source repositories, gaining practical experience in identifying and retrieving relevant datasets. With Python and the NetworkX library, we will analyze these networks, extracting topological properties and interpreting the contextual meanings of real-world networks. We will cover crucial analytical techniques, including the detection and extraction of significant nodes and communities, and the use of centrality measures to identify influential nodes within the network.

Further, we will employ Gephi, a powerful tool for network visualization, to bring our analyzed data to life visually. This hands-on session is perfect for anyone interested in the practical applications of network theory in research or industry, offering insights and techniques that can be applied to a broad range of disciplines and professional contexts.
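
As a small taste of that workflow, the hedged sketch below uses NetworkX's bundled karate-club graph to compute basic topological properties, centrality measures, and communities, then exports a GEXF file that can be opened in Gephi. This is not the speaker's code; the graph choice and output file name are illustrative.

```python
import networkx as nx
from networkx.algorithms import community

# Zachary's karate club: a small, well-known social network bundled with NetworkX.
G = nx.karate_club_graph()

# Basic topological properties.
print("Nodes:", G.number_of_nodes(), "Edges:", G.number_of_edges())
print("Density:", round(nx.density(G), 3))

# Centrality measures to surface influential nodes.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
top_hub = max(degree, key=degree.get)
print("Most connected node:", top_hub, "betweenness:", round(betweenness[top_hub], 3))

# Community detection via greedy modularity maximisation.
communities = community.greedy_modularity_communities(G)
print("Communities found:", [sorted(c) for c in communities])

# Export for visual exploration in Gephi.
nx.write_gexf(G, "karate_club.gexf")
```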

----------------------------------------------------------------------------------------------------

NumFOCUS Code of Conduct https://numfocus.org/code-of-conduct

PyData Cluj-Napoca: Meetup #19

📽️ Livestream: https://youtube.com/live/moRjkkDL_n0
🧑 In Person: https://forms.gle/y23ok154qxLcZc6b7

***

MLOps London is back again in January 2024 with talks on production machine learning, LLMs, DevOps, and Data Science. The plan, as usual, is to run another hybrid event, so please come along in person if you're local (or need an excuse to travel to London), or join the livestream otherwise.

⚠️ Don't forget to fill out the form above if you are coming to the in-person event. The venue needs the list of attendees to let you in. ⚠️

AGENDA:

⏱️ 6.00 pm onwards 🍺 Arrival, drinks, food, and networking

⏱️ 6.30 pm 🎤 Kick off and welcome

⏱️ 6.40 pm 🎤 The IQ of AI: Measuring Intelligence in LLMs 🙎‍♀️ Jodie Burchell - Developer Advocate in Data Science at JetBrains

Unless you’ve been living under a rock for the past 6 months, you won’t have been able to avoid being bombarded with news about the latest developments in large language models (LLMs). Much of this information quickly devolved into wild speculation about the capabilities of these models, with many claiming that they are sophisticated enough to soon replace roles as diverse as writers, designers, lawyers, doctors … and even data professionals. Others have gone further, claiming that these models are showing at least some signs of artificial general intelligence or that we’re on an inevitable path to an AI apocalypse. In this talk, we’ll cut through the hype and delve deeply into claims of artificial general intelligence. We’ll discuss how to more systematically measure intelligence in artificial systems, and talk about where the current models stack up against this definition. By the end of this talk, you’ll see how far away we are from creating truly intelligent models, and also see some of the immediate ways we can take advantage of the capabilities of LLMs without overextending them.

⏱️ 7.25 pm 🎤 Building Open-Source LLM Applications 🧔🏻 Christopher Samiullah - CTO @ Indomitable Simulation

This talk focuses on implementing open-source Large Language Models (LLMs) in application development. It's tailored for developers who want to use LLMs with proprietary data.

  • Local Development Unlocked: We begin with an introduction to llama-cpp, a tool that makes LLM inference on your machine much simpler, setting the stage for advanced local experimentation (a minimal code sketch follows after this description).

  • Enhanced Capabilities with RAG: Practical examples of how Retrieval Augmented Generation via Llama Index can significantly elevate your models' capabilities, offering a glimpse into the possibilities of AI development.

  • Performance Evaluation: We'll navigate through effective tooling and techniques to evaluate and enhance the performance of your models.

  • Tackling Deployment Challenges: The session will conclude with a look at the hurdles to deploying these models, providing you with a roadmap to successful implementation.

This talk is designed for developers looking to leverage open-source LLMs, offering a blend of practical guidance and innovative strategies to harness the power of AI with your own data.
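
As a rough illustration of the first bullet above (local inference with llama-cpp), the sketch below uses the llama-cpp-python bindings to run a single completion on a locally stored model. It is not taken from the talk; the model path, prompt, and parameters are assumptions.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a locally downloaded GGUF model file (the path is illustrative).
llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Run a single completion entirely on your own machine.
output = llm(
    "Q: Summarise retrieval augmented generation in one sentence. A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"].strip())
```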

⏱️ 8.15 pm 🎤 A Whirlwind Tour of ML Model Serving Strategies (Including LLMs) 🧔🏻 Ramon Perez - Developer Advocate @ Seldon

There are many recipes to serve machine learning models to end users today, and even though new ways keep popping up as time passes, some questions remain: How do we pick the appropriate serving recipe from the menu we have available, and how can we execute it as fast and efficiently as possible? In this talk, we’re going to go through a whirlwind tour of the different machine learning deployment strategies available today for both traditional ML systems and Large Language Models, and we’ll also touch on a few do’s and don’ts while we’re at it. This session will be jargon-free but not buzzword- or meme-less.
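
To make the idea of a "serving recipe" concrete, here is one hedged example of the simplest item on that menu: wrapping a trained model behind a synchronous REST endpoint. This is neither Seldon's tooling nor the speaker's code; the framework choice (FastAPI), the toy model, and the feature names are illustrative.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Train a toy model at startup; in production you would load a persisted artifact instead.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(f: Features):
    pred = model.predict([[f.sepal_length, f.sepal_width, f.petal_length, f.petal_width]])
    return {"class_id": int(pred[0])}

# Run with: uvicorn serve:app --reload   (assuming this file is saved as serve.py)
```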

*** ⚠️ If you are attending in person please complete the registration form (link at the top of this description). ⚠️

MLOps London January - Talks on LLMs and Model Serving Strategies
Shubham Saboo – guest, Sandra Kublik – guest

In 2020, OpenAI launched GPT-3, a large language model that is demonstrating the potential to radically change how we interact with software and to open up a completely new paradigm for cognitive software applications.

Today’s episode features Sandra Kublik and Shubham Saboo, authors of GPT-3: Building Innovative NLP Products Using Large Language Models. We discuss what makes GPT-3 unique, transformative use-cases it has ushered in, the technology powering GPT-3, its risks and limitations, whether scaling models is the path to “Artificial General Intelligence”, and more.

Announcement

For the next seven days, DataCamp Premium and DataCamp for Teams are free. Gain free access by going here.

AI/ML LLM NLP
DataFramed