talk-data.com


Showing 2 results

Activities & events

Chris Lattner – guest, Gergely Orosz – host

Brought to you by:

• Statsig — The unified platform for flags, analytics, experiments, and more. Companies like Graphite, Notion, and Brex rely on Statsig to measure the impact of the pace they ship. Get a 30-day enterprise trial here.
• Linear — The system for modern product development. Linear is a heavy user of Swift: they just redesigned their native iOS app using their own take on Apple’s Liquid Glass design language. The new app is about speed and performance – just like Linear is. Check it out.

Chris Lattner is one of the most influential engineers of the past two decades. He created the LLVM compiler infrastructure and the Swift programming language – and Swift opened iOS development to a broader group of engineers. With Mojo, he’s now aiming to do the same for AI, by lowering the barrier to programming AI applications.

I sat down with Chris in San Francisco to talk language design, lessons from designing Swift and Mojo, and – of course! – compilers. It’s hard to find someone who is as enthusiastic and knowledgeable about compilers as Chris is! We also discussed why experts often resist change even when current tools slow them down, what he learned about AI and hardware from his time across both large and small engineering teams, and why compiler engineering remains one of the best ways to understand how software really works.

Timestamps:
(00:00) Intro
(02:35) Compilers in the early 2000s
(04:48) Why Chris built LLVM
(08:24) GCC vs. LLVM
(09:47) LLVM at Apple
(19:25) How Chris got support to go open source at Apple
(20:28) The story of Swift
(24:32) The process for designing a language
(31:00) Learnings from launching Swift
(35:48) Swift Playgrounds: making coding accessible
(40:23) What Swift solved and the technical debt it created
(47:28) AI learnings from Google and Tesla
(51:23) SiFive: learning about hardware engineering
(52:24) Mojo’s origin story
(57:15) Modular’s bet on a two-level stack
(1:01:49) Compiler shortcomings
(1:09:11) Getting started with Mojo
(1:15:44) How big is Modular, as a company?
(1:19:00) AI coding tools the Modular team uses
(1:22:59) What kind of software engineers Modular hires
(1:25:22) A programming language for LLMs? No thanks
(1:29:06) Why you should study and understand compilers

The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• The AI Engineering stack
• Uber’s crazy YOLO app rewrite, from the front seat
• Python, Go, Rust, TypeScript and AI with Armin Ronacher
• Microsoft’s developer tools roots

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

AI/ML Analytics LLM Marketing Microsoft Python Rust TypeScript
The Pragmatic Engineer

Today's leading generative AI applications have workloads that span high-performance GPU compute, CPU preprocessing, data loading, and orchestration — often spread across a combination of Python, C++/Rust, and CUDA C++ — which increases complexity and slows the cycle of innovation. This talk explores the capabilities and power of the Modular Mojo programming language and the Modular Accelerated Xecution (MAX) platform, which unify CPU and GPU programming into a single Pythonic programming model that is simple and extensible. This reduces complexity, improves developer productivity, and streamlines innovation. We'll walk through CPU and GPU support with real-world examples, showing how AI application developers can use MAX and Mojo to define an end-to-end AI pipeline and overcome these complexities.

Recorded live in San Francisco at the AI Engineer World's Fair. See the full schedule of talks at https://www.ai.engineer/worldsfair/2024/schedule & join us at the AI Engineer World's Fair in 2025! Get your tickets today at https://ai.engineer/2025

About Chris
Chris Lattner is a co-founder and the CEO of Modular, which is building an innovative new developer platform for AI and accelerated compute. Modular provides an AI engine that accelerates PyTorch and TensorFlow inference, as well as the Mojo🔥 language, which extends Python into the systems and accelerator programming domains. He also co-founded the LLVM compiler infrastructure project, the Clang C++ compiler, the Swift programming language, the MLIR compiler infrastructure, and the CIRCT project, and has contributed to many other commercial and open source projects at Apple, Tesla, Google, and SiFive.

AI Engineer World's Fair 2024