talk-data.com

Topic: generative-ai (5 tagged)

Activity Trend

[Activity trend chart: 1 peak/qtr, 2020-Q1 to 2026-Q2]

Activities

Showing filtered results

Filtering by: O'Reilly Data Engineering Books
AI Engineering Interviews

Generative AI is rapidly spreading across industries, and companies are actively hiring people who can design, build, and deploy these systems. But to land one of these roles, you'll have to get through the interview first. Generative AI Interviews walks you through every stage of the interview process, giving you an insider's perspective that will help you build confidence and stand out. This handy guide features 300 real-world interview questions organized by difficulty level, each with a clear outline of what makes a good answer, common pitfalls to avoid, and key points you shouldn't miss. What sets this book apart from others is Mina Ghashami and Ali Torkamani's knack for simplifying complex concepts into intuitive explanations, accompanied by compelling illustrations that make learning engaging. If you're looking for a guide to cracking GenAI interviews, this is it.

- Master GenAI interviews for roles from fundamental to advanced
- Explore 300 real industry interview questions with model answers and breakdowns
- Learn a step-by-step approach to explaining architecture, training, inference, and evaluation
- Get actionable insights that will help you stand out in even the most competitive hiring process

Context Engineering with DSPy

AI agents need the right context at the right time to do a good job. Too much input increases cost and harms accuracy, while too little causes instability and hallucinations. Context Engineering with DSPy introduces a practical, evaluation-driven way to design AI systems that remain reliable, predictable, and easy to maintain as they grow. AI engineer and educator Mike Taylor explains DSPy in a clear, approachable style, showing how its modular structure, portable programs, and built-in optimizers help teams move beyond guesswork. Through real examples and step-by-step guidance, you'll learn how DSPy's signatures, modules, datasets, and metrics work together to solve context engineering problems that evolve as models change and workloads scale. This book supports AI engineers, data scientists, machine learning practitioners, and software developers building AI agents, retrieval-augmented generation (RAG) systems, and multistep reasoning workflows that hold up in production.

- Understand the core ideas behind context engineering and why they matter
- Structure LLM pipelines with DSPy's maintainable, reusable components
- Apply evaluation-driven optimizers like GEPA and MIPROv2 for measurable improvements
- Create reproducible RAG and agentic workflows with clear metrics
- Develop AI systems that stay robust across providers, model updates, and real-world constraints
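The trade-off the description opens with (too much context costs accuracy and money, too little causes failures) can be sketched in a few lines. This is a hypothetical illustration of the underlying idea, not DSPy API: the function names, the word-overlap relevance score, and the whitespace "tokenizer" are all simplifying assumptions.

```python
# Illustrative sketch of the context-engineering trade-off: keep the
# snippets most relevant to the query while staying under a token budget.
# The whitespace tokenizer and overlap score are toy stand-ins.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: count whitespace-separated words."""
    return len(text.split())

def select_context(snippets: list[str], query: str, budget: int) -> list[str]:
    """Greedily pick snippets sharing the most words with the query,
    stopping before the token budget is exceeded."""
    query_words = set(query.lower().split())
    ranked = sorted(
        snippets,
        key=lambda s: len(query_words & set(s.lower().split())),
        reverse=True,
    )
    chosen, used = [], 0
    for snippet in ranked:
        cost = estimate_tokens(snippet)
        if used + cost <= budget:
            chosen.append(snippet)
            used += cost
    return chosen

snippets = [
    "DSPy signatures declare inputs and outputs.",
    "Unrelated note about dinner plans.",
    "DSPy modules compose signatures into pipelines.",
]
picked = select_context(snippets, "How do DSPy signatures work?", budget=12)
# The irrelevant snippet is dropped once the budget is reached.
```

Real systems replace each piece with something stronger (a proper tokenizer, embedding similarity, learned selection), but the budget-versus-relevance loop is the core of the problem the book addresses.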

Prompt Engineering for LLMs

Large language models (LLMs) are revolutionizing the world, promising to automate tasks and solve complex problems. A new generation of software applications is using these models as building blocks to unlock new potential in almost every domain, but reliably accessing these capabilities requires new skills. This book will teach you the art and science of prompt engineering: the key to unlocking the true potential of LLMs. Industry experts John Berryman and Albert Ziegler share how to communicate effectively with AI, transforming your ideas into a language model-friendly format. By learning both the philosophical foundation and practical techniques, you'll be equipped with the knowledge and confidence to build the next generation of LLM-powered applications.

- Understand LLM architecture and learn how to best interact with it
- Design a complete prompt-crafting strategy for an application
- Gather, triage, and present context elements to make an efficient prompt
- Master specific prompt-crafting techniques like few-shot learning, chain-of-thought prompting, and RAG
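Of the techniques listed above, few-shot learning is the simplest to sketch: demonstrations are formatted into the prompt so the model can infer the task from examples. The sentiment-labeling task, field labels, and layout below are illustrative choices, not a format prescribed by the book.

```python
# Minimal sketch of few-shot prompt assembly: render (input, output)
# demonstrations, then the new input with its output left blank for the
# model to complete. The "Review:/Sentiment:" labels are arbitrary.

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Format demonstrations followed by the unanswered query."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("The battery lasts all day.", "positive"),
    ("It broke after one week.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was quick and painless.")
# The prompt ends mid-pattern at "Sentiment:", inviting the model to
# continue the established format.
```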

LLM Engineer's Handbook

The "LLM Engineer's Handbook" is your comprehensive guide to mastering large language models from concept to deployment. Written by leading experts, it combines theoretical foundations with practical examples to help you build, refine, and deploy LLM-powered solutions that solve real-world problems effectively and efficiently.

What this book will help me do:
- Understand the principles and approaches for training and fine-tuning large language models (LLMs).
- Apply MLOps practices to design, deploy, and monitor your LLM applications effectively.
- Implement advanced techniques such as retrieval-augmented generation (RAG) and preference alignment.
- Optimize inference for high performance, addressing low latency and high availability in production systems.
- Develop robust data pipelines and scalable architectures for building modular LLM systems.

Author(s): Paul Iusztin and Maxime Labonne are experienced AI professionals specializing in natural language processing and machine learning. With years of industry and academic experience, they are dedicated to making complex AI concepts accessible and actionable. Their collaborative authorship ensures a blend of theoretical rigor and practical insights tailored for modern AI practitioners.

Who is it for? This book is tailored for AI engineers, NLP professionals, and LLM practitioners who wish to deepen their understanding of large language models. Ideal readers possess some familiarity with Python, AWS, and general AI concepts. If you aim to apply LLMs to real-world scenarios or enhance your expertise in AI-driven systems, this handbook is designed for you.
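Retrieval-augmented generation, mentioned in the description above, can be reduced to two steps: fetch the documents most relevant to a question, then prepend them to the prompt as context. The toy sketch below uses word-overlap scoring as a stand-in for the embedding search and vector stores a production system (and this handbook) would actually use; every name in it is illustrative.

```python
# Toy sketch of the RAG pattern: retrieve relevant documents, then build
# a prompt that grounds the model's answer in them. Word-overlap scoring
# stands in for real embedding similarity.

def retrieve(docs: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the question."""
    q = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(docs: list[str], question: str) -> str:
    """Prepend retrieved context to the question for the model."""
    context = "\n".join(retrieve(docs, question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

docs = [
    "Fine-tuning adapts a pretrained model to a task.",
    "RAG augments prompts with retrieved documents.",
    "MLOps covers deployment and monitoring.",
]
prompt = build_rag_prompt(docs, "How does RAG use retrieved documents?")
```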

Prompt Engineering for Generative AI

Large language models (LLMs) and diffusion models such as ChatGPT and Stable Diffusion have unprecedented potential. Because they have been trained on all the public text and images on the internet, they can make useful contributions to a wide variety of tasks. And with the barrier to entry greatly reduced today, practically any developer can harness LLMs and diffusion models to tackle problems previously unsuitable for automation. With this book, you'll gain a solid foundation in generative AI, including how to apply these models in practice. When first integrating LLMs and diffusion models into their workflows, most developers struggle to coax reliable enough results from them to use in automated systems. Authors James Phoenix and Mike Taylor show you how a set of principles called prompt engineering can enable you to work effectively with AI. Learn how to empower AI to work for you.

This book explains:
- The structure of the interaction chain of your program's AI model and the fine-grained steps in between
- How AI model requests arise from transforming the application problem into a document completion problem in the model training domain
- The influence of LLM and diffusion model architecture, and how to best interact with it
- How these principles apply in practice in the domains of natural language processing, text and image generation, and code