talk-data.com

Topic

LLM

Large Language Models (LLM)

nlp ai machine_learning

1405 tagged

Activity Trend

158 peak/qtr
2020-Q1 2026-Q1

Activities

1405 activities · Newest first

AWS re:Inforce 2024 - Mitigate OWASP Top 10 for LLM risks with a Zero Trust approach (GAI323)

Generative AI–based applications have the most business impact when they have access to critical business data and are empowered to take actions on behalf of the user. However, these integrations raise important security questions outlined in the OWASP Top 10 for LLM vulnerabilities and NIST Adversarial Machine Learning frameworks. This lightning talk introduces high-level architectural patterns to effectively mitigate key OWASP Top 10 for LLM vulnerabilities through Zero Trust principles. Leave this talk with best practices for building generative AI applications accessing sensitive business data using Agents for Amazon Bedrock.

Learn more about AWS re:Inforce at https://go.aws/reinforce.

Subscribe: More AWS videos: http://bit.ly/2O3zS75 More AWS events videos: http://bit.ly/316g9t4

ABOUT AWS Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts.

AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#reInforce2024 #CloudSecurity #AWS #AmazonWebServices #CloudComputing

AWS re:Inforce 2024 - Building a secure end-to-end generative AI application in the cloud (NIS321)

The security and privacy of data during the training, fine-tuning, and inferencing phases of generative AI are paramount. This lightning talk introduces a reference architecture designed to use the security of AWS PrivateLink with generative AI applications. Explore the importance of protecting proprietary data in applications that leverage both AWS native LLMs and ISV-supplied external data stores. Learn about the secure movement and usage of data, particularly for RAG processes, across various data sources like Amazon S3, vector databases, and Snowflake. Learn how this reference architecture not only meets today’s security demands but also sets the stage for the future of secure generative AI development.


AWS re:Inforce 2024 - Use AWS WAF to help avoid cost-prohibitive traffic in LLM apps (NIS221)

While large language model (LLM) applications offer tremendous potential, managing their economic implications is critical for any business. LLMs require significant graphic processing units (GPUs) to provide the parallel processing power needed to train and run inference on the massive datasets that these models learn from. Misuse of these applications through unwanted traffic can result in prohibitively expensive costs. In this talk, dive into the effects of bot traffic on LLM applications and how to mitigate these expenses with AWS WAF.
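The talk covers configuring AWS WAF itself; as a rough, hypothetical illustration of what a rate-based rule does conceptually, here is a minimal in-process sliding-window limiter (the class, thresholds, and IPs are invented for the example, not AWS WAF's actual implementation):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter: block a client that exceeds
    max_requests within window_seconds, the core idea behind a
    WAF rate-based rule protecting a costly LLM endpoint."""

    def __init__(self, max_requests=100, window_seconds=300):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client ip -> request timestamps

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Drop timestamps that have fallen out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit: block (a WAF would return 403)
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=60)
decisions = [limiter.allow("10.0.0.1", now=t) for t in (0, 1, 2, 3)]
# First three requests pass; the fourth is blocked
```

In a real deployment, the rule lives in AWS WAF in front of the application, so over-limit bot traffic never reaches the GPU-backed inference endpoint at all.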


Send us a text Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that flow as smoothly as your morning coffee, where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style! In this episode:

- Apple Intelligence is finally here: Apple's latest AI advancements, featuring GenAI images, a privacy-first approach, math notes, and a mention of ChatGPT. Dive into this YouTube clip and ponder: will it work as intended?
- ChatGPT in a spreadsheet: Explore the recreation of an entire GPT architecture in a spreadsheet, a nanoGPT designed by @karpathy with about 85,000 parameters.
- "How a single ChatGPT mistake cost $10,000": A clickbait title that stirred controversy on Hacker News, with many arguing the error was entirely the authors' fault, not ChatGPT's. Read more community reactions on Bear Blog and LinkedIn.
- The ideal PR is 50 lines long: Discussing the perfect pull request length and its impact on code quality, as detailed by Graphite.
- Any contribution too small? Delving into the debate on the value of small contributions in the open-source community with Slidev.
- Adobe overhauls terms of service: Adobe's new terms ensure AI won't be trained on customers' work, raising important questions about data usage and privacy.
- Artists fleeing Instagram to protect their work: Artists are moving away from Instagram to prevent their creations from being used to train Meta's AI. Explore more: Reddit, SWGFL, and Cara.

Data Hackers News is on the air! The hottest topics of the week, with the top news in Data, AI, and Technology, which you can also find in our weekly newsletter, now on the Data Hackers podcast!

Press play and listen to this week's Data Hackers News now!

To keep up with everything happening in the data world, subscribe to the weekly newsletter:

https://www.datahackers.news/

Download the full State of Data Brazil report and the survey highlights:

https://stateofdata.datahackers.com.br/

Meet the Data Hackers News commentators:

Monique Femme · Paulo Vasconcellos

Other Data Hackers channels:

Site · LinkedIn · Instagram · TikTok · YouTube

Stories/topics covered:

Apple announces its new AI and iOS; Meta announces an AI chatbot in Brazil; Kwai launches a video AI to compete with OpenAI's Sora

While you're at it, follow us on Spotify, Apple Podcasts, or your favorite podcast player!

Memory, the foundation of human intelligence, is still one of the most complex and mysterious aspects of the brain. Despite decades of research, we've only scratched the surface of understanding how our memories are formed, stored, and retrieved. But what if AI could help us crack the code on memory? How might AI be the key to unlocking problems that have evaded human cognition for so long?

Kim Stachenfeld is a Senior Research Scientist at Google DeepMind in NYC and Affiliate Faculty at the Center for Theoretical Neuroscience at Columbia University. Her research covers topics in neuroscience and AI. On the neuroscience side, she studies how animals build and use models of their world that support memory and prediction. On the machine learning side, she works on implementing these cognitive functions in deep learning models. Kim's work has been featured in The Atlantic, Quanta Magazine, Nautilus, and MIT Technology Review. In 2019, she was named one of MIT Tech Review's Innovators Under 35 for her work on predictive representations in hippocampus.

In the episode, Richie and Kim explore her work on Google Gemini, the importance of customizability in AI models, the need for flexibility and adaptability in AI models, retrieval databases and how they improve AI response accuracy, AI-driven science, the importance of augmenting human capabilities with AI and the challenges associated with this goal, the intersection of AI, neuroscience, and memory, and much more.

Links mentioned in the show:

- DeepMind
- AlphaFold
- Dr James Whittington - A unifying framework for frontal and temporal representation of memory
- Paper - Language models show human-like content effects on reasoning tasks
- Kim's website
- [Course] Artificial Intelligence (AI) Strategy
- Related episode: Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist
- Sign up to RADAR: AI Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for Business.

LLMs have opened up new avenues in NLP with their possible applications, but evaluating their output introduces a new set of challenges. In this talk, we discuss these challenges and our approaches to measuring the model output quality. We will talk about the existing evaluation methods and their pros and cons and then take a closer look at their application in a practical case study.
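One simple family of evaluation methods is reference-based scoring, where model output is compared against a gold answer. As a minimal sketch of the idea (not the speakers' actual approach), exact match and token-overlap F1 might look like this:

```python
import re

def tokenize(text):
    """Lowercase and keep alphanumeric tokens so scoring ignores
    punctuation and capitalization differences."""
    return re.findall(r"[a-z0-9]+", text.lower())

def exact_match(prediction, reference):
    return tokenize(prediction) == tokenize(reference)

def token_f1(prediction, reference):
    """Harmonic mean of token precision and recall, a common
    reference-based score for free-form model output."""
    pred, ref = tokenize(prediction), tokenize(reference)
    if not pred or not ref:
        return 0.0
    common = set(pred) & set(ref)
    overlap = sum(min(pred.count(t), ref.count(t)) for t in common)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

score = token_f1("Paris is the capital of France",
                 "The capital of France is Paris")
```

Metrics like these are cheap and reproducible but miss paraphrase and factuality, which is exactly why LLM evaluation in practice also needs human review or model-graded judgments.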


In this episode we are joined by Christophe Beke, where we discuss the following:

- Meta identifies networks pushing deceptive content likely generated by AI: Unpacking Meta's discovery of AI-generated deepfakes used in political scams and phishing attacks.
- Klarna using GenAI to cut marketing costs by $10 million annually: Exploring how Klarna leverages generative AI to save on marketing costs and the broader impact of AI-generated content on branding and creativity.
- Apple and OpenAI partnership rumors: Speculating on Apple's potential partnership with OpenAI and what this partnership means for Siri and user privacy.
- Even the Raspberry Pi is getting in on AI: Exciting new AI capabilities for the Raspberry Pi and the privacy questions they raise. Plus discussion on Windows screenshot tools and more privacy concerns.
- Nvidia teases next-gen "Rubin" AI chips: Nvidia's surprising early reveal of its next-gen AI chips: what's behind the move?
- Marker, a new tool for converting PDFs to Markdown: Discovering the Marker library and its perks for converting PDFs while keeping valuable metadata.
- Signal EU market exit: The privacy-focused app's decision to leave the EU market over regulatory challenges and the broader debate on privacy vs. security.
- Reframing 'tech debt': Fresh perspectives on managing technical debt in the fast-paced world of software development.
- Our newsletter: Don't miss out! Subscribe to our data and AI newsletter for the latest headlines and insights.

Enhancing search on AWS with AI, RAG, and vector databases (L300) | AWS Events

As AI continues to transform industries, the applications of generative AI and Large Language Models (LLMs) are becoming increasingly significant. This session delves into the utility of these models across various sectors. Gain an understanding of how to use LLMs, embeddings, vector datastores, and their indexing techniques to create search solutions for enhanced user experiences and improved outcomes on AWS using Amazon Bedrock, Aurora, and LangChain. By the end of this session, participants will be equipped with the knowledge to harness the power of LLMs and vector databases, paving the way for the development of innovative search solutions on AWS.
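The retrieval step the session describes, embedding a query and ranking documents by vector similarity, can be sketched in a few lines. This is a toy illustration with a bag-of-words "embedding"; a real system would call an embedding model (e.g. one hosted on Amazon Bedrock) and a proper vector store with an index:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a sparse bag-of-words vector. A stand-in for a
    real embedding model, used here only to make the example runnable."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, documents, k=2):
    """The retrieval half of a search/RAG pipeline: rank documents by
    similarity to the query embedding, then hand the best matches to
    the LLM as context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)),
                    reverse=True)
    return ranked[:k]

docs = [
    "Amazon Aurora is a relational database service",
    "LangChain helps orchestrate LLM applications",
    "Vector indexes speed up nearest-neighbor search",
]
hits = top_k("which database service is relational", docs, k=1)
```

The indexing techniques the session mentions (e.g. approximate nearest-neighbor indexes) exist precisely to avoid the brute-force `sorted` scan above once the corpus grows large.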

Learn more: https://go.aws/3x2mha0 Learn more about AWS events: https://go.aws/3kss9CP



#AWSEvents #GenerativeAI #AI #Cloud #AWSAIandDataConference

Data Hackers News is on the air! The hottest topics of the week, with the top news in Data, AI, and Technology, which you can also find in our weekly newsletter, now on the Data Hackers podcast!

Press play and listen to this week's Data Hackers News now!

Stories/topics covered:

Google's AI recommends eating glue and rocks, among other oddities; OpenAI did not clone Scarlett Johansson's voice; Meetup - Mulheres desbravando o futuro: Dados e IA na indústria de tecnologia (Women forging the future: Data and AI in the tech industry).

While you're at it, follow us on Spotify, Apple Podcasts, or your favorite podcast player!

In this episode, we dive deep into the fascinating and complex world of AI with our special guest, Senne Batsleer:

- De Mol + AI voices: Exploring the use of AI-generated voices to disguise the mole in the Belgian TV show "The Mole". Our guest, Senne Batsleer, shares insights from their experience with AI voice technology.
- Scarlett Johansson vs OpenAI: Delving into the controversy of OpenAI using a voice eerily similar to Scarlett Johansson's in its new AI model. Read more in The Guardian and The Washington Post.
- Elon Musk's xAI raises $6B: A look into Elon Musk's latest venture, xAI, and its ambitious funding round, aiming to challenge AI giants like OpenAI and Microsoft.
- OpenAI and News Corp's $250M deal: The implications of OpenAI's data deal with News Corp.
- Google AI search risks: Examining Google's AI search providing potentially dangerous answers based on outdated Reddit comments. Find out more on The Verge and BBC.
- Humane's AI Pin looking for a buyer: Discussing the struggles of Humane's wearable AI device and its search for a buyer following a rocky debut.
- PostgREST turns databases into APIs: Exploring the concept of turning PostgreSQL databases directly into RESTful APIs, enhancing real-time applications.
- Risks of expired domain names: Highlighting the dangers of expired domains and how they can be exploited by hackers.
- The 'dead internet' theory: Debating the rise of bots on the web and their potential to surpass human activity online.

Andrii Yasinetsky is a serial startup founder, ex-Uber, and ex-Google. Now building a healthcare AI startup in stealth mode, he joins us to talk about the enablers and obstacles in the current AI startup ecosystem. Andrii shares his views on key challenges for organizations applying LLMs: converting bytes into high-quality data, ensuring the safety of LLMs, the implications of legal regulations on innovation, and expanding AI applicability to broader and more complex problems. Despite all the hurdles, Andrii sees AI as a great equalizer that will make many services more accessible and significantly enhance their speed and quality in numerous industries yet to be disrupted.

Connect with Andrii:

- Twitter - twitter.com/yasik
- LinkedIn - https://www.linkedin.com/in/yasinetsky/
- Substack - https://yasik.substack.com/

LLMs open up an opportunity to automate and scale many operational processes that couldn't otherwise be solved by conventional methods. Examples include simple summarization of issues and incidents, assisting production on-callers, managing incidents, clustering issues (creating a taxonomy), and scaling SRE via assisted review of development design documents. LLMs therefore provide a new and unique opportunity to transform the work we do as SREs.
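As a toy sketch of the clustering idea mentioned above (a keyword-overlap heuristic, not an LLM pipeline; all names and thresholds are invented for illustration), grouping free-text incident titles into a rough taxonomy might look like:

```python
def keywords(title,
             stopwords=frozenset({"in", "on", "the", "a", "for", "error"})):
    """Reduce an incident title to its distinguishing words."""
    return {w for w in title.lower().split() if w not in stopwords}

def cluster_incidents(titles, threshold=0.5):
    """Greedy clustering by Jaccard similarity of title keywords, a
    stand-in for the taxonomy-building an LLM could do over richer
    free-text incident reports."""
    clusters = []  # each entry: [keyword set, list of member titles]
    for title in titles:
        kw = keywords(title)
        for cluster in clusters:
            union = kw | cluster[0]
            if union and len(kw & cluster[0]) / len(union) >= threshold:
                cluster[0] |= kw
                cluster[1].append(title)
                break
        else:
            clusters.append([set(kw), [title]])
    return [members for _, members in clusters]

incidents = [
    "Timeout error in checkout service",
    "Checkout service timeout",
    "Disk full on db-7",
]
groups = cluster_incidents(incidents)
# The two checkout timeouts group together; the disk issue stands alone
```

The appeal of an LLM here is that it can cluster and summarize on meaning rather than shared keywords, so "checkout timeout" and "payments page hanging" could land in the same bucket.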


In this episode:

- Slack's data practices: Discussing Slack's use of customer data to build models, the risks of global data leakage, and the impact of GDPR and AI regulations.
- ChatGPT's data analysis improvements: Discussing new features in ChatGPT that let you interrogate your data like a pro.
- The loneliness of data scientists: Why being a lone data wolf is tough, and how collaboration is the key to success.
- Rustworkx for graph computation: Evaluating Rustworkx as a robust tool for graphs compared to Networkx.
- Dolt - Git for data: Comparing Dolt and DVC as tools for data version control. Check it out.
- Veo by Google DeepMind: An overview of Google's Veo technology and its potential applications.
- Ilya Sutskever's departure from OpenAI: What does Ilya Sutskever's exit mean for OpenAI, with Jakub Pachocki stepping in?
- Hot takes - No data engineering roadmap? Debating the necessity of a data engineering roadmap and the prominence of SQL skills.

Databricks ML in Action

Dive into the Databricks Data Intelligence Platform and learn how to harness its full potential for creating, deploying, and maintaining machine learning solutions. This book covers everything from setting up your workspace to integrating state-of-the-art tools such as AutoML and VectorSearch, imparting practical skills through detailed examples and code.

What this book will help me do:

- Set up and manage a Databricks workspace tailored for effective data science workflows.
- Implement monitoring to ensure data quality and detect drift efficiently.
- Build, fine-tune, and deploy machine learning models seamlessly using Databricks tools.
- Operationalize AI projects including feature engineering, data pipelines, and workflows on the Databricks Lakehouse architecture.
- Leverage integrations with popular tools like OpenAI's ChatGPT to expand your AI project capabilities.

Author(s): This book is authored by Stephanie Rivera, Anastasia Prokaieva, Amanda Baker, and Hayley Horn, seasoned experts in data science and machine learning from Databricks. Their collective years of expertise in big data and AI technologies ensure a rich and insightful perspective. Through their work, they strive to make complex concepts accessible and actionable.

Who is it for? This book serves as an ideal guide for machine learning engineers, data scientists, and technically inclined managers. It's well suited for those transitioning to the Databricks environment or seeking to deepen their Databricks-based machine learning implementation skills. Whether you're an ambitious beginner or an experienced professional, this book provides clear pathways to success.

Prompt Engineering for Generative AI

Large language models (LLMs) and diffusion models such as ChatGPT and Stable Diffusion have unprecedented potential. Because they have been trained on all the public text and images on the internet, they can make useful contributions to a wide variety of tasks. And with the barrier to entry greatly reduced today, practically any developer can harness LLMs and diffusion models to tackle problems previously unsuitable for automation.

With this book, you'll gain a solid foundation in generative AI, including how to apply these models in practice. When first integrating LLMs and diffusion models into their workflows, most developers struggle to coax reliable enough results from them to use in automated systems. Authors James Phoenix and Mike Taylor show you how a set of principles called prompt engineering can enable you to work effectively with AI. Learn how to empower AI to work for you.

This book explains:

- The structure of the interaction chain of your program's AI model and the fine-grained steps in between
- How AI model requests arise from transforming the application problem into a document completion problem in the model training domain
- The influence of LLM and diffusion model architecture, and how to best interact with it
- How these principles apply in practice in the domains of natural language processing, text and image generation, and code
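The "document completion" framing described above can be sketched as a prompt builder: instructions, a few worked examples, then the new input left dangling for the model to complete. This is a hypothetical minimal example, not code from the book:

```python
def build_prompt(task, examples, query):
    """Frame a task as document completion: state the instructions,
    show few-shot input/output pairs, then leave the final Output:
    slot empty for the model to fill in."""
    lines = [task, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great phone, love it", "positive"),
     ("Battery died in a day", "negative")],
    "Screen cracked on arrival",
)
```

Because the prompt ends mid-pattern, a model trained on document completion is strongly nudged to continue with a label in the same format, which is the essence of few-shot prompting.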