talk-data.com

Topic: GenAI (Generative AI)

Tags: ai, machine_learning, llm

1517 tagged activities

Activity Trend: 192 peak/qtr (2020-Q1 to 2026-Q1)

Activities

1517 activities · Newest first

AI and ML for Coders in PyTorch

Eager to learn AI and machine learning but unsure where to start? Laurence Moroney's hands-on, code-first guide demystifies complex AI concepts without relying on advanced mathematics. Designed for programmers, it focuses on practical applications using PyTorch, helping you build real-world models without feeling overwhelmed. From computer vision and natural language processing (NLP) to generative AI with Hugging Face Transformers, this book equips you with the skills most in demand for AI development today. You'll also learn how to deploy your models across the web and cloud confidently.

- Gain the confidence to apply AI without needing advanced math or theory expertise
- Discover how to build AI models for computer vision, NLP, and sequence modeling with PyTorch
- Learn generative AI techniques with Hugging Face Diffusers and Transformers

What is it like to keep the data engine of one of Brazil's largest retail groups running nonstop, while experimenting with generative AI technologies that few companies in the world have dared to put into production? In this special episode, we invited Lucas Eduardo Wichinevsky, Rodrigo Lucchesi, and Marcelle Araujo Chiriboga Carvalho from Grupo Boticário to open the black box of Machine Learning Engineering. Remember that you can find all Data Hackers community podcasts on Spotify, iTunes, Google Podcast, Castbox, and many other platforms.

Featured in this episode:
- Marcelle Chiriboga, Data Science Manager for Stores and Franchises at Grupo Boticário
- Lucas Eduardo Wichinevsky, Data Science Manager for Tech Corporate at Grupo Boticário
- Rodrigo Lucchesi, Data Science Manager for Demand and RGM at Grupo Boticário

Our panel (Data Hackers):
- Monique Femme, Head of Community Management at Data Hackers
- Paulo Vasconcellos, Co-founder of Data Hackers and Principal Data Scientist at Hotmart

Welcome to DataFramed Industry Roundups! In this series of episodes, we sit down to discuss the latest and greatest in data & AI. In this episode, with special guest, DataCamp COO Martijn, we touch upon the hype and reality of AI agents in business, the McKinsey vs. Ethan Mollick debate on simple vs. complex agents, Meta's $15B stake in Scale AI and what it means for data and talent, Apple's rumored $20B bid for Perplexity amid AI struggles, the EU's push to treat AI skills like reading and math, the first fully AI-generated NBA ad and what it means for creative industries, a new benchmark for deep research tools, and much more.

Links Mentioned in the Show:
- Meta bought Scale AI
- Apple rumored to be trying to acquire Perplexity for $20Bn
- McKinsey's Seizing the Agentic AI Advantage report
- The first fully AI-generated NBA Ad
- EU Generative AI Outlook report
- Mary Meeker's Trend in AI report
- Deep research benchmark
- Rewatch RADAR AI

New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business

In legacy Airflow 2.x, each DAG run was tied to a unique “execution_date.” By removing this requirement, Airflow can now directly support a variety of new use cases, such as model training and generative AI inference, without the need for hacks and workarounds typically used by machine learning and AI engineers. In this talk, we will delve into the significant advancements in Airflow 3 that enable GenAI and MLOps use cases, particularly through the changes outlined in AIP 83. We’ll cover key changes like the renaming of “execution_date” to “logical_date,” along with the allowance for it to be null, and the introduction of the new “run_after” field which provides a more meaningful mechanism for scheduling and sorting. Furthermore, we’ll discuss how Airflow 3 enables multiple parallel runs, empowering diverse triggering mechanisms and easing backfill logic with a real-world demo.
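The scheduling change can be pictured with a toy example (plain Python, not Airflow code; the dictionaries merely mirror the field names discussed in AIP 83): once "logical_date" may be null, "run_after" becomes the stable key for ordering runs, and several runs may share it.

```python
from datetime import datetime

# Toy model of Airflow 3 DAG runs: logical_date may now be None
# (e.g. for ad-hoc GenAI inference runs), so run_after is used for ordering.
runs = [
    {"run_id": "backfill_1", "logical_date": datetime(2025, 1, 1), "run_after": datetime(2025, 1, 1)},
    {"run_id": "inference_a", "logical_date": None, "run_after": datetime(2025, 1, 2)},
    {"run_id": "inference_b", "logical_date": None, "run_after": datetime(2025, 1, 2)},  # parallel run
]

# Sorting by run_after works even when logical_date is null, and two
# parallel runs can legitimately share the same run_after value.
ordered = sorted(runs, key=lambda r: r["run_after"])
print([r["run_id"] for r in ordered])  # -> ['backfill_1', 'inference_a', 'inference_b']
```

In Airflow 2.x the two inference runs above would have needed distinct, artificial execution dates; in Airflow 3 they can simply be triggered as needed.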

Duolingo has built an internal tool, DuoFactory, to orchestrate AI-generated content using Airflow. The tool has been used to generate example sentences per lesson, math exercises, and Duoradio lessons. The ecosystem is flexible enough to serve various company needs. Some of these use cases are end-to-end: one click of a button generates content in the app. We have also created a Workflow Builder to orchestrate and iterate on generative AI workflows by creating one-time DAG instances, with a UI easy enough for non-engineers to use.

At SAP Business AI, we’ve transformed Retrieval-Augmented Generation (RAG) pipelines into enterprise-grade powerhouses using Apache Airflow. Our Generative AI Foundations Team developed a cutting-edge system that effectively grounds Large Language Models (LLMs) with rich SAP enterprise data. Powering Joule for Consultants, our innovative AI copilot, this pipeline manages the seamless ingestion, sophisticated metadata enrichment, and efficient lifecycle management of over a million structured and unstructured documents. By leveraging Airflow’s Dynamic DAGs, TaskFlow API, XCom, and Kubernetes Event-Driven Autoscaling (KEDA), we achieved unprecedented scalability and flexibility. Join our session to discover actionable insights, innovative scaling strategies, and a forward-looking vision for Pipeline-as-a-Service, empowering seamless integration of customer-generated content into scalable AI workflows.
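At this scale, dynamic task mapping essentially fans a large document backlog out into parallel task instances. A toy sketch of that fan-out step (plain Python; the function name and batch size are illustrative, not taken from the talk):

```python
def plan_batches(doc_ids, batch_size=4):
    """Split a document backlog into batches, one per mapped task.

    With Airflow's dynamic task mapping, each batch would typically become
    one expanded task instance, e.g. ingest.expand(batch=plan_batches(...)).
    """
    return [doc_ids[i:i + batch_size] for i in range(0, len(doc_ids), batch_size)]

batches = plan_batches([f"doc-{n}" for n in range(10)], batch_size=4)
print(len(batches), [len(b) for b in batches])  # -> 3 [4, 4, 2]
```

The number of mapped tasks then scales with the backlog, which is what lets an autoscaler such as KEDA add or remove workers to match the load.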

Airflow 3.0 is the most significant release in the project’s history, and brings a better user experience, stronger security, and the ability to run tasks anywhere, at any time. In this workshop, you’ll get hands-on experience with the new release and learn how to leverage new features like DAG versioning, backfills, data assets, and a new React-based UI. Whether you’re writing traditional ELT/ETL pipelines or complex ML and GenAI workflows, you’ll learn how Airflow 3 will make your day-to-day work smoother and your pipelines even more flexible. This workshop is suitable for intermediate to advanced Airflow users. Beginning users should consider taking the Airflow fundamentals course on the Astronomer Academy before attending this workshop.

At LinkedIn, our data pipelines process exabytes of data, with our offline infrastructure executing 300K ETL workflows daily and 10K concurrent executions. Historically, these workloads ran on our legacy system, Azkaban, which faced UX, scalability, and operational challenges. To modernize our infra, we built a managed Airflow service, leveraging its enhanced developer & operator experience, rich feature set, and strong OSS community support. That initiated LinkedIn’s largest-ever infrastructure migration: transitioning thousands of legacy workflows to Airflow. In this talk, we will share key lessons from migrating massive-scale pipelines with minimal production disruption. We will discuss:

- Overall migration strategy
- Custom tooling enhancements on testing, deployment, and observability
- Architectural innovations decoupling orchestration and compute
- GenAI-powered migration automating code rewrites
- Post-migration challenges & Airflow 3.0

Attendees will walk away with battle-tested strategies for large-scale Airflow adoption and practical insights into scaling Airflow in enterprise environments.

In the age of Generative AI, knowledge bases are the backbone of intelligent systems, enabling them to deliver accurate and context-aware responses. But how do you ensure that these knowledge bases remain up-to-date and relevant in a rapidly changing world? Enter Apache Airflow, a robust orchestration tool that streamlines the automation of data workflows. This talk will explore how Airflow can be leveraged to manage and update AI knowledge bases across multiple data sources. We’ll dive into the architecture, demonstrate how Airflow enables efficient data extraction, transformation, and loading (ETL), and share insights on tackling challenges like data consistency, scheduling, and scalability. Whether you’re building your own AI-driven systems or looking to optimize existing workflows, this session will provide practical takeaways to make the most of Apache Airflow in orchestrating intelligent solutions.
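One recurring challenge in keeping a knowledge base current is avoiding full re-ingestion on every run; a common approach (not specific to this talk) is content hashing, so only changed documents are re-embedded. A minimal sketch of that idea in plain Python; the function, the toy embedding, and the in-memory store are illustrative assumptions:

```python
import hashlib

def sync_documents(docs, store):
    """Upsert only documents whose content changed since the last run.

    docs:  mapping of doc_id -> text pulled from a source system
    store: mapping of doc_id -> (content_hash, embedding) kept between runs
    Returns the list of doc_ids that needed re-embedding.
    """
    changed = []
    for doc_id, text in docs.items():
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if doc_id not in store or store[doc_id][0] != digest:
            # Placeholder for a real embedding call (e.g. an LLM embedding API).
            embedding = [float(len(text))]
            store[doc_id] = (digest, embedding)
            changed.append(doc_id)
    return changed

store = {}
print(sync_documents({"a": "hello", "b": "world"}, store))   # -> ['a', 'b']
print(sync_documents({"a": "hello", "b": "world!"}, store))  # -> ['b']
```

In an Airflow deployment, each data source would typically feed a task like this on its own schedule, with the hash store living in a database rather than in memory.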

On March 13th, 2025, Amazon Web Services announced General Availability of Amazon SageMaker Unified Studio, bringing together AWS machine learning and analytics capabilities. At the heart of this next generation of Amazon SageMaker sits Apache Airflow. All SageMaker Unified Studio users have a personal, open-source Airflow deployment, running alongside their Jupyter notebook, enabling those users to easily develop Airflow DAGs that have unified access to all of their data. In this talk, I will go into details around the motivations for choosing Airflow for this capability, the challenges with incorporating Airflow into such a large and diverse experience, the key role that open-source plays, how we’re leveraging GenAI to make that open source development experience better, and the goals for the future of Airflow in SageMaker Unified Studio. Attendees will leave with a better understanding of the considerations they need to make when choosing Airflow as a component of their enterprise project, and a greater appreciation of how Airflow can power advanced capabilities.

The enterprise adoption of AI agents is accelerating, but significant challenges remain in making them truly reliable and effective. While coding assistants and customer service agents are already delivering value, more complex document-based workflows require sophisticated architectures and data processing capabilities. How do you design agent systems that can handle the complexity of enterprise documents with their tables, charts, and unstructured information? What's the right balance between general reasoning capabilities and constrained architectures for specific business tasks? Should you centralize your agent infrastructure or purchase vertical solutions for each department? The answers lie in understanding the fundamental trade-offs between flexibility, reliability, and the specific needs of your organization. Jerry Liu is the CEO and Co-founder at LlamaIndex, the AI agents platform for automating document workflows. Previously, he led the ML monitoring team at Robust Intelligence, did self-driving AI research at Uber ATG, and worked on recommendation systems at Quora. In the episode, Richie and Jerry explore the readiness of AI agents for enterprise use, the challenges developers face in building these agents, the importance of document processing and data structuring, the evolving landscape of AI agent frameworks like LlamaIndex, and much more.

Links Mentioned in the Show:
- LlamaIndex
- LlamaIndex Production Ready Framework For LLM Agents
- Tutorial: Model Context Protocol (MCP)
- Connect with Jerry
- Course: Retrieval Augmented Generation (RAG) with LangChain
- Related Episode: RAG 2.0 and The New Era of RAG Agents with Douwe Kiela, CEO at Contextual AI & Adjunct Professor at Stanford University
- Rewatch RADAR AI

New to DataCamp?
- Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for business

Summary

In this episode of the Data Engineering Podcast Arun Joseph talks about developing and implementing agent platforms to empower businesses with agentic capabilities. From leading AI engineering at Deutsche Telekom to his current entrepreneurial venture focused on multi-agent systems, Arun shares insights on building agentic systems at an organizational scale, highlighting the importance of robust models, data connectivity, and orchestration loops. Listen in as he discusses the challenges of managing data context and cost in large-scale agent systems, the need for a unified context management platform to prevent data silos, and the potential for open-source projects like LMOS to provide a foundational substrate for agentic use cases, one that can transform enterprise architectures by enabling more efficient data management and decision-making.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.

This episode is brought to you by Coresignal, your go-to source for high-quality public web data to power best-in-class AI products. Instead of spending time collecting, cleaning, and enriching data in-house, use ready-made multi-source B2B data that can be smoothly integrated into your systems via APIs or as datasets. With over 3 billion data records from 15+ online sources, Coresignal delivers high-quality data on companies, employees, and jobs. It is powering decision-making for more than 700 companies across AI, investment, HR tech, sales tech, and market intelligence industries. A founding member of the Ethical Web Data Collection Initiative, Coresignal stands out not only for its data quality but also for its commitment to responsible data collection practices. Recognized as the top data provider by Datarade for two consecutive years, Coresignal is the go-to partner for those who need fresh, accurate, and ethically sourced B2B data at scale. Discover how Coresignal's data can enhance your AI platforms. Visit dataengineeringpodcast.com/coresignal to start your free 14-day trial.
Your host is Tobias Macey and today I'm interviewing Arun Joseph about building an agent platform to empower the business to adopt agentic capabilities.

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by giving an overview of how Deutsche Telekom has been approaching applications of generative AI?
- What are the key challenges that have slowed adoption/implementation?
- Enabling non-engineering teams to define and manage AI agents in production is a challenging goal. From a data engineering perspective, what does the abstraction layer for these teams look like? How do you manage the underlying data pipelines, versioning of agents, and monitoring of these user-defined agents?
- What was your process for developing the architecture and interfaces for what ultimately became the LMOS?
- How do the principles of operating systems help with managing the abstractions and composability of the framework?
- Can you describe the overall architecture of the LMOS?
- What does a typical workflow look like for someone who wants to build a new agent use case?
- How do you handle data discovery and embedding generation to avoid unnecessary duplication of processing?
- With your focus on openness and local control, how do you see your work complementing projects like Oumi?
- What are the most interesting, innovative, or unexpected ways that you have seen LMOS used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on LMOS?
- When is LMOS the wrong choice?
- What do you have planned for the future of LMOS and MASAIC?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
The AI Engineering Podcast is your guide to the fast-moving world of building AI systems. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- LMOS
- Deutsche Telekom
- MASAIC
- OpenAI Agents SDK
- RAG == Retrieval Augmented Generation
- LangChain
- Marvin Minsky
- Vector Database
- MCP == Model Context Protocol
- A2A (Agent to Agent) Protocol
- Qdrant
- LlamaIndex
- DVC == Data Version Control
- Kubernetes
- Kotlin
- Istio
- Xerox PARC
- OODA (Observe, Orient, Decide, Act) Loop

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Unveiling Agentic AI: The Future of Next-Generation Analytics | Data & AI NXT Conference

Welcome to the Data & AI NXT Conference! 🎉 This year, we explore the next frontier in analytics: Agentic AI.

🔍 Next-Generation Agentic Analytics Artificial Intelligence is pushing analytics beyond static dashboards and reports. At this event, discover how next-gen AI Agents transform fragmented, siloed data, both historical and real-time, into optimized, actionable intelligence.

Learn how businesses are evolving from reactive analytics to self-improving decision systems that span the entire enterprise.

🗓️ Agenda & Chapters

0:00 Start
7:37 Opening
23:15 The Unseen Sportian's Playbook: Redefining Sports through Data and AI | Leandro Mora
1:06:16 Ethical implications of self-driving intelligence | Avijeet Dutta, Dr Shivani Rai Gupta, Jyothish Jayaraman, and Andres Tenorio
1:59:54 Governance in the age of AI Agents | Roberto Contreras
2:48:11 Synthetic data, digital twins & the future of testing | Carla Molgora, Ana Lía Villarreal and Cristina Garita
3:44:34 Future of BI: from dashboards to autonomous intelligence | Nacho Vuotto, Esteban Bertuccio, Carlos Alarcón, and Sergio Soliz
4:43:02 Real-Time vs. historical: balancing speed and context | Daniel Esteban Vesga, Oscar Narvaez, Martin Sciarrillo, and Abraham Jacob Montoya
5:38:16 AI Agents and the future of Human-Tech | Almudena Claudio

🙌 Thanks for joining us! Don't forget to like, comment, and subscribe for more tech insights from Globant.

💚

Send us a text

From Elephant Butts to Ethical AI: Duncan Curtis on De-Risking GenAI at Sama

Episode intro
Duncan Curtis, SVP for GenAI & AI Product + Technology at Sama, has shipped everything from autonomous-vehicle platforms at Zoox to game-changing data products at Google. Today he leads a 160-person team that's reinventing how training data is curated, labeled, and audited so enterprises can ship production-ready GenAI without the lurking model risk. Sama's newest release, Sama Automate, is already cutting annotation time by 40 percent while keeping quality above SLAs, and Duncan says they're "aiming for a 10× improvement by 2025." (aiuserconference.com, sama.com) If you want the inside track on AI ROI, ethical guardrails, and why A's hire A's (but B's hire C's!), lean in: this one's for you. (And yes, we do get to elephant butts.)

Timestamped roadmap
00:46 Meet Duncan Curtis
03:51 The Duncan Brand
05:52 Making Time for Yourself
08:47 Autonomous Cars: 9× Safer
12:21 Favorite Jobs
13:24 Inside Sama
14:39 Data & LLM Training
16:04 De-Risking Models
19:08 Ethical AI
22:43 Stopping Hallucinations
27:18 Data Labeling Deep-Dive
31:56 Production-Ready GenAI
33:44 AGI Horizons
35:34 What Makes Sama Different
36:31 Calculating AI ROI
38:50 State of the LLMs
44:48 Elephant Butts & Closing Thoughts

Quick links
LinkedIn: https://www.linkedin.com/in/duncan-curtis/
Sama: https://www.sama.com/
Latest blog: "Sama Introduces New Data Automation Platform" (sama.com)
Hear more: Duncan on "Human Guardrails in Generative AI" (DataCamp podcast) (datacamp.com)

Hashtags

#MakingDataSimple #AIProduct #GenAI #DataLabeling #EthicalAI #AIROI #AutonomousVehicles #Podcast

Want to be featured as a guest on Making Data Simple? Reach out to us at [email protected] and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.

Today on the podcast, I interview AI researcher Tony Zhang about some of his recent findings about the effects that fully automated AI has on user decision-making. Tony shares lessons from his recent research study comparing typical recommendation AIs with a “forward-reasoning” approach that nudges users to contribute their own reasoning with process-oriented support that may lead to better outcomes. We’ll look at his two study examples where they provided an AI-enabled interface for pilots tasked with deciding mid-flight the next-best alternate airport to land at, and another scenario asking investors to rebalance an ETF portfolio. The takeaway, taken right from Tony’s research, is that “going forward, we suggest that process-oriented support can be an effective framework to inform the design of both 'traditional' AI-assisted decision-making tools but also GenAI-based tools for thought.” 

Highlights/ Skip to:

Tony Zhang's background (0:46)
Context for the study (4:12)
Zhang's metrics for measuring over-reliance on AI (5:06)
Understanding the differences between the two design options that study participants were given (15:39)
How AI-enabled hints appeared for pilots in each version of the UI (17:49)
Using AI to help pilots make good decisions faster (20:15)
We look at the ETF portfolio rebalancing use case in the study (27:46)
Strategic and tactical findings that Tony took away from his study (30:47)
The possibility of commercially viable recommendations based on Tony's findings (35:40)
Closing thoughts (39:04)

Quotes from Today’s Episode

“I wanted to keep the difference between the [recommendation & forward reasoning versions] very minimal to isolate the effect of the recommendation coming in. So, if I showed you screenshots of those two versions, they would look very, very similar. The only difference that you would immediately see is that the recommendation version is showing numbers 1, 2, and 3 for the recommended airports. These [rankings] are not present in the forward-reasoning one [airports are default sorted nearest to furthest]. This actually is a pretty profound difference in terms of the interaction or the decision-making impact that the AI has. There is this normal flight mode and forward reasoning, so that pilots are already immersed in the system and thinking with the system during normal flight. It changes the process that they are going through while they are working with the AI.” Tony (18:50 - 19:42)

“You would imagine that giving the recommendation makes your decision faster, but actually, the recommendations were not faster than the forward-reasoning one. In the forward-reasoning one, during normal flight, pilots could already prepare and have a good overview of their surroundings, giving them time to adjust to the new situation. Now, in normal flight, they don’t know what might be happening, and then suddenly, a passenger emergency happens. While for the recommendation version, the AI just comes into the situation once you have the emergency, and then you need to do this backward reasoning that we talked about initially.” Tony ( 21:12 - 21:58)

“Imagine reviewing code written by other people. It’s always hard because you had no idea what was going on when it was written. That was the idea behind the forward reasoning. You need to look at how people are working and how you can insert AI in a way that it seamlessly fits and provides some benefit to you while keeping you in your usual thought process. So, the way that I see it is you need to identify where the key pain points actually are in your current decision-making process and try to address those instead of just trying to solve the task entirely for users.” Tony (25:40 - 26:19)

Links

LinkedIn: https://www.linkedin.com/in/zelun-tony-zhang/
Augmenting Human Cognition With Generative AI: Lessons From AI-Assisted Decision-Making: https://arxiv.org/html/2504.03207v1

Today, we’re joined by Hikari Senju, Founder and CEO at Omneky, the generative AI platform built for performance advertising. We talk about:

- Top 3 benefits of Gen AI in content marketing
- How the market for digital ads is growing due to generative personalization & attribution capabilities
- Ads taking on more of the sales function
- The need for thousands of variations of content to drive advertising results (& the dangers of serving the same ad repeatedly)
- Advertising to a world of digital users

The line between generic AI capabilities and truly transformative business applications often comes down to one thing: your data. While foundation models provide impressive general intelligence, they lack the specialized knowledge needed for domain-specific tasks that drive real business value. But how do you effectively bridge this gap? What's the difference between simply fine-tuning models versus using techniques like retrieval-augmented generation? And with constantly evolving models and technologies, how do you build systems that remain adaptable while still delivering consistent results? Whether you're in retail, healthcare, or transportation, understanding how to properly enrich, annotate, and leverage your proprietary data could be the difference between an AI project that fails and one that fundamentally transforms your business. Wendy Gonzalez is the CEO — and former COO — of Sama, a company leading the way in ethical AI by delivering accurate, human-annotated data while advancing economic opportunity in underserved communities. She joined Sama in 2015 and has been central to scaling both its global operations and its mission-driven business model, which has helped over 65,000 people lift themselves out of poverty through dignified digital work. With over 20 years of experience in the tech and data space, Wendy’s held leadership roles at EY, Capgemini, and Cycle30, where she built and managed high-performing teams across complex, global environments. Her leadership style blends operational excellence with deep purpose — ensuring that innovation doesn’t come at the expense of integrity. Wendy is also a vocal advocate for inclusive AI and sustainable impact, regularly speaking on how companies can balance cutting-edge technology with real-world responsibility. Duncan Curtis is the Senior Vice President of Generative AI at Sama, where he leads the development of AI-powered tools that are shaping the future of data annotation. 
With a background in product leadership and machine learning, Duncan has spent his career building scalable systems that bridge cutting-edge technology with real-world impact. Before joining Sama, he led teams at companies like Google, where he worked on large-scale personalization systems, and contributed to AI product strategy across multiple sectors. At Sama, he's focused on harnessing the power of generative AI to improve quality, speed, and efficiency, all while keeping human oversight and ethical practices at the core. Duncan brings a unique perspective to the AI space: one that's grounded in technical expertise, but always oriented toward practical solutions and responsible innovation. In the episode, Richie, Wendy, and Duncan explore the importance of using specialized data with large language models, the role of data enrichment in improving AI accuracy, the balance between automation and human oversight, the significance of responsible AI practices, and much more.

Links Mentioned in the Show:
- Sama
- Connect with Wendy
- Connect with Duncan
- Course: Generative AI Concepts
- Related Episode: Creating High Quality AI Applications with Theresa Parker & Sudhi Balan, Rocket Software
- Register for RADAR AI

New to DataCamp? Learn on the go...

In this episode, Conor recommends some articles on AI and LLMs.

Link to Episode 239 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)

Socials
- ADSP: The Podcast: Twitter
- Conor Hoekstra: Twitter | BlueSky | Mastodon

Show Notes
Date Generated: 2025-06-19
Date Released: 2025-06-20
- The Real Python Podcast Episode 253
- My AI Skeptic Friends Are All Nuts - Thomas Ptacek
- I Think I'm Done Thinking About genAI For Now - Glyph
- AI Changes Everything - Armin Ronacher

Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons Attribution 3.0 Unported (CC BY 3.0)
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8

Building Neo4j-Powered Applications with LLMs

Dive into building applications that combine the power of Large Language Models (LLMs) with Neo4j knowledge graphs, Haystack, and Spring AI to deliver intelligent, data-driven recommendations and search outcomes. This book provides actionable insights and techniques to create scalable, robust solutions by leveraging the best-in-class frameworks and a real-world project-oriented approach.

What this Book will help me do
- Understand how to use Neo4j to build knowledge graphs integrated with LLMs for enhanced data insights.
- Develop skills in creating intelligent search functionalities by combining Haystack and vector-based graph techniques.
- Learn to design and implement recommendation systems using LangChain4j and Spring AI frameworks.
- Acquire the ability to optimize graph data architectures for LLM-driven applications.
- Gain proficiency in deploying and managing applications on platforms like Google Cloud for scalability.

Author(s)
Ravindranatha Anthapu, a Principal Consultant at Neo4j, and Siddhant Agarwal, a Google Developer Expert in Generative AI, bring together their vast experience to offer practical implementations and cutting-edge techniques in this book. Their combined expertise in Neo4j, graph technology, and real-world AI applications makes them authoritative voices in the field.

Who is it for?
Designed for database developers and data scientists, this book caters to professionals aiming to leverage the transformational capabilities of knowledge graphs alongside LLMs. Readers should have a working knowledge of Python and Java as well as familiarity with Neo4j and the Cypher query language. If you're looking to enhance search or recommendation functionalities through state-of-the-art AI integrations, this book is for you.
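To give a flavor of the retrieval pattern such books cover: grounding an LLM answer in graph context typically means running a Cypher query and stitching the results into a prompt. A minimal Python sketch of that step; the Cypher query, node labels, property names, and stubbed rows are illustrative assumptions, not taken from the book:

```python
# Retrieve graph context for a question, then build an LLM prompt from it.
# In a real application the query would run via the Neo4j driver; here the
# result rows are stubbed so the prompt-building step is self-contained.

CYPHER = """
MATCH (p:Product)-[:SIMILAR_TO]->(rec:Product)
WHERE p.name = $name
RETURN rec.name AS name, rec.summary AS summary
LIMIT $k
"""

def build_prompt(question, rows):
    """Format graph query results as grounding context for an LLM call."""
    context = "\n".join(f"- {r['name']}: {r['summary']}" for r in rows)
    return (
        "Answer using only the graph context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

rows = [  # stand-in for session.run(CYPHER, name="GraphDB", k=2)
    {"name": "Neo4j Aura", "summary": "managed graph database"},
    {"name": "GDS Library", "summary": "graph algorithms toolkit"},
]
print(build_prompt("What should I evaluate next?", rows))
```

The same shape applies whether the orchestration layer is Haystack, Spring AI, or LangChain4j: query the graph, format the rows, and pass them to the model alongside the user's question.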