Welcome to DataFramed Industry Roundups! In this series of episodes, Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we discuss the rise of reasoning LLMs like DeepSeek R1 and the competition shaping the AI space, OpenAI’s Operator and the broader push for AI agents to control computers, and the implications of massive AI infrastructure investments like Project Stargate. We also touch on Google’s overlooked AI advancements, the challenges of AI adoption, the potential of Replit’s mobile app for building apps with natural language, and much more. Links Mentioned in the Show:
YouTube Tutorial: Fine Tune DeepSeek R1 | Build a Medical Chatbot
OpenAI Deep Research
Open Operator
Gemini 2.0
Lex Fridman Podcast Episode on DeepSeek
Removing Barriers to American Leadership in Artificial Intelligence
President's Council of Advisors on Science and Technology
Project Stargate announcements from OpenAI, Softbank
Sam Altman's quest for $7tn
Replit Mobile App
Sign up to attend RADAR: Skills Edition
New to DataCamp? Learn on the go using the DataCamp mobile app
Empower your business with world-class data and AI skills with DataCamp for business
talk-data.com
Explore Generative AI and understand its key concepts, architecture, and tangible business use cases. This book will help you develop the skills needed to use the SAP AI Core service features available in the SAP Business Technology Platform. You’ll examine large language model (LLM) concepts and gain the practical knowledge to make the best use of GenAI. As you progress, you’ll learn how to get started with your own LLM models and work through Generative AI use cases. Additionally, you’ll see how to take advantage of the Amazon Bedrock stack using the AWS SDK for ABAP. To fully leverage your knowledge, Generative AI with SAP and Amazon Bedrock offers practical step-by-step instructions for establishing a cloud SAP BTP account and creating your first GenAI artifacts. This work is an important prerequisite for those who want to take full advantage of generative AI with SAP. What You Will Learn Master the concepts and terminology of artificial intelligence and GenAI Understand opportunities and impacts for different industries with GenAI Become familiar with SAP AI Core, Amazon Bedrock, and the AWS SDK for ABAP, and develop your first GenAI projects Accelerate your development skills Save time and gain productivity when implementing GenAI use cases Who This Book Is For Anyone who wants to learn about Generative AI for the enterprise, and SAP practitioners who want to take advantage of AI within the SAP ecosystem to support their systems and workflows.
Drawing from her experience at Google and Meta, Dr. Marily Nika delivers the definitive guide for product managers building AI and GenAI powered products. Packed with smart strategies, actionable tools, and real-world examples, this book breaks down the complex world of AI agents and generative AI products into a playbook for driving innovation, helping product leaders bridge the gap between niche AI and GenAI technologies and user pain points. Whether you're already leading product teams or are an aspiring product manager, and regardless of your prior experience with AI, this guide will empower you to confidently navigate every stage of the AI product lifecycle. Confidently manage AI product development with tools, frameworks, strategic insights, and real-world examples from Google, Meta, OpenAI, and more Lead product orgs to solve real problems via agentic AI and GenAI capabilities Gain AI awareness and technical fluency to work with AI models, LLMs, and the algorithms that power them; get cross-functional alignment; make strategic trade-offs; and set OKRs
If you're looking to build production-ready AI applications that can reason and retrieve external data for context-awareness, you'll need to master LangChain, a popular development framework and platform for building, running, and managing agentic applications. LangChain is used by several leading companies, including Zapier, Replit, Databricks, and many more. This guide is an indispensable resource for developers who understand Python or JavaScript but are beginners eager to harness the power of AI. Authors Mayo Oshin and Nuno Campos demystify the use of LangChain through practical insights and in-depth tutorials. Starting with basic concepts, this book shows you step-by-step how to build a production-ready AI agent that uses your data. Harness the power of retrieval-augmented generation (RAG) to enhance the accuracy of LLMs using up-to-date external data Develop and deploy AI applications that interact intelligently and contextually with users Make use of the powerful agent architecture with LangGraph Integrate and manage third-party APIs and tools to extend the functionality of your AI applications Monitor, test, and evaluate your AI applications to improve performance Understand the foundations of LLM app development and how they can be used with LangChain
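The retrieve-then-generate loop behind RAG can be sketched without any framework. This toy version substitutes keyword overlap for a real retriever and stops at prompt assembly; every name here is illustrative and is not LangChain's API:

```python
# Toy RAG pipeline: retrieve relevant documents, then build a grounded prompt.
# Real systems use embeddings or BM25 instead of raw keyword overlap.

def score(query, doc):
    """Count how many query words appear in the document (case-insensitive)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "LangChain composes LLM calls, tools, and retrievers into chains.",
    "LangGraph models agents as graphs with cycles and persistent state.",
    "Zapier automates workflows across thousands of web apps.",
]
print(build_prompt("How does LangGraph model agents?", docs))
```

The point of the pattern is that the model answers from retrieved context rather than from its weights alone, which is what keeps responses current and grounded.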
Send us a text Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don’t), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style! This week, we break down some of the biggest developments in AI, investments, and automation:
France’s AI Boom: $85 billion in investments – A look at how a mix of international and domestic funds is fueling France’s AI ecosystem, and why Mistral AI might be Europe's best shot at competing with OpenAI.
Anthropic’s AI Job Index: Who’s using AI at work? – A deep dive into the latest report on how AI is being used across industries, from software development to education, and the surprising ways automation is creeping into unexpected jobs.
The $6 AI Model: How low can costs go? – Researchers have managed to create a reasoning model for just $6. We unpack how they pulled it off and what this means for the AI landscape.
AI Censorship & Model Distillation: What’s really going on? – A discussion of recent claims that certain AI models come with baked-in censorship, and whether fine-tuning is playing a bigger role than we think.
PromptLayer’s No-Code AI Tools – Are no-code AI development platforms the next big thing?
Predicted Outputs: OpenAI’s approach to efficient code editing – A look at how OpenAI’s "Predicted Outputs" feature could make AI-assisted coding more efficient.
MacOS System Monitoring & Dev Tooling: The geeky stuff – A breakdown of system monitoring tools for Mac users who love to keep an eye on every process running in the background.
Snapshot Testing with Birdie – Exploring the concept of snapshot testing beyond UI testing and into function outputs.
BeeWare & the Python Ecosystem – A look at how BeeWare is helping Python developers build cross-platform applications.
Astral, Ruff, and UV: Python’s performance evolution – The latest from Charlie Marsh on the tools shaping Python development.
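Snapshot testing of function outputs, the idea discussed in the Birdie segment, is easy to hand-roll. This sketch is independent of Birdie's actual API; the file layout and helper names are invented for illustration:

```python
# Minimal snapshot test: serialize a function's output and compare it to a
# stored "snapshot" file. The first run records; later runs verify.
import json
from pathlib import Path

SNAP_DIR = Path("snapshots")

def check_snapshot(name, value):
    """Return True if value matches the stored snapshot (recording it if absent)."""
    SNAP_DIR.mkdir(exist_ok=True)
    snap = SNAP_DIR / f"{name}.json"
    rendered = json.dumps(value, indent=2, sort_keys=True)
    if not snap.exists():
        snap.write_text(rendered)        # first run: record the snapshot
        return True
    return snap.read_text() == rendered  # later runs: detect regressions

def summarize(nums):
    return {"count": len(nums), "total": sum(nums)}

print(check_snapshot("summarize_basic", summarize([1, 2, 3])))
```

The appeal over hand-written assertions is that the expected value is captured automatically and any change in the function's output shows up as a readable file diff.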
Create LLM-powered autonomous agents and intelligent assistants tailored to your business and personal needs. From script-free customer service chatbots to fully independent agents operating seamlessly in the background, AI-powered assistants represent a breakthrough in machine intelligence. In AI Agents in Action, you'll master a proven framework for developing practical agents that handle real-world business and personal tasks. Author Micheal Lanham combines cutting-edge academic research with hands-on experience to help you: Understand and implement AI agent behavior patterns Design and deploy production-ready intelligent agents Leverage the OpenAI Assistants API and complementary tools Implement robust knowledge management and memory systems Create self-improving agents with feedback loops Orchestrate collaborative multi-agent systems Enhance agents with speech and vision capabilities You won't find toy examples or fragile assistants that require constant supervision. AI Agents in Action teaches you to build trustworthy AI capable of handling high-stakes negotiations. You'll master prompt engineering to create agents with distinct personas and profiles, and develop multi-agent collaborations that thrive in unpredictable environments. Beyond just learning a new technology, you'll discover a transformative approach to problem-solving. About the Technology Most production AI systems require many orchestrated interactions between the user, AI models, and a wide variety of data sources. AI agents capture and organize these interactions into autonomous components that can process information, make decisions, and learn from interactions behind the scenes. This book will show you how to create AI agents and connect them together into powerful multi-agent systems. About the Book In AI Agents in Action, you’ll learn how to build production-ready assistants, multi-agent systems, and behavioral agents. 
You’ll master the essential parts of an agent, including retrieval-augmented knowledge and memory, while you create multi-agent applications that can use software tools, plan tasks autonomously, and learn from experience. As you explore the many interesting examples, you’ll work with state-of-the-art tools like OpenAI Assistants API, GPT Nexus, LangChain, Prompt Flow, AutoGen, and CrewAI. What's Inside Knowledge management and memory systems Feedback loops for continuous agent learning Collaborative multi-agent systems Speech and computer vision About the Reader For intermediate Python programmers. About the Author Micheal Lanham is a software and technology innovator with over 20 years of industry experience. He has authored books on deep learning, including Manning’s Evolutionary Deep Learning. Quotes This is about to become the hottest area of applied AI. Get a head start with this book! - Richard Davies, author of Prompt Engineering in Practice Couldn’t put this book down! It’s so comprehensive and clear that I felt like I was learning from a master teacher. - Radhika Kanubaddhi, Amazon An enlightening journey! This book transformed my questions into answers. - Jose San Leandro, ACM-SL Expertly guides you through creating agent profiles, using tools, memory, planning, and multi-agent systems. Couldn’t be more timely! - Grigory Sapunov, author of JAX in Action
Business runs on tabular data in databases, spreadsheets, and logs. Crunch that data using deep learning, gradient boosting, and other machine learning techniques. Machine Learning for Tabular Data teaches you to train insightful machine learning models on common tabular business data sources such as spreadsheets, databases, and logs. You’ll discover how to use XGBoost and LightGBM on tabular data, optimize deep learning libraries like TensorFlow and PyTorch for tabular data, and use cloud tools like Vertex AI to create an automated MLOps pipeline. Machine Learning for Tabular Data will teach you how to: Pick the right machine learning approach for your data Apply deep learning to tabular data Deploy tabular machine learning locally and in the cloud Build pipelines to automatically train and maintain a model Machine Learning for Tabular Data covers classic machine learning techniques like gradient boosting, as well as more contemporary deep learning approaches. By the time you’re finished, you’ll be equipped with the skills to apply machine learning to the kinds of data you work with every day. About the Technology Machine learning can accelerate everyday business chores like account reconciliation, demand forecasting, and customer service automation—not to mention more exotic challenges like fraud detection, predictive maintenance, and personalized marketing. This book shows you how to unlock the vital information stored in spreadsheets, ledgers, databases and other tabular data sources using gradient boosting, deep learning, and generative AI. About the Book Machine Learning for Tabular Data delivers practical ML techniques to upgrade every stage of the business data analysis pipeline. In it, you’ll explore examples like using XGBoost and Keras to predict short-term rental prices, deploying a local ML model with Python and Flask, and streamlining workflows using large language models (LLMs). 
Along the way, you’ll learn to make your models both more powerful and more explainable. What's Inside Master XGBoost Apply deep learning to tabular data Deploy models locally and in the cloud Build pipelines to train and maintain models About the Reader For readers experienced with Python and the basics of machine learning. About the Authors Mark Ryan is the AI Lead of the Developer Knowledge Platform at Google. A three-time Kaggle Grandmaster, Luca Massaron is a Google Developer Expert (GDE) in machine learning and AI. He has published 17 other books.
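Gradient boosting, the workhorse behind XGBoost and LightGBM, is conceptually small: each round fits a weak learner to the current residuals and adds its damped predictions to the ensemble. A toy one-feature version with decision stumps, purely illustrative and nothing like the libraries' optimized implementations:

```python
# Toy gradient boosting for squared loss: each stump fits the residuals,
# and predictions accumulate with a learning rate (shrinkage).

def fit_stump(x, residuals):
    """Best single split on one feature, minimizing squared error."""
    best = None
    for threshold in x:
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, threshold, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost(x, y, rounds=50, lr=0.1):
    """Return a prediction function built from residual-fitted stumps."""
    base = sum(y) / len(y)
    stumps = []
    pred = [base] * len(x)
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + lr * sum(s(xi) for s in stumps)

# Step function: y jumps from 1 to 5 at x = 3.
x = [0, 1, 2, 3, 4, 5, 6]
y = [1, 1, 1, 1, 5, 5, 5]
model = boost(x, y)
print(round(model(1), 1), round(model(5), 1))
```

The real libraries add regularized tree growth, histogram binning, and clever sparsity handling, but the additive residual-fitting loop above is the core of the method.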
Send us a text Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions on tech, news, data, and society. This week, we’re joined by returning guest Tim Leers, who helps us navigate the ever-evolving landscape of AI regulation, open-source controversies, and the battle for the future of large language models. Expect deep dives, hot takes, and a sprinkle of existential dread as we discuss:
The EU AI Act and its ripple effects – What does it actually change? And is Meta pulling back on AI development because of it?
Meta’s “Frontier AI” framework – A strategic move or just regulatory camouflage?
OpenAI vs. the world – From copyright drama to OpenAI accusing competitors of using its models, is this just karma in action?
DeepSeek and global AI competition – Why are government agencies banning it, and is it really a game-changer?
The EU’s AI investment plans – Can Europe ever catch up, or is 1.5 billion euros just a drop in the compute ocean?
OpenAI’s sudden love for open source – Sam Altman says they were on the "wrong side of history." Are they really changing, or is this just another strategic pivot?
OpenAI’s latest tech update – We discuss Tim’s experience with o3 and show it live
All that, plus some existential musings on AI’s role in society, competitive dynamics between the US, EU, and China, and whether we’re all just picking our preferred bias in a world of competing LLMs. Got thoughts? Drop us a comment or question—we might even read it on the next episode!
Supported by Our Partners • Swarmia — The engineering intelligence platform for modern software organizations. • Graphite — The AI developer productivity platform. • Vanta — Automate compliance and simplify security with Vanta. — On today’s episode of The Pragmatic Engineer, I’m joined by Chip Huyen, a computer scientist, author of the freshly published O’Reilly book AI Engineering, and an expert in applied machine learning. Chip has worked as a researcher at Netflix, was a core developer at NVIDIA (building NeMo, NVIDIA’s GenAI framework), and co-founded Claypot AI. She also taught Machine Learning at Stanford University. In this conversation, we dive into the evolving field of AI Engineering and explore key insights from Chip’s book, including: • How AI Engineering differs from Machine Learning Engineering • Why fine-tuning is usually not a tactic you’ll want (or need) to use • The spectrum of solutions to customer support problems – some not even involving AI! • The challenges of LLM evals (evaluations) • Why project-based learning is valuable—but even better when paired with structured learning • Exciting potential use cases for AI in education and entertainment • And more! — Timestamps (00:00) Intro (01:31) A quick overview of AI Engineering (05:00) How Chip ensured her book stays current amidst the rapid advancements in AI (09:50) A definition of AI Engineering and how it differs from Machine Learning Engineering (16:30) Simple first steps in building AI applications (22:53) An explanation of BM25 (retrieval system) (23:43) The problems associated with fine-tuning (27:55) Simple customer support solutions for rolling out AI thoughtfully (33:44) Chip’s thoughts on staying focused on the problem (35:19) The challenge in evaluating AI systems (38:18) Use cases in evaluating AI (41:24) The importance of prioritizing users’ needs and experience (46:24) Common mistakes made with Gen AI (52:12) A case for systematic problem solving (53:13) Project-based learning vs. 
structured learning (58:32) Why AI is not the end of engineering (1:03:11) How AI is helping education and the future use cases we might see (1:07:13) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • Applied AI Software Engineering: RAG https://newsletter.pragmaticengineer.com/p/rag • How do AI software engineering agents work? https://newsletter.pragmaticengineer.com/p/ai-coding-agents • AI Tooling for Software Engineers in 2024: Reality Check https://newsletter.pragmaticengineer.com/p/ai-tooling-2024 • IDEs with GenAI features that Software Engineers love https://newsletter.pragmaticengineer.com/p/ide-that-software-engineers-love — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
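BM25, the retrieval system Chip explains at 22:53, is a bag-of-words scorer that rewards term frequency with saturation and penalizes long documents. A minimal version of the standard formula, with the common defaults k1 = 1.5 and b = 0.75:

```python
# Minimal BM25 scorer over a toy corpus of pre-tokenized documents.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Return a BM25 score for each document against the query terms."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
            s += idf * norm
        scores.append(s)
    return scores

docs = [
    "the cat sat on the mat".split(),
    "dogs and cats living together".split(),
    "the quick brown fox".split(),
]
scores = bm25_scores("cat mat".split(), docs)
print(scores.index(max(scores)))  # the first document matches both query terms
```

This is why BM25 remains the recommended first step before reaching for embedding-based retrieval: it is cheap, interpretable, and surprisingly hard to beat on keyword-heavy queries.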
Data Hackers News is on the air!! The hottest topics of the week, with the top news in Data, AI, and Technology, which you can also find in our weekly newsletter, now on the Data Hackers podcast!!
Hit play and listen to this week's Data Hackers News now!
To keep up with everything happening in the data world, sign up for the weekly newsletter:
https://www.datahackers.news/
Meet the Data Hackers News commentators:
Monique Femme
Paulo Vasconcellos
Stories/topics discussed:
OpenAI announces o3-mini for free;
DeepSeek becomes the most downloaded app on the App Store;
OpenAI launches Deep Research
Other Data Hackers channels:
Site
LinkedIn
Instagram
TikTok
YouTube
I’m doing things a bit differently for this episode of Experiencing Data. For the first time on the show, I’m hosting a panel discussion. I’m joined by Thomson Reuters’s Simon Landry, Sumo Logic’s Greg Nudelman, and Google’s Paz Perez to chat about how we design user experiences that improve people’s lives and create business impact when we expose LLM capabilities to our users.
With the rise of AI, there are many opportunities for innovation, but there are also many challenges—and frankly, my feeling is that a lot of these capabilities right now are making things worse for users, not better. We’re looking at a range of topics such as the pros and cons of AI-first thinking, collaboration between UX designers and ML engineers, and the necessity of diversifying design teams when integrating AI and LLMs into B2B products.
Highlights / Skip to
Thoughts on the current state of LLM implementations and their impact on user experience (1:51) The problems that can come with the "AI-first" design philosophy (7:58) Should a company's design resources go toward AI development? (17:20) How designers can navigate "fuzzy experiences" (21:28) Why you need to narrow and clearly define the problems you’re trying to solve when building LLM products (27:35) Why diversity matters in your design and research teams when building with LLMs (31:56) Where you can find more from Paz, Greg, and Simon (40:43)
Quotes from Today’s Episode
“ [AI] will connect the dots. It will argue pro, it will argue against, it will create evidence supporting and refuting, so it’s really up to us to kind of drive this. If we understand the capabilities, then it is an almost limitless field of possibility. And these things are taught, and it’s a fundamentally different approach to how we build user interfaces. They’re no longer completely deterministic. They’re also extremely personalized to the point where it’s ridiculous.” - Greg Nudelman (12:47) “ To put an LLM into a product means that there’s a non-zero chance your user is going to have a [negative] experience and no longer be your customer. That is a giant reputational risk, and there’s also a financial cost associated with running these models. I think we need to take more of a service design lens when it comes to [designing our products with AI] and ask what is the thing somebody wants to do… not on my website, but in their lives? What brings them to my [product]? How can I imagine a different world that leverages these capabilities to help them do their job? Because what [designers] are competing against is [a customer workflow] that probably worked well enough.” - Simon Landry (15:41) “ When we go general availability (GA) with a product, that traditionally means [designers] have done all the research, got everything perfect, and it’s all great, right? Today, GA is a starting gun. We don’t know [if the product is working] unless we [seek out user feedback]. A massive research method is needed. [We need qualitative research] like sitting down with the customer and watching them use the product to really understand what is happening[…] but you also need to collect data. What are they typing in? What are they getting back? Is somebody who’s typing in this type of question always having a short interaction? Let’s dig into it with rapid, iterative testing and evaluation, so that we can update our model and then move forward. 
Launching a product these days means the starting guns have been fired. Put the research to work to figure out the next step.” - Greg Nudelman (23:29) “ I think that having diversity on your design team (i.e. gender, level of experience, etc.) is critical. We’ve already seen some terrible outcomes. Multiple examples where an LLM is crafting horrendous emails, introductions, and so on. This is exactly why UXers need to get involved [with building LLMs]. This is why diversity in UX and on your tech team that deals with AI is so valuable. Number one piece of advice: get some researchers. Number two: make sure your team is diverse.” - Greg Nudelman (32:39) “ It’s extremely important to have UX talks with researchers, content designers, and data teams. It’s important to understand what a user is trying to do, the context [of their decisions], and the intention. [Designers] need to help [the data team] understand the types of data and prompts being used to train models. Those things are better when they’re written and thought of by [designers] who understand where the user is coming from. [Design teams working with data teams] are getting much better results than the [teams] that are working in a vacuum.” - Paz Perez (35:19)
Links
Milly Barker’s LinkedIn post
Greg Nudelman’s Value Matrix article
Greg Nudelman’s website
Paz Perez on Medium
Paz Perez on LinkedIn
Simon Landry on LinkedIn
Looking for something? Whether it's a product, information, or inspiration, algorithms are anticipating your needs and delivering answers before you even know the question. Powered by LLMs and AI, these systems are redefining the activities of search and discovery - and the types of data needed to power these new feedback loops.
This session explores the rise of Lakehouse architecture and its industry-wide adoption, highlighting its ability to simplify Data Management. We’ll also examine how Large Language Models (LLMs) are transforming Data Engineering, enabling analysts to solve complex problems that once required advanced technical skills.
It’s been more than two years since the advent of Large Language Models with the release of ChatGPT. Billions of dollars have been invested in GenAI, and the media foreshadows the end of IT work as we know it.
In this podcast episode, we talked with Andrey Cheptsov about the future of AI infrastructure.
About the Speaker: Andrey Cheptsov is the founder and CEO of dstack, an open-source alternative to Kubernetes and Slurm, built to simplify the orchestration of AI infrastructure. Before dstack, Andrey worked at JetBrains for over a decade, helping different teams build the best developer tools. During the event, Andrey discussed the complexities of AI infrastructure. We explore topics like the challenges of using Kubernetes for AI workloads, the need to rethink container orchestration, and the future of hybrid and cloud-only infrastructures. Andrey also shares insights into the role of on-premise and bare-metal solutions, edge computing, and federated learning. 00:00 Andrey's Career Journey: From JetBrains to dstack 05:00 The Motivation Behind dstack 07:00 Challenges in Machine Learning Infrastructure 10:00 Transitioning from Cloud to On-Prem Solutions 14:30 Reflections on OpenAI's Evolution 17:30 Open Source vs Proprietary Models: A Balanced Perspective 21:01 Monolithic vs. Decentralized AI Businesses 22:05 The Role of Privacy and Control in AI for Industries like Banking and Healthcare 30:00 Challenges in Training Large AI Models: GPUs and Distributed Systems 37:03 DeepSpeed's Efficient Training Approach vs. Brute Force Methods 39:00 Challenges for Small and Medium Businesses: Hosting and Fine-Tuning Models 47:01 Managing Kubernetes Challenges for AI Teams 52:00 Hybrid vs. Cloud-Only Infrastructure 56:03 On-Premise vs. Bare-Metal Solutions 58:05 Exploring Edge Computing and Its Challenges
🔗 CONNECT WITH ANDREY CHEPTSOV Twitter - andrey_cheptsov LinkedIn - andrey-cheptsov GitHub - https://github.com/dstackai/dstack/ Website - https://dstack.ai/
🔗 CONNECT WITH DataTalksClub Join DataTalks.Club: https://datatalks.club/slack.html Our events: https://datatalks.club/events.html Datalike Substack: https://datalike.substack.com/ LinkedIn - datatalks-club
Send us a text Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. This week, we’re joined by Jonas Soenen, a machine learning engineer at Dataroots, to break down the latest AI shakeups—from DeepSeek R1 challenging OpenAI to new AI automation tools that might just change how we use the internet. Let’s dive in:
DeepSeek R1: Open-source revolution or just open weights? – A new AI model making waves with transparency and cost efficiency. But is OpenAI really at risk?
Reinforcement learning, no tricks needed – How DeepSeek R1 trains without complex search trees or hidden techniques—and why that’s a big deal.
Web LM Arena’s leaderboard – How DeepSeek R1 ranks against OpenAI, Anthropic, and other top models in real-world coding tasks.
Kimi – Another promising open-weight model challenging the AI giants. Could this be the real alternative to GPT-4?
Open-source AI and industry reactions – Why are companies like OpenAI hesitant to embrace open-source AI, and will DeepSeek’s approach change the game?
ByteDance’s surprise AI play – The TikTok parent company is quietly building its own powerful AI models—should OpenAI and Google be worried?
OpenAI’s Stargate project – A massive $500B AI infrastructure initiative—how does this impact AI accessibility and competition?
OpenAI’s Operator: Your new AI assistant? – A browser-based agent that can shop for you, browse the web, and click buttons—but how secure is it?
Midscene & UI-TARS Desktop – AI-powered automation tools that might soon replace traditional workflows.
Nightshade – A new method for artists to poison AI training data, protecting their work from unauthorized AI-generated copies.
Nepenthes – A tool designed to fight back against LLM text scrapers—could this help protect data from being swallowed into future AI models?
AI in music: Paul McCartney vs. AI-generated songs – The legendary Beatle wants stronger copyright protections, but is AI creativity a threat or a tool?
📢 Note: Recent press coverage has clarified key details. Training infrastructure and cost figures mentioned were for DeepSeek V3—DeepSeek R1’s actual training costs have not been officially disclosed.
Supported by Our Partners • Formation — Level up your career and compensation with Formation. • WorkOS — The modern identity platform for B2B SaaS • Vanta — Automate compliance and simplify security with Vanta. — In today’s episode of The Pragmatic Engineer, I’m joined by Jonas Tyroller, one of the developers behind Thronefall, a minimalist indie strategy game that blends tower defense and kingdom-building, now available on Steam. Jonas takes us through the journey of creating Thronefall from start to finish, offering insights into the world of indie game development. We explore: • Why indie developers often skip traditional testing and how they find bugs • The developer workflow using Unity, C# and Blender • The two types of prototypes game developers build • Why Jonas spent months building game prototypes in 1-2 days • How Jonas uses ChatGPT to build games • Jonas’s tips on making games that sell • And more! — Timestamps (00:00) Intro (02:07) Building in Unity (04:05) What the shader tool is used for (08:44) How a Unity build is structured (11:01) How game developers write and debug code (16:21) Jonas’s Unity workflow (18:13) Importing assets from Blender (21:06) The size of Thronefall and how it can be so small (24:04) Jonas’s thoughts on code review (26:42) Why practices like code review and source control might not be relevant for all contexts (30:40) How Jonas and Paul ensure the game is fun (32:25) How Jonas and Paul used beta testing feedback to improve their game (35:14) The mini-games in Thronefall and why they are so difficult (38:14) The struggle to find the right level of difficulty for the game (41:43) Porting to Nintendo Switch (45:11) The prototypes Jonas and Paul made to get to Thronefall (46:59) The challenge of finding something you want to build that will sell (47:20) Jonas’s ideation process and how they figure out what to build (49:35) How Thronefall evolved from a mini-game prototype (51:50) How long you spend on prototyping (52:30) A lesson 
in failing fast (53:50) The gameplay prototype vs. the art prototype (55:53) How Jonas and Paul distribute work (57:35) Next steps after having the play prototype and art prototype (59:36) How a launch on Steam works (1:01:18) Why pathfinding was the most challenging part of building Thronefall (1:08:40) Gen AI tools for building indie games (1:09:50) How Jonas uses ChatGPT for editing code and as a translator (1:13:25) The pros and cons of being an indie developer (1:15:32) Jonas’s advice for software engineers looking to get into indie game development (1:19:32) What to look for in a game design school (1:22:46) How luck figures into success and Jonas’s tips for building a game that sells (1:26:32) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • Game development basics https://newsletter.pragmaticengineer.com/p/game-development-basics • Building a simple game using Unity https://newsletter.pragmaticengineer.com/p/building-a-simple-game — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Está no ar, o Data Hackers News !! Os assuntos mais quentes da semana, com as principais notícias da área de Dados, IA e Tecnologia, que você também encontra na nossa Newsletter semanal, agora no Podcast do Data Hackers !!
Aperte o play e ouça agora, o Data Hackers News dessa semana !
Para saber tudo sobre o que está acontecendo na área de dados, se inscreva na Newsletter semanal:
https://www.datahackers.news/
Meet the Data Hackers News commentators:
Monique Femme
Paulo Vasconcellos
Stories/topics discussed:
After the DeepSeek effect, Nvidia's shares plummet;
OpenAI announces Operator: an AI that uses the browser to perform tasks.
Other Data Hackers channels:
Website
LinkedIn
Instagram
TikTok
YouTube
As multimodal AI continues to grow, professionals are exploring new skills to harness its potential. From understanding real-time APIs to navigating new application architectures, the landscape is shifting. How can developers stay ahead in this evolving field? What opportunities do AI agents present for automating tasks and enhancing productivity? And how can businesses ensure they're ready for the future of AI-driven interactions? Russ D'Sa is the CEO & Co-founder of LiveKit. Russ is building the transport layer for AI computing. He founded LiveKit, the company that powers voice chat for OpenAI and Character.ai. Previously, he was a Product Manager at Medium and an engineer at Twitter. He's also a serial entrepreneur, having previously founded the mobile search platform Evie Labs. In the episode, Richie and Russ explore the evolution of voice AI, the challenges of building voice applications, the rise of video AI, the implications of deepfakes, the potential of AI-generated worlds, the future of AI in customer service and education, and much more. Links Mentioned in the Show: LiveKitChatGPT VoiceCourse: Developing LLM Applications with LangChainRelated Episode: Creating High Quality AI Applications with Theresa Parker & Sudhi Balan, Rocket SoftwareRewatch sessions from RADAR: Forward Edition New to DataCamp? Learn on the go using the DataCamp mobile appEmpower your business with world-class data and AI skills with DataCamp for business
Learn how machine learning algorithms work from the ground up so you can effectively troubleshoot your models and improve their performance. Fully understanding how machine learning algorithms function is essential for any serious ML engineer. In Machine Learning Algorithms in Depth you’ll explore practical implementations of dozens of ML algorithms, including: • Monte Carlo stock price simulation • Image denoising using mean-field variational inference • The EM algorithm for hidden Markov models • Imbalanced learning, active learning, and ensemble learning • Bayesian optimization for hyperparameter tuning • Dirichlet process K-means for clustering applications • Stock clusters based on inverse covariance estimation • Energy minimization using simulated annealing • Image search based on a ResNet convolutional neural network • Anomaly detection in time series using variational autoencoders Machine Learning Algorithms in Depth dives into the design and underlying principles of some of the most exciting machine learning (ML) algorithms in the world today. With a particular emphasis on probabilistic algorithms, you’ll learn the fundamentals of Bayesian inference and deep learning. You’ll also explore the core data structures and algorithmic paradigms for machine learning. Each algorithm is fully explored with both math and practical implementations so you can see how they work and how they’re put into action. About the Technology This book guides you from the core mathematical foundations of the most important ML algorithms to their Python implementations, with a particular focus on probability-based methods. About the Book Machine Learning Algorithms in Depth dissects and explains dozens of algorithms across a variety of applications, including finance, computer vision, and NLP. 
Each algorithm is mathematically derived, followed by its hands-on Python implementation along with insightful code annotations and informative graphics. You’ll especially appreciate author Vadim Smolyakov’s clear interpretations of Bayesian algorithms for Monte Carlo and Markov models. What's Inside • Monte Carlo stock price simulation • EM algorithm for hidden Markov models • Imbalanced learning, active learning, and ensemble learning • Bayesian optimization for hyperparameter tuning • Anomaly detection in time series About the Reader For machine learning practitioners familiar with linear algebra, probability, and basic calculus. About the Author Vadim Smolyakov is a data scientist in the Enterprise & Security DI R&D team at Microsoft. Quotes "I love this book! It shows you how to implement common ML algorithms in plain Python with only the essential libraries, so you can see how the computation and math work in practice." - Junpeng Lao, Senior Data Scientist at Google "I highly recommend this book. In the era of ChatGPT, real knowledge of algorithms is invaluable." - Vatsal Desai, InfoDesk "Explains algorithms so well that even a novice can digest it." - Harsh Raval, Zymr
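To give a flavor of the first algorithm on that list, Monte Carlo stock price simulation, here is a minimal NumPy sketch. This is not the book's code: it assumes the standard geometric Brownian motion model, and the drift, volatility, and starting price are illustrative values chosen here.

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, n_steps, n_paths, dt=1/252, seed=0):
    """Simulate stock price paths under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    # i.i.d. standard normal shocks, one per path per time step
    z = rng.standard_normal((n_paths, n_steps))
    # Log-return per step: (mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    # Cumulative sum of log-returns, exponentiated, gives the price paths
    return s0 * np.exp(np.cumsum(log_returns, axis=1))

# One year of daily steps: 10,000 paths, 7% drift, 20% volatility
paths = simulate_gbm(s0=100.0, mu=0.07, sigma=0.2, n_steps=252, n_paths=10_000)
print(paths.shape)  # (10000, 252)
```

With enough paths, the average terminal price converges toward the analytic expectation s0 * exp(mu * T), which is the usual sanity check for this kind of simulation.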