talk-data.com

Topic: GenAI (Generative AI)
Tags: ai, machine_learning, llm (1517 tagged)
Activity Trend: 192 peak/qtr, 2020-Q1 to 2026-Q1

Activities: 1517 activities · Newest first

Machine Learning for Tabular Data

Business runs on tabular data in databases, spreadsheets, and logs. Crunch that data using deep learning, gradient boosting, and other machine learning techniques. Machine Learning for Tabular Data teaches you to train insightful machine learning models on common tabular business data sources such as spreadsheets, databases, and logs. You’ll discover how to use XGBoost and LightGBM on tabular data, adapt deep learning libraries like TensorFlow and PyTorch to tabular data, and use cloud tools like Vertex AI to create an automated MLOps pipeline. Machine Learning for Tabular Data will teach you how to: • Pick the right machine learning approach for your data • Apply deep learning to tabular data • Deploy tabular machine learning locally and in the cloud • Build pipelines to automatically train and maintain a model. The book covers classic machine learning techniques like gradient boosting as well as more contemporary deep learning approaches. By the time you’re finished, you’ll be equipped with the skills to apply machine learning to the kinds of data you work with every day. About the Technology: Machine learning can accelerate everyday business chores like account reconciliation, demand forecasting, and customer service automation, not to mention more exotic challenges like fraud detection, predictive maintenance, and personalized marketing. This book shows you how to unlock the vital information stored in spreadsheets, ledgers, databases, and other tabular data sources using gradient boosting, deep learning, and generative AI. About the Book: Machine Learning for Tabular Data delivers practical ML techniques to upgrade every stage of the business data analysis pipeline. In it, you’ll explore examples like using XGBoost and Keras to predict short-term rental prices, deploying a local ML model with Python and Flask, and streamlining workflows using large language models (LLMs). 
Along the way, you’ll learn to make your models both more powerful and more explainable. What's Inside: • Master XGBoost • Apply deep learning to tabular data • Deploy models locally and in the cloud • Build pipelines to train and maintain models. About the Reader: For readers experienced with Python and the basics of machine learning. About the Authors: Mark Ryan is the AI Lead of the Developer Knowledge Platform at Google. Luca Massaron, a three-time Kaggle Grandmaster, is a Google Developer Expert (GDE) in machine learning and AI who has published 17 other books.

Supported by Our Partners • Swarmia — The engineering intelligence platform for modern software organizations. • Graphite — The AI developer productivity platform.  • Vanta — Automate compliance and simplify security with Vanta. — On today’s episode of The Pragmatic Engineer, I’m joined by Chip Huyen, a computer scientist, author of the freshly published O’Reilly book AI Engineering, and an expert in applied machine learning. Chip has worked as a researcher at Netflix, was a core developer at NVIDIA (building NeMo, NVIDIA’s GenAI framework), and co-founded Claypot AI. She also taught Machine Learning at Stanford University. In this conversation, we dive into the evolving field of AI Engineering and explore key insights from Chip’s book, including: • How AI Engineering differs from Machine Learning Engineering  • Why fine-tuning is usually not a tactic you’ll want (or need) to use • The spectrum of solutions to customer support problems – some not even involving AI! • The challenges of LLM evals (evaluations) • Why project-based learning is valuable—but even better when paired with structured learning • Exciting potential use cases for AI in education and entertainment • And more! 
— Timestamps (00:00) Intro  (01:31) A quick overview of AI Engineering (05:00) How Chip ensured her book stays current amidst the rapid advancements in AI (09:50) A definition of AI Engineering and how it differs from Machine Learning Engineering  (16:30) Simple first steps in building AI applications (22:53) An explanation of BM25 (retrieval system)  (23:43) The problems associated with fine-tuning  (27:55) Simple customer support solutions for rolling out AI thoughtfully  (33:44) Chip’s thoughts on staying focused on the problem  (35:19) The challenge in evaluating AI systems (38:18) Use cases in evaluating AI  (41:24) The importance of prioritizing users’ needs and experience  (46:24) Common mistakes made with Gen AI (52:12) A case for systematic problem solving  (53:13) Project-based learning vs. structured learning (58:32) Why AI is not the end of engineering (1:03:11) How AI is helping education and the future use cases we might see (1:07:13) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • Applied AI Software Engineering: RAG https://newsletter.pragmaticengineer.com/p/rag  • How do AI software engineering agents work? https://newsletter.pragmaticengineer.com/p/ai-coding-agents  • AI Tooling for Software Engineers in 2024: Reality Check https://newsletter.pragmaticengineer.com/p/ai-tooling-2024  • IDEs with GenAI features that Software Engineers love https://newsletter.pragmaticengineer.com/p/ide-that-software-engineers-love — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
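The episode's timestamps mention BM25, the classic lexical retrieval function often used as a baseline or complement to embedding-based retrieval in RAG systems. A minimal sketch of BM25 scoring, written directly from the standard formula with the usual defaults k1=1.5 and b=0.75; the documents and query here are invented, and a production system would use a tuned implementation such as Lucene's:

```python
# Minimal BM25 scoring over tokenized documents.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc in `docs` against the tokenized `query`."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N          # average doc length
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores

docs = [
    "the cat sat on the mat".split(),
    "dogs chase cats in the park".split(),
    "stock prices rose sharply today".split(),
]
print(bm25_scores("cat mat".split(), docs))
```

Note that BM25 matches exact tokens only ("cats" does not match "cat"), which is exactly the gap embedding-based retrieval aims to close.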

I’m doing things a bit differently for this episode of Experiencing Data. For the first time on the show, I’m hosting a panel discussion. I’m joined by Thomson Reuters’s Simon Landry, Sumo Logic’s Greg Nudelman, and Google’s Paz Perez to chat about how we design user experiences that improve people’s lives and create business impact when we expose LLM capabilities to our users. 

With the rise of AI, there are a lot of opportunities for innovation, but there are also many challenges—and frankly, my feeling is that a lot of these capabilities right now are making things worse for users, not better. We’re looking at a range of topics such as the pros and cons of AI-first thinking, collaboration between UX designers and ML engineers, and the necessity of diversifying design teams when integrating AI and LLMs into B2B products. 

Highlights/ Skip to:

Thoughts on the current state of LLM implementations and their impact on user experience (1:51)  The problems that can come with the “AI-first” design philosophy (7:58)  Should a company’s design resources go toward AI development? (17:20) How designers can navigate “fuzzy experiences” (21:28) Why you need to narrow and clearly define the problems you’re trying to solve when building LLM products (27:35) Why diversity matters in your design and research teams when building LLMs (31:56)  Where you can find more from Paz, Greg, and Simon (40:43)

Quotes from Today’s Episode

“ [AI] will connect the dots. It will argue pro, it will argue against, it will create evidence supporting and refuting, so it’s really up to us to kind of drive this. If we understand the capabilities, then it is an almost limitless field of possibility. And these things are taught, and it’s a fundamentally different approach to how we build user interfaces. They’re no longer completely deterministic. They’re also extremely personalized to the point where it’s ridiculous.” - Greg Nudelman (12:47) “ To put an LLM into a product means that there’s a non-zero chance your user is going to have a [negative] experience and no longer be your customer. That is a giant reputational risk, and there’s also a financial cost associated with running these models. I think we need to take more of a service design lens when it comes to [designing our products with AI] and ask what is the thing somebody wants to do… not on my website, but in their lives? What brings them to my [product]? How can I imagine a different world that leverages these capabilities to help them do their job? Because what [designers] are competing against is [a customer workflow] that probably worked well enough.” - Simon Landry (15:41) “ When we go general availability (GA) with a product, that traditionally means [designers] have done all the research, got everything perfect, and it’s all great, right? Today, GA is a starting gun. We don’t know [if the product is working] unless we [seek out user feedback]. A massive research method is needed. [We need qualitative research] like sitting down with the customer and watching them use the product to really understand what is happening[…] but you also need to collect data. What are they typing in? What are they getting back? Is somebody who’s typing in this type of question always having a short interaction? Let’s dig into it with rapid, iterative testing and evaluation, so that we can update our model and then move forward. 
Launching a product these days means the starting guns have been fired. Put the research to work to figure out the next step.” - Greg Nudelman (23:29) “ I think that having diversity on your design team (i.e. gender, level of experience, etc.) is critical. We’ve already seen some terrible outcomes. Multiple examples where an LLM is crafting horrendous emails, introductions, and so on. This is exactly why UXers need to get involved [with building LLMs]. This is why diversity in UX and on your tech team that deals with AI is so valuable. Number one piece of advice: get some researchers. Number two: make sure your team is diverse.” - Greg Nudelman (32:39) “ It’s extremely important to have UX talks with researchers, content designers, and data teams. It’s important to understand what a user is trying to do, the context [of their decisions], and the intention. [Designers] need to help [the data team] understand the types of data and prompts being used to train models. Those things are better when they’re written and thought of by [designers] who understand where the user is coming from. [Design teams working with data teams] are getting much better results than the [teams] that are working in a vacuum.” - Paz Perez (35:19)

Links

Milly Barker’s LinkedIn post · Greg Nudelman’s Value Matrix article · Greg Nudelman’s website · Paz Perez on Medium · Paz Perez on LinkedIn · Simon Landry on LinkedIn

The rise of AI agents in the workplace is transforming how businesses operate, tackling repetitive tasks and freeing up human employees for more creative endeavors. But what does this mean for the future of work, and how can professionals leverage these tools effectively? As AI agents become more sophisticated, capable of reasoning and decision-making, how do you ensure they align with your business goals? What are the implications for data privacy and security, and how do you manage the transition to a more automated workforce while maintaining human oversight? Surojit Chatterjee is the founder and CEO of Ema. Previously, he guided Coinbase through a successful 2021 IPO as its Chief Product Officer and scaled Google Mobile Ads and Google Shopping into multi-billion dollar businesses as the VP and Head of Product. Surojit holds 40 US patents and has an MBA from MIT, an MS in Computer Science from SUNY at Buffalo, and a B.Tech from IIT Kharagpur. In the episode, Richie and Surojit explore the transformative role of AI agents in automating repetitive business tasks, enhancing creativity and innovation, improving customer support, and redefining workplace efficiency. They discuss the potential of AI employees, data privacy concerns, the future of AI-driven business processes, and much more. Links Mentioned in the Show: Ema · Connect with Surojit · Skill Track: Artificial Intelligence (AI) Leadership · Related Episode: How Generative AI is Changing Leadership with Christie Smith, Founder of the Humanity Institute and Kelly Monahan, Managing Director, Research Institute · Attend RADAR Skills Edition · New to DataCamp? Learn on the go using the DataCamp mobile app · Empower your business with world-class data and AI skills with DataCamp for business

Supported by Our Partners • Formation — Level up your career and compensation with Formation.  • WorkOS — The modern identity platform for B2B SaaS • Vanta — Automate compliance and simplify security with Vanta. — In today’s episode of The Pragmatic Engineer, I’m joined by Jonas Tyroller, one of the developers behind Thronefall, a minimalist indie strategy game that blends tower defense and kingdom-building, now available on Steam. Jonas takes us through the journey of creating Thronefall from start to finish, offering insights into the world of indie game development. We explore: • Why indie developers often skip traditional testing and how they find bugs • The developer workflow using Unity, C# and Blender • The two types of prototypes game developers build  • Why Jonas spent months building game prototypes in 1-2 days • How Jonas uses ChatGPT to build games • Jonas’s tips on making games that sell • And more! — Timestamps (00:00) Intro (02:07) Building in Unity (04:05) What the shader tool is used for  (08:44) How a Unity build is structured (11:01) How game developers write and debug code  (16:21) Jonas’s Unity workflow (18:13) Importing assets from Blender (21:06) The size of Thronefall and how it can be so small (24:04) Jonas’s thoughts on code review (26:42) Why practices like code review and source control might not be relevant for all contexts (30:40) How Jonas and Paul ensure the game is fun  (32:25) How Jonas and Paul used beta testing feedback to improve their game (35:14) The mini-games in Thronefall and why they are so difficult (38:14) The struggle to find the right level of difficulty for the game (41:43) Porting to Nintendo Switch (45:11) The prototypes Jonas and Paul made to get to Thronefall (46:59) The challenge of finding something you want to build that will sell (47:20) Jonas’s ideation process and how they figure out what to build  (49:35) How Thronefall evolved from a mini-game prototype (51:50) How long you spend on prototyping  (52:30) A 
lesson in failing fast (53:50) The gameplay prototype vs. the art prototype (55:53) How Jonas and Paul distribute work  (57:35) Next steps after having the play prototype and art prototype (59:36) How a launch on Steam works  (1:01:18) Why pathfinding was the most challenging part of building Thronefall (1:08:40) Gen AI tools for building indie games  (1:09:50) How Jonas uses ChatGPT for editing code and as a translator  (1:13:25) The pros and cons of being an indie developer  (1:15:32) Jonas’s advice for software engineers looking to get into indie game development (1:19:32) What to look for in a game design school (1:22:46) How luck figures into success and Jonas’s tips for building a game that sells (1:26:32) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • Game development basics https://newsletter.pragmaticengineer.com/p/game-development-basics  • Building a simple game using Unity https://newsletter.pragmaticengineer.com/p/building-a-simple-game — See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Season 1 Episode 29: Navigating Trade-Offs and Balancing Priorities The Data Product Management In Action podcast, brought to you by executive producer Scott Hirleman, is a platform for data product management practitioners to share insights and experiences. In this episode of Data Product Management in Action, host Alexa Westlake talks with Anita Chen, diving into the complexities of managing data products. Anita, a product manager at PagerDuty, shares her approach to defining data products, prioritizing work, and balancing project work with interrupt-driven tasks. They discuss the critical roles of governance, security, and user enablement while emphasizing the importance of transparency and communication. The conversation also explores the transformative potential of generative AI in data product interactions and the build-vs-buy decision-making process. Gain insights into how data product management uniquely differs from traditional software product management and learn actionable strategies for success. Meet our Host Alexa Westlake: Alexa is a Data Analytics Leader in the Identity and Access Management space with a proven track record scaling high-growth SaaS companies. As a Staff Data Analyst at Okta, she brings a wealth of expertise in enterprise data, business intelligence, and strategic decision-making from the various industries she's worked in including telecommunications, strategy execution, and cloud computing. With a passion for harnessing the power of data for actionable insights, Alexa plays a crucial role in driving Okta's security, growth, and scale, helping the organization leverage data to execute on their market opportunity. Connect with Alexa on LinkedIn.

Meet our guest Anita Chen: Anita is a Data Product Manager at PagerDuty, a digital operations company helping teams resolve issues faster, eliminate alert fatigue, and build more reliable services! Her background is mainly in the People Analytics space, which has now expanded to data at scale with PagerDuty's Enterprise Data Team. She currently helps build data products that enable teams to deliver the best possible customer experience. Anita is most passionate about how data can impact someone's lived experience and endeavors to democratize data in everything she builds. Connect with Anita on LinkedIn. All views and opinions expressed are those of the individuals and do not necessarily reflect their employers or anyone else.  Join the conversation on LinkedIn.  Apply to be a guest or nominate someone that you know.  Do you love what you're listening to? Please rate and review the podcast, and share it with fellow practitioners you know. Your support helps us reach more listeners and continue providing valuable insights! 

Send us a text Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don’t), where industry insights meet laid-back banter. Whether you’re a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let’s get into the heart of data, unplugged style! This week, we dive into: • The creative future with AI: is generative AI helping or hurting creators? • Environmental concerns of AI: the hidden costs of AI’s growing capabilities. How much energy do these models actually consume, and is it worth it? • AI copyright controversies: Mark Zuckerberg’s LLaMA model faces criticism for using copyrighted materials like content from the notorious LibGen database. • Trump vs. AI regulation: the former president repeals Biden’s AI executive order, creating a Wild West approach to AI development in the U.S. How will this impact innovation and global competition? • Search reimagined with Perplexity AI: a new era of search blending conversational AI and personalized data unification. Could this be the future of information retrieval? • Apple Intelligence on pause: Apple’s AI-generated news alerts face a bumpy road. For more laughs, check out the dedicated subreddit AppleIntelligenceFail. • Rhai scripting for Rust: empowering Rust developers with an intuitive embedded scripting language to make extensibility a breeze. • Poisoned text for scrapers: exploring creative ways to protect web content from unauthorized scraping by AI systems. • The rise of the AI Data Engineer: is this a new role in data science, or are we just rebranding existing skills?

Leadership is facing unprecedented challenges in today's socio-political and economic climate. As the lines between work and personal life blur, professionals are seeking workplaces that prioritize humanity and purpose. How can leaders create environments that support employee well-being and connection? With AI's growing presence, how do we balance technological integration with maintaining a human-centered approach? Dr. Christie Smith is a renowned leadership expert, visionary thinker, and founder of The Humanity Studio, a pioneering research and advisory institute dedicated to improving the way we live by revolutionizing the way we work. With over 35 years of experience advising Fortune 500 companies and holding global leadership roles at Accenture, Apple, and Deloitte, Dr. Smith has shaped the future of leadership, talent strategy, and organizational culture across industries. Dr. Kelly Monahan is Managing Director of the Upwork Research Institute, leading their future of work research program. Her research has been recognized and published in both applied and academic journals, including MIT Sloan Management Review and the Journal of Strategic Management. In 2018, Kelly released her first book, “How Behavioral Economics Influences Management Decision-Making: A New Paradigm” (Academic Press/Elsevier Publishers). In 2019, Kelly gave her first TedX talk on the future of work. Kelly is frequently quoted in the media on talent decision-making and the future of work. She also has written over a dozen publications and is a sought-after speaker on how to apply new management and talent models in knowledge-based organizations. Kelly holds a B.S. from Rochester Institute of Technology, an M.S. from Roberts Wesleyan College, and a Ph.D. in Organizational Leadership from Regent University. 
In the episode, Richie, Christie, and Kelly explore leadership transformations driven by crises, the rise of human-centered workplaces, the integration of AI with human intelligence, the evolving skill landscape, the emergence of gray-collar work, and much more. Links Mentioned in the Show: Essential: How Distributed Teams, Generative AI, and Global Shifts Are Creating a New Human-Powered Leadership · The Humanity Studio · Upwork Research Institute · Connect with Christie and Kelly · Skill Track: AI Business Fundamentals · Related Episode: Leadership in the AI Era with Dana Maor, Senior Partner at McKinsey & Company · Rewatch sessions from RADAR: Forward Edition · New to DataCamp? Learn on the go using the DataCamp mobile app · Empower your business with world-class data and AI skills with DataCamp for business

With GenAI and LLMs comes great potential to delight and damage customer relationships—both during the sale, and in the UI/UX. However, are B2B AI product teams actually producing real outcomes, on the business side and the UX side, such that customers find these products easy to buy, trustworthy and indispensable? 

What is changing with customer problems as a result of LLM and GenAI technologies becoming more readily available to implement into B2B software? Anything?

Is your current product or feature development being driven by the fact that you might now be able to solve it with AI? The “AI-first” team sounds like it’s cutting edge, but is that really determining what a customer will actually buy from you? 

Today I want to talk to you about the interplay of GenAI, customer trust (both user and buyer trust), and the role of UX in products using probabilistic technology.  

These thoughts are based on my own perceptions as a “user” of AI “solutions,” (quotes intentional!), conversations with prospects and clients at my company (Designing for Analytics), as well as the bright minds I mentor over at the MIT Sandbox innovation fund. I also wrote an article about this subject if you’d rather read an abridged version of my thoughts.

Highlights/ Skip to:

AI and LLM-Powered Products Do Not Turn Customer Problems into “Now” and “Expensive” Problems (4:03) Trust and Transparency in the Sale and the Product UX: Handling LLM Hallucinations (Confabulations) and Designing for Model Interpretability (9:44) Selling AI Products to Customers Who Aren’t Users (13:28) How LLM Hallucinations and Model Interpretability Impact User Trust of Your Product (16:10) Probabilistic UIs and LLMs Don’t Negate the Need to Design for Outcomes (22:48) How AI Changes (or Doesn’t) Our Benchmark Use Cases and UX Outcomes (28:41) Closing Thoughts (32:36)

Quotes from Today’s Episode

“Putting AI or GenAI into a product does not change the urgency or the depth of a particular customer problem; it just changes the solution space. Technology shifts in the last ten years have enabled founders to come up with all sorts of novel ways to leverage traditional machine learning, symbolic AI, and LLMs to create new products and disrupt established products; however, it would be foolish to ignore these developments as a product leader. All this technology does is change the possible solutions you can create. It does not change your customer situation, problem, or pain, either in the depth, or severity, or frequency. In fact, it might actually cause some new problems. I feel like most teams spend a lot more time living in the solution space than they do in the problem space. Fall in love with the problem and love that problem regardless of how the solution space may continue to change.” (4:51) “Narrowly targeted, specialized AI products are going to beat solutions trying to solve problems for multiple buyers and customers. If you’re building a narrow, specific product for a narrow, specific audience, one of the things you have on your side is a solution focused on a specific domain used by people who have specific domain experience. You may not need a trillion-parameter LLM to provide significant value to your customer. AI products that have a more specific focus and address a very narrow ICP I believe are more likely to succeed than those trying to serve too many use cases—especially when GenAI is being leveraged to deliver the value. I think this can be true even for platform products as well. 
Narrowing the audience you want to serve also narrows the scope of the product, which in turn should increase the value that you bring to that audience—in part because you probably will have fewer trust, usability, and utility problems resulting from trying to leverage a model for a wide range of use cases.” (17:18) “Probabilistic UIs and LLMs are going to create big problems for product teams, particularly if they lack a set of guiding benchmark use cases. I talk a lot about benchmark use cases as a core design principle for data-rich enterprise products. Why? Because a lot of B2B and enterprise products fall into the game of ‘adding more stuff over time.’ ‘Add it so you can sell it.’ As products and software companies begin to mature, you start having product owners and PMs attached to specific technologies or parts of a product. Figuring out how to improve the customer’s experience over time against the most critical problems and needs they have is a harder game to play than simply adding more stuff— especially if you have no benchmark use cases to hold you accountable. It’s hard to make the product indispensable if it’s trying to do 100 things for 100 people.“ (22:48) “Product is a hard game, and design and UX is by far not the only aspect of product that we need to get right. A lot of designers don’t understand this, and they think if they just nail design and UX, then everything else solves itself. The reason the design and experience part is hard is that it’s tied to behavior change– especially if you are ‘disrupting’ an industry, incumbent tool, application, or product. You are in the behavior-change game, and it’s really hard to get it right. But when you get it right, it can be really amazing and transformative.” (28:01) “If your AI product is trying to do a wide variety of things for a wide variety of personas, it’s going to be harder to determine appropriate benchmarks and UX outcomes to measure and design against. 
Given LLM hallucinations, the increased problem of trust, model drift problems, etc., your AI product has to actually innovate in a way that is both meaningful and observable to the customer. It doesn’t matter what your AI is trying to “fix.” If they can’t see what the benefit is to them personally, it doesn’t really matter if technically you’ve done something in a new and novel way. They’re just not going to care because that question of what’s in it for me is always sitting behind, in their brain, whether it’s stated out loud or not.” (29:32)

Links

Designing for Analytics mailing list

As AI continues to advance, natural language processing (NLP) is at the forefront, transforming how businesses interact with data. From chatbots to document analysis, NLP offers numerous applications. But with the advent of generative AI, professionals face new challenges: when is it appropriate to use traditional NLP techniques versus more advanced models? How do you balance the costs and benefits of these technologies? Explore the strategic decisions and practical applications of NLP in the modern business world. Meri Nova is the founder of Break Into Data, a data careers company. Her work focuses on helping people switch to a career in data and using machine learning to improve community engagement. Previously, she was a data scientist and machine learning engineer at Hyloc. Meri is the instructor of DataCamp's 'Retrieval Augmented Generation with LangChain' course. In the episode, Richie and Meri explore the evolution of natural language processing, the impact of generative AI on business applications, the balance between traditional NLP techniques and modern LLMs, the role of vector stores and knowledge graphs, the exciting potential of AI in automating tasks and decision-making, and much more. Links Mentioned in the Show: Meri’s Breaking Into Data Handbook on GitHub · Break Into Data Discord Group · Connect with Meri · Skill Track: Artificial Intelligence (AI) Leadership · Related Episode: Industry Roundup #2: AI Agents for Data Work, The Return of the Full-Stack Data Scientist and Old Languages Make a Comeback · Rewatch sessions from RADAR: Forward Edition · New to DataCamp? Learn on the go using the DataCamp mobile app · Empower your business with world-class data and AI skills with DataCamp for business
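The episode weighs traditional NLP techniques against LLM pipelines, and the retrieval step of a retrieval-augmented generation (RAG) system illustrates that trade-off well: before reaching for a vector database, a plain TF-IDF ranker can often find the right passage to hand to a generator. A minimal sketch using scikit-learn (illustrative only; the course mentioned uses LangChain, and the passages here are invented):

```python
# Toy "retrieval" step of RAG: rank candidate passages against a question
# by TF-IDF cosine similarity, then pick the best match.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Refunds are processed within five business days.",
    "Our office is open Monday through Friday.",
    "Premium plans include priority support.",
]
question = "How are refunds processed?"

# Fit the vocabulary on passages plus the question so all terms are covered
vec = TfidfVectorizer().fit(passages + [question])
sims = cosine_similarity(vec.transform([question]), vec.transform(passages))[0]
best = sims.argmax()
print(passages[best])
```

In a full RAG pipeline the top-ranked passage would be inserted into the LLM prompt; embedding-based retrieval replaces TF-IDF when paraphrases rather than shared keywords must match.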

2025 promises to be another transformative year for data and AI. From groundbreaking advancements in reasoning models to the rise of new challengers in generative AI, the field shows no signs of slowing down. Last week Jonathan and Martijn scored their 2024 predictions, and scored highly, but what's in store for 2025? Building on the insights from their 2024 predictions, we'll assess the future of generative AI, the evolving role of AI in education, the growing importance of synthetic data, and much more. In the episode, Richie, Jo, and Martijn discuss whether OpenAI and Google will maintain their dominance or face disruption from new players like Meta’s Llama and xAI’s Grok, the implications of recent breakthroughs in AI reasoning, the rise of short-form video generation AI in social media and advertising, the challenges Europe faces in keeping pace with the US and China in AI innovation, and much more. Links Mentioned in the Show: Data & AI Trends & Predictions 2025 · Skill Track: AI Business Fundamentals · Related Episode: Reviewing Our Data Trends & Predictions of 2024 with DataCamp's CEO & COO, Jonathan Cornelissen & Martijn Theuwissen · Rewatch sessions from RADAR: Forward Edition · New to DataCamp? Learn on the go using the DataCamp mobile app · Empower your business with world-class data and AI skills with DataCamp for business

2024 was another huge year for data and AI. Generative AI continued to shape the way we work and interact with technology, with companies of all sizes racing to integrate AI into their products. We saw strides in tools like AI-enhanced data science notebooks, rapid adoption of generative image AI, and a steady march toward video generation AI. At the same time, foundational skills like AI literacy and data governance gained traction as critical areas for individuals and organizations to master. This time last year, DataCamp co-founders Jonathan and Martijn made a series of data and AI predictions for 2024; today, they join Richie to reflect on those predictions and share their vision for data and AI in 2025. In the episode, Richie, Jonathan, and Martijn review the mainstream adoption of generative AI and its journey toward daily use, the rise of AI literacy as a critical skill, the growing overlap between data science and software engineering with the emergence of AI engineers, evolving trends in programming languages, how generative AI has moved from prototype to production, the near-mainstreaming of video generation AI, why AI hype continues to thrive, and much more. Links Mentioned in the Show: Data & AI Trends & Predictions 2025 · Skill Track: AI Business Fundamentals · Related Episode: Data Trends & Predictions 2024 with DataCamp's CEO & COO, Jonathan Cornelissen & Martijn Theuwissen · Rewatch sessions from RADAR: Forward Edition · New to DataCamp? Learn on the go using the DataCamp mobile app · Empower your business with world-class data and AI skills with DataCamp for business

podcast_episode
by Blake Stockman (Google; Meta; Uber; Y Combinator; founder of a tech recruitment agency), Gergely Orosz

Supported by Our Partners • DX — DX is an engineering intelligence platform designed by leading researchers. • Vanta — Automate compliance and simplify security with Vanta. — In today's episode of The Pragmatic Engineer, I catch up with one of the best tech recruiters I've had the opportunity to work with: Blake Stockman, a former colleague of mine from Uber. Blake built a strong reputation in the recruiting world, working at tech giants like Google, Meta, and Uber. He also spent time with Y Combinator and founded his own agency, where he helped both large tech companies and early-stage startups find and secure top talent. A few months ago, Blake made a career pivot: he is now studying to become a lawyer. I pounced on this perfect opportunity to have him share, unfiltered, everything he has seen behind the scenes in tech recruitment. In our conversation, Blake shares recruitment insights from his time at Facebook, Google, and Uber and from running his own tech recruitment agency. We discuss topics such as: • A step-by-step breakdown of hiring processes at Big Tech and startups • How to get the most out of your tech recruiter, as a candidate • Best practices for hiring managers working with their recruiters • Why you shouldn't disclose salary expectations upfront, plus tips for negotiating • Where to find the best startup opportunities and how to evaluate them, including understanding startup compensation • And much more!
— Timestamps (00:00) Intro (01:40) Tips for working with recruiters (06:11) Why hiring managers should have more conversations with recruiters (09:48) A behind-the-scenes look at the hiring process at big tech companies  (13:38) How hiring worked at Uber when Gergely and Blake were there (16:46) An explanation of calibration in the recruitment process (18:11) A case for partnering with recruitment  (20:49) The different approaches to recruitment Blake experienced at different organizations (25:30) How hiring decisions are made  (31:34) The differences between hiring at startups vs. large, established companies (33:21) Reasons desperate decisions are made and problems that may arise (36:30) The problem of hiring solely to fill a seat (38:55) The process of the closing call (40:24) The importance of understanding equity  (43:27) Tips for negotiating  (48:38) How to find the best startup opportunities, and how to evaluate if it’s a good fit (53:58) What to include on your LinkedIn profile (55:48) A story from Uber and why you should remember to thank your recruiter (1:00:09) Rapid fire round — The Pragmatic Engineer deepdives relevant for this episode: • How GenAI is reshaping tech hiring https://newsletter.pragmaticengineer.com/p/how-genai-changes-tech-hiring • Hiring software engineers https://newsletter.pragmaticengineer.com/p/hiring-software-engineers  • Hiring an Engineering Manager https://newsletter.pragmaticengineer.com/p/hiring-engineering-managers • Hiring Junior Software Engineers https://newsletter.pragmaticengineer.com/p/hiring-junior-engineers — See the transcript and other references from the episode at ⁠⁠https://newsletter.pragmaticengineer.com/podcast⁠⁠ — Production and marketing by ⁠⁠⁠⁠⁠⁠⁠⁠https://penname.co/⁠⁠⁠⁠⁠⁠⁠⁠. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

AWS re:Invent 2024 - Customer Keynote Autodesk

Design software pioneer Autodesk is transforming computer-aided design (CAD) by harnessing generative AI and Amazon Web Services (AWS). The company is developing advanced AI foundation models, like "Project Bernini," which can generate precise 2D and 3D geometric designs based on physical principles.

By utilizing AWS technologies such as Amazon DynamoDB, Elastic MapReduce (EMR), Amazon SageMaker, and Elastic Fabric Adapter, Autodesk has significantly enhanced its AI development process. These innovations have halved foundation model development time and increased AI productivity by 30%.

Learn more about AWS events: https://go.aws/events

Subscribe: More AWS videos: http://bit.ly/2O3zS75 More AWS events videos: http://bit.ly/316g9t4

ABOUT AWS Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#reInvent2024 #AWSreInvent2024 #AWSEvents

AI is not just about writing code; it's about improving the entire software development process. From generating documentation to automating code reviews, AI tools are becoming indispensable. But how do you ensure the quality of AI-generated code? What strategies can you employ to maintain high standards while leveraging AI's capabilities? These are the questions developers must consider as they incorporate AI into their workflows. Eran Yahav is an associate professor at the Computer Science Department at the Technion – Israel Institute of Technology and co-founder and CTO of Tabnine (formerly Codota). Prior to that, he was a research staff member at the IBM T.J. Watson Research Center in New York (2004-2010). He received his Ph.D. from Tel Aviv University (2005) and his B.Sc. from the Technion in 1996. His research interests include program analysis, program synthesis, and program verification. Eran is a recipient of the prestigious Alon Fellowship for Outstanding Young Researchers, the Andre Deloro Career Advancement Chair in Engineering, the 2020 Robin Milner Young Researcher Award (POPL talk), the ERC Consolidator Grant, as well as multiple best paper awards at various conferences. In the episode, Richie and Eran explore AI's role in software development, the balance between AI assistance and manual coding, the impact of generative AI on code review and documentation, the evolution of developer tools, the future of AI-driven workflows, and much more. Links Mentioned in the Show: Tabnine | Connect with Eran | Course: Working with the OpenAI API | Related Episode: Getting Generative AI Into Production with Lin Qiao, CEO and Co-Founder of Fireworks AI | Rewatch sessions from RADAR: Forward Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

AI features and products are the hottest area of software development. Creating high quality AI software is both essential and challenging for many businesses. In this episode, we look at retrieval augmented generation, an important technique for improving text generation quality in AI applications. Beyond technical measures, we look at the broader quality problem for AI applications. How do you ensure your AI applications are effective and secure? What steps should you take to integrate AI into your existing data governance frameworks? And how do you measure the success of these AI-driven solutions? Theresa Parker is the Director of Product Management at Rocket Software. She has 25 years of experience as a technology executive with a focus on software development processes, consultancy, and business development. Her recent work in content management focuses on the use of AI and RAG to improve content discoverability. Sudhi Balan is the Chief Technology Officer for AI & Cloud. He leads the AI and data teams for data modernization, driving AI adoption of Rocket's structured and unstructured data products. He also shapes AI strategy for Rocket’s infrastructure and app portfolio. He has earned patents for safe and scalable applications of transformational technology. Previously, he led digital transformation and hybrid cloud strategy for Rocket’s unstructured data business and was Senior Director of Product Development at ASG. In the episode, Richie, Theresa, and Sudhi explore retrieval-augmented generation, its applications in customer support and loan processing, the importance of data governance and privacy, the role of testing and guardrails in AI, cost management strategies, and the potential of AI to transform customer experiences, and much more. 
Links Mentioned in the Show: Rocket Software | Connect with Theresa and Sudhi | Course: Retrieval Augmented Generation (RAG) with LangChain | Related Episode: Getting Generative AI Into Production with Lin Qiao, CEO and Co-Founder of Fireworks AI | Rewatch sessions from RADAR: Forward Edition. New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
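The core pattern discussed in this episode, retrieval-augmented generation, can be sketched in a few lines: retrieve the documents most relevant to a query, then prepend them to the prompt sent to a language model. The toy corpus, word-overlap scorer, and prompt template below are illustrative assumptions for this minimal sketch, not the implementation discussed in the episode; a production system would use embedding-based search and would pass the prompt to an actual LLM.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, context_docs):
    """Assemble the augmented prompt: retrieved context, then the question."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy corpus standing in for an indexed document store.
corpus = [
    "Loan applications require proof of income and a credit check.",
    "Support tickets are triaged by product area and severity.",
    "Refunds are processed within five business days.",
]

query = "What documents does a loan application need?"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)  # this string would be sent to the LLM
```

Grounding the model in retrieved documents, rather than relying on its parametric memory alone, is what makes RAG useful for the customer-support and loan-processing scenarios mentioned above, and it gives governance teams a concrete artifact (the retrieved context) to audit.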