talk-data.com

Topic: Cloud Computing

Tags: infrastructure, saas, iaas · 4055 tagged

Activity Trend: 471 peak/qtr (2020-Q1 to 2026-Q1)

Activities: 4055 activities · Newest first

It feels like there’s a new advancement happening in AI every day – but how do these discoveries go from their nascent state in the research lab to full-scale enterprise deployment?  And how do we ensure that these new tools and capabilities remain secure? Our panelists will explore this cycle of innovation – the iterative process of curiosity-driven research, practical application, and real-world impact of AI and machine learning. We'll examine practical case studies and underscore the transformative power AI has in the realm of security.  

Topics will include: 

  • How Google connects AI research with real-world solutions for some of the largest enterprises in the world
  • Google Cloud’s AI roadmap
  • Cybersecurity considerations for emerging AI tools

Artificial intelligence is no longer on the horizon – it’s the defining force shaping business today. During this fireside chat, Thomas Kurian, CEO of Google Cloud, will sit down with Google Cloud's VP of Marketing, Alison Wagonfeld, for a candid conversation on navigating the AI revolution, unlocking new opportunities for innovation, and building a future-ready organization. They’ll explore Google Cloud’s strategic vision and delve into both the profound impact of AI across industries and actionable strategies for businesses to leverage this technology.

Topics will include: 

  • AI as the competitive differentiator
  • The role of Google Cloud in the AI era
  • Navigating leadership in the age of AI

In this podcast episode, we talked with Eddy Zulkifly about “From Supply Chain Management to Digital Warehousing and FinOps.”

About the Speaker: Eddy Zulkifly is a Staff Data Engineer at Kinaxis, building robust data platforms across Google Cloud, Azure, and AWS. With a decade of experience in data, he actively shares his expertise as a Mentor on ADPList and Teaching Assistant at Uplimit. Previously, he was a Senior Data Engineer at Home Depot, specializing in e-commerce and supply chain analytics. Currently pursuing a Master’s in Analytics at the Georgia Institute of Technology, Eddy is also passionate about open-source data projects and enjoys watching/exploring the analytics behind the Fantasy Premier League.

In this episode, we dive into the world of data engineering and FinOps with Eddy Zulkifly, Staff Data Engineer at Kinaxis. Eddy shares his unconventional career journey—from optimizing physical warehouses with Excel to building digital data platforms in the cloud.

🕒 TIMECODES
0:00 Eddy’s career journey: From supply chain to data engineering
8:18 Tools & learning: Excel, Docker, and transitioning to data engineering
21:57 Physical vs. digital warehousing: Analogies and key differences
31:40 Introduction to FinOps: Cloud cost optimization and vendor negotiations
40:18 Resources for FinOps: Certifications and the FinOps Foundation
45:12 Standardizing cloud cost reporting across AWS/GCP/Azure
50:04 Eddy’s master’s degree and closing thoughts

🔗 CONNECT WITH EDDY
Twitter - https://x.com/eddarief
LinkedIn - https://www.linkedin.com/in/eddyzulkifly/
GitHub - https://github.com/eyzyly/eyzyly
ADPList - https://adplist.org/mentors/eddy-zulkifly

🔗 CONNECT WITH DataTalksClub
Join the community - https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
Check other upcoming events - https://lu.ma/dtc-events
LinkedIn - https://www.linkedin.com/company/datatalks-club/
Twitter - https://twitter.com/DataTalksClub
Website - https://datatalks.club/

Automating Data Quality via Shift Left for Real-Time Web Data Feeds at Industrial Scale | Sarah McKenna | Shift Left Data Conference 2025

Real-time web data is one of the hardest data streams to automate with trust: websites don't want to be scraped, change constantly without notice, and employ sophisticated bot-blocking mechanisms to stop automated data collection. At Sequentum we cut our teeth on web data and have built a general-purpose cloud platform for any type of data ingestion and enrichment, one our clients can transparently audit and ultimately trust to get their mission-critical data delivered on time and with quality to fuel their business decision making.

The role of data and AI engineers is more critical than ever. With organizations collecting massive amounts of data, the challenge lies in building efficient data infrastructures that can support AI systems and deliver actionable insights. But what does it take to become a successful data or AI engineer? How do you navigate the complex landscape of data tools and technologies? And what are the key skills and strategies needed to excel in this field?

Deepak Goyal is a globally recognized authority in cloud data engineering and AI. As the Founder & CEO of Azurelib Academy, he has built a trusted platform for advanced cloud education, empowering over 100,000 professionals and influencing data strategies across Fortune 500 companies. With over 17 years of leadership experience, Deepak has been at the forefront of designing and implementing scalable, real-world data solutions using cutting-edge technologies like Microsoft Azure, Databricks, and generative AI.

In the episode, Richie and Deepak explore the fundamentals of data engineering, the critical skills needed, the intersection with AI roles, career paths, and essential soft skills. They also discuss the hiring process, interview tips, the importance of continuous learning in a rapidly evolving field, and much more.

Links Mentioned in the Show:
  • AzureLib
  • AzureLib Academy
  • Connect with Deepak
  • Get Certified! Azure Fundamentals
  • Related Episode: Effective Data Engineering with Liya Aizenberg, Director of Data Engineering at Away
  • Sign up to attend RADAR: Skills Edition
  • New to DataCamp? Learn on the go using the DataCamp mobile app
  • Empower your business with world-class data and AI skills with DataCamp for Business

Kir Titievsky, Product Manager at Google Cloud with extensive experience in streaming and storage infrastructure, joined Yuliia and Dumky to talk about streaming. Drawing from his work with Apache Kafka, Cloud Pub/Sub, Dataflow, and Cloud Storage since 2015, Kir explains the fundamental differences between streaming and micro-batch processing. He challenges common misconceptions about streaming costs, explaining how streaming can be significantly less expensive than batch processing for many use cases. Kir shares insights on the "service bus architecture" revival, discussing how modern distributed messaging systems have solved historic bottlenecks while creating new opportunities for business and performance needs.

Kir's Medium - https://medium.com/@kir-gcp
Kir's LinkedIn - https://www.linkedin.com/in/kir-titievsky-%F0%9F%87%BA%F0%9F%87%A6-7775052/
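
The streaming-versus-micro-batch distinction comes through clearly in code: with a messaging service, each event is published the moment it happens rather than being accumulated into periodic batch files. Below is a minimal sketch using the Cloud Pub/Sub Python client; the project and topic names are assumptions, not from the episode.

```python
# Minimal event-at-a-time publishing sketch with Cloud Pub/Sub.
# Assumes google-cloud-pubsub is installed and default credentials are set up;
# the project and topic names are hypothetical.
import json
import time

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "clickstream")  # hypothetical names


def publish_event(event: dict) -> None:
    """Publish a single event as it occurs (streaming),
    instead of buffering it into an hourly batch file."""
    data = json.dumps(event).encode("utf-8")
    future = publisher.publish(topic_path, data)
    future.result()  # block until the message is acknowledged


if __name__ == "__main__":
    publish_event({"user": "u123", "action": "page_view", "ts": time.time()})
```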

As cloud adoption accelerates, not all analytics workloads are heading in the same direction. This blog explores three strategic options for data and IT leaders. Published at: https://www.eckerson.com/articles/are-you-cloud-bound-the-case-for-migration-repatriation-or-keeping-your-analytics-projects-on-premises

Build Bigger With Small AI: Running Small Models Locally

It's finally possible to bring the awesome power of Large Language Models (LLMs) to your laptop. This talk will explore how to run and leverage small, openly available LLMs to power common tasks involving data, including selecting the right models, practical use cases for running small models, and best practices for deploying small models effectively alongside databases.

Bio: Jeffrey Morgan is the founder of Ollama, an open-source tool for getting up and running with large language models. Prior to founding Ollama, Jeffrey founded Kitematic, which was acquired by Docker and evolved into Docker Desktop. He has previously worked at companies including Docker, Twitter, and Google.

➡️ Follow Us
LinkedIn: https://www.linkedin.com/company/small-data-sf/
X/Twitter: https://twitter.com/smalldatasf
Website: https://www.smalldatasf.com/

Discover how to run large language models (LLMs) locally using Ollama, the easiest way to get started with small AI models on your Mac, Windows, or Linux machine. Unlike massive cloud-based systems, small open source models are only a few gigabytes, allowing them to run incredibly fast on consumer hardware without network latency. This video explains why these local LLMs are not just scaled-down versions of larger models but powerful tools for developers, offering significant advantages in speed, data privacy, and cost-effectiveness by eliminating hidden cloud provider fees and risks.
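
For readers who want to try this, a minimal sketch of querying a locally served model follows. It assumes Ollama is installed and listening on its default local port, and that a small model (here the tag gemma2:2b, as an example) has already been pulled; none of these details come from the video itself.

```python
# Minimal sketch: query a small model served locally by Ollama over HTTP.
# Assumes the Ollama server is running on its default local port and that
# a small model (e.g. the "gemma2:2b" tag) has already been pulled.
import requests


def ask_local_model(prompt: str, model: str = "gemma2:2b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]  # the full generated text


if __name__ == "__main__":
    print(ask_local_model("In one sentence, why run a language model locally?"))
```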

Learn the most common use case for small models: combining them with your existing factual data to prevent hallucinations. We dive into retrieval augmented generation (RAG), a powerful technique where you augment a model's prompt with information from a local data source. See a practical demo of how to build a vector store from simple text files and connect it to a model like Gemma 2B, enabling you to query your own data using natural language for fast, accurate, and context-aware responses.
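
A stripped-down version of that RAG flow can be sketched without a dedicated framework: index local text files, retrieve the chunk most similar to the question, and prepend it to the prompt sent to the local model. In the sketch below, TF-IDF retrieval stands in for a real embedding-based vector store, and the file paths, model tag, and helper names are assumptions rather than what the demo uses.

```python
# Sketch of retrieval augmented generation (RAG) over local text files.
# TF-IDF retrieval stands in for a real embedding-based vector store;
# file paths and the Ollama model tag are hypothetical.
import glob

import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def ask_local_model(prompt: str, model: str = "gemma2:2b") -> str:
    # Same assumed local Ollama endpoint as in the previous sketch.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]


def load_chunks(pattern: str = "docs/*.txt") -> list[str]:
    # One chunk per file keeps the sketch simple; real setups split further.
    chunks = []
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as f:
            chunks.append(f.read())
    return chunks


def answer_with_context(question: str, chunks: list[str]) -> str:
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(chunks)
    query_vec = vectorizer.transform([question])
    best = cosine_similarity(query_vec, doc_matrix).argmax()  # most relevant chunk
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{chunks[best]}\n\nQuestion: {question}"
    )
    return ask_local_model(prompt)


if __name__ == "__main__":
    print(answer_with_context("What is our refund policy?", load_chunks()))
```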

Explore the next frontier of local AI with small agents and tool calling, a new feature that empowers models to interact with external tools. This guide demonstrates how an LLM can autonomously decide to query a DuckDB database, write the correct SQL, and use the retrieved data to answer your questions. This advanced tutorial shows you how to connect small models directly to your data engineering workflows, moving beyond simple chat to create intelligent, data-driven applications.
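
The agent loop can also be hand-rolled to see the moving parts: ask the model to write SQL for a question given the table schema, execute that SQL with DuckDB, then ask the model to answer from the result. The sketch below is a simplified stand-in rather than Ollama's built-in tool-calling feature, and the database file, table, and helper names are hypothetical.

```python
# Hand-rolled sketch of the "small agent" loop: the model writes SQL,
# DuckDB executes it, and the model answers from the query result.
# Database file, table, and helper names are hypothetical stand-ins.
import duckdb
import requests


def ask_local_model(prompt: str, model: str = "gemma2:2b") -> str:
    # Same assumed local Ollama endpoint as in the earlier sketches.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]


def answer_from_database(question: str) -> str:
    con = duckdb.connect("warehouse.duckdb")            # hypothetical local DB
    schema = con.execute("DESCRIBE orders").fetchall()  # hypothetical table

    # Step 1: the model decides what SQL to run (a "tool call" as plain text).
    sql = ask_local_model(
        "Write one DuckDB SQL query (SQL only, no explanation) for this question.\n"
        f"Table 'orders' columns: {schema}\nQuestion: {question}"
    ).strip().strip("`")

    # Step 2: execute the model's SQL and feed the rows back for the final answer.
    rows = con.execute(sql).fetchall()
    return ask_local_model(
        f"Question: {question}\nSQL used: {sql}\nResult rows: {rows}\n"
        "Answer the question concisely using only the result."
    )


if __name__ == "__main__":
    print(answer_from_database("How many orders were placed last month?"))
```

The video describes tool calling as a built-in capability; the loop above simply makes each step explicit for illustration.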

Get started with practical applications for small models today, from building internal help desks to streamlining engineering tasks like code review. This video highlights how small and large models can work together effectively and shows that open source models are rapidly catching up to their cloud-scale counterparts. It's never been a better time for developers and data analysts to harness the power of local AI.

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don’t), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data—unplugged style!

In this episode:
  • OpenAI asks White House for AI regulation relief: OpenAI seeks federal-level AI policy exceptions in exchange for transparency. But is this a sign they’re losing momentum?
  • Hot take: GPT-4.5 is a ‘nothing burger’: Is GPT-4.5 actually an upgrade, or just a well-marketed rerun?
  • Claude 3.7 & Blowing €100 in Two Days: One of the hosts tests Claude extensively—and racks up a pricey bill. Was it worth it?
  • OpenAI’s Deep Research: How does OpenAI’s new research tool compare to Perplexity?
  • AI cracks superbug problem in two days: AI speeds up decades of scientific research—should we be impressed or concerned?
  • European tech coalition demands ‘radical action’ on digital sovereignty: Big names like Airbus and Proton push for homegrown European tech.
  • Migrating from AWS to a European cloud: A real-world case study on cutting costs by 62%—is it worth the trade-offs?
  • Docs by the French government: A Notion alternative for open-source government collaboration.
  • Why people hate note-taking apps: A deep dive into the frustrations with Notion, Obsidian, and alternatives.
  • Model Context Protocol (MCP): How MCP is changing AI tool integrations—and why OpenAI isn’t on board (yet).
  • OpenRouter.ai: The one-stop API for switching between AI models. Does it live up to the hype?
  • NotDiamond.ai: A multi-LLM approach that picks the best model for your queries to balance cost and performance.
  • Are you polite to AI?: Study finds most people say "please" to ChatGPT—good manners or fear of the AI uprising?
  • AI refusing to do your work?: A hilarious case of an AI refusing to generate code because it "wants you to learn."
  • And finally, a big announcement—DataTopics Unplugged is evolving! Stay tuned for an updated format and a fresh take on tech discussions.

What if, rather than starting from legacy media standards to build cloud media workflows, you start with web technology and build back to cloud-native workflows? What if we give every frame a URL and build out from there? Richard describes a future cloud-native media mesh platform with open APIs that accelerates adoption of asynchronous, scalable, and secure media workflows on the web, including ingest, growing files, fast turnaround, and multiplatform production.

Serhii Sokolenko, founder at Tower Dev and former product manager at tech giants like Google Cloud, Snowflake, and Databricks, joined Yuliia to discuss his journey building a next-generation compute platform. Tower Dev aims to simplify data processing for data engineers who work with Python. Serhii explains how Tower addresses three key market trends: the integration of data engineering with AI through Python, the movement away from complex distributed processing frameworks, and users' desire for flexibility across different data platforms. He explains how Tower makes Python data applications more accessible by eliminating the need to learn complex frameworks while automatically scaling infrastructure. Serhii also shares his perspective on the future of data engineering, noting the ways AI will transform the profession.

Tower Dev - https://tower.dev/
Serhii's LinkedIn - https://www.linkedin.com/in/ssokolenko/

CockroachDB: The Definitive Guide, 2nd Edition

CockroachDB is the distributed SQL database that handles the demands of today's data-driven applications. The second edition of this popular hands-on guide shows software developers, architects, and DevOps/SRE teams how to use CockroachDB for applications that scale elastically and provide seamless delivery for end users while remaining indestructible. Data professionals will learn how to migrate existing applications to CockroachDB's performant, cloud-native data architecture. You'll also quickly discover the benefits of strong data correctness and consistency guarantees, plus optimizations for delivering ultra-low latencies to globally distributed end users.

  • Uncover the power of distributed SQL
  • Learn how to start, manage, and optimize projects in CockroachDB
  • Explore best practices for data modeling, schema design, and distributed infrastructure
  • Discover strategies for migrating data into CockroachDB
  • See how to read, write, and run ACID transactions across distributed systems
  • Maximize resiliency in multiregion clusters
  • Secure, monitor, and fine-tune your CockroachDB deployment for peak performance
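
Because CockroachDB is wire-compatible with PostgreSQL, a standard Postgres driver is enough to experiment with the ACID-transaction workflow the book covers. Here is a minimal sketch against a local insecure single-node cluster; the connection string, table, and values are illustrative and not taken from the book.

```python
# Minimal sketch of a transactional write against CockroachDB using a
# standard PostgreSQL driver (CockroachDB speaks the Postgres wire protocol).
# Connection string, table, and values are illustrative only.
import psycopg2

conn = psycopg2.connect("postgresql://root@localhost:26257/defaultdb?sslmode=disable")

with conn:  # commits on success, rolls back on error
    with conn.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)")
        cur.execute("UPSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")
        # Move funds atomically: both updates commit together or not at all.
        cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
        cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")

conn.close()
```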

In this podcast episode, we talked with Nemanja Radojkovic about MLOps in Corporations and Startups.

About the Speaker: Nemanja Radojkovic is a Senior Machine Learning Engineer at Euroclear.

In this event, we’re diving into the world of MLOps, comparing life in startups versus big corporations. Joining us again is Nemanja, a seasoned machine learning engineer with experience spanning Fortune 500 companies and agile startups. We explore the challenges of scaling MLOps on a shoestring budget, the trade-offs between corporate stability and startup agility, and practical advice for engineers deciding between these two career paths, whether you’re navigating legacy frameworks or experimenting with cutting-edge tools.

1:00 MLOps in corporations versus startups
6:03 The agility and pace of startups
7:54 MLOps on a shoestring budget
12:54 Cloud solutions for startups
15:06 Challenges of cloud complexity versus on-premise
19:19 Selecting tools and avoiding vendor lock-in
22:22 Choosing between a startup and a corporation
27:30 Flexibility and risks in startups
29:37 Bureaucracy and processes in corporations
33:17 The role of frameworks in corporations
34:32 Advantages of large teams in corporations
40:01 Challenges of technical debt in startups
43:12 Career advice for junior data scientists
44:10 Tools and frameworks for MLOps projects
49:00 Balancing new and old technologies in skill development
55:43 Data engineering challenges and reliability in LLMs
57:09 On-premise vs. cloud solutions in data-sensitive industries
59:29 Alternatives like Dask for distributed systems

🔗 CONNECT WITH NEMANJA
LinkedIn - / radojkovic
GitHub - https://github.com/baskervilski

🔗 CONNECT WITH DataTalksClub
Join the community - https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/...
Check other upcoming events - https://lu.ma/dtc-events
LinkedIn - / datatalks-club
Twitter - / datatalksclub
Website - https://datatalks.club/

Supported by Our Partners
  • Sentry — Error and performance monitoring for developers.
  • The Software Engineer’s Guidebook: Written by me (Gergely) – now out in audio form as well.

In today’s episode of The Pragmatic Engineer, I am joined by former Uber colleague, Gautam Korlam. Gautam is the Co-Founder of Gitar, an agentic AI startup that automates code maintenance. Gautam was mobile engineer no. 9 at Uber and founding engineer for the mobile platform team – and so he learned a few things about scaling up engineering teams.

We talk about:
  • How Gautam accidentally deleted Uber’s Java monorepo – really!
  • Uber's unique engineering stack and why custom solutions like SubmitQueue were built in-house
  • Monorepo: the benefits and downsides of this approach
  • From Engineer II to Principal Engineer at Uber: Gautam’s career trajectory
  • Practical strategies for building trust and gaining social capital
  • How the platform team at Uber operated with a product-focused mindset
  • Vibe coding: why it helps with quick prototyping
  • How AI tools are changing developer experience and productivity
  • Important skills for devs to pick up to remain valuable as AI tools spread
  • And more!

Timestamps:
(00:00) Intro
(02:11) How Gautam accidentally deleted Uber’s Java Monorepo
(05:40) The impact of Gautam’s mistake
(06:35) Uber’s unique engineering stack
(10:15) Uber’s SubmitQueue
(12:44) Why Uber moved to a monorepo
(16:30) The downsides of a monorepo
(18:35) Measurement products built in-house
(20:20) Measuring developer productivity and happiness
(22:52) How Devpods improved developer productivity
(27:37) The challenges with cloud development environments
(29:10) Gautam’s journey from Eng II to Principal Engineer
(32:00) Building trust and gaining social capital
(36:17) An explanation of Principal Engineer at Uber—and the archetypes at Uber
(45:07) The platform and program split at Uber
(48:15) How Gautam and his team supported their internal users
(52:50) Gautam’s thoughts on developer productivity
(59:10) How AI enhances productivity, its limitations, and the rise of agentic AI
(1:04:00) An explanation of Vibe coding
(1:07:34) An overview of Gitar and all it can help developers with
(1:10:44) Top skills to cultivate to add value and stay relevant
(1:17:00) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
  • The Platform and Program split at Uber
  • How Uber is measuring engineering productivity
  • Inside Uber’s move to the Cloud
  • How Uber built its observability platform
  • Software Architect Archetypes

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Get ready to dive into the world of DevOps & Cloud tech! This session will help you navigate the complex world of Cloud and DevOps with confidence. This session is ideal for new grads, career changers, and anyone feeling overwhelmed by the buzz around DevOps. We'll break down its core concepts, demystify the jargon, and explore how DevOps is essential for success in the ever-changing technology landscape, particularly in the emerging era of generative AI. A basic understanding of software development concepts is helpful, but enthusiasm to learn is most important.

Vishakha is a Senior Cloud Architect at Google Cloud Platform with over 8 years of DevOps and Cloud experience. Prior to Google, she was a DevOps engineer at AWS and a Subject Matter Expert (SME) for the IaC offering CloudFormation in the NorthAm region. She has experience in diverse domains including Financial Services, Retail, and Online Media. She primarily focuses on Infrastructure Architecture, Design & Automation (IaC), Public Cloud (AWS, GCP), Kubernetes/CNCF tools, Infrastructure Security & Compliance, CI/CD & GitOps, and MLOps.

"What if you have a beautiful SLO Dashboard and it's all red and no one cares?" The mission of Site Reliability Engineering (SRE) is to ensure the reliability, scalability, and performance of critical systems - a goal best achieved through strong collaboration with teams across the organization. We are exploring how SRE is embedded in an organization, how it interfaces with application owners, senior management, business stakeholders and external software/hardware vendors. In all these cases the success of SRE's mission hinges on the effectiveness of the relationships.

We will use plenty of examples of what worked, what failed in our past work and why. Additionally, we will address funding challenges that can unexpectedly impact even well-established SRE teams.

Mike has built his career around driving performance and efficiency, specializing in optimizing the security, availability and speed of cloud applications, data and infrastructure. He developed the first currency program trading system for the Toronto Stock Exchange at UBS and later refined his expertise in optimizing trading systems and migrating core data to the cloud at Morgan Stanley and Transamerica. He is a founding member of the NYZH consultancy, focusing on AI and SRE. Based in Denver, Colorado, Mike is a pilot who enjoys desert racing and cycling, sharing adventures with his wife and three children.

Hands-On APIs for AI and Data Science

Are you ready to grow your skills in AI and data science? A great place to start is learning to build and use APIs in real-world data and AI projects. API skills have become essential for AI and data science success, because they are used in a variety of ways in these fields. With this practical book, data scientists and software developers will gain hands-on experience developing and using APIs with the Python programming language and popular frameworks like FastAPI and Streamlit. As you complete the chapters in the book, you'll be creating portfolio projects that teach you how to:
  • Design APIs that data scientists and AIs love
  • Develop APIs using Python and FastAPI
  • Deploy APIs using multiple cloud providers
  • Create data science projects such as visualizations and models using APIs as a data source
  • Access APIs using generative AI and LLMs
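
To give a flavor of the kind of project the book describes, here is a minimal FastAPI endpoint that serves a prediction-style JSON response. The route, payload fields, and toy "model" are illustrative assumptions, not examples taken from the book.

```python
# Minimal FastAPI sketch: a JSON API a data scientist might put in front of a model.
# Run with: uvicorn main:app --reload   (route and payload are illustrative)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Demo prediction API")


class Features(BaseModel):
    sqft: float
    bedrooms: int


@app.post("/predict")
def predict(features: Features) -> dict:
    # Stand-in for a real model: a simple linear rule on the inputs.
    price = 50_000 + 200 * features.sqft + 10_000 * features.bedrooms
    return {"predicted_price": price}
```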

The rapid expansion of data centers is reshaping the industry, requiring new approaches to design, safety, and leadership. 

We’re excited to have Doug Mouton, former Senior Eng Lead, Datacenter Design Engineering and Construction at Meta, as a guest on this latest episode of the “Data Center Revolution” podcast. Doug joins us with key insights into leadership, adaptability, and the evolution of hyperscale data-center construction. He also shares his journey from military service to leading large-scale infrastructure projects in the data center industry, highlighting key transferable skills along the way. 

Key Takeaways:

(07:54) Military mindset builds strong leaders.
(14:25) Veterans thrive in high-pressure environments.
(25:32) Katrina exposed disaster preparedness gaps.
(35:16) Microsoft shifted to cost-effective data center designs.
(43:56) Data centers face growing energy challenges.
(54:26) Safety-first culture boosts efficiency and morale.
(01:21:43) Data centers must transition to hybrid cooling solutions.
(01:42:09) AI needs ethical guardrails.

Resources Mentioned:

Fidelis New Energy | Website - https://www.fidelisinfra.com

Microsoft Azure - https://azure.microsoft.com/en-us/

Meta - https://about.meta.com/

Jacobs - https://www.jacobs.com/

National Guard - https://nationalguard.com/

Jones Lang LaSalle - https://www.us.jll.com/

Thank you for listening to “Data Center Revolution.” Don’t forget to leave us a review and subscribe so you don’t miss an episode.   To learn more about Overwatch, visit us at https://linktr.ee/overwatchmissioncritical 

#DataCenterIndustry #NuclearEnergy #FutureOfDataCenters #AI