Ever wonder how Google Cloud supports billions of daily users worldwide? Dive into the world of Google’s cloud regions, global expansion, and data centers. This session unveils the innovative technology behind these modern marvels and the operational expertise that keeps them humming. Learn about our cutting-edge efficiency and state-of-the-art security. Discover how innovation and sustainability are key to our design and how you can apply Google’s best practices to your own infrastructure.
talk-data.com
Topic: Cyber Security (2078 tagged)
Top Events
Google Workspace customers are switching to Google Chat and Google Meet, AI-first collaboration tools that seamlessly integrate with the Workspace apps you use every day. Learn how these customers migrated from costly point solutions to enhance collaboration, improve data security, reduce costs, and unlock new levels of team productivity with Gemini.
Securing AI applications demands a full life cycle approach. Palo Alto Networks integrates AI Runtime Security and AI Security Posture Management (AI-SPM) to protect your workloads from development to runtime. Gain inline data protection, prevent model misuse, and stop zero-day threats with centralized security policies and real-time threat detection. Discover how to automate security, reduce risk, and safeguard sensitive data while enabling innovation in your AI ecosystems.
This session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.
AI agents are revolutionizing organizations by embedding AI into workflows, delivering autonomous outcomes. While this offers opportunities, it also introduces risks. To mitigate these, we must implement agent-specific controls. AI is a powerful security tool, enabling resilience and optimization at scale. Discover how Accenture’s Security AI Engineering practice is developing modular blueprints to rapidly integrate AI and agentic capabilities, accelerating security operations, bridging tooling gaps, and enhancing critical infrastructure resilience.
This session is hosted by a Google Cloud Next Sponsor.
Visit your registration profile at g.co/cloudnext to opt out of sharing your contact information with the sponsor hosting this session.
Discover how Target modernized its MLOps workflows using Ray and Vertex AI to build scalable ML applications. This session will cover key strategies for optimizing model performance, ensuring security and compliance, and fostering collaboration between data science and platform teams. Whether you’re looking to streamline model deployment, enhance data access, or improve infrastructure management in a hybrid setup, this session provides practical insights and guidance for integrating Ray and Vertex AI into your MLOps roadmap.
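To make the Ray plus Vertex AI pairing concrete, here is a minimal, hedged sketch of distributing a scoring task with Ray's task API. It starts a local Ray runtime; the comment notes where a managed cluster address (for example, a Ray on Vertex AI cluster) would be supplied instead. The function and data names are illustrative, not Target's actual workload.

```python
import ray

# Start a local Ray runtime for this sketch. On a managed cluster (such as
# Ray on Vertex AI) you would instead pass the cluster address to ray.init().
ray.init()

@ray.remote
def score_batch(batch):
    # Stand-in for model inference over one shard of data.
    return [len(record) for record in batch]

batches = [["alpha", "bravo"], ["charlie"], ["delta", "echo"]]
futures = [score_batch.remote(b) for b in batches]
print(ray.get(futures))

ray.shutdown()
```

The same task code runs unchanged whether the runtime is a laptop or a remote cluster, which is the main reason teams reach for Ray in hybrid setups.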
This talk explores the practical challenges and solutions of a global car manufacturer's SOC modernization. We'll move beyond theory to examine their use-case driven approach, focusing on Agile Methodology for rapid progress, Threat-Centric Design to guide security, "SOC as Code" for automation, and SOAR Playbooks for incident response. We'll share insights on optimizing log ingestion, building threat profiles, and "shifting left" by integrating security earlier in development. Additionally, we'll discuss organizational restructuring and balancing transformative change with incremental improvements. This session offers valuable lessons for security professionals modernizing their SOC in a large-scale enterprise.
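As a rough illustration of the "SOC as Code" idea, the sketch below declares a response playbook as version-controlled data and runs its steps in order. The step names and stubbed actions are hypothetical and do not correspond to any specific SOAR product's API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str
    action: Callable[[Dict], Dict]

def enrich_ip(alert: Dict) -> Dict:
    alert["ip_reputation"] = "unknown"  # stub for a threat-intel lookup
    return alert

def open_ticket(alert: Dict) -> Dict:
    alert["ticket_id"] = "INC-0001"     # stub for a ticketing integration
    return alert

# The playbook lives in the repository next to detections, so changes to
# incident response are reviewed and versioned like any other code.
PLAYBOOK: List[Step] = [Step("enrich_ip", enrich_ip), Step("open_ticket", open_ticket)]

def run_playbook(alert: Dict) -> Dict:
    for step in PLAYBOOK:
        alert = step.action(alert)
    return alert

print(run_playbook({"src_ip": "203.0.113.7", "rule": "suspicious_login"}))
```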
It feels like there’s a new advancement happening in AI every day – but how do these discoveries go from their nascent state in the research lab to full-scale enterprise deployment? And how do we ensure that these new tools and capabilities remain secure? Our panelists will explore this cycle of innovation – the iterative process of curiosity-driven research, practical application, and real-world impact of AI and machine learning. We'll examine practical case studies and underscore the transformative power AI has in the realm of security.
Topics will include:
- How Google connects AI research with real-world solutions for some of the largest enterprises in the world
- Google Cloud’s AI roadmap
- Cybersecurity considerations for emerging AI tools
Ensuring data usability is paramount to unlocking a company’s full potential and driving informed decision-making. Part of author Saurav Bhattacharya’s trilogy that covers the essential pillars of digital ecosystems—security, reliability, and usability—this book offers a comprehensive exploration of the fundamental concepts, principles, and practices essential for enhancing data accessibility and effectiveness. You’ll study the core aspects of data design, standardization, and interoperability, gaining the knowledge needed to create and maintain high-quality data environments. By examining the tools and technologies that improve data usability, along with best practices for data visualization and user-centric strategies, this book serves as an invaluable resource for professionals seeking to leverage data more effectively.

The book also addresses crucial governance issues, ensuring data quality, integrity, and security are maintained. Through a detailed analysis of data governance frameworks and privacy concerns, you’ll see how to manage data responsibly. Additionally, the book includes compelling case studies that highlight successful data usability implementations, future trends, and the challenges faced in achieving optimal data usability. By fostering a culture of data literacy and usability, this book will help you and your organization navigate the evolving data landscape and harness the power of data for innovation and growth.

What You Will Learn
- Understand the fundamental concepts and importance of data usability, including effective data design, enhancing data accessibility, and ensuring data standardization and interoperability.
- Review the latest tools and technologies that enhance data usability, best practices for data visualization, and strategies for implementing user-centric data approaches.
- Ensure data quality and integrity, while navigating data privacy and security concerns.
- Implement robust data governance frameworks to manage data responsibly and effectively.

Who This Book Is For
Cybersecurity and IT professionals
Drew Biemer, Agency Director for Energy Facility Siting and Permitting for the State of New Hampshire, sits down with Kirk to talk about the pressing need to adapt the state’s energy landscape amid rising demand and the transition from coal to renewables, the challenges of "astroturfing," and the impact of data centers on energy needs.
0:00 Introduction to Energy and Politics
3:24 State-Level Energy Challenges
7:34 The Complexities of Energy Opposition
12:33 Understanding Astroturfing in Energy
17:55 New Hampshire's Energy Landscape
26:01 Future of Renewable Energy in New Hampshire
33:41 Data Centers and Energy Demand
44:34 Balancing Energy Goals and Community Needs
54:42 New Hampshire's Energy Future and Exports
56:17 Data Centers and Energy Improvements
59:48 The Energy Demand Crisis
1:04:26 Local Development and Energy Projects
1:08:53 Energy Policy and National Security
1:12:57 The Engineer's Fallacy in Energy Projects
1:19:40 The Role of Community Engagement
1:23:15 Merging Data Centers and Energy
1:29:48 The Limits of Renewable Energy
1:39:10 Understanding Regulatory Structures
1:47:28 Skills for Future Opportunities in Energy

For more about us: https://linktr.ee/overwatchmissioncritical
Learn how to build and manage robust AI infrastructure using Kong AI Gateway for efficient GenAI application development and deployment. From AI Gateway essentials to advanced management techniques, you'll learn to optimize your applications, implement governance and security measures, and adapt to various deployment environments.
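As a hedged sketch of what calling a model through an AI gateway can look like, the snippet below posts a chat request to a hypothetical gateway route. The URL, credential header, and model name are placeholders, and the sketch assumes the gateway exposes an OpenAI-compatible chat-completions endpoint; consult the Kong documentation for the exact routes and plugins your deployment uses.

```python
import json
import urllib.request

# Hypothetical gateway route; adjust to your own deployment.
GATEWAY_URL = "https://gateway.example.com/ai/chat/completions"

payload = {
    "model": "example-model",  # the gateway maps this to a configured upstream provider
    "messages": [{"role": "user", "content": "Summarize our security policy."}],
}

req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "apikey": "MY_CONSUMER_KEY",  # placeholder consumer credential
    },
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    print(body["choices"][0]["message"]["content"])
```

Routing every AI call through a single gateway endpoint is what makes centralized policies, rate limits, and inline data protection possible in the first place.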
Supported by Our Partners
• Graphite — The AI developer productivity platform.
• Sonar — Code quality and code security for ALL code.
• Chronosphere — The observability platform built for control.

How do you take a new product idea, and turn it into a successful product? Figma Slides started as a hackathon project a year and a half ago – and today it’s a full-on product, with more than 4.5M slide decks created by users. I’m joined by two founding engineers on this project: Jonathan Kaufman and Noah Finer. In our chat, Jonathan and Noah pull back the curtain on what it took to build Figma Slides. They share engineering challenges faced, interesting engineering practices utilized, and what it's like working on a product used by millions of designers worldwide.

We talk about:
• An overview of Figma Slides
• The tech stack behind Figma Slides
• Why the engineering team built grid view before single slide view
• How Figma ensures that all Figma files look the same across browsers
• Figma’s "vibe testing" approach
• How beta testing helped experiment more
• The “all flags on”, “all flags off” testing approach
• Engineering crits at Figma
• And much more!

Timestamps
(00:00) Intro
(01:45) An overview of Figma Slides and the first steps in building it
(06:41) Why Figma built grid view before single slide view
(10:00) The next steps of building UI after grid view
(12:10) The team structure and size of the Figma Slides team
(14:14) The tech stack behind Figma Slides
(15:31) How Figma uses C++ with bindings
(17:43) The Chrome debugging extension used for C++ and WebAssembly
(21:02) An example of how Noah used the debugging tool
(22:18) Challenges in building Figma Slides
(23:15) An explanation of multiplayer cursors
(26:15) Figma’s philosophy of building interconnected products—and the code behind them
(28:22) An example of a different mouse behavior in Figma
(33:00) Technical challenges in developing single slide view
(35:10) Challenges faced in single-slide view while maintaining multiplayer compatibility
(40:00) The types of testing used on Figma Slides
(43:42) Figma’s zero bug policy
(45:30) The release process, and how engineering uses feature flags
(48:40) How Figma tests Slides with feature flags enabled and then disabled
(51:35) An explanation of eng crits at Figma
(54:53) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• Inside Figma’s engineering culture
• Quality Assurance across the tech industry
• Shipping to production
• Design-first software engineering

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
Your complete guide to preparing for the AWS® Certified Data Engineer: Associate exam.

The AWS® Certified Data Engineer Study Guide is your one-stop resource for complete coverage of the challenging DEA-C01 Associate exam. This Sybex Study Guide covers 100% of the DEA-C01 objectives. Prepare for the exam faster and smarter with Sybex thanks to accurate content including an assessment test that validates and measures exam readiness, real-world examples and scenarios, practical exercises, and challenging chapter review questions. Reinforce and retain what you’ve learned with the Sybex online learning environment and test bank, accessible across multiple devices. Get ready for the AWS Certified Data Engineer exam – quickly and efficiently – with Sybex.

Coverage of 100% of all exam objectives in this Study Guide means you’ll be ready for:
- Data Ingestion and Transformation
- Data Store Management
- Data Operations and Support
- Data Security and Governance

ABOUT THE AWS DATA ENGINEER – ASSOCIATE CERTIFICATION
The AWS Data Engineer – Associate certification validates skills and knowledge in core data-related Amazon Web Services. It recognizes your ability to implement data pipelines and to monitor, troubleshoot, and optimize cost and performance issues in accordance with best practices.

Interactive learning environment
Take your exam prep to the next level with Sybex’s superior interactive online study tools. To access our learning environment, simply visit www.wiley.com/go/sybextestprep, register your book to receive your unique PIN, and instantly gain one year of FREE access after activation to:
• Interactive test bank with 5 practice exams to help you identify areas where further review is needed. Get more than 90% of the answers correct, and you’re ready to take the certification exam.
• 100 electronic flashcards to reinforce learning and last-minute prep before the exam
• Comprehensive glossary in PDF format gives you instant access to the key terms so you are fully prepared
Stay ahead with insights into the next big advancements in AI-powered security.
The integration of AI into everyday business operations raises questions about the future of work and human agency. With AI's potential to automate and optimize, how do we ensure that it complements rather than competes with human capabilities? What measures can be taken to prevent AI from overshadowing human input and creativity? How do we strike a balance between embracing AI's benefits and preserving the essence of human contribution?

Faisal Hoque is the founder and CEO of SHADOKA, NextChapter, and other companies. He also serves as a transformation and innovation partner for CACI, an $8B company focused on U.S. national security. He volunteers for several organizations, including the MIT IDEAS Social Innovation Program. He is also a contributor at the Swiss business school IMD, Thinkers50, the Project Management Institute (PMI), and others. As a founder and CEO of multiple companies, he is a three-time winner of Deloitte Technology Fast 50™ and Fast 500™ awards. He has developed more than 20 commercial platforms and worked with leadership at the U.S. DoD, DHS, GE, MasterCard, American Express, Home Depot, PepsiCo, IBM, Chase, and others. For their innovative work, he and his team have been awarded several provisional patents in the areas of user authentication, business rule routing, and metadata sorting.

In the episode, Richie and Faisal explore the philosophical implications of AI on humanity, the concept of AI as a partner, the potential societal impacts of AI-driven unemployment, the importance of critical thinking and personal responsibility in the AI era, and much more.

Links Mentioned in the Show:
- SHADOKA
- Faisal’s Website
- Connect with Faisal
- Skill Track: Artificial Intelligence (AI) Leadership
- Related Episode: Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist
- Sign up to attend RADAR: Skills Edition
- New to DataCamp? Learn on the go using the DataCamp mobile app
- Empower your business with world-class data and AI skills with DataCamp for Business
Supported by Our Partners
• WorkOS — The modern identity platform for B2B SaaS.
• Vanta — Automate compliance and simplify security with Vanta.

Linux is the most widespread operating system globally – but how is it built? Few people are better placed to answer this than Greg Kroah-Hartman: a Linux kernel maintainer for 25 years, and one of the 3 Linux Kernel Foundation Fellows (the other two are Linus Torvalds and Shuah Khan). Greg manages the Linux kernel’s stable releases, and is a maintainer of multiple kernel subsystems. We cover the inner workings of Linux kernel development, exploring everything from how changes get implemented to why its community-driven approach produces such reliable software. Greg shares insights about the kernel's unique trust model and makes a case for why engineers should contribute to open-source projects.

We go into:
• How widespread is Linux?
• What is the Linux kernel responsible for – and why is it a monolith?
• How does a kernel change get merged? A walkthrough
• The 9-week development cycle for the Linux kernel
• Testing the Linux kernel
• Why is Linux so widespread?
• The career benefits of open-source contribution
• And much more!

Timestamps
(00:00) Intro
(02:23) How widespread is Linux?
(06:00) The difference in complexity in different devices powered by Linux
(09:20) What is the Linux kernel?
(14:00) Why trust is so important in Linux kernel development
(16:02) A walk-through of a kernel change
(23:20) How Linux kernel development cycles work
(29:55) The testing process at Kernel and Kernel CI
(31:55) A case for the open source development process
(35:44) Linux kernel branches: Stable vs. development
(38:32) Challenges of maintaining older Linux code
(40:30) How Linux handles bug fixes
(44:40) The range of work Linux kernel engineers do
(48:33) Greg’s review process and its parallels with Uber’s RFC process
(51:48) Linux kernel within companies like IBM
(53:52) Why Linux is so widespread
(56:50) How Linux Kernel Institute runs without product managers
(1:02:01) The pros and cons of using Rust in Linux kernel
(1:09:55) How LLMs are utilized in bug fixes and coding in Linux
(1:12:13) The value of contributing to the Linux kernel or any open-source project
(1:16:40) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• What TPMs do and what software engineers can learn from them
• The past and future of modern backend practices
• Backstage: an open-source developer portal

See the transcript and other references from the episode at https://newsletter.pragmaticengineer.com/podcast

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe
At Radio France, we explored the Backend for Frontend (BFF) pattern by pushing it to the limits of its capabilities. This talk is a detailed report on the mechanisms we put in place to optimize and secure this architecture, in response to the specific needs of our applications.

We will cover the technical choices we made, the challenges we ran into, and the practical solutions that allowed us to manage the interactions between our frontends and backends efficiently. Come discover how a BFF can transform how data flows are handled and improve the scalability of your projects.
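A minimal sketch of the BFF idea described above, assuming two internal backend services with illustrative URLs (not Radio France's actual APIs): the BFF endpoint fans out to both services concurrently and returns a single response shaped for the frontend.

```python
import asyncio

import httpx
from fastapi import FastAPI

app = FastAPI()

# Placeholder URLs for internal services sitting behind the BFF.
PROGRAMS_API = "https://programs.internal.example/api/shows/{show_id}"
PLAYER_API = "https://player.internal.example/api/streams/{show_id}"

@app.get("/bff/show/{show_id}")
async def show_page(show_id: str):
    # Call both backends concurrently, then shape one response containing
    # only what this frontend screen actually needs.
    async with httpx.AsyncClient(timeout=2.0) as client:
        show_resp, stream_resp = await asyncio.gather(
            client.get(PROGRAMS_API.format(show_id=show_id)),
            client.get(PLAYER_API.format(show_id=show_id)),
        )
    return {
        "title": show_resp.json().get("title"),
        "stream_url": stream_resp.json().get("hls_url"),
    }
```

The per-frontend shaping is the point of the pattern: the browser makes one request, and backend changes stay hidden behind the BFF contract.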
A friendly illustrated guide to designing and implementing your first database.

Grokking Relational Database Design makes the principles of designing relational databases approachable and engaging. Everything in this book is reinforced by hands-on exercises and examples. In Grokking Relational Database Design, you’ll learn how to:
- Query and create databases using Structured Query Language (SQL)
- Design databases from scratch
- Implement and optimize database designs
- Take advantage of generative AI when designing databases

A well-constructed database is easy to understand, query, manage, and scale when your app needs to grow. In Grokking Relational Database Design you’ll learn the basics of relational database design, including how to name fields and tables, which data to store where, how to eliminate repetition, good practices for data collection and hygiene, and much more. You won’t need a computer science degree or in-depth knowledge of programming—the book’s practical examples and down-to-earth definitions are beginner-friendly.

About the Technology
Almost every business uses a relational database system. Whether you’re a software developer, an analyst creating reports and dashboards, or a business user just trying to pull the latest numbers, it pays to understand how a relational database operates. This friendly, easy-to-follow book guides you from square one through the basics of relational database design.

About the Book
Grokking Relational Database Design introduces the core skills you need to assemble and query tables using SQL. The clear explanations, intuitive illustrations, and hands-on projects make database theory come to life, even if you can’t tell a primary key from an inner join. As you go, you’ll design, implement, and optimize a database for an e-commerce application and explore how generative AI simplifies the mundane tasks of database design.

What's Inside
- Define entities and their relationships
- Minimize anomalies and redundancy
- Use SQL to implement your designs
- Security, scalability, and performance

About the Reader
For self-taught programmers, software engineers, data scientists, and business data users. No previous experience with relational databases assumed.

About the Authors
Dr. Qiang Hao and Dr. Michail Tsikerdekis are both professors of Computer Science at Western Washington University.

Quotes
"If anyone is looking to improve their database design skills, they can’t go wrong with this book." - Ben Brumm, DatabaseStar
"Goes beyond SQL syntax and explores the core principles. An invaluable resource!" - William Jamir Silva, Adjust
"Relational database design is best done right the first time. This book is a great help to achieve that!" - Maxim Volgin, KLM
"Provides necessary notions to design and build databases that can stand the data challenges we face." - Orlando Méndez, Experian
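For a taste of the kind of design the book covers, here is a minimal sketch of a two-table schema with a primary key, a foreign key, and an inner join, using Python's built-in sqlite3 module. The tables and data are illustrative, not the book's own examples.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# A small e-commerce design: each purchase references exactly one customer
# via a foreign key, so customer details are stored once and never repeated.
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE purchase (
    purchase_id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    total_cents INTEGER NOT NULL
);
INSERT INTO customer VALUES (1, 'Ada'), (2, 'Lin');
INSERT INTO purchase VALUES (10, 1, 2500), (11, 1, 990), (12, 2, 4200);
""")

# An inner join answers "total spent per customer" without duplicating
# customer names inside the purchase table.
for name, total in conn.execute("""
    SELECT c.name, SUM(p.total_cents) / 100.0
    FROM customer AS c
    JOIN purchase AS p ON p.customer_id = c.customer_id
    GROUP BY c.name
"""):
    print(name, total)
```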
Get ready to dive into the world of DevOps & Cloud tech! This session will help you navigate the complex world of Cloud and DevOps with confidence. This session is ideal for new grads, career changers, and anyone feeling overwhelmed by the buzz around DevOps. We'll break down its core concepts, demystify the jargon, and explore how DevOps is essential for success in the ever-changing technology landscape, particularly in the emerging era of generative AI. A basic understanding of software development concepts is helpful, but enthusiasm to learn is most important.
Vishakha is a Senior Cloud Architect at Google Cloud Platform with over 8 years of DevOps and Cloud experience. Prior to Google, she was a DevOps engineer at AWS and a Subject Matter Expert (SME) for the IaC offering CloudFormation in the NorthAm region. She has experience in diverse domains including Financial Services, Retail, and Online Media. She primarily focuses on Infrastructure Architecture, Design & Automation (IaC), Public Cloud (AWS, GCP), Kubernetes/CNCF tools, Infrastructure Security & Compliance, CI/CD & GitOps, and MLOps.
"What if you have a beautiful SLO Dashboard and it's all red and no one cares?" The mission of Site Reliability Engineering (SRE) is to ensure the reliability, scalability, and performance of critical systems - a goal best achieved through strong collaboration with teams across the organization. We are exploring how SRE is embedded in an organization, how it interfaces with application owners, senior management, business stakeholders and external software/hardware vendors. In all these cases the success of SRE's mission hinges on the effectiveness of the relationships.
We will use plenty of examples of what worked and what failed in our past work, and why. Additionally, we will address funding challenges that can unexpectedly impact even well-established SRE teams.
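To ground the SLO discussion, here is a minimal sketch of an error-budget calculation for a 99.9% availability objective over a 30-day window; the request counts are made up for illustration.

```python
# Error budget: the amount of failure a 99.9% SLO tolerates per window,
# and how much of that budget has already been burned.
SLO_TARGET = 0.999

total_requests = 12_000_000   # illustrative traffic over the 30-day window
failed_requests = 9_500       # illustrative failures observed so far

allowed_failures = total_requests * (1 - SLO_TARGET)   # budget, in requests
budget_burned = failed_requests / allowed_failures      # fraction of budget used

print(f"Error budget: {allowed_failures:,.0f} failed requests per window")
print(f"Budget burned so far: {budget_burned:.0%}")
if budget_burned > 1.0:
    print("SLO breached: pause risky launches and prioritize reliability work.")
```

Numbers like these only matter if stakeholders agree in advance on what happens when the budget runs out, which is exactly the collaboration problem the session is about.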
Mike has built his career around driving performance and efficiency, specializing in optimizing the security, availability and speed of cloud applications, data and infrastructure. He developed the first currency program trading system for the Toronto Stock Exchange at UBS and later refined his expertise in optimizing trading systems and migrating core data to the cloud at Morgan Stanley and Transamerica. He is a founding member of the NYZH consultancy, focusing on AI and SRE. Based in Denver, Colorado, Mike is a pilot who enjoys desert racing and cycling, sharing adventures with his wife and three children.
Master the art of designing and implementing analytics solutions using Microsoft Fabric with this comprehensive guide. Whether you're preparing for the DP-600 certification exam or want to advance your career, this book offers expert insights into data analytics in Microsoft environments.

What this book will help me do
- Confidently pass the DP-600 certification exam by mastering exam-tested skills.
- Acquire practical expertise in deploying data analytics solutions with Microsoft Fabric.
- Understand and optimize data integration, security, and performance in analytics systems.
- Learn advanced techniques, including semantic model optimization and advanced SQL querying.
- Prepare for real-world challenges through mock exams and hands-on exercises.

Author(s)
Jagjeet Singh Makhija and Charles Odunukwe, authors of this guide, are seasoned Microsoft specialists with decades of experience in data analytics, certification training, and technology consulting. Their clear and methodical approach ensures learners at all levels can grow their expertise.

Who is it for?
If you're a data analyst or IT professional looking to enhance your skills in analytics and Microsoft's technologies, this book is for you. It's ideal for those pursuing the DP-600 certification or aiming to improve their data integration and analysis capabilities.