talk-data.com
People (417 results)
See all 417 →Companies (1 result)
Activities & events
| Title & Speakers | Event |
|---|---|
|
Mandeep Singh
– author
,
Amit Singhal
– author
,
Puneet Kumar Aggarwal
– author
,
Sushil Kumar Singh
– author
,
Parita Jain
– author
Future-proof your knowledge and expertise in telecommunications with this essential guide, which provides a comprehensive analysis of the critical security and privacy challenges in the transition to 6G communication. The advancement from 5G to 6G communication represents a quantum leap in wireless technology, promising unprecedented speeds, ultra-low latency, and ubiquitous connectivity. As the industry embarks on this journey, it encounters a host of technical challenges, particularly in ensuring the security and privacy of data transmitted across these networks. The interconnected nature of 6G systems, combined with the proliferation of Internet of Things devices and the sheer volume of data exchanged, creates a fertile ground for cyber threats and privacy breaches. This book delves into these intricate technical challenges, offering a comprehensive analysis of the security and privacy implications of 6G communication. We explore the vulnerabilities inherent in 6G networks, ranging from potential weaknesses in network protocols to the risk of unauthorized access to sensitive data. Through detailed examination and real-world examples, we provide insights into cutting-edge security measures and privacy-preserving techniques tailored specifically to the unique characteristics of 6G systems. By addressing these challenges head-on, we aim to empower engineers, researchers, and policymakers with the knowledge and tools necessary to build resilient and secure 6G networks that safeguard user privacy and data integrity in an increasingly interconnected world. By dissecting the complexities of 6G architecture and protocols, the book equips readers with a nuanced understanding of the unique security and privacy considerations that must be addressed in the design and implementation of these transformative systems. |
|
|
Shantanu Baruah
– author
Gain cutting-edge skills in building a full-stack web application with AI assistance. This book will guide you in creating your own travel application using React and Node.js, with MongoDB as the database, while emphasizing the use of Gen AI platforms like Perplexity.ai and Claude for quicker development and more accurate debugging. The book’s step-by-step approach will help you bridge the gap between traditional web development methods and modern AI-assisted techniques, making it both accessible and insightful. It provides valuable lessons on professional web application development practices. By focusing on a practical example, the book offers hands-on experience that mirrors real-world scenarios, equipping you with relevant and in-demand skills that can be easily transferred to other projects. The book emphasizes the principles of responsive design, teaching you how to create web applications that adapt seamlessly to different screen sizes and devices. This includes using fluid grids, media queries, and optimizing layouts for usability across various platforms. You will also learn how to design, manage, and query databases using MongoDB, ensuring you can effectively handle data storage and retrieval in your applications. Most significantly, the book will introduce you to generative AI tools and prompt engineering techniques that can accelerate coding and debugging processes. This modern approach will streamline development workflows and enhance productivity. By the end of this book, you will not only have learned how to create a complete web application from backend to frontend, along with database management, but you will also have gained invaluable associated skills such as using IDEs, version control, and deploying applications efficiently and effectively with AI. What You Will Learn How to build a full-stack web application from scratch How to use generative AI tools to enhance coding efficiency and streamline the development process How to create user-friendly interfaces that enhance the overall experience of your web applications How to design, manage, and query databases using MongoDB Who This Book Is For Frontend developers, backend developers, and full-stack developers. |
|
|
Pratik Prakash Kasralikar
– author
In the evolving landscape of SAP development, performance is no longer just a nice-to-have—it's a necessity. With the power of SAP HANA and the enhancements introduced in ABAP 7.5, developers are now equipped to rethink how applications are built, executed, and optimized. This book is your guide to that transformation. We begin by understanding the core shift: moving data-intensive operations directly into the HANA database. When implemented correctly, this "code pushdown" philosophy dramatically reduces data transfer and processing overhead. AMDP (ABAP Managed Database Procedures), our in-database processing engine, enables us to write complex logic directly in SQLScript, harnessing HANA’s parallel processing capabilities. We focus on crafting efficient AMDP procedures by adopting set-based operations and minimizing unnecessary data movement. Next, we explore Core Data Services (CDS) Views, our go-to data modeling tool. CDS Views are not just simple database views; they act as semantic layers that define how our applications interact with data. We learn to create optimized CDS Views by leveraging associations, annotations, and table functions, enabling us to build reusable, high-performance data models. These views simplify complex queries, improve data consistency, and enhance application flexibility. We then turn to Native SQL, our direct line to the HANA database. While AMDP and CDS Views provide powerful abstractions, Native SQL offers ultimate control for specialized tasks. We embed Native SQL within AMDP procedures to access database-specific features and fine-tune performance for critical operations. Along the way, we apply best practices for writing efficient queries, with a strong focus on indexing, join strategies, and precise data filtering. Throughout this journey, we emphasize the importance of rigorous testing and proactive monitoring. Just like a race car undergoes extensive testing before hitting the track, our ABAP applications require careful validation to ensure accuracy and optimal performance. We explore techniques for unit testing AMDP procedures, validating CDS Views, and monitoring query performance. We also look at strategies for detecting and addressing potential bottlenecks before they affect end users. SAP ABAP 7.5 Optimization for HANA is not just about writing faster code—it’s about fundamentally rethinking how we develop applications. By embracing code pushdown, leveraging AMDP, CDS Views, and Native SQL, and implementing robust testing and monitoring strategies, we build ABAP applications that are not only faster, but also more scalable, maintainable, and adaptable to the ever-evolving demands of modern business. You Will: Learn how to implement the "code pushdown" philosophy, moving data-intensive operations directly into the HANA database to reduce data transfer and processing overhead Understand to create optimized CDS Views, leveraging associations, annotations, and table functions to build reusable, high-performance data models that simplify complex queries and improve data consistency. Explore how to write complex logic directly in SQLScript using AMDP, harnessing HANA's parallel processing capabilities, and using Native SQL for specialized tasks, accessing database-specific features to optimize performance. This Book is For: ABAP Developers, SAP Consultants and Architects and IT Managers and Technical Leads |
|
|
Sourav Banerjee
– author
The book simplifies the complexities of cloud transition and offers a clear, actionable roadmap for organizations moving from SAP BW or BW/4HANA to SAP Datasphere and SAP Analytics Cloud (as part of SAP Business Data Cloud), particularly in alignment with S/4HANA transformation. Whether you are assessing your current landscape, building a business case with ROI analysis, or creating a phased implementation strategy, this book delivers both technical and strategic guidance. It highlights short- and long-term planning considerations, outlines migration governance, and provides best practices for managing projects across hybrid SAP environments. From identifying platform gaps to facilitating stakeholder discussions, this book is an essential resource for anyone involved in the analytics modernization journey. You Will: [if !supportLists] · [endif] Learn how to assess your current SAP BW or BW/4HANA landscape and identify key migration drivers [if !supportLists] · [endif] Understand best practices for leveraging out-of-the-box cloud features and AI/ML capabilities [if !supportLists] · [endif] A step-by-step approach to planning and executing the move to SAP Business Data Cloud (Mainly SAP Datasphere and SAP Analytics Cloud) This book is for: SAP BW/BW4HANA Customers, SAP Consultants, Solution Architects and Enterprise Architects |
|
|
Asim Chowdhury
– author
Unlock the power of Oracle Database 23AI and Autonomous Database Serverless (ADB-S) with this comprehensive guide to the latest innovations in performance, security, automation, and AI-driven optimization. As enterprises embrace intelligent and autonomous data platforms, understanding these capabilities is essential for data architects, developers, and DBAs. Explore cutting-edge features such as vector data types and AI-powered vector search, revolutionizing data retrieval in modern AI applications. Learn how schema privileges and the DB_DEVELOPER_ROLE simplify access control in multi-tenant environments. Dive into advanced auditing, SQL Firewall, and data integrity constraints to strengthen security and compliance. Discover AI-driven advancements like machine learning-based query execution, customer retention prediction, and AI-powered query tuning. Additional chapters cover innovations in JSON, XML, JSON-Relational Duality Views, new indexing techniques, SQL property graphs, materialized views, partitioning, lock-free transactions, JavaScript stored procedures, blockchain tables, and automated bigfile tablespace shrinking. What sets this book apart is its practical focus—each chapter includes real-world case studies and executable scripts, enabling professionals to implement these features effectively in enterprise environments. Whether you're optimizing performance or aligning IT with business goals, this guide is your key to building scalable, secure, and AI-powered solutions with Oracle 23AI and ADB-S. What You Will Learn Explore Oracle 23AI's latest features through real-world use cases Implement AI/ML-driven optimizations for smarter, autonomous database performance Gain hands-on experience with executable scripts and practical coding examples Strengthen security and compliance using advanced auditing, SQL Firewall, and blockchain tables Master high-performance techniques for query tuning, in-memory processing, and scalability Revolutionize data access with AI-powered vector search in modern AI workloads Simplify user access in multi-tenant environments using schema privileges and DB_DEVELOPER_ROLE Model and query complex data using JSON-Relational Duality Views and SQL property graphs Who this Book is For Database architects, data engineers, Oracle developers, and IT professionals seeking to leverage Oracle 23AI’s latest features for real-world applications |
|
|
Dunith Danushka
– author
This book is a comprehensive guide designed to equip you with the practical skills and knowledge necessary to tackle real-world data challenges using Open Source solutions. Focusing on 10 real-world data engineering projects, it caters specifically to data engineers at the early stages of their careers, providing a strong foundation in essential open source tools and techniques such as Apache Spark, Flink, Airflow, Kafka, and many more. Each chapter is dedicated to a single project, starting with a clear presentation of the problem it addresses. You will then be guided through a step-by-step process to solve the problem, leveraging widely-used open-source data tools. This hands-on approach ensures that you not only understand the theoretical aspects of data engineering but also gain valuable experience in applying these concepts to real-world scenarios. At the end of each chapter, the book delves into common challenges that may arise during the implementation of the solution, offering practical advice on troubleshooting these issues effectively. Additionally, the book highlights best practices that data engineers should follow to ensure the robustness and efficiency of their solutions. A major focus of the book is using open-source projects and tools to solve problems encountered in data engineering. In summary, this book is an indispensable resource for data engineers looking to build a strong foundation in the field. By offering practical, real-world projects and emphasizing problem-solving and best practices, it will prepare you to tackle the complex data challenges encountered throughout your career. Whether you are an aspiring data engineer or looking to enhance your existing skills, this book provides the knowledge and tools you need to succeed in the ever-evolving world of data engineering. You Will Learn: The foundational concepts of data engineering and practical experience in solving real-world data engineering problems How to proficiently use open-source data tools like Apache Kafka, Flink, Spark, Airflow, and Trino 10 hands-on data engineering projects Troubleshoot common challenges in data engineering projects Who is this book for: Early-career data engineers and aspiring data engineers who are looking to build a strong foundation in the field; mid-career professionals looking to transition into data engineering roles; and technology enthusiasts interested in gaining insights into data engineering practices and tools. |
|
|
Vinoth Govindarajan
– author
,
Dipankar Mazumdar
– author
Engineering Lakehouses with Open Table Formats introduces the architecture and capabilities of open table formats like Apache Iceberg, Apache Hudi, and Delta Lake. The book guides you through the design, implementation, and optimization of lakehouses that can handle modern data processing requirements effectively with real-world practical insights. What this Book will help me do Understand the fundamentals of open table formats and their benefits in lakehouse architecture. Learn how to implement performant data processing using tools like Apache Spark and Flink. Master advanced topics like indexing, partitioning, and interoperability between data formats. Explore data lifecycle management and integration with frameworks like Apache Airflow and dbt. Build secure lakehouses with regulatory compliance using best practices detailed in the book. Author(s) Dipankar Mazumdar and Vinoth Govindarajan are seasoned professionals with extensive experience in big data processing and software architecture. They bring their expertise from working with data lakehouses and are known for their ability to explain complex technical concepts clearly. Their collaborative approach brings valuable insights into the latest trends in data management. Who is it for? This book is ideal for data engineers, architects, and software professionals aiming to master modern lakehouse architectures. If you are familiar with data lakes or warehouses and wish to transition to an open data architectural design, this book is suited for you. Readers should have basic knowledge of databases, Python, and Apache Spark for the best experience. |
|
|
Brian Allbee
– author
Grow your software engineering discipline, incorporating and mastering design, development, testing, and deployment best practices examples in a realistic Python project structure. Key Features Understand what makes Software Engineering a discipline, distinct from basic programming Gain practical insight into updating, refactoring, and scaling an existing Python system Implement robust testing, CI/CD pipelines, and cloud-ready architecture decisions Book Description Software engineering is more than coding; it’s the strategic design and continuous improvement of systems that serve real-world needs. This newly updated second edition of Hands-On Software Engineering with Python expands on its foundational approach to help you grow into a senior or staff-level engineering role. Fully revised for today’s Python ecosystem, this edition includes updated tooling, practices, and architectural patterns. You’ll explore key changes across five minor Python versions, examine new features like dataclasses and type hinting, and evaluate modern tools such as Poetry, pytest, and GitHub Actions. A new chapter introduces high-performance computing in Python, and the entire development process is enhanced with cloud-readiness in mind. You’ll follow a complete redesign and refactor of a multi-tier system from the first edition, gaining insight into how software evolves—and what it takes to do that responsibly. From system modeling and SDLC phases to data persistence, testing, and CI/CD automation, each chapter builds your engineering mindset while updating your hands-on skills. By the end of this book, you'll have mastered modern Python software engineering practices and be equipped to revise and future-proof complex systems with confidence. What you will learn Distinguish software engineering from general programming Break down and apply each phase of the SDLC to Python systems Create system models to plan architecture before writing code Apply Agile, Scrum, and other modern development methodologies Use dataclasses, pydantic, and schemas for robust data modeling Set up CI/CD pipelines with GitHub Actions and cloud build tools Write and structure unit, integration, and end-to-end tests Evaluate and integrate tools like Poetry, pytest, and Docker Who this book is for This book is for Python developers with a basic grasp of software development who want to grow into senior or staff-level engineering roles. It’s ideal for professionals looking to deepen their understanding of software architecture, system modeling, testing strategies, and cloud-aware development. Familiarity with core Python programming is required, as the book focuses on applying engineering principles to maintain, extend, and modernize real-world systems. |
|
|
In a world where data sovereignty, scalability, and AI innovation are at the forefront of enterprise strategy, PostgreSQL is emerging as the key to unlocking transformative business value. This new guide serves as your beacon for navigating the convergence of AI, open source technologies, and intelligent data platforms. Authors Tom Taulli, Benjamin Anderson, and Jozef de Vries offer a strategic and practical approach to building AI and data platforms that balance innovation with governance, empowering organizations to take control of their data future. Whether you're designing frameworks for advanced AI applications, modernizing legacy infrastructures, or solving data challenges at scale, you can use this guide to bridge the gap between technical complexity and actionable strategy. Written for IT executives, data leaders, and practitioners alike, it will equip you with the tools and insights to harness Postgre's unique capabilities—extensibility, unstructured data management, and hybrid workloads—for long-term success in an AI-driven world. Learn how to build an AI and data platform using PostgreSQL Overcome data challenges like modernization, integration, and governance Optimize AI performance with model fine-tuning and retrieval-augmented generation (RAG) best practices Discover use cases that align data strategy with business goals Take charge of your data and AI future with this comprehensive and accessible roadmap |
|
|
The Great Data Engineering Reset: From Pipelines to Agents and Beyond
2025-11-28 · 15:00
❄️ DSF WinterFest 2025: Global Online Summit ❄️ Join the global data celebration! Monday 24th to Friday 28th November 2025 Online \| 2-3 sessions per day \| Theme: Innovating with Data DSF WinterFest is back, and this year, it’s going global! Join our 50,000-strong community for a week of world-class talks, tutorials, and panels exploring how data, AI, and analytics are reshaping the world. Expect inspiring content, expert insights, and the cosy, welcoming DSF atmosphere we are known for, all from the comfort of your own space! Why join? 🌍 A global stage with speakers and attendees from every corner of the world 🎟️ One ticket for the full week. Register once and access every session 💻 Easy access from anywhere. Join live or catch replays in your own time ☕ Cosy community vibe. No travel, no stress, just data and connection 🎟️ Tickets: Choose your experience and secure your spot today: Free Pass - Watch live and enjoy replays until 30 November 2025 Upgrade at Checkout - Get extended replay access until May 2026 Register on our website to receive your joining links, add sessions to your calendar, and tune in live from anywhere in the world. Please note: Clicking “Attend” on Meetup does not register you for this summit. You must register via our website to receive your links. 🎁 Competition: We’re spreading festive cheer! One lucky attendee will win a £300 Amazon gift voucher (or equivalent in your currency). Find out more here. ❄️❄️❄️ Session details: 💡 The Great Data Engineering Reset: From Pipelines to Agents and Beyond 🗓️ Friday 28th November ⏰ 15:00 PM GMT 🗣️ Joe Reis, Data Engineer and Architect For years, data engineering was a story of predictable pipelines: move data from point A to point B. But AI just hit the reset button on our entire field. Now, we're all staring into the void, wondering what's next. While the fundamentals remain unchanged, data continues to pose challenges in traditional areas such as security, data governance, management, and modeling. Everything else is up for grabs. This talk will cut through the noise and explore the future of data engineering in an AI-driven world. We'll discuss how our focus must shift from building dashboards and analytics to architecting for automated action. The reset button has been pushed. It's time for us to invent the future of our industry. Joe Reis, a "recovering data scientist" with 20 years in the data industry, is the co-author of the best-selling O'Reilly book, "Fundamentals of Data Engineering." He’s also the instructor for the wildly popular Data Engineering Professional Certificate on Coursera, in partnership with DeepLearning.ai and AWS. Joe’s extensive experience encompasses data engineering\, data architecture\, machine learning\, and more. He regularly keynotes major data conferences globally\, advises and invests in innovative data product companies\, writes at [Practical Data Modeling](https://practicaldatamodeling.substack.com/) and his [personal blog](https://joereis.substack.com/)\, and hosts the popular data podcast "[The Joe Reis Show.](https://open.spotify.com/show/3mcKitYGS4VMG2eHd2PfDN?si=3d3fde23fc1c4a33)" In his free time\, Joe is dedicated to writing new books and articles and thinking of ways to advance the data industry. ❄️❄️❄️ 🔗 How to join: Once registered, you’ll receive your unique joining link by email, plus handy reminders one week, one day, and one hour before each session. Don't forget to add the sessions you are attending to your calendar. If you can’t make it live, don’t worry, your ticket includes replay access until 30 November 2025 (or May 2026 with the upgrade). 📘 Reminders: Time zones: All sessions are listed in GMT - please check your local time when registering. Recordings: Access replays until 30 November 2025 with a free pass, or until May 2026 with an upgraded ticket Please note: Clicking “Attend” on Meetup does not register you for this summit. You must register via our website to receive your links. Join the Celebration ❄️ Five days. Global speakers. Cutting-edge insights. Free to join live - replays included. Upgrade for extended access. Register now and be part of the global data community shaping the future. #DSFWinterFest |
The Great Data Engineering Reset: From Pipelines to Agents and Beyond
|
|
Just Use Postgres!
2025-11-19
Denis Magda
– author
You probably don’t need a collection of specialty databases. Just use Postgres instead! Written for application developers and database pros, Just Use Postgres! shows you how to get the most out of the powerful Postgres database. In Just Use Postgres! you’ll learn how to: Use Postgres as an RDBMS for transactional workloads Develop generative AI, geospatial, and time-series applications Take advantage of modern SQL including window functions and CTEs Perform full-text search and process JSON documents Use Postgres as a message queue Optimize performance with various index types including B-trees, GIN, GiST, HNSW, and more Over the decades, PostgreSQL, aka Postgres, has grown into the most powerful general-purpose database and has become the de facto standard for developers worldwide. Just Use Postgres! takes a modern look at Postgres, exploring the database’s most up-to-date features for AI, time-series, full-text search, geospatial, and other application workloads. About the Technology You know that PostgreSQL is a fast, reliable, SQL compliant RDBMS. You may not know that it’s also great for geospatial systems, time series, full-text search, JSON documents, AI vector embeddings, and many other specialty database functions. For almost any data task you can imagine, you can use Postgres. About the Book Just Use Postgres! covers recipes for using Postgres in dozens of applications normally reserved for single-purpose databases. Written for busy application developers, each chapter explores a different use case illuminating the breadth and depth of Postgres’s capabilities. Along the way, you’ll also meet an incredible ecosystem of Postgres extensions like pgvector, PostGIS, pgmq, and TimescaleDB. You’ll be amazed at everything you can accomplish with Postgres! What's Inside Generative AI, geospatial, and time-series applications Modern SQL including window functions and CTEs Full-text search and JSON B-trees, GIN, GiST, HNSW, and more About the Reader For application developers, software engineers, and architects who know the basics of SQL. About the Author Denis Magda is a recognized Postgres expert and software engineer who worked on Java at Sun Microsystems and Oracle before focusing on databases and large-scale distributed systems. Quotes I was pleasantly surprised to learn many new things from this book. - From the Afterword by Vlad Mihalcea An excellent guide covering everything from basics to cutting-edge features. - Dave Cramer, PostgreSQL JDBC Maintainer Pleasant, easy to read with tonnes of great code. - Mike McQuillan, McQTech Ltd Well-organized and easy to search. - Edward Pollack, Microsoft Data Platform MVP The missing guide to understanding and using Postgres. - Mehboob Alam, POSTGRESNX, Inc. |
|
|
Context Engineering for Multi-Agent Systems
2025-11-18
Denis Rothman
– author
Build AI that thinks in context using semantic blueprints, multi-agent orchestration, memory, RAG pipelines, and safeguards to create your own Context Engine Free with your book: DRM-free PDF version + access to Packt's next-gen Reader Key Features Design semantic blueprints to give AI structured, goal-driven contextual awareness Orchestrate multi-agent workflows with MCP for adaptable, context-rich reasoning Engineer a glass-box Context Engine with high-fidelity RAG, trust, and safeguards Book Description Generative AI is powerful, yet often unpredictable. This guide shows you how to turn that unpredictability into reliability by thinking beyond prompts and approaching AI like an architect. At its core is the Context Engine, a glass-box, multi-agent system you’ll learn to design and apply across real-world scenarios. Written by an AI guru and author of various cutting-edge AI books, this book takes you on a hands-on journey from the foundations of context design to building a fully operational Context Engine. Instead of relying on brittle prompts that give only simple instructions, you’ll begin with semantic blueprints that map goals and roles with precision, then orchestrate specialized agents using the Model Context Protocol. As the engine evolves, you’ll integrate memory and high-fidelity retrieval with citations, implement safeguards against data poisoning and prompt injection, and enforce moderation to keep outputs aligned with policy. You’ll also harden the system into a resilient architecture, then see it pivot across domains, from legal compliance to strategic marketing, proving its domain independence. By the end of this book, you’ll be equipped with the skills to engineer an adaptable, verifiable architecture you can repurpose across domains and deploy with confidence. Email sign-up and proof of purchase required What you will learn Develop memory models to retain short-term and cross-session context Craft semantic blueprints and drive multi-agent orchestration with MCP Implement high-fidelity RAG pipelines with verifiable citations Apply safeguards against prompt injection and data poisoning Enforce moderation and policy-driven control in AI workflows Repurpose the Context Engine across legal, marketing, and beyond Deploy a scalable, observable Context Engine in production Who this book is for This book is for AI engineers, software developers, system architects, and data scientists who want to move beyond ad hoc prompting and learn how to design structured, transparent, and context-aware AI systems. It will also appeal to ML engineers and solutions architects with basic familiarity with LLMs who are eager to understand how to orchestrate agents, integrate memory and retrieval, and enforce safeguards. |
|
|
Pro Oracle GoldenGate 23ai for the DBA: Powering the Foundation of Data Integration and AI
2025-11-17
Bobby Curtis
– author
Transform your data replication strategy into a competitive advantage with Oracle GoldenGate 23ai. This comprehensive guide delivers the practical knowledge DBAs and architects need to implement, optimize , and scale Oracle GoldenGate 23ai in production environments. Written by Oracle ACE Director Bobby Curtis, it blends deep technical expertise with real-world business insights from hundreds of implementations across manufacturing, financial services, and technology sectors. Beyond traditional replication, this book explores the groundbreaking capabilities that make GoldenGate 23ai essential for modern AI initiatives. Learn how to implement real-time vector replication for RAG systems, integrate with cloud platforms like GCP and Snowflake, and automate deployments using REST APIs and Python. Each chapter offers proven strategies to deliver measurable ROI while reducing operational risk. Whether you're upgrading from Classic GoldenGate , deploying your first cloud data pipeline, or building AI-ready data architectures, this book provides the strategic guidance and technical depth to succeed. With Bobby's signature direct approach, you'll avoid common pitfalls and implement best practices that scale with your business. What You Will Learn Master the microservices architecture and new capabilities of Oracle GoldenGate 23ai Implement secure, high-performance data replication across Oracle, PostgreSQL, and cloud databases Configure vector replication for AI and machine learning workloads, including RAG systems Design and build multi-master replication models with automatic conflict resolution Automate deployments and management using RESTful APIs and Python Optimize performance for sub-second replication lag in production environments Secure your replication environment with enterprise-grade features and compliance Upgrade from Classic to Microservices architecture with zero downtime Integrate with cloud platforms including OCI, GCP, AWS, and Azure Implement real-time data pipelines to BigQuery , Snowflake, and other cloud targets Navigate Oracle licensing models and optimize costs Who This Book Is For Database administrators, architects, and IT leaders working with Oracle GoldenGate —whether deploying for the first time, migrating from Classic architecture, or enabling AI-driven replication—will find actionable guidance on implementation, performance tuning, automation, and cloud integration. Covers unidirectional and multi-master replication and is packed with real-world use cases. |
|
|
Keep Safe Using Mobile Tech, 2nd Edition
2025-11-12
Glenn Fleishman
– author
Leverage your smartphone and smartwatch for improved personal safety! Version 2.0, updated November 12, 2025 The digital and “real” worlds can both be scary places. The smartphone (and often smartwatch) you already carry with you can help reduce risks, deter theft, and mitigate violence. This book teaches you to secure your hardware, block abuse, automatically call emergency services, connect with others to ensure you arrive where and when you intended, detect stalking by compact trackers, and keep your ecosystem accounts from Apple, Google, and Microsoft secure. You don’t have to be reminded of the virtual and physical risks you face every day. Some of us are targeted more than others. Modern digital features built into mobile operating systems (and some computer operating systems) can help reduce our anxiety by putting more power in our hands to deter, deflect, block, and respond to abuse, threats, and emergencies. Keep Safe Using Mobile Tech looks at both digital threats, like online abuse and account hijacking, and ones in the physical world, like being stalked through Bluetooth trackers, facing domestic violence, or being in a car crash. The book principally covers the iPhone, Apple Watch, Android devices, and Wear OS watches. It also covers more limited but useful features available on the iPad and on computers running macOS or Windows. This second edition incorporates the massive number of new safety features Google added since October 2024 to the Android operating system, some particular to Google Pixel phones and smartwatches, and improved blocking, filtering, and screening added to Apple’s iOS 26 and related operating system updates in fall 2025. This book explores many techniques to help: Learn how to harden your Apple Account, Google Account, and Microsoft Account beyond just a password or a text-message token. Discover filtering and blocking tools from Apple and Google that can prevent abusive, fraudulent, and phishing messages and calls from reaching you. Block seeing unwanted sensitive images on your iPhone, iPad, Mac, Apple Watch, or Android phone—and help your kids receive advice on how not to send them. Turn on tracking on your Apple, Google, and Microsoft devices, and use it to recover or erase stolen hardware. Keep your cloud-archived messages from leaking to attackers. Screen calls with an automated assistant so that you know who wants you before picking up and without sending to voicemail. Lock down your devices to keep thieves and other personal invaders from accessing them. Prepare for emergencies by setting up medical information on your mobile devices. Let a supported smartphone or smartwatch recognize when you’re in a car crash or have taken a hard fall and call emergency services for you (and text your emergency contacts) if you can’t respond. Keep track of heart anomalies through smartwatch alerts and tests on your Apple Watch and many Android Wear smartwatches. Tell others where or when you expect to check in with them again, and let your smartphone alert them if you don’t with your Apple iPhone or Android phone. Deter stalking from tiny Bluetooth trackers. Protect your devices and accounts against access from domestic assailants. Block thieves who steal your phone—potentially threatening you or attacking you in person—from gaining access to the rest of your digital life. |
|
|
AI Systems Performance Engineering
2025-11-12
Chris Fregly
– author
Elevate your AI system performance capabilities with this definitive guide to maximizing efficiency across every layer of your AI infrastructure. In today's era of ever-growing generative models, AI Systems Performance Engineering provides engineers, researchers, and developers with a hands-on set of actionable optimization strategies. Learn to co-optimize hardware, software, and algorithms to build resilient, scalable, and cost-effective AI systems that excel in both training and inference. Authored by Chris Fregly, a performance-focused engineering and product leader, this resource transforms complex AI systems into streamlined, high-impact AI solutions. Inside, you'll discover step-by-step methodologies for fine-tuning GPU CUDA kernels, PyTorch-based algorithms, and multinode training and inference systems. You'll also master the art of scaling GPU clusters for high performance, distributed model training jobs, and inference servers. The book ends with a 175+-item checklist of proven, ready-to-use optimizations. Codesign and optimize hardware, software, and algorithms to achieve maximum throughput and cost savings Implement cutting-edge inference strategies that reduce latency and boost throughput in real-world settings Utilize industry-leading scalability tools and frameworks Profile, diagnose, and eliminate performance bottlenecks across complex AI pipelines Integrate full stack optimization techniques for robust, reliable AI system performance |
|
|
Data Engineering for Beginners
2025-11-11
Chisom Nwokwu
– author
A hands-on technical and industry roadmap for aspiring data engineers In Data Engineering for Beginners, big data expert Chisom Nwokwu delivers a beginner-friendly handbook for everyone interested in the fundamentals of data engineering. Whether you're interested in starting a rewarding, new career as a data analyst, data engineer, or data scientist, or seeking to expand your skillset in an existing engineering role, Nwokwu offers the technical and industry knowledge you need to succeed. The book explains: Database fundamentals, including relational and noSQL databases Data warehouses and data lakes Data pipelines, including info about batch and stream processing Data quality dimensions Data security principles, including data encryption Data governance principles and data framework Big data and distributed systems concepts Data engineering on the cloud Essential skills and tools for data engineering interviews and jobs Data Engineering for Beginners offers an easy-to-read roadmap on a seemingly complicated and intimidating subject. It addresses the topics most likely to cause a beginning data engineer to stumble, clearly explaining key concepts in an accessible way. You'll also find: A comprehensive glossary of data engineering terms Common and practical career paths in the data engineering industry An introduction to key cloud technologies and services you may encounter early in your data engineering career Perfect for practicing and aspiring data analysts, data scientists, and data engineers, Data Engineering for Beginners is an effective and reliable starting point for learning an in-demand skill. It's a powerful resource for everyone hoping to expand their data engineering Skillset and upskill in the big data era. |
|
|
Fundamentals of Software Engineering
2025-11-06
Dan Vega
– author
,
Nathaniel Schutta
– author
What do you need to know to be a successful software engineer? Undergraduate curricula and bootcamps may teach the fundamentals of algorithms and writing code, but they rarely cover topics vital to your career advancement. With this practical book, you'll learn the skills you need to succeed and thrive. Authors Nathaniel Schutta and Dan Vega guide your journey with everything from pointers to deep dives into specific topic areas that will help you build the skills that really matter as a software engineer. Understand what software engineering is—and why communication and other soft skills matter Learn the basics of software architecture and architectural drivers Use common and proven techniques to read and refactor code bases Understand the importance of testing and how to implement an effective test suite Learn how to reliably and repeatedly deploy software Know how to evaluate and choose the right solution or tool for a given problem |
|
|
Mastering Snowflake DataOps with DataOps.live: An End-to-End Guide to Modern Data Management
2025-10-30
Ronald L. Steelman Jr.
– author
This practical, in-depth guide shows you how to build modern, sophisticated data processes using the Snowflake platform and DataOps.live —the only platform that enables seamless DataOps integration with Snowflake. Designed for data engineers, architects, and technical leaders, it bridges the gap between DataOps theory and real-world implementation, helping you take control of your data pipelines to deliver more efficient, automated solutions. . You’ll explore the core principles of DataOps and how they differ from traditional DevOps, while gaining a solid foundation in the tools and technologies that power modern data management—including Git, DBT, and Snowflake. Through hands-on examples and detailed walkthroughs, you’ll learn how to implement your own DataOps strategy within Snowflake and maximize the power of DataOps.live to scale and refine your DataOps processes. Whether you're just starting with DataOps or looking to refine and scale your existing strategies, this book—complete with practical code examples and starter projects—provides the knowledge and tools you need to streamline data operations, integrate DataOps into your Snowflake infrastructure, and stay ahead of the curve in the rapidly evolving world of data management. What You Will Learn Explore the fundamentals of DataOps , its differences from DevOps, and its significance in modern data management Understand Git’s role in DataOps and how to use it effectively Know why DBT is preferred for DataOps and how to apply it Set up and manage DataOps.live within the Snowflake ecosystem Apply advanced techniques to scale and evolve your DataOps strategy Who This Book Is For Snowflake practitioners—including data engineers, platform architects, and technical managers—who are ready to implement DataOps principles and streamline complex data workflows using DataOps.live. |
|
|
Building Data Integration Solutions
2025-10-29
Jay Borthen
– author
Are you struggling to manage and make sense of the vast streams of data flowing into your organization? In today's data-driven world, the ability to effectively unify and organize disparate data sources is not just an advantage—it's a necessity. The challenge lies in navigating the complexities of data diversity, volume, and regulatory demands, which can overwhelm even the most seasoned data professionals. In this essential book, Jay Borthen offers a comprehensive guide to understanding the art of data integration. This book dives deep into the processes and strategies necessary for creating effective data pipelines that ensure consistency, accuracy, and accessibility of your data. Whether you're a novice looking to understand the basics or an experienced professional aiming to refine your skills, Borthen's insights and practical advice, grounded in real-world case studies, will empower you to transform your organization's data handling capabilities. Understand various data integration solutions and how different technologies can be employed Gain insights into the relationship between data integration and the overall data life cycle Learn to effectively design, set up, and manage data integration components within pipelines Acquire the knowledge to configure pipelines, perform data migrations, transformations, and more |
|
|
Apache Hudi: The Definitive Guide
2025-10-27
Rebecca Bilbro
– author
,
Prashant Wason
– author
,
Bhavani Sudha Saktheeswaran
– author
,
Shiyan Xu
– author
Overcome challenges in building transactional guarantees on rapidly changing data by using Apache Hudi. With this practical guide, data engineers, data architects, and software architects will discover how to seamlessly build an interoperable lakehouse from disparate data sources and deliver faster insights using your query engine of choice. Authors Shiyan Xu, Prashant Wason, Bhavani Sudha Saktheeswaran, and Rebecca Bilbro provide practical examples and insights to help you unlock the full potential of data lakehouses for different levels of analytics, from batch to interactive to streaming. You'll also learn how to evaluate storage choices and leverage built-in automated table optimizations to build, maintain, and operate production data applications. Understand the need for transactional data lakehouses and the challenges associated with building them Explore data ecosystem support provided by Apache Hudi for popular data sources and query engines Perform different write and read operations on Apache Hudi tables and effectively use them for various use cases, including batch and stream applications Apply different storage techniques and considerations such as indexing and clustering to maximize your lakehouse performance Build end-to-end incremental data pipelines using Apache Hudi for faster ingestion and fresher analytics |
|