talk-data.com

Topic: Cyber Security
Tags: cybersecurity, information_security, data_security, privacy
2078 activities tagged

Activity Trend: 297 peak/qtr (2020-Q1 to 2026-Q1)

Activities
2078 activities · Newest first

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!

In this episode we are joined by Christophe Beke, where we discuss the following:

Meta identifies networks pushing deceptive content likely generated by AI: Unpacking Meta's discovery of AI-generated deepfakes used in political scams and phishing attacks.
Klarna using GenAI to cut marketing costs by $10 million annually: Exploring how Klarna leverages generative AI to save on marketing costs and the broader impact of AI-generated content on branding and creativity.
Apple and OpenAI partnership rumors: Speculating on Apple's potential partnership with OpenAI and what it could mean for Siri and user privacy.
Even the Raspberry Pi is getting in on AI: Exciting new AI capabilities for the Raspberry Pi and the privacy questions they raise, plus discussion of Windows screenshot tools and more privacy concerns.
Nvidia teases next-gen "Rubin" AI chips: Nvidia's surprising early reveal of its next-gen AI chips. What's behind the move?
Marker: A new tool for converting PDFs to Markdown: Discovering the Marker library and its perks for converting PDFs while keeping valuable metadata.
Signal EU market exit: The privacy-focused app's decision to leave the EU market over regulatory challenges and the broader debate on privacy vs. security.
Reframing 'tech debt': Fresh perspectives on managing technical debt in the fast-paced world of software development.
Our newsletter: Don't miss out! Subscribe to our data and AI newsletter for the latest headlines and insights.

Trust is the foundation of any relationship, whether it's between friends or in business. But what happens when the entity you're asked to trust isn't human, but AI? How do you ensure that the AI systems you're developing are not only effective but also trustworthy? In a world where AI is increasingly making decisions that impact our lives, how can we distinguish between systems that genuinely serve our interests and those that might exploit our data? Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books, including his latest, A Hacker's Mind, as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc. In the episode, Richie and Bruce explore the definition of trust, the difference between trust and trustworthiness, how AI mimics social trust, AI and deception, the need for public non-profit AI to counterbalance corporate AI, monopolies in tech, understanding the application and potential consequences of AI misuse, AI regulation, the positive potential of AI, why AI is a political issue, and much more.

Links Mentioned in the Show:
Schneier on Security
Books by Bruce
[Course] AI Ethics
Related Episode: Building Trustworthy AI with Alexandra Ebert, Chief Trust Officer at MOSTLY AI
Sign up to RADAR: AI Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

Summary

Any software system that survives long enough will require some form of migration or evolution. When that system is responsible for the data layer, the process becomes more challenging. Sriram Panyam has been involved in several projects that required migration of large volumes of data in high traffic environments. In this episode he shares some of the valuable lessons that he learned about how to make those projects successful.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. It is trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journeys of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes, and new ones are landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support. Your host is Tobias Macey and today I'm interviewing Sriram Panyam about his experiences conducting large scale data migrations and the useful strategies that he learned in the process.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by sharing some of your experiences with data migration projects?

As you have gone through successive migration projects, how has that influenced the ways that you think about architecting data systems?

How would you categorize the different types and motivations of migrations?

How does the motivation for a migration influence the ways that you plan for and execute that work?

Can you talk us through one or two specific projects that you have taken part in?

Part 1: The Triggers

Section 1: Technical Limitations triggering Data Migration

Scaling bottlenecks: Performance issues with databases, storage, or network infrastructure
Legacy compatibility: Difficulties integrating with modern tools and cloud platforms
System upgrades: The need to migrate data during major software changes (e.g., a SQL Server version upgrade)

Section 2: Types of Migrations for Infrastructure Focus

Storage migration: Moving data between systems (HDD to SSD, SAN to NAS, etc.)
Data center migration: Physical relocation or consolidation of data centers
Virtualization migration: Moving from physical servers to virtual machines (or vice versa)

Section 3: Technical Decisions Driving Data Migrations

End-of-life support: Forced migration when older software or hardware is sunsetted
Security and compliance: Adopting new platforms with better security postures
Cost optimization: Potential savings of cloud vs. on-premises data centers

Part 2: Challenges (and Anxieties)

Section 1: Technical Challenges

Data transformation challenges: Schema changes, complex data mappings
Network bandwidth and latency: Transferring large datasets efficiently
Performance testing
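The mechanics behind several of these challenges come down to moving rows in bounded batches without taking the source offline. As a minimal sketch of that batched-copy pattern, with hypothetical table and column names and SQLite standing in for whatever engines a real project would involve:

import sqlite3

BATCH = 1000  # rows copied per transaction; tune to traffic and lock budget

def migrate(source: sqlite3.Connection, target: sqlite3.Connection) -> None:
    """Copy rows from source.users to target.users in keyed batches,
    recording progress so an interrupted run resumes where it left off."""
    target.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
    )
    target.execute(
        "CREATE TABLE IF NOT EXISTS checkpoint (name TEXT PRIMARY KEY, last_id INTEGER)"
    )
    row = target.execute(
        "SELECT last_id FROM checkpoint WHERE name = 'users'"
    ).fetchone()
    last_id = row[0] if row else 0
    while True:
        rows = source.execute(
            "SELECT id, email FROM users WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, BATCH),
        ).fetchall()
        if not rows:
            break
        with target:  # one transaction: copied rows and checkpoint land together
            target.executemany(
                "INSERT OR REPLACE INTO users (id, email) VALUES (?, ?)", rows
            )
            last_id = rows[-1][0]
            target.execute(
                "INSERT OR REPLACE INTO checkpoint VALUES ('users', ?)", (last_id,)
            )

Keyed pagination (WHERE id > ? ORDER BY id) keeps each batch cheap on large tables, and committing the checkpoint in the same transaction as the copied rows makes the run idempotent under retries, which matters in the high-traffic environments the episode describes.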

IBM z14 (3906) Technical Guide

This IBM® Redbooks® publication describes the new member of the IBM Z® family, IBM z14™. IBM z14 is the trusted enterprise platform for pervasive encryption, integrating data, transactions, and insights into the data. A data-centric infrastructure must always be available with 99.999% or better availability, have flawless data integrity, and be secured from misuse. It also must be an integrated infrastructure that can support new applications. Finally, it must have integrated capabilities that can provide new mobile capabilities with real-time analytics that are delivered by a secure cloud infrastructure. IBM z14 servers are designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows z14 servers to deliver a record level of capacity over the prior IBM Z platforms. In its maximum configuration, z14 is powered by up to 170 client-characterizable microprocessors (cores) running at 5.2 GHz. This configuration can run more than 146,000 million instructions per second (MIPS) and support up to 32 TB of client memory. The IBM z14 Model M05 is estimated to provide up to 35% more total system capacity than the IBM z13® Model NE1. This Redbooks publication provides information about IBM z14 and its functions, features, and associated software support. More information is offered in areas that are relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the IBM Z servers' functions and plan for their usage. It is not intended as an introduction to mainframes; readers are expected to be generally familiar with existing IBM Z technology and terminology.

IBM z14 ZR1 Technical Guide

This IBM® Redbooks® publication describes the new member of the IBM Z® family, IBM z14™ Model ZR1 (Machine Type 3907). It includes information about the Z environment and how it helps integrate data and transactions more securely, and can infuse insight for faster and more accurate business decisions. The z14 ZR1 is a state-of-the-art data and transaction system that delivers advanced capabilities, which are vital to any digital transformation. The z14 ZR1 is designed for enhanced modularity, in an industry standard footprint. A data-centric infrastructure must always be available with 99.999% or better availability, have flawless data integrity, and be secured from misuse. It also must be an integrated infrastructure that can support new applications. Finally, it must have integrated capabilities that can provide new mobile capabilities with real-time analytics that are delivered by a secure cloud infrastructure. IBM z14 ZR1 servers are designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows z14 ZR1 servers to deliver a record level of capacity over the previous IBM Z platforms. In its maximum configuration, z14 ZR1 is powered by up to 30 client-characterizable microprocessors (cores) running at 4.5 GHz. This configuration can run more than 29,000 million instructions per second and support up to 8 TB of client memory. The IBM z14 Model ZR1 is estimated to provide up to 54% more total system capacity than the IBM z13s® Model N20. This Redbooks publication provides information about IBM z14 ZR1 and its functions, features, and associated software support. More information is offered in areas that are relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the IBM Z servers' functions and plan for their usage. It is not intended as an introduction to mainframes; readers are expected to be generally familiar with IBM Z technology and terminology.

IBM z15 (8561) Technical Guide

This IBM® Redbooks® publication describes the features and functions of the latest member of the IBM Z® platform, the IBM z15™ (machine type 8561). It includes information about the IBM z15 processor design, I/O innovations, security features, and supported operating systems. The z15 is a state-of-the-art data and transaction system that delivers advanced capabilities, which are vital to any digital transformation. The z15 is designed for enhanced modularity in an industry standard footprint. This system excels at the following tasks:

Making use of multicloud integration services
Securing data with pervasive encryption
Accelerating digital transformation with agile service delivery
Transforming a transactional platform into a data powerhouse
Getting more out of the platform with IT Operational Analytics
Revolutionizing business processes
Blending open source and Z technologies

This book explains how this system uses new innovations and traditional Z strengths to satisfy growing demand for cloud, analytics, and open source technologies. With the z15 as the base, applications can run in a trusted, reliable, and secure environment that improves operations and lessens business risk.

Everything in the world has a price, including improving and scaling your data and AI functions. That means that at some point someone will question the ROI of your projects, and often these projects will be looked at under the lens of monetization. But how do you ensure that what you're working on is not only providing value to the business but also creating financial gain? What conditions need to be met to prove your project's success and turn value into cash? Vin Vashishta is the author of ‘From Data to Profit’ (Wiley), the playbook for monetizing data and AI. He built V-Squared from client 1 into one of the oldest data and AI consulting firms. For the last eight years, he has been recognized as a data and AI thought leader. Vin is a LinkedIn Top Voice and Gartner Ambassador. His background spans over 25 years in strategy, leadership, software engineering, and applied machine learning. Dr. Tiffany Perkins-Munn is on a mission to bring research, analytics, and data science to life. She earned her Ph.D. in Social-Personality Psychology with an interdisciplinary focus on Advanced Quantitative Methods. Her insights are the subject of countless lectures on psychology, statistics, and their real-world applications. As the Head of Data and Analytics for the innovative CDAO organization at J.P. Morgan Chase, her knack involves unraveling complex business problems through operational enhancements, augmented financials, and intuitive recruiting. After over two decades in the industry, she consistently forges robust relationships across the corporate spectrum, becoming one of the Top 10 Finalists in the Merrill Lynch Global Markets Innovation Program. In the episode, Richie, Vin, and Tiffany explore the challenges of monetizing data and AI projects, including how technical, organizational, and strategic factors affect your approach, the importance of aligning technical and business objectives to keep outputs focused on core business goals, how to assess your organization's data and AI maturity, examples of high data maturity businesses, data security and compliance, quick wins in data transformation and infrastructure, why long-term vision and strategy matter, and much more.

Links Mentioned in the Show:
Connect with Tiffany on LinkedIn
Connect with Vin on LinkedIn
Vin's Website
[Course] Data Governance Concepts
Related Episode: Scaling Enterprise Analytics with Libby Duane Adams, Chief Advocacy Officer and Co-Founder of Alteryx

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

IBM z15 (8562) Technical Guide

This IBM® Redbooks® publication describes the features and functions of the latest member of the IBM Z® platform, the IBM z15™ Model T02 (machine type 8562). It includes information about the IBM z15 processor design, I/O innovations, security features, and supported operating systems. The z15 is a state-of-the-art data and transaction system that delivers advanced capabilities, which are vital to any digital transformation. The z15 is designed for enhanced modularity in an industry standard footprint. This system excels at the following tasks:

Making use of multicloud integration services
Securing data with pervasive encryption
Accelerating digital transformation with agile service delivery
Transforming a transactional platform into a data powerhouse
Getting more out of the platform with IT Operational Analytics
Revolutionizing business processes
Blending open source and Z technologies

This book explains how this system uses new innovations and traditional Z strengths to satisfy growing demand for cloud, analytics, and open source technologies. With the z15 as the base, applications can run in a trusted, reliable, and secure environment that improves operations and lessens business risk.

Database Management Systems by Pearson

Express Learning is a series of books designed as quick reference guides to important undergraduate computer courses. The organized and accessible format of these books allows students to learn important concepts in an easy-to-understand, question-and-answer format. These portable learning tools have been designed as one-stop references for students to understand and master the subjects by themselves.

Features –

• Designed as a student-friendly self-learning guide. The book is written in a clear, concise, and lucid manner.
• Easy-to-understand question-and-answer format.
• Includes previously asked as well as new questions, organized in chapters.
• All types of questions, including MCQs and short and long questions, are covered.
• Solutions to numerical questions asked at examinations are provided.
• All ideas and concepts are presented with clear examples.
• Text is well structured and well supported with suitable diagrams.
• Inter-chapter dependencies are kept to a minimum.

Book Contents –

1: Database System
2: Conceptual Modelling
3: Relational Model
4: Relational Algebra and Calculus
5: Structured Query Language
6: Relational Database Design
7: Data Storage and Indexing
8: Query Processing and Optimization
9: Introduction to Transaction Processing
10: Concurrency Control Techniques
11: Database Recovery System
12: Database Security
13: Database System Architecture
14: Data Warehousing, OLAP, and Data Mining
15: Information Retrieval
16: Miscellaneous Questions
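To make the transaction-processing and recovery material of chapters 9 through 11 concrete, here is a minimal, self-contained illustration in Python; SQLite is used only because it ships with the standard library, and the account schema is invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

# Atomicity: a transfer either applies both updates or neither.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE account SET balance = balance - 150 WHERE id = 1")
        (balance,) = conn.execute(
            "SELECT balance FROM account WHERE id = 1"
        ).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")  # forces a rollback
        conn.execute("UPDATE account SET balance = balance + 150 WHERE id = 2")
except ValueError:
    pass

# The aborted transfer left no partial update behind.
print(conn.execute("SELECT id, balance FROM account ORDER BY id").fetchall())
# -> [(1, 100), (2, 0)]

The rollback on failure is exactly the all-or-nothing guarantee the recovery chapters formalize; the concurrency-control chapter then asks what happens when two such transfers run at once.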

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society. Dive into conversations that flow like your morning coffee, where industry insights meet laid-back banter. Whether you're a data aficionado or just curious about the digital age, pull up a chair and let's explore the heart of data, unplugged style!

Stack Overflow and OpenAI Deal Controversy: Discussing the partnership controversy, with users protesting the lack of an opt-out option and how this could reshape the platform. Look into Phind here.
Apple and OpenAI Rumors: Could ChatGPT be the new Siri? Examining rumors of ChatGPT potentially replacing Siri, and Apple's AI strategy compared to Microsoft's MAI-1. Check out more community opinions here.
Hello GPT-4o: Exploring the new era with OpenAI's GPT-4o, which blends video, text, and audio for more dynamic human-AI interactions. Discussing AI's challenges under the European AI Act and ChatGPT's use in daily life and dating apps like Bumble.
Claude Takes Europe: Claude 3 is now available in the EU. How does it compare to ChatGPT in coding and conversation?
ElevenLabs' Music Generation AI: A look at ElevenLabs' AI for generating music and the broader AI music landscape. How are these algorithms transforming music creation? Check out the AI Song Contest here.
Google Cloud's Big Oops with UniSuper: Unpacking the shocking story of how Google Cloud accidentally wiped out UniSuper's account. What does this mean for data security and redundancy strategies?
The Great CLI Debate: Is Python really the right choice for CLI tools? We spark the debate over Python vs. Go and Rust in building efficient CLI tools.

Peter Schroeder, Founder of Telzio, joins us on this latest podcast episode of Data Unchained to talk about the pain points in tech his company Telzio is trying to solve for, why security is the first thing he thinks of whenever making changes to his platform, and the importance of meeting the customer wherever and whenever they are, using AI.

#data #decentralized #podcast #datascience #datasets #ai #artificialintelligence #datastorage #security #dj #podcaster #solutions

Cyberpunk by jiglr | https://soundcloud.com/jiglrmusic Music promoted by https://www.free-stock-music.com Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/deed.en_US Hosted on Acast. See acast.com/privacy for more information.

XML and Related Technologies

About The Author – Atul Kahate has over 13 years of experience in Information Technology in India and abroad in various capacities. He holds a Bachelor of Science degree in Statistics and a Master of Business Administration in Computer Systems. He has authored 17 highly acclaimed books on various areas of Information Technology, several of which are used as course textbooks or sources of reference in universities, colleges, and IT companies all over the world. Atul has been writing newspaper articles about cricket since the age of 12. He has also authored two books on cricket and has written over 2000 articles on IT and cricket. Besides technology, he has a deep interest in teaching, music, and cricket. He has conducted training programs on a wide range of technologies in a number of educational institutions and IT organisations. Some of the prestigious institutions where he has conducted training programs include IIT, Symbiosis, I2IT, MET, Indira Institute of Management, Fergusson College, MIT, VIIT, and Walchand Government Engineering College, besides numerous other colleges in India.

Book Contents –
1. Introduction to XML
2. XML Syntaxes
3. Document Type Definitions
4. XML Schemas
5. Cascading Style Sheets
6. Extensible Stylesheet Language
7. XML and Java
8. XML and ASP.NET
9. Web Services and AJAX
10. XML Security
Appendix: Miscellaneous Topics
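As a small companion to the parsing chapters, the following uses Python's standard-library ElementTree on an invented document; it illustrates the well-formedness rules (a single root, properly nested elements, quoted attribute values) that the opening chapters examine:

import xml.etree.ElementTree as ET

# A well-formed XML document: one root element, nested children,
# attribute values quoted. The catalog data is invented for illustration.
doc = """
<catalog>
  <book isbn="978-81-317-0000-0">
    <title>XML and Related Technologies</title>
    <author>Atul Kahate</author>
  </book>
</catalog>
"""

root = ET.fromstring(doc)
for book in root.findall("book"):
    print(book.get("isbn"), "-", book.findtext("title"))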

Rapid change seems to be the new norm within the data and AI space, and because the ecosystem is constantly changing, it can be tricky to keep up. Fortunately, any self-respecting venture capitalist looking into data and AI will stay on top of what's changing and where the next big breakthroughs are likely to come from. We all want to know which important trends are emerging and how we can take advantage of them, so why not learn from a leading VC. Tomasz Tunguz is a General Partner at Theory Ventures, a $235m early-stage venture capital firm. He blogs at tomtunguz.com and co-authored Winning with Data. He has worked or works with Looker, Kustomer, Monte Carlo, Dremio, Omni, Hex, Spot, Arbitrum, Sui, and many others. He was previously the product manager for Google's social media monetization team, including the Google-MySpace partnership, and managed the launches of AdSense into six new markets in Europe and Asia. Before Google, Tunguz developed systems for the Department of Homeland Security at Appian Corporation. In the episode, Richie and Tom explore trends in generative AI, the impact of AI on professional fields, cloud+local hybrid workflows, data security, changes in data warehousing through the use of integrated AI tools, the future of business intelligence and data analytics, and the challenges and opportunities surrounding AI in the corporate sector. You'll also get to discover Tom's picks for the hottest new data startups.

Links Mentioned in the Show:
Tom's Blog
Theory Ventures
Article: What Air Canada Lost In ‘Remarkable’ Lying AI Chatbot Case
[Course] Implementing AI Solutions in Business
Related Episode: Making Better Decisions using Data & AI with Cassie Kozyrkov, Google's First Chief Decision Scientist
Sign up to RADAR: AI Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.

Summary

Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human".

Interview

Introduction
How did you get involved in machine learning?
Can you start by unpacking the idea of "human-like" AI? How does that contrast with the conception of "AGI"?
The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models?
What are the opportunities and limitations of causal modeling techniques for generalized AI models?
As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
What are the practical/architectural methods necessary to build more cognitive AI systems?
How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?
What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems?
When is cognitive AI the wrong choice?
What do you have planned for the future of cognitive AI applications at Aigo?

Contact Info

LinkedIn
Website

Parting Question

From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

Aigo.ai
Artificial General Intelligence
Cognitive AI
Knowledge Graph
Causal Modeling
Bayesian Statistics
Thinking Fast & Slow by Daniel Kahneman (affiliate link)
Agent-Based Modeling
Reinforcement Learning
DARPA 3 Waves of AI presentation
Why Don't We Have AGI Yet? whitepaper
Concepts Is All You Need whitepaper
Helen Keller
Stephen Hawking

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Mastering MySQL Administration: High Availability, Security, Performance, and Efficiency

This book is your one-stop resource on MySQL database installation and server management for administrators. It covers installation, upgrades, monitoring, high availability, disaster recovery, security, performance, and troubleshooting. You will become fluent in MySQL 8.2, the latest version of the highly scalable and robust relational database system. With a hands-on approach, the book offers step-by-step guidance on installing, upgrading, and establishing robust high availability and disaster recovery capabilities for MySQL databases. It also covers high availability with InnoDB and NDB clusters, MySQL Router and enterprise MySQL tools, along with robust security design and performance techniques. Throughout, the authors punctuate concepts with examples taken from their experience with large-scale implementations at companies such as Meta and American Airlines, anchoring this practical guide to MySQL 8.2 administration in the real world.

What You Will Learn

Understand MySQL architecture and best practices for administration of MySQL server
Configure high availability, replication, and disaster recovery with the InnoDB and NDB engines
Back up and restore with MySQL utilities and tools, and configure the database for zero data loss
Troubleshoot with steps for real-world critical errors and detailed solutions

Who This Book Is For

Technical professionals, database administrators, developers, and engineers seeking to optimize MySQL databases for scale, security, and performance
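As a taste of the monitoring side of that material, here is a sketch of a replica health probe. It assumes the mysql-connector-python driver and MySQL 8.0.22 or later, where SHOW REPLICA STATUS replaced SHOW SLAVE STATUS; the host and credentials are placeholders:

# Sketch of a replica health probe; host and credentials are placeholders.
# Requires: pip install mysql-connector-python
import mysql.connector

conn = mysql.connector.connect(
    host="replica.example.com", user="monitor", password="change-me"
)
cur = conn.cursor(dictionary=True)  # rows come back as column-name dicts
cur.execute("SHOW REPLICA STATUS")
status = cur.fetchone()

if status is None:
    print("this server is not configured as a replica")
else:
    print("IO thread running: ", status["Replica_IO_Running"])
    print("SQL thread running:", status["Replica_SQL_Running"])
    print("seconds behind source:", status["Seconds_Behind_Source"])
conn.close()

A probe like this, run on a schedule, is the simplest building block of the alerting the book's high availability chapters describe: both threads must report Yes and lag must stay within your recovery objectives.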

IBM Storage FlashSystem 9500 Product Guide for IBM Storage Virtualize 8.6

This IBM® Redpaper® Product Guide describes the IBM Storage FlashSystem® 9500 solution, which is a next-generation IBM Storage FlashSystem control enclosure. It combines the performance of flash and a Non-Volatile Memory Express (NVMe)-optimized architecture with the reliability and innovation of IBM FlashCore® technology and the rich feature set and high availability (HA) of IBM Storage Virtualize. Often, applications exist that are foundational to the operations and success of an enterprise. These applications might function as prime revenue generators, guide or control important tasks, or provide crucial business intelligence, among many other jobs. Whatever their purpose, they are mission critical to the organization. They demand the highest levels of performance, functionality, security, and availability. They also must be protected against the newer threat of cyberattacks. To support such mission-critical applications, enterprises of all types and sizes turn to the IBM Storage FlashSystem 9500. IBM Storage FlashSystem 9500 provides a rich set of software-defined storage (SDS) features that are delivered by IBM Storage Virtualize, including the following examples:

Data reduction and deduplication
Dynamic tiering
Thin provisioning
Snapshots
Cloning
Replication and data copy services
Cyber resilience
Transparent Cloud Tiering
IBM HyperSwap®, including 3-site replication for HA
Scale-out and scale-up configurations that further enhance capacity and throughput for better availability

This Redpaper applies to IBM Storage Virtualize V8.6.

Summary

Generative AI promises to accelerate the productivity of human collaborators. Currently the primary way of working with these tools is through a conversational prompt, which is often cumbersome and unwieldy. In order to simplify the integration of AI capabilities into developer workflows, Tsavo Knott helped create Pieces, a powerful collection of tools that complements the tools that developers already use. In this episode he explains the data collection and preparation process, the collection of model types and sizes that work together to power the experience, and how to incorporate it into your workflow to act as a second brain.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management. Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free! Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino. Your host is Tobias Macey and today I'm interviewing Tsavo Knott about Pieces, a personal AI toolkit to improve the efficiency of developers.

Interview

Introduction
How did you get involved in machine learning?
Can you describe what Pieces is and the story behind it?
The past few months have seen an endless series of personalized AI tools launched. What are the features and focus of Pieces that might encourage someone to use it over the alternatives?
Model selections
Architecture of the Pieces application
Local vs. hybrid vs. online models
Model update/delivery process
Data preparation/serving for models in the context of the Pieces app
Application of AI to developer workflows
Types of workflows that people are building with Pieces
What are the most interesting, innovative, or unexpected ways that you have seen Pieces used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Pieces?
When is Pieces the wrong choice?
What do you have planned for the future of Pieces?

Contact Info

LinkedIn

Parting Question

From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links

Pieces
NPU == Neural Processing Unit
Tensor Chip
LoRA == Low Rank Adaptation
Generative Adversarial Networks
Mistral
Emacs
Vim
NeoVim
Dart
Flutter
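The local vs. hybrid vs. online question in the interview outline is ultimately a routing decision. As a generic sketch of that pattern, with invented names and thresholds rather than anything taken from Pieces itself:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    run: Callable[[str], str]  # stand-in for a real inference call

def make_router(local: Model, remote: Model, max_local_words: int = 512):
    """Send small prompts to the on-device model, large ones to the
    online model, and fall back to local if the network call fails."""
    def route(prompt: str) -> str:
        if len(prompt.split()) <= max_local_words:
            return local.run(prompt)
        try:
            return remote.run(prompt)
        except ConnectionError:
            return local.run(prompt)  # degraded but works offline
    return route

# Toy stand-ins for actual model backends:
router = make_router(
    Model("local", lambda p: "[local] " + p[:40]),
    Model("remote", lambda p: "[remote] " + p[:40]),
)
print(router("summarize this code snippet"))  # short prompt stays on-device

The appeal of the hybrid arrangement the episode describes is visible even in this toy: latency-sensitive requests never leave the machine, and remote capacity is reserved for work the local model cannot handle.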

Databases are ubiquitous, and you don't need to be a data practitioner to know that all data everywhere is stored in a database. Or is it? While the majority of data around the world lives in a database, the data that helps run the heart of our operating systems, the core functions of our computers, is not stored in the same place as everything else. This is because database storage sits 'above' the operating system, requiring the OS to run before the databases can be used. But what if the OS was built 'on top' of a database? What difference could this fundamental change make to how we use computers? Mike Stonebraker is a distinguished computer scientist known for his foundational work in database systems; he is also currently CTO and Co-Founder at DBOS. His extensive career includes significant contributions through academic prototypes and commercial startups, leading to the creation of several pivotal relational database companies such as Ingres Corporation, Illustra, Paradigm4, StreamBase Systems, Tamr, Vertica, and VoltDB. Stonebraker's role as chief technical officer at Informix and his influential research earned him the prestigious 2014 Turing Award. Stonebraker's professional journey spans two major phases: initially at the University of California, Berkeley, focusing on relational database management systems like Ingres and Postgres, and later, from 2001, at the Massachusetts Institute of Technology (MIT), where he pioneered advanced data management techniques including C-Store, H-Store, SciDB, and DBOS. He remains a professor emeritus at UC Berkeley and continues to influence the field as an adjunct professor at MIT's Computer Science and Artificial Intelligence Laboratory. Stonebraker is also recognized for his editorial work on the book "Readings in Database Systems." In the episode, Richie and Mike explore the success of PostgreSQL, the evolution of SQL databases, the shift towards cloud computing and what that means in practice when migrating to the cloud, the impact of disaggregated storage, software and serverless trends, the role of databases in facilitating new data and AI trends, DBOS and its advantages for security, and much more.

Links Mentioned in the Show:
DBOS
Paper: What Goes Around Comes Around
[Course] Understanding Cloud Computing
Related Episode: Scaling Enterprise Analytics with Libby Duane Adams, Chief Advocacy Officer and Co-Founder of Alteryx
Rewatch sessions from RADAR: The Analytics Edition

New to DataCamp? Learn on the go using the DataCamp mobile app. Empower your business with world-class data and AI skills with DataCamp for business.
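The episode doesn't show DBOS code, but the core idea, that keeping execution state in the database lets a program survive a crash and resume, can be sketched in a few lines. Everything here (the table, the step runner, the step names) is invented for illustration:

import sqlite3

def run_durably(conn: sqlite3.Connection, workflow: str, steps) -> None:
    """Run named steps at most once each, recording completion in the
    database so a restarted process skips work that already finished."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS done ("
        "workflow TEXT, step TEXT, PRIMARY KEY (workflow, step))"
    )
    for name, fn in steps:
        seen = conn.execute(
            "SELECT 1 FROM done WHERE workflow = ? AND step = ?",
            (workflow, name),
        ).fetchone()
        if seen:
            continue  # this step's completion survived a previous run
        fn()
        with conn:  # completion record is durable before the next step
            conn.execute("INSERT INTO done VALUES (?, ?)", (workflow, name))

conn = sqlite3.connect("state.db")  # a file, so state outlives the process
run_durably(conn, "nightly", [
    ("extract", lambda: print("extracting")),
    ("load", lambda: print("loading")),
])

Running the script twice executes each step only once; that resume-after-failure behavior is the flavor of guarantee Stonebraker describes when the operating layer itself sits on top of a database.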

This presentation will discuss the challenges of designing cybersecurity into new (i.e., greenfield) OT systems and how "Security by Design" principles can and should be applied by the organizations that design and integrate these systems. It will also discuss the importance of understanding the security capabilities of the products being integrated into OT systems, as well as proper design documentation, design reviews, risk assessments, cyber acceptance testing (e.g., Cyber FAT and SAT), and the integration of technology to monitor, maintain, and manage security during operations.