talk-data.com talk-data.com

Topic

Cyber Security

cybersecurity information_security data_security privacy

2078

tagged

Activity Trend

297 peak/qtr
2020-Q1 2026-Q1

Activities

2078 activities · Newest first

Data Globalization at Conde Nast Using Delta Sharing

Databricks has been an essential part of the Conde Nast architecture for the last few years. Prior to building our centralized data platform, “evergreen,” we had similar challenges as many other organizations; siloed data, duplicated efforts for engineers, and a lack of collaboration between data teams. These problems led to mistrust in data sets and made it difficult to scale to meet the strategic globalization plan we had for Conde Nast.

Over the last few years we have been extremely successful in building a centralized data platform on Databricks in AWS, fully embracing the lakehouse vision from end-to-end. Now, our analysts and marketers can derive the same insights from one dataset and data scientists can use the same datasets for use cases such as personalization, subscriber propensity models, churn models and on-site recommendations for our iconic brands.

In this session, we’ll discuss how we plan to incorporate Unity Catalog and Delta Sharing as the next phase of our globalization mission. The evergreen platform has become the global standard for data processing and analytics at Conde. In order to manage the worldwide data and comply with GDPR requirements, we need to make sure data is processed in the appropriate region and PII data is handled appropriately. At the same time, we need to have a global view of the data to allow us to make business decisions at the global level. We’ll talk about how delta sharing allows us a simple, secure way to share de-identified datasets across regions in order to make these strategic business decisions, while complying with security requirements. Additionally, we’ll discuss how Unity Catalog allows us to secure, govern and audit these datasets in an easy and scalable manner.

Talk by: Zachary Bannor

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

Essential Data Security Strategies for the Modern Enterprise Data Architecture

Balancing critical data requirements is a 24-7 task for enterprise-level organizations that must straddle the need to open specific gates to enable self-service data access while closing other access points to maintain internal and external compliance. Data breaches can cost U.S. businesses an average of $9.4 million per occurrence; ignoring this leaves organizations vulnerable to severe losses and crippling costs.

The 2022 Gartner Hype Cycle for Data Security reports that more and more enterprises are modernizing their data architecture with cloud and technology partners to help them collect, store and manage business data; a trend that does not appear to be letting up. According to Gartner®, “by 2025, 30% of enterprises will have adopted the Broad Data Security Platform (bDSP), up from less than 10% in 2021, due to the pent-up demand for higher levels of data security and the rapid increase in product capabilities."

Moving to both a modern data architecture and data-driven culture sets enterprises on the right trajectory for growth, but it’s important to keep in mind individual public cloud platforms are not guaranteed to protect and secure data. To solve this, Privacera pioneered the industry’s first open-standards-based data security platform that integrates privacy and compliance across multiple cloud services.

During this presentation, we will discuss: - Why today’s modern data architecture needs a DSP that works across the entire data ecosystem; Essential DSP prescriptive measures and adoption strategies. - Why faster and more responsible access to data insights helps reduce cost, increases productivity, expedites decision making, and leads to exponential growth.

Talk by: Piet Loubser

Here’s more to explore: Data, Analytics, and AI Governance: https://dbricks.co/44gu3YU

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

Testing Generative AI Models: What You Need to Know

Generative AI shows incredible promise for enterprise applications. The explosion of generative AI can be attributed to the convergence of several factors. Most significant is that the barrier to entry has dropped for AI application developers through customizable prompts (few-shot learning), enabling laypeople to generate high-quality content. The flexibility of models like ChatGPT and DALLE-2 have sparked curiosity and creativity about new applications that they can support. The number of tools will continue to grow in a manner similar to how AWS fueled app development. But excitement must be tampered by concerns about new risks imposed to business and society. Increased capability and adoption also increase risk exposure. As organizations explore creative boundaries of generative models, measures to reduce risk must be put in place. However, the enormous size of the input space and inherent complexity make this task more challenging than traditional ML models.

In this session, we summarize the new risks introduced by the new class of generative foundation models through several examples, and compare how these risks relate to the risks of mainstream discriminative models. Steps can be taken to reduce the operational risk, bias and fairness issues, and privacy and security of systems that leverage LLM for automation. We’ll explore model hallucinations, output evaluation, output bias, prompt injection, data leakage, stochasticity, and more. We’ll discuss some of the larger issues common to LLMs and show how to test for them. A comprehensive, test-based approach to generative AI development will help instill model integrity by proactively mitigating failure and the associated business risk.

Talk by: Yaron Singer

Here’s more to explore: LLM Compact Guide: https://dbricks.co/43WuQyb Big Book of MLOps: https://dbricks.co/3r0Pqiz

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

Activate Your Lakehouse with Unity Catalog

Building a lakehouse is straightforward today thanks to many open source technologies and Databricks. However, it can be taxing to extract value from lakehouses as they grow without robust data operations. Join us to learn how YipitData uses the Unity Catalog to streamline data operations and discover best practices to scale your own Lakehouse. At YipitData, our 15+ petabyte Lakehouse is a self-service data platform built with Databricks and AWS, supporting analytics for a data team of over 250. We will share how leveraging Unity Catalog accelerates our mission to help financial institutions and corporations leverage alternative data by:

  • Enabling clients to universally access our data through a spectrum of channels, including Sigma, Delta Sharing, and multiple clouds
  • Fostering collaboration across internal teams using a data mesh paradigm that yields rich insights
  • Strengthening the integrity and security of data assets through ACLs, data lineage, audit logs, and further isolation of AWS resources
  • Reducing the cost of large tables without downtime through automated data expiration and ETL optimizations on managed delta tables

Through our migration to Unity Catalog, we have gained tactics and philosophies to seamlessly flow our data assets internally and externally. Data platforms need to be value-generating, secure, and cost-effective in today's world. We are excited to share how Unity Catalog delivers on this and helps you get the most out of your lakehouse.

Talk by: Anup Segu

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

US Army Corp of Engineers Enhanced Commerce & National Sec Through Data-Driven Geospatial Insight

The US Army Corps of Engineers (USACE) is responsible for maintaining and improving nearly 12,000 miles of shallow-draft (9'-14') inland and intracoastal waterways, 13,000 miles of deep-draft (14' and greater) coastal channels, and 400 ports, harbors, and turning basins throughout the United States. Because these components of the national waterway network are considered assets to both US commerce and national security, they must be carefully managed to keep marine traffic operating safely and efficiently.

The National DQM Program is tasked with providing USACE a nationally standardized remote monitoring and documentation system across multiple vessel types with timely data access, reporting, dredge certifications, data quality control, and data management. Government systems have often lagged commercial systems in modernization efforts, and the emergence of the cloud and Data Lakehouse Architectures have empowered USACE to successfully move into the modern data era.

This session incorporates aspects of these topics: Data Lakehouse Architecture: Delta Lake, platform security and privacy, serverless, administration, data warehouse, Data Lake, Apache Iceberg, Data Mesh GIS: H3, MOSAIC, spatial analysis data engineering: data pipelines, orchestration, CDC, medallion architecture, Databricks Workflows, data munging, ETL/ELT, lakehouses, data lakes, Parquet, Data Mesh, Apache Spark™ internals. Data Streaming: Apache Spark Structured Streaming, real-time ingestion, real-time ETL, real-time ML, real-time analytics, and real-time applications, Delta Live Tables. ML: PyTorch, TensorFlow, Keras, scikit-learn, Python and R ecosystems data governance: security, compliance, RMF, NIST data sharing: sharing and collaboration, delta sharing, data cleanliness, APIs.

Talk by: Jeff Mroz

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

Advanced Governance with Collibra on Databricks

A data lake is only as good as its governance. Understanding what data you have, performing classification, defining/applying security policies and auding how it's used is the data governance lifecycle. Unity Catalog with its rich ecosystem of supported tools simplifies all stages of the data governance lifecycle. Learn how metadata can be hydrated, into Collibra directly from Unity Catalog. Once the metadata is available in Collibra we will demonstrate classification, defining security policies on the data and pushing those policies into Databricks. All access and usage of data is automatically audited with real time lineage provided in the data explorer as well as system tables.

Talk by: Leon Eller and Antonio Castelo

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksin

Best Practices for Setting Up Databricks SQL at Enterprise Scale

To learn more, visit the Databricks Security and Trust Center: https://www.databricks.com/trust

In this session, we will talk about the best practices for setting up Databricks to run at large enterprise scale with thousands of users, departmental security and governance, and end-to-end lineage from ingestion to BI tools. We’ll showcase the power of Unity Catalog and Databricks SQL as the core of your modern data stack and how to achieve both data, environment, and financial governance while empowering your users to quickly find and access the data they need.

Talk by: Siddharth Bhai, Paul Roome, Jeremy Lewallen, and Samrat Ray

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksin

Sponsored by: Immuta | Building an End-to-End MLOps Workflow with Automated Data Access Controls

WorldQuant Predictive’s customers rely on our predictions to understand how changing world and market conditions will impact decisions to be made. Speed is critical, and so are accuracy and resilience. To that end, our data team built a modern, automated MLOps data flow using Databricks as a key part of our data science tooling, and integrated with Immuta to provide automated data security and access control.

In this session, we will share details of how we used policy-as-code to support our globally distributed data science team with secure data sharing, testing, validation and other model quality requirements. We will also discuss our data science workflow that uses Databricks-hosted MLflow together with an Immuta-backed custom feature store to maximize speed and quality of model development through automation. Finally, we will discuss how we deploy the models into our customized serverless inference environment, and how that powers our industry solutions.

Talk by: Tyler Ditto

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

What’s New With Platform Security and Compliance in the Databricks Lakehouse Platform

At Databricks, we know that data is one of your most valuable assets and alwasys must be protected, that’s why security is built into every layer of the Databricks Lakehouse Platform. Databricks provides comprehensive security to protect your data and workloads, such as encryption, network controls, data governance and auditing.

In this session, you will hear from Databricks product leaders on the platform security and compliance progress made over the past year, with demos on how administrators can start protecting workloads fast. You will also learn more about the roadmap that delivers on the Databricks commitment to you as the most trusted, compliant, and secure data and AI platform with the Databricks Lakehouse.

Talk by: Samrat Ray and David Veuve

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksin

Sponsored by: Privacera | Applying Advanced Data Security Governance with Databricks Unity Catalog

This talk explores the application of advanced data security and access control integrated with Databricks Unity Catalog through Privacera. Learn about Databricks with Unity Catalog and Privacera capabilities and real-world use cases demonstrating data security and access control best practices and how to successfully plan for and implement enterprise data security governance at scale across your entire Databricks Lakehouse.

Talk by: Don Bosco Durai

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

Lakehouse Architecture to Advance Security Analytics at the Department of State

In 2023, the Department of State surged forward on implementing a lakehouse architecture to get faster, smarter, and more effective on cybersecurity log monitoring and incident response. In addition to getting us ahead of federal mandates, this approach promises to enable advanced analytics and machine learning across our highly federated global IT environment while minimizing costs associated with data retention and aggregation.

This talk will include a high-level overview of the technical and policy challenge and a technical deeper dive on the tactical implementation choices made. We’ll share lessons learned related to governance and securing organizational support, connecting between multiple cloud environments, and standardizing data to make it useful for analytics. And finally, we’ll discuss how the lakehouse leverages Databricks in multicloud environments to promote decentralized ownership of data while enabling strong, centralized data governance practices.

Talk by: Timothy Ahrens and Edward Moe

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

Security Best Practices and Tools to Build a Secure Lakehouse

To learn more, visit the Databricks Security and Trust Center: https://www.databricks.com/trust

As you embark on a lakehouse project or evolve your existing data lake, you may want to improve your security posture and take advantage of new security features—there may even be a security team at your company that demands it. Databricks has worked with thousands of customers to securely deploy the Databricks Platform to meet their architecture and security requirements. While many organizations deploy security differently, we have found a common set of guidelines and features among organizations that require a high level of security. In this session, we will detail the security features and architectural choices frequently used by these organizations and walk through a series of threat models for the risks that most concern security teams. While this session is great for people who already know Databricks—don’t worry—that knowledge isn’t required. You will walk away with a full handbook detailing all the concepts, configurations, check lists, security analysis tool (SAT), and security reference architecture (SRA) automation scripts from the session so that you can make immediate progress when you get back to the office. Security can be hard, but we’ve collected the hard work already done by some of the best in the industry, and built tools, to make it easier. Come learn how. See how good looks like via a demo.

Talk by: Arun Pamulapati and Anindita Mahapatra

Connect with us: Website: https://databricks.com Twitter: https://twitter.com/databricks LinkedIn: https://www.linkedin.com/company/databricks Instagram: https://www.instagram.com/databricksinc Facebook: https://www.facebook.com/databricksinc

Pro Power BI Architecture: Development, Deployment, Sharing, and Security for Microsoft Power BI Solutions

This book provides detailed guidance around architecting and deploying Power BI reporting solutions, including help and best practices for sharing and security. You’ll find chapters on dataflows, shared datasets, composite model and DirectQuery connections to Power BI datasets, deployment pipelines, XMLA endpoints, and many other important features related to the overall Power BI architecture that are new since the first edition. You will gain an understanding of what functionality each of the Power BI components provide (such as Dataflow, Shared Dataset, Datamart, thin reports, and paginated reports), so that you can make an informed decision about what components to use in your solution. You will get to know the pros and cons of each component, and how they all work together within the larger Power BI architecture. Commonly encountered problems you will learn to handle include content unexpectedly changing while users are in the process of creating reports and building analyses, methods of sharing analyses that don’t cover all the requirements of your business or organization, and inconsistent security models. Detailed examples help you to understand and choose from among the different methods available for sharing and securing Power BI content so that only intended recipients can see it. The knowledge provided in this book will allow you to choose an architecture and deployment model that suits the needs of your organization. It will also help ensure that you do not spend your time maintaining your solution, but on using it for its intended purpose: gaining business value from mining and analyzing your organization’s data. What You Will Learn Architect Power BI solutions that are reliable and easy to maintain Create development templates and structures in support of reusability Set up and configure the Power BI gateway as a bridge between on-premises data sourcesand the Power BI cloud service Select a suitable connection type—Live Connection, DirectQuery, Scheduled Refresh, or Composite Model—for your use case Choose the right sharing method for how you are using Power BI in your organization Create and manage environments for development, testing, and production Secure your data using row-level and object-level security Save money by choosing the right licensing plan Who This Book Is For Data analysts and developers who are building reporting solutions around Power BI, as well as architects and managers who are responsible for the big picture of how Power BI meshes with an organization’s other systems, including database and data warehouse systems.

Roya is a research scientist who is passionate about advancing artificial intelligence technologies. She is particularly interested in computer vision and pattern recognition and has developed machine learning solutions for various applications, including healthcare, assistive technology, and security. Roya is also an advocate for women's rights. She is a Google Women's Techmaker Ambassa…

Today I’m chatting with Peter Everill, who is the Head of Data Products for Analytics and ML Designs at the UK grocery brand, Sainsbury’s. Peter is also a founding member of the Data Product Leadership Community. Peter shares insights on why his team spends so much time conducting discovery work with users, and how that leads to higher adoption and in turn, business value. Peter also gives us his in-depth definition of a data product, including the three components of a data product and the four types of data products he’s encountered. He also shares the 8-step product management methodology that his team uses to develop data products that truly deliver value to end users. Pete also shares the #1 resource he would invest in right now to make things better for his team and their work.

Highlights/ Skip to:

I introduce Peter, who I met through the Data Product Leadership Community (00:37) What the data team structure at Sainsbury’s looks like and how Peter wound up working there (01:54) Peter shares the 8-step product management methodology that has been developed by his team and where in that process he spends most of his time (04:54) How involved the users are in Peter’s process when it comes to developing data products (06:13) How Peter was able to ensure that enough time is taken on discovery throughout the design process (10:03) Who on Peter’s team is doing the core user research for product development (14:52) Peter shares the three things that he feels make data product teams successful (17:09) How Peter defines a data product, including the three components of a data product and the four types of data products (18:34) Peter and I discuss the importance of spending time in discovery (24:25) Peter explains why he measures reach and impact as metrics of success when looking at implementation (26:18) How Peter solves for the gap when handing off a product to the end users to implement and adopt (29:20) How Peter hires for data product management roles and what he looks for in a candidate (33:31) Peter talks about what roles or skills he’d be looking for if he was to add a new person to his team (37:26)

Quotes from Today’s Episode “I’m a big believer that the majority of analytics in its simplest form is improving business processes and decisions. A big part of our discovery work is that we align to business areas, business divisions, or business processes, and we spend time in that discovery space actually mapping the business process. What is the goal of this process? Ultimately, how does it support the P&L?” — Peter Everill (12:29)

“There’s three things that are successful for any organization that will make this work and make it stick. The first is defining what you mean by a data product. The second is the role of a data product manager in the organization and really being clear what it is that they do and what they don’t do. … And the third thing is their methodology, from discovery through to delivery. The more work you put upfront defining those and getting everyone trained and clear on that, I think the quicker you’ll get to an organization that’s really clear about what it’s delivering, how it delivers, and who does what.” – Peter Everill (17:31)

“The important way that data and analytics can help an organization firstly is, understanding how that organization is performing. And essentially, performance is how well processes and decisions within the organization are being executed, and the impact that has on the P&L.” – Peter Everill (20:24)

“The great majority of organizations don’t allocate that percentage [20-25%] of time to discovery; they are jumping straight into solution. And also, this is where organizations typically then actually just migrate what already exists from, maybe, legacy service into a shiny new cloud platform, which might be good from a defensive data strategy point of view, but doesn’t offer new net value—apart from speed, security and et cetera of the cloud. Ultimately, this is why analytics organizations aren’t generally delivering value to organizations.” – Peter Everill (25:37)

“The only time that value is delivered, is from a user taking action. So, the two metrics that we really focus on with all four data products [are] reach [and impact].” – Peter Everill (27:44)

“In terms of benefits realization, that is owned by the business unit. Because ultimately, you’re asking them to take the action. And if they do, it’s their part of the P&L that’s improving because they own the business, they own the performance. So, you really need to get them engaged on the release, and for them to have the superusers, the champions of the product, and be driving voice of the release just as much as the product team.” – Peter Everill (30:30)

On hiring DPMs: “Are [candidates] showing the aptitude, do they understand what the role is, rather than the experience? I think data and analytics and machine learning product management is a relatively new role. You can’t go on LinkedIn necessarily, and be exhausted with a number of candidates that have got years and years of data and analytics product management.” – Peter Everill (36:40)

Links LinkedIn: https://www.linkedin.com/in/petereverill/

Roya Kandalan - Aware, Inc. (Senior Research Scientist)\n\nRoya is a research scientist who is passionate about advancing artificial intelligence technologies. She is particularly interested in computer vision and pattern recognition and has developed machine learning solutions for various applications, including healthcare, assistive technology, and security.\nRoya is also an advocate for women's rights. She is a Google Women's Techmaker Ambassa…

Today's government agencies face unprecedented complexities, and when thinking about the role of government in driving positive change for society at large, data & AI stand out as key levers to empower government agencies to do more with less. However, the road to government data & AI transformation is fraught with risk, and is full with opportunity. So how can government data leaders succeed in their transformation endeavors?  Steve Orrin is Intel’s Federal Chief Technology Officer. He leads Public Sector Solution Architecture, Strategy, and Technology Engagements and has held technology leadership positions at Intel where he has led cybersecurity programs, products, and strategy. Steve was previously CSO for Sarvega, CTO of Sanctum, CTO and co-founder of LockStar, and CTO at SynData Technologies. He was named one of InfoWorld's Top 25 CTO's, received Executive Mosaic’s Top CTO Executives Award, is a Washington Exec Top Chief Technology Officers to Watch in 2023, was the Vice-Chair of the NSITC/IDESG Security Committee and was a Guest Researcher at NIST’s National Cybersecurity Center of Excellence (NCCoE). He is a fellow at the Center for Advanced Defense Studies and the chair of the INSA Cyber Committee. Throughout the episode, we talked about the unique challenges government face when driving value with data & AI, how agencies need to align their data ambitions with their actual mission, the nuances between data privacy laws between the united states, Europe, and China, how to best approach launching pilot projects if you are in government, and a lot more.