talk-data.com

Topic

Cloud Computing

infrastructure saas iaas

4055 tagged

Activity Trend

471 peak/qtr (2020-Q1 to 2026-Q1)

Activities

4055 activities · Newest first

Transitioning to Microsoft Power Platform: An Excel User Guide to Building Integrated Cloud Applications in Power BI, Power Apps, and Power Automate

Welcome to this step-by-step guide for Excel users, data analysts, and finance specialists. It is designed to take you through practical reporting and development scenarios, covering both the approach and the technical challenges. This book will equip you with an understanding of the overall Power Platform use case for addressing common business challenges. While Power BI continues to be an excellent tool of choice in the BI space, Power Platform is the real game changer. Using an integrated architecture, a small team of citizen developers can build solutions for all kinds of business problems. For small businesses, Power Platform can be used to build bespoke CRM, finance, and warehouse management tools. For large businesses, it can be used to build an integration point for existing systems to simplify reporting, operations, and approval processes. The author has drawn on his 15 years of hands-on analytics experience to help you pivot from the traditional Excel-based reporting environment. By using different business scenarios, this book gives you clear reasons why a skill is important before you dive into the scenarios. You will use a fast-prototyping approach to build reporting, automation, and application solutions and improve them as you acquire new skill sets. The book helps you get started quickly with Power BI. It covers data visualization, collaboration, and governance practices. You will work through the most practical SQL challenges, and you will learn how to build applications in Power Apps and Power Automate. The book ends with an integrated solution framework that can be adapted to solve a wide range of complex business problems.

What You Will Learn

  • Develop reporting solutions and business applications
  • Understand the Power Platform licensing and development environment
  • Apply data ETL and modeling in Power BI
  • Use data storytelling and dashboard design to better visualize data
  • Carry out data operations with SQL and SharePoint lists
  • Develop useful applications using Power Apps
  • Develop automated workflows using Power Automate
  • Integrate solutions with Power BI, Power Apps, and Power Automate to build enterprise solutions

Who This Book Is For

Next-generation data specialists, including Excel-based users who want to learn Power BI and build internal apps; finance specialists who want to take a different approach to traditional accounting reports; and anyone who wants to enhance their skill set for the future job market.

Exam Ref PL-900 Microsoft Power Platform Fundamentals, 2nd Edition

Prepare for Microsoft Exam PL-900. Demonstrate your real-world knowledge of the fundamentals of Microsoft Power Platform, including its business value, core components, and the capabilities and advantages of Power BI, Power Apps, Power Automate, and Power Virtual Agents. Designed for business users, functional consultants, and other professionals, this Exam Ref focuses on the critical thinking and decision-making acumen needed for success at the Microsoft Certified: Power Platform Fundamentals level.

Focus on the expertise measured by these objectives:

  • Describe the business value of Power Platform
  • Identify the core components of Power Platform
  • Demonstrate the capabilities of Power BI
  • Demonstrate the capabilities of Power Apps
  • Demonstrate the capabilities of Power Automate
  • Demonstrate the capabilities of Power Virtual Agents

This Microsoft Exam Ref:

  • Organizes its coverage by exam objectives
  • Features strategic, what-if scenarios to challenge you
  • Assumes you are a business user, functional consultant, or other professional who wants to improve productivity by automating business processes, analyzing data, creating simple app experiences, or developing business enhancements to Microsoft cloud solutions

About the Exam

Exam PL-900 focuses on the knowledge needed to describe the value of Power Platform services and of extending solutions; describe Power Platform administration and security; describe Common Data Service, Connectors, and AI Builder; identify common Power BI components; connect to and consume data; build basic dashboards with Power BI; identify common Power Apps components; build basic canvas and model-driven apps; describe Power Apps portals; identify common Power Automate components; build basic flows; describe Power Virtual Agents capabilities; and build and publish basic chatbots.

About Microsoft Certification

Passing this exam fulfills your requirements for the Microsoft Certified: Power Platform Fundamentals certification, demonstrating your understanding of Power Platform's core capabilities, from business value and core product capabilities to building simple apps, connecting data sources, automating basic business processes, creating dashboards, and creating chatbots. With this certification, you can move on to earn specialist certifications covering more advanced aspects of Power Apps and Power BI, including Microsoft Certified: Power Platform App Maker Associate and Power Platform Data Analyst Associate. See full details at: microsoft.com/learn

Automating Data Transformations

The modern data stack has evolved rapidly in the past decade. Yet, as enterprises migrate vast amounts of data from on-premises platforms to the cloud, data teams continue to face limitations executing data transformation at scale. Data transformation is an integral part of the analytics workflow, but it's also the most time-consuming, expensive, and error-prone part of the process. In this report, Satish Jayanthi and Armon Petrossian examine key concepts that will enable you to automate data transformation at scale. IT decision makers, CTOs, and data team leaders will explore ways to democratize data transformation by shifting from activity-oriented to outcome-oriented teams, moving from manufacturing-line assembly to an approach that lets even junior analysts implement data transformations with only a brief code review.

With this insightful report, you will:

  • Learn how successful data systems rely on simplicity, flexibility, user-friendliness, and a metadata-first approach
  • Adopt a product-first mindset (data as a product, or DaaP) for developing data resources that focus on discoverability, understanding, trust, and exploration
  • Build a transformation platform that delivers the most value, using a column-first approach
  • Use data architecture as a service (DAaaS) to help teams build and maintain their own data infrastructure as they work collaboratively

About the authors: Armon Petrossian is CEO and cofounder of Coalesce. Previously, he was part of the founding team at WhereScape in North America, where he served as national sales manager for almost a decade. Satish Jayanthi is CTO and cofounder of Coalesce. Prior to that, he was senior solutions architect at WhereScape, where he met his cofounder Armon.

IBM FlashSystem 7300 Product Guide

This IBM® Redpaper Product Guide describes the IBM FlashSystem® 7300 solution, a next-generation IBM FlashSystem control enclosure. It combines the performance of flash and a Non-Volatile Memory Express (NVMe)-optimized architecture with the reliability and innovation of IBM FlashCore® technology and the rich feature set and high availability (HA) of IBM Spectrum® Virtualize.

To take advantage of artificial intelligence (AI)-enhanced applications, real-time big data analytics, and cloud architectures that require higher levels of system performance and storage capacity, enterprises around the globe are rapidly moving to modernize established IT infrastructures. However, for many organizations, staff resources and expertise are limited, and cost-efficiency is a top priority. These organizations have important investments in existing infrastructure that they want to maximize. They need enterprise-grade solutions that optimize cost-efficiency while simplifying the pathway to modernization. IBM FlashSystem 7300 is designed specifically for these requirements and use cases. It also delivers cyber resilience without compromising application performance.

IBM FlashSystem 7300 provides a rich set of software-defined storage (SDS) features that are delivered by IBM Spectrum Virtualize, including the following examples:

  • Data reduction and deduplication
  • Dynamic tiering
  • Thin provisioning
  • Snapshots
  • Cloning
  • Replication and data copy services
  • Cyber resilience
  • Transparent Cloud Tiering (TCT)
  • IBM HyperSwap®, including 3-site replication for high availability
  • Scale-out and scale-up configurations that further enhance capacity and throughput for better availability

With the release of IBM Spectrum Virtualize V8.5, extra functions and features are available, including support for new third-generation IBM FlashCore Module NVMe-type drives within the control enclosure, and 100 Gbps Ethernet adapters that provide NVMe Remote Direct Memory Access (RDMA) options. New software features include GUI enhancements, security enhancements such as multifactor authentication and single sign-on, and Fibre Channel (FC) portsets.

Discover how Infrastructure From Code (IfC) can revolutionize Cloud DevOps automation by generating cloud deployment templates directly from Python code. Learn how this technology empowers Python developers to easily deploy and operate cost-effective, secure, reliable, and sustainable cloud software. Join us to explore the strategic potential of IfC.
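To make the idea concrete, here is a minimal, hypothetical sketch of the IfC pattern in Python. The `cloud_function` decorator and `synth_template` helper are illustrative inventions, not the API of any specific IfC product:

```python
# Hypothetical IfC sketch: annotations on plain Python functions are
# scanned at build time to generate a cloud deployment template.
import json
from typing import Callable

REGISTRY: list[dict] = []

def cloud_function(memory_mb: int = 256, timeout_s: int = 30) -> Callable:
    """Register a handler along with the infrastructure it implies."""
    def wrap(fn: Callable) -> Callable:
        REGISTRY.append({
            "name": fn.__name__,
            "runtime": "python3.11",
            "memory": memory_mb,
            "timeout": timeout_s,
        })
        return fn
    return wrap

@cloud_function(memory_mb=512)
def handle_order(event: dict) -> dict:
    return {"status": "accepted", "order_id": event["order_id"]}

def synth_template() -> str:
    """Emit a minimal serverless deployment template from the registry."""
    resources = {r["name"]: {"Type": "ServerlessFunction", "Properties": r}
                 for r in REGISTRY}
    return json.dumps({"Resources": resources}, indent=2)

if __name__ == "__main__":
    print(synth_template())
```

The point of the pattern is that the deployment template is derived from annotations already present in the application code, rather than maintained as a separate infrastructure artifact.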

Running machine learning models in a production environment brings its own challenges. In this talk we present our solution for the machine learning lifecycle of the text-based catalog classification system at idealo.de. We share lessons learned and our experiences from migrating the lifecycle from a hosted cluster to a cloud solution over the last three years. In addition, we outline how we embedded our ML components into the overall idealo.de processing architecture.

This talk presents a novel approach to MLOps that combines the benefits of open-source technologies with the power and cost-effectiveness of cloud computing platforms. By using tools such as Terraform, MLflow, and Feast, we demonstrate how to build a scalable and maintainable ML system on the cloud that is accessible to ML Engineers and Data Scientists. Our approach leverages cloud managed services for the entire ML lifecycle, reducing the complexity and overhead of maintenance and eliminating the vendor lock-in and additional costs associated with managed MLOps SaaS services. This innovative approach to MLOps allows organizations to take full advantage of the potential of machine learning while minimizing cost and complexity.
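As a flavor of the open-source building blocks involved, a minimal experiment-tracking sketch with MLflow might look like the following. The tracking URI and experiment name are placeholders, and provisioning the server itself (e.g., with Terraform) is out of scope here:

```python
# Minimal sketch: log a run to a self-hosted MLflow tracking server.
# The URI below is a placeholder for infrastructure you provision yourself.
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal.example:5000")
mlflow.set_experiment("demand-forecasting")

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    mlflow.log_params(params)
    # ... train a model here ...
    mlflow.log_metric("rmse", 0.42)
```

Because the tracking server, feature store, and infrastructure are all self-hosted open-source components, the same workflow can be moved between cloud providers without rewriting the ML code.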

Local Planning Authorities (LPAs) in the UK rely on written representations from the community to inform their Local Plans, which outline development needs for their area. With an average of 2,000 representations per consultation and four rounds of consultation per Local Plan, the volume of information can be overwhelming for both LPAs and the Planning Inspectorate tasked with examining the legality and soundness of plans. In this study, we investigate the potential for Large Language Models (LLMs) to streamline representation analysis.

We find that LLMs have the potential to significantly reduce the time and effort required to analyse representations, with simulations on historical Local Plans projecting a reduction in processing time by over 30%, and experiments showing classification accuracy of up to 90%.

In this presentation, we discuss our experimental process which used a distributed experimentation environment with Jupyter Lab and cloud resources to evaluate the performance of the BERT, RoBERTa, DistilBERT, and XLNet models. We also discuss the design and prototyping of web applications to support the aided processing of representations using Voilà, FastAPI, and React. Finally, we highlight successes and challenges encountered and suggest areas for future improvement.
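For illustration only (the fine-tuned checkpoints and label schemes from the study are not reproduced here), classifying a representation with an off-the-shelf Hugging Face transformers pipeline follows this general pattern:

```python
# Illustrative sketch: a generic pre-trained checkpoint stands in for the
# fine-tuned BERT/RoBERTa/DistilBERT/XLNet models evaluated in the talk.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

representation = ("The proposed housing allocation ignores flood risk "
                  "along the river corridor.")
print(classifier(representation))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99}]
```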

Everybody knows our yellow vans, trucks, and planes around the world. But do you know how data drives our business and how we leverage algorithms and technology in our core operations? We will share some "behind the scenes" insights on Deutsche Post DHL Group's journey towards a data-driven company.

  • Large-scale use cases: challenging and high-impact use cases in all major areas of logistics, including computer vision and NLP
  • Fancy algorithms: deep neural networks, TSP solvers, and the standard toolkit of a data scientist
  • Modern tooling: cloud platforms, Kubernetes, Kubeflow, AutoML
  • No rusty working mode: small, self-organized, agile project teams, combining state-of-the-art machine learning with MLOps best practices
  • A young, motivated, and international team – German skills are only "nice to have"

But we have more to offer than slides filled with buzzwords. We will demonstrate our passion for our work, deep dive into our largest use cases that impact your everyday life, and share our approach to a time-series forecasting library, combining data science, software engineering, and technology for efficient and easy-to-maintain machine learning projects.

Debugging is hard. Distributed debugging is hell.

Dask is a popular library for parallel and distributed computing in Python. Dask is commonly used in data science, actual science, data engineering, and machine learning to distribute workloads onto clusters of many hundreds of workers with ease.

However, when things go wrong life can become difficult due to all of the moving parts. These parts include your code, other PyData libraries like NumPy/pandas, the machines you’re running on, the network between them, storage, the cloud, and of course issues with Dask itself. It can be difficult to understand what is going on, especially when things seem slower than they should be or fail unexpectedly. Observability is the key to sanity and success.

In this talk, we describe the tools Dask offers to help you observe your distributed cluster, analyze performance, and monitor your cluster to react to unexpected changes quickly. We will dive into distributed logging, automated metrics, event-based monitoring, and root-causing problems with diagnostic tooling. Throughout the talk, we will leverage real-world use cases to show how these tools help to identify and solve problems for large-scale users in the wild.
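As a small taste of this tooling, the sketch below (assuming a local cluster for demonstration) captures a scheduler-level performance report and prints the link to the live dashboard:

```python
# Minimal sketch of Dask's built-in observability hooks; the same calls
# work against any distributed deployment, not just a local cluster.
from dask.distributed import Client, performance_report
import dask.array as da

client = Client()  # spins up a local cluster for demonstration

x = da.random.random((8_000, 8_000), chunks=(1_000, 1_000))

# Capture task-stream and worker-profile diagnostics into a standalone
# HTML report you can inspect after the run.
with performance_report(filename="dask-report.html"):
    x.T.dot(x).mean().compute()

print(client.dashboard_link)  # live dashboard for real-time monitoring
```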

This talk should be particularly insightful for Dask users, but the approaches to observing distributed systems should be relevant to anyone operating at scale in production.

The name WALD stack stems from the four technologies it is composed of: a cloud computing Warehouse such as Snowflake or Google BigQuery, the open-source data integration engine Airbyte, the open-source full-stack BI platform Lightdash, and the open-source data transformation tool dbt.

Using a Formula 1 Grand Prix dataset, I will give an overview of how these four tools complement each other perfectly for analytics tasks in an ELT approach. You will learn the specific uses of each tool as well as their particular features. My talk is based on a full tutorial, which you can find under waldstack.org.
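As a rough sketch of the transformation step in that ELT flow (assuming Airbyte has already landed the raw Formula 1 tables in the warehouse and a dbt project exists in the working directory; the model name is made up), dbt can be invoked programmatically:

```python
# Sketch of the "T" step, using dbt's programmatic entry point
# (available in dbt-core >= 1.5). Model names are illustrative.
from dbt.cli.main import dbtRunner

runner = dbtRunner()

# Run only the illustrative race-results model and everything downstream.
result = runner.invoke(["run", "--select", "fct_race_results+"])

if not result.success:
    raise SystemExit("dbt run failed; check logs for details")
```

Lightdash then reads the dbt project's model and metric definitions directly, which is what makes the four tools compose so cleanly.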

Summary

The customer data platform is a category of services that was developed early in the evolution of the current era of cloud services for data processing. When it was difficult to wire together event collection, data modeling, reporting, and activation, it made sense to buy monolithic products that handled every stage of the customer data lifecycle. Now that the data warehouse has taken center stage, a new approach of composable customer data platforms is emerging. In this episode Darren Haken is joined by Tejas Manohar to discuss how Autotrader UK is addressing their customer data needs by building on top of their existing data stack.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enables you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudderstack

Your host is Tobias Macey, and today I'm interviewing Darren Haken and Tejas Manohar about building a composable CDP and how you can start adopting it incrementally.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what you mean by a "composable CDP"?

What are some of the key ways that it differs from the ways that we think of a CDP today?

What are the problems that you were focused on addressing at Autotrader that are solved by a CDP?

One of the promises of the first generation CDP was an opinionated way to model your data so that non-technical teams could own this responsibility. What do you see as the risks/tradeoffs of moving CDP functionality into the same data stack as the rest of the organization?

What about companies that don't have the capacity to run a full data infrastructure?

Beyond the core technology of the data warehouse, what are the other evolutions/innovations that allow for a CDP experience to be built on top of the core data stack?

  • The added burden on core data teams to generate event-driven data models

When iterating toward a CDP on top of the core investment of the infrastructure to feed and manage a data warehouse, what are the typical first steps?

What are some of the components in the ecosystem that help to speed up the time to adoption? (e.g. pre-built dbt packages for common transformations, etc.)

What are the most interesting, innovative, or unexpected ways that you have seen CDPs implemented?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on CDP related functionality?

When is a CDP (composable or monolithic) the wrong choice?

What do you have planned for the future of the CDP stack?

Contact Info

Darren

LinkedIn @DarrenHaken on Twitter

Tejas

LinkedIn @tejasmanohar on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Autotrader
Hightouch

Customer Studio

CDP == Customer Data Platform
Segment

Podcast Episode

mPar

The modern data stack is a loose collection of technologies, often cloud-based, that collaboratively process and store data to support modern analytics. It must be automated, low code/no code, AI-assisted, graph-enabled, multimodal, streaming, distributed, meshy, converged, polyglot, open, and governed. Published at: https://www.eckerson.com/articles/twelve-must-have-characteristics-of-a-modern-data-stack

Beginning Database Design Solutions, 2nd Edition

A concise introduction to database design concepts, methods, and techniques in and out of the cloud. In the newly revised second edition of Beginning Database Design Solutions: Understanding and Implementing Database Design Concepts for the Cloud and Beyond, award-winning programming instructor and mathematician Rod Stephens delivers an easy-to-understand guide to designing and implementing databases both in and out of the cloud. Without assuming any prior database design knowledge, the author walks you through the steps you'll need to take to understand, analyze, design, and build databases. In the book, you'll find clear coverage of foundational database concepts along with hands-on examples that help you practice important techniques so you can apply them to your own database designs, as well as:

  • Downloadable source code that illustrates the concepts discussed in the book
  • Best practices for reliable, platform-agnostic database design
  • Strategies for digital transformation driven by universally accessible database design

An essential resource for database administrators, data management specialists, and database developers seeking expertise in relational, NoSQL, and hybrid database design both in and out of the cloud, Beginning Database Design Solutions is a hands-on guide ideal for students and practicing professionals alike.

Summary

The data ecosystem has been building momentum for several years now. As a venture capital investor, Matt Turck has been trying to keep track of the main trends, compiling his findings into the MAD (ML, AI, and Data) landscape reports each year. In this episode he shares his experiences building those reports and the perspective he has gained from the exercise.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Businesses that adapt well to change grow 3 times faster than the industry average. As your business adapts, so should your data. RudderStack Transformations lets you customize your event data in real time with your own JavaScript or Python code. Join the RudderStack Transformation Challenge today for a chance to win a $1,000 cash prize just by submitting a Transformation to the open-source RudderStack Transformation library. Visit dataengineeringpodcast.com/rudderstack today to learn more.

Your host is Tobias Macey, and today I'm interviewing Matt Turck about his annual report on the Machine Learning, AI, & Data landscape and the insights around data infrastructure that he has gained in the process.

Interview

Introduction

How did you get involved in the area of data management?

Can you describe what the MAD landscape report is and the story behind it?

At a high level, what is your goal in the compilation and maintenance of your landscape document?

What are your guidelines for what to include in the landscape?

As the data landscape matures, how have you seen that influence the types of projects/companies that are founded?

What are the product categories that were only viable when capital was plentiful and easy to obtain?

What are the product categories that you think will be swallowed by adjacent concerns, and which are likely to consolidate to remain competitive?

The rapid growth and proliferation of data tools helped establish the "Modern Data Stack" as a de-facto architectural paradigm. As we move into this phase of contraction, what are your predictions for how the "Modern Data Stack" will evolve?

Is there a different architectural paradigm that you see as growing to take its place?

How has your presentation and the types of information that you collate in the MAD landscape evolved since you first started it?

What are the most interesting, innovative, or unexpected product and positioning approaches that you have seen while tracking data infrastructure as a VC and maintainer of the MAD landscape?

What are the most interesting, unexpected, or challenging lessons that you have learned while working on the MAD landscape over the years?

What do you have planned for future iterations of the MAD landscape?

Contact Info

Website @mattturck on Twitter MAD Landscape Comments Email

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

MAD Landscape
First Mark Capital
Bayesian Learning
AI Winter
Databricks
Cloud Native Landscape
LUMA Scape
Hadoop Ecosystem
Modern Data Stack
Reverse ETL
Generative AI
dbt
Transform

Podcast Episode

Snowflake IPO
Dataiku
Iceberg

Podcast Episode

Hudi

Podcast Episode

DuckDB

Podcast Episode

Trino
Y42

Podcast Episode

Mozart Data

Podcast Episode

Keboola
MPP Database


Data Fabric and Data Mesh Approaches with AI: A Guide to AI-based Data Cataloging, Governance, Integration, Orchestration, and Consumption

Understand modern data fabric and data mesh concepts using AI-based self-service data discovery and delivery capabilities, a range of intelligent data integration styles, and automated unified data governance, all designed to deliver "data as a product" within hybrid cloud landscapes. This book teaches you how to successfully deploy state-of-the-art data mesh solutions and gain a comprehensive overview of how a data fabric architecture uses artificial intelligence (AI) and machine learning (ML) for automated metadata management and self-service data discovery and consumption. You will learn how data fabric and data mesh relate to other concepts such as DataOps, MLOps, AIDevOps, and more. Many examples are included to demonstrate how to modernize the consumption of data to enable a shopping-for-data (data as a product) experience. By the end of this book, you will understand the data fabric concept and architecture as it relates to themes such as automated unified data governance and compliance, enterprise information architecture, AI and hybrid cloud landscapes, and intelligent cataloging and metadata management.

What You Will Learn

  • Discover best practices and methods to successfully implement a data fabric architecture and data mesh solution
  • Understand key data fabric capabilities, e.g., self-service data discovery, intelligent data integration techniques, intelligent cataloging and metadata management, and trustworthy AI
  • Recognize the importance of data fabric to accelerate digital transformation and democratize data access
  • Dive into important data fabric topics, addressing current data fabric challenges
  • Conceive data fabric and data mesh concepts holistically within an enterprise context
  • Become acquainted with the business benefits of data fabric and data mesh

Who This Book Is For

Anyone who is interested in deploying modern data fabric architectures and data mesh solutions within an enterprise, including IT and business leaders, data governance and data office professionals, data stewards and engineers, data scientists, and information and data architects. Readers should have a basic understanding of enterprise information architecture.

Brad Culberson is a Principal Architect in the Field CTO's office at Snowflake. Niall Woodward is a co-founder of SELECT, a startup providing optimization and spend management software for Snowflake customers. In this conversation with Tristan and Julia, Brad and Niall discuss all things cost optimization: cloud vs on-prem, measuring ROI, and tactical ways to get more out of your budget. For full show notes and to read 6+ years of back issues of the podcast's companion newsletter, head to https://roundup.getdbt.com.  The Analytics Engineering Podcast is sponsored by dbt Labs.

Azure SQL Hyperscale Revealed: High-performance Scalable Solutions for Critical Data Workloads

Take a deep dive into the Azure SQL Database Hyperscale service tier and discover a new form of cloud architecture from Microsoft that supports massive databases. The new horizontally scalable architecture, formerly code-named Socrates, allows you to decouple compute nodes from storage layers. This radically different approach dramatically increases the scalability of the service. This book shows you how to leverage Hyperscale to provide next-level scalability, high throughput, and fast performance from large databases in your environment. The book begins by showing how Hyperscale helps you eliminate many of the problems of traditional high-availability and disaster recovery architecture. You'll learn how Hyperscale overcomes storage capacity limitations and issues with scale-up times and costs. With Hyperscale, your costs do not increase linearly with database size, and you can manage more data than ever at a lower cost. The book teaches you how to deploy, configure, and monitor an Azure SQL Hyperscale database in a production environment. The book also covers migrating your current workloads from traditional architecture to Azure SQL Hyperscale.

What You Will Learn

  • Understand the advantages of Hyperscale over traditional architecture
  • Deploy a Hyperscale database on the Azure cloud (interactively and with code)
  • Configure the advanced features of the Hyperscale database tier
  • Monitor and scale database performance to suit your needs
  • Back up and restore your Azure SQL Hyperscale databases
  • Implement disaster recovery and failover capability
  • Compare performance of Hyperscale vs traditional architecture
  • Migrate existing databases to the Hyperscale service tier

Who This Book Is For

SQL architects, data engineers, and DBAs who want the most efficient and cost-effective cloud technologies to run their critical data workloads, and those seeking rapid scalability and high performance and throughput while utilizing large databases.
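As a rough companion to the "with code" deployment path, a minimal sketch using the Azure SDK for Python might look like the following. The resource group, server, database names, subscription ID, and region are all placeholders, and the SKU shown follows the Hyperscale HS_Gen5 naming convention:

```python
# Minimal sketch: create a Hyperscale database with azure-mgmt-sql.
# Assumes the resource group and logical SQL server already exist;
# all names and the subscription ID below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

credential = DefaultAzureCredential()
client = SqlManagementClient(credential, "<subscription-id>")

poller = client.databases.begin_create_or_update(
    resource_group_name="rg-hyperscale-demo",
    server_name="sql-hyperscale-demo",
    database_name="hsdb",
    parameters={
        "location": "eastus",
        "sku": {"name": "HS_Gen5", "tier": "Hyperscale", "capacity": 2},
    },
)
database = poller.result()  # blocks until provisioning completes
print(database.status)
```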

A modern data stack combines different tools, technologies, and processes businesses use to collect, store, analyze, and visualize data. It is designed to provide a unified and streamlined approach to data management, allowing organizations to make data-driven decisions quickly and efficiently.

The modern data stack differs from the traditional one in several ways. Traditionally, data stacks were built using a monolithic architecture that relied on expensive hardware and software licenses. These stacks were challenging to manage, slow to scale, and often resulted in data silos that hindered collaboration between different teams.

On the other hand, the modern data stack is built using a modular architecture that leverages cloud computing, open-source software, and APIs. This approach allows organizations to use the best-of-breed tools for each step of the data pipeline, resulting in a more flexible, scalable, and cost-effective solution.

On today’s episode, we’re joined by Atif Ghauri, Senior Vice President at Cyderes, a global cybersecurity powerhouse offering comprehensive solutions around managed security, identity and access management, and professional services.

We talk about:

  • How Cyderes works and the problems they solve.
  • The evolution of cloud security.
  • The impact of AI on cybersecurity.
  • The biggest risk factors in cloud security today.
  • How new SaaS founders today should think about cybersecurity and common mistakes to avoid.
  • The turning point where SaaS companies have to start taking security more seriously.
  • Some of the things Atif has found surprising in his security career.

Atif Ghauri - https://www.linkedin.com/in/aghauri
Cyderes - https://www.linkedin.com/company/the-herjavec-group/

This episode is brought to you by Qrvey

The tools you need to take action with your data, on a platform built for maximum scalability, security and cost efficiencies. If you’re ready to reduce complexity and dramatically lower costs, contact us today at qrvey.com.

Qrvey, the modern no-code analytics solution for SaaS companies on AWS.

#saas #analytics #AWS #BI