
Description

Session: When Microsoft Fabric was released, it came with Apache Spark out of the box. Spark's ability to work with multiple programming languages opened up possibilities for building data-driven and automated lakehouses. With Python Notebooks, we now have a better tool for handling metadata, automation, and lighter processing workloads, while still having the option to use Spark Notebooks for more demanding processing. We will cover:

- The difference between Python Notebooks and a single-node Spark cluster, and why Spark Notebooks are more costly and less performant for certain types of workloads.
- When to use Python Notebooks and when to use Spark Notebooks.
- Where to use Python Notebooks in a metadata-driven Lakehouse.
- A brief introduction to tooling, and how to move workloads between Python Notebooks and Spark Notebooks (see the sketch after this list).
- How to avoid overloading the Lakehouse tech stack with Python technologies.
- Costs.
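
As a rough illustration of the single-node vs. Spark split described above, the sketch below reads the same Lakehouse Delta table first with pure-Python tooling (polars backed by the deltalake reader) and then with PySpark. The table path and column names are hypothetical, and these particular libraries are assumptions for illustration, not necessarily the tooling covered in the session.

```python
# Minimal sketch: the same Delta table read from a Python Notebook
# (single node, no Spark session) and from a Spark Notebook.
# TABLE_PATH and the column names are hypothetical placeholders.

TABLE_PATH = "/lakehouse/default/Tables/sales"

# --- Python Notebook: lightweight single-node read, fine for small or
# --- metadata-driven workloads ---
import polars as pl

small_df = pl.read_delta(TABLE_PATH)                     # loads the table into memory
summary = small_df.group_by("region").agg(pl.col("amount").sum())
print(summary)

# --- Spark Notebook: distributed read for heavier processing ---
# (In a Fabric Spark Notebook the `spark` session is predefined.)
# big_df = spark.read.format("delta").load(TABLE_PATH)
# big_df.groupBy("region").sum("amount").show()
```

The point of the contrast: the pure-Python path avoids starting a Spark cluster at all, which is where the cost and startup-time savings for trivial workloads come from, while the Spark path remains available when the data no longer fits a single node.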