LLMs have taken the world by storm over the past few months. But there's a common perception that building an LLM yourself is prohibitively challenging and expensive. In this talk I will show you how you can take state-of-the-art open-source foundation models and fine-tune them simply and cheaply to build something bespoke on your data. At Databricks we believe in selecting the right model for the right use case, and with Dolly we demonstrated how fine-tuning on a bespoke dataset can provide instruction-following capability.
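To make this concrete, below is a minimal sketch of Dolly-style instruction fine-tuning using the Hugging Face transformers Trainer and the databricks-dolly-15k dataset. The base model, prompt template, and hyperparameters are illustrative assumptions, not the exact recipe used for Dolly.

```python
# Sketch: instruction fine-tuning an open-source causal LM on databricks-dolly-15k.
# Model choice, prompt template, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/pythia-2.8b"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_prompt(example):
    # Simple instruction/response template (an assumption, not Dolly's exact one).
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['response']}")
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(to_prompt, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dolly-style-finetune",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        fp16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    # Causal LM collator builds labels from input_ids (no masked-LM objective).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```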
In this course, you will learn basic skills that will allow you to use the Databricks Data Intelligence Platform to perform a simple data engineering workflow and support data warehousing workloads. You will be given a tour of the workspace and be shown how to work with objects in Databricks such as catalogs, schemas, volumes, tables, compute clusters and notebooks. You will then follow a basic data engineering workflow to perform tasks such as creating and working with tables, ingesting data into Delta Lake, transforming data through the medallion architecture, and using Databricks Workflows to orchestrate data engineering tasks (a sketch of the medallion pattern follows below). You'll also learn how Databricks supports data warehousing needs through the use of Databricks SQL, DLT, and Unity Catalog.
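As an illustration of the medallion-style workflow described above, here is a minimal sketch of a bronze-to-silver step on Delta Lake. It assumes a Databricks notebook where `spark` is predefined; the source path, table names, and column names are hypothetical.

```python
# Sketch: bronze -> silver step on Delta Lake in a Databricks notebook,
# where `spark` is already defined. Paths, tables, and columns are hypothetical.
from pyspark.sql import functions as F

raw_path = "/Volumes/main/raw/events/"  # hypothetical volume path

# Bronze: ingest raw JSON as-is into a Delta table.
bronze = spark.read.format("json").load(raw_path)
bronze.write.format("delta").mode("append").saveAsTable("main.bronze.events")

# Silver: clean and conform the bronze data.
silver = (
    spark.read.table("main.bronze.events")
    .dropDuplicates(["event_id"])                     # hypothetical key column
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("main.silver.events")
```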
This course provides a comprehensive overview of Databricks' modern approach to data warehousing, highlighting how a data lakehouse architecture combines the strengths of traditional data warehouses with the flexibility and scalability of the cloud. You'll learn about the AI-driven features that enhance data transformation and analysis on the Databricks Data Intelligence Platform. Designed for data warehousing practitioners, this course provides the foundational information needed to begin building and managing high-performance, AI-powered data warehouses on Databricks. It is aimed at those starting out in data warehousing and those who would like to execute data warehousing workloads on Databricks, including practitioners who are familiar with traditional data warehousing techniques and concepts and want to understand how those workloads are executed on Databricks.
We will dive deeper into inference, from Apple Silicon to huge production clusters, and cover tricks for making it faster.
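As a starting point for that discussion, here is a minimal sketch of loading an open model for text generation with half-precision weights, one of the simpler speed-ups; the model name and device placement are illustrative assumptions.

```python
# Sketch: text generation with half-precision weights via transformers.
# Model name is illustrative; any open causal LM can be substituted.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="databricks/dolly-v2-3b",  # assumed model
    torch_dtype=torch.float16,       # half precision cuts memory use and latency
    device_map="auto",               # place weights on whatever accelerator is available
)
print(pipe("Explain Delta Lake in one sentence.", max_new_tokens=64)[0]["generated_text"])
```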
Power BI Desktop offers data refresh down to the range of seconds; DirectQuery in the data source configuration is what makes this possible.
The session starts by connecting Azure SQL databases and then shows the differences and finer points of Databricks as a data source.
This starts with creating Databricks databases, continues with notebook configuration, and also covers authentication against Databricks clusters.
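For readers who want to verify connectivity outside Power BI, here is a minimal sketch using the databricks-sql-connector Python package; the hostname, HTTP path, and access token are placeholders and correspond to the same fields the Power BI Databricks connector asks for.

```python
# Sketch: querying Databricks from Python with databricks-sql-connector.
# Hostname, HTTP path, and token are placeholders (use your workspace's values).
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
    access_token="dapiXXXXXXXXXXXXXXXX",                           # personal access token
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT current_catalog(), current_schema()")
        print(cursor.fetchall())
```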