talk-data.com

People (103 results)

Companies (2 results)

  • BlackRock – 2 speakers (PhD, Director of Data Science)
  • Black Duck – 1 speaker (Strategic Account Manager)

Activities & events

Title & Speakers – Event
Sean Owen – Principal Specialist for Data Science and ML @ Databricks

Large Language Models (LLMs) are taking AI mainstream across companies and individuals. However, public LLMs are trained on general-purpose data: they do not include your own corporate data, and they are black boxes with respect to how they were trained. Because terminology differs across healthcare, financial, retail, digital-native and other industries, companies are looking for industry-specific LLMs that better capture the terminology, context and knowledge their domain requires. In contrast to closed LLMs, open source models can be used commercially or customized to suit an enterprise’s needs on its own data. Learn how Databricks makes it easy for you to build, tune and use custom models, including a deep dive into Dolly, the first open source, instruction-following LLM fine-tuned on a human-generated instruction dataset licensed for research and commercial use.

In this session, you will:

  • See a real-life demo of creating your own LLMs specific to your industry
  • Learn how to securely train on your own documents if needed
  • Learn how Databricks makes it quick, scalable and inexpensive
  • Deep dive into Dolly and its applications
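Instruction tuning of the kind described above depends on training records shaped as instruction/response pairs. As a rough illustration (not Databricks’ actual training code), the sketch below renders such a record into a Dolly-style prompt layout; the field names and the template wording are assumptions for the example.

```python
# Hypothetical sketch: formatting one record from a human-generated
# instruction dataset into a single prompt/response training string.
# The keys ("instruction", "context", "response") and the template
# text are illustrative assumptions, not taken from the talk.

INTRO = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)

def format_record(record: dict) -> str:
    """Render one training record as a single prompt/response string."""
    parts = [INTRO, f"### Instruction:\n{record['instruction']}"]
    if record.get("context"):  # optional grounding passage, e.g. a company document
        parts.append(f"### Context:\n{record['context']}")
    parts.append(f"### Response:\n{record['response']}")
    return "\n\n".join(parts)

example = {
    "instruction": "Summarize the quarterly report in one sentence.",
    "response": "Revenue grew 12% quarter over quarter.",
}
print(format_record(example))
```

Feeding company documents in through the optional context field is one simple way the “train on your own documents” step could be wired into such a pipeline.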

Talk by: Sean Owen

Connect with us:
  • Website: https://databricks.com
  • Twitter: https://twitter.com/databricks
  • LinkedIn: https://www.linkedin.com/company/databricks
  • Instagram: https://www.instagram.com/databricksinc
  • Facebook: https://www.facebook.com/databricksinc

AI/ML Databricks LLM
Databricks DATA + AI Summit 2023
Sean Knapp – Founder and CEO @ Ascend, Tobias Macey – host

Summary

The dream of every engineer is to automate all of their tasks. For data engineers, this is a monumental undertaking. Orchestration engines are one step in that direction, but they are not a complete solution. In this episode Sean Knapp shares his views on what constitutes proper automation and the work that he and his team at Ascend are doing to help make it a reality.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan’s active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale your warehouse up and down based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan’s active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.

The only thing worse than having bad data is not knowing that you have it. With Bigeye’s data observability platform, if there is an issue with your data or data pipelines you’ll know right away and can get it fixed before the business is impacted. Bigeye lets data teams measure, improve, and communicate the quality of their data to company stakeholders. With complete API access, a user-friendly interface, and automated yet flexible alerting, you’ve got everything you need to establish and maintain trust in your data. Go to dataengineeringpodcast.com/bigeye today to sign up and start trusting your analyses.

Your host is Tobias Macey and today I’m interviewing Sean Knapp about the role of data automation in building maintainable systems.

Interview

Introduction

  • How did you get involved in the area of data management?
  • Can you describe what you mean by the term "data automation" and the assumptions that it includes?
  • One of the perennial challenges of automation is that there are always steps that are resistant to being performed without human involvement. What are some of the tasks that you have found to be common problems in that sense?
  • What are the different concerns that need to be included in a stack that supports fully automated data workflows?
  • There was recently an interesting article suggesting that the "left-to-right" approach to data workflows is backwards. In your experience, what would be required to allow for triggering data processes based on the needs of the data consumers? (e.g. "make sure that this BI dashboard is up to date every 6 hours")
  • What are the
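The consumer-driven triggering idea raised in the last complete question can be sketched in a few lines: a consumer declares a maximum staleness (e.g. "no more than 6 hours old"), and the system checks whether an upstream dataset violates that requirement before deciding to run anything. The names below (`Dataset`, `needs_refresh`) are hypothetical illustrations, not Ascend’s API.

```python
# Hypothetical sketch of consumer-driven ("right-to-left") triggering:
# rather than running pipelines on a fixed schedule, a freshness SLA
# declared by the consumer drives the decision to refresh upstream data.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Dataset:
    name: str
    last_refreshed: datetime

def needs_refresh(dataset: Dataset, max_staleness: timedelta,
                  now: datetime) -> bool:
    """Return True if the dataset is staler than the consumer's SLA allows."""
    return now - dataset.last_refreshed > max_staleness

now = datetime(2023, 1, 1, 12, 0)
dashboard_source = Dataset("bi_dashboard_source", datetime(2023, 1, 1, 4, 0))

# 8 hours old against a 6-hour requirement -> trigger an upstream refresh.
print(needs_refresh(dashboard_source, timedelta(hours=6), now))  # True
```

A real system would evaluate this check transitively across the dependency graph, refreshing only the upstream datasets that are actually stale.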

API BI BigEye CDP Cloud Computing Dashboard Data Engineering Data Lake Data Management ETL/ELT Kubernetes MongoDB MySQL PostgreSQL Data Streaming
Data Engineering Podcast