talk-data.com


Activities & events

August Gophers Pub Social! · 2024-08-14 · 18:00

Merry August Team!

We thought it might be nice to throw together a pub social to keep the Summer Go-ing until our next fully fledged Meetup. Nothing fancy, just an opportunity for anyone who wants to head on down, meet some fellow Gophers, and have a chat.

As this event is more informal than usual, and is just a chance for us to invade some poor pub on a Wednesday night, everyone is encouraged to wear their most Go-rgeous Go or programmer gear (yes, we accept your all-important programming socks) or indulge your method of choice to signal to your comrades that you’re one of the Gopher crew! As much as doodling a Gopher on your forehead may seem like a viable option now, it may restrict your career prospects if you use the wrong marker, so the official London Gophers stance is, sadly, to advise against it.

Additionally, for those interested, be aware that GopherCon UK will be taking place next week from Wednesday 14th - Friday 16th, and you can still pick up tickets from https://www.gophercon.co.uk/!

See you at the pub!

📜 All London Gophers events operate under the Go Community Code of Conduct - https://golang.org/conduct

  • Treat everyone with respect and kindness.
  • Be thoughtful in how you communicate.
  • Don’t be destructive or inflammatory.

Please do not message members without their consent

If you encounter an issue, please mail [email protected] or [email protected]

==== 📢 Become a Speaker! 📢 ====

Have something to say? We want to listen! We are always looking for new speakers who want to share their adventures with Go and have mentors who can help.

You can sign up to be a speaker here: https://gophers.london/apply

==== 📞 How To Reach Us 📞 ====

Email: [email protected]
LinkedIn: https://www.linkedin.com/company/london-gophers/
YouTube: https://www.youtube.com/c/LondonGophers


RSVP Webinar: https://www.eventbrite.com/e/webinar-generative-ai-on-aws-tickets-45852865154

Talk #1: Mistral AI's 2024 Updates, including Mixtral 8x22B and a new hands-on DeepLearning.ai short course! by Sophia Yang, PhD, Head of Developer Relations @ Mistral AI

Mixtral 8x22B is Mistral AI's latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Mixtral 8x22B comes with the following strengths:

  • It is fluent in English, French, Italian, German, and Spanish
  • It has strong mathematics and coding capabilities
  • It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernisation at scale
  • Its 64K-token context window allows precise information recall from large documents
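To make the function-calling point above concrete, here is a minimal sketch of what a request body for Mistral's chat-completions endpoint (`POST https://api.mistral.ai/v1/chat/completions`) might look like; the tool name `get_exchange_rate` and its parameters are illustrative assumptions, not part of the talk or Mistral's documentation:

```python
import json

def build_function_call_request(user_query: str) -> dict:
    """Build a chat-completions request body that declares one tool the
    model may call. The tool schema follows the JSON Schema convention
    used by function-calling APIs; the tool itself is hypothetical."""
    return {
        "model": "open-mixtral-8x22b",
        "messages": [{"role": "user", "content": user_query}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_exchange_rate",  # hypothetical tool
                    "description": "Look up the exchange rate between two currencies",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "base": {"type": "string"},
                            "quote": {"type": "string"},
                        },
                        "required": ["base", "quote"],
                    },
                },
            }
        ],
        "tool_choice": "auto",
    }

payload = build_function_call_request("What is the EUR/USD rate?")
print(json.dumps(payload, indent=2))
```

When the model decides the tool is needed, the response contains a tool call with arguments matching this schema, which your application executes before sending the result back in a follow-up message.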

Link to Sophia's new DeepLearning.ai short course featuring Mixtral 8x22B: https://www.deeplearning.ai/short-courses/getting-started-with-mistral/

Talk #2: LLM Telemetry with OpenLLMetry and Amazon Bedrock by Clay Elmore (Senior SA, Gen AI)

Generative AI applications present challenging observability problems, including multi-dimensional compute-metric tracing, ambiguous API tracing, constantly evolving model providers, and difficulties evaluating foundation-model outputs. Join us in this session to learn about emerging trends in observing applications powered by Amazon Bedrock. You will learn how to use the open-source observability library OpenLLMetry with Amazon Bedrock to build an end-to-end observability solution for Generative AI apps running on AWS.
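As a rough sketch of the setup the talk covers: OpenLLMetry ships as the `traceloop-sdk` package, and a single `Traceloop.init()` call auto-instruments supported clients, including boto3's Bedrock runtime. The app name, region, and model id below are illustrative assumptions, and `main()` requires AWS credentials plus `pip install traceloop-sdk boto3` to actually run:

```python
import json

def build_bedrock_request(prompt: str) -> dict:
    """Build a request body for an Anthropic model on Bedrock's
    messages API; field names follow Bedrock's documented schema."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }

def main() -> None:
    from traceloop.sdk import Traceloop
    import boto3

    # One init call; subsequent Bedrock invocations emit OpenTelemetry spans.
    Traceloop.init(app_name="bedrock-observability-demo")  # name is illustrative
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(build_bedrock_request("Summarize OpenTelemetry in one line.")),
    )
    print(json.loads(response["body"].read()))

if __name__ == "__main__":
    main()
```

The traces can then be exported to any OpenTelemetry-compatible backend, which is what makes this approach resilient to the "constantly evolving model providers" problem the abstract mentions.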

RSVP Webinar: https://www.eventbrite.com/e/webinar-generative-ai-on-aws-tickets-45852865154

Zoom link: https://us02web.zoom.us/j/82308186562

Related Links
Generative AI Free Course on DeepLearning.ai: https://bit.ly/gllm
O'Reilly Book: https://www.amazon.com/Generative-AWS-Context-Aware-Multimodal-Applications/dp/1098159225
Website: https://generativeaionaws.com
Meetup: https://meetup.generativeaionaws.com
GitHub Repo: https://github.com/generative-ai-on-aws/
YouTube: https://youtube.generativeaionaws.com

Mistral AI Updates incl Mixtral 8x22B + OpenLLMetry Evaluation Optimization


To significantly improve the performance of Spark SQL, there has been a trend over the past several years to offload Spark SQL execution to highly optimized native libraries or accelerators, such as Photon from Databricks, Nvidia's RAPIDS plug-in, and the open source Gluten project initiated by Intel and Kyligence. Thanks to the multi-fold performance improvements these solutions deliver, more and more Apache Spark™ users have started to adopt the new technology. One characteristic of these native libraries is that they all use a columnar data format as their basic data format, because the columnar layout has an intrinsic affinity for vectorized data processing using SIMD instructions. Vanilla Spark's shuffle, however, is based on Spark's internal row format, and the high overhead of converting between columnar and row layouts during the shuffle makes reusing the existing shuffle impractical. Given the importance of the shuffle service in Spark, we had to implement an efficient columnar shuffle, which brings a couple of new challenges, such as splitting columnar data and supporting dictionaries during the shuffle.
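The row-versus-columnar tension the abstract describes can be illustrated with a toy example (plain Python, not Spark internals): a row-based shuffle of columnar data must first pivot each batch into rows, while a columnar shuffle can hash-partition every column in place using only the positions derived from the key column:

```python
def columnar_to_rows(batch: dict) -> list:
    """Pivot a columnar batch {column -> values} into a list of row
    tuples -- the conversion a row-based shuffle forces on columnar data."""
    return list(zip(*batch.values()))

def columnar_split(batch: dict, key_col: str, num_partitions: int) -> list:
    """Hash-partition a columnar batch without materializing rows:
    compute each position's target partition from the key column, then
    gather every column's values by those positions."""
    parts = [{col: [] for col in batch} for _ in range(num_partitions)]
    for i, key in enumerate(batch[key_col]):
        p = hash(key) % num_partitions
        for col, values in batch.items():
            parts[p][col].append(values[i])
    return parts

batch = {"id": [1, 2, 3, 4], "amount": [10.0, 20.0, 30.0, 40.0]}
print(columnar_to_rows(batch))         # the row pivot a row-based shuffle needs
print(columnar_split(batch, "id", 2))  # per-partition batches, still columnar
```

The real systems operate on Arrow-style vectors rather than Python lists, but the shape of the problem is the same: the split must gather each column independently, which is exactly the "split of columnar data" challenge named above.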

In this session, we will share the exploration of the columnar shuffle design during our Gazelle and Gluten development, along with best practices for implementing a columnar shuffle service. We will also share what we learned from the development of vanilla Spark's shuffle, for example how to address the small-files issue, and then propose the new shuffle solution. We will show a performance comparison between the columnar shuffle and vanilla Spark's row-based shuffle. Finally, we will share how the new built-in accelerators such as QAT and IAA in the latest Intel processors are used in our columnar shuffle service to boost performance.

Talk by: Binwei Yang and Rong Ma

Here’s more to explore:
Why the Data Lakehouse Is Your Next Data Warehouse: https://dbricks.co/3Pt5unq
Lakehouse Fundamentals Training: https://dbricks.co/44ancQs

Connect with us:
Website: https://databricks.com
Twitter: https://twitter.com/databricks
LinkedIn: https://www.linkedin.com/company/databricks
Instagram: https://www.instagram.com/databricksinc
Facebook: https://www.facebook.com/databricksinc

Tags: Data Lakehouse · Databricks · DWH · Spark SQL
Databricks DATA + AI Summit 2023