We tackle the challenge of jointly personalizing content and style from a few examples. A promising approach is to train separate Low-Rank Adapters (LoRAs) and merge them effectively, preserving both content and style. Existing methods, such as ZipLoRA, treat content and style as independent entities, merging them by learning masks over LoRA's output dimensions. However, content and style are intertwined rather than independent. To address this, we propose DuoLoRA, a content-style personalization framework with three key components: (i) rank-dimension mask learning, (ii) effective merging via layer priors, and (iii) Constyle loss, which leverages cycle-consistency in the merging process. First, we introduce ZipRank, which performs content-style merging within the rank dimension, offering adaptive rank flexibility and significantly reducing the number of learnable parameters. Second, we incorporate SDXL layer priors to impose implicit rank constraints informed by each layer's content-style bias and to initialize the merger adaptively, improving the integration of content and style. Finally, Constyle loss refines the merging process by enforcing cycle-consistency between content and style. Our experiments show that DuoLoRA outperforms state-of-the-art content-style merging methods across multiple benchmarks.
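For intuition only, here is a minimal sketch of what merging a content LoRA and a style LoRA along the rank dimension with learnable masks might look like. This is not the authors' implementation: the tensor names, mask placement, and merge rule are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RankMaskMerge(nn.Module):
    """Illustrative merge of a content LoRA and a style LoRA along the rank axis.

    Each LoRA factorizes a weight update as B @ A with A: (r, d_in), B: (d_out, r).
    A ZipLoRA-style merger learns masks over the output dimension (d_out); this
    sketch instead places learnable gates on the rank dimension (r), which is the
    idea the abstract attributes to ZipRank. Shapes and the exact merge rule are
    assumptions, not the paper's method.
    """

    def __init__(self, A_c, B_c, A_s, B_s):
        super().__init__()
        r_c, r_s = A_c.shape[0], A_s.shape[0]
        # Frozen LoRA factors from the pre-trained content and style adapters.
        self.register_buffer("A_c", A_c)  # (r_c, d_in)
        self.register_buffer("B_c", B_c)  # (d_out, r_c)
        self.register_buffer("A_s", A_s)  # (r_s, d_in)
        self.register_buffer("B_s", B_s)  # (d_out, r_s)
        # Learnable per-rank gates: one scalar per rank-1 component.
        self.m_c = nn.Parameter(torch.ones(r_c))
        self.m_s = nn.Parameter(torch.ones(r_s))

    def delta_w(self):
        # Gate each rank-1 component, then recombine the two low-rank updates.
        dw_c = self.B_c @ torch.diag(self.m_c) @ self.A_c
        dw_s = self.B_s @ torch.diag(self.m_s) @ self.A_s
        return dw_c + dw_s  # (d_out, d_in) update added to the frozen base weight
```

In this sketch a layer needs only r_c + r_s gate parameters instead of one mask entry per output channel, which illustrates how a rank-dimension mask can carry far fewer learnable parameters than an output-dimension mask.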
I will show how to easily fine-tune an open-source model with LoRA and how to deploy it to production with LitServe.
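As a concrete starting point for that workflow, here is a minimal sketch that attaches LoRA adapters with Hugging Face PEFT and serves the model with LitServe. The base model name, LoRA hyperparameters, and request format are placeholders, not details from the talk.

```python
# Sketch: LoRA fine-tuning setup (PEFT) plus a LitServe endpoint for deployment.
import litserve as ls
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "meta-llama/Llama-3.2-1B"  # placeholder open-source model

def build_lora_model():
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
    lora_cfg = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
    )
    # Wraps the frozen base model with trainable low-rank adapters.
    return get_peft_model(model, lora_cfg)

class LoRAServeAPI(ls.LitAPI):
    def setup(self, device):
        # In practice, load your trained adapter weights here instead of fresh ones.
        self._device = device
        self.tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
        self.model = build_lora_model().to(device).eval()

    def decode_request(self, request):
        return request["prompt"]

    def predict(self, prompt):
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self._device)
        with torch.no_grad():
            out = self.model.generate(**inputs, max_new_tokens=64)
        return self.tokenizer.decode(out[0], skip_special_tokens=True)

    def encode_response(self, output):
        return {"completion": output}

if __name__ == "__main__":
    ls.LitServer(LoRAServeAPI(), accelerator="auto").run(port=8000)
```

A client can then POST a JSON body such as {"prompt": "Hello"} to the /predict endpoint that LitServe exposes on port 8000.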
Aziz (Aleph Alpha) will give a talk, "How to Build an On-Premise LLM Finetuning Platform," exploring different fine-tuning approaches, including LoRA, QLoRA, and full finetuning, and discussing when to use each. We'll also show how to implement dynamic worker scheduling and automatic GPU resource allocation, helping you streamline training workflows and turbocharge your engineering teams, all while ensuring your data stays securely on your own infrastructure.
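To make the contrast between the approaches concrete, here is a rough sketch of how LoRA and QLoRA configurations differ in the Hugging Face stack. The model name and hyperparameters are placeholders, and none of the talk's platform details (scheduling, GPU allocation) are shown.

```python
# Sketch: the configuration difference between LoRA and QLoRA fine-tuning.
# QLoRA uses the same low-rank adapters, but attaches them to a 4-bit quantized base.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # placeholder

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Plain LoRA: the bf16 base weights stay frozen; only the adapters train.
lora_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16),
    lora_cfg,
)

# QLoRA: quantize the frozen base to 4-bit NF4 to cut GPU memory, then add adapters.
bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
qlora_base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_cfg)
qlora_model = get_peft_model(prepare_model_for_kbit_training(qlora_base), lora_cfg)

# Full finetuning, by contrast, updates every base weight: no adapters, the highest
# quality ceiling, and also the highest memory and compute cost of the three.
```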