Implementation of the StyleAligned technique for ComfyUI.
This implementation is split into two different nodes, and does not require any additional models or dependencies.
This node replaces the KSampler, and lets you reference an existing latent as a style reference. In order to retrieve the latent, you will need to perform DDIM inversion; an example workflow for this is provided here.
Above: a reference image, and a batch of images generated with the prompt 'a robot', using the reference image as the style input.
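DDIM inversion runs the sampler's update in reverse, stepping a clean latent back toward noise so that it can later be used as a style reference. The snippet below is a minimal sketch of a single inversion step in plain PyTorch, not code from this repository; the `eps_model` callable and the `alphas_cumprod` schedule are assumptions standing in for whatever model and scheduler your workflow uses.

```python
import torch

@torch.no_grad()
def ddim_invert_step(x_t, t, t_next, eps_model, alphas_cumprod):
    """One DDIM inversion step: move a latent from timestep t to the *noisier* t_next.

    x_t:            latent at timestep t, shape (B, C, H, W)
    eps_model:      callable (x, t) -> predicted noise, same shape as x (assumed helper)
    alphas_cumprod: 1-D tensor of cumulative alpha-bar values from the noise schedule
    """
    a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
    eps = eps_model(x_t, t)
    # Predict the clean latent implied by the current noise estimate.
    x0_pred = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
    # Deterministically re-noise that prediction to the next (higher-noise) timestep.
    return a_next.sqrt() * x0_pred + (1.0 - a_next).sqrt() * eps
```

Iterating this step across the full timestep schedule yields an approximately inverted latent, which can then be fed to the reference sampler as its style input.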
Parameters:

- `model`: The base model to patch.
- `share_attn`: Which components of self-attention are normalized. Defaults to `q+k`. Set to `q+k+v` for more extreme sharing, at the cost of quality in some cases (see the sketch after this list).
- `share_norm`: Whether to share normalization across the batch. Defaults to `both`. Set to `group` or `layer` to share only group or layer normalization, respectively.
- `scale`: The scale at which to apply the style-alignment effect. Defaults to `1`.
- `batch_size`, `noise_seed`, `control_after_generate`, `cfg`: Identical to the standard `KSampler` parameters.
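The `share_attn` option controls which self-attention projections are normalized toward the reference before attention is computed. The StyleAligned paper does this with AdaIN: each image's queries and keys (and optionally values) are rescaled to match the mean and standard deviation of the reference's projections, and the reference's keys and values are appended so every image can attend to them. The sketch below is a minimal PyTorch illustration of that idea, not this node's actual code; the `adain` and `share_attention` helpers and the tensor shapes are assumptions.

```python
import torch

def adain(x, ref, eps=1e-6):
    # Rescale x to match the per-channel mean/std of ref (statistics over the token axis).
    mu_x, std_x = x.mean(dim=1, keepdim=True), x.std(dim=1, keepdim=True) + eps
    mu_r, std_r = ref.mean(dim=1, keepdim=True), ref.std(dim=1, keepdim=True) + eps
    return (x - mu_x) / std_x * std_r + mu_r

def share_attention(q, k, v, ref_q, ref_k, ref_v, share_attn="q+k"):
    """Normalize the batch's attention projections toward the reference ones.

    q, k, v:              projections for the generated images, shape (B, T, C)
    ref_q, ref_k, ref_v:  projections for the style reference,  shape (1, T_ref, C)
    """
    q = adain(q, ref_q)
    k = adain(k, ref_k)
    if "v" in share_attn:  # 'q+k+v': more aggressive sharing
        v = adain(v, ref_v)
    # Append the reference's keys/values so every image also attends to the reference.
    k = torch.cat([ref_k.expand(k.shape[0], -1, -1), k], dim=1)
    v = torch.cat([ref_v.expand(v.shape[0], -1, -1), v], dim=1)
    return q, k, v
```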
Instead of referencing a single latent, this node aligns the style of every image in the batch with the first image generated in that batch, so the whole batch shares one style; a sketch of this mechanism follows the parameter list below.
A batch of generations with identical parameters, with the Batch Align node applied (left) and disabled (right).
Parameters:

- `model`: The base model to patch.
- `share_attn`: Which components of self-attention are normalized. Defaults to `q+k`. Set to `q+k+v` for more extreme sharing, at the cost of quality in some cases.
- `share_norm`: Whether to share normalization across the batch. Defaults to `both`. Set to `group` or `layer` to share only group or layer normalization, respectively.
- `scale`: The scale at which to apply the style-alignment effect. Defaults to `1`.
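Because Batch Align only patches the model, the actual sampling is presumably done by a regular KSampler downstream. Conceptually it is a self-attention patch that treats the first latent in the batch as the style reference for the rest. The sketch below shows roughly how such a patch could be attached through ComfyUI's `ModelPatcher`; it assumes the `set_model_attn1_patch` hook receives and returns the `(q, k, v)` projections, omits the AdaIN and norm-sharing steps from the previous sketch for brevity, and is an illustration rather than the node's actual implementation.

```python
import torch

def batch_align_patch(q, k, v, extra_options):
    # Treat the first image in the batch as the style reference for all the others:
    # every image additionally attends to the reference's keys and values.
    ref_k, ref_v = k[:1], v[:1]
    k = torch.cat([ref_k.expand(k.shape[0], -1, -1), k], dim=1)
    v = torch.cat([ref_v.expand(v.shape[0], -1, -1), v], dim=1)
    return q, k, v

def apply_batch_align(model):
    # Clone first so the patch does not mutate the original MODEL object.
    patched = model.clone()
    patched.set_model_attn1_patch(batch_align_patch)
    return patched
```

Running a normal KSampler on the patched model with a batch of latents then yields a batch aligned to its first image, as in the comparison shown above.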
Simply download or `git clone` this repository into `ComfyUI/custom_nodes/`. Example workflows are included in `resources/`.