Images & Design

Flux

Flux (developed by Black Forest Labs, founded by the original Stable Diffusion team) is the leading open-source image generation model family as of 2026. Flux.1 produces images that many professionals now consider superior to Midjourney for photorealism and prompt adherence — while offering the flexibility of open weights that can be fine-tuned, self-hosted, and accessed via API.

What Flux is

Flux is a family of text-to-image models from Black Forest Labs — the company founded in 2024 by the core team behind Stable Diffusion. The Flux.1 release in August 2024 set a new benchmark for open-source image generation, producing images that rival or exceed proprietary models like Midjourney and DALL-E 3 on many measures, particularly photorealism and prompt adherence.

Unlike Midjourney (closed, web-only), Flux offers open weights — meaning the model itself can be downloaded, fine-tuned, and self-hosted. This makes Flux the foundation for a large ecosystem of fine-tuned models, workflows, and commercial applications.

Why Flux matters for professionals: Flux.1 Pro achieves state-of-the-art prompt adherence — it follows complex text descriptions more accurately than any previous open model. For commercial image production, this reliability is the key advantage over earlier diffusion models.

The three Flux.1 variants

  • Flux.1 Pro — highest quality, API-only access (bfl.ml), best for commercial production
  • Flux.1 Dev — open weights, non-commercial licence, good for research and personal projects
  • Flux.1 Schnell — fastest, Apache 2.0 licence (fully open source), slightly lower quality but 10x faster than Pro

Where to access Flux

  • Replicate — run Flux.1 models via API or web UI at replicate.com
  • Hugging Face — Flux.1 Schnell and Dev available for free on HF Spaces
  • Black Forest Labs API — api.bfl.ml for Flux.1 Pro in production applications
  • ComfyUI / Automatic1111 — run locally with downloaded model weights
  • Third-party tools — Flux is integrated into many image generation platforms and apps
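As a concrete illustration of the API route, the sketch below calls Flux.1 Schnell through Replicate's Python client. The model slug and input fields (`aspect_ratio`, `num_outputs`) match Replicate's published Flux endpoints at the time of writing, but check the current schema on replicate.com before relying on them.

```python
def build_flux_input(prompt: str, aspect_ratio: str = "1:1",
                     num_outputs: int = 1) -> dict:
    """Assemble the input payload for a Flux generation request."""
    return {
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "num_outputs": num_outputs,
    }

def generate_image(prompt: str,
                   model: str = "black-forest-labs/flux-schnell"):
    """Run a Flux model on Replicate; returns image URLs."""
    import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set
    return replicate.run(model, input=build_flux_input(prompt))
```

Swapping the model slug (e.g. to a Dev endpoint) is the only change needed to target a different Flux variant hosted on the same platform.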

Flux vs Midjourney

Flux advantages: Better prompt adherence, open weights, API access, fine-tuning capability, lower cost at scale, no style lock-in.

Midjourney advantages: Stronger aesthetic style (the distinctive "Midjourney look"), active community, built-in upscaling and variation tools, no technical setup required.

For commercial applications where accuracy to specification matters, Flux is often the better choice. For artistic exploration where the model's aesthetic sensibility is part of the value, Midjourney remains preferred by many creatives.

Writing effective Flux prompts

Flux responds well to detailed, specific prompts — it follows instructions more literally than Midjourney, which adds its own aesthetic interpretation. This means you get what you ask for, so ask with precision.

Good prompt structure for Flux: [subject] + [action/pose] + [setting] + [lighting] + [style/medium] + [camera/lens details] + [mood]

For photorealism: include camera model, lens focal length, aperture, lighting type. For illustration: specify the exact style (editorial illustration, isometric vector, gouache painting). Flux takes these literally.
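If you generate prompts programmatically, the slot structure above can be assembled mechanically. This is an illustrative helper, not part of any Flux tooling — it just joins the slots in the recommended order and skips the ones you leave empty:

```python
def flux_prompt(subject: str, action: str = "", setting: str = "",
                lighting: str = "", style: str = "", camera: str = "",
                mood: str = "") -> str:
    """Join prompt slots in the recommended Flux order, skipping empties."""
    parts = [subject, action, setting, lighting, style, camera, mood]
    return ", ".join(p.strip() for p in parts if p and p.strip())

prompt = flux_prompt(
    "ceramic coffee mug",
    setting="on dark slate",
    lighting="warm tungsten backlight",
    camera="85mm f/1.8, shallow depth of field",
)
# → "ceramic coffee mug, on dark slate, warm tungsten backlight,
#    85mm f/1.8, shallow depth of field"
```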

Generate a product photo
A [product name and brief description], placed on [surface — e.g. white marble / weathered wood / dark slate], [lighting — e.g. soft studio lighting from the left / harsh afternoon sun / warm tungsten backlight], shot on a Sony A7R V with 85mm f/1.8 lens, shallow depth of field, professional product photography, no shadows on background, [colour tone — e.g. clean and minimal / moody and dark].
Create an editorial illustration
[Scene description — who, what, where]. Editorial illustration style for a magazine article. [Art direction — e.g. bold graphic shapes / detailed pen and ink / painterly watercolour]. Colour palette: [describe]. The image should convey [emotion/theme]. No text in image.
Generate a portrait
Portrait of [describe person — age range, features, expression], photographed with [lighting — e.g. Rembrandt lighting / window light from the left / ring light], [background — e.g. neutral grey backdrop / blurred urban environment / solid dark background], shot on Canon 5D Mark IV with 135mm f/2 lens. [Mood]: [describe]. Photorealistic.
Create a concept render
[Product/object] concept render, [material description — e.g. matte white plastic with brushed aluminium accents], placed on [surface], [lighting — e.g. three-point studio lighting], isometric view, product design render, clean white background, professional product visualisation.
Generate UI/UX mockup screenshot
A realistic smartphone screen showing [describe the interface — app name, type, what it shows]. The screen displays [specific content]. Realistic phone model [iPhone 15 Pro / Samsung S24], held [angle — e.g. at a slight angle, front-facing], clean background, professional app mockup photography style.
Iterate on a Flux image
I generated an image with this prompt: [paste prompt]. The result was [describe what you got]. I want to change [specific element] while keeping [specific element]. Write 3 modified prompt variations that address the change.
Fine-tuning brief for LoRA training
I want to fine-tune a Flux LoRA on [subject — e.g. my product / a person / a visual style]. Describe: (1) the ideal training data I should collect (quantity, variety, image quality requirements), (2) the recommended training parameters for Flux LoRA, (3) how to write activation tokens in the training captions, (4) how to test the LoRA quality after training.
Generate a consistent character across scenes
I need to generate a consistent character across multiple images: [describe character — appearance, clothing, distinctive features]. The Flux trigger word or description I will use consistently is: [describe]. Generate prompts for 3 different scenes featuring this character, each with a different setting and action, using the character description consistently.

Architecture — Flux as a flow-matching model

Flux.1 is built on a multimodal diffusion transformer (MMDiT) architecture combined with flow matching — a generative modelling approach that differs from standard diffusion. Rather than learning to reverse a Gaussian noising process (as in Stable Diffusion), flow matching learns a vector field that transports samples from the noise distribution to the image distribution. This produces more stable training and better gradient flow through the model, which contributes to Flux's improved prompt adherence.

The Flux.1 model has 12 billion parameters (the Pro and Dev variants) — approximately 3x larger than Stable Diffusion XL. This scale is a major factor in its improved quality and instruction-following capability.
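To make the flow-matching objective concrete, here is a toy NumPy sketch of the conditional flow-matching loss with a linear noise-to-data path. This is an illustration of the general technique, not Flux's actual training code: points on the path are x_t = (1 - t)·x0 + t·x1, and the model regresses the constant velocity x1 - x0.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x1, t_samples):
    """Conditional flow-matching loss on a linear noise-to-data path."""
    x0 = rng.standard_normal(x1.shape)        # noise sample
    t = t_samples.reshape(-1, 1)              # times in [0, 1]
    xt = (1 - t) * x0 + t * x1                # point on the path
    target = x1 - x0                          # ground-truth velocity
    pred = model(xt, t)                       # v_theta(x_t, t)
    return np.mean((pred - target) ** 2)

# Toy "model" that predicts zero velocity everywhere; in Flux this
# role is played by the 12B-parameter transformer.
zero_model = lambda xt, t: np.zeros_like(xt)
x1 = rng.standard_normal((8, 4))              # stand-in data batch
t = rng.uniform(0.0, 1.0, size=8)
loss = flow_matching_loss(zero_model, x1, t)
```

At sampling time the learned vector field is integrated as an ODE from noise to image, which is why flow-matching models can generate in few steps (Schnell's speed advantage builds on this).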

Black Forest Labs — the team

Black Forest Labs was founded in August 2024 by Robin Rombach, Andreas Blattmann, Patrick Esser, and colleagues — the core research team that created Stable Diffusion at Ludwig Maximilian University of Munich and CompVis, and later at Stability AI. Flux.1 launched alongside the company's public debut, shortly after the team's departure from Stability AI. The original Latent Diffusion (LDM) paper (Rombach et al., arXiv:2112.10752) is the foundational work underlying both Stable Diffusion and Flux's architectural lineage.

Licensing

Flux.1 Schnell is released under Apache 2.0 — a permissive licence allowing commercial use, modification, and redistribution. Flux.1 Dev is released under a custom non-commercial licence — free for research and personal use; commercial use requires a licence from Black Forest Labs. Flux.1 Pro is only accessible via the paid API; model weights are not released.

LoRA fine-tuning on Flux

Flux.1 Dev's open weights support LoRA (Low-Rank Adaptation) fine-tuning — training a small set of additional parameters to adapt the model to a specific style, person, or product without retraining the full model. The standard Flux LoRA training workflow uses approximately 15-30 high-quality training images and runs on a consumer GPU (24GB VRAM) using tools like Kohya_ss or Ostris's AI Toolkit. Community-trained Flux LoRAs for various styles and subjects are widely shared on Civitai and Hugging Face.
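The "small set of additional parameters" in LoRA is a pair of low-rank matrices per adapted layer. This NumPy sketch shows the core idea (W is frozen; only A and B are trained), independent of any particular training tool:

```python
import numpy as np

def apply_lora(W, A, B, alpha: float = 16.0):
    """Return the adapted weight W + (alpha / r) * B @ A.

    W: (d_out, d_in) frozen base weight
    A: (r, d_in), B: (d_out, r) — only these r*(d_in + d_out)
    values are trained, versus d_out*d_in in the full layer.
    """
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 8
W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01     # small random init
B = np.zeros((d_out, r))                      # zero init: adapter starts as a no-op
W_adapted = apply_lora(W, A, B)
```

Because B is initialised to zero, the adapted model is identical to the base model at the start of training, and the update B @ A can never exceed rank r — which is why LoRA files are megabytes rather than the gigabytes of the full 12B-parameter model.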

Source note: Technical specifications from Black Forest Labs documentation at blackforestlabs.ai. Pricing from api.bfl.ml. Architecture from the Flux.1 technical report. All verified April 2026.