Techalicious Academy / 2025-12-11-stable-diffusion

(Visit our meetup for more great tutorials)

STABLE DIFFUSION OVERVIEW

What's Stable Diffusion?

Open-source AI image generation from Stability AI. Unlike DALL-E or Midjourney, Stability AI released the model weights and code publicly back in August 2022, so anyone can run it on their own hardware.

That openness changed everything. People started downloading it, studying how it works, fine-tuning it, and sharing what they made. A whole ecosystem popped up almost overnight.

Stable Diffusion vs AUTOMATIC1111

These are NOT the same thing:

Stable Diffusion = the AI model itself (the brain)
AUTOMATIC1111 (A1111) = a web interface to control it (the dashboard)

A1111 is a community project, NOT made by Stability AI. It's just the most popular way to use Stable Diffusion. Other interfaces exist, such as ComfyUI, InvokeAI, and Fooocus.

Tonight we're using A1111 because it's the most popular interface, it's well documented, and most community guides and model pages assume you're using it.

The Open Secret: Training Data Reality

Here's something you should know upfront. Cloud AI services like DALL-E and Midjourney heavily filter their outputs. You type a prompt, you get a clothed person. That's by design.

Local Stable Diffusion models? Different story. Many community models are trained on uncensored datasets. Stability AI maintains plausible deniability, but the reality is: these models learned from everything.

What this means practically: an unfiltered model can return NSFW or otherwise surprising images, sometimes even from innocuous prompts.

None of this is a value judgment. Just know what you're working with. If you're demoing at work or showing family, test your prompts first and use appropriate negative prompts. The tutorial covers this in the negative prompt section.
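One convenient way to test prompts ahead of a demo is A1111's HTTP API, which is available when the webui is launched with the `--api` flag. Below is a minimal sketch of a txt2img request that includes a negative prompt, using only the Python standard library; the prompt text, step count, and helper names are illustrative, not part of any official client.

```python
import json
import urllib.request

# A1111 exposes a REST API when started with --api; this is the
# txt2img route on the default local address.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_payload(prompt: str, negative_prompt: str = "") -> dict:
    """Assemble a minimal txt2img request body."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,  # terms the model should steer away from
        "steps": 25,
        "width": 1024,   # SDXL's native resolution
        "height": 1024,
    }

def generate(payload: dict) -> dict:
    """POST the payload and return the JSON response (images come back base64-encoded)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    payload = build_payload(
        "portrait photo of a hiker on a mountain trail",
        negative_prompt="nsfw, nude, blurry, extra fingers",
    )
    # generate(payload)  # uncomment with a running webui started with --api
```

Batch-testing a prompt list this way is much faster than clicking through the UI before a work demo.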

Model Generations

SD 1.x (2022)

Original release. Makes 512x512 images. Runs on consumer GPUs.
Good for 2022, showing its age now.

SD 2.x (Late 2022)

Better quality but broke compatibility with community tools.
Nobody really adopted it.

SDXL (2023) - THIS IS WHAT WE USE

Current generation. Native 1024x1024. Way better at composition,
anatomy, and text. Models are bigger (~6.5GB vs ~2GB) but the
results speak for themselves.

Why Open Source Matters Here

Because Stability released everything openly, the community built specialized versions. Want anime? Someone trained a model for that. Photorealistic portraits? Got it. Western comics? Multiple options.

The models in this tutorial are community fine-tunes built on SDXL, each dialed in for a different look.
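As a rough sketch of how those fine-tunes get picked up: A1111 loads any checkpoint file you drop into the `models/Stable-diffusion` folder of its install directory (that layout is the stock default). The helper below, whose name is ours for illustration, lists what would show up in the webui's model dropdown.

```python
from pathlib import Path

def list_checkpoints(webui_root: str) -> list[str]:
    """Return checkpoint filenames A1111 would pick up from its models folder."""
    model_dir = Path(webui_root) / "models" / "Stable-diffusion"
    exts = {".safetensors", ".ckpt"}  # the two checkpoint formats A1111 recognizes
    return sorted(p.name for p in model_dir.glob("*") if p.suffix in exts)
```

After copying a downloaded fine-tune into that folder, hit the refresh button next to the model dropdown rather than restarting the webui.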

Links

A1111 repo: https://github.com/AUTOMATIC1111/stable-diffusion-webui

WebUI runs at: http://127.0.0.1:7860