Build a UI for Stable Diffusion

How to build a UI for a Stable Diffusion texture generator using Python and Replicate on Windows and macOS

Matthäus Niedoba
October 21, 2022
15 min read

Stable Diffusion is an open source deep learning text-to-image model that has recently gained a lot of popularity.

As well as generating concept art, Stable Diffusion can also generate textures and restore or modify existing images.

Unfortunately, many of these possibilities are accessible to developers only. In this tutorial, you will build an interface that lets your artists experiment with various machine learning models, all in about 30 lines of Python and with no complicated setup.

Using Replicate and Anchorpoint

To keep things simple, we will use Replicate and Anchorpoint. Replicate gives us access to Stable Diffusion and other machine learning models through its API, so you don't need to download any training data. Anchorpoint helps us set up a UI (user interface) very quickly, so you don't need to install and configure Qt for Python.
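
At its core, the Replicate Python client needs only a few lines. Here is a minimal sketch of the call this tutorial is built around, assuming the replicate package is installed and the REPLICATE_API_TOKEN environment variable is already set:

import replicate

# Fetch the hosted Stable Diffusion model and run a prediction.
# The result is a list of URLs pointing to the generated images.
model = replicate.models.get("stability-ai/stable-diffusion")
output = model.predict(prompt="electric sheep, neon, synthwave")
print(output[0])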

Python Code

You can download the YAML and Python files and place them in your Anchorpoint project. Keep in mind that you have to create your own API token for Replicate.
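
The YAML file is what registers the Python script as a command in Anchorpoint. The fields below are an assumption based on typical Anchorpoint action definitions and may differ between versions, so double-check the Anchorpoint documentation:

# Hypothetical action definition; field names may vary between Anchorpoint versions
action:
  name: "Texture from Prompt"
  version: 1
  id: "example::texture_from_prompt"
  type: python
  script: "texture_from_prompt.py"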


# Import the modules
# With this you can get information from Anchorpoint e.g. which file is selected or in which folder the command is executed.
import anchorpoint as ap
# With this you can control the attribute system of Anchorpoint
import apsync as aps
# The replicate module to control machine learning models in the cloud
import replicate
# The requests module for downloading images from the server and the os module for file operations.
import requests, os

# Get the context of the selected file e.g. the file path
ctx = ap.Context.instance()

# The image generation function. It also downloads the image from the replicate servers.
def generate(message):
    # Save your Replicate credentials (the token) as an environment variable. Replace the placeholder with your own token.
    os.environ["REPLICATE_API_TOKEN"] = "your_replicate_api_token"
    # The global progress indicator so that the user knows that Anchorpoint is doing something in the background.
    progress = ap.Progress("Running Stable Diffusion", show_loading_screen=True)
    # Select the model. Here you can find an overview of models: https://replicate.com/
    model = replicate.models.get("stability-ai/stable-diffusion")
    # The image is saved on the server. You need the URL to download it.
    output_url = model.predict(prompt=message, num_inference_steps=200)
    # We need a counter for the filename later. Otherwise you will always overwrite your last image.
    count = len(os.listdir(ctx.path))
    # The file path and filename where the image will be downloaded later on
    file_path = os.path.join(ctx.path, f"image_{count}.png")
    # Here we connect to the server
    http_request = requests.get(output_url[0], stream=True)
    
    # Download the image in chunks and write it to the file.
    if http_request.status_code == 200:
        with open(file_path, 'wb') as file:
            for chunk in http_request:
                file.write(chunk)

    # Use attributes in Anchorpoint to attach the prompt to the file
    aps.set_attribute_text(file_path, "Prompt", message)
    # Hide the progress indicator now that the image is saved
    progress.finish()

# The function that happens after the user clicks "Generate Texture"
def click(dialog):
    # Get the value of the text field into a variable
    prompt = dialog.get_value("prompt")
    # Start the generation with Stable Diffusion in another thread so that the application is not blocked
    ctx.run_async(generate, prompt)
    # Close the dialog
    dialog.close()

# Build the dialog
def create_dialog():
    # Initialize dialog
    dialog = ap.Dialog()
    dialog.title = "Texture from Prompt"
    # Add a label and a text field. The placeholder is the grayed out text that is useful as a tip for the user
    dialog.add_text("Prompt: ").add_input(placeholder="electric sheep, neon, synthwave", var="prompt", width=400)
    # Add a button that executes a function as a callback when clicked. 
    dialog.add_button("Generate Texture", callback=click)
    # Open the dialog
    dialog.show()

# Start the application with the dialog
create_dialog()

Extension options

In the end, you have a tool that you can share with the artists on your team or use yourself. You can extend it for batch texture generation, for creating alternatives to existing images, or, with an image-to-prompt model, for scanning your whole asset library and tagging it automatically, as in the sketch below.
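
For example, a hypothetical batch variant could reuse the generate() function from the script above and run it over a list of prompts (the prompts here are made up):

# Hypothetical batch extension: generates one texture per prompt.
def generate_batch(prompts):
    for prompt in prompts:
        generate(prompt)

ctx.run_async(generate_batch, [
    "rusty metal plate, scratched, seamless",
    "mossy cobblestone, top-down, seamless",
    "worn leather, macro, photorealistic",
])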

Build tools for your artists

Anchorpoint brings you all the components and a simple Python API to build the pipeline for your studio.
Learn about Anchorpoint
