ATk v1.3.2 BIG improvements
I recommend using WinSDlauncher (Windows OS App) side by side with ATk to run it over LAN.
This update took longer than expected. However, I believe that the implementation of Stable Diffusion and ControlNET based on a single CN unit is now substantially complete. As a reminder, here you will find the new Installation user-guide PDF, useful for setting up StableDiffusion (A1111) and ControlNET before running the Ambrosinus Toolkit (AI subcategory).


After fixing and improving several functionalities of the components designed for text-to-image, image-to-image, and outpainting, I decided to integrate some of the main features of Stable Diffusion and ControlNET directly into the context menu of these components. For example, I added the ability to select “low vram” or “pixel perfect” modes, as well as the option to choose the type of sampling performed by the samplers (Automatic/Karras/Uniform). As part of the T2I procedures, I integrated a CSV sheet of more than 500 pre-set styles collected from the web; the styles.csv file (downloadable from here) must be copied into the main directory of Stable Diffusion (…\stable-diffusion-webui\styles.csv). Through the new SDstyles component, users can now select their desired style and apply it to the input prompt to modify the final image accordingly.
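The style-merging step can be sketched in a few lines of Python. The styles.csv layout below (name, prompt, negative_prompt columns, with an optional `{prompt}` placeholder) follows the convention used by the A1111 WebUI; this is a minimal sketch of that expansion, not the SDstyles component's actual code, and the "cinematic" style entry is a made-up example.

```python
import csv
import io

def load_styles(csv_text):
    """Parse a styles.csv sheet (columns: name, prompt, negative_prompt)
    into a dict keyed by style name."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["name"]: row for row in reader}

def apply_style(user_prompt, style):
    """Merge the user's prompt with a preset style: a '{prompt}' placeholder
    is replaced in place; otherwise the style text is appended after a comma."""
    template = style.get("prompt", "")
    if "{prompt}" in template:
        return template.replace("{prompt}", user_prompt)
    return f"{user_prompt}, {template}" if template else user_prompt

# Tiny inline sheet standing in for the downloadable styles.csv:
sheet = 'name,prompt,negative_prompt\ncinematic,"{prompt}, dramatic lighting, film grain",blurry\n'
styles = load_styles(sheet)
print(apply_style("a modern villa", styles["cinematic"]))
# -> a modern villa, dramatic lighting, film grain
```

The negative_prompt column would be merged into the request's negative prompt in the same way.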

Regarding the I2I (image-to-image) processes, the context menu has been enhanced with two additional sets of settings: the first one allows users to specify whether ControlNET should apply inpainting to the masked area or the unmasked one (with the masked area being the default option). The three options Fill/Latent noise/Latent nothing are inpainting modes that enable you to create or alter pixels within the masked area. The second set, “Mask blur”, allows you to define the mask’s sharpness where it intersects with the source image. Additionally, for the I2I procedures, I have introduced the IP2P (InstructPix2Pix) processor, which operates pixel by pixel to transform the source image into the one described in the initial prompt, with or without a mask.
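For readers scripting against the WebUI directly, the inpainting settings described above map onto fields of A1111's /sdapi/v1/img2img request body. The sketch below assembles such a payload; the field names follow the public WebUI API as commonly documented, while the function name and defaults are my own assumptions, not ATk's internal implementation.

```python
# Inpainting fill modes, as exposed in the component's context menu.
FILL_MODES = {"fill": 0, "original": 1, "latent noise": 2, "latent nothing": 3}

def build_inpaint_payload(prompt, init_image_b64, mask_b64,
                          fill_mode="fill", mask_blur=4, invert_mask=False):
    """Assemble a request body for A1111's /sdapi/v1/img2img endpoint.
    init_image_b64 and mask_b64 are base64-encoded images."""
    return {
        "prompt": prompt,
        "init_images": [init_image_b64],
        "mask": mask_b64,
        "mask_blur": mask_blur,                    # softness of the mask edge
        "inpainting_fill": FILL_MODES[fill_mode],  # Fill / Latent noise / Latent nothing
        "inpainting_mask_invert": 1 if invert_mask else 0,  # 0 = masked area (default), 1 = unmasked
        "denoising_strength": 0.75,                # assumed default, tune per job
    }
```

Posting this dict as JSON to a running WebUI instance (with the --api flag) would perform the masked inpainting described above.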
Mask images are a crucial component of I2I procedures. For this reason, in addition to recoding the Python Cluster of the GrayGaussMask component in C#, I decided to enhance the ImageMask component to better handle mask colours, ensure proper output image formatting, and provide the option to invert the black and white mask colours.
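The two core mask operations mentioned here, inversion and clean black/white output, reduce to simple per-pixel arithmetic. A minimal sketch on a grayscale pixel grid (nested lists of 0-255 values standing in for an image buffer; not the ImageMask component's actual code):

```python
def invert_mask(pixels):
    """Invert an 8-bit grayscale mask so black and white swap."""
    return [[255 - v for v in row] for row in pixels]

def binarize(pixels, threshold=128):
    """Clamp a grayscale mask to pure black/white, the formatting
    inpainting masks generally expect. Threshold is an assumption."""
    return [[255 if v >= threshold else 0 for v in row] for row in pixels]

mask = [[0, 200], [90, 255]]
print(invert_mask(mask))  # -> [[255, 55], [165, 0]]
print(binarize(mask))     # -> [[0, 255], [0, 255]]
```

In practice the same logic runs over a bitmap's channels rather than Python lists, but the colour handling is identical.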
Since the ControlNET processors themselves can generate masks (such as canny, mlsd, depth, lineart, scribble, etc.), I added a new “SetPreProcess” component that allows users to generate such masks before initiating the GenAI processes. Users can modify the features and enhance the final effects of certain masks using the A-B threshold sliders. By default, this process occurs in a temporary folder that is deleted when the user sets the component to false. However, through the context menu, users can make this folder permanent and reuse the masks in various procedures. The input Type for this component must come from the same value list used for T2I or I2I; however, by enabling the “Autopopulate” option, users can automatically populate the connected value list with the main ControlNET processors.
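For reference, the ControlNet WebUI extension exposes a detect route that runs a preprocessor standalone, which is the kind of call a pre-processing step like this relies on. The sketch below builds such a request body; the field names follow the extension's documented API, while the function name, defaults, and threshold values are illustrative assumptions.

```python
def build_detect_payload(module, image_b64, resolution=512,
                         threshold_a=100, threshold_b=200):
    """Request body for the ControlNet extension's /controlnet/detect route,
    which runs a preprocessor (canny, mlsd, depth, lineart, scribble, ...)
    on an input image and returns the resulting mask. threshold_a/b map to
    the component's two A-B sliders; their meaning depends on the module
    (e.g. low/high edge thresholds for canny)."""
    return {
        "controlnet_module": module,
        "controlnet_input_images": [image_b64],
        "controlnet_processor_res": resolution,
        "controlnet_threshold_a": threshold_a,
        "controlnet_threshold_b": threshold_b,
    }
```

An “Autopopulate”-style feature would similarly query the extension for its list of available modules and feed that into the connected value list.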
The most effective way to integrate StableDiffusion + CN into the Grasshopper workflow is to manage the saving of the 3D model image shown in the Rhino viewport in a simple and fast manner. Through the two components, “ARatioImg” and “ViewCapture,” I improved the management of image sizing (whether passed as the baseIMG input or captured from the screen), the modification of the image’s width and height according to the Aspect Ratio, and the management of the frame format in terms of AR by displaying a preview of the frame directly in the Rhino viewport. This component can also save the image with different zoom factors and image formats.
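The aspect-ratio sizing step can be illustrated with a short sketch: given a base width and a target ratio, derive a frame whose sides are rounded to multiples of 8, which SD samplers generally require. This is a minimal illustration under that assumption, not the ARatioImg component's actual code.

```python
def fit_to_aspect(base_width, aspect_w, aspect_h, multiple=8):
    """Derive a (width, height) frame for a target aspect ratio,
    rounding both sides to the nearest multiple of `multiple`."""
    def snap(v):
        # Round to the nearest multiple, never collapsing below one step.
        return max(multiple, int(round(v / multiple)) * multiple)
    width = snap(base_width)
    height = snap(base_width * aspect_h / aspect_w)
    return width, height

print(fit_to_aspect(1000, 16, 9))  # -> (1000, 560)
print(fit_to_aspect(512, 1, 1))    # -> (512, 512)
```

The same arithmetic, run the other way, gives the preview frame to draw over the viewport capture before saving.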

Running ATk over the Local Area Network (LAN) side by side with WinSDlauncher, a Windows OS app for StableDiffusion (A1111)
Hold on! The title contains too much information. You’re right, but the matter is actually much simpler than it appears. Let’s address this step by step. As I mentioned in previous articles about ATk development within Grasshopper, StableDiffusion (A1111) is a project that can run on its own Local Area Network (LAN). During my presentation at the EDD24 event at the European Institute of Design in Milan (PART 1), I explained the advantage of this approach: it allows you to dedicate a more powerful server machine to handle image processing through GenAI, while using a less powerful device, such as a laptop, to run ATk through Grasshopper and launch procedures remotely. Here’s the scenario I envisioned:
In this version of ATk, I’ve focused on integrating the WinSDlauncher app, which I developed a couple of years ago, before creating the Ambrosinus Toolkit (a behind-the-scenes story). This Windows OS application was developed alongside ATk and was released with version 1.1.4 (check more info here). The new version enables users to:
- Open a custom port on their LAN network
- Detect the available IPv4 address
- Enable the Windows firewall for outgoing traffic on that port (within their LAN)
Additionally, WinSDlauncher allows users to launch the WebUI client of (A1111), which can then be managed remotely from a laptop connected to the same LAN network.
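The IPv4-detection step above can be sketched in Python. This is a common best-effort trick (a UDP socket "connected" to a public address never sends traffic, but forces the OS to choose the outgoing interface); it is an assumption on my part, not necessarily how WinSDlauncher does it.

```python
import socket

def detect_lan_ipv4():
    """Best-effort LAN IPv4 detection: connecting a UDP socket performs no
    I/O, but lets us read back the address of the interface the OS would
    route through. Falls back to loopback when there is no route."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works; nothing is sent
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()
```

With the address and port known, the server side then launches the WebUI so it listens on the LAN (A1111 supports --listen, --port, and --api command-line flags for exactly this), and ATk on the laptop points its requests at that address.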

To enable users to manage the GenAI process both locally and remotely from their laptop (since the CMD window only appears on the server PC, not the laptop), I found it valuable to implement a separate management window for the WebUI image processing. This window opens automatically whenever a user initiates one of the three generative macroprocesses (T2I, I2I, or Outpainting). This enhancement not only allows users to view the progress of image generation but also enables them to skip or interrupt the process without having to restart the CMD window.
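A management window like this maps naturally onto the WebUI's progress API: A1111 exposes a GET /sdapi/v1/progress endpoint, and POSTs to /sdapi/v1/skip and /sdapi/v1/interrupt stop the current job, which is what skip/interrupt buttons would call. The sketch below only formats a progress response into a status line; the function name and sample dict are illustrative, not ATk's actual code.

```python
def summarize_progress(response):
    """Turn a /sdapi/v1/progress JSON response into a short status line
    suitable for a remote management window."""
    pct = round(response.get("progress", 0.0) * 100)
    state = response.get("state", {})
    step = state.get("sampling_step", 0)
    steps = state.get("sampling_steps", 0)
    return f"{pct}% (step {step}/{steps})"

# Shape of a typical progress response (values are made up):
sample = {"progress": 0.5, "state": {"sampling_step": 10, "sampling_steps": 20}}
print(summarize_progress(sample))  # -> 50% (step 10/20)
```

Polling this endpoint every second or so is enough to drive a live progress bar from the laptop, without any access to the server's CMD window.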

How to set up ATk + WinSDlauncher over LAN
As always, a video is worth more than a thousand words😉
In the upcoming version... Multi ControlNET units (partially developed in v1.3.2)
This implementation partially completes the development of the ATk subcategory dedicated to StableDiffusion and ControlNET. While it will require some reworking of the logic behind the components I have broadly categorized as GenAI, several key features, such as style transfer, are already functional. I am releasing these new components as Work in Progress (WIP) while preparing a new release that will address features related to multi ControlNET units (mCNu).
mCNunits Video demo
As always, a video is worth more than a thousand words😉