
lucianoambrosini.it

ControlNet

AI Outpainting with Stable Diffusion via Grasshopper


AI as Rendering Engine: Variation mode locally via Grasshopper

With Ambrosinus Toolkit v1.2.6 I have finally brought the “Variation” mode to the locally executable Stable Diffusion-based AI component package.
This well-known mode works by altering both the initial prompt and the image provided as input: unlike Text-to-Image (T2I), Image-to-Image (I2I) always requires an initial source image.
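To make the T2I/I2I distinction concrete, here is a minimal sketch of how an I2I request could be assembled for a locally running Stable Diffusion server. It assumes an AUTOMATIC1111-style `/sdapi/v1/img2img` endpoint; the endpoint name, parameter names, and default values are assumptions for illustration, not confirmed internals of the Ambrosinus Toolkit.

```python
import base64
import json

def build_img2img_payload(image_bytes, prompt, denoising_strength=0.55):
    """Build the JSON body for an I2I call: unlike T2I, it must carry
    a base64-encoded source image alongside the text prompt."""
    return {
        # The source image is mandatory for Image-to-Image.
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "prompt": prompt,
        # How far the result may drift from the source (assumed default).
        "denoising_strength": denoising_strength,
        "steps": 30,
    }

# Example: a dummy byte string stands in for a real PNG read from disk.
payload = build_img2img_payload(b"\x89PNG...", "a modern villa at sunset")
print(json.dumps(payload)[:60])
```

In a real workflow the payload would be POSTed to the local server (typically something like `http://127.0.0.1:7860/sdapi/v1/img2img`), and the base64 images in the response decoded back into files.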

MeshyAI toolbox for creating 3D assets now through Grasshopper

As many already know, new AI-dedicated platforms and services are multiplying rapidly. Meshy is one of them: it promises to let content creators effortlessly turn text and images into captivating 3D assets in under a minute. While waiting for further developments, I have started integrating these features into Ambrosinus Toolkit v1.2.6 😉
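As a rough idea of what such an integration involves, the sketch below builds a text-to-3D task request of the kind a Grasshopper component might submit to Meshy's REST API. The endpoint URL, auth scheme, and payload fields are all assumptions modeled on typical task-based AI APIs, not verified details of Meshy or of the toolkit integration.

```python
import json

# Assumed endpoint, for illustration only.
MESHY_ENDPOINT = "https://api.meshy.ai/v2/text-to-3d"

def build_text_to_3d_request(api_key, prompt, art_style="realistic"):
    """Return (headers, body) for a hypothetical text-to-3D task submission."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        "Content-Type": "application/json",
    }
    body = {
        "mode": "preview",   # assumed: request fast draft geometry first
        "prompt": prompt,
        "art_style": art_style,
    }
    return headers, json.dumps(body)

headers, body = build_text_to_3d_request("YOUR_API_KEY", "a weathered bronze statue")
```

Services like this are usually asynchronous: the submission returns a task id, which the client then polls until the generated 3D asset is ready to download.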

Interested in my work? See the *About Me* section and contact me by email or via social media.