Video Gallery
All videos from my YouTube channel. Filter them by clicking on the tags below.
- All
- ai
- development
- grasshopper
- rhinoceros
- environmental
- computational
- b.i.m.
- design
- 3dprinted
- modelling
- structural
- viz
- coding
AI as Rendering Engine: Ambrosinus Toolkit v1.2.1
📣 Since Ambrosinus Toolkit v1.1.9 it has been possible to run Stable Diffusion and the wonderful neural network named ControlNET locally via Grasshopper. The new components are based on the wonderful AUTOMATIC1111 project, which exploits the full power of Stable Diffusion entirely locally. And it doesn’t end there: through the extension that integrates a second neural network, ControlNET v1.0 (developed by Stanford researchers), it is possible to bring the power of AI into the main creative activities carried out by architects and designers – AI as a rendering engine! (or something really promising 🧐). 💡 With v1.2.1 you can interact with the webui-user.bat file, work with Rhino views (so you can pass a screenshot of your model directly to the AI engine to get a conceptual render) and upscale the image! ✨ The Ambrosinus-Toolkit project is always under development. Stay tuned!
Win SD Launcher v1.1.2: a Windows app for quickly launching the WebUI from AUTOMATIC1111
💡 I developed this simple tool to quickly launch the Stable Diffusion WebUI from the AUTOMATIC1111 project. It is a .EXE file that runs on Windows. Since WinSDlauncher v1.1.1 it is no longer required to run the app as a Windows administrator! Latest update… 📣 This version adds the possibility to set some command-line arguments and to overwrite/create the “webui-user.bat” file accordingly. ✨ The Ambrosinus-Toolkit project is always under development, also outside the Grasshopper environment as AmbrosinusDEV experiments. Stay tuned!
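The overwrite/create step above can be pictured as generating the standard AUTOMATIC1111 launcher script with the chosen arguments baked in. A minimal sketch (illustrative only, not the actual WinSDlauncher code; the .bat template follows the stock webui-user.bat layout):

```python
# Sketch: assemble a webui-user.bat with custom COMMANDLINE_ARGS,
# mirroring the stock AUTOMATIC1111 template (illustrative code).

def build_webui_bat(args):
    """Return the text of a webui-user.bat enabling the given command-line args."""
    lines = [
        "@echo off",
        "",
        "set PYTHON=",
        "set GIT=",
        "set VENV_DIR=",
        f"set COMMANDLINE_ARGS={' '.join(args)}",
        "",
        "call webui.bat",
    ]
    return "\r\n".join(lines)  # Windows line endings for the .bat file

bat_text = build_webui_bat(["--api", "--xformers"])
# open("webui-user.bat", "w", newline="").write(bat_text)  # uncomment to write it
```

A launcher would write this text over the existing file before starting the WebUI.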
Running Stable Diffusion and ControlNET locally via Grasshopper (thanks AUTOMATIC1111)
📣 Since Ambrosinus Toolkit v1.1.9 it has been possible to run Stable Diffusion and the wonderful neural network named ControlNET locally via Grasshopper. The new components are based on the wonderful AUTOMATIC1111 project, which exploits the full power of Stable Diffusion entirely locally. And it doesn’t end there: through the extension that integrates a second neural network, ControlNET (developed by Stanford researchers), it is possible to bring the power of AI into the main creative activities carried out by architects and designers – AI as a rendering engine! (or something really promising 🧐). 💡 I developed some Grasshopper components that lay the foundations for exploiting the enormous potential of this project: no API keys, fully manageable locally… and above all free! ✨ The Ambrosinus-Toolkit project is always under development and will be part of different academic research and experimentation. Stay tuned!
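When the WebUI is started with the `--api` flag, it exposes a local REST endpoint that a Grasshopper component can call. A hedged sketch of such a call (the endpoint and field names follow the public `/sdapi/v1/txt2img` API; the prompt and the component logic are illustrative, not the toolkit’s actual code):

```python
# Sketch of calling a locally running AUTOMATIC1111 WebUI (started with --api)
# from Python, as a Grasshopper component might do.
import json
import urllib.request

API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # default local address

def build_txt2img_payload(prompt, steps=20, width=512, height=512):
    """Assemble the JSON body for a text-to-image request."""
    return {
        "prompt": prompt,
        "negative_prompt": "",
        "steps": steps,
        "width": width,
        "height": height,
    }

payload = build_txt2img_payload("conceptual render of a museum atrium")

# Uncomment to send the request against a running local WebUI:
# req = urllib.request.Request(
#     API_URL,
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     images_b64 = json.loads(resp.read())["images"]  # base64-encoded PNGs
```

Because everything runs on 127.0.0.1, no API key ever leaves the machine.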
2D image Depth Map to 3D object with Grasshopper
📣 The next Ambrosinus Toolkit version will make it possible to obtain a 3D object (mesh or point cloud) through a monocular depth-map estimation process – the new motto is: “What You See Is What You Test” (WYSIWYT). The “DPTto3D” component can generate different depth maps (in many Matplotlib palettes) and a PLY file. The designer will be able to manipulate the point cloud as they wish, especially through the new Rhino 8 command “ShrinkWrap”. Some frames have been sped up. The Ambrosinus-Toolkit project is always under development and will be part of different academic research and experimentation. Stay tuned!
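The depth-map-to-point-cloud step can be sketched very simply: each pixel becomes a 3D point whose Z comes from the estimated depth, and the points are written to an ASCII PLY file. This is a minimal illustration of the idea (a toy 2×2 depth array, not DPTto3D’s actual implementation):

```python
# Minimal sketch: turn a 2D depth map into a point cloud and save it as an
# ASCII PLY file (vertices only). Real depth maps come from a monocular
# depth-estimation model; this toy array is illustrative.

def depth_to_points(depth, scale=1.0):
    """Map each pixel (x, y) with depth d to a 3D point (x, y, d * scale)."""
    points = []
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            points.append((float(x), float(y), float(d) * scale))
    return points

def write_ply(points, path):
    """Write points as an ASCII PLY file (no faces)."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

pts = depth_to_points([[0.1, 0.2], [0.3, 0.4]])
write_ply(pts, "depth_cloud.ply")
```

The resulting PLY can then be imported into Rhino and, for example, wrapped into a mesh with ShrinkWrap.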
Stable Diffusion inside Grasshopper - TXTtoIMG, IMGtoIMG and IMGtoIMG Masking - CLIPguidance
📣 Since Ambrosinus Toolkit v1.1.5 and the LA_StabilityAI-GH build-106 component, it is possible to explore AI generative art with more focus on the details. Now you can run the TXTtoIMG, IMGtoIMG and IMGtoIMG Masking modes in a single tool. This demo also shows how to use some new/updated components such as GrayGaussMask, FileNAmer and SdINinfo. Thank you, Stability-AI, for sharing your API documentation for Stable Diffusion!
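Bundling the three modes in one tool essentially means dispatching to a different endpoint per mode. A hedged sketch of that dispatch (paths follow the Stability AI v1 REST API; the engine id is an assumption, and the real component also differs in how it attaches images and masks):

```python
# Sketch: pick the Stability AI REST endpoint matching the requested mode.
STABILITY_HOST = "https://api.stability.ai"
ENGINE = "stable-diffusion-v1-5"  # assumed engine id

def endpoint_for(mode):
    """Return the generation endpoint for txt2img, img2img or masking mode."""
    paths = {
        "txt2img": f"/v1/generation/{ENGINE}/text-to-image",
        "img2img": f"/v1/generation/{ENGINE}/image-to-image",
        "masking": f"/v1/generation/{ENGINE}/image-to-image/masking",
    }
    return STABILITY_HOST + paths[mode]
```

The masking mode additionally needs an init image plus a greyscale mask, which is where a component like GrayGaussMask fits in.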
Ask OpenAI for tips inside Grasshopper with Python (COMPLETION mode) - Grasshopper DEV
📣 Ambrosinus Toolkit is a Grasshopper toolset of components and utilities useful for the user’s projects – from design to management. In this video, I show a ghuser component that executes the request process for OpenAI’s “text completion” mode. This AI capability allows designers/users to ask for answers or submit instructions to the AI.
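The “text completion” request boils down to a small JSON body sent to OpenAI’s `/v1/completions` endpoint with an API key in the `Authorization` header. A hedged sketch of that body (the model name and prompt are illustrative, not the component’s actual defaults):

```python
# Sketch of the JSON body a completion-mode request sends to OpenAI's
# /v1/completions endpoint. Model and prompt are illustrative assumptions.

def build_completion_request(prompt, model="text-davinci-003", max_tokens=256):
    """Assemble the JSON body for a text-completion request."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,  # moderate creativity in the answer
    }

body = build_completion_request("Suggest three passive cooling strategies.")
```

Inside Grasshopper, the component would serialise this body, POST it, and surface the returned text on an output parameter.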
OpenAI (DALL-E) inside Grasshopper with Python Advanced version (VARIATION mode) - Grasshopper DEV
📣 Ambrosinus Toolkit is a Grasshopper toolset of components and utilities useful for the user’s projects – from design to management. In this video, I show an advanced ghuser component (following the OpenAI DALL-E features) that I coded to play with a prompt-to-image request process.
OpenAI (DALL-E) inside Grasshopper with Python Advanced version (EDIT mode) - Grasshopper DEV
📣 Ambrosinus Toolkit is a Grasshopper toolset of components and utilities useful for the user’s projects – from design to management. In this video, I show an advanced ghuser component (following the OpenAI DALL-E features) that I coded to play with a prompt-to-image request process.
StabilityAI (Stable Diffusion) inside Grasshopper with Python
📣 Ambrosinus Toolkit is a Grasshopper toolset of components and utilities useful for the user’s projects – from design to management. Thanks to the Stability-AI API documentation for Stable Diffusion (v1.5) by DreamStudio, it is possible to integrate our “prompts to image” process into Grasshopper through my component “LA_StabilityAI-GH”.
OpenAI (DALL-E) inside Grasshopper with Python - Grasshopper DEV
📣 Ambrosinus Toolkit is a Grasshopper toolset of components and utilities useful for the user’s projects – from design to management. In this video, I show a ghuser component that I coded to play with a prompt-to-image process. 🆙 [𝗨𝗣𝗗𝗔𝗧𝗘] If you don’t want to install any Python libraries, have a look at Ambrosinus-Toolkit_v1.1.1 – https://bit.ly/AmbrosinusToolkit_Food4Rhino – which integrates the DALL-E request process thanks to the “DALLEfromGH” component (info: https://bit.ly/DALLEfromGH-AmbrosinusToolkit ).
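The DALL-E prompt-to-image request (and its VARIATION/EDIT cousins shown in the videos above) is likewise a small JSON body, here aimed at OpenAI’s `/v1/images/generations` endpoint. A hedged sketch (prompt, count and size are illustrative, not the component’s defaults):

```python
# Sketch of the JSON body for a DALL-E prompt-to-image request to OpenAI's
# /v1/images/generations endpoint. Values are illustrative assumptions.

def build_image_request(prompt, n=1, size="512x512"):
    """Assemble the JSON body for an image-generation request."""
    return {
        "prompt": prompt,   # the text description of the desired image
        "n": n,             # how many images to generate
        "size": size,       # output resolution, e.g. "256x256" / "512x512"
    }

body = build_image_request("an isometric cutaway of a timber pavilion", n=2)
```

The variation and edit modes use sibling endpoints that additionally take a source image (and, for edits, a mask).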
Gradient Generator and Utilities
Gradient Generator lets the user set up custom colours (to the left/right of the grips), custom grip positions, and linear interpolation to make the colour transitions smoother; a lock option prevents the Gradient component from being edited; the single RGB values of the custom gradient can be adjusted with sliders; and finally, with a bake action, you can add your own gradient directly to the canvas.
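The linear-interpolation option above blends each pair of neighbouring grip colours channel by channel. A minimal sketch of that blend for two RGB colours (0–255 channels; illustrative, not the component’s code):

```python
# Sketch: linear interpolation between two RGB colours, channel by channel.

def lerp_colour(c0, c1, t):
    """Blend c0 toward c1 by factor t in [0, 1], rounding each channel."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

mid = lerp_colour((255, 0, 0), (0, 0, 255), 0.5)  # halfway from red to blue
```

Sampling `t` densely between each pair of grips is what produces the smooth transition.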
i-Mesh - Structural Design | Sample breaking-strength by directions
This algorithm applies some trigonometric concepts useful to define the breaking strength of the fibres used for a sample of dimensions 200 mm × 200 mm. The data on the mechanical resistance of the single fibres coated with fluoropolymer were collected through dedicated test campaigns at the PoliMi TextileHub.
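One plausible reading of the trigonometric idea is that a fibre’s tensile contribution along a given load direction scales with the cosine of the angle between the fibre axis and that direction. A hedged sketch of that projection (the model and the numbers are illustrative assumptions, not the tested PoliMi data or the algorithm itself):

```python
# Sketch: project a fibre's breaking strength onto a load direction using
# the cosine of the angle between them (illustrative model, assumed values).
import math

def directional_strength(fibre_strength, fibre_angle_deg, load_angle_deg):
    """Strength contribution of a fibre along the load direction."""
    angle = math.radians(load_angle_deg - fibre_angle_deg)
    return fibre_strength * abs(math.cos(angle))

s = directional_strength(100.0, 0.0, 60.0)  # fibre at 0 deg, load at 60 deg
```

Summing such contributions over all fibre families crossing a section would give a direction-dependent breaking strength for the sample.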
i-Mesh - Structural Design optimisation | Clichy frame - Making of joints
This algorithm improves the customization and creation of the interlocking nodes of the frame structure designed for Clichy (Miralles Tagliabue studio), in collaboration with i-Mesh for the covering of the building module. The algorithm easily adapts to any frame structure and lets you customize the joint. The process ends with a voxelization-based procedure to make the topology of the exportable mesh uniform for 3D printing. Original concept inspired by the #Co-de-it project based on the “FroGH” plugin.
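The voxelization step can be pictured as snapping sample points from the joint geometry onto a regular grid, so every exported region shares the same cell topology. A minimal sketch of that occupancy test (illustrative only, not the FroGH or toolkit code):

```python
# Sketch: voxelize a point set by mapping each point to the index of the
# grid cell containing it (illustrative occupancy-grid idea).

def voxelize(points, voxel_size):
    """Return the set of occupied (i, j, k) voxel indices for a point set."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied

vox = voxelize([(0.2, 0.1, 0.0), (0.3, 0.4, 0.1), (1.2, 0.0, 0.0)], 1.0)
```

Meshing the occupied cells then yields a watertight, uniformly structured solid suitable for 3D printing.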
Modelling the Alessi juicer in Rhino - The Wrong Way!
Modelling the Alessi juicer in Rhino – The Wrong Way! The reason I publish video tutorials does not always coincide with the best solution or the perfect workflow: my videos support the theoretical part of my courses, and for this reason I try to emulate the most common mistakes. In my videos, I teach the young apprentice how to recover from mistakes made at the very beginning of the modelling phase… hence “the wrong way” in the title. Enjoy the video.