Simon Goldin for Digital Canvas Development

Generative AI for 3D Modeling and Printing

Introduction

While exploring generative AI, I've seen what it can do with web programming languages like JavaScript, and I thought it would be interesting to see what it can do with 3D modeling.

One tool many people use for parametric modeling is OpenSCAD, a utilitarian language with methods for constructing 3D and 2D geometry (and 3D from 2D, such as extrusion).

Since "OpenSCAD focuses on the CAD aspects rather than the artistic aspects of 3D modeling," it's great for generating practical, functional 3D models (and also things like fractals, which might be a future topic). I wanted to explore how ChatGPT could help - by cutting down on time, exploring different designs, or at the very least teaching me new things.
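To give a flavor of that workflow before bringing AI into it, here's a minimal hand-written sketch (my own example, not from this project) of OpenSCAD's style: build 3D solids from primitives or extruded 2D shapes, then combine them with boolean operations:

```openscad
// A hexagonal knob: a 2D hexagon extruded into 3D, with a hole subtracted.
difference() {
    linear_extrude(height = 5)
        circle(r = 8, $fn = 6);  // $fn = 6 renders the circle as a hexagon
    translate([0, 0, -1])        // start the cut below the part to avoid coplanar faces
        cylinder(h = 7, r = 3, $fn = 64);
}
```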

The project I decided on was a business card holder - it has a bit of complexity, lends itself to creativity, and is relatively geometric and inorganic; not to mention it's something I can use!

For a little inspiration, I turned to MidJourney:

AI-generated business card holder designs

Initial Attempt

First, I wanted to see what GPT-4 would give me out of the box, without much direction:

generate openSCAD code for a standard size business card holder

This produces the following code (for brevity, the "fluff" around it is omitted):



cardWidth = 90; 
cardHeight = 55; 
cardThickness = 5; 

boxWallThickness = 3; 
boxBottomThickness = 3; 
boxInsideWidth = cardWidth - 2; 
boxInsideHeight = cardHeight - 2; 
boxInsideDepth = cardThickness + boxBottomThickness; 

module box() {
    difference() {
        // Outer box
        cube([cardWidth, cardHeight, cardThickness]);
        // Inner box
        translate([boxWallThickness, boxWallThickness, boxBottomThickness])
        cube([boxInsideWidth, boxInsideHeight, boxInsideDepth]);
    }
}

box();



This code generates two "cubes" (well, rectangular prisms) - one representing a stack of business cards, and a larger one - and subtracts the smaller from the larger.

This code is valid and renders the following shape (I added colors to illustrate the parts):

simple business card holder 3d model

The general model is right, but there are a few problems with this:

  1. The negative space is a bit too small - a standard business card is 88.9mm x 50.8mm, but the cutout is only 88mm wide, so the cards won't quite fit.
  2. The depth of this container doesn't allow for many business cards.
  3. I'm pretty sure ChatGPT was "trying" to generate a box with walls on each side, but ended up completely removing two of them.

We can play around with some of the variables to get something more functional, or we can ask ChatGPT to do it for us:

adjust the model so that there are four walls and a "base" and that a stack of standard business cards fits in the cutout.

This gives us:



cardWidth = 90; 
cardHeight = 55; 
stackThickness = 10; 

boxWallThickness = 3; 
boxBaseThickness = 3; 
boxInsideWidth = cardWidth + 2; 
boxInsideHeight = cardHeight + 2; 
boxInsideDepth = stackThickness + boxBaseThickness;

module box() {
    difference() {
        // Outer box
        cube([boxInsideWidth + (2 * boxWallThickness), boxInsideHeight + (2 * boxWallThickness), boxInsideDepth]);
        // Inner box
        translate([boxWallThickness, boxWallThickness, boxBaseThickness])
        cube([boxInsideWidth, boxInsideHeight, stackThickness]);
    }
}

box();



This results in a cutout that is 92mm x 57mm and 10mm high, which gives us a little clearance for the business cards:

business card box 3d model from complex code

The code could be cleaned up (for example, by starting from the dimensions of the entire stack of cards and deriving everything else), but the result is a perfectly functional container.
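A cleaned-up version along those lines might look like this (my own sketch; the clearance value is a choice of mine, not something ChatGPT produced):

```openscad
// Derive every dimension from the card stack itself.
cardWidth   = 88.9; // standard business card
cardDepth   = 50.8;
stackHeight = 10;   // room for a small stack of cards

clearance = 1.5;    // per-side gap so cards slide in and out easily
wall      = 3;      // wall and base thickness

innerWidth = cardWidth + 2 * clearance;
innerDepth = cardDepth + 2 * clearance;

difference() {
    // Outer shell
    cube([innerWidth + 2 * wall, innerDepth + 2 * wall, stackHeight + wall]);
    // Card cavity; extends 1mm past the top so the opening is cut cleanly
    translate([wall, wall, wall])
        cube([innerWidth, innerDepth, stackHeight + 1]);
}
```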

Another Approach

We can take a more "engineering" approach by crafting our prompt as if we were writing the code ourselves. This might be less error-prone if the prompt is crafted well:

generate openSCAD code for the following:

a rectangular prism that is 98mm across, 63mm deep, and 13mm high.

subtract a prism that is 92mm x 57mm and 10mm high from the top center of the previous prism.

This generates a very simple block of code (and goes on to explain what it does):



difference() {
    cube([98, 63, 13]); // Outer rectangle
    translate([3, 3, 3]) cube([92, 57, 10]); // Inner rectangle subtracted from the outer rectangle
}



And looks identical to the previous version:

business card box 3d model from simple code

Tradeoffs

The second approach resulted in much less code (generally a good thing!) that was also more straightforward, but we had to do more "manual" work, like finding the exact dimensions we need and breaking the prompt up into steps, as if we were writing the code ourselves.

The first approach might still be better for exploration and experimentation, if we want to see wildly different versions of what a "business card holder" can look like.

Either way, we get the same model, which can be 3D-printed in a little under two hours:

screenshot of model imported and sliced in PrusaSlicer

Going Further

Let's combine these approaches to create a more interesting design.

Generate openSCAD code for standard size business card holder composed of the following modules:

  • a base that is flat against the surface it will be placed.
  • a small front face "lip" connected to the base that keeps the cards from slipping out.
  • a back that has a bottom side that connects to the back side of the "base". This "back" shape is almost vertical but with the top rotated away from the front.

Color-code each module.

Skipping the code here, this renders:

openSCAD screenshot of model generated from above code

Obviously, that's not how we want the "back" oriented.

Another attempt:

Generate openSCAD code for standard size business card holder composed of the following modules:

  • a base that is flat against the surface it will be placed.
  • a small front face "lip" connected to the base that keeps the cards from slipping out.
  • a back that has a bottom side that connects to the back side of the "base". This "back" shape is almost vertical but rotated so that this "back" shape is tilted back, away from the front of the model.

Color-code each module.

another attempt; back section is angled the wrong way

Regenerating the response strays further away from what we want:

attempted model with parts not aligned or rotated properly

Further generations aren't much better, so let's be more direct and give it specific dimensions:

Generate openSCAD code for standard size business card holder composed of the following modules:

  • a 2mm thick, 90mm across, and 20mm deep bottom that is horizontal to the surface it will be placed on.
  • a 2mm thick and 5mm tall front face "lip" at the top front of the base that will keep the cards from slipping out.
  • a 2mm thick and 40mm tall vertical back side. Cards will rest on this.

Rotate and translate each module to adhere to the description.

Color-code each module for debugging. Do not provide anything except for the code.

This does pretty much what we want:



$fn = 100; // Higher resolution

module base() {
    color("Blue")
    cube([90, 20, 2]);
}

module lip() {
    color("Green")
    translate([0, 15, 0])
    cube([90, 2, 5]);
}

module back() {
    color("Red")
    translate([0, 0, 2])
    cube([90, 2, 40]);
}

// Combine modules
base();
lip();
back();




simple card holder model

Rotating the back part and placing it correctly took some trial and error:

rotate the "back" 10 degrees so that the top is angled away from the front and move the "back" module so that the bottom edge connects to the back edge of the base.

fix the rotation direction

fix the rotation direction and placement so that the back is at the back, away from the lip

move the back piece to the back edge of the base.

Eventually, I got more or less what I wanted:



$fn = 100; // Higher resolution

module base() {
    color("Blue")
    cube([90, 20, 2]);
}

module lip() {
    color("Green")
    translate([0, 15, 0])
    cube([90, 2, 5]);
}

module back() {
    color("Red")
    translate([45, 0, 1]) // Move upwards by 1mm and center for rotation
    rotate([10, 0, 0]) // Rotate 10 degrees
    translate([-45, 0, 0]) // Re-center after rotation
    cube([90, 2, 40]);
}

// Combine modules
base();
lip();
back();



simple business card holder with front lip, base, and tilted back

After this point, I prompted GPT-4 to add "caps" to the sides:

create two quadrilaterals that will act as left and right "caps" described as:

  • each cap is 2mm wide
  • bottom edges are flush with the bottom of the base
  • front edges are flush with the front lip
  • back edges are flush with the back part
  • top edges connect between the middle of the back part and the top of the front lip

The best way would probably be to create the points and then extrude and then move them to different positions based on "left" or "right"

This produced a syntax error and wasn't what I was going for:



module cap(right) {
    color("Yellow")
    points = [[0,0],[90,0],[90,15],[45,35]]; // Define points of a polygon
    linear_extrude(height = 2)
        polygon(points=points); // Create the polygon
    translate([right ? 88 : 0, 0, 0]); // Position the cap
}



incorrect

But it was on the right track, programmatically speaking.

Given that one of my initial goals was to speed up development, I took the reins and fixed the issues manually:



module cap(right) {
    points = [[0, 0], [24, -4], [4, 16], [0, 16]]; // Points on the X-Y plane
    color("Yellow") // no trailing semicolon, so the color applies to the shape below
    translate([right ? 90 : 2, 0, 0]) 
    rotate([0, -90, 0]) // Rotate points to Y-Z plane
    linear_extrude(height = 2) 
    polygon(points = points);
}



and fed it back to ChatGPT.

After this point, I asked ChatGPT to increase the width from 90mm to 95mm to account for the end caps, "etched" text into the back, and then made some manual adjustments to arrive at my final (for now) product:



module base() {
    color("Blue")
    cube([95, 20, 2]);
}

module lip() {
    color("Green")
    translate([0, 18, 0])
    cube([95, 2, 5]);
}

module back() {
    color("Red")
    difference() {
        translate([0, 0, 1]) // Move upwards by 1mm and center for rotation
        rotate([10, 0, 0]) // Rotate 10 degrees
        cube([95, 2, 40]);
        translate([85.5, -2, 18]) // Adjusted position of the text
        scale([.65, 1, .65]) // Decrease scale of the text
        rotate([260, 180, 0]) // Adjusted rotation to make the text parallel with the 'back'
        linear_extrude(height = 2, convexity = 2)
        text("digitalcanvas.dev", font = "Merriweather"); // Text to be cut out
    }
}

module cap(right) {
    points = [[0, 0], [24, -4], [5, 18], [0, 18]]; // Points on the X-Y plane
    color("Yellow") // no trailing semicolon, so the color applies to the cap geometry
    translate([right ? 95 : 2, 0, 0]) 
    rotate([0, -90, 0]) // Rotate points to Y-Z plane
    linear_extrude(height = 2) 
    polygon(points = points);
}

// Combine modules
base();
lip();
back();
cap(true); // Right cap
cap(false); // Left cap



3D model preview

I loaded this into my slicer (PrusaSlicer) and began the print!

model loaded into PrusaSlicer

Final thoughts & takeaways

Ultimately, I didn't get the level of polish that MidJourney teased was possible, and the process was imperfect, but it was a good learning experience for working with both generative AI and OpenSCAD.

GPT-4 gave valid code the vast majority of the time, and I was able to adjust it when needed. Having larger chunks of code generated for me definitely saved time - both in typing and in looking up documentation - and being able to tweak specific numbers and feed them back to ChatGPT allowed for a relatively smooth workflow. That said, as a first try, using ChatGPT was slower than coding by hand; I spent a lot of time checking and tweaking the generated output (not to mention waiting for rate limits to expire).

A few themes emerged from this. Generated OpenSCAD code was prone to mixing up axes when rotating and translating, and getting the right "Points on the X-Y plane" was a struggle. Doing this manually was much faster, but prompts like "the module was rotated along the wrong axis" usually worked too.
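For anyone running into the same axis mix-ups, the difference comes down to which component of the rotation vector is non-zero - a minimal illustration (my own, not from the generated code):

```openscad
panel = [90, 2, 40]; // a back panel like the one from earlier

cube(panel);                                          // upright
translate([0, 10, 0]) rotate([10, 0, 0]) cube(panel); // tilts backward (around X) - what we wanted
translate([0, 20, 0]) rotate([0, 10, 0]) cube(panel); // leans sideways (around Y) - the usual mix-up
```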

It's also important to be as direct as possible - do not assume "module A should connect to module B" will result in what you expect; give more direction: "the bottom edge of module A should be flush with the top edge of module B and the smaller edge should be centered on the larger edge."

Finally, it helps to break the end goal into smaller tasks (e.g. "generate module A", "adjust model A", "add module B") rather than start with a larger prompt that has more things that can go wrong. Interestingly, generating modules was generally less error-prone than modifying them.

In my opinion, it's best to treat it like pair programming, handing work back and forth between two software engineers while speaking at "the highest level of abstraction" (which, in some cases, is lower than you'd think).

Thank you for reading! Have you used generative AI for 3D modeling or printing? What approach worked well for you? I'd love to hear about other experiences.

Of course, I can't leave this post unfinished! Here is the final product:

photo of final 3d-printed product

The real business cards are in the mail, but I printed a fake one!

photo of final 3d-printed product with a business card placeholder

Top comments

Andy Piper

Cool post. Did you share the final code and model on any of the 3D model sites like Printables? Could be helpful for others to learn more about OpenSCAD.

I did something similar, but instead of OpenSCAD I went to another service that offers to generate models from 2D inputs. It's not free and you can't code it, but it was an interesting exercise.

Experiments in digital making – The lost outpost

Iterating on 3D printed designs generated from an imaginary 2D digital brain – Midjourney, Kaedim, and understanding printables.

Simon Goldin

Thanks for reading! Good idea, I'll dig up the final FINAL code and upload it to Printables (that's my preference too).

Kaedim definitely seems like a cool tool! Using it with generative image services hints at potential generative 3D consumer services that I'm sure exist or are at least being worked on.

Actually this reminds me of this AI-generated table model

which links to this interesting article that expands on generative "organic" design.

Andy Piper

I've also now played with Luma AI, which is more of a "Midjourney-meets-3D modelling" tool (most visible in the fact that the UI is Discord). I haven't had a chance to print any of the models I had it design for me yet, but it also seems interesting.
