
Designing a Tattoo with Blender


Last year I decided that I wanted to get another tattoo. I wanted to commemorate my love for travel with a unique piece of art that covered the lower half of one of my legs. Seeing as how this piece of art is permanently embedded into my body, I wanted to be directly involved with the creation of the design, if not drawing and painting it myself. And being the 3D nerd that I am, I decided to use the tools that I’m comfortable with.

About the design

For a long time, I’ve been fascinated with the Japanese Sumi-e style of ink paintings… especially since I’ve been doing more painting with coffee. I like the expressiveness of the style along with its general simplicity. And the tonal varieties are really nice, too.

So I wanted something in that style, but with a focus on living a travel lifestyle in the modern age. So as a motif, the thought I had was to have a road winding through some mountains on my calf area. At the same time, so much of my family’s travel adventures has us out in nature. I’m a sucker for a rocky, bubbling creek or shallow river. So the idea was to have that motif live on my shin as a kind of complement to the road scene on the back.

I spent a lot of time doing sketches for these ideas, but it became increasingly clear that what I really needed was the ability to visualize the design in the round… ideally on my own leg. However, if you’ve ever tried to draw on the outside of your own leg, you’ll know that contorting yourself that way and drawing is pretty difficult, not to mention uncomfortable.

I needed to find a different way.

Let’s do it in 3D!

Suddenly, the obviousness of it all hit me… I’m a friggin’ 3D artist. I have all the tools and [hopefully] skills to previsualize my tattoo in 3D space on my own leg. So that’s what I did.

All told, the process was actually pretty straightforward:

  1. 3D scan my leg
  2. Retopologize the scan data
  3. UV unwrap the retopologized mesh
  4. Paint the tattoo design as a texture
  5. Share the texture and 3D model with a super-talented tattoo artist

Now… although the process was simple to describe, that doesn’t mean it was necessarily easy to do. And, of course, I went through the process using open source tools. Let’s take a look at each step.

3D scanning your own leg

As it turns out, photographing your own leg from every possible angle requires nearly as much contortionist skill as drawing on that very same leg. I had all kinds of issues trying to verify that photos were in focus and that there was sufficient coverage of the whole leg.

Ultimately, I had to admit my own limitations and ask for help. I handed my phone over to my incredibly patient wife and asked her to shoot a few videos of my leg as she walked around it. The whole thing was pretty comical to watch, with me standing still on a picnic table while my wife walks around me, recording video of just one of my legs.

But we got the video captured. Step one of photoscanning complete!

Of course, the next step is converting that video into a 3D model. To do that conversion, I used Meshroom from AliceVision, an open source 3D photoscanning tool. The challenge, however, is that Meshroom doesn’t take video as input by default. It prefers having an array of photographs… and importantly, those photographs should ideally also include metadata about the camera that took them.

Fortunately, most of that work could be done with good ol’ FFmpeg. Basically, I just needed to run the following command on my source video:

> ffmpeg -i [video_from_phone].mp4 -vf fps=1/30 [output_folder]/%03d.jpg

The only real problem with this approach is that it doesn’t always preserve all of the embedded metadata from the camera. For this… I cheated a little bit. The video from my phone (it was a Google Pixel 6 Pro) didn’t really have the kind of data that Meshroom was looking for… so I took a photo from the phone and borrowed the EXIF data from that image. I think I had to massage the EXIF data a little bit (I’m pretty sure I just did that in GIMP) and then I used exiftool with a little bit of bash scripting to copy that EXIF data to each of the images I pulled using FFmpeg. The command looked something like this:

> for i in [output_folder]/*jpg; do exiftool -TagsFromFile [exif_source_image].jpg $i; done

And then I had a whole directory of images ready for loading into Meshroom!
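Put together, the whole video-to-photos step can be sketched as a small script along these lines. The file and folder names here are just placeholders, and by default it only prints the commands it would run (set DRY_RUN=0 to actually execute them):

```shell
#!/bin/sh
# Sketch of the video-to-photos pipeline described above.
# File and folder names are placeholders. By default the commands are
# only printed (DRY_RUN=1); set DRY_RUN=0 to actually run them.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

video_to_photos() {
  video="$1"; exif_src="$2"; out="$3"
  mkdir -p "$out"
  # Pull one frame out of the video every 30 seconds, as numbered JPEGs.
  run ffmpeg -i "$video" -vf fps=1/30 "$out/%03d.jpg"
  # Copy camera metadata from a reference still onto every frame.
  for i in "$out"/*.jpg; do
    run exiftool -TagsFromFile "$exif_src" "$i"
  done
}

video_to_photos video_from_phone.mp4 exif_source_image.jpg frames
```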

From there, the process flowed pretty smoothly along the lines of most online documentation and tutorials for Meshroom. Load the images. Remove any images that are blurry or don’t include the desired subject. Slowly walk through each of Meshroom’s processing nodes to ultimately generate a usable OBJ file and texture.

Photoscanning my leg in Meshroom

Just a note in case you’re wondering why there’s already a tattoo on my leg in the screenshot… this image is from the second photoscan of my leg that I did because I ended up getting the tattoo in two separate sessions.

The mesh is ugly! Fix it with retopo!

If you’ve done any kind of photoscanning, you’re likely already familiar with how nasty the mesh topology tends to be on photoscanned meshes. If you haven’t, then let me tell you, it’s nasty. On its own, the geometry and texture are functional in as much as you have data in three dimensions. But if you want to do anything with that mesh, like modify the texture or rig it for animation, then you’re in for a barrage of annoyances.

Fortunately, because 3D sculpting has been so tightly integrated into the modern modeling workflow, we have tools that can dramatically improve mesh topology and get something infinitely more workable. There’s a little bit of clean-up involved to remove superfluous geometry, but the core of the process that I’m talking about is called retopology. Basically, retopology is the process of building a new 3D mesh, using another one as reference.

It’s not as arduous as I’m making it sound. The whole thing can be done in Blender. There are some really great add-ons that can help smooth out the retopo process, but because a leg is essentially a cylinder that’s been reshaped to look leggy, it’s not really necessary to pull in those heavy guns.

In fact, the whole thing can pretty much be handled with Blender’s Shrinkwrap modifier, combined with a bit of snapping and some minor vertex tweaking.
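For the curious, the core of that setup can even be expressed as a single headless Blender command. This is just a sketch: “leg.blend”, “Retopo” (the new mesh), and “Scan” (the photoscanned mesh) are placeholder names, and I actually did the work interactively in the viewport.

```shell
# Sketch: add the Shrinkwrap modifier from the command line instead of the
# viewport. "leg.blend", "Retopo", and "Scan" are placeholder names.
shrinkwrap_cmd() {
  printf 'blender -b %s --python-expr "%s"\n' "$1" \
    "import bpy; m = bpy.data.objects['Retopo'].modifiers.new('Shrinkwrap', 'SHRINKWRAP'); m.target = bpy.data.objects['Scan']; bpy.ops.wm.save_mainfile()"
}

# Prints the command; pipe it to sh to actually run it.
shrinkwrap_cmd leg.blend
```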

Retopo!

You might notice that I didn’t retopologize my foot. That’s for two reasons: first, I wasn’t tattooing my foot, so having clean geometry there wasn’t super critical. And second, I’m lazy.

Prepping the canvas

With the mesh retopologized, you’d think I’d be ready to start working on the tattoo design. Not yet. The mesh is nice, but that on its own doesn’t give a sufficient space to start painting the tattoo texture. The mesh needs to be unwrapped first. For all intents and purposes, I needed to do the 3D equivalent of cutting the skin off my leg and laying it flat so I could paint on it. This process is called UV unwrapping.

Again, because the mesh itself is just a fancy cylinder, the process is simple. Mark a seam (the “cut line” for skinning my leg) and then use Blender’s built-in unwrapping algorithm to get my flat painting surface. The main thing I wanted to focus on was making sure that the painted texture was mostly a clean rectangular shape. This would ensure that when the tattoo design was printed out, it would go cleanly and could then easily be wrapped around my physical leg in the same way as the texture map.

Unwrapping my leg

Technically, I could be done here and I could start painting. However, I really wanted to maintain the texture of my leg that I got from the original scan. So there was one more small step: texture baking. The actual process of texture baking in Blender is a little convoluted, but what happens is the texture colors from the photoscanned mesh are remapped to the cleanly retopologized and UV unwrapped mesh that I’d created.

Whew. Now I could start doing the thing that I’d originally set out to do. I could start designing my tattoo.

Finally… painting the tattoo

Although this step was the whole point of the exercise, it may be the least interesting. For the most part, I painted the texture directly on the mesh using Blender’s built-in texture painting tools. In fairness, the texture painting toolchain and workflow within Blender could certainly use some love. It’s definitely not at the level of something like Substance Painter. It could use things like layers, wet brushes, and improved color mixing. However, it’s no slouch.

But I didn’t stay exclusively in Blender. A lot of my initial sketching for this concept happened in MyPaint. I find that its simple, unintrusive interface aligns nicely with the way I like to work. So the first step was actually pulling my MyPaint sketches into Blender as a base texture. From there, I used Blender’s painting tools, both in the 3D Viewport and the Image Editor, to refine positioning on my leg and work in more details.

But like I said, I wanted to match the Sumi-e ink painting style. There’s a lot that can be done with Blender’s smudge and blur brushes, but they have their limits. And sadly, the infinite canvas in MyPaint tends to make it complicated to round-trip a texture between it and Blender.

This is where Krita proved to be really useful for me. I could use the same brushes I’m comfortable with in MyPaint, plus there are some additional really nice ones for wet mixing… and Krita keeps the original image size without much monkeying around. So with the basic painting done in Blender, I could save the texture out as an image and do further refinement in Krita, then save and reload in Blender to check it in 3D. It’s a bit of a traditional texture painting workflow, but I’ve gotta say that it works really nicely.

Painting my leg in Blender… but also in other apps

And in full disclosure, I did also use GIMP to do a bit of manipulation and pixel pushing on the image to tweak positioning a bit.

Ultimately, though, I was able to get the job done to a point where I was really happy with the result. Have a look at it in 3D below. Just click and drag to orbit around the model. If you’re on a phone, you should also be able to zoom with pinching and pan with two fingers. On a desktop, use Shift and Ctrl when clicking to zoom and pan.

Incidentally, the viewer here is embedded code from the aptly (though admittedly boringly) named Online 3D Viewer… an open source JavaScript-based 3D viewer for the browser. In fact, this code is what lives behind 3dviewer.net. I exported my painted leg model from Blender to glTF format and saved it to my phone. Then I loaded 3dviewer.net in my phone’s browser and opened the file there to show it to the tattoo artists I asked to do the work.
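If you’re so inclined, that glTF export can also be scripted rather than clicked through. Here’s a sketch with placeholder file names (“leg.blend”, “leg.glb”), leaning on Blender’s built-in glTF export operator:

```shell
# Sketch: export the painted model to glTF from the command line.
# "leg.blend" and "leg.glb" are placeholder file names.
gltf_export_cmd() {
  printf 'blender -b %s --python-expr "%s"\n' "$1" \
    "import bpy; bpy.ops.export_scene.gltf(filepath='$2')"
}

# Prints the command; pipe it to sh to actually run it.
gltf_export_cmd leg.blend leg.glb
```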

Getting the tattoo done

Speaking of tattoo artists, I haven’t yet gone down the path of doing tattoo art, so I wasn’t going to do this myself. Also… as I already mentioned, the amount of contortion required to simply draw on myself secured my decision to have someone else do it.

But who? The Sumi-e painting style isn’t something that’s commonly seen in a lot of tattoo work. So it’s not like I could just walk into any ol’ shop with this design and expect it to get done right.

Fortunately, there is an adjacent style of tattoo art that does come close to Sumi-e ink paintings… watercolor tattoos. Of course, artists that specialize in that style aren’t always close by. Luckily… I travel full time. So I have a lot of flexibility in choosing the artist, so long as they have availability that aligns with my travel schedule.

With a bit of persistent searching and some serendipitous timing, I found exactly the right artist for this piece. Angelina at Human Canvas Tattoo in Fredericksburg, Virginia was able to bring this goofy idea of mine to life. She was very patient with my nerdy approach to designing the tattoo and I’m super pleased with the results of her work. You should for sure check out her work… and actually all of the work done by the artists in that shop. It’s really great stuff.

The whole leg piece was actually done in two separate sessions… mostly because I decided to do the shin after I had the calf done and I liked it so much.

Would I do it again?

So it comes to the final question… would I take this approach again for this tattoo, or even a future one?

Yes.

The tattoo artists that I’ve spoken with seemed to appreciate having this kind of previs work done because it reduces the amount of guesswork and stuff that needs to be figured out in the moment… or in advance. Of course, since they’re tattoo artists and [mostly] not 3D artists, I don’t expect any of them to be adopting this workflow.

That said… if photoscanning didn’t require so many images (at least 50 in total) and compute power to produce the base 3D model, I could see this workflow being really useful. I could even imagine a kind of service website where people could upload images of themselves and plan their own tattoos in advance of seeing an artist. But right now photoscanning is a bit of a pain in the ass. Maybe someone will figure out how to make it less cumbersome.

In the meantime, I’m happy with the tattoo that I’ve gotten. I don’t currently have any ideas for future ink, but I’ll probably be taking this approach when I do.