Well… It’s started. You can click the image above or you can click this little bit of text right here to go to my Kickstarter project. The campaign has 10 days. I’m trying to raise $125 to purchase an ISBN for my first independently published book in ebook format. Of course, if we can raise more than that, I can publish the book in print as well. But let’s not get ahead of ourselves. The pledge values are reasonable and the rewards are well worth it… or at least fun.
In any case, this is my first time trying any kind of crowdfunding, so I’m excited to see how it all works (and if it works… I’m a little terrified). If you can offer support, I’d appreciate it. If you could spread the word to folks you know, then I’ll be forever in your debt.
Assuming all of the approvals go through and there are no hiccups, then within the next 24 hours, I’m going to be launching my first-ever Kickstarter campaign. If you’re subscribed to my email list, then you’ve already gotten a sneak preview of it (thanks for the feedback, folks!). I won’t go into heavy details here (you’ll see it when the campaign launches), but here’s the long and short of it: I’m raising cash to purchase an ISBN for my first indie-published book, Definitely True: Year One, under my M. J. Guns pen name. It’s an exciting time and I’m super stoked to see how this little experiment pans out.
The campaign will run for 10 days. So if you follow me here, on social media, or on my podcast, I might get a tiny bit salesy. Of course, if I’m overdoing it, don’t hesitate to let me know. I listen. I really do.
So yeah… keep your eyes peeled. The countdown for kicking off the Kickstarter campaign commences… um… ksoon.
So this has been circulating the interwebbernets a bit today and it got me thinking. So much so that it got me writing here. With any luck, it’ll get you thinking, too.
In any case, the short version goes something like this: Recently there was a mobile app (iDevice and Android) released called Clean Reader. It purports to allow users to “Read books, not profanity.” Writers all over have taken to condemning this app up and down… including a beautifully profane (albeit long) piece by Chuck Wendig. Almost universally, writers have been finding all sorts of fun and creative ways to hurl expletives in the direction of the Clean Reader developers (Chuck’s [can I call you Chuck?] “fuckecho through the canyon of fucks” definitely made me giggle).
But I’m torn.
I’m of a few different minds on this one… four to be specific. I’ll name them: Writer, Artist, Business Monkey, and IP Nerd. Let’s break it down into headings. Headings are fun…
Writer

This guy in my head vehemently agrees with Chuck and other like-minded writerfolk. You want to change my words? The ones I slaved over and picked for very specific reasons? Kindly go fuck yourself. If you can’t handle profanity in a book, then you probably aren’t in the target audience of the book.
Sometimes profane language is haphazard, but generally speaking, it’s included within very specific contexts… and those contexts are likely to make you even more uncomfortable than a simple “shit” or “asshat” or “cumcicle” (gross).
Artist

However, I’ve had a very specific philosophy when considering my art — be it graphical, written, performance, what-have-you. Simply put, that philosophy states that once you put a piece of art out into the world, it’s no longer yours. It belongs to the audience. And the audience can interpret the work however it likes… even manipulate it.
With that perspective, Clean Reader is simply an extension of the audience. It might be ultimately hampering the communication of the intended message, but it’s not my place as an artist to control that. And as someone who is personally interested in open content and permissive licensing, it’s hard to get my ire raised too much by an app like Clean Reader.
Business Monkey

Technically speaking, what Clean Reader is doing is creating a translation of the work on the fly. Funny sidenote: when my first kid was born, I tried like hell to convince my wife that we should have a bilingual family. In public, we’d speak English, but in the home, we would speak Profanity. Sadly, I don’t think she ever seriously bought into that pitch.
In any case, in the business of books, translations are controlled by contracts. That is, a writer or publisher may allow a third party to translate a book and distribute it in a foreign market in exchange for splitting the proceeds. This means two things. First, the writer or publisher has to consent to the translation being done. And second, the writer or publisher gets paid. At this point, the second thing isn’t a huge deal for the app… but it may be if they start charging for their cleaning service. The first point, however, is a big one… and it dovetails into my fourth mind.
IP Nerd

In my understanding of copyright law, Clean Reader is producing what’s called a derivative work. If the original work has a copyright with all rights reserved, then derivative works must be approved by the original copyright owner. If they are not, then the unsanctioned derivative can’t be distributed.
Now, from my understanding of the app, Clean Reader isn’t technically distributing the derivative work. You purchase the original property and it gets translated/filtered within the app. The derivative work is created on the fly, but it’s the original being distributed. So maybe Clean Reader is in the clear here… but maybe not. By allowing the purchase of books within their app’s ecosystem with the explicit purpose of making that derivative work, it could be argued that they’re effectively distributing the derivative work. It’s a bit shaky, though… because a ruling in that direction could easily have big ramifications as it pertains to copyright law.
A lot of writers have been slippery-sloping this in one direction (“What if a whole scene in my book is deemed offensive? Will a future version of this app bleep that scene out and replace it with dancing puppies?”), but let’s look at it in the other direction. There’s a lot of technological development going on these days for “legitimate” translation on the fly. Technically speaking, those auto-generated translations count as derivative works, too. Would writers nuke the future possibility of having their work read by a foreign audience because they want to maintain profanity in their native language?
So yeah… four minds. Tallying it up, it looks like I have one “fuck no,” one “it’s no big deal,” one “will I get paid for this?” and one “this will be interesting to watch if it gets to be a legal battle.” And people wonder why I have a hard time figuring out where I want to eat lunch…
What are your thoughts? Which mind do you agree with most? Do you have a different opinion altogether?
You may or may not have noticed that I have a little graphic to accompany my daily lies. Well… I made that in Blender. And for fun, I recorded a timelapse of the whole process. Of course, I didn’t have any epic music to lay over the video, so instead I hastily recorded a voice over to describe the various steps I took in the process. It’s quick n’ dirty, but I think it does the job pretty nicely for now.
In any case, here’s the video (feel free to mute me if my post-laryngitis voice starts bothering you):
Lemme get technical for a second and ask for some suggestions. First though… story time!
I use version control for all of my creative projects (3D, animation, graphics, writing). I started years ago with Subversion and eventually migrated to Mercurial. There were a number of reasons for this switch, but the biggest was the ability to do ad-hoc local version control without setting up a server. Also, Mercurial handles binary diffing slightly better than Git, making it a better choice for art assets.
Now, I’ve been doing a lot more writing lately and, more importantly, a good chunk of my writing these days has been on my Android tablet. And therein lies my biggest problem. There’s no Mercurial client for Android and there are no plans for a port (and I’ve tried various workarounds… I even came close by running Python on the tablet, but there are limitations in Android’s filesystem that prevent it from running an unmodified version of Mercurial, and I don’t quite have the necessary chops for modifying Mercurial’s source). My current solution has been to use rsync to synchronize files between my home workstation and my tablet, and only commit on the home box. It works… but it’s pretty kludgey. Add to this the fact that I’ve been slowly migrating from writing in LibreOffice ODTs to plaintext formats like Markdown and Fountain, so for my writing at least, the disk space saved by binary diffing is less of a determining factor.
All of this is a long way of saying… I’m in the market for a new version control system for my writing projects. The main two requirements are as follows:
Should not require a server (allow local ad-hoc versioning)
Has an Android client that allows for commits and pushes
Right now, it looks like my options are Git and Fossil. Anyone got any other recommendations or suggestions?
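For what it’s worth, Git clears that first requirement handily: a repository is just a directory, so ad-hoc local versioning is one `git init` away, no server involved. Here’s a minimal sketch (plain Python driving the git command line; it assumes git is installed and on the PATH, and the project path and file are made up for illustration):

```python
import subprocess
import tempfile
from pathlib import Path

def run_git(repo, *args):
    """Run a git command inside the given repository directory."""
    return subprocess.run(
        ["git", *args], cwd=repo, check=True,
        capture_output=True, text=True
    ).stdout

# A throwaway project directory, versioned in place --
# no server, no remote, just a local .git directory.
project = Path(tempfile.mkdtemp())
(project / "chapter-01.md").write_text("# Chapter One\n\nIt was a dark and stormy night.\n")

run_git(project, "init")
run_git(project, "add", "-A")
# The -c flags keep the example self-contained on machines
# that have no git identity configured yet.
run_git(project, "-c", "user.name=Me", "-c", "user.email=me@example.com",
        "commit", "-m", "First draft")

print(run_git(project, "log", "--oneline").strip())
```

The whole history lives in that one `.git` directory, which is exactly the ad-hoc behavior Mercurial spoiled me with.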
A couple months ago, I was contacted by Michael Crouch, a friend of mine who’s a commercial producer for an NBC affiliate station in Richmond, Virginia. He was producing a promotional spot for Angel Tree, a Salvation Army charity program that runs during the holidays. So standard fare: tight-ish timeline, low budget. However, this was for a good cause… and more importantly, he wanted to do something special. An animated spot. He would focus on the overall spot’s script, audio, backgrounds, and environment. I’d be providing character animation that he could lay into the scene.
Oh hell yes.
After batting around a few ideas, we came up with the concept of having a somewhat paper-cut look to the environment. The character animation would be added as if it were a set of ink and watercolor drawings, cut out and laid into the scene. For both aesthetic reasons and time-constraint reasons, it was decided that the bulk of animation would be on 3s and 4s.
We hammered through a few more details on the aesthetic. It was important to avoid singling out any specific ethnic or cultural type as being either a benefactor or beneficiary with respect to Angel Tree. Anyone can help, and everyone can benefit from hope. This presented an interesting challenge in terms of character design. To reflect that message, our character design landed at being a somewhat androgynous child, and as the animation played through, the child’s hair and skin tone would cycle through a series of colors. This way, we could play up the watercolor look and give the spot a bit of a multicultural taste. With the character design in mind, I used a handful of layers in Krita along with a still from his work-in-progress environment to mock-up a proof of concept, both for the aesthetic and for my own personal workflow.
It’s worth mentioning that I did the frames of this character test using the stable release of Krita. I knew at the time that there was a development branch with animation features. However, I hadn’t used it yet and didn’t have a build environment for Krita. That, coupled with time constraints, led me to decide to just do it the old layer-based way in the stable version.
Happily the character design and the animation aesthetic were approved.
That meant I could push forward with animation. However, as I mentioned, I wasn’t using the Krita animation branch. I could do ink and paint in Krita, but I needed a timeline for doing roughs/pencils. Dopey, the MyPaint fork with animation capabilities, hadn’t seen any new updates for about a year. And Pencil, another useful tool that I’d used for my micro-short, Singularity, hadn’t seen meaningful development for even longer. I could’ve maybe used Synfig or Tupi, but as those are more vector-based animation programs, they don’t quite have the drawing tools that I want for roughs; they’re too clean. So, predictably, I decided to use Blender.
Yeah, Blender, the 3D modeling and animation package. Most people are aware that Blender is an extremely capable animation suite for 3D computer graphics. Those same people, however, might not be aware of how useful it is as a 2D animation tool. In this case, I’m specifically referring to Blender’s Grease Pencil feature, typically used for comments, draw-overs, and other kinds of annotations. But as an important feature for this project, Grease Pencil layers can be animated and show onionskinning. Furthermore, drawing with Grease Pencil is incredibly responsive. Grease Pencil strokes are, in fact, 3D curves, but they’re responsive enough to match my naturally scribbly drawing style. And as a kicker, there’s a special branch in the Blender development tree called GPencil_EditStrokes that gives a 2D animator additional features such as editable strokes (duh), colored onionskinning, and Grease Pencil fills.
So here’s the basic workflow I used for this project:
Draw rough pencils of the animation with proper timing using Grease Pencil in Blender.
For each Grease Pencil key, generate an OpenGL render of the animation from the Camera view.
As a house-cleaning step, pull each OpenGL rendered key into Blender’s Video Sequence Editor (VSE) as a strip with a duration matching its screen time as a Grease Pencil key.
Commit everything — the .blend file and each OpenGL-rendered Grease Pencil frame — to version control (I typically use Mercurial).
In Krita, pull in each Grease Pencil frame as a layer, nested within its own layer group of the same name.
Ink and paint each frame of the animation, using a layer group for each frame.
For each layer group (animation frame) in Krita, solo the group and export that, overwriting the Grease Pencil rough.
Commit the updated frames (and the Krita project file) in version control.
Back in Blender’s VSE, open the .blend (or refresh it).
Render in an MOV container using the QTRLE (QuickTime Animation) codec, preserving the animation’s alpha channel.
I did this for each of the small animations I needed to deliver: a walk, an idle stand, a sit, a turn, and the final animation grabbing and sharing the Angel Tree tag. This workflow served sufficiently well since each little animation was reasonably short and I was animating on 3s and 4s. However, if the animations needed to be longer or I needed to animate on 1s or 2s, there’s a lot of opportunity for scripting some automation into the process. In particular, getting each of the Grease Pencil frames rendered and pulled back into the VSE is a prime candidate for making more efficient. So is the re-exporting process from Krita.
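As a concrete example of the bookkeeping that could be scripted away, the strip durations in the house-cleaning step fall straight out of the Grease Pencil keys: each rendered frame should hold in the VSE until the next key takes over. A minimal sketch of that timing math (plain Python, not Blender’s bpy API; the key frame numbers are hypothetical):

```python
def strip_durations(key_frames, scene_end):
    """Map each Grease Pencil key frame to the number of frames
    its rendered still should occupy in the VSE."""
    keys = sorted(key_frames)
    durations = {}
    # Each key holds until the next key starts.
    for current, following in zip(keys, keys[1:]):
        durations[current] = following - current
    # The last key holds through the end of the scene (inclusive).
    durations[keys[-1]] = scene_end - keys[-1] + 1
    return durations

# Hypothetical keys for a short cycle drawn on 3s and 4s.
keys = [1, 4, 8, 11, 15]
print(strip_durations(keys, scene_end=18))
# {1: 3, 4: 4, 8: 3, 11: 4, 15: 4}
```

Wiring that output into actual strip creation would be the bpy half of the automation, but even this much removes the error-prone counting.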
But each of the animations was complete and could be arranged together by my client to complete his spot. His part was done in After Effects, but to make sure they would sequence well together, I did a test using Blender.
Once I proved to myself that a reasonably acceptable result could be assembled from these core pieces, I passed them along to Michael for integrating into the rest of the spot. They posted the finished piece to the NBC12 Commercial Services Facebook page last week (click the image… embedded Facebook video looks a bit wonky until you click on it) and it will be airing on NBC12 in Richmond for at least another week, I think.
Generally speaking, I’m really happy with the finished spot. Michael and his team did a fantastic job of integrating the pieces together and giving a strong, unified feel. There’s a bit of a foot slide issue in how they included the walk cycle, but I think that’s only distracting to an animation nerd like me.
Workflow-wise, I’m really quite fond of the Blender-to-Krita pipe using Grease Pencil for roughs. There’s room for automating the process and I’d really like to explore adding a bit of 2D/3D integration, but the tools are definitely all there. I also ran into a bit of a nasty bug in the GPencil_EditStrokes branch: if you create a new scene in Blender by doing a full copy, the Grease Pencil layers aren’t really copied… they’re linked. So changes in my new scene would absolutely obliterate the Grease Pencil strokes in the original scene. I believe that bug has since been squashed, but boy did it scare the mess outta me while I was working.
Still, this project came together nicely and I’m quite pleased. I’m looking forward to using this technique more, and things will get even more interesting if the Krita animation branch gets merged into Krita proper.
[Update 2014-12-09]: The features from the GPencil_EditStrokes branch of Blender have been merged into the master development branch. They’ll be available for everyone to play with in Blender 2.73 (or, if you’re the adventurous sort, you can check it out in one of the nightly development builds).