The barn in the image is modeled after a Sears-Roebuck kit barn that was available for purchase and (unassembled) delivery to your building site until about 1940. This particular one was a 40×70 foot dairy barn. Modeled and rendered entirely with Blender. Textures were either procedural or painted by hand in Photoshop. A small bit of finishing, as mentioned below, was done in Photoshop. You can click the image for an 1152×864 version.
The barn consists of some very simple models. One thing that I do when I work is to try to plan ahead for beveling on all objects. Almost all of the main objects in the piece are beveled. If you’re going for photo-realism, you need to do this, as it adds the proper highlights to your models’ edges. Sometimes, it is useful to just model away, then use Blender’s built-in beveling function. This technique was used primarily for boards. In fact, all individual boards in the image (fence planks, white door trim, the open center door and even the electrical post) are duplicates of the same beveled cube.
Detail of boards in the center door. Beveled with the bevel editing tool.
Other times, I would model the bevels directly into the object as it was created. For example, the window casing was a plane, extruded several times to achieve the desired topology. When extruding inward, I would add one or two tiny extrusions to mimic beveling, instead of a single, long extrusion.
Window casing beveled while extruding.
Some people have trouble creating good geometry when they want to create a wall with windows. They resort to using the boolean tools, which aren’t the best way to tackle this. A good technique is to use as many sections for your wall base as you will have windows. After extruding your base upward, you can then select each face, do an extrude followed by a scale-down, then delete the resulting smaller face. It’s a decent way to cut holes in a wall, and it produces decent geometry.

For the face of the barn, though, I had another problem: the top edge had to follow the curve of the roof. The solution was to create a wall base (basically, a line) with enough divisions (60 in this case) to approximate the roof curve at the top. This made it easy to create windows as well. I extruded the bottom line only up to the bottom of the first break in the wall (the bottom edge of the center doorway), then again to the next edge (the bottom of the windows), then to the next (the top of the door), etc. Once this was done, I was able to simply select the appropriate faces and delete them, leaving me with nice wall holes.
Click on the image to show a brief animation of this technique.
Afterward, I used PET modeling to create the proper curvature in the upper edge, while not affecting any of the pertinent interior geometry.
Finished wireframe of the barn’s front face.
The roof is a duplicate of the upper edges of the face, extruded to form the proper surface, and finished with PET modeling to shape the overhang on the bottom. The roof overhang in the front of the barn is an extrusion of several of the edges on the front face of the roof. Quick test. Q: Main modeling tool? A: Extrude!
One of the three roof vents.
Roof vents are (beveled!) cubes, extruded several times, then scaled and smoothed in various ways to produce the proper shapes. Pretty basic.

Silo
The silo is a simple tube. I decided to actually model the bands running around the silo, as I wanted to be able to see them sticking out slightly on the sides of the tower, and texturing them would not be able to do that.
Tori are hard to model, so I used the NURBS Donut surface primitive, eventually converting it to a mesh and duplicating it along the height of the silo. The roof started as a single section which was then arrayed with the SpinDup tool. The chute is a flattened tube; the bands on it, a texture.
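The effect of the SpinDup tool, duplicating a selection in even rotational steps around an axis, can be sketched in plain Python. This is a stand-alone illustration of the math, not Blender code; the point set and segment count are made up:

```python
import math

def spin_dup(points, copies):
    """Duplicate a set of (x, y, z) points 'copies' times,
    rotating each duplicate evenly around the Z axis,
    the same idea as Blender's SpinDup tool."""
    result = []
    for i in range(copies):
        angle = 2 * math.pi * i / copies
        c, s = math.cos(angle), math.sin(angle)
        for x, y, z in points:
            # Standard 2D rotation in the XY plane; Z is unchanged.
            result.append((x * c - y * s, x * s + y * c, z))
    return result

# One roof section (two vertices), spun into 12 duplicates.
section = [(1.0, 0.0, 0.0), (2.0, 0.0, -0.5)]
roof = spin_dup(section, 12)
```

The first duplicate sits at the original position, and the copy halfway around (index 6) lands on the opposite side of the axis.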
The roof in edit mode. The SpinDup’ed section is selected.

Wires and Cables
The electrical wiring on the face of the barn and the one leading offstage were bezier curves with bezier circles set as the BevObs in the edit buttons. Don’t be afraid of these tools. They are easy to use, and give great results. Once I had them as I wanted, I converted them to meshes. Afterward, I was able to fine-tune their thickness by entering edit mode and using alt-S to scale along the vertex normals.

Grass
Two different techniques were used to generate the grass, but first I’d like to talk a little bit about how I decided to do what I did. In 3D, you need to have enough complexity and randomness so the image does not look bland. On the other hand, complete randomness also looks computer generated, and too much complexity will both overwhelm your image and shoot your rendering times through the roof. On the complexity issue, I was able to get away with much less grass than there is in nature: the grass ends just past the edge of the image, and seen from above, the grass that is there is unnaturally sparse. The low viewing angle of the camera helped the grass appear thicker than it actually was. That took care of the complexity issue. As for randomness, that came into play in the modeling process, so let’s talk about it now.
I’m a big fan of RipSting’s Fiber generator, and I use version 2.03 at the moment. I did not, however, use it for the main grass on this piece. As good as it is, Fiber cannot make branches or leaves on its grass, and I needed more detail than it could provide. I felt that the bulged head at the top of seed grass was an important visual for my image – it’s what the eye is used to seeing in such a situation. Here was my process:
The grass clump on the left, the seed grass on the right.
I modeled a single stalk of seed grass, keeping it as low poly as I could. Working on a copy of the ground object (a fractal subdivided plane pulled around a bit with PET), I selected some sections of vertices and faces, then subdivided them. There was not a lot of rhyme or reason to it, but I didn’t want to be completely random, either. The areas of denser vertices would receive more (and therefore denser) grass. I also used the Select Random verts tool a couple of times for some true computer generated noise.
Click the image for an animation of the ground subdivision process.
Then, I made the ground the parent of my grass stalk, turned on dupliverts, and made the ground mesh a static particle system. In order to get things oriented properly, I had to fool with the settings on both the particle system for the ground object, and tracking buttons on the grass. Once I was happy with the distribution of the grass, I made the dupliverted objects real.
Sample settings for the grass particles. Dupliverts is turned ON.
Note the TrackTo and Up settings for the grass object.
Grass as static dupliverted particles. Note the higher density where there are more vertices.
After that, I had a bunch of seed grass, all the same height and all rotated the same way. It looked like crap. So, I wrote a short Python script that applied a random but specifiable rotation and scaling to all selected objects. I ran it, and a few seconds later, I had some great-looking grass. Of course, the first time I did this, the grass was way too thick (one of the problems with my last image), so I went through this whole procedure several times until I achieved grass of the proper visual scale.

My complexity and randomness rules mentioned above indicated to me that this was not good enough. Even randomized and distributed, the single model of a grass stalk was too uniform. So, I did another pass of this whole procedure, this time using a model of a cluster of simple, bladed, V-shaped grass stalks. Of course, I used a different copy of the ground object, with a different semi-random subdivision pattern to make the static particles, so that the different kinds of grass had varying densities in relation to one another. After applying the randomizing script to this new grass object, things were looking very good.

Finally, I went to RipSting’s Fiber script for the green ground cover. It makes great flat grass, and that was all that I needed as an underlayer to complete the modeling.
Well, not quite. I still wasn’t entirely happy with the randomness. It was too uniformly random. So, I selected different sections of the finished grass and shrank them, using the randomizer script to scale them to between .4 and .8 of their original height. This is why, in the finished image, you see the grass get smaller in a path leading up to the front doors of the barn, and why there are areas in the extreme right foreground where it is significantly shorter. Finished product: random but showing signs of intelligent and patterned interaction; complex enough to look right, but not so much that it was impossible to work with.
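The core of a randomizer script like the one used on the grass can be sketched in plain Python. The function name, the limits and the dictionary stand-ins for objects are all illustrative, not the actual script:

```python
import random

def randomize(objects, max_rot_deg=15.0, scale_range=(0.8, 1.2), seed=None):
    """Apply a random but specifiable Z rotation and uniform
    scale to each object, breaking up identical duplicates."""
    rng = random.Random(seed)
    for ob in objects:
        ob["rot_z"] += rng.uniform(-max_rot_deg, max_rot_deg)
        ob["scale"] *= rng.uniform(*scale_range)
    return objects

# Five identical grass stalks, randomized with full rotation
# and the .4-.8 height scaling used on the foreground path.
stalks = [{"rot_z": 0.0, "scale": 1.0} for _ in range(5)]
randomize(stalks, max_rot_deg=180.0, scale_range=(0.4, 0.8), seed=42)
```

Running it repeatedly (as described above) compounds the scaling, which is exactly how sections of the grass were shrunk after the fact.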
Mostly, the background is a subdivided and extruded plane, pulled around with PET. It is very large, though, as I tried to make it to scale. The trees, however, are a good hybrid of modeling and texturing, so I’ll use them to make the transition from modeling to texturing here. All of the trees are cards. Working from the complexity principles above, I decided that four different trees, duplicated judiciously, would produce enough randomness to keep the viewer from noticing they were, in fact, duplicates. But there was no way I was modeling and rendering that many high-poly trees. Google Image search was little help, and I kind of wanted to do the whole thing myself anyway, so I fired up Arbaro, an open source Java-based tree-making application.
Using Arbaro, I made a fairly high-poly tree, based on the Cottonwood template, I believe, exported it as an .obj file, and brought it into Blender. I squared it up in front of an Ortho camera, did some basic texturing, turned on AO, and rendered a 400×400 .tga file with Alpha set to Key. Then, I applied the nicely provided Normal Map material and rerendered. I rotated the tree so it presented a distinctly different profile and rerendered both color and normal passes again. I did this two more times, until I had what looked like four very different tree images with normal and alpha information.
Left to right: color, alpha, normal.
I mapped the images to four different single-face planes (remember to click Normal Map in the texture buttons for your normal map image!), then began placing them by hand. I made nice clusters of trees, then duplicated those, keeping a close eye on the camera view so that I only placed trees where they would be seen. Three scripts helped out then. I used the randomizer script to give me all manner of short, tall, thin and wide trees. My Drop2Ground script planted the trees on the terrain, which would have been a huge pain to do by hand. Finally, my ZTrack script oriented the cards to face the camera while remaining upright.
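The ideas behind those last two scripts are simple enough to sketch in plain Python. The function names mirror the scripts, but the bodies, the dictionary stand-ins for objects and the toy terrain function are illustrative, not the actual code:

```python
import math

def drop_to_ground(card, terrain_height):
    """Drop2Ground idea: move the object straight down (or up)
    so it sits on the terrain surface directly beneath it."""
    x, y, _ = card["pos"]
    card["pos"] = (x, y, terrain_height(x, y))

def z_track(card, camera_pos):
    """ZTrack idea: yaw the card around Z so it faces the camera
    while staying upright (no pitch or roll, unlike full tracking)."""
    x, y, _ = card["pos"]
    cx, cy, _ = camera_pos
    card["rot_z"] = math.atan2(cy - y, cx - x)

hill = lambda x, y: 0.1 * (x * x + y * y)   # stand-in terrain
tree = {"pos": (3.0, 4.0, 0.0), "rot_z": 0.0}
drop_to_ground(tree, hill)
z_track(tree, camera_pos=(0.0, 0.0, 2.0))
```

Keeping the rotation to the Z axis only is what lets a flat card read as an upright tree from a low camera angle.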
Wireframe of the hills, covered with tree and grass cards.
I used the same technique to place grass cards in the distance, except that I rendered a portion of the grass I had already created for my base image.

Lighting
Before I go in depth with texturing, I’ll take a small break to mention the very simple lighting setup used here. AO plus a single ray shadow sun lamp. The AO was set to Add and to use the Sky Texture. The Sky, which I wanted to be simple, used the SunSky plugin. The addition of AO and ray shadows on sun lamps makes good outdoor lighting very easy to do in Blender.
The shingles needed to be mapped in a perfectly uniform manner, so I used Blender’s UV tools. LSCM mapping did it perfectly the first time, without any additional work on my part. Woohoo LSCM! For the bump map (unfortunately, normal maps only work on a flat plane in Blender 2.36 – it would have been great to use them everywhere), I jumped into Photoshop and created a tileable pattern. I then created an image that encompassed the entire roof, and hit it with my pattern. I’m not going to go into detail with the Photoshop techniques, as I’m a very good PS’er and that truly would be a whole other full-length feature. Suffice it to say that I created some custom brushes, then painted the dirt/bleaching/rust on the roof texture in various layers, using a screenshot of the roof’s profile for positioning. Afterward, I added some noise and blurred it a bit to simulate the speckling of the shingles. From pictures I have seen, it does not appear that the actual shingles on this type of barn were speckled, but that’s what people are used to seeing these days. To not include it would have been more historically accurate, but it wouldn’t have looked as realistic.
Click the thumbnails for larger versions.
One thing to be careful of is not to go overboard with the height settings in Blender when bump mapping. On the roof, you can barely see the bumping. It’s subtle, but there. Sometimes I see artists who need to make sure that every little bit they’ve worked on shows up, but it comes at the cost of quality in their finished work.

Barn Face
The detail on the front face of the barn is all texturing. I started with a screen shot of the wireframe of the barn. After deciding on a base color, I began to paint in details, like the paint runs, the rust under the electrical connections and the dirt stains under the cross piece and the center door. For the finer details, like the planks, I began in Illustrator. Illustrator lets you make arrays of objects, and in this case, I made an array of 60 vertical lines, evenly spaced. I brought those lines into Photoshop, and they became the base of my bump map. I ticked in the horizontal lines that represented plank joints by hand with a special brush, once again random enough that it looked good, but keeping in mind that someone putting together a real barn isn’t striving for truly mathematical randomness. They just want to efficiently use the wood that they were provided.
Click the thumbnail for a larger version.
And here is another place where I deviated from the actual barn structure. I think that each facing strip in the real barn kit ran from bottom to top without a break. In my image, the verticals are assembled from multiple planks. The reason? The single-strip method, although historically accurate, looked fake in 3D. You see it, and your BS detector says “Duh. No one built barns that way.” Of course, they did, but it’s not a convincing visual.

I will share one specific Photoshop trick with you, as I just came up with it and it was useful twice in this image, and no doubt will be in the future as well. If you need to create variations within discrete cells, say each plank in a wood texture or each square in a silo texture, noise and nearest-neighbor scaling are your best friends. Count the number of cells you want to have. In the case of the barn face, I think it was 60. Create a new image that is 60 pixels by 1 pixel. Apply noise to it – start at 40% or so – then scale it up to your full texture size, using the Nearest Neighbor method (NOT bicubic/bilinear/etc.). You will have, in this case, a nice full-sized image with 60 vertical strips of varying intensities. You can use this image as a mask for colorization, adjusting levels, or anything else you would like to do to your base color map. For the silo, I made an original image that was 50 wide by 20 high, then noised and scaled, giving me a grid of 50×20 distinct cells on which to run embossing/color/blurring/noise filters in order to achieve my desired color and bump maps.

Silo
Once again, hand-painted in Photoshop, using the cell technique discussed above for the patterning on the main body of the silo. I don’t usually do this, but here I layered two color maps within Blender. Normally, I do it in PS, but I already had things as I liked them with the first one and did not want to mess with it.
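If you would rather script the cell trick than click through Photoshop, the same noise-plus-nearest-neighbor idea can be sketched in plain Python (one row of cells only, and the function is illustrative, not a tool from the image):

```python
import random

def cell_noise(cells, width, amount=0.4, seed=None):
    """One random intensity per cell, then each value repeated
    across the full-size row (nearest-neighbor upscaling), giving
    distinct strips of varying intensity -- no interpolation,
    which is exactly why you avoid bicubic/bilinear scaling."""
    rng = random.Random(seed)
    base = [0.5 + rng.uniform(-amount, amount) for _ in range(cells)]
    return [base[i * cells // width] for i in range(width)]

# 60 planks across a 1920-pixel-wide texture row.
row = cell_noise(60, 1920, seed=1)
```

For a grid like the silo’s 50×20 cells, you would run the same idea in two dimensions; the key property is that every pixel in a cell shares the exact same value.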
Click the thumbnails for larger versions.

Other Wood
I was feeling lazy, so I cribbed the board textures from my Dutch sheep barn image and fooled around with their contrast so the bumping was more prominent. Once again, four different boards sufficed for duplication, although each texture was scaled and moved around a bit by hand.
One of the textures used for wood. Painted in PS.

More Randomness and Complexity
The grass and trees were both too uniform in color for my tastes, the trees in particular. So, to the tree and grass textures, I added two new channels that were set to World coordinates, and that added some color variation on a large scale. To test it, I applied the material to the ground object and rendered, giving a nice marbled texture of tans and greens. Once I had it subtle enough that it didn’t stick out, yet was still noticeable, I added it to the materials of the trees and grass. It varied their coloring just enough to give them the last bit of realism they needed.

Also, to add one last piece of visual complexity at low rendering cost, I rendered the most dense portion of the grass from above, with an Ortho camera, using the final render’s lighting setup. I took this image into Photoshop and played around with it until I was happy. Once done, I used this rendered image of grass as the new ground texture, tiled. Normally I avoid tiling. It looks like crap. But in this case, I knew that it was only there to provide some complexity, and the tiles would not be seen beneath the modeled grass.
Render of grass from above, used as the ground’s texture map.

Galvanized Steel
I made a nice procedural galvanized steel material for the roof vents. It might be useful to someone, so I thought I’d share it.
The farmer and the girl were both made with MakeHuman 2 Alpha. It doesn’t have targets for children, so I tweaked the child mesh after I had it out of MakeHuman. The characters were rigged and posed with my standard biped armature. Once posed, I used the Apply Deformation script to create a new mesh in the posed position. I then subsurfed, applied the subsurfing to get a higher-res mesh, and deleted all body parts that would be under clothing. For coloring, I began by trying the MakeHuman project’s subsurface scattering script, but the results were too subtle for the small size of the characters in the image. In the end, I used vertex painting to lighten things like the ears, fingers and nostrils, where they would be more translucent, and to add eyebrows and color variation to the rest of the skin. The need to paint on a higher-res mesh was the reason for the subdivision mentioned earlier.
Vertex painting, unposed characters, MakeHuman meshes, and details of clothes creasing.
Once I had the characters posed and painted, I went back to the armature’s rest position and created very simple clothes. I allowed the armature to deform the clothes into the same position as the characters, then applied the deformation. I had to tweak the clothes with PET editing to get them to fall properly. Once the clothes were in place, I added some new vertex loops to up the mesh resolution in strategic places, then selected areas and moved them in and out with alt-S to make creases. In the final render, they are mostly too small to see, but they are there. Textures on the clothes were mostly procedural, although I did UV map the girl’s dress and apply a floral pattern to it. LSCM to the rescue again, for fast, accurate, painless UV mapping. You just think about how the actual clothes would be put together, then draw the seams. LSCM does the rest. Do the commercial packages offer it? If not, we could use it to convert some people to Blender, as it completely rocks.

Rendering and Post Work
Everything was put together and rendered in a single pass. I then made a duplicate of the whole .blend file, and gave all objects in the duplicate a gradient material, getting texture coordinates from an empty about the size of the whole scene. When rendered without lights, the material gave me a fake but usable 8-bit z-buffer image. I had tried to use the Show Z Buffer sequence plugin, but it kept giving me garbage.
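The gradient-material trick amounts to mapping each object’s distance through the scene-sized empty to an 8-bit gray value. A minimal sketch of that mapping (the math only, not the actual material setup):

```python
def fake_z_value(distance, near, far):
    """Map a distance to an 8-bit gray value, the way a gradient
    material keyed to scene-sized coordinates fakes a z-buffer:
    objects at 'near' come out black, objects at or beyond 'far'
    come out white."""
    t = (distance - near) / (far - near)
    t = min(max(t, 0.0), 1.0)   # clamp outside the gradient's span
    return round(t * 255)
```

Because the result is only 8 bits, it quantizes depth into 256 bands, which is plenty for masking blur, noise and desaturation in post, but far coarser than a real float z-buffer.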
Click for a larger view of the faked z buffer image.
I used the fake z image to apply depth cueing to the raw render. Basically, I applied the tiniest bit of noise (.5) and gaussian blur (.6%) using the z image as a mask. I also used it to desaturate and lighten the distant objects. After that, I applied a tiny bit of gaussian blur (.4%) and noise (2), in that order, to the overall image. I finished with the normal color correction and sharpening that I would do to any digital photo that crosses my desk.

Conclusion
Unlike many projects that I’ve worked on, this one remained fun almost the whole way through. Even writing this article was (almost) fun. To everyone who enjoyed my latest barn image, I’m glad that I could show you something nice. Maybe next time, I’ll do an interior view, with a collapsed roof, and hay everywhere, and all kinds of particles floating around in the streaming sunlight to show off the interior space…