The Age of Monochrome Illustration

Comic Strip Illustration using Brush-and-Ink

Is this the end of the Age of Monochrome Illustration?

I painted the illustration frame above many years ago using the “standard” comic strip technique: black ink applied to white card with a brush. At the time, I gave no thought to the idea that the technique might become outdated, let alone within my own lifetime.

I first learned to use the brush-and-ink technique myself while at university, although I received no formal training in it. I basically figured it out by making several visits to an exhibition of illustrative artwork that displayed work done for the BBC’s house magazine Radio Times. The exhibition took place at the Victoria & Albert Museum, which was just around the corner from Imperial College, and thus was very convenient for me. The front cover of the exhibition catalog is shown below.

Art of Radio Times Exhibition Catalog

Following that exhibition, my first attempt to use the brush-and-black-ink technique was to illustrate a poster for a lecture titled The Psychology of Gambling. My poster illustration is shown below.

Psychology of Gambling

It was also my responsibility to prepare the print masters for my posters. When preparing the master for the poster above, I learned a valuable lesson about the use of solid expanses of black! Although they do make for a striking design, halftone-screen reproduction processes didn’t handle them well, so they were best avoided in those days. (Modern printing techniques are less prone to this kind of problem, but it’s still something to bear in mind.)

Ian Ribbons

Recently, I wrote an article about my experiences in an Illustration class that I took at St. Martin’s School of Art, London, way back in 1982. My tutor for that class was Ian Ribbons, who (unbeknown to me at that time) was a fairly famous British illustrator. (I find it sobering to reflect that he may have been the only art teacher I ever had who was a noted artist in his own right.) My experience in that class, and Mr. Ribbons’ guidance, were immensely helpful to me in developing my own styles and approaches to art projects.

Years later, while browsing in a secondhand bookshop, I came across a copy of a 1963 book by another famous British illustrator, Robin Jacques, in which he compiled biographies and work samples of many contemporary artists (one of whom was Ian Ribbons, which was what drew my attention to the book). The book is called Illustrators at Work, and the front cover is shown below.

Monochrome vs. Black-and-White

The notable but unspoken common characteristic of every art sample in the Illustrators at Work book is that it is not only shown in monochrome (or grayscale in computer terms), but was specifically produced for monochrome-only reproduction.

(Such artwork is typically called “black and white”, but that’s not strictly accurate because much of it includes shades of gray. Here, I’ll use “black and white” to refer only to artwork that literally uses only those two colors, and does not include grays. I’ll use “monochrome” to refer to artwork that consists of gradations from one color—usually black—to white.)

That fact made me think about how much art, illustration and reproduction methods have changed during my lifetime. For most of the twentieth century, it was taken for granted that most artwork for printed reproduction would be monochrome, primarily for economic and technical reasons. Most books, newspapers and magazines were printed entirely or mostly using only black ink, so there was no possibility of reproducing anything in color.

Why Does Monochrome Work?

I notice that very few people stop to consider why we accept some monochrome images as being valid two-dimensional representations of a scene, when we would not accept certain other kinds of monochrome images.

For example, for many years most photographs were monochrome (“black and white”). Provided that the grayscale in the image corresponds to that of the actual scene, the human brain accepts it as valid and can interpret the content, for example by recognizing a face.

However, if the grayscale in the image does not correspond to that in the real scene, the brain cannot interpret it correctly. For example, if shown a monochrome photographic negative, most people would have difficulty identifying a face that they would immediately recognize if shown the corresponding positive image.

Why is this the case?

I described in a previous post how the human visual system relies on various types of light receptor cells within our eyes. One type of receptor, called “Rods”, provides us with monochrome vision in low light conditions. It is because of this ability that we accept a monochrome image as being a valid representation of a scene; our brains just assume that we’re looking at something in low light.

Reproduction of Illustrations

Until the twentieth century, illustrations that were intended for printed reproduction were often produced using engraving techniques. This involved creating the illustration by literally incising lines into a metal or wood surface, so all shading had to consist of patterns of lines or dots. Perhaps one of the most famous masters of this technique was Sir John Tenniel, who illustrated Lewis Carroll’s “Alice in Wonderland” and similar works.

Last year, I parodied Tenniel’s style to produce a satirical image of a well-known politician throwing a characteristic temper tantrum. Tenniel’s original images of Tweedledum and Tweedledee, which inspired my image, were woodblock cuts, but mine is a pen-and-ink drawing.

Tweedle Don

Limitations Stimulate Creativity

The restriction to a single color, and the inability to print continuous shades of even that one color, forced artists to develop many sophisticated drawing techniques that used black and white patterns to simulate continuous tones, such as cross-hatching and stippling.

My Tenniel parody above shows samples of cross-hatching, whereas the image below shows a sample of stippling, in my never-finished portrait of the young H G Wells.

Unfinished Portrait of H G Wells

I mentioned above one artist who excelled in such techniques, Robin Jacques. His artwork has appeared in many children’s books and is justly world-famous. Another master of the art was Eric Fraser, whose work appears on the cover of the Art of Radio Times exhibition catalog shown above.

Frank Patterson’s Linework

A less famous artist who excelled in monochrome illustration, and particularly in the use of linework, was Frank Patterson, most of whose work was produced for British cycling magazines from the 1920s through the 1940s.

The illustration of a road across Haworth Moor (shown below) is a spectacular sample of how Patterson could create a dynamic and emotive scene merely from black lines. This is clearly one of those cases where a photograph of the scene would probably be far less effective than the artist’s imaginative creation.

Haworth Moor. Copyright Frank Patterson

Conclusion: The Brave New World of Full-Color Illustration

As I mentioned above, the situation now is that, in most publications, there are no restrictions on color reproduction at all. Every image can be reproduced in full color at no additional cost, relative to monochrome reproduction.

While this opens up new creative possibilities for artists, it does mean perhaps that we will never again see the development of ingenious new techniques for monochrome artwork.

Figure Drawing Techniques & Options

Ballpoint Pen Illustration of Pallab Ghosh as “Super-Editor”

This article describes some figure drawing techniques for human figures. Even in technical illustration tasks, it’s sometimes desirable to be able to include human figures. However, depicting human figures accurately can be time-consuming, so I’ll suggest some time-saving options.

I will discuss:

  • Realistic drawings or paintings for “finished” artwork,
  • Figure sketching for storyboarding,
  • Cartooning as a way of producing acceptable representations quickly.

Learning to draw is itself a complex skill, and drawing the human figure is perhaps one of the most demanding tasks any artist can face. I’m aware that many entire books have been written on the subject. There are also many books on the subjects of cartooning and storyboarding, so this will be a very cursory overview of my own experience. Nonetheless, the techniques I offer here may be helpful if you need to create such artwork.

The illustration at the top of this article is part of a poster that I produced for Pallab Ghosh, who was at that time a fellow student at Imperial College, London. It was drawn entirely using black ballpoint pens, then scanned to make the final poster. There are more details about this drawing below.

To develop and maintain my skills, I frequently attended “Life Drawing” sessions, which typically involve drawing or painting a live human model.

Figure Drawing: Pencil Technique

Largely as a result of my experience at Life Drawing sessions, I evolved a standard technique for pencil drawing. I prefer to draw in pencil because it is relatively fast, and requires minimal preparation and materials, while still allowing for some correction of errors. The from-life drawing below shows an example of this technique, from a session at Cricklade College, Andover, UK.

Life Drawing Sample in Pencil

It’s usual for models in Life Drawing classes to pose nude, and this was the case for the drawing above. Therefore, I’ve cropped the image so that it won’t be “NSFW”!

Speed is of the essence in life drawing sessions, because live models cannot hold their poses indefinitely. Even in a very relaxed pose, most models need a break after an hour, and most poses are held for only five to thirty minutes. Therefore, even though my technique allows for the correction of errors, there is usually little time to do that.

My pencil technique certainly does not follow “conventional wisdom”, and in fact I have found some standard advice to be counter-productive. The details of my technique are:

  • Pencils. I find it best to use an HB “writing” pencil instead of the usually-recommended soft drawing pencil. I find that the softer pencils wear down too quickly, and that their marks have an annoying tendency to smudge. Eagle writing pencils also seem to have smoother graphite than so-called “drawing” pencils, which gives a more uniform line.
  • Paper. I use thin marker paper rather than heavy Bristol board or watercolor paper. Again, the smooth surface of the marker paper allows for more subtle shading effects, because the pencil line does not “catch” on irregularities in the paper surface.
  • Sharpening. I don’t use a pencil sharpener. Instead, I sharpen my pencils by carving off the wood with a knife, leaving about 5mm of graphite projecting, then rub the tip to a point using sandpaper. This is a technique that I actually learned at school during Technical Drawing O-level classes. The benefits are that I don’t have to sharpen the pencil so frequently, and can adjust the shape of the point to provide either a very fine line or a broader “side” stroke.

Figure Drawing Techniques: Artwork for Scanning

It sometimes seems that there’s an attitude that “pencils are for sketching only”, and that it’s not possible to produce “finished artwork” in pencil. Hopefully, the sample above will demonstrate that that’s not true.

However, it is true that pencil artwork can be difficult to scan. Even the darkest lines created by a graphite pencil are typically a dark gray rather than true black, so there is often a lack of dynamic shading range in a pencil drawing.

Reproducing printed versions of continuous-tone images requires application of a halftone screen, and such halftoning typically does not interact well with the subtleties of pencil shading.

To solve these scanning and printing problems with early photo-reproduction equipment, I developed a “pencil-like” technique using black ballpoint pen, and used it in the poster portrait shown above.

Pallab was standing for the office of Student Newspaper Editor, and, for his election poster, he wanted to be depicted as Superman (his idea—not mine!). It was of course important that the illustration would be recognizable as being Pallab. It was also important that the artwork I produced be:

  1. Monochrome,
  2. Easy to scan using the Student Newspaper’s reproduction camera.

It’s not particularly obvious from the reduced-size reproduction of the portrait above, but in fact there are no shades of gray in the drawing. The drawing consists entirely of fine black lines, which could be scanned and printed without requiring a halftone screen.

Figure Drawing Techniques: Storyboarding

If you’re working in advertising or video production, Storyboarding may form a significant part of your work. This involves sketching out the scenes of an advertisement or other video in a comic strip format.

Recently, I’ve also seen the use of the term “Storyboarding” in connection with Agile software development, where it’s used to describe a task sheet. Even though I have substantial software development experience myself, I’m a little cynical about this usage, because it seems like it’s just a way to make a simple and unremarkable concept sound “hip” and exciting. Anyway, that usage is not what I’m referring to here!

Probably the earliest use of the storyboarding technique was for movies. Every scene of a planned movie would be drawn out, showing the content of the scene, movement of actors, camera movements, and so on. Some directors created immensely detailed storyboards, Alfred Hitchcock being perhaps the best-known.

Several years ago, I attended a Storyboarding workshop at the American Film Institute in Los Angeles, presented by Marcie Begleiter. Marcie went on to write the book on storyboarding: “From Word to Image”. The AFI workshop gave me a chance to practice producing storyboard artwork, in pencil, “on the fly”. A sample extract from one of the storyboards that I produced during the class is shown below. This was drawn from memory, without reference material of any kind. The script from which I was working was for a 1940s-era film noir story, hence the period costumes and transport.

Sample Storyboard Excerpt

Storyboards are usually not intended as finished artwork, of course, as is the case for the sample above. They are used as “working drawings”, from which a final video, movie or even comic strip will be created. However, this type of artwork does call for rapid figure drawing skills, and storyboard illustrations can sometimes later be worked up as finished pictures in their own right.

Figure Drawing Techniques: Cartooning

Fortunately, there’s a way to represent the human figure that is generally much quicker, and doesn’t require precise drawing skills.

The human brain has evolved great acuity in the recognition of human facial features and other details of the human body. Even people who themselves have no drawing skills are intuitively good at discerning the smallest differences between human faces. In fact, that’s partly why figure drawing and portraiture are relatively difficult for artists; simply because your viewers will spot tiny errors that they would never notice in any other subject.

Conversely, our brains have also evolved the ability to discern human features in very simple shapes. Our brains can abstract human-looking details from images that do not accurately depict humans (or even images that are non-living, such as the “Man in the Moon”). The technical name for this phenomenon is pareidolia. Artists can take advantage of this tendency by creating cartoons, which are deliberately not accurate portrayals of the human body, but which we nonetheless accept as credible representations.

I frequently use cartooning techniques in my work, either to provide a lighthearted feel to an article, or else simply to save time! The example below shows an illustration for an early multimedia title that I created, which was intended to help owners of PCs understand and upgrade their systems. This was not a humorous eBook: it was intended to provide useful and serious information.

PC Secrets Title Cartoon

Clearly, this is not a realistic image, but the average viewer understands it quickly, and it serves its purpose in showing the intention of the associated content.

Summary

Even in technical illustration, it’s sometimes desirable to include human figures, either completely or partially. Drawing accurate figures requires significant skill, and can be time-consuming. Quicker alternatives include storyboard-style sketching, and cartooning. I’ve explained why cartoon-style drawing should be considered even when illustrating “serious” technical work.

Digital Color Palettes: the Essential Concepts


An Unlikely Computer Artist

The word “palette” (easily confused with “pallet”) has several meanings: a “pallet” is a tray used to transport items, while a “palette” is the board used by artists to mix colors (as shown in the fantasy illustration above, which I produced many years ago for a talk on Computer Artwork). In this article, I’ll discuss the principles of Digital Color Palettes. If you’re working with digital graphics files, you’re likely to encounter “palettes” sooner or later. Even though the use of palettes is less necessary and less prevalent in graphics now than it was years ago, it’s still helpful to understand them, and the pros and cons of using them.

Even within the scope of digital graphics, there are several types of palette, including Aesthetic Palettes and Technical Palettes.

I discussed the distinction between bitmap and vector representations in a previous post [The Two Types of Computer Graphics]. Although digital color palettes are more commonly associated with bitmap images, vector images can also use them.

The Basic Concept

A digital color palette is essentially just an indexed table of color values. Using a palette in conjunction with a bitmap image permits a type of compression that reduces the size of the stored bitmap image.

In A Trick of the Light, I explained how the colors you see on the screen of a digital device display, such as a computer or phone, are made up of separate red, green and blue components. The pixels comprising the image that you see on-screen are stored in a bitmap matrix somewhere in the device’s memory.

In most modern bitmap graphic systems, each of the red, green and blue components of each pixel (which I’ll also refer to here as an “RGB Triple” for obvious reasons) is represented using 8 bits. This permits each pixel to represent one of 2²⁴ = 16,777,216 possible color values. Experience has shown that this range of values is, in most cases, adequate to allow images to display an apparently continuous spectrum of color, which is important in scenes that require smooth shading (for example, sky scenes). Computers are generally organized to handle data in multiples of bytes (8 bits), so again this definition of an RGB triple is convenient. (About twenty years ago, when memory capacities were much smaller, various smaller types of RGB triple were used, such as the “5-6-5” format, where the red and blue components used 5 bits and the green component 6 bits. This allowed each RGB triple to be stored in a 16-bit word instead of 24 bits. Now, however, such compromises are no longer worthwhile.)
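
Just for illustration, here is a minimal Python sketch of how such a “5-6-5” triple could be packed into, and unpacked from, a 16-bit word; the low-order bits of each channel are simply discarded, which is exactly why the format is lossy.

def pack_565(r, g, b):
    # Reduce the 8-bit channels to 5, 6 and 5 bits, then pack into one 16-bit word.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_565(word):
    # Expand back to approximate 8-bit channels; the discarded low bits are lost for good.
    r = (word >> 11) & 0x1F
    g = (word >> 5) & 0x3F
    b = word & 0x1F
    return (r << 3, g << 2, b << 3)

# Example: a mid-orange RGB triple survives the round trip only approximately.
print(unpack_565(pack_565(255, 165, 0)))   # (248, 164, 0)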

There are, however, many bitmap images that don’t require the full gamut of 16,777,216 available colors. For example, a monochrome (grayscale) image requires only shades of gray, and in general 256 shades of gray are adequate to create the illusion of continuous gradation of color. Thus, to store a grayscale image, each pixel only needs 8 bits (since 2⁸ = 256), instead of 24. Storing the image with 8 bits per pixel (instead of 24 bits) reduces the file size by two-thirds, which is a worthwhile size reduction.

Even full-color images may not need the full gamut of 16,777,216 colors, because they have strong predominant colors. In these cases, it’s useful to make a list of only the colors that are actually used in the image, treat the list as an index, and then store the image using the index values instead of the actual RGB triples.

The indexed list of colors is then called a “palette”. Obviously, if the matrix of index values is to be meaningful, you also have to store the palette itself somewhere. The palette can be stored as part of the file itself, or somewhere else.

To restate, whether implemented in hardware or software, an image that uses a palette does not store the color value of each pixel as an actual RGB triple. Instead, each color value is stored as an index to a single entry in the palette. The palette itself stores the RGB triples. You specify the pixels of a palettized* image by creating a matrix of index values, rather than a matrix of the actual RGB triples. Because each index value is significantly smaller than a single triple, the size of the resulting bitmap is much smaller than it would be if each RGB triple were stored.
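
As a concrete (and deliberately tiny) illustration, here is a minimal Python sketch, independent of any real file format, that builds a palette from the colors actually used in an image and then stores the image as index values:

# A tiny 2x3 "image" of RGB triples; only three distinct colors are used.
pixels = [
    (255, 255, 255), (0, 0, 0),       (255, 255, 255),
    (0, 0, 170),     (255, 255, 255), (0, 0, 170),
]

# Build the palette: one entry per distinct color, in order of first appearance.
palette = []
for p in pixels:
    if p not in palette:
        palette.append(p)

# Store the image as a matrix of small index values instead of full RGB triples.
indexed = [palette.index(p) for p in pixels]

print(palette)   # [(255, 255, 255), (0, 0, 0), (0, 0, 170)]
print(indexed)   # [0, 1, 0, 2, 0, 2]

With only three palette entries, each index needs just two bits, instead of the 24 bits needed for a full RGB triple.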

The table below shows the index values and colors for a real-world (albeit obsolete) color palette; the standard palette for the IBM CGA (Color Graphics Adapter), which was the first color graphics card for the IBM PC. This palette specified only 16 colors, so it’s practical to list the entire palette here.

CGA Color Palette Table

(* For the action associated with digital images, this is the correct spelling. If you’re talking about placing items on a transport pallet, then the correct spelling is “palletize”.)

Aesthetic Palettes*

In this context, a palette is a range of specific colors that can be used by an artist creating a digital image. The usual reason for selecting colors from a palette, instead of just choosing any one of the millions of available colors, is to achieve a specific “look”, or to conform to a branding color scheme. Thus, the palette has aesthetic significance, but there is no technical requirement for its existence. The use of aesthetic palettes is always optional.

(* As I explained in Ligatures in English, this section heading could have been spelled “Esthetic Palettes”, but I personally prefer the spelling used here, and it is acceptable in American English.)

Technical Palettes

This type of palette is used to achieve some technological advantage in image display, such as a reduction of the amount of hardware required, or of the image file size. Some older graphical display systems require the use of a color palette, so their use is not optional.

Displaying a Palettized Image

The image below shows how a palettized bitmap image is displayed on a screen. The screen could be any digital bitmap display, such as a computer, tablet or smartphone.

Palette-based Display System

The system works as follows (the step numbers below correspond to the callout numbers in the image):

  1. As the bitmap image in memory is scanned sequentially, each index value in the bitmap is used to “look up” a corresponding entry in the palette.
  2. Each index value acts as a lookup to an RGB triple value in the palette. The correct RGB triple value for each pixel is presented to the Display Drivers.
  3. The Display Drivers (which may be Digital-to-Analog Converters, or some other circuitry, depending on the screen technology) create red, green and blue signals to illuminate the pixels of the device screen.
  4. The device screen displays the full-color image reconstituted from the index bitmap and the palette.
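
In software terms, steps 1 and 2 amount to nothing more than a table lookup. Here is a minimal sketch, reusing the hypothetical three-color palette from the earlier example:

# Reconstituting full-color pixels from an index bitmap plus its palette.
palette = [(255, 255, 255), (0, 0, 0), (0, 0, 170)]   # RGB triples
indexed = [0, 1, 0, 2, 0, 2]                          # the stored bitmap of index values

rgb_pixels = [palette[i] for i in indexed]   # steps 1 and 2: the palette lookup
# rgb_pixels is what gets handed on to the Display Drivers (step 3).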

Hardware Palette

In the early days of computer graphics, memory was expensive and capacities were small. It made economic sense to maximize the use of digital color palettes where possible, to minimize the amount and size of memory required. This was particularly important in the design of graphics display cards, which required sufficient memory to store at least one full frame of the display. By adding a small special area of memory on the card for use as a palette, it was possible to reduce the size of the main frame memory substantially. This was achieved at the expense of complexity, because now every image that was displayed had to have a palette. To avoid having to create a special palette for every image, Standard color palettes and then Adaptive color palettes were developed; for more details, see Standard & Adaptive Palettes below.

One of the most famous graphics card types that (usually) relied on hardware color palettes was the IBM VGA (Video Graphics Array) for PCs (see https://en.wikipedia.org/wiki/Video_Graphics_Array).

As the cost of memory has fallen, and as memory device capacities have increased, the use of hardware palettes has become unnecessary. Few, if any, modern graphics cards implement hardware palettes. However, there are still some good reasons to use software palettes.

Software Palette

Generally, the software palette associated with an image is included in the image file itself. The palette and the image matrix form separate sections within one file. Some image formats, such as GIF, require the use of a software palette, whereas others, such as JPEG, don’t support palettes at all.

Modern bitmap image formats, such as PNG, usually offer the option to use a palette, but do not require it.

Standard & Adaptive Palettes

Back when most graphics cards implemented hardware palettes, rendering a photograph realistically on screen was a significant problem. For example, a photograph showing a cloud-filled sky would include a large number of pixels whose values are various shades of blue, and the color transitions across the image would be smooth. If you were to try to use a limited color palette to encode the pixel values in the image, it’s unlikely that the palette would include every blue shade that you’d need. In that case, you were faced with the choice of using a Standard Palette plus a technique called Dithering, or else using an Adaptive Palette, as described below.

Standard Palette

Given that early graphics cards could display only palettized images, it simplified matters to use a Standard palette, consisting of only the most commonly-used colors. If you were designing a digital image, you could arrange to use only colors in the standard palette, so that it would be rendered correctly on-screen. However, the standard palette could not, in general, render a photograph realistically—the only way to approximate that was to apply Dithering.

The most commonly-used Standard palette for the VGA graphics card was that provided by BIOS Mode 13H.

Dithering

One technique that was often applied in connection with palettized bitmap images is dithering. The origin of the term “dithering” seems to go back to World War II. When applied to palettized bitmap images, the dithering process essentially introduces “noise” in the vicinity of color transitions, in order to disguise abrupt color changes. Dithering creates patterns of interpolated color values, using only colors available in the palette, that, to the human eye, appear to merge and create continuous color shades. For a detailed description of this technique, see https://en.wikipedia.org/wiki/Dither.
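
To show the principle in isolation, here is a minimal Python sketch of Floyd–Steinberg error-diffusion dithering that reduces a grayscale image to pure black and white. Dithering to a color palette works the same way, except that the “nearest available value” is chosen from the palette entries.

def dither_to_black_and_white(gray, width, height):
    # gray is a flat list of 0-255 values, row by row; returns pure 0/255 values.
    out = [float(v) for v in gray]
    for y in range(height):
        for x in range(width):
            i = y * width + x
            new = 255 if out[i] >= 128 else 0   # nearest available "palette" entry
            err = out[i] - new                  # the quantization error
            out[i] = new
            # Spread the error over neighbors not yet visited (Floyd-Steinberg weights).
            if x + 1 < width:
                out[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    out[i + width - 1] += err * 3 / 16
                out[i + width] += err * 5 / 16
                if x + 1 < width:
                    out[i + width + 1] += err * 1 / 16
    return [int(v) for v in out]

# A ramp from dark to light dithers to a mix of black and white pixels whose
# local density approximates the original gray levels.
print(dither_to_black_and_white([0, 85, 170, 255] * 4, 4, 4))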

While dithering can improve the appearance of a palettized image (provided that you don’t look too closely), it achieves its results at the expense of reduced image resolution, because the dithering of pixel values introduces “noise” into the image. Therefore, you should never dither an image that you want to keep as a “master”.

Adaptive Palette

Instead of specifying a Standard Palette that includes entries for any image, you can instead specify a palette that is restricted only to colors that are most appropriate for the image that you want to palettize. Such palettes are called Adaptive Palettes. Most modern graphics software can create an Adaptive Palette for any image automatically, so this is no longer a difficult proposition.
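
For example, assuming the Pillow imaging library is installed and an input file named photo.jpg exists (both purely illustrative assumptions), an adaptive 256-color palettized version can be produced like this:

from PIL import Image

img = Image.open("photo.jpg").convert("RGB")

# quantize() builds an adaptive palette for this particular image (median-cut
# by default for RGB input) and returns an indexed ("P" mode) image.
palettized = img.quantize(colors=256)

palettized.save("photo_palettized.png")
print(palettized.mode, len(palettized.getpalette()) // 3)   # 'P' and the number of palette entries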

A significant problem with Adaptive Palettes is that a display device that relies on a hardware palette can typically use only one palette at a time. This makes it difficult or impossible to display more than one full-color image on the screen. You can set the device’s palette to be correct for the first photograph and the image will look great. However, as soon as you change the palette to that for the second photograph, the colors in the first image are likely to become completely garbled.

Fortunately, the days when graphical display devices used hardware palettes are over, so you can use Adaptive Palettes where appropriate, without having to worry about rendering conflicts.

Should you Use Digital Color Palettes?

Palettization of an image is usually a lossy process. As I explained in a previous post [How to Avoid Mosquitoes], you should never apply lossy processes to “master” files. Thus, if your master image is full-color (e.g., a photograph), you should always store it in a “raw” state, without a palette.

However, if you want to transmit an image as efficiently as possible, it may reduce the file size if you palettize the image. This also avoids the necessity to share the high-quality unpalettized master image, which could be useful if you’re posting the image to a public web page.

If it’s obvious that your image uses only a limited color range, such as a monochrome photograph, then you can palettize it without any loss of color resolution. In the case of monochrome images, you don’t usually have to create a custom palette, because most graphics programs allow you to store the image “as 8-bit Grayscale”, which achieves the same result.
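
Again assuming Pillow and an illustrative file name, that conversion is essentially a one-liner:

from PIL import Image

# "L" mode stores one 8-bit luminance value per pixel, i.e. 256 shades of gray.
Image.open("monochrome_photo.jpg").convert("L").save("monochrome_photo_8bit.png")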

In summary, then, in general it’s best not to use palettes for full-color images. However, if you know that your image is intended to contain only a limited color range, then you may be able to save file space by using a palette. Experimentation is sometimes necessary in such cases. You may also want to palettize an image so that you don’t have to make the high-quality original available publicly. If you’re an artist who has created an image that deliberately uses a limited palette of colors, and you want to store or communicate those choices, then that would also be a good reason to use a palettized image.

How to Avoid Mosquitoes (in Compressed Bitmap Images)

In this post, I’m going to explain how you can avoid mosquitoes. However, if you happen to live in a humid area, I’m afraid my advice won’t help you, because the particular “mosquitoes” I’m talking about are undesirable artifacts that occur in bitmap images.

For many years now, my work has included the writing of user assistance documents for various hardware and software systems. To illustrate such documents, I frequently need to capture portions of the display on a computer or device screen. As I explained in a previous post, the display on any device screen is a bitmap image. You can make a copy of the screen image at any time for subsequent processing. Typically, I capture portions of the screen display to illustrate the function of controls or regions of the software I’m describing. This capture operation seems like it should be simple, and, if you understand bitmap image formats and compression schemes, it is. Nonetheless, I’ve encountered many very experienced engineers and writers who were “stumped” by the problem described here, hence the motivation for my post.

Below is the sample screen capture that I’ll be using as an example in this post. (The sample shown is deliberately enlarged.) As you can see, the image consists of a plain blue rectangle, plus some black text and lining, all on a plain white background.

Screen Capture Example

Sometimes, however, someone approaches me complaining that a screen capture that they’ve performed doesn’t look good. Instead of the nice, clean bitmap of the screen, as shown above, their image has an uneven and fuzzy appearance, as shown below. (In the example below, I’ve deliberately made the effect exceptionally bad and magnified the image – normally it’s not this obvious!)

Poor Quality Screen Capture, with Mosquitoes

In the example above, you can see dark blemishes in what should be the plain white background around the letters, and further color blemishes near the colored frame at the top. Notice that the blemishes appear only in areas close to sharp changes of color in the bitmap. Because such blemishes appear to be “buzzing around” details in the image, they are colloquially referred to as “mosquitoes”.

Typically, colleagues present me with their captured bitmap, complete with mosquitoes, and ask me how they can fix the problems in the image. I have to tell them that it actually isn’t worth the effort to try to fix these blemishes in the final bitmap, and that, instead, they need to go back and redo the original capture operation in a different way.

What Causes Mosquitoes?

Mosquitoes appear when you apply the wrong type of image compression to a bitmap. How do you know which is the right type of compression and which is wrong?

There are many available digital file compression schemes, but most of them fall into one of two categories:

  • Block Transform Compression
  • Lossless Huffman & Dictionary-Based Compression

Block Transform Compression Schemes

Most people who have taken or exchanged digital photographs are familiar with the JPEG (Joint Photographic Experts Group) image format. As the name suggests, this format was specifically designed for the compression of photographs; that is, images taken with some type of camera. Most digitized photographic images display certain characteristics that affect the best choice for compressing them. The major characteristics are:

  • Few sharp transitions of color or luminance from one pixel to the next. Even a transition that looks sharp to the human eye actually occurs over several pixels.
  • A certain level of electrical noise in the image. This occurs due to a variety of causes, but it has the effect that pixels in regions of “solid” color don’t all have exactly the same value. The presence of this noise adds high-frequency information to the image that’s actually unnecessary and undesirable. In most cases, removing the noise would actually improve the image quality.

As a result, it’s usually possible to remove some of the image’s high-frequency information without any noticeable reduction in its quality. Schemes such as JPEG achieve impressive levels of compression, partially by removing unnecessary high-frequency information in this way.

JPEG analyzes the frequency information in an image by dividing up the bitmap into small blocks of pixels (8×8 pixels for the transform itself, often grouped into 16×16 blocks when color subsampling is used). Within each block, high-frequency information is removed or reduced. The frequency analysis is performed by using a mathematical operation called a transform. The problem is that, if a particular block happens to contain a sharp transition, removing the high-frequency components tends to cause “ringing” in all the pixels in the block. (Technically, this effect is caused by something called the Gibbs Phenomenon, the details of which I won’t go into here.) That’s why the “mosquitoes” cluster around areas of the image where there are sharp transitions. Blocks that don’t contain sharp transitions, such as plain-colored areas away from edges in the example, don’t contain so much high-frequency information, so they compress well and don’t exhibit mosquitoes.

In the poor-quality example above, you can actually see some of the 16×16 blocks in the corner of the blue area, because I enlarged the image to make each pixel more visible.

Note that the removal of high-frequency information from the image results in lossy compression. That is, some information is permanently removed from the image, and the original information can never be retrieved exactly.

Huffman Coding & Dictionary-Based Compression Schemes

Computer screens typically display bitmaps that have many sharp transitions from one color to another, as shown in the sample screen capture. These images are generated directly by software; they aren’t captured via a camera or some other form of transducer.

If you’re reading this article on a computer screen, it’s likely that the characters you’re viewing are rendered with very sharp black-to-white transitions. In fact, modern fonts for computer displays are specifically designed to be rendered in this way, so that the characters will appear sharp and easy to read even when the font size is small. The result is that the image has a lot of important high-frequency information. Similarly, such synthesized images have no noise, because they were not created using a transducer that could introduce noise.

Applying block-transform compression to such synthesized bitmaps results in an image that, at best, looks “fuzzy” and at worst contains mosquitoes. Text in such bitmaps can quickly become unreadable.

If you consider the pixel values in the “mosquito-free” sample screen capture above, it’s obvious that the resulting bitmap will contain many pixels specifying “white”, many specifying “black”, and many specifying the blue shade. There’ll also be some pixels with intermediate gray or blue shades, in areas where there’s a transition from one color to another, but far fewer of those than of the “pure” colors. For synthesized images such as this, an efficient form of compression is that called Huffman Coding. Essentially, this coding scheme compresses an image by assigning shorter codewords to the pixel values that appear more frequently, and longer codewords to values that are less frequent. When an image contains a large number of similar pixels, the overall compression can be substantial.
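
As a toy sketch of the idea (building a Huffman code for a handful of named pixel values, not a complete image encoder), and assuming nothing beyond the Python standard library:

import heapq
from collections import Counter

def huffman_codes(pixels):
    # Returns {value: codeword}; frequent pixel values get the shortest codewords.
    counts = Counter(pixels)
    if len(counts) == 1:                 # degenerate case: only one color present
        return {next(iter(counts)): "0"}
    heap = [[count, [value, ""]] for value, count in counts.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)         # the two least-frequent groups...
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]      # ...get one more bit prepended to their codes
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

# Mostly "white" pixels, some "black" and "blue": white ends up with a 1-bit codeword.
sample = ["white"] * 90 + ["black"] * 7 + ["blue"] * 3
print(huffman_codes(sample))   # e.g. {'blue': '00', 'black': '01', 'white': '1'}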

Another lossless approach is to create an on-the-fly “dictionary” of pixel sequences that appear repeatedly in the image. Again, in bitmaps that contain regions with repeated patterns, this approach can yield excellent compression. The details of how dictionary compression works can be found in descriptions of, for example, the LZW algorithm.
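
Here is a similarly minimal sketch of the dictionary-building idea: a simplified LZW-style compressor, operating on a string of single-character “pixel” values rather than a real bitmap.

def lzw_compress(data):
    # Start with a dictionary of all single symbols, then grow it with longer
    # and longer sequences as they are encountered in the data.
    dictionary = {ch: i for i, ch in enumerate(sorted(set(data)))}
    current, output = "", []
    for ch in data:
        if current + ch in dictionary:
            current += ch
        else:
            output.append(dictionary[current])
            dictionary[current + ch] = len(dictionary)
            current = ch
    if current:
        output.append(dictionary[current])
    return output

# A run of repeated "pixels" compresses from 12 symbols down to 7 output codes.
print(lzw_compress("WWWWWWBBWWWW"))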

Unlike many block transform schemes, such compression schemes are lossless. Even though all the pixel values are mapped from one coding to another, there is no loss of information, and, by reversing the mapping, it’s possible to restore the original image, pixel-for-pixel, in its exact form.

One good choice for a bitmap format that offers lossless compression is PNG (Portable Network Graphics). This format uses a two-step compression method: dictionary-based compression first, followed by Huffman coding of the results.
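
In practical terms, if you capture screens with a script, avoiding mosquitoes just means choosing the output format carefully. Here is a hedged sketch using Pillow (ImageGrab is available on Windows and macOS; on other platforms, substitute your usual capture tool):

from PIL import ImageGrab

capture = ImageGrab.grab().convert("RGB")        # grab the current screen as a bitmap
capture.save("capture.png")                      # PNG: lossless, so no mosquitoes
capture.save("capture_lossy.jpg", quality=75)    # JPEG: block transform, mosquitoes likely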

A Mosquito-Free Result

Here is the same screen capture sample, but this time I saved the bitmap as a PNG file instead of as a JPEG file. Although PNG does compress the image, the compression is lossless and there’s no block transform. Hence, there’s no danger that mosquitoes will appear.

High Quality Screen Capture without Artifacts

Avoiding Mosquitoes: Summary

As I’ve shown, the trick to avoiding mosquitoes in screen capture bitmaps or other computer-generated imagery is simply to avoid using file formats or compression schemes that are not suitable for this kind of image. The reality is that bitmap formats were designed for differing purposes, and are not all equivalent to each other.

  • Unsuitable formats include those that use block-transform and/or lossy compression, such as JPEG.
  • Suitable formats are those that use lossless Huffman coding and/or dictionary-based compression, or no compression at all, such as PNG.

Data Extinction: The Problem of Digital Obsolescence

Dinosaur PCB Graphic illustrating Digital Obsolescence

I suspect that many of us, as computer users, have had the experience of searching for some computer file that we know we saved somewhere, but can’t seem to find. Even more frustrating is the situation where, having spent time looking for the file and having found it, we discover either that the file has been corrupted, or is in a format that our software can no longer read. This is perhaps most likely to happen with digital photographs or videos, but it can also happen with text files, or even programs themselves. This is the problem of Digital Obsolescence.

In an earlier post, I mentioned a vector graphics file format called SVG, and I showed how you can use a text editor to open SVG files and view the individual drawing instructions in the file. I didn’t discuss the reason why it’s possible to do that with SVG files, but not with some other file types. For example, if you try to open an older Microsoft Word file (with a .doc extension) with a text editor, all you’ll see are what appear to be reams of apparently random characters. Some file types, such as SVG, are “text encoded”, whereas other types, such as Word .doc files, are “binary encoded”.

Within the computer industry, there has come to be an increasing acceptance of the desirability of using text-encoded file formats for many applications. The reason for this is the recognition of a serious problem, whereby data that has been stored in a particular binary format eventually becomes unreadable because software is no longer available to support that format. In some cases, the specification defining the data structure is no longer available, so the data can no longer be decoded.

The general problem is one of “data retention”, and it has several major aspects:

  • Storing data on physical media that will remain accessible and readable for as long as required,
  • Storing data in formats that will continue to be readable for as long as required,
  • Where files are encrypted or otherwise secured, ensuring that passwords and keys are kept in some separate but secure location where they can be retrieved when necessary.

Most people who have used computers for a few years are aware of the first problem, as storage methods have evolved from magnetic tapes to optical disks, and so on. However, fewer people consider the second and third problems, which is what I want to discuss in this article.

Digital Obsolescence: The Cost of Storage and XML

In the early days of computers, device storage capacities were very low, and the memory itself was expensive. Thus, it was important to make the most efficient use of all available memory. For that reason, binary-encoded files tended to be preferred over text-encoded files, because binary encoding was generally more efficient.

However, those days are over, and immense quantities of memory are available very cheaply. Thus, even if text-encoding is less efficient than binary-encoding, that’s no longer a relevant concern in most cases.

Many modern text-encoding formats (including SVG and XHTML) are based on XML (eXtensible Markup Language). XML provides a basic structure for the creation of “self-describing data”. Such data can have a very wide range of applications, so, to support particular purposes, most XML files use document models, called Document Type Definitions (DTDs) or schemas. Many XML schemas have now been published, including, for example, Microsoft’s WordML, which is the schema that defines the structure of the content of newer Word files (those with a .docx extension).
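
As a tiny illustration of what “self-describing data” means in practice (using a made-up diary structure, not any published schema), the following writes and re-reads a fragment of XML using only Python’s standard library:

import xml.etree.ElementTree as ET

# Build a small, self-describing document: the element and attribute names
# explain the data even to a reader with no specification to hand.
diary = ET.Element("diary")
entry = ET.SubElement(diary, "entry", date="1982-06-01")
entry.text = "Attended the illustration class."

text = ET.tostring(diary, encoding="unicode")
print(text)   # <diary><entry date="1982-06-01">Attended the illustration class.</entry></diary>

# Decades later, any XML parser (or a human with a text editor) can still read it.
print(ET.fromstring(text).find("entry").get("date"))   # 1982-06-01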

XML is a huge subject in its own right, and many books have been written about it, even without considering the large number of schemas that have been created for it. I’ll have more to say about aspects of XML in future posts.

Digital Obsolescence: Long Term vs. Short Term Retention

Let’s be clear that the kind of “data retention” I’m talking about here refers to cases where you want to keep your data for the long term, and ensure that your files will still be readable or viewable many years in the future. For example, you may have a large collection of digital family photos, which you’d like your children to be able to view when they have grown up. Similarly, you may have a diary that you’ve been keeping for a long time, and you’ll want to be able to read your diary entries many years from now.

This is a very different problem from short-term data retention, which is a problem commonly faced by businesses. Businesses need to store all kinds of customer and financial information (and are legally required to do so in many cases), but the data only needs to be accessible for a limited period, such as a few years. Much of it becomes outdated very quickly in any case, so very old data is effectively useless.

There are some organizations out there who will be happy to sell you a “solution” to long-term data retention that’s actually useful only for short-term needs, so it’s important to be aware of this distinction.

Digital Obsolescence: Examples from my Personal Experience

In the early “pre Windows” days of DOS computers, several manufacturers created graphical user interfaces that could be launched from DOS. One of these was the “Graphical Environment Manager” (GEM), created by Digital Research. I began using GEM myself, largely because my employer at the time was using it. One facet of GEM was the “GEM Draw” program, which was (by modern standards) a very crude vector drawing program. I produced many diagrams and saved them in files with the .GEM extension.

A few years later, I wanted to reuse one of those GEM drawing files, but I’d switched to Windows, and GEM was neither installed on my computer nor even available to buy. I soon discovered that there was simply no way to open a GEM drawing file, so the content of those files had become “extinct”.

Similarly, during the 1990s, before high-quality digital cameras became available, I took many photographs on 35mm film, but had the negatives copied to Kodak Photo-CDs. The Photo-CD standard provided excellent digital versions of the photos (by contemporary standards), with each image stored in a PCD file in 5 resolutions. Again, years later, when I tried to open a PCD file with a recent version of Corel Draw, I discovered that the PCD format was no longer supported. Fortunately, in this case, I was able to use an older version of Corel Draw to batch-convert every PCD file to another more modern format, so I was able to save all my pictures.

Digital Obsolescence: Obsolete Data vs. Obsolete Media

As mentioned above, the problem I’m describing here doesn’t relate to the obsolescence of the media that contain the files you want to preserve. For example, there can’t be many operational computers still around that have working drive units for 5.25” floppy disks (or even 3.5” floppy disks), but those small disks were never particularly reliable storage media in any case, so presumably anyone who wanted to preserve files would have moved their data to more modern and robust devices anyway.

I’ll discuss some aspects of media obsolescence further in a future post.

Digital Obsolescence: Survival Practices

So what can you do to ensure that your data won’t go extinct? There are several “best practices”, but unfortunately some of these involve some form of tradeoff, whereby you trade data survivability for sophisticated formatting features.

  • Never rely on “cloud” storage for the long term. Cloud storage is very convenient for short-term data retention, or to make data available from multiple locations, but it’s a terrible idea for long-term retention. All kinds of bad things could happen to your data over long periods of time: the company hosting the data could have its servers hacked, or it could go out of business, or else you could simply forget where you stored the data, or the passwords you need to access it!
  • Prefer open data formats to proprietary formats.
  • Prefer XML-based formats to binary formats.
  • Try to avoid saving data in encrypted or password-protected forms. If it must be stored securely, ensure that all passwords and encryption keys exist in written form, and that you’ll be able to access that information when you need it! (That is, ensure that the format of the key storage file won’t itself become extinct.)
  • Expect formats to become obsolete, requiring you to convert files to newer formats every few years.
  • Copy all the files to new media every few years, and try opening some of the copied files when you do this. This reduces the danger that the media will become unreadable, either because of corruption or because physical readers are no longer available.

Sometimes you’ll see recommendations for more drastic formatting restrictions, such as storing text files as plain-text only. Personally, I don’t recommend following such practices, unless the data content is extremely critical, and you can live within the restrictions. If you follow the rules above consistently, you should be relatively safe from “data extinction”.

The Two Types of Computer Graphics: Bitmaps and Vector Drawings

I received some feedback from my previous posts on computer graphics asking for a basic explanation of the differences between the two main ways of representing images in digital computer files, which are:

  • Bitmap “paintings”
  • Vector “drawings”

Most people probably view images on their computers (or phones, tablets or any other digital device with a pictorial interface) without giving any thought to how the image is stored and displayed in the computer. That’s fine if you’re just a user of images, but for those of us who want to create or manipulate computer graphic images, it’s important to understand the internal format of the files.

Bitmap Images

If you’ve ever taken or downloaded a digital photo, you’re already familiar with bitmap images, even if you weren’t aware that that’s what digital photos are.

A bitmap represents an image by treating the image area as a rectangle, and dividing up the rectangle into a two-dimensional array of tiny pixels. For example, an image produced by a high-resolution phone camera may have dimensions of 4128 pixels horizontally and 3096 pixels vertically, requiring 4128×3096 = 12,780,288 pixels for the entire image. (Bitmap images usually involve large numbers of pixels, but computers are really good at handling large numbers of items!) Each pixel specifies a single color value for the image at that point. The resulting image is displayed simply by copying (“blitting”) the array of pixels to the screen, with each pixel showing its defined color.
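
To get a feel for the numbers involved, here is the arithmetic for that camera image, assuming an uncompressed bitmap with 24 bits (3 bytes) per pixel:

width, height = 4128, 3096
pixels = width * height              # 12,780,288 pixels
size_bytes = pixels * 3              # one byte each for red, green and blue
print(pixels, round(size_bytes / 2**20, 1))   # 12780288 pixels, roughly 36.6 MiB uncompressed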

Some of the smallest bitmap images you’ll see are the icons used for programs and other items in computer user interfaces. The size of these bitmaps can be as small as 16×16 pixels, which provides very little detail, but is sufficient for images that will always be viewed in tiny sizes. Here’s one that I created for a user interface some time ago:

icon16x16_versions2

Enlarging this image enables you to see each individual pixel:

icon16x16_versions_enlarged

You can see the pixel boundaries here, and count them to confirm that (including the white pixels at the edges) the image is indeed 16×16 pixels.

Obviously, the enlarged image looks unacceptably crude, but, since the image would normally never be viewed at this level of magnification, it’s good enough for use as an icon. In most cases, such as digital photographs, there are so many pixels in the bitmap that your eye can’t distinguish them at normal viewing sizes, so you see the image as a continuous set of tones.

Bitmap images have a “resolution”, which limits the size to which you can magnify the image without visible degradation. Images with higher numbers of pixels have higher resolution.

Given that bitmap image files are usually large, it’s helpful to be able to compress the pixel map in some way, and there are many well-known methods for doing this. The tradeoff is that, the more compression you apply, the worse the image tends to look. One of the best-known is JPEG (a standard created by the Joint Photographic Experts Group), which is intended to allow you to apply variable amounts of compression to digital photographs. However, it’s important to realize that bitmap image files are not necessarily compressed.

Programs that are designed to process bitmap images are referred to as “paint” programs. Well-known examples are: Adobe Photoshop and Corel PhotoPaint.

Vector Images

The alternative way of producing a computer image is to create a list of instructions describing how to draw the image, then store that list as the image file. When the file is opened, the computer interprets each instruction and redraws the complete image, usually as a bitmap for display purposes. This process is called rasterization.

This may seem to be an unnecessarily complex way to create a computer image. Wouldn’t it just be simpler to stick to bitmap images for everything? Well, it probably wouldn’t be a good idea to try to store a photo of your dog as a vector image, but it turns out that there are some cases where vector images are preferable to bitmap images. Part of the skill set of a digital artist is knowing which cases are best suited to vector images, and which to bitmaps.

There are many vector drawing standards, and many of those are proprietary (e.g., AI, CDR). One open vector drawing standard that’s becoming increasingly popular is SVG (Scalable Vector Graphics). You can view the contents of an SVG file by opening it with a text editor program (such as Notepad).

Here’s a very simple example of an SVG image file, consisting of a white cross on a red circle:

icon_err_gen

(Not all browsers can interpret SVG files, so I rendered the image above as a bitmap to ensure that you can see it!)

If you open the SVG file with a text editor, you can see the instructions that create the image shown above. In this case, the important instructions look like this:

<g id="Layer_x0020_1">
<circle class="fil0" cx="2448" cy="6098" r="83"/>
<path class="fil1" d="M2398 6053l5 -5c4,-4 13,-1 20,5l26 26 26 -26c7,-7 16,-9 20,-5l5 5c4,4 1,13 -5,20l-26 26 26 26c7,7 9,16 5,20l-5 5c-4,4 -13,1 -20,-5l-26 -26 -26 26c-7,7 -16,9 -20,5l-5 -5c-4,-4 -1,-13 5,-20l26 -26 -26 -26c-7,-7 -9,-16 -5,-20z"/>
</g>

As you’d expect, the instructions tell the computer to draw a “circle”, and then create the cross item by following the coordinates specified for the “path” item.

Of course, if you were to try to represent a photograph of your dog as a vector image, the resulting file would contain a huge number of instructions. That’s why bitmap images are usually preferable for digital photographs and other very complex scenes.

A major advantage of vector image formats is that the picture can be rendered at any size without degradation. Bitmap images have inherent resolutions, which vector images do not have.

Programs that are designed to process vector images are referred to as “drawing” programs. Well-known examples are: Adobe Illustrator and Corel Draw.

Converting Between Bitmap and Vector Images

It’s often necessary to convert a vector image into a bitmap image, and, less frequently, to convert a bitmap image into a vector image.

Conversion of vector images to bitmaps occurs all the time, every time you want to view the content of a vector image. When you open a vector image, the computer reads the instructions in the file, and draws the shapes into a temporary bitmap that it displays for you.

Converting bitmaps to vector images requires special software. The process is usually called “Tracing”. Years ago, you had to buy tracing software separately, but now most vector drawing software includes built-in tracing capabilities. As the name suggests, tracing software works by “drawing around” the edges of the bitmap, so that it creates shapes and lines representing the image. The result of the operation is that the software generates a set of mathematical curves that define the vector image.

Summary of the Pros and Cons

There are situations where bitmap images are preferable to vector images, and vice versa. Here’s a summary of the pros and cons of each type.

Bitmap

Advantages:

  • Complex scenes can be depicted as easily as simple scenes.
  • Significant compression is usually possible, at the expense of loss of quality.
  • Rendering is computationally easy; requires minimal computing power.

Disadvantages:

  • Size: Files tend to be large.
  • Not scalable: attempting to magnify an image causes degradation.

Vector

Advantages:

  • Compact: Files tend to be small.
  • Scalable: images can be displayed at any resolution without degradation.

Disadvantages:

  • Complex scenes are difficult to encode, which tends to create very large files.
  • Rendering is computationally intensive; requires significant computing power.

Artwork Postscript: Pre-Computer Techniques

I mentioned in a previous post that, in “pre-computer” days, I’d frequently create ink artwork by drawing an initial pencil sketch, then inking it in and erasing the pencil outline as I went. This early example was produced in that way, except that, for some long-forgotten reason, in this case I still have a version of the original sketch.

pysch_gambling2

At the time, I was the Publicity Officer of the Imperial College H G Wells Society, and this was a poster illustration for a talk entitled “The Psychology of Gambling”. I chose a “comic strip” inkwork technique because of the limitations of the society’s poster printing capabilities. Posters could be printed only in black, and the process handled line art much better than it did halftones.

The lettering for the poster, which isn’t included in the image, was added by literally pasting down strips of printout from the same phototypesetter that was used to create the Student Union newspaper. This was long before the days of desktop publishing!

The pencil sketch that survives in this case shows some differences, relative to the final form of the drawing. The pose of the male figure is actually better in the sketch; he seems to have become more stiffly “wooden” in the final image!

pysch_gambling_sketch

The pencil sketch also reveals the design of the image, in that the vanishing point was deliberately set to be the palm of the male figure’s hand.

This was the first illustration I created while at Imperial College, and its public display led to many other requests for artwork during my undergraduate days.

Moggies: Computer Techniques for Comic Strips

Here is another example of how I’ve used computer techniques to help with the production of artwork that not only appears to be “conventional” (i.e., drawn or painted directly on paper), but is in fact largely produced using conventional methods.

Several years ago, I produced a short series of comic strip cartoons titled “Moggies” (“moggie” being a British slang term for a non-pedigree cat). I wanted to produce the strips using the standard “brush and ink” technique, but I didn’t want to spend a lot of time trying to ink in the balloon text, and felt that that task would be more efficiently handled using computer desktop publishing techniques.

Of course, it is possible to create comic strips entirely using computer techniques, and I’m not suggesting that conventional techniques are somehow better. Nonetheless, in this instance I wanted the published strip to look informal and light-hearted, and I felt that a more “freehand” approach would help to provide those qualities.

The Final Result

Let’s look at one of the final cartoons before discussing how it was produced. This strip was exhibited in an Art Show, where it won a “Best of Show” prize. As you can see, the style is that of a fairly standard comic strip, consisting of colored-in areas over a black outline. There’s nothing remotely avant-garde about the design: it has four rectangular panels, the aim being to make it easy to follow. This was a humorous cartoon, so I wanted the artistic style to be “casual” and light, rather than precise and technically accurate.

The Complete 4-panel Moggies Cartoon

Since I’d be “telling a story” here, the first step of course was to specify the details of the story, and decide how it would be plotted. I’d decided that my strip would have four panels, and I’d use the panels to develop a “gag”. The punch line for the gag would obviously need to be in the fourth panel, with the scenes in the previous three panels providing the lead-up to that.

Comic strips usually adhere to implicit conventions that you need to respect if you want your readers to be able to follow the strip easily. For example, in English-language countries, we read a strip from top to bottom and from left to right. It’s important that the conversation “bubbles” in each frame also flow from left to right, so that readers will tend to read them in the order that is natural for them. Thus, the positions of the cats in each frame of the strip had to be such that it would be possible to place the bubbles in the most readable order.

Firstly then, I wrote out the script for the bubbles in the four frames, without pictures, to ensure that the gag would be understandable. This also established how many bubbles there would need to be.

Next, I printed the outlines of the four frames on a sheet of paper (just so I’d be able to erase portions of the drawing without having to redraw the frames). Within the frame outlines, I sketched in the cats in pencil, together with rough ellipses for the positions of the bubbles, as below.

The 4-panel Pencil Sketch for the Moggies Cartoon

I then scanned my pencil drawing, and imported the scanned bitmap into Corel Draw. I added the bubbles as vector ellipses in Corel Draw (and then trimmed the ellipses with the frame borders, as I described for the gear icon in a previous post). I inserted the text, using an appropriately named font called “Balloon”, and adjusted everything so that it fitted neatly into the bubbles. In the working image below, the red and green outlines show the elements that were generated by Corel Draw, over the top of the scanned pencil bitmap.

Speech Bubbles and Outlines added to 4-panel Moggies Cartoon
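I did all of this in Corel Draw, but the idea isn’t tied to any particular package. Purely as an illustration of what a single computer-generated bubble amounts to in vector terms, here is a rough sketch written as an SVG string in Python; the coordinates, font fallback and wording are placeholders, not taken from the actual strip:

# One speech bubble as vector data: an ellipse outline plus centered text.
# Coordinates, font and wording are placeholders, not the actual strip's.
bubble = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 300 150">
  <ellipse cx="150" cy="75" rx="130" ry="60" fill="white" stroke="black" stroke-width="2"/>
  <text x="150" y="80" text-anchor="middle" font-family="Balloon, sans-serif"
        font-size="20">Placeholder bubble text</text>
</svg>
"""
with open("bubble.svg", "w") as f:
    f.write(bubble)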

When I’d arranged everything as I wanted, I inked in the original pencil sketch of the cats. I then scanned the inked drawing, shown below, and imported it into the same Corel Draw file, replacing the previously scanned pencil image.

Inked-in Pencil Drawing for 4-panel Moggies Cartoon

At this point, you may be wondering, “But what about the colors?” Having combined the inked drawing with the computer-generated outlines and text in Corel Draw, I printed out the result on a laser printer. This provided a complete, waterproof outline drawing with text, over which I could paint using watercolors. Of course, I could have colored in the monochrome outline in the Corel Draw image, but I felt that hand-coloring would convey the “casual” style that I wanted.

This post more or less completes my series on computer-assisted artwork production. I do plan to add one more short “postscript” post, describing a very early item of comic strip artwork that I produced in my “pre-computer” days!

Using Computer Techniques to Produce “Conventional” Artwork Efficiently

In the previous post, I described techniques for producing artwork that is intended for use with computers, and intentionally looks like it is computer-produced. In this post, I want to discuss how computer techniques can help you produce more “conventional” artwork more accurately and efficiently.

A few years ago, I set about producing a painting that would show a fictitious ruined Roman temple in Britain, as it might have appeared a century or so after the Roman legions left Britannia. I wanted to show the building glowing in the winter sun, after a snowfall. This idea was inspired by the many medieval monastic ruins that actually exist in North Yorkshire, where I grew up. Many of these buildings were constructed from local sandstone, the honey color of which seems to glow in low sunlight.

The Final Painting

My deliberate goal was to create a conventional painting using watercolor, gouache and colored pencil. I felt that this combination of media would best achieve the lighting effects that I wanted. Here’s the final result, so you can see what I was aiming to achieve.

temple2

The painting was created on paper, on a watercolor block, which is a pad consisting of sheets of paper glued along all four edges. The idea is to prevent the paper from wrinkling when wet paint is applied to it.

My usual starting point when tackling such work is to draw a light outline in pencil on the blank paper, then adjust the outline as necessary to achieve the composition that I want. When satisfied with the composition, I begin applying paint, erasing the outline as I go so that it doesn’t show through the transparent watercolor.

At this point, you may think that I’m going to tell you that I used a CAD program to design a 3-D model of the building, then created a view of that for the painting. While that would be possible, I didn’t need to spend time doing that in this case, because I understand perspective drawing, and I had a fairly clear idea of what I wanted the building to look like. Instead, it was much quicker for me to establish a couple of vanishing points, then draw perspective lines to delineate the building.

Posing the Cat

One of the elements that I wanted to include in the painting was a “wildcat”, which, in order to be visible, would need to be placed somewhere in the foreground. However, I wasn’t sure what would be the best pose for the cat, or exactly where to place it.

Although we had a large tabby cat who would make a good model for a “wildcat”, he was naturally reluctant to pose in any desired position! I also wanted to be able to play around with the size and position of the cat in the final painting. In my “pre-computer” days, I’d have done this manually, by drawing and erasing the cat sketch several times. It’s much faster to draw an initial sketch, scan the entire outline, and cut out (electronically) the portion containing the cat, as below. You can then move that portion around to decide on the best position and pose, which lets you try far more variations before settling on the final design.

wildcatSketch1
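If you’d like to try the same trick, the electronic cut-and-move step takes only a few lines with the Pillow imaging library. This is just a sketch; the file name and pixel coordinates are made up:

# Cut out a rectangular region containing the cat sketch and paste it at a new
# position to try out the composition. Requires Pillow; coordinates are made up.
from PIL import Image

scan = Image.open("temple_outline_scan.png")
cat = scan.crop((400, 900, 900, 1300))   # (left, upper, right, lower)
trial = scan.copy()
trial.paste(cat, (650, 850))             # try the cat in a new position
trial.save("composition_trial.png")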

A further technique that is much easier with computer assistance is to see your image “with different eyes”. When working on an image for many hours, I sometimes find that I become so familiar with it that I fail to see obvious errors until it’s too late. I can avoid this over-familiarity by scanning the picture, then modifying the scanned image in various ways. In this case, I posterized the scanned image to create a “stipple effect” monochrome version, then flipped the result horizontally, as below.

temple2r1

This allowed me to look at the whole picture afresh, and see whether anything was obviously wrong.
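For anyone wanting to reproduce this kind of “fresh eyes” check, the posterize-and-flip step is straightforward with the Pillow imaging library. Again, this is only a sketch; the file name and the number of tone levels are arbitrary:

# Posterize a scanned painting to a few tone levels and flip it horizontally,
# to force yourself to look at the composition afresh. Requires Pillow.
from PIL import Image, ImageOps

scan = Image.open("temple_scan.png").convert("L")     # convert to grayscale
check = ImageOps.mirror(ImageOps.posterize(scan, 2))  # 2 bits = 4 tone levels
check.save("temple_check.png")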

The Lions of Aker (aka the Lions of Yesterday & Tomorrow)

One feature of the painting that immediately strikes most viewers is the unusual segmental shape of the temple’s pediment, in contrast to the triangular shape of a typical classical pediment. The segmental shape was inspired by the depiction of an Isean temple on a genuine Roman coin, although I’m aware that the coin image may not be an accurate representation of a real building!

I also wanted the pediment to feature a carving showing the “Lions of Aker”, an Ancient Egyptian motif. I didn’t want to copy any existing drawing of the lions slavishly, but instead to create my own version of the design, as below.

temple_parapet

I wanted the lions to be exact back-to-back reflections of each other. I had a general idea of what I wanted my “heraldic beasts” to look like, so I sketched out one lion freehand, as below, together with part of the central shield that it would be supporting. When I was happy with the shape, I inked in the outline with a thin black pen, so that the image would scan well. The outline isn’t perfect, but it doesn’t need to be.

lionhorizon2

After scanning the image of the one lion and cleaning up the result, it was a simple matter to create a flipped copy of it for the other lion, and then use a vector drawing program (Corel Draw) to insert the central shield as an ellipse. The result was as shown below.

2lionsCrest
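The mirroring is easy to express in vector terms, too. Here is a rough sketch, not the actual artwork file, in which a scanned lion bitmap is placed twice, with the second copy reflected about the vertical center line, and an ellipse standing in for the shield:

# Two back-to-back lions built from one scanned image: the second copy is
# reflected about the vertical center line (x = 200). "lion.png" is a
# stand-in for the cleaned-up scan, not the actual file.
crest = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 200">
  <image href="lion.png" x="0" y="40" width="160" height="120"/>
  <g transform="translate(400,0) scale(-1,1)">
    <image href="lion.png" x="0" y="40" width="160" height="120"/>
  </g>
  <ellipse cx="200" cy="100" rx="45" ry="70" fill="none" stroke="black" stroke-width="3"/>
</svg>
"""
with open("lions_crest.svg", "w") as f:
    f.write(crest)

An SVG skewX or matrix transform could play a similar role to the perspective distortion step described next.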

Of course, in the painting we’re not looking at the temple pediment head-on, so the effects of perspective had to be taken into account. I therefore used Corel Draw’s distortion tools to skew the drawing, approximating the perspective angles needed for it to fit into the picture, as below. It was then simply a case of transferring the final image of the fictitious carving to the watercolor pad, painting it in, and adding the shading and fallen snow.

parapet_projection

In the next post in this series, I’m going to discuss how to use computer graphic techniques to help with the production of a humorous comic strip.

Adapting Drawing Principles for Computer-Generated Artwork

I learned to draw in two “old fashioned” ways. One way was “freehand”: you just sat down with a pencil and a blank sheet of paper, and sketched everything out as well as your drawing skills would allow. It wasn’t especially precise, but that was acceptable in the context.

The second way was what was called “Geometric” drawing. You still needed a pencil and paper, but also a drawing board and a tee square, and probably a pair of compasses, a protractor, dividers, and so on. If you needed to create a neat, precise drawing, this was the way to achieve it.

My grandfather had been a professional draughtsman [British English spelling, and, yes, it is pronounced “draftsman”!] for the City of Leeds, and that was the way he produced drawings too. I still have a few of his drawing tools, including a pair of brass compasses that pivot around a soldered-on gramophone needle!

Another useful tool for geometric drawing was a set of stencils: wooden or plastic templates offering regular shapes that you could draw around or inside. For example, even the best artists have difficulty drawing an exact circle freehand, so circle templates were useful whenever precise circles were needed.

These days, of course, I suspect that almost nobody still does geometric drawing with a board and tee square, just because it’s easier to produce the same results, or better, using computers.

When I made the switch from pencil-and-paper drawing to computer artwork generation, I was initially confused about how to translate the techniques I’d learned. Was there a computer equivalent of a pair of compasses, a protractor, and so on? Although such equivalents do exist, they’re not necessarily the best way to go about producing a drawing. It took a great deal of time and practice before I learned the best ways to translate conventional geometric drawing techniques into their digital equivalents.

Although I did considerable online research, and bought books on the subject, I was never able to find a tutorial providing a general approach to this kind of drawing problem. Everything I found was either very abstract, or else was a specific set of instructions to enable you to draw “exactly what I already drew”. I didn’t find any of that particularly useful, so I’m offering this post in the hope that it may, in a small way, fill that void for aspiring computer artists.

Creating a Gear Icon

Recently, I needed to produce a “gear icon” drawing for some computer documentation. This type of icon is widely used these days to identify “Settings” controls in software in a linguistically neutral way. The end result I had in mind is shown below.

gear_synth_final

The tool I’d be using was Corel Draw (similar to Adobe Illustrator), which produces vector drawings. The advantage of vector drawing over bitmap painting (as would be produced by Adobe Photoshop) is that you can draw your original at any scale, and re-scale your final image to any size without loss of resolution. (A vector drawing records shapes as sets of mathematical equations, rather than color values in a matrix of pixels.)

So, how to go about producing this icon? One fact about computer artwork that I learned early on was that there are many ways to produce the same output, so it becomes a question of choosing the most efficient way to produce the result that you want.

It also pays to make maximum use of your software’s built-in drawing “primitives” whenever you can. Any credible vector drawing package includes controls for drawing such basic shapes as rectangles, circles, etc., and most include controls to produce more sophisticated shapes.

To produce the gear icon, I could have drawn out every curve individually, then tried to tweak the result until it was correct, but this would be incredibly slow, and likely produce an imperfect result. Alternatively, I could have started with a circle, then created a “tooth” shape, and attached rotated copies of the tooth to the circle, but this again seemed like a lot of work!

An Efficient Method

Of the available primitive shape controls, the Star tool seemed to produce the closest match to the result I wanted (Adobe Illustrator offers a very similar Star tool). So, I selected the Star tool and drew a regular, 11-point star, as below. The exact proportions of the star didn’t matter in this case, so I just “eyeballed” them. (The fill color used in these samples is for clarity only; you can use any fill type, or none at all.)

gear_synth1

Now I had to trim and modify the basic star shape to make it look more like a gear.

Next, I used the Ellipse tool to draw three concentric circles over the star, and centered all the objects vertically and horizontally, as below. The exact diameters of the circles didn’t matter. All that mattered was that the diameter of the outermost circle should be less than that of the star, and that the diameters of the other two circles should be respectively larger and smaller than the inner portion of the star, as shown.

gear_synth2

Now, I opened Corel Draw’s Shaping menu to begin combining the basic shapes. Firstly, I selected the outermost circle and used the Intersect control in the Shaping menu to lop off the points of the star, with the result below.

gear_synth3

Next, I selected the outermost remaining circle (above), and used the Weld control in the Shaping menu to merge it with the remains of the star. This “filled in” the inner vertices of the star, to create the gear’s “bottom land”, as below.

gear_synth4

Finally, it was just a question of using the innermost (remaining) circle to punch out the hole in the center. I achieved this by selecting the inner circle, then using the Trim tool in the Shaping menu to remove the center from the remains of the star shape.
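For readers who prefer code to a drawing package, the same construction can be expressed with Python’s shapely geometry library, whose intersection, union and difference operations play the roles of Corel Draw’s Intersect, Weld and Trim controls. This is only a sketch, not how the icon above was actually made, and the radii are arbitrary:

# A rough code equivalent of the star-and-circles construction, using the
# shapely geometry library (pip install shapely). Radii are arbitrary.
import math
from shapely.geometry import Point, Polygon

def star(points=11, r_outer=100.0, r_inner=70.0):
    # Regular star polygon: alternate outer and inner vertices.
    verts = []
    for i in range(points * 2):
        r = r_outer if i % 2 == 0 else r_inner
        angle = math.pi * i / points
        verts.append((r * math.cos(angle), r * math.sin(angle)))
    return Polygon(verts)

# buffer() gives a polygonal approximation of a circle.
outer_circle  = Point(0, 0).buffer(90)   # lops off the star's points ("Intersect")
middle_circle = Point(0, 0).buffer(78)   # fills in the inner vertices ("Weld")
center_hole   = Point(0, 0).buffer(35)   # punches the hole ("Trim")

gear = star().intersection(outer_circle).union(middle_circle).difference(center_hole)
print(gear.geom_type, len(gear.interiors))   # Polygon 1 -> one shape with one hole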

Final Drawing

The end result was exactly the shape that I wanted. I’m now able to recolor and resize this vector shape for whatever application I need. Below is an example of the icon applied to a fictitious software button.

gear_synth_example

Just to demonstrate that the hole in the center of the gear icon really is a hole and not an opaque circle, below is an example of the gear icon laid over a bluish square in Corel Draw. The square is covered by the red portion of the icon, but shows through the hole in its center.

gear_synth_overlap

I suppose that there’s an element of “lateral thinking” in these designs, in the sense that you have to start by thinking about the desired end result, then work backwards from there to the primitive shapes supplied by the drawing tool.

In my next post, I plan to discuss how to use computer-aided techniques to assist the production of more “conventional” artwork.