RGB Averages; Pixel as Metonymy

{125, 127, 73}


(Average RGB from the butterfly image below when rinsed into a single pixel. Expanded again for easy viewing.)

Bumgardner explains Color Pickr in the comments over here:

I use a Perl script to retrieve all the thumbnails of all the photos in the group, which takes a few minutes. Then, using ImageMagick, I reduce each thumbnail to 1 pixel in size, and record the color in a data structure.

The data structure, containing each photo’s ID and average R, G, B values, is then written to an ActionScript file.
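A rough sketch of that pipeline in Python with Pillow, standing in for the Perl, ImageMagick, and ActionScript pieces Bumgardner actually used (the folder and file names below are invented for illustration):

    from pathlib import Path
    from PIL import Image, ImageStat

    def average_rgb(path):
        """Average color of an image: the exact per-channel mean over all pixels,
        which is what shrinking the image down to a single pixel approximates."""
        with Image.open(path) as img:
            r, g, b = ImageStat.Stat(img.convert("RGB")).mean
            return round(r), round(g), round(b)

    # Record photo ID -> average color for every thumbnail in a folder, then
    # write the table out as plain text for a front end (ActionScript, in the
    # original) to load.
    averages = {p.stem: average_rgb(p) for p in Path("thumbnails").glob("*.jpg")}

    with open("averages.txt", "w") as out:
        for photo_id, (r, g, b) in averages.items():
            out.write(f"{photo_id} {r} {g} {b}\n")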

Well, no, I don’t know how to do it, yet (yet!), but the process is beginning to make sense (and not just in its applicability to images, but that’s all I’ll say about that for right now). It’s the basic rendering of an image into a color-based number (Hit Song Science for the designing eye?). The single pixel functions as a kind of meta-name for the image, a name by which it gets to associate with others like it through ActionScript referencing.
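To make the meta-name idea concrete: the average {125, 127, 73} can be folded into a single 24-bit number, and photos can then associate with others like themselves by distance between their averages. A minimal sketch assuming a plain Euclidean match in RGB (the names and sample data are invented, not the actual Pickr code):

    def pack_rgb(r, g, b):
        """Fold an (R, G, B) triple into one 24-bit number: (125, 127, 73) -> 0x7D7F49."""
        return (r << 16) | (g << 8) | b

    def nearest_photos(picked, averages, n=2):
        """Photo IDs whose average color lies closest (Euclidean, in RGB) to `picked`."""
        def dist(rgb):
            return sum((a - b) ** 2 for a, b in zip(rgb, picked))
        return sorted(averages, key=lambda pid: dist(averages[pid]))[:n]

    averages = {"butterfly": (125, 127, 73), "sky": (90, 140, 210), "rose": (200, 40, 60)}
    print(hex(pack_rgb(125, 127, 73)))               # 0x7d7f49
    print(nearest_photos((120, 130, 80), averages))  # ['butterfly', 'rose']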

7 Comments

  1. When I did a color-based image index like this (though without any of the cool interface) many years back, I also used the spatial variance of the pixel colors. That way, things that were more “complicated” tended to clump together, and things that had big blobs of one color tended to be in a separate clump. (A rough sketch of this, and of the wavelet idea below, follows the thread.)

  2. Is your index online, Bill? I’m interested to see how the spatial variance is accounted for. In Bumgardner’s program the shape-based tags (in the round-shapes pickr, for example) look like they’re established only by the tagging system in Flickr and not by the ActionScript’s treatment of any spatial indicators in the image file.

  3. I’m not currently computing spatial variance – although it would be interesting to do so, and add an extra slider for this.

    I have noticed that there is an inverse correlation between saturation and variance – less saturated color choices (closer to the center of the picker) tend to produce more variance in the images (because the multitude of colors tend to average to a less saturated result).
    – Jim Bumgardner

  4. I don’t know anything about ImageMagick, Jim. But when I reduced the image to one pixel in Photoshop I wondered how to verify that the RGB average was, indeed, the average of all the individual pixel RGBs. Figured that it ran from the edges inward. Yet I can’t imagine that the saturation difference is so far afield that the tool is diminished. From everything I’ve seen, Pickr correlates the images and the color wheel with impressive results.

  5. Well, there’s a consulting client’s NDA in place on a lot of what I was doing. But generally speaking, we used the number and magnitude of wavelet coefficients describing the image as an estimate of spatial complexity. If you want to divorce this from color, you might try looking at a grayscaled image.

    And if that sounds needlessly opaque or scary, consider that every JPEG in the world has the wavelet decomposition stored in it….

  6. The vocab is new for me, but it leads me to wonder whether the wavelet coefficients (estimates of spatial complexity, yes?) are elaborate/nuanced enough to group jpegs into families according to the images or image-shapes they present. Even if not, it might be interesting to explore the relationship between this and particular tags.

  7. Wavelet decompositions have been used to classify images (and video clips) in the past in several research settings, yes. In broad categories, like “faces” or “landscapes” and “cars driving,” there’s enough information captured by wavelets’ “where the stuff is” summary to make them a good simple guess. I wouldn’t expect too much from them in classifying “dogs” vs “cats”, “happy” and “sad”, &c though….
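A minimal numpy sketch of the two per-image statistics discussed above: the spatial variance of the pixel colors (comments 1 and 3) and the number and magnitude of detail coefficients from a single-level Haar wavelet decomposition of the grayscale image (comment 5). This is only an illustration under those assumptions, not the commenters’ actual code, and the threshold value is arbitrary:

    import numpy as np
    from PIL import Image

    def complexity_stats(path, threshold=10.0):
        """Two rough 'how busy is this image' numbers: color variance and
        single-level Haar detail statistics on the grayscale image."""
        img = Image.open(path).convert("RGB")
        px = np.asarray(img, dtype=float)

        # Spatial variance of pixel colors: low for big flat blobs of one
        # color, high for "complicated" images.
        color_variance = float(px.reshape(-1, 3).var(axis=0).mean())

        # One level of a Haar wavelet decomposition on the grayscale image.
        g = np.asarray(img.convert("L"), dtype=float)
        g = g[: g.shape[0] // 2 * 2, : g.shape[1] // 2 * 2]  # trim to even dimensions
        a, b = g[0::2, 0::2], g[0::2, 1::2]
        c, d = g[1::2, 0::2], g[1::2, 1::2]
        # The three detail sub-bands, up to normalization.
        details = np.abs(np.stack([a - b + c - d, a + b - c - d, a - b - c + d])) / 4

        # "Number and magnitude" of significant coefficients as a rough
        # estimate of spatial complexity.
        significant = int((details > threshold).sum())
        magnitude = float(details.mean())
        return color_variance, significant, magnitude

The variance figure alone would be enough to drive the extra slider Bumgardner mentions in comment 3.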
