Figga Help
 
 


What is it?
Figga is an image search engine. Instead of entering search words or keywords, you create a "keypicture" by drawing an image and/or uploading one of your own. Figga then takes this search image and returns a set of visually similar images. It is a bit like Google's image search, but with the fundamental difference that it relies solely on matching the visual features of images.

The images it returns don't make sense?
Sometimes the images it returns are obviously similar, and sometimes they seem random, mistaken or "false positives". When the software is searching for matches it is entirely uninfluenced by the meaningful content of the pictures it is comparing, and it uses no object recognition, file names or textual annotations. It is purely looking for similarities in their visual features - composition, light and dark, shape and pattern. If you look carefully you can often see visual elements that match your keypicture, but in ways that are quite unexpected. As you get used to this "algorithmic" matching process, you will find that the software is teaching you to "see" images in the same way that it does!
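
If it helps to picture what "purely visual" means, here is a minimal sketch (illustrative only - it assumes Python with the NumPy and Pillow libraries, and is not Figga's actual code). Both images are reduced to tiny grayscale grids and compared number by number, so nothing in the process knows what either picture depicts:

    import numpy as np
    from PIL import Image

    def thumbprint(path, size=16):
        # Reduce the image to a tiny grayscale grid: composition, light
        # and dark survive, but all "meaning" is thrown away.
        img = Image.open(path).convert("L").resize((size, size))
        return np.asarray(img, dtype=float) / 255.0

    def visual_distance(path_a, path_b):
        # Smaller score = more visually similar. A white cat on a dark
        # sofa can easily score closer to a snowy hill than to another cat.
        return float(np.abs(thumbprint(path_a) - thumbprint(path_b)).mean())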


The results still look inaccurate. What's the best way to draw with this?
Because the search engine only looks at the strongest visual features, it will tend to produce more obviously "accurate" results if you draw very graphic images. Try keeping your images simple and direct, drawing only the most significant visual structure or skeleton and skipping the details. So if you are drawing a face, concentrate on the darkest features such as the eyes, mouth and hair and their relative positions. Sometimes simple patterns can retrieve interesting matches. Position matters too: if you draw a circle in the middle of the canvas it will be treated as a completely different image from one drawn at the top left. And remember that if an image of a face is half in shadow, then that shadow silhouette will become the most likely feature for it to match on - it won't recognise the eyes themselves.

Where do the image results come from?
The images it retrieves are meant to reflect the scope of a commercial internet search engine. But because I can't afford to store the millions of images that a service like Google or AltaVista can provide, the database that Figga searches typically contains a few tens of thousands of images. These have been collected at random from the internet and are periodically updated, so the database is a bit like a shifting "window" onto all the images on the internet. If you submit the same image a few weeks later you may get different results. The larger I make this database, the more accurate the results are likely to be - it can only return the nearest matches among the images available in its current database.

How does it work?
Searching by visual appearance is known as CBIR (Content Based Image Retrieval) and is still an emergent technology (that means it doesn't work perfectly). It is an enormously more complex process than textual matching, because images are formed of continuous surfaces of features and can vary in all their two-dimensional aspects - size, orientation and so on. They are not composed of a standard, limited set of signs. Figga uses a piece of open source software called "imgSeek" for its basic image recognition engine. imgSeek uses a technique called "multi-resolution wavelet decomposition" to break images down into pieces it can search and compare, a bit like the way a compression algorithm such as JPEG works.
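
As a rough illustration (a toy sketch, not imgSeek's actual code - the function names and the exact scheme are my own simplifications), a Haar wavelet decomposition repeatedly averages and differences neighbouring pixels, turning an image into one overall average plus layers of ever-finer detail. Keeping only the strongest coefficients gives a compact "signature" that images can be compared on:

    import numpy as np

    def haar_decompose(img):
        # One full 2D Haar wavelet decomposition of a square image whose
        # side is a power of two: neighbouring pixels are repeatedly
        # averaged (coarse structure) and differenced (fine detail).
        out = np.asarray(img, dtype=float).copy()
        n = out.shape[0]
        while n > 1:
            half = n // 2
            rows = out[:n, :n].copy()
            out[:n, :half] = (rows[:, 0::2] + rows[:, 1::2]) / 2   # row averages
            out[:n, half:n] = (rows[:, 0::2] - rows[:, 1::2]) / 2  # row details
            cols = out[:n, :n].copy()
            out[:half, :n] = (cols[0::2, :] + cols[1::2, :]) / 2   # column averages
            out[half:n, :n] = (cols[0::2, :] - cols[1::2, :]) / 2  # column details
            n = half
        return out

    def signature(img, m=40):
        # Keep only the positions and signs of the m strongest wavelet
        # coefficients - the "most significant visual structure".
        coeffs = haar_decompose(img).ravel()
        strongest = np.argsort(np.abs(coeffs))[-m:]
        return {(int(i), int(np.sign(coeffs[i]))) for i in strongest}

    def similarity(sig_a, sig_b):
        # Two images score highly when their strongest features sit in
        # the same places with the same polarity.
        return len(sig_a & sig_b)

Because each coefficient is tied to a position in the grid, a circle drawn in the centre and the same circle drawn at the top left produce almost entirely different signatures - which is why placement matters so much when drawing a keypicture.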

Who are you?
Richard Wright has worked as a media artist for over twenty years, specialising in digital imagery, history and parallels between the Baroque and digital culture. For more info see www.futurenatural.net.

Credits
Concept and design: Richard Wright
Technical assistance: Tony Shaper
Hosted by: Furtherfield.org
Funded by a grant from Arts Council England 2006