Programming AI with Leaf/Face Recognition with Eigenfaces

From Wikibooks, open books for an open world

A really oversimplified description of eigenfaces.

First, what is an eigenface in nontechnical terms? Let's suppose that you're looking at a brother and a sister. You might think - "He looks mostly like his dad, but she's the spitting image of her mother." Looking closer you might add - "But he has his mother's eyes and she has her father's chin." So let's suppose that Mom's and Dad's faces are the models that we're going to use to "define" what the kids look like. Now, "mathematically" we might say that the boy looks 90% like Dad and 10% like Mom, but the girl looks 5% like Dad and 95% like Mom. Or to really simplify the description:

Bob = (0.90 0.10)

Jane = (0.05 0.95)

In this (crazy) example, Dad's face and Mom's face are the "eigenfaces" - the idealized models that we're going to use to "deconstruct" faces for comparison. So let's see what happens next. Someone walks into the room (either Bob or Jane). We take a picture. Our algorithm deconstructs the image and concludes that it looks 85% like Dad and 15% like Mom. Or:

Unknown = (0.85 0.15)

Comparing this to our stored images of Bob and Jane you can see that this image is "closer" to Bob than it is to Jane - so we conclude that we're looking at Bob.
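That "closer to Bob than to Jane" comparison can be sketched in a few lines of code. This is just an illustration of the distance test, using the made-up weight vectors from the example above - not Leaf's actual code:

```python
import numpy as np

# Made-up weight vectors from the example above
bob = np.array([0.90, 0.10])
jane = np.array([0.05, 0.95])
unknown = np.array([0.85, 0.15])

# Euclidean distance from the unknown face to each stored face
d_bob = np.linalg.norm(unknown - bob)
d_jane = np.linalg.norm(unknown - jane)

# The smaller distance wins - here, Bob
match = "Bob" if d_bob < d_jane else "Jane"
```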

A Better Explanation:

For those of you with an engineering background, a little more realistic explanation of eigenfaces would be to make the analogy to a Fourier transformation. Any complex waveform can theoretically be recreated as the sum of an infinite number of sine waves of varying magnitude and period. However, you can usually get a pretty good approximation of the complex wave using just a few key sine waves. Now let's say that you have a million faces in your database. Well, if you had a million eigenfaces, then you could find an exact match - but that's pretty inefficient. It turns out that, just as with the Fourier example, you really only need a few key eigenfaces - maybe only a few percent of the total faces in the database... and that's "good enough" for matching and identification.
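The Fourier analogy can be checked numerically. Here's a small illustration (the waveform and amplitudes are invented for the demo): a signal built from 49 sine components is approximated with only the first 3, and the relative error stays small:

```python
import numpy as np

t = np.linspace(0, 1, 256, endpoint=False)

# A "complex" waveform: 49 sine components with shrinking amplitudes
signal = sum(np.sin(2 * np.pi * k * t) / k**2 for k in range(1, 50))

# Approximate it using only the first 3 key components
approx = sum(np.sin(2 * np.pi * k * t) / k**2 for k in range(1, 4))

# Relative error of the 3-term approximation - under 10% here
err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
```

A few key eigenfaces play the same role for face images as those first few sine waves do for the waveform.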

But these are bad examples. How do eigenfaces really work? And how does Leaf use them?

A Detailed Explanation:

[Disclaimer: I'm not going to explain eigenfaces or eigenvalues in mathematical terms here - my intention is to show sort of what they are and how they're used... for a complete description of the mathematics you'll need a background in Linear Algebra....]

Okay, I say to Leaf "Leaf I'm Bruce." Leaf takes my picture. It's converted to a black and white gray scale image. OpenCV has an algorithm that identifies a "region of interest" within the picture that's probably a face - this is the "face detection" step. That region is extracted and reduced or enlarged so that it's exactly 100 pixels by 100 pixels. That's our "standard image". Notice that this step helps to bring all images into the same size or scale - images that were far away get enlarged and images that were close up get reduced so that all the images are about the same size.... we'll be comparing apples to apples. Leaf then stores that image with the name "Bruce" in his "face database".

When Leaf has a bunch of pictures in his database, he runs an OpenCV based algorithm that Gary Malolepsy and I wrote that extracts the eigenfaces from the faces stored in the database. In my first example, it'd be kind of like looking at a bunch of kids and trying to create pictures of what Mom and Dad look like. In my second example (the more mathematically similar example), it'd be like running a Fourier transform on a bunch of complex waveforms trying to extract a few key sine waves out of them.
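The eigenface-extraction step is, mathematically, principal component analysis on the stored images. Here's a minimal PCA sketch using random data in place of a real face database (all names and sizes here are stand-ins, not the actual Leaf/OpenCV code):

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 100 * 100))  # 20 flattened 100x100 "face" images

# Subtract the mean face so PCA captures variation between faces
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# SVD of the centered data: the rows of vt are the eigenfaces,
# ordered from most to least significant
_, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 5                    # keep only a few key eigenfaces
eigenfaces = vt[:k]      # each row reshapes to a ghostly 100x100 image
weights = centered @ eigenfaces.T  # each face described by k numbers
```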

If you want to see some face pictures, look in the Leaf folder in the LeafFaces folder. If you want to see the eigenfaces for those faces, then look in the eigen folder - the eigenfaces are mathematical extractions... they look ghostly....

Okay. Now I say "Leaf who am I?" Leaf takes my picture. Converts it to gray scale. Sizes it to 100 x 100 pixels. And then he "deconstructs" the picture into its eigenvalues based on the eigenface patterns in his eigen database. He gets some eigenvalues that look something like my first example - it's like he's saying "Okay, this unknown face is 45% eigen1, 5% eigen2, 25% eigen3, etc. So who in my face database matches that description? Hey, Bruce is 46% eigen1, 3% eigen2, 22% eigen3, etc. I think it looks like Bruce!"
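That "deconstruct and compare" step can be sketched as follows. The projection onto the eigenfaces produces the eigenvalue vector, and the stored face with the nearest vector wins. Everything here (names, random stand-in data, orthonormal fake eigenfaces) is invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: a mean face and 5 orthonormal "eigenfaces"
mean_face = rng.random(100 * 100)
eigenfaces = np.linalg.qr(rng.random((100 * 100, 5)))[0].T

# Stand-in face database: 3 stored images with names
names = ["Bruce", "Gary", "Alex"]
stored = rng.random((3, 100 * 100))
stored_w = (stored - mean_face) @ eigenfaces.T  # stored eigenvalue vectors

def identify(image):
    """Deconstruct an image into eigenvalues and return the closest name."""
    w = (image - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(stored_w - w, axis=1)
    return names[int(np.argmin(dists))]
```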

For a more detailed explanation, here are some useful links:

http://en.wikipedia.org/wiki/Eigenface

http://www.scholarpedia.org/article/Eigenfaces

http://www.pages.~sis26/Eigenface%20Tutorial.htm

[Another disclaimer: Now for the bad news.]

The Eigenface method works pretty well. However, there are limitations.

For starters, it always returns an answer. If George (who is not in the database) says "Leaf who am I?" then Leaf will respond with whoever George most closely matches - maybe with "You look like Bruce." This indicates the need for further training. So George should then say "Leaf this is George."
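One common way to soften the "always returns an answer" problem is to reject matches whose distance exceeds a cutoff. This is a sketch of that idea, with a hypothetical threshold value that would need tuning for a real database - it is not how Leaf actually behaves:

```python
import numpy as np

THRESHOLD = 0.5  # hypothetical cutoff; tune for the real database

def identify_or_reject(unknown_w, db):
    """Return the closest name, or None if nothing is close enough."""
    name, dist = min(
        ((n, np.linalg.norm(w - unknown_w)) for n, w in db.items()),
        key=lambda pair: pair[1])
    return name if dist < THRESHOLD else None

# Weight vectors from the earlier Bob/Jane example
db = {"Bruce": np.array([0.90, 0.10]), "Jane": np.array([0.05, 0.95])}
```

With this check, a face close to a stored one still matches, but a face far from everything (George, say) comes back as unknown - a cue that Leaf needs more training.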

However, the process is also limited by changes in lighting, partial obstruction of the face (e.g., wearing sunglasses), changes in the background, and in fact anything that significantly alters the appearance of the picture from the pictures in the database.

The article that I included in my original post explores these limitations in more detail and tries to find an approach that improves recognition when these problems occur.

Well, that about covers it - as non-technical an introduction to eigenfaces as I can provide... but hopefully with this explanation you'll be able to see the technical explanations a little more clearly if you decide to delve into them....