My current project is to build a face authentication system. The constraint I have is that during enrollment the user provides only a single image for training. However, I can add and use images captured from the user during authentication.

The reason I want to add more images to the training set is that the user's environment is unconstrained: different lighting conditions, different distances from the camera, and cameras with different resolutions. The only relief is that the pose is almost frontal.

I think the above problem is similar to the face tagging feature in widely available apps. Can anyone suggest a method to use the available images adaptively and smartly?

--Thanks

1 Answer

To make your classifier robust you need to use condition-independent features. For example, you cannot use face color, since it depends on the lighting conditions and on the state of the person. However, you can use the distance between the eyes, since it is largely independent of such changes.
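As a minimal sketch of such a geometric, lighting-independent feature (assuming dlib and its 68-point landmark model file, which are my choices and not part of this answer), you could normalize the inter-eye distance by the face width so it is also insensitive to camera distance and resolution:

```python
# Sketch: a condition-independent geometric feature, assuming dlib and
# shape_predictor_68_face_landmarks.dat are available.
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_distance_ratio(gray_image):
    """Inter-eye distance divided by face width; None if no face is found."""
    faces = detector(gray_image, 1)
    if not faces:
        return None
    face = faces[0]
    landmarks = predictor(gray_image, face)
    points = np.array([[p.x, p.y] for p in landmarks.parts()])
    left_eye = points[36:42].mean(axis=0)   # landmarks 36-41: left eye
    right_eye = points[42:48].mean(axis=0)  # landmarks 42-47: right eye
    eye_distance = np.linalg.norm(right_eye - left_eye)
    face_width = face.right() - face.left()
    # Normalizing by face width removes the effect of camera distance
    # and image resolution; lighting does not affect landmark geometry much.
    return eye_distance / face_width
```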

I would suggest building a model of such independent features and retraining the classifier each time the person starts an authentication session. The best model I can think of is the Active Appearance Model (several open-source implementations exist).
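A minimal sketch of that retrain-per-session idea, assuming a per-user gallery of condition-independent feature vectors (geometric features, AAM parameters, or similar); the class name, threshold value, and nearest-sample rule below are illustrative assumptions, not a prescribed design:

```python
import numpy as np

class UserGallery:
    """Per-user set of feature vectors that grows with each accepted login."""

    def __init__(self, enrollment_feature, threshold=0.1):
        # Enrollment provides exactly one sample, per the question's constraint.
        self.features = [np.asarray(enrollment_feature, dtype=float)]
        self.threshold = threshold

    def authenticate(self, feature):
        feature = np.asarray(feature, dtype=float)
        # Distance to the closest stored sample of this user.
        distance = min(np.linalg.norm(feature - f) for f in self.features)
        accepted = distance < self.threshold
        if accepted:
            # Adaptive step: keep the accepted sample so the model gradually
            # covers new lighting conditions, distances, and cameras.
            self.features.append(feature)
        return accepted

# Usage: enroll with one sample, then authenticate with new ones.
gallery = UserGallery(enrollment_feature=[0.42], threshold=0.05)
print(gallery.authenticate([0.43]))  # True; the sample joins the gallery
```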

