Title: Multi-cue visual tracking for human-robot interaction
Authors: Menezes, Paulo Jorge Carvalho
Keywords: Human-computer interaction; Computer vision
Issue Date: 13-Jul-2007
Abstract: This thesis is organized as follows.

Chapter 2 presents the construction of a 3D model and its use to generate the 2D templates on which measurements are performed. This 3D model is composed of sections of cones, represented by combinations of degenerate quadrics. Such primitives require special handling both to obtain the truncated projection and to remove the hidden parts; an algorithm is proposed to handle these truncation and visibility problems.

Chapter 3 presents stochastic filtering formalisms and algorithms for estimating the state of a process given a set of observations of its output. It starts by presenting a model-based Bayesian tracking principle, which can be seen as the basis for several estimation techniques. These techniques are normally described as two-stage processes: prediction and update. In the first stage, a dynamics model is used to predict the evolution of the system; in the second, this prediction is fused with the observations to obtain a corrected estimate. The methods are presented in order from the most restrictive assumptions to the least restrictive: linear/Gaussian, nonlinear/Gaussian, and nonlinear/non-Gaussian. Following this order we have: Kalman filters, which are adequate for linear systems in the presence of Gaussian noise; Extended and Unscented Kalman filters, which extend the applicability to nonlinear systems while maintaining the Gaussian-noise assumption; and finally particle filters, which remove the Gaussian constraint and accept virtually any type of noise.

Chapter 4 proposes methods for measuring how well a model with a given set of parameters corresponds to an image of the target.
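The prediction/update cycle described for Chapter 3 can be sketched as a minimal bootstrap particle filter. This is an illustrative sketch only: the 1-D random-walk dynamics, Gaussian likelihood, and all parameter values below are assumptions for the example, not the dynamics or measurement models used in the thesis.

```python
import numpy as np

def particle_filter_step(particles, weights, observation,
                         process_noise=0.1, obs_noise=0.5, rng=None):
    """One predict/update cycle of a bootstrap particle filter (1-D state)."""
    if rng is None:
        rng = np.random.default_rng()
    # Prediction: propagate each particle through the dynamics model
    # (assumed here to be a random walk with Gaussian process noise).
    particles = particles + rng.normal(0.0, process_noise, size=particles.shape)
    # Update: reweight each particle by the likelihood of the observation
    # (assumed here to be a Gaussian measurement model).
    likelihood = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights = weights * likelihood
    weights = weights / weights.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# The state estimate is the weighted mean of the particle set.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
for z in [0.2, 0.4, 0.6, 0.8]:  # noisy observations of a drifting target
    particles, weights = particle_filter_step(particles, weights, z, rng=rng)
estimate = np.sum(weights * particles)
```

The cost of the update stage is dominated by evaluating the likelihood for every particle, which is why the thesis stresses that each image measurement must be cheap enough to be repeated hundreds or thousands of times per frame.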
These methods produce measures such as: edge-to-contour distances, which provide shape-matching information; optical flow, which results from the motion of the target and can be used to distinguish a moving target from a static background; and the colour-matching level between the model and the target, which assumes that the target presents distinctive colours or colour patterns. The advantages and limitations of each method are presented and discussed, and solutions are proposed to reduce the influence of those limitations. All the proposed methods satisfy the requirement of introducing very little computational load. This requirement is crucial for a real-time system, especially in the context of a particle filter, where measurements must be made to validate every particle, and the number of particles can reach hundreds or thousands.

Chapter 5 presents the development of a set of visual functions that aim to provide basic interaction functionalities. Face detection and recognition based on Haar features and eigenfaces enable the recognition of the tutor users. A modified Haar-based classifier was created to detect open hands in images. User tracking, which makes the robot follow the user, is implemented with a particle filter that uses colour distributions over rectangular patches as target features. In this case, the colour distributions corresponding to each patch are updated on-line to account for changes produced by the target's motion or by illumination variations. Finally, a method capable of tracking the configuration of the human arms from a single-camera video stream is presented.

Chapter 6 closes this thesis by summarising the contributions and results, and opens the discussion on future work.

Description: Doctoral thesis in Electrical Engineering (Informatics), presented to the Fac. de Ciências e Tecnologia, Universidade de Coimbra
URI: http://hdl.handle.net/10316/7559
Rights: openAccess
Appears in Collections: FCTUC Eng.Electrotécnica - Teses de Doutoramento