Active Calibration and Tracking in Video Streams
Chiori Tay and B. Parvin
We have developed a simple motion tracking system
with active camera calibration
that detects and tracks moving objects.
Motion detection is based on a generic optical flow field algorithm
and the grouping of self-similar motion vectors.
In general, the flow field cannot handle large spatial motion
with the continuity that corresponds to a real 3D object, so
a simple protocol
for extracting a valid motion boundary is proposed for
initialization.
The flow field and subsequent grouping provide a template for
correlation-based tracking.
These two processes run concurrently to provide continuous updating
and correction of the template's size and location. The template size is a
critical factor, since the distance from the object to the camera
varies during tracking.
Our implementation is optimized for near real-time
performance through a pyramid implementation, and the actual processes
are threaded for improved concurrency.
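To make the coarse-to-fine correlation search concrete, the following is a minimal sketch in Python, assuming OpenCV primitives (cv2.pyrDown, cv2.matchTemplate, cv2.minMaxLoc); the pyramid depth, search window, and function name pyramid_match are illustrative assumptions, not the system's actual code.

import cv2

def pyramid_match(frame, template, levels=2):
    """Coarse-to-fine correlation matching: search the top of an
    image pyramid first, then refine at full resolution."""
    # Build Gaussian pyramids for the frame and the template.
    frames, templates = [frame], [template]
    for _ in range(levels):
        frames.append(cv2.pyrDown(frames[-1]))
        templates.append(cv2.pyrDown(templates[-1]))

    # Coarse search at the smallest pyramid level.
    scores = cv2.matchTemplate(frames[-1], templates[-1], cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)

    # Map the coarse hit back to full resolution and refine in a
    # small window around it.
    scale = 2 ** levels
    x, y = x * scale, y * scale
    h, w = template.shape[:2]
    pad = 2 * scale
    y0, x0 = max(0, y - pad), max(0, x - pad)
    roi = frame[y0:y0 + h + 2 * pad, x0:x0 + w + 2 * pad]
    scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (dx, dy) = cv2.minMaxLoc(scores)
    return (x0 + dx, y0 + dy), score

In the full system, the flow-field and grouping process would run in a separate thread and periodically hand this matcher a template with corrected size and location.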
A unique aspect of our work is the calibration
of intrinsic camera parameters with an active stage.
A typical camera calibration technique uses a specially constructed
reference object, such as a calibration chart or rectangles, and requires
very accurate measurement of world and image points
[1, 2].
Our method is based on finding adequate corners in a natural office environment,
moving the stage, matching those corners across views, and constructing the
necessary and sufficient equations for recovering the intrinsic parameters.
Hence, in the absence of a calibration chart,
self-calibration can be an aspect of the dynamic tracking process.
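As a rough illustration of the corner-finding and matching step, here is a sketch assuming the Shi-Tomasi detector (cv2.goodFeaturesToTrack) and a nearest-neighbor matching rule that is reasonable only because the stage motion is small; the detector, thresholds, and function name detect_and_match are assumptions, not the paper's exact procedure.

import cv2
import numpy as np

def detect_and_match(img_a, img_b, max_corners=200, radius=15.0):
    """Detect corners in two grayscale views taken before and after a
    small stage motion, and match each corner to its nearest neighbor
    in the other view (valid only because the motion is small)."""
    corners_a = cv2.goodFeaturesToTrack(img_a, max_corners, 0.01, 10).reshape(-1, 2)
    corners_b = cv2.goodFeaturesToTrack(img_b, max_corners, 0.01, 10).reshape(-1, 2)

    matches = []
    for pa in corners_a:
        d = np.linalg.norm(corners_b - pa, axis=1)
        j = int(np.argmin(d))
        if d[j] < radius:            # accept only nearby candidates
            matches.append((pa, corners_b[j]))
    return matches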
Our method recovers the focal length and
image center with high repeatability; the focal length is then used to
correct the camera orientation with the appropriate stage movement.
Three camera orientations, related by two small angular motions, provide
sufficient geometric constraints to recover the focal length and
image center.
Figure 1 shows the camera geometry with
three different orientations, $O_1$, $O_2$, and $O_3$, towards a
world point $A$. Point $A$ is a fixed object at the same location.
$I_1$, $I_2$, and $I_3$ represent the
image planes with respect to each camera orientation.
The angles $\theta_1$, $\theta_2$, and $\theta_3$ are the focal directions towards
object $A$. $c$ is the center point in
the image planes, which remains stationary.
Points $a_1$, $a_2$, and $a_3$
correspond to the projection of $A$ in each viewing
direction $O_1$, $O_2$, and $O_3$, respectively, and
$\alpha$ and $\beta$ are the angular motions of the stage.
Let $u_1$, $u_2$, and $u_3$ denote the coordinates of $a_1$, $a_2$, and $a_3$
along the direction of the stage rotation, let $c$ denote the corresponding
coordinate of the image center, and let $f$ be the focal length in pixels.
The constraints are given by:

$$\tan\theta_i = \frac{u_i - c}{f}, \quad i = 1, 2, 3, \qquad \theta_2 = \theta_1 - \alpha, \qquad \theta_3 = \theta_2 - \beta,$$

which reduce, after eliminating $f$ and $c$, to the following nonlinear
equation in $\theta_1$:

$$(u_1 - u_2)\bigl[\tan(\theta_1 - \alpha) - \tan(\theta_1 - \alpha - \beta)\bigr] = (u_2 - u_3)\bigl[\tan\theta_1 - \tan(\theta_1 - \alpha)\bigr].$$

This equation is solved by searching for its zero-crossing, given the small
field of view of the camera lens. The focal length and the image center can
now be found by the following equations:

$$f = \frac{u_1 - u_2}{\tan\theta_1 - \tan(\theta_1 - \alpha)}, \qquad c = u_1 - f\tan\theta_1.$$

The same construction, applied to the tilting motion, yields the vertical
coordinate of the image center.
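A minimal numerical sketch of this recovery under the notation above; the scan range, sample count, and function name recover_intrinsics are assumptions rather than the authors' implementation.

import numpy as np

def recover_intrinsics(u1, u2, u3, alpha, beta, half_fov=np.radians(15)):
    """Recover the focal length f (pixels) and image-center coordinate c
    from one image coordinate of point A seen at three orientations
    separated by known stage rotations alpha and beta (radians)."""
    def g(t1):
        # Residual of the nonlinear constraint in theta_1.
        return ((u1 - u2) * (np.tan(t1 - alpha) - np.tan(t1 - alpha - beta))
                - (u2 - u3) * (np.tan(t1) - np.tan(t1 - alpha)))

    # Scan for a sign change (zero-crossing) of g over the small field
    # of view; assumes a single crossing in the range.
    ts = np.linspace(-half_fov, half_fov, 20001)
    gs = g(ts)
    k = np.nonzero(np.sign(gs[:-1]) != np.sign(gs[1:]))[0][0]
    # Linear interpolation between the bracketing samples.
    t1 = ts[k] - gs[k] * (ts[k + 1] - ts[k]) / (gs[k + 1] - gs[k])

    f = (u1 - u2) / (np.tan(t1) - np.tan(t1 - alpha))
    c = u1 - f * np.tan(t1)
    return f, c

For synthetic data generated with f = 800 pixels, c = 320, theta_1 = 2 degrees, and alpha = beta = 4 degrees (so u_1 ≈ 347.94, u_2 ≈ 292.06, u_3 ≈ 235.92), recover_intrinsics returns approximately (800, 320).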
Figure 1: Camera orientation and geometry
Figure 2: The software architecture for the motion tracking system with a single camera
Figure 3: An example of automated stage compensation for a moving person
Figure 4: The system architecture of active calibration
The software architecture of active calibration is shown
in Figure 4, where corners are detected, matched, and
their positions refined with subpixel accuracy.
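The paper does not spell out the subpixel refinement scheme; one common choice, shown here purely as an assumption, fits a parabola through the corner-response values around the integer peak and takes its vertex.

import numpy as np

def subpixel_peak(response, x, y):
    """Refine an integer corner location (x, y) by fitting a 1D
    parabola through the response values along each axis and
    returning the vertex position."""
    def vertex(rm, r0, rp):
        # Vertex offset of the parabola through (-1, rm), (0, r0), (1, rp).
        denom = rm - 2.0 * r0 + rp
        return 0.5 * (rm - rp) / denom if denom != 0 else 0.0

    dx = vertex(response[y, x - 1], response[y, x], response[y, x + 1])
    dy = vertex(response[y - 1, x], response[y, x], response[y + 1, x])
    return x + dx, y + dy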
Our results indicate lower dispersion error and better repeatability.
In addition,
the error appears to be correlated with the position of the reference point
in the image frame: it is smaller at the center than near the edges.
This is probably due to lens distortion, which is maximal in the
periphery of the image.
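For reference, under a standard first-order radial distortion model (an assumption; the paper does not fit a distortion model), the displacement grows with the cube of the radius, which is consistent with larger errors near the periphery:

$$r_d = r(1 + k_1 r^2), \qquad r_d - r = k_1 r^3,$$

where $r$ is the undistorted distance from the image center and $k_1$ is the first radial distortion coefficient.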
The results are tabulated below:
Table 1: Recovery of intrinsics
with subpixel accuracy
(panning and tilting angular motion = 4°)
