PETS-ECCV 2004
Prague, Czech Republic - 10 May 2004


Sixth IEEE International Workshop on Performance Evaluation of Tracking and Surveillance

New: Final versions of papers are due 19 April.

Registration is managed by the ECCV 2004 Organizers

One-day workshop organized in association with ECCV 2004

sponsored by CAVIAR (IST-2001-37540) Context Aware Vision using Image-based Active Recognition

Workshop Organization

PETS Steering Committee

James Ferryman, University of Reading, UK

James L. Crowley, INP Grenoble, France

PETS '04 Program Co-Chairs:

James L. Crowley, INP Grenoble, France

Robert B. Fisher, University of Edinburgh, UK

Jose Santos-Victor, IST-ISR Lisbon, Portugal

Workshop webmaster:
Daniela Hall, INRIA Rhone Alpes, France

Proposed Program Committee

(to be announced)

Call for Papers

PETS '04 at ECCV '04 continues the theme of the highly successful series of PETS workshops, previously held at FG '00, CVPR '01, ECCV '02, ICVS '03 and ICCV '03. The theme for PETS '04 is observing human activity. A number of video clips have been recorded of actors performing activities. These include people walking alone, meeting with others, window shopping, fighting, passing out and, last but not least, leaving a package in a public place. All clips were filmed with a wide-angle camera lens in the entrance lobby of the INRIA Rhône-Alpes research laboratory in Montbonnot, France. The sequences are half-resolution PAL (384 x 288 pixels, 25 frames per second) and are compressed using MPEG2. File sizes are mostly between 6 and 12 MB, with a few up to 21 MB.
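For orientation, here is a minimal sketch of how a participant might open and step through one of these MPEG2 clips using OpenCV in Python. The clip name is purely illustrative; the actual sequence names are published with the test data.

import cv2

cap = cv2.VideoCapture("Walk1.mpg")  # hypothetical clip name
assert cap.isOpened(), "could not open the MPEG2 clip"

width  = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))   # expected: 384
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # expected: 288
fps    = cap.get(cv2.CAP_PROP_FPS)                # expected: 25
print(f"{width}x{height} @ {fps} fps")

while True:
    ok, frame = cap.read()   # frame is a height x width x 3 BGR array
    if not ok:
        break
    # ... tracking and labeling would run here ...

cap.release()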

Six scenarios have been recorded. Activities include one person walking in a straight line (3 sequences), a person browsing at information displays (5 sequences), behaviours while seated in a chair (3 sequences), people abandoning packages (5 sequences), groups of people meeting (6 sequences) and people fighting (4 sequences). For each scenario, a ground-truth file has been constructed that gives, for each individual, a bounding box, an activity label (appear, disappear, occluded, inactive, active, walking, running) and a scenario role (fighter, browser, left victim, leaving group, walker, left object), together with a situation label for each frame (moving, inactive, browsing) and a scenario label for each frame (browsing, immobile, walking, drop down). These ground-truth files will be made public for half of the sequences. The PETS challenge is to demonstrate automatic labeling of the non-labeled sequences. The results of processing should be submitted as a raw text file in the PETS 2004 format. Automatic processes will be run to collect statistics on error rates and precision for tracking and labeling individuals, and on error rates for labeling situations.
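To make the structure of these annotations concrete, the sketch below shows one possible in-memory representation of a frame's ground truth in Python. The exact syntax of the PETS 2004 text format is defined by the organizers and is not reproduced here; the field names and the (x, y, width, height) box convention are assumptions for illustration only.

from dataclasses import dataclass
from typing import List, Tuple

ACTIVITIES = {"appear", "disappear", "occluded", "inactive",
              "active", "walking", "running"}
ROLES = {"fighter", "browser", "left victim", "leaving group",
         "walker", "left object"}

@dataclass
class IndividualAnnotation:
    track_id: int
    box: Tuple[int, int, int, int]  # (x, y, width, height) in pixels, assumed convention
    activity: str                   # one of ACTIVITIES
    role: str                       # one of ROLES

@dataclass
class FrameAnnotation:
    frame: int
    situation: str      # e.g. "moving", "inactive", "browsing"
    scenario: str       # e.g. "browsing", "immobile", "walking", "drop down"
    individuals: List[IndividualAnnotation]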

Papers should describe the tracking and recognition methods used, estimate or measure their computational cost, and present error rates obtained with the published ground truth. Authors are also invited to propose new performance evaluation metrics that might be of interest. The results of the analysis of the automatically labeled data will be provided by the workshop organizers. Accepted papers will be printed in a workshop proceedings distributed at the workshop.
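As an illustration of the kind of quantities such an evaluation could report, the sketch below computes a per-frame label error rate and a bounding-box overlap (intersection over union) in Python. These are generic, assumed metrics, not the official PETS 2004 scoring scripts.

def label_error_rate(predicted, ground_truth):
    """Fraction of frames whose predicted label disagrees with the
    ground truth; both arguments map frame index -> label."""
    frames = ground_truth.keys()
    errors = sum(1 for f in frames if predicted.get(f) != ground_truth[f])
    return errors / len(frames) if frames else 0.0

def box_iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes, a common
    overlap measure for judging tracking precision."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0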

Important Dates

15 December 2003
Publication of Test Sequences
1 March 2004
Deadline for electronic paper submission
30 March 2004
Notification of acceptance
19 April 2004
Deadline for camera-ready version
10 May 2004
Date for workshop in Prague

Contact

James L. Crowley

INRIA Rhone Alpes

655 Ave de l'Europe

38330 Montbonnot

France

Phone: +33 476 61 53 96

Fax: +33 476 61 562 10

email: James dot Crowley at imag dot fr