Datasets

UPCV Action dataset


Description

The UPCV Action dataset was created to serve as a common benchmark for algorithms addressing the task of pose-based action recognition.
The UPCV Action dataset consists of skeletal data corresponding to typical and non-typical human actions, performed by multiple subjects with the aim of being as realistic as possible. In detail, the dataset consists of 10 actions performed by 20 different individuals (10 males and 10 females), aged 22 to 50, each in two separate sessions. The action set was chosen to contain common indoor and outdoor activities performed by pedestrians, such as walking, grabbing something from the floor, looking at a wrist watch, scratching the head, answering a cell phone, crossing the arms and sitting on a chair, together with some unusual actions such as throwing a punch, kicking and waving the hands. In the following, these actions are referred to as "walk", "grab", "watch clock", "scratch head", "phone", "cross arms", "seat", "punch", "kick" and "wave", respectively. The length of the action sequences varies between 8 and 500 frames, with 90% of the sequences having a length in the range of 23 to 167 frames.
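For illustration, the following minimal Matlab sketch shows one plausible way to organize such sequences and inspect their lengths. The cell-array layout, the variable names and the number of joints are assumptions made for this example, not a specification of the released file format.

% Minimal sketch: organizing skeletal action sequences in Matlab.
% ASSUMED layout: each sequence is a T-by-(3*J) matrix (T frames,
% J joints, (x,y,z) per joint); the real file format may differ.
numJoints = 20;                       % assumption, e.g. Kinect 1 skeletons
T = 100;                              % example sequence length in frames
sequences = {rand(T, 3*numJoints)};   % placeholder data, one sequence
labels    = 1;                        % e.g. 1 = "walk"

% Inspect per-sequence lengths and keep only sequences inside the
% 23-167 frame range that covers 90% of the dataset.
lens = cellfun(@(s) size(s, 1), sequences);
keep = (lens >= 23) & (lens <= 167);
sequences = sequences(keep);
labels    = labels(keep);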


Download:
The dataset is freely available here, for experimental and research purposes only.

Citation
If you evaluate on the UPCV Action dataset, please cite the following paper:
I. Theodorakopoulos, D. Kastaniotis, G. Economou, S. Fotopoulos, Pose-based Human Action Recognition via Sparse Representation in Dissimilarity Space, J. Vis. Commun. Image R. (2013), doi: http://dx.doi.org/10.1016/j.jvcir.2013.03.008



UPCV Gait dataset


Description
The UPCV Gait dataset was created to serve as a common benchmark for comparing algorithms targeting pose-based gender and identity recognition.
The UPCV Gait dataset consists of pose sequences of people walking in a straight line, captured using the Microsoft Kinect sensor. The sensor was placed 1.70 meters above the ground, to the left of the walking path, with the sensor's principal direction at an angle of approximately 30° relative to the walking line. Sequences from 30 persons (15 females and 15 males, between the ages of 23 and 55) were captured during our experimental setup. Each person was asked to walk in a straight line at their own normal walking speed, without any visual aid drawn on the floor of the corridor to indicate the path. Five sequences were captured for each person, in three separate sessions during the same day. Each captured sequence consists of 55 to 120 frames, depending mostly on the walking speed of the observed person. Pose estimation was performed using the provided SDK, at a frame rate of approximately 30 fps.
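Since the sensor's principal direction forms an angle of roughly 30° with the walking line, analyses that prefer path-aligned coordinates may need to rotate the skeleton data. The sketch below illustrates this with a rotation about the vertical axis; the sign convention and axis ordering are assumptions for this example and depend on the actual sensor coordinate frame.

% Minimal sketch: rotating sensor-frame joint positions into a frame
% aligned with the walking direction. Sign/axes are assumptions.
theta = deg2rad(30);                  % approximate sensor-to-path angle
R = [ cos(theta) 0 sin(theta);        % rotation about the vertical (y) axis
      0          1 0;
     -sin(theta) 0 cos(theta)];

P = rand(20, 3);                      % one frame: 20 joints x (x,y,z), placeholder
P_aligned = P * R';                   % joint positions in the path-aligned frame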

Download:
The dataset is freely available here, for experimental and research purposes only.



UPCV Gait K2 dataset


Description
The UPCV Gait K2 dataset was created to serve as a common benchmark for comparing algorithms targeting the task of pose-based gait recognition. This is the first dataset captured with the Kinect 2 sensor, and we hope that it will be used as the new benchmark for pose-based recognition algorithms.
The UPCV Gait K2 dataset consists of pose sequences of people walking approximately in a straight line, captured using the Microsoft Kinect 2 sensor. The sensor was placed 1.20 meters above the ground, directly facing the walking path, with the sensor's principal direction at an angle of approximately 0° relative to the walking line. Sequences from 30 persons (17 females and 13 males, between the ages of 21 and 57) were captured during our experimental setup. Each person was asked to walk in a straight line at their own normal walking speed, without any visual aid drawn on the floor of the corridor to indicate the path. Ten sequences were captured for each person, in five separate sessions during the same day. Pose estimation was performed using the provided SDK, at a frame rate of approximately 30 fps.

Download:
The dataset will be available very soon. For more information please contact me.


Code:
Supplementary code will be made available with the release of the database.





Code

Vector of Locally Aggregated Descriptors for face recognition


This is a simplified and self-contained demo, written in Matlab, that presents the Vector of Locally Aggregated Descriptors (VLAD) method. Using the ORL face database, faces are first represented by local intensity patches. These patches are then used to train a codebook with a few elements. Every image is encoded according to the VLAD framework by aggregating the residuals of its patches with respect to their closest codebook elements.
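As a rough illustration of the encoding step described above, here is a minimal Matlab sketch. It is not the released demo: random data stands in for the ORL patches, and kmeans/pdist2 require the Statistics and Machine Learning Toolbox.

% Minimal VLAD sketch: X holds N local descriptors (rows), e.g.
% vectorized 8x8 intensity patches; C is a small learned codebook.
rng(0);
X = rand(200, 64);                    % placeholder: 200 patches, 64-dim
K = 16;                               % codebook size ("a few elements")
[~, C] = kmeans(X, K);                % K-by-64 codebook centroids

% Assign every patch to its nearest codebook element.
[~, nn] = min(pdist2(X, C), [], 2);

% Aggregate the residuals per codebook element, then L2-normalize.
V = zeros(K, size(X, 2));
for k = 1:K
    idx = (nn == k);
    if any(idx)
        V(k, :) = sum(X(idx, :) - C(k, :), 1);  % implicit expansion (R2016b+)
    end
end
vlad = V(:) / norm(V(:));             % final VLAD vector, length K*64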
The code is available here.
Licence: The code is provided freely, without any restriction, for research purposes. However, if you use the code, please add a link to this page.
Please don't hesitate to send me any comments.
Disclaimer: The code is provided as is, without any guarantee of correctness.
Citation: Herve Jegou, Matthijs Douze, Cordelia Schmid and Patrick Perez, Aggregating Local Descriptors into a Compact Image Representation, Proc. IEEE CVPR, June 2010.