SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation
Moira Shooter
Charles Malleson
Adrian Hilton
{m.shooter, charles.malleson, a.hilton}@surrey.ac.uk
University of Surrey (UK)
[Paper]
[Supp]
[Poster]
[arXiv]

Samples from the SyDog dataset.
Pipeline for generating synthetic dog images with 2D annotations of joint locations and bounding box coordinates.

Abstract

Estimating the pose of animals can facilitate the understanding of animal motion, which is fundamental in disciplines such as biomechanics, neuroscience, ethology, robotics and the entertainment industry. Human pose estimation models have achieved high performance due to the large amount of training data available. Achieving the same results for animal pose estimation is challenging due to the lack of animal pose datasets. To address this problem we introduce SyDog: a synthetic dataset of dogs containing ground truth pose and bounding box coordinates, generated using the game engine Unity. We demonstrate that pose estimation models trained on SyDog achieve better performance than models trained purely on real data and significantly reduce the need for labour-intensive labelling of images. We release the SyDog dataset as a training and evaluation benchmark for research in animal motion.


Paper and Supplementary Material

M. Shooter, C. Malleson, A. Hilton
SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation
In CV4Animals: Computer Vision for Animal Behavior Tracking and Modeling Workshop, in conjunction with the Conference on Computer Vision and Pattern Recognition (CVPR), 2021.


[Bibtex]


Dataset

The dataset contains 32k synthetic images of dogs with 2D annotations (2D joint locations and bounding box coordinates). The dataset can be accessed by downloading the .zip file; a demo showing how to use the dataset is included. If you encounter any issues, e-mail m.shooter@surrey.ac.uk.
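As a rough illustration of working with per-image annotations of this kind, the sketch below parses a single hypothetical record holding 2D joint locations and a bounding box, and checks that the joints fall inside the box. The field names (`image`, `bbox`, `joints`) and the JSON layout are assumptions for illustration only; consult the demo included with the dataset for the actual file format.

```python
import json

# Hypothetical annotation record; the real SyDog layout may differ.
sample = json.loads("""
{
  "image": "dog_00001.png",
  "bbox": [112.0, 80.0, 300.0, 260.0],
  "joints": [[150.0, 120.0], [200.0, 140.0], [250.0, 200.0]]
}
""")

def joints_in_bbox(record):
    """Count 2D joints lying inside the box [x_min, y_min, x_max, y_max]."""
    x0, y0, x1, y1 = record["bbox"]
    return sum(1 for x, y in record["joints"]
               if x0 <= x <= x1 and y0 <= y <= y1)

print(joints_in_bbox(sample))  # all three sample joints lie inside the box
```

A check like this is a quick sanity test when consuming pose annotations, since joints outside their bounding box usually indicate a coordinate-convention mismatch.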


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.