The CORSMAL benchmark for the prediction of the properties of containers

07/27/2021
by Alessio Xompero, et al.

Acoustic and visual sensing can support the contactless estimation of the weight of a container and the amount of its content when a person manipulates them. However, the opaqueness or transparency of both the container and the content, and the variability of materials, shapes, and sizes, make this problem challenging. In this paper, we present an open framework to benchmark methods for the estimation of the capacity of a container and the type, mass, and amount of its content. The framework includes a dataset, well-defined tasks and performance measures, baselines and state-of-the-art methods, and an in-depth comparative analysis of these methods. To classify the type and amount of the content, either independently or jointly, the methods use deep neural networks with audio alone or with a combination of audio and visual data. Regression and geometric approaches with visual data are preferred to determine the capacity of the container. Results show that methods using only audio as input classify the content type and level with a weighted average F1-score of up to 81%. Estimating the container capacity with vision-only approaches and the filling mass with audio-visual, multi-stage algorithms reaches capacity and mass scores of up to 65%.
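For reference, the weighted average F1-score used for the content type and level classification tasks is the per-class F1-score averaged with class-support weights. A minimal sketch with scikit-learn is shown below; the labels are hypothetical placeholders, not data from the CORSMAL benchmark.

# Minimal sketch of the weighted average F1-score metric.
# The labels below are hypothetical examples, not CORSMAL data.
from sklearn.metrics import f1_score

y_true = ["pasta", "rice", "water", "water", "pasta", "rice"]
y_pred = ["pasta", "water", "water", "water", "pasta", "pasta"]

# 'weighted' averages the per-class F1-scores, weighting each class
# by its number of true instances (its support).
score = f1_score(y_true, y_pred, average="weighted")
print(f"Weighted average F1-score: {score:.2f}")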

