Direct Segmented Sonification of Characteristic Features of the Data Domain

11/30/2017
by Paul Vickers, et al.

Sonification and audification create auditory displays of datasets. Audification translates data points into digital audio samples and the auditory display's duration is determined by the playback rate. Like audification, auditory graphs maintain the temporal relationships of data while using parameter mappings (typically data-to-frequency) to represent the ordinate values. Such direct approaches have the advantage of presenting the data stream 'as is' without the imposed interpretations or accentuation of particular features found in indirect approaches. However, datasets can often be subdivided into short non-overlapping variable-length segments that each encapsulate a discrete unit of domain-specific significant information, and current direct approaches cannot represent these. We present Direct Segmented Sonification (DSSon) for highlighting the segments' data distributions as individual sonic events. Using domain knowledge to segment data, DSSon presents segments as discrete auditory gestalts while retaining the overall temporal regime and relationships of the dataset. The method's structural decoupling from the sound stream's formation means playback speed is independent of the individual sonic event durations, thereby offering highly flexible time compression/stretching to allow zooming into or out of the data. Demonstrated by three models applied to biomechanical data, DSSon displays high directness, letting the data 'speak' for themselves.
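The core ideas in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the linear data-to-frequency mapping, the sine-tone synthesis, and all parameter values below are illustrative assumptions. It shows (a) domain-derived segments rendered as discrete sonic events, (b) event onsets that preserve the dataset's temporal relationships and scale with a playback-speed factor, and (c) event durations that stay fixed regardless of that factor, i.e. the decoupling of playback speed from individual event length.

```python
import math

SAMPLE_RATE = 8000  # Hz (illustrative choice)

def freq_map(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linear parameter mapping from an ordinate value to a frequency (Hz)."""
    if vmax == vmin:
        return fmin
    return fmin + (value - vmin) / (vmax - vmin) * (fmax - fmin)

def render_dsson(data, segments, time_per_point=0.05, speed=1.0, event_dur=0.1):
    """Render a DSSon-style display as a list of audio samples in [-1, 1].

    data:     list of ordinate values
    segments: non-overlapping (start, end) index pairs chosen via domain knowledge
    speed:    playback-speed factor; event ONSETS compress/stretch with it,
              but each event's duration (event_dur) does not.
    """
    vmin, vmax = min(data), max(data)
    total_time = len(data) * time_per_point / speed + event_dur
    out = [0.0] * int(total_time * SAMPLE_RATE)
    for start, end in segments:
        onset = start * time_per_point / speed      # temporal position scales with speed
        n = int(event_dur * SAMPLE_RATE)            # event length independent of speed
        seg = data[start:end]
        base = int(onset * SAMPLE_RATE)
        for i in range(n):
            # sweep through the segment's values over the course of the event,
            # so each event sonifies its segment's data distribution
            v = seg[min(int(i / n * len(seg)), len(seg) - 1)]
            f = freq_map(v, vmin, vmax)
            out[base + i] += 0.5 * math.sin(2 * math.pi * f * i / SAMPLE_RATE)
    return out
```

Doubling `speed` halves the overall display duration (zooming out of the data) while every sonic event keeps its `event_dur` length, which is the flexibility the abstract attributes to decoupling the sound stream's formation from playback rate.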
