MMDF2018 Workshop Report

08/30/2018
by Chun-An Chou, et al.

Driven by recent advances in smart, miniaturized, and mass-produced sensors, networked systems, and high-speed data communication and computing, the ability to collect and process large volumes of high-veracity, real-time data from a variety of modalities is expanding. However, despite research thrusts explored since the late 1990s, no standard, generalizable solutions have yet emerged for effectively integrating and processing multimodal data; consequently, practitioners across a wide variety of disciplines must still follow a trial-and-error process to identify the optimal procedure for each individual application and data source. A deeper understanding of the utility and capabilities (as well as the shortcomings and challenges) of existing multimodal data fusion methods, as a function of data and problem characteristics, has the potential to deliver better data analysis tools across all sectors, thereby enabling more efficient and effective automated manufacturing, patient care, infrastructure maintenance, environmental understanding, transportation networks, energy systems, and more.

There is therefore an urgent need to identify the underlying patterns that determine, a priori, which techniques will be most useful for a given dataset or application. This next stage of understanding and discovery (i.e., the development of generalized solutions) can only be achieved through a high-level, cross-disciplinary aggregation of learnings, and this workshop was proposed at an opportune time, as many domains have already begun exploring multimodal data fusion techniques in a wide range of application-specific contexts.
