Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics

09/23/2019
by Matteo Migliorini, et al.

The effective utilization at scale of complex machine learning (ML) techniques for High Energy Physics (HEP) use cases poses several technological challenges, most importantly in the implementation of dedicated end-to-end data pipelines. A solution to this issue is presented, which allows training neural network classifiers using solutions from the Big Data ecosystem, integrated with tools, software, and platforms common in the HEP environment. In particular, Apache Spark is used for data preparation and feature engineering, running the corresponding (Python) code interactively in Jupyter notebooks; the key integrations and libraries that make Spark capable of ingesting data stored in the ROOT format and accessed through the EOS/XRootD protocol are described and discussed. Training of the neural network models, defined using the Keras API, is performed in a distributed fashion on Spark clusters using BigDL with Analytics Zoo and TensorFlow. The implementation and the results of the distributed training are described in detail in this work.
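The pipeline summarized above could be sketched roughly as follows. This is an illustrative outline only, not the authors' actual code: it assumes the spark-root data source (for reading ROOT files into Spark DataFrames) and Analytics Zoo with BigDL are installed on the cluster, and the XRootD path, tree name, and column names are placeholders.

```python
import numpy as np
from pyspark.sql import SparkSession
from bigdl.util.common import Sample
from zoo.pipeline.api.keras.models import Sequential
from zoo.pipeline.api.keras.layers import Dense

spark = (SparkSession.builder
         .appName("hep-ml-pipeline")
         .getOrCreate())

# Ingest ROOT files over the EOS/XRootD protocol via the spark-root
# data source (path and tree name are hypothetical placeholders).
events = (spark.read.format("root")
          .option("tree", "Events")
          .load("root://eos.example.cern.ch//eos/project/sample/*.root"))

# Feature engineering with ordinary Spark DataFrame operations
# ("features" and "label" columns are assumed for illustration).
prepared = events.selectExpr("features", "label")

# Convert to an RDD of BigDL Samples for distributed training.
train_rdd = prepared.rdd.map(
    lambda row: Sample.from_ndarray(np.array(row.features),
                                    np.array(row.label)))

# A Keras-style binary classifier defined with the Analytics Zoo API;
# the layer sizes are arbitrary, not taken from the paper.
model = Sequential()
model.add(Dense(64, activation="relu", input_shape=(14,)))
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy")

# Training runs distributed across the Spark cluster's executors.
model.fit(train_rdd, batch_size=1024, nb_epoch=10, distributed=True)
```

Running this sketch requires a Spark cluster with the relevant packages deployed; the point is only to show how the data-preparation and training stages described in the abstract connect within a single Spark-based pipeline.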
