Learning Heuristics over Large Graphs via Deep Reinforcement Learning

03/08/2019
by   Akash Mittal, et al.

In this paper, we propose a deep reinforcement learning framework called GCOMB to learn algorithms that can solve combinatorial problems over large graphs. GCOMB mimics the greedy algorithm for the original problem and incrementally constructs a solution. The proposed framework utilizes a Graph Convolutional Network (GCN) to generate node embeddings that predict, from the entire node set, which nodes are likely to appear in the solution set. These embeddings enable an efficient training process for learning the greedy policy via Q-learning. Through extensive evaluation on several real and synthetic datasets containing up to a million nodes, we establish that GCOMB is up to 41% better than the state of the art, up to seven times faster than the greedy algorithm, and robust and scalable to large dynamic networks.
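The incremental greedy construction described above can be sketched in a few lines: at each step, a scoring function rates every remaining node against the current partial solution, and the highest-scoring node is added until the budget is exhausted. In GCOMB that score comes from GCN embeddings and a learned Q-function; the `q_score` below is a hypothetical degree-based stand-in, not the paper's model.

```python
def q_score(graph, node, solution):
    # Hypothetical proxy for the learned Q-value: the number of
    # neighbours of `node` not already covered by the solution.
    # GCOMB would instead score (node, partial solution) pairs
    # with a Q-network over GCN node embeddings.
    return len(set(graph[node]) - solution - {node})

def greedy_construct(graph, budget):
    """Incrementally build a solution set of size `budget`,
    at each step adding the node with the highest predicted score."""
    solution = set()
    for _ in range(budget):
        candidates = [n for n in graph if n not in solution]
        if not candidates:
            break
        best = max(candidates, key=lambda n: q_score(graph, n, solution))
        solution.add(best)
    return solution

# Example: a small graph as an adjacency dict.
graph = {
    0: [1, 2, 3],
    1: [0, 2],
    2: [0, 1],
    3: [0, 4],
    4: [3],
}
print(greedy_construct(graph, 2))
```

The loop structure is the same for any budget-constrained subset-selection problem (e.g. influence maximization or vertex cover); only the scoring function changes, which is what makes a single learned Q-function reusable across instances.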
