Learning to Infer Graphics Programs from Hand-Drawn Images

07/30/2017
by Kevin Ellis et al.

We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are similar to a trace of the set of primitive commands issued by a graphics program. We then learn a model that uses program synthesis techniques to recover a graphics program from that trace. These programs have constructs like variable bindings, iterative loops, and simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network, measure similarity between drawings by the high-level geometric structures they share, and extrapolate drawings. Taken together, these results are a step towards agents that induce useful, human-readable programs from perceptual input.
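To make the two-stage pipeline concrete, here is a minimal Python sketch. Everything in it is hypothetical: propose_primitives stands in for the trained convolutional network, and synthesize_program stands in for the program synthesizer, which in the actual system searches a much richer DSL with variable bindings, loops, and conditionals rather than the single loop pattern checked here.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Circle:
    x: int
    y: int

def propose_primitives(image: Optional[object]) -> List[Circle]:
    # Stand-in for the CNN: in the real system a network proposes the
    # drawing primitives that explain the image. Here we pretend it
    # detected a row of four evenly spaced circles.
    return [Circle(x=i, y=1) for i in range(1, 5)]

def synthesize_program(trace: List[Circle]) -> str:
    # Stand-in for program synthesis: look for a loop that regenerates
    # the trace, falling back to one drawing command per primitive when
    # no loop structure is found.
    xs = sorted(c.x for c in trace)
    if len(xs) >= 2:
        step = xs[1] - xs[0]
        if step > 0 and all(b - a == step for a, b in zip(xs, xs[1:])):
            return (f"for i in range({len(xs)}):\n"
                    f"    circle(x={xs[0]} + {step} * i, y={trace[0].y})")
    return "\n".join(f"circle(x={c.x}, y={c.y})" for c in trace)

trace = propose_primitives(image=None)  # hypothetical input image
print(synthesize_program(trace))
# for i in range(4):
#     circle(x=1 + 1 * i, y=1)
```

Run on this hypothetical trace, the sketch recovers a loop instead of four literal drawing commands; that compression into high-level structure is what makes error correction, similarity measurement, and extrapolation possible.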
