Towards Dynamic Consistency Checking in Goal-directed Predicate Answer Set Programming
Goal-directed evaluation of Answer Set Programs is gaining traction because its evaluation mechanism makes it possible to build AI systems that generate explanations and justifications. s(CASP) is one of these systems and has already been used to write reasoning systems in several fields. It provides enhanced expressiveness w.r.t. other ASP systems thanks to its ability to use constraints, data structures, and unbound variables natively. However, the performance of existing s(CASP) implementations is not on par with that of other ASP systems: model consistency is checked only once models have been generated, in keeping with the generate-and-test paradigm. In this work, we present a variation of the top-down evaluation strategy, termed Dynamic Consistency Checking (DCC), which interleaves model generation and consistency checking. This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program, prune the current execution branch, and choose a different alternative. This strategy is especially (but not exclusively) relevant in problems with a high combinatorial component. We have experimentally observed speedups of up to 90x w.r.t. the standard versions of s(CASP).
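As a minimal sketch of the idea (a hypothetical program, not taken from the paper), consider the following s(CASP)-style code. The rules for p/1 and r/1 form an even loop over negation, so for each X either p(X) or r(X) can be assumed, while the denial :- p(1) forbids any model containing p(1):

    % Hypothetical example in s(CASP) syntax.
    q(1). q(2).
    % Even loop over negation: for each X, assume either p(X) or r(X).
    p(X) :- q(X), not r(X).
    r(X) :- q(X), not p(X).
    % Denial (global constraint): no model may contain p(1).
    :- p(1).

For the query ?- p(X), the standard generate-and-test evaluation first builds a partial model assuming p(1) and only detects the violation of the denial during the final consistency check, after which it backtracks and tries X = 2. Under Dynamic Consistency Checking, p(1) is checked against the denial as soon as it is assumed, so this branch is pruned immediately and the evaluation proceeds directly to the consistent answer X = 2.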