Abstract

ENIGMA is a learning-based method for guiding given-clause selection in saturation-based theorem provers. Clauses from many previous proof searches are classified as positive or negative based on their participation in the proofs. An efficient classification model is trained on this data to classify a clause as useful or useless for the proof search. The learned classifier is then used to guide subsequent proof searches by prioritizing useful clauses among the generated clauses. The approach is evaluated on the E prover and the CASC 2016 AIM benchmark, showing a large increase in E’s performance.
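To make the workflow in the abstract concrete, the following is a minimal sketch, not the authors' implementation: clauses are mapped to simple bag-of-symbols feature vectors, a linear classifier (here a toy perceptron standing in for an efficient linear model such as LIBLINEAR) is trained on positive/negative clauses from prior proofs, and newly generated clauses are then ordered by the learned score. All function names, the feature scheme, and the toy data are illustrative assumptions.

```python
# Illustrative sketch of ENIGMA-style clause classification (assumed names
# and feature scheme; not the paper's actual code or features).
from collections import Counter

def features(clause):
    """Bag-of-symbols feature vector for a clause given as a token list."""
    return Counter(clause)

def train_perceptron(examples, epochs=20):
    """Train a linear model on (clause_tokens, label) pairs,
    label +1 = clause participated in a proof, -1 = it did not."""
    w = Counter()
    for _ in range(epochs):
        for clause, label in examples:
            f = features(clause)
            score = sum(w[s] * c for s, c in f.items())
            if label * score <= 0:          # misclassified: update weights
                for s, c in f.items():
                    w[s] += label * c
    return w

def score(w, clause):
    """Higher score = predicted more useful for the proof search."""
    return sum(w[s] * c for s, c in features(clause).items())

# Toy training data from hypothetical previous proof searches.
train = [
    (["mult", "X", "e", "=", "X"], +1),
    (["inv", "inv", "X", "=", "X"], +1),
    (["junk", "f", "g", "h"], -1),
    (["junk", "p", "q"], -1),
]
w = train_perceptron(train)

# Prioritize newly generated clauses by learned score (descending),
# mimicking how the classifier guides given-clause selection.
generated = [["junk", "r"], ["mult", "e", "X", "=", "X"]]
ordered = sorted(generated, key=lambda c: -score(w, c))
```

In the real system the learned evaluation is combined with E's other clause evaluation heuristics rather than replacing them outright.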

Acknowledgments

We thank Stephan Schulz for his open and modular implementation of E and its many features that allowed us to do this work. We also thank the Machine Learning Group at National Taiwan University for making LIBLINEAR openly available. This work was supported by the ERC Consolidator grant no. 649043 AI4REASON.