Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought

08/16/2023
by Bin Lei, et al.

Recent advancements in large-scale models, such as GPT-4, have showcased remarkable capabilities in addressing standard queries. However, when facing complex problems that require multi-step logical reasoning, their accuracy drops dramatically. Current research has explored prompt engineering as a way to bolster the inferential capacities of these models. Our paper introduces a new prompting technique, dubbed Graph of Thoughts (GoT). Tested on three challenges of escalating difficulty (the 24-point game, solving high-degree polynomial equations, and deriving formulas for recursive sequences), our method outperformed GPT-4, achieving accuracy improvements of 89.7%, 86%, and 56% on the respective tasks. Moreover, compared with the state-of-the-art (SOTA) prompting method, Tree of Thought (ToT), our approach registered average accuracy boosts of 23%, 24%, and 15%.
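For context on the first benchmark: the 24-point game gives the player four numbers and asks for an arithmetic expression using +, -, *, and /, with each number used exactly once, that evaluates to 24. The sketch below is a plain brute-force solver included only to illustrate what each test instance demands; it is not part of the GoT prompting framework, and the function name solve_24 is our own invention.

```python
def solve_24(nums, target=24.0, eps=1e-6):
    """Brute-force reference solver: repeatedly combine any two
    remaining numbers with +, -, *, / until one value is left,
    and return an expression that evaluates to the target."""
    def search(vals):
        # vals is a list of (value, expression) pairs.
        if len(vals) == 1:
            value, expr = vals[0]
            return expr if abs(value - target) < eps else None
        for i in range(len(vals)):
            for j in range(len(vals)):
                if i == j:
                    continue
                (a, ea), (b, eb) = vals[i], vals[j]
                rest = [vals[k] for k in range(len(vals)) if k not in (i, j)]
                candidates = [
                    (a + b, f"({ea} + {eb})"),
                    (a - b, f"({ea} - {eb})"),
                    (a * b, f"({ea} * {eb})"),
                ]
                if abs(b) > eps:  # skip division by zero
                    candidates.append((a / b, f"({ea} / {eb})"))
                for value, expr in candidates:
                    found = search(rest + [(value, expr)])
                    if found:
                        return found
        return None

    return search([(float(n), str(n)) for n in nums])


if __name__ == "__main__":
    # A solvable instance; prints one valid expression,
    # e.g. "(4 * (7 - (8 / 8)))".
    print(solve_24([4, 7, 8, 8]))
```

Even this tiny search space, orderings times operator choices times parenthesizations, requires systematic backtracking, which is exactly the kind of multi-step reasoning that single-pass prompting tends to struggle with.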
