
Encyclopedia of Learning and Using AI



Type: Research Paper

Time commitment: a few hours

Categories: AI Prompts 101, AI Prompts, AI Research

Tags: CoT, LLM Reasoning, AI Research Paper

Discover how Chain-of-Thought (CoT) prompting significantly boosts large language model (LLM) reasoning capabilities in this insightful paper.

This paper explores how generating a chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning. 

The authors show that such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting, in which a few chain-of-thought demonstrations are provided as exemplars in the prompt.
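The method described above amounts to prepending a handful of worked question-and-answer examples, each showing its intermediate reasoning steps, before the new question. The sketch below illustrates this prompt construction; the exemplar is adapted from the paper's illustrative tennis-ball example, and the helper name `build_cot_prompt` is just for illustration, not an API from the paper.

```python
# A minimal sketch of few-shot chain-of-thought prompt construction.
# No model call is made here; the resulting string would be sent to
# any text-completion LLM of your choice.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str, exemplars: list[str]) -> str:
    """Prepend worked examples (question plus step-by-step answer) to a
    new question, so the model imitates the reasoning format."""
    return "".join(exemplars) + f"\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. They used 20 to make lunch and bought "
    "6 more. How many apples do they have?",
    [COT_EXEMPLAR],
)
print(prompt)
```

Because the exemplar answer walks through its arithmetic before stating "The answer is ...", the model tends to produce the same step-by-step format for the new question, which is the core of the technique.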

Experiments on three large language models show that chain-of-thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a PaLM 540B model with just eight chain-of-thought exemplars achieves state-of-the-art accuracy on the GSM8K benchmark of math word problems, surpassing even a finetuned GPT-3 with a verifier.

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
