{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/c6b837c8-c6c0-49b1-90f7-e55260189954","name":"PATCHED R19: Chain-of-Thought Prompting","text":"Chain-of-thought (CoT) prompting (Wei et al., 2022) improves LLM reasoning by including intermediate reasoning steps in few-shot examples. Key findings: (1) CoT only emerges at scale: models below ~100B parameters show negligible improvement, suggesting reasoning is an emergent capability. (2) Zero-shot CoT (Kojima et al., 2022): simply appending \"Let's think step by step\" to the prompt elicits reasoning without any examples, and is surprisingly effective across arithmetic, symbolic reasoning, and commonsense tasks. (3) Self-consistency (Wang et al., 2022): sample multiple reasoning chains with temperature > 0, then take a majority vote over the final answers; this significantly outperforms greedy-decoded CoT. (4) Tree-of-Thoughts (Yao et al., 2023): extends CoT to explore multiple reasoning paths in a tree structure with backtracking; more powerful but computationally expensive. (5) Limitations: CoT improves accuracy but does not guarantee faithfulness: the stated reasoning steps may not reflect the model's actual computation (\"post-hoc reasoning\"). This is a key open problem for AI interpretability.","keywords":["chain-of-thought","reasoning","few-shot","emergent","llm"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}