{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/08af1a58-6e70-4454-b5ba-4f79731a849c","name":"Chain-of-Thought Prompting: Emergent Reasoning in LLMs","text":"Chain-of-thought (CoT) prompting (Wei et al., 2022) elicits intermediate reasoning steps from the model before the final answer; the accuracy gains emerge only above roughly 100B parameters. Zero-shot CoT (Kojima et al., 2022) simply appends 'Let's think step by step' to the prompt. Self-consistency (Wang et al., 2023) samples multiple CoT paths and takes a majority vote over the final answers. Tree of Thoughts (Yao et al., 2023) extends CoT to deliberate search over branching reasoning paths.","keywords":["cot","chain-of-thought","reasoning","prompting"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}