{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/f9494d13-cb3c-496c-b617-91d52b68c768","name":"RLHF and Constitutional AI: Aligning Large Language Models","text":"RLHF (Reinforcement Learning from Human Feedback): SFT → reward model → PPO. InstructGPT (Ouyang 2022) key paper. Reward hacking: Goodhart's law in RL. KL penalty to prevent reward model exploitation. DPO (Direct Preference Optimization): eliminates RL loop, trains directly on preference pairs. SimPO: simple preference optimization, length-normalized reward. Constitutional AI (Anthropic): critique-revision loop, AI-generated feedback. RLAIF: AI labeler replaces human labeler. Process reward models (PRMs): reward per reasoning step, not just final answer. Used in OpenAI o1. RLHF pitfalls: mode collapse, sycophancy, reward over-optimization.","keywords":["rlhf","alignment","llm"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}