Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits

01/01/2023
by   Ruibo Liu, et al.

We present Second Thought, a new learning paradigm that enables language models (LMs) to re-align with human values. By modeling the chain of edits between value-unaligned and value-aligned text, with LM fine-tuning and additional refinement through reinforcement learning, Second Thought not only achieves superior performance on three value-alignment benchmark datasets but also shows strong human-value transfer-learning ability in few-shot scenarios. The generated editing steps also offer better interpretability and ease of interactive error correction. Extensive human evaluations further confirm its effectiveness.
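The abstract does not spell out how the chain of edits is represented, so the sketch below is only a rough illustration of the general idea: it derives a word-level chain of edits from an unaligned/aligned text pair with Python's difflib and serializes it into a source/target pair that a sequence-to-sequence LM could be fine-tuned on. The edit-step format, example texts, and function names are assumptions for illustration, not the authors' specification, and the reinforcement-learning refinement stage is not shown.

```python
# Illustrative sketch only: the exact edit representation and training
# format used by Second Thought are assumptions, not the paper's spec.
import difflib


def edit_chain(unaligned: str, aligned: str) -> list[str]:
    """Derive a simple word-level chain of edit steps from the unaligned
    text to the aligned text using a longest-common-subsequence diff."""
    src, tgt = unaligned.split(), aligned.split()
    steps = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, src, tgt).get_opcodes():
        if op == "replace":
            steps.append(f"replace '{' '.join(src[i1:i2])}' with '{' '.join(tgt[j1:j2])}'")
        elif op == "delete":
            steps.append(f"delete '{' '.join(src[i1:i2])}'")
        elif op == "insert":
            steps.append(f"insert '{' '.join(tgt[j1:j2])}'")
    return steps


def make_training_example(unaligned: str, aligned: str) -> dict:
    """Serialize the edit chain into a source/target pair for supervised
    fine-tuning: the target lists the edit steps, then the aligned text."""
    steps = edit_chain(unaligned, aligned)
    target = " ; ".join(steps) + f" => {aligned}"
    return {"source": unaligned, "target": target}


if __name__ == "__main__":
    # Hypothetical unaligned/aligned pair, used only to show the output format.
    example = make_training_example(
        "You should just give up on them.",
        "You could try talking with them before deciding anything.",
    )
    print(example["target"])
```

Serializing the edit steps into the target, rather than predicting only the aligned text, is what exposes the intermediate reasoning the abstract credits with better interpretability and easier interactive error correction.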
