Steering Out-of-Distribution Generalization
with Concept Ablation Fine-Tuning

Helena Casademunt1*, Caden Juang2*, Adam Karvonen, Samuel Marks3, Senthooran Rajamanoharan, Neel Nanda
1Harvard University, 2Northeastern University, 3Anthropic; *Equal contribution

ArXiv Preprint · Source Code

How do we control what a model learns?

Fine-tuning large language models (LLMs) can lead to unintended out-of-distribution generalization. Standard approaches to this problem modify the training data, for example by adding data that better specifies the intended generalization. However, this is not always practical. We introduce Concept Ablation Fine-Tuning (CAFT), a technique that leverages interpretability tools to control how LLMs generalize from fine-tuning, without modifying the training data or otherwise using data from the target distribution. Given a set of directions in an LLM's latent space corresponding to undesired concepts, CAFT ablates these concepts with linear projections during fine-tuning, steering the model away from unintended generalizations. We successfully apply CAFT to three fine-tuning tasks, including emergent misalignment, a phenomenon where LLMs fine-tuned on a narrow task generalize to give egregiously misaligned responses to general questions. Without any changes to the fine-tuning data, CAFT reduces misaligned responses by 10x without degrading performance on the training distribution. Overall, CAFT represents a novel approach to steering LLM generalization without modifying training data.
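To make the core operation concrete, the sketch below shows one way the ablation could be implemented in PyTorch: a forward hook that, during fine-tuning, projects each hidden activation onto the subspace orthogonal to an undesired concept direction (i.e., applies I − dd^T). This is a minimal illustration under assumed names, not the paper's implementation; `model`, `layer_idx`, and `concept_direction` are placeholders, and the HuggingFace-style layer layout is an assumption.

python

import torch

def make_ablation_hook(direction: torch.Tensor):
    # Normalize so the projection removes exactly the component along `direction`.
    direction = direction / direction.norm()

    def hook(module, inputs, output):
        # Decoder layers in HuggingFace-style models typically return a tuple
        # whose first element is the hidden states of shape (batch, seq, d_model).
        hidden = output[0] if isinstance(output, tuple) else output
        d = direction.to(dtype=hidden.dtype, device=hidden.device)
        # Ablate the concept: h <- h - (h . d) d, the linear projection (I - d d^T).
        coeffs = hidden @ d                       # (batch, seq)
        hidden = hidden - coeffs.unsqueeze(-1) * d
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden

    return hook

# Hypothetical usage: attach the hook at one layer, then fine-tune as usual.
# handle = model.model.layers[layer_idx].register_forward_hook(
#     make_ablation_hook(concept_direction)
# )
# ... run the standard fine-tuning loop; the hook keeps activations orthogonal
#     to the undesired concept throughout training ...
# handle.remove()

Because the projection is applied inside the forward pass, gradients flow through it, so the model is optimized under the constraint that the ablated concept is unavailable, rather than having the concept removed only at inference time.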

How to cite

This work is not yet peer-reviewed. The preprint can be cited as follows.

bibtex

@misc{casademunt2025steeringoutofdistributiongeneralizationconcept,
    title={Steering Out-of-Distribution Generalization with Concept Ablation Fine-Tuning}, 
    author={Helena Casademunt and Caden Juang and Adam Karvonen and Samuel Marks and Senthooran Rajamanoharan and Neel Nanda},
    year={2025},
    eprint={2507.16795},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2507.16795}, 
}