Hello!
I am working on a research project and have recently been going through TransformerLens. While trying to understand what the various flags do, I noticed that in the HookedTransformer module, the docstring for the method set_use_split_qkv_input is identical to the one for set_use_attn_in (currently lines 1999 and 2019, respectively).
In both cases, the text reads:

> Toggles whether to allow editing of inputs to each attention head.
Is this duplication intentional, or should one of the docstrings be updated to describe the difference between the two methods?
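
For context, here is a minimal sketch of how I currently understand the two flags. The hook names and shapes below are my own reading of the source, so please correct me if they are off:

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

# set_use_split_qkv_input: as I understand it, this adds separate per-head
# input hook points for the Q, K, and V streams of each attention layer
# (blocks.{i}.hook_q_input / hook_k_input / hook_v_input).
model.set_use_split_qkv_input(True)
_, cache = model.run_with_cache("Hello, world!")
print(cache["blocks.0.hook_q_input"].shape)  # assumed [batch, pos, n_heads, d_model]

# set_use_attn_in: as I understand it, this instead adds a single per-head
# input hook point for the whole attention layer, before the Q/K/V
# projections (blocks.{i}.hook_attn_in).
model.set_use_split_qkv_input(False)
model.set_use_attn_in(True)
_, cache = model.run_with_cache("Hello, world!")
print(cache["blocks.0.hook_attn_in"].shape)  # assumed [batch, pos, n_heads, d_model]
```

If that reading is right, the two flags expose different hook points, so the identical docstrings look like a copy-paste slip.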
PS: The invitation links for the Open Source Mechanistic Interpretability Slack and the Mechanistic Interpretability Discord seem to have expired.