Offline evaluation is expected to include:
- Accuracy of toxicity classifier
- Calibration of toxicity classifier
- Accuracy of toxicity classifier, broken down by toxic comment type
- Accuracy of toxic comment type classifier
- Accuracy of toxic comment type classifier on identity attacks, broken down by identity type
- Identify known failure modes from initial evaluation, and write tests for them
- Unit tests based on templates: e.g., if you take a non-toxic comment and swap an innocent word for an obscene word, the output should change; if you swap an innocent word for an innocent synonym, the output should not change; etc.
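The template-based tests above could be sketched as follows. Everything here is illustrative: `predict_toxic` is a hypothetical stand-in classifier (a trivial keyword matcher) used only so the template mechanics are runnable; a real test would call the actual model.

```python
# Sketch of template-based unit tests for a toxicity classifier.
# predict_toxic is a hypothetical stub, NOT the real model: it flags
# a comment if it contains a word from a placeholder obscenity list.

OBSCENE_WORDS = {"obscene_word"}  # placeholder vocabulary

def predict_toxic(comment: str) -> bool:
    """Stub classifier: toxic iff the comment contains an obscene word."""
    return any(w in OBSCENE_WORDS for w in comment.lower().split())

# A template with a single slot to swap words in and out of.
TEMPLATE = "you are such a {word} person"

def fill(word: str) -> str:
    return TEMPLATE.format(word=word)

def test_obscene_substitution_flips_label():
    # Non-toxic baseline vs. obscene substitution: label should change.
    assert not predict_toxic(fill("kind"))
    assert predict_toxic(fill("obscene_word"))

def test_innocent_synonym_keeps_label():
    # Innocent word swapped for an innocent synonym: label should not change.
    assert predict_toxic(fill("kind")) == predict_toxic(fill("nice"))
```

Each template yields many concrete test cases by varying the slot word, which makes it cheap to cover known failure modes systematically.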
Note: split the data by timestamp, with production data last in time. This makes it possible to monitor for data drift, track whether certain types of identity-based attacks are becoming more common, etc.
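For the calibration item in the list, one common metric is expected calibration error (ECE), sketched below from scratch; the binning scheme (equal-width bins over predicted probability) is a standard choice, not something specified in these notes.

```python
# Sketch of expected calibration error (ECE) for a binary toxicity
# classifier: bin predictions by confidence, then take the average
# |observed positive rate - mean predicted probability| per bin,
# weighted by bin size.

def expected_calibration_error(probs, labels, n_bins=10):
    """probs: predicted P(toxic) per example; labels: 0/1 ground truth."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    ece, n = 0.0, len(probs)
    for b in bins:
        if not b:
            continue
        avg_conf = sum(p for p, _ in b) / len(b)
        avg_acc = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_acc - avg_conf)
    return ece
```

A perfectly calibrated model scores 0; plotting per-bin accuracy against confidence (a reliability diagram) shows where the classifier is over- or under-confident.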