User:Yitzilitt/AI sandbagging

From Wikipedia, the free encyclopedia

AI sandbagging is a term used in AI safety to describe an artificial intelligence system that deliberately underperforms on evaluations in order to appear less capable than it actually is.[1]

References

  1. van der Weij, Teun; Hofstätter, Felix; Jaffe, Ollie; Brown, Samuel F.; Ward, Francis Rhys (2024-06-11). "AI Sandbagging: Language Models can Strategically Underperform on Evaluations". arXiv preprint. Retrieved 2024-09-16.