Effective management of learned knowledge is a challenge when modeling human-level behavior within complex, temporally extended tasks. This paper evaluates one approach to this problem: forgetting knowledge that is not in active use (as determined by base-level activation) and that can likely be reconstructed if it becomes relevant. We apply this model of selective retention to the working and procedural memories of Soar. When evaluated in simulated robotic exploration and a competitive multi-player game, the resulting policies improve model reactivity and scaling while maintaining reasoning competence.
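For concreteness, the sketch below illustrates the base-level activation computation referenced above, assuming the standard ACT-R-style formulation (activation as the log of summed, power-law-decayed access recencies). The decay rate and forgetting threshold shown are illustrative placeholders, not parameter values from this paper.

```python
import math

def base_level_activation(access_times, current_time, decay=0.5):
    """Base-level activation: ln( sum_j (t_now - t_j)^(-d) ).

    access_times: times at which the memory element was created or accessed.
    decay: power-law decay rate (0.5 is a common ACT-R default; the
           value used in this work may differ).
    """
    ages = [current_time - t for t in access_times if current_time > t]
    if not ages:
        return float("-inf")  # never accessed: minimal activation
    return math.log(sum(age ** -decay for age in ages))

def is_forgetting_candidate(access_times, current_time,
                            threshold=-2.0, decay=0.5):
    """An element whose activation has decayed below a fixed threshold
    (hypothetical value here) is a candidate for removal, provided it
    can likely be reconstructed later if it becomes relevant."""
    return base_level_activation(access_times, current_time, decay) < threshold
```

Under this scheme, elements that go unused decay toward the threshold and become eligible for forgetting, while repeated access boosts activation and defers removal.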