This paper explores the benefits and challenges of using the ACT-R cognitive architecture to develop a large-scale, functional, cognitively motivated language analysis model. It focuses on ACT-R's declarative memory retrieval mechanism, proposing extensions to support verification of retrieved chunks, multi-level activation spread, and carry-over activation. It also argues against the need for inhibition between competing chunks, a mechanism that is necessarily task-specific.