Models of errors during routine sequential action are typically interface-independent. Here, however, we present evidence that different spatial layouts of the same task produce different patterns of sequence errors. We account for these data by extending the Memory for Goals framework's activation-based sequential process to include environmental (e.g., visual) contextual cues and a richer priming structure. The resulting model shows a strong qualitative and quantitative fit to the experimental data.
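To make the kind of mechanism at issue concrete, the sketch below is a minimal illustration, under our own simplifying assumptions rather than the model reported here, of an activation-based choice among candidate next steps in which priming from the just-completed step and a bonus from environmental (e.g., visual) cues are added to a noisy base activation. All names, parameter values, and the noise model are assumptions for illustration only.

```python
# Minimal illustrative sketch (not the reported model): activation-based
# selection of the next step, combining a base level, associative priming
# from the prior step, and a bonus for steps cued by the visible layout.
import random

def activation(step, prev_step, visible_cues, base, priming, cue_weight, noise_sd):
    a = base[step]
    a += priming.get((prev_step, step), 0.0)   # associative priming from the prior step
    if step in visible_cues:                   # environmental context boosts cued steps
        a += cue_weight
    return a + random.gauss(0.0, noise_sd)     # transient noise -> occasional wrong winner

def choose_next(candidates, prev_step, visible_cues, base, priming,
                cue_weight=0.5, noise_sd=0.3):
    # The step with the highest noisy activation is selected; when a wrong
    # step wins, a sequence error results.
    return max(candidates,
               key=lambda s: activation(s, prev_step, visible_cues, base,
                                        priming, cue_weight, noise_sd))

# Hypothetical example: after "add water" the correct next step is
# "add coffee grounds", but a salient cue for "press brew" can occasionally
# capture the choice, producing an anticipation (skip-ahead) error.
base = {"add coffee grounds": 1.0, "press brew": 0.8}
priming = {("add water", "add coffee grounds"): 0.4}
picked = choose_next(["add coffee grounds", "press brew"], "add water",
                     visible_cues={"press brew"}, base=base, priming=priming)
print(picked)
```

Under this sketch, changing which steps the layout makes visually salient (the `visible_cues` set) changes which incorrect steps are most likely to intrude, which is the sense in which error patterns can become interface-dependent.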