Flexible Spatial Language Behaviors: Developing a Neural Dynamic Theoretical Framework

Abstract

To date, models of spatial language have tended to overlook process-based accounts of how scene representations are built and how those representations support flexible spatial language behaviors. To address this theoretical gap, we implemented a model that combines spatial and color semantic terms with neurally grounded scene representations. Tests of this model using real-world camera input support its viability as a theoretical framework for behaviorally flexible spatial language.
