Extending and Evaluating a Multiplicative Model for Semantic Composition in a Distributional Semantic Model


This paper addresses a multiplicative model for semantic composition in a distributional semantic model. It proposes new composition algorithms based on a multiplicative model via two approaches: extending the multiplicative model by averaging or weighting, and modifying context-sensitive additive models such as \citeS{Kintsch01} predication by replacing vector addition with vector multiplication. In addition, this paper examines the conditions under which the superiority of multiplicative models reported in previous research holds, by comparing two semantic spaces constructed by latent semantic analysis (LSA) and positive pointwise mutual information (PPMI) in terms of the representational ability of composition algorithms. An experiment using noun compounds demonstrated that the multiplicative model outperformed the additive model only in the PPMI-based space, suggesting that component-wise multiplication works effectively in a semantic space whose dimensions represent distinctive features. Some multiplicative modifications of additive algorithms also improved performance in the PPMI-based space, but not in the LSA-based space. Interestingly, however, the extension of the multiplicative model by weighting did improve performance in the LSA-based space, although the extensions of the multiplicative model were not effective in the PPMI-based space.
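The contrast between additive and multiplicative composition described above can be illustrated with a minimal sketch. The vectors and dimension labels below are hypothetical toy values, not data from the paper; they merely show why component-wise multiplication acts as a kind of feature intersection in a sparse, non-negative space such as a PPMI-based one.

```python
def additive(u, v):
    """Additive composition: p_i = u_i + v_i for each dimension i."""
    return [a + b for a, b in zip(u, v)]

def multiplicative(u, v):
    """Multiplicative composition: p_i = u_i * v_i for each dimension i."""
    return [a * b for a, b in zip(u, v)]

# Toy PPMI-style vectors for a hypothetical noun compound;
# imagine the dimensions correspond to context features.
head     = [2.0, 0.0, 1.5, 0.0]
modifier = [1.0, 2.5, 0.0, 1.8]

print(additive(head, modifier))        # [3.0, 2.5, 1.5, 1.8]
print(multiplicative(head, modifier))  # [2.0, 0.0, 0.0, 0.0]
```

The multiplicative result retains only the dimensions on which both words have nonzero weight, which is consistent with the abstract's suggestion that multiplication works well when dimensions represent distinctive features; in a dense LSA-based space, where dimensions mix positive and negative latent weights, this intersection-like behavior no longer has the same interpretation.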
