The capability of interpreting the conceptual and affective information associated with natural language through different modalities is a key issue for the enhancement of human-agent interaction. The proposed methodology, termed sentic blending, enables the continuous interpretation of semantics and sentics (i.e., the conceptual and affective information associated with natural language) based on the integration of an affective common-sense knowledge base with any multimodal signal-processing module. In this work, in particular, sentic blending is interfaced with a facial emotion classifier and an opinion-mining engine. One of the main distinguishing features of the proposed technique is that it does not simply perform cognitive and affective classification in terms of discrete labels, but operates in a multidimensional space that enables the generation of a continuous stream characterising the user's semantic and sentic progress over time, even though the outputs of the unimodal categorical modules have very different time-scales and output labels. © 2013 IEEE.
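The core idea of fusing asynchronous categorical outputs into one continuous multidimensional stream can be sketched as follows. This is an illustrative sketch only, not the paper's actual algorithm: the label set, the 2-D valence/arousal coordinates, and the exponential time-decay weighting are all assumptions made for the example (the paper operates in its own multidimensional affective space).

```python
# Hypothetical mapping of discrete emotion labels to a 2-D
# valence/arousal space; coordinates are illustrative only.
LABEL_COORDS = {
    "joy":     (0.8, 0.5),
    "anger":   (-0.6, 0.8),
    "sadness": (-0.7, -0.4),
    "neutral": (0.0, 0.0),
}

def blend(observations, t_now, half_life=2.0):
    """Blend asynchronous categorical outputs into one continuous point.

    observations: list of (timestamp, label) pairs coming from any
    unimodal module (e.g. a facial classifier or an opinion-mining
    engine), each possibly on its own time-scale.  Older observations
    are down-weighted exponentially, so the blended point evolves
    smoothly over time instead of jumping between discrete labels.
    """
    num_x, num_y, den = 0.0, 0.0, 0.0
    for t, label in observations:
        w = 0.5 ** ((t_now - t) / half_life)  # exponential time decay
        x, y = LABEL_COORDS[label]
        num_x += w * x
        num_y += w * y
        den += w
    if den == 0.0:
        return (0.0, 0.0)
    return (num_x / den, num_y / den)

# Example: a frequently firing facial classifier mixed with sparser
# text-based outputs, all merged into one continuous trajectory.
stream = [(0.0, "neutral"), (1.0, "joy"), (1.5, "joy"), (2.0, "anger")]
point = blend(stream, t_now=2.0)
```

Calling `blend` repeatedly as new observations arrive yields the continuous stream described in the abstract, with the half-life parameter controlling how quickly older unimodal outputs fade from the blended estimate.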

Original publication

DOI: 10.1109/CIHLI.2013.6613272
Type: Conference paper
Publication Date: 04/11/2013
Pages: 108-117