
Diachronic cross-modal embeddings

File: Diachronic_Cross_modal_Embeddings.pdf (3.69 MB, Adobe PDF)

Abstract

Understanding the semantic shifts of multimodal information is only possible with models that capture cross-modal interactions over time. Under this paradigm, a new embedding is needed that structures visual-textual interactions along the temporal dimension, thus preserving the data's original temporal organisation. This paper introduces a novel diachronic cross-modal embedding (DCM), in which cross-modal correlations are represented in embedding space throughout the temporal dimension, preserving semantic similarity at each instant t. To achieve this, we trained a neural cross-modal architecture under a novel ranking-loss strategy that, for each multimodal instance, enforces the temporal alignment of neighbouring instances through subspace-structuring constraints based on a temporal alignment window. Experimental results show that our DCM embedding successfully organises instances over time. Quantitative experiments confirm that DCM preserves semantic cross-modal correlations at each instant t while also providing better alignment capabilities. Qualitative experiments unveil new ways to browse multimodal content and hint that multimodal understanding tasks can benefit from this new embedding.
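The abstract describes a ranking loss that pulls matched image-text pairs together while treating instances outside a temporal alignment window as negatives. The sketch below illustrates one plausible form of such a loss; the function name, the hinge formulation, and the rule "neighbour = timestamp difference below `window`" are illustrative assumptions, not the paper's actual training objective.

```python
import numpy as np

def dcm_ranking_loss(img_emb, txt_emb, times, margin=0.2, window=1.0):
    """Hedged sketch of a diachronic cross-modal ranking loss.

    img_emb, txt_emb: (N, D) L2-normalised embeddings of paired instances.
    times: (N,) timestamps. Instances whose timestamps differ by less than
    `window` count as temporal neighbours and are exempt from the penalty;
    all other instances act as negatives that the matched pair must beat
    by at least `margin`. All names and the neighbour rule are assumptions.
    """
    sim = img_emb @ txt_emb.T                    # cross-modal cosine similarities
    pos = np.diag(sim)                           # similarity of matched pairs
    dt = np.abs(times[:, None] - times[None, :])
    neighbour = dt < window                      # temporal alignment window
    loss, n = 0.0, len(times)
    for i in range(n):
        for j in range(n):
            if i == j or neighbour[i, j]:
                continue                         # skip self and temporal neighbours
            # hinge: matched pair must outscore out-of-window pairs by `margin`
            loss += max(0.0, margin - pos[i] + sim[i, j])
    return loss / n

# Two well-separated instances far apart in time incur no loss:
emb = np.array([[1.0, 0.0], [0.0, 1.0]])
print(dcm_ranking_loss(emb, emb, np.array([0.0, 10.0])))  # → 0.0
```

In a real system this hinge would typically be minimised with stochastic gradients over mini-batches rather than evaluated with explicit loops, but the windowed-negative structure is the part that makes the embedding diachronic.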


Publisher

ACM - Association for Computing Machinery
