Multimodal

Multi Modal Variational AutoEncoders

Why do we have a different representation for each domain? Why can't we have a single, shared representation across modalities such as image, text, and speech? This paper is a baby step in that direction: variational autoencoders for a multimodal generative network.
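One common way such models build a single shared latent is to fuse the per-modality Gaussian posteriors with a product of experts: precisions add, and the fused mean is precision-weighted. A minimal numpy sketch of that fusion step (the function name and shapes are my own, for illustration):

```python
import numpy as np

def poe_fuse(mus, logvars):
    """Fuse per-modality Gaussian posteriors q(z|x_m) = N(mu_m, var_m)
    into one shared posterior via a product of experts:
    summed precisions, precision-weighted mean."""
    mus = np.asarray(mus, dtype=float)      # (M, latent_dim)
    logvars = np.asarray(logvars, dtype=float)
    precisions = np.exp(-logvars)           # 1 / var_m per expert
    var = 1.0 / precisions.sum(axis=0)      # fused variance
    mu = var * (precisions * mus).sum(axis=0)
    return mu, np.log(var)

# Two agreeing experts: fused mean is unchanged, variance is halved.
mu, logvar = poe_fuse([[0.0, 1.0], [0.0, 1.0]],
                      [[0.0, 0.0], [0.0, 0.0]])
```

A nice property of this fusion is that missing modalities can simply be dropped from the product, so the same network handles any subset of inputs at test time.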

Recurrent Multimodal Interaction for Referring Image Segmentation

Treating this problem as a sequential one is an innovative idea, and using an mLSTM cell to encode both visual and linguistic features is a good design choice. It gives the model the ability to forget, early on, those pixels that defy correspondence with the sentence. My gut feeling is that this kind of model would work well even for small objects, because at each time step the set of probable pixels for segmentation is reduced or refined.
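The word-by-word forgetting described above can be sketched as an LSTM step whose input concatenates each pixel's visual feature with the current word embedding; the forget gate is what lets the cell down-weight pixels that stop matching the sentence. A minimal numpy sketch (the paper's mLSTM is convolutional; here each pixel is treated independently for clarity, and all names and shapes are my own):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlstm_step(h, c, vis_feat, word_emb, W, b):
    """One mLSTM-style step applied at every pixel.
    h, c: (P, H) hidden/cell state per pixel
    vis_feat: (P, Dv) per-pixel visual features
    word_emb: (Dw,) embedding of the current word
    W: (Dv + Dw + H, 4H), b: (4H,) shared gate parameters."""
    P = vis_feat.shape[0]
    # Each pixel sees its own visual feature plus the same word.
    x = np.concatenate([vis_feat, np.tile(word_emb, (P, 1))], axis=1)
    z = np.concatenate([x, h], axis=1) @ W + b      # (P, 4H)
    i, f, o, g = np.split(z, 4, axis=1)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # forget gate per pixel
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

# Tiny example: 6 pixels, 4-dim visual features, 3-dim word embedding.
rng = np.random.default_rng(0)
P, Dv, Dw, H = 6, 4, 3, 5
W = rng.normal(size=(Dv + Dw + H, 4 * H)) * 0.1
b = np.zeros(4 * H)
h = np.zeros((P, H))
c = np.zeros((P, H))
h, c = mlstm_step(h, c, rng.normal(size=(P, Dv)), rng.normal(size=(Dw,)), W, b)
```

Running this step once per word in the referring expression, then reading a segmentation score off the final hidden state at each pixel, is the basic loop the paper's architecture follows.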