Publications

Thesis

  1. Correlational Neural Networks for Common Representation Learning.
    Sarath Chandar
    Master's Thesis, 2015
    Department of Computer Science and Engineering, IIT Madras.
    2016 Biswajit Sain Memorial Award for Best MS Thesis in Computer Science, IIT Madras.
    [pdf]

Papers

  1. GuessWhat?! Visual object discovery through multi-modal dialogue.
    Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville.
    Proceedings of CVPR, 2017.
    [arXiv]

  2. Memory Augmented Neural Networks with Wormhole Connections.
    Caglar Gulcehre, Sarath Chandar, Yoshua Bengio.
    arXiv preprint, 2017.
    [arXiv]

  3. Hierarchical Memory Networks.
    Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio.
    arXiv preprint, 2016.
    [arXiv]

  4. Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes.
    Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio.
    IJCAI Workshop on Deep Reinforcement Learning: Frontiers and Challenges, 2016.
    [arXiv]

  5. A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation.
    Amrita Saha, Mitesh M Khapra, Sarath Chandar, Janarthanan Rajendran, Kyunghyun Cho.
    Proceedings of COLING, 2016.
    [arXiv]

  6. Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus.
    Iulian Vlad Serban, Alberto Garcia-Duran, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, Yoshua Bengio.
    Proceedings of ACL, 2016.
    [arXiv]

  7. Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning.
    Janarthanan Rajendran, Mitesh M Khapra, Sarath Chandar, Balaraman Ravindran.
    Proceedings of NAACL, 2016.
    [Initial version appeared in NIPS Workshop on Multimodal Machine Learning, 2015.]
    [arXiv]

  8. Correlational Neural Networks.
    Sarath Chandar, Mitesh M Khapra, Hugo Larochelle, Balaraman Ravindran.
    Neural Computation, 28(2): 286-304, 2016.
    [pdf][arXiv][code]

  9. Clustering is Efficient for Approximate Maximum Inner Product Search.
    Alex Auvolat, Sarath Chandar, Pascal Vincent, Yoshua Bengio, Hugo Larochelle.
    arXiv preprint, 2015.
    [arXiv]

  10. TSEB: More Efficient Thompson Sampling for Policy Learning.
    Prasanna P, Sarath Chandar, Balaraman Ravindran.
    arXiv preprint, 2015.
    [arXiv]

  11. Reasoning about Linguistic Regularities in Word Embeddings using Matrix Manifolds.
    Sridhar Mahadevan, Sarath Chandar.
    arXiv preprint, 2015.
    [arXiv]

  12. From Multiple Views to Single View: A Neural Network Approach.
    Subendhu Rongali, Sarath Chandar, Balaraman Ravindran.
    Second ACM-IKDD Conference on Data Sciences, 2015.

  13. An Autoencoder Approach to Learning Bilingual Word Representations.
    Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha.
    Neural Information Processing Systems (NIPS 27), 2014.
    [pdf][Project Page][code]

  14. Multilingual Deep Learning.
    Sarath Chandar, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha.
    NIPS Deep Learning Workshop, 2013.
    [pdf]