Sarath Chandar

Publications

Preprints

  • An Empirical Investigation of the Role of Pre-training in Lifelong Learning.

    Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell.

    ICML Theory and Foundation of Continual Learning Workshop, 2021.

    In arXiv, 2021.

    [pdf]
  • Scaling Laws for the Few-Shot Adaptation of Pre-trained Image Classifiers.

    Gabriele Prato, Simon Guiroy, Ethan Caballero, Irina Rish, Sarath Chandar.

    In arXiv, 2021.

    [arXiv]

Conference and Journal Papers

  1. Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness.

    Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, Sarath Chandar.

    AAAI Conference on Artificial Intelligence (AAAI), 2023.

    [arXiv]

  2. Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes.

    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar.

    Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022.

    [arXiv]
  3. Local Structure Matters Most in Most Languages.

    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar.

    AACL-IJCNLP, 2022.

    [arXiv]
  4. TAG: Task-based Accumulated Gradients for Lifelong Learning.

    Pranshu Malviya, Balaraman Ravindran, Sarath Chandar.

    Conference on Lifelong Learning Agents (CoLLAs), 2022.

    [arXiv], [code]
  5. Improving Meta-Learning Generalization with Activation-Based Early-Stopping.

    Simon Guiroy, Christopher Pal, Gonçalo Mordido, Sarath Chandar.

    Conference on Lifelong Learning Agents (CoLLAs), 2022.

    [arXiv], [code]
  6. Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints.

    Daphné Lafleur, Sarath Chandar, Gilles Pesant.

    Principles and Practice of Constraint Programming (CP), 2022.

  7. Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods.

    Yi Wan*, Ali Rahimi-Kalahroudi*, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, Harm van Seijen.

    International Conference on Machine Learning (ICML), 2022.

    [arXiv], [code]
  8. Post-hoc Interpretability for Neural NLP: A Survey.

    Andreas Madsen, Siva Reddy, Sarath Chandar.

    ACM Computing Surveys, 2022.

    [arXiv]
  9. Local Structure Matters Most: Perturbation Study in NLU.

    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar.

    Findings of ACL, 2022.

    [arXiv]
  10. Memory Augmented Optimizers for Deep Learning.

    Paul-Aymeric McRae*, Prasanna Parthasarathi*, Mahmoud Assran, Sarath Chandar.

    International Conference on Learning Representations (ICLR), 2022.

    [arXiv]
  11. PatchUp: A Regularization Technique for Convolutional Neural Networks.

    Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar.

    AAAI Conference on Artificial Intelligence, 2022.

    [arXiv], [code]

  12. Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?

    Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar.

    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL), 2021.

    [arXiv]
  13. A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic Loss.

    Prasanna Parthasarathi, Mohamed Abdelsalam, Sarath Chandar, Joelle Pineau.

    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL), 2021.

    [arXiv]
  14. MLMLM: Link Prediction with Mean Likelihood Masked Language Model.

    Louis Clouâtre, Philippe Trempe, Amal Zouaq, Sarath Chandar.

    Findings of Association for Computational Linguistics (ACL), 2021.

    [arXiv]
  15. Continuous Coordination As a Realistic Scenario for Lifelong Learning.

    Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville, Sarath Chandar.

    International Conference on Machine Learning (ICML), 2021.

    [arXiv], [code]
  16. A Survey of Data Augmentation Approaches for NLP.

    Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard Hovy.

    Findings of Association for Computational Linguistics (ACL), 2021.

    [arXiv]
  17. IIRC: Incremental Implicitly-Refined Classification.

    Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar.

    Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

    [arXiv], [code], [website], [PyPI], [docs]
  18. Towered Actor Critic for Handling Multiple Action Types in Reinforcement Learning for Drug Discovery.

    Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor, Sarath Chandar.

    AAAI Conference on Artificial Intelligence, 2021.

  19. The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning.

    Harm van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar.

    Neural Information Processing Systems (NeurIPS), 2020.

    [arXiv], [code]
  20. Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning.

    Sai Krishna Gottipati*, Boris Sattarov*, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam MJ Thomas, Simon Blackburn, Connor W Coley, Jian Tang, Sarath Chandar, Yoshua Bengio.

    International Conference on Machine Learning (ICML), 2020.

    [arXiv]
  21. The Hanabi Challenge: A New Frontier for AI Research.

    Nolan Bard*, Jakob N. Foerster*, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare, Michael Bowling.

    Artificial Intelligence Journal (AIJ), 2020.

    [arXiv], [code]
  22. Towards Training Recurrent Neural Networks for Lifelong Learning.

    Shagun Sodhani*, Sarath Chandar*, Yoshua Bengio.

    Neural Computation, 2020.

    [arXiv]

  23. Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study.

    Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, Yoshua Bengio.

    Association for Computational Linguistics (ACL), 2019.

    Nominated for Best Paper Award.

    [arXiv]
  24. Towards Lossless Encoding of Sentences.

    Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp.

    Association for Computational Linguistics (ACL), 2019.

    [arXiv]
  25. Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies.

    Sarath Chandar*, Chinnadhurai Sankar*, Eugene Vorontsov, Samira Ebrahimi Kahou, Yoshua Bengio.

    Proceedings of AAAI, 2019.

    [arXiv], [code]
  26. Edge Replacement Grammars: A Formal Language Approach for Generating Graphs.

    Revanth Reddy*, Sarath Chandar*, Balaraman Ravindran.

    Proceedings of SIAM International Conference on Data Mining (SDM19), 2019.

    [arXiv]

  27. Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph.

    Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, Sarath Chandar.

    Proceedings of AAAI, 2018.

    [arXiv], [code/data]
  28. Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes.

    Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio.

    Neural Computation, 30(4): 857-884, 2018.

    [Initial version appeared in IJCAI Workshop on Deep Reinforcement Learning: Frontiers and Challenges, 2016.]

    [arXiv]

  29. GuessWhat?! Visual object discovery through multi-modal dialogue.

    Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville.

    Proceedings of CVPR, 2017.

    [arXiv]
  30. A Deep Reinforcement Learning Chatbot.

    Iulian V. Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Mudumba, Alexandre de Brebisson, Jose M. R. Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau, Yoshua Bengio.

    Neural Information Processing Systems (NeurIPS) Demonstration Track, 2017.

    Second Prize for Demonstration.

    [arXiv]

  31. A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation.

    Amrita Saha, Mitesh M Khapra, Sarath Chandar, Janarthanan Rajendran, Kyunghyun Cho.

    Proceedings of COLING, 2016.

    [arXiv]
  32. Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus.

    Iulian Vlad Serban, Alberto Garcia-Duran, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, Yoshua Bengio.

    Proceedings of ACL, 2016.

    [arXiv]
  33. Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning.

    Janarthanan Rajendran, Mitesh M Khapra, Sarath Chandar, Balaraman Ravindran.

    Proceedings of NAACL, 2016.

    [Initial version appeared in NIPS Workshop on Multimodal Machine Learning, 2015.]

    [arXiv]
  34. Correlational Neural Networks.

    Sarath Chandar, Mitesh M Khapra, Hugo Larochelle, Balaraman Ravindran.

    Neural Computation, 28(2): 286-304, 2016.

    [pdf], [arXiv], [code]

  35. From Multiple Views to Single View: A Neural Network Approach.

    Subendhu Rongali, Sarath Chandar, Balaraman Ravindran.

    Second ACM-IKDD Conference on Data Sciences, 2015.

  36. An Autoencoder Approach to Learning Bilingual Word Representations.

    Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha.

    Neural Information Processing Systems (NeurIPS), 2014.

    [pdf], [Project Page], [code]

Thesis

  • On Challenges in Training Recurrent Neural Networks.
    Sarath Chandar.
    Ph.D. Thesis, 2019.
    Department of Computer Science and Operations Research, University of Montreal.
    [pdf]
  • Correlational Neural Networks for Common Representation Learning.
    Sarath Chandar.
    Master's Thesis, 2015.
    Department of Computer Science and Engineering, IIT Madras.
    2016 Biswajit Sain Memorial Award for Best MS Thesis in Computer Science, IIT Madras.
    [pdf]

Workshop Papers and Technical Reports

  • Maximum Reward Formulation In Reinforcement Learning.
    Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E Taylor, Sarath Chandar.
    In arXiv, 2020.
    [arXiv]
  • Memory Augmented Neural Networks with Wormhole Connections.
    Caglar Gulcehre, Sarath Chandar, Yoshua Bengio.
    In arXiv, 2017.
    [arXiv]
  • Hierarchical Memory Networks.
    Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio.
    In arXiv, 2016.
    [arXiv]
  • Clustering is Efficient for Approximate Maximum Inner Product Search.
    Alex Auvolat, Sarath Chandar, Pascal Vincent, Yoshua Bengio, Hugo Larochelle.
    In arXiv, 2015.
    [arXiv]
  • TSEB: More Efficient Thompson Sampling for Policy Learning.
    Prasanna P, Sarath Chandar, Balaraman Ravindran.
    In arXiv, 2015.
    [arXiv]
  • Reasoning about Linguistic Regularities in Word Embeddings using Matrix Manifolds.
    Sridhar Mahadevan, Sarath Chandar.
    In arXiv, 2015.
    [arXiv]
  • Multilingual Deep Learning.
    Sarath Chandar, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha.
    NeurIPS Deep Learning Workshop, 2013.
    [pdf]