The SyntaxGym site has three main features. We enable users to:

In addition to the clearinghouse website, the SyntaxGym ecosystem provides open-source tools that let researchers perform offline, independent analyses.

Why does SyntaxGym exist?

A growing movement within natural language processing (NLP) and cognitive science asks how we can gain a deeper understanding of the generalizations that neural language models are learning. While a language model may achieve high performance on certain benchmarks, another measure of success is the degree to which its predictions agree with human intuitions about grammatical phenomena. To this end, an emerging line of work has begun evaluating language models as “psycholinguistic subjects” (e.g. Linzen et al. 2016, Futrell et al. 2018). This approach has shown that some models successfully learn a wide range of phenomena while failing at others.
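The core of this evaluation paradigm is a surprisal comparison on minimal pairs: a model with human-like syntactic knowledge should assign lower surprisal (negative log probability) to the grammatical member of a pair than to its ungrammatical counterpart. The sketch below illustrates the comparison logic only, using a toy smoothed unigram model in place of a real neural language model; the corpus, function names, and minimal pair are all invented for illustration.

```python
import math
from collections import Counter

def train_unigram(corpus):
    """Fit a toy add-one-smoothed unigram model on a list of sentences."""
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    vocab = len(counts)
    return lambda w: (counts[w] + 1) / (total + vocab)

def surprisal(prob_fn, sentence):
    """Total surprisal in bits: sum of -log2 p(w) over the words."""
    return sum(-math.log2(prob_fn(w)) for w in sentence.split())

# Toy training corpus (illustrative only).
corpus = ["the dog runs", "the dogs run", "a dog runs", "the cat runs"]
p = train_unigram(corpus)

# Targeted evaluation on a subject-verb agreement minimal pair:
# success means lower surprisal on the grammatical variant.
grammatical = "the dog runs"
ungrammatical = "the dog run"
print(surprisal(p, grammatical) < surprisal(p, ungrammatical))
```

A real study would substitute a neural language model's conditional word probabilities for `prob_fn` and aggregate this success criterion over many items per syntactic phenomenon.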

However, as this subfield grows, it becomes increasingly difficult to compare and replicate results. Test suites from existing papers have been published in a variety of formats, making them difficult to adapt in new studies. It has also been notoriously challenging to reproduce model output due to differences in computing environments and resources.

Furthermore, this research demands nuanced knowledge about both natural language syntax and machine learning. This has made it difficult for experts on both sides to engage in discussion: linguists may have trouble running language models, and computer scientists may have trouble designing robust suites of test items.

This is why we created SyntaxGym: a unified platform where language and NLP researchers can design psycholinguistic tests and visualize the performance of language models. Our goal is to make psycholinguistic assessment of language models more standardized, reproducible, and accessible to a wide variety of researchers.


If you use the website or command-line tools in your research, we ask that you please cite the ACL 2020 system demonstration paper:

    @inproceedings{gauthier-etal-2020-syntaxgym,
        title = "{S}yntax{G}ym: An Online Platform for Targeted Evaluation of Language Models",
        author = "Gauthier, Jon and Hu, Jennifer and Wilcox, Ethan and Qian, Peng and Levy, Roger",
        booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
        month = jul,
        year = "2020",
        address = "Online",
        publisher = "Association for Computational Linguistics",
        url = "https://www.aclweb.org/anthology/2020.acl-demos.10",
        pages = "70--76",
    }

If you use the original test suites, models, or results presented on the website, please cite the ACL 2020 long paper:

    @inproceedings{hu-etal-2020-systematic,
        title = "A Systematic Assessment of Syntactic Generalization in Neural Language Models",
        author = "Hu, Jennifer and Gauthier, Jon and Qian, Peng and Wilcox, Ethan and Levy, Roger",
        booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
        month = jul,
        year = "2020",
        address = "Online",
        publisher = "Association for Computational Linguistics",
        url = "https://www.aclweb.org/anthology/2020.acl-main.158",
        pages = "1725--1744",
    }


SyntaxGym was created by Jennifer Hu, Jon Gauthier, Ethan Wilcox, Peng Qian, and Roger Levy in the MIT Computational Psycholinguistics Laboratory. J.H. is supported by the NIH under award number T32NS105587 and an NSF Graduate Research Fellowship. J.G. is supported by an Open Philanthropy AI Fellowship. R.P.L. gratefully acknowledges support from the MIT-IBM Watson AI Lab, a Google Faculty Research Award, and a Newton Brain Science Award.

If you have any questions or feedback, please email us at contact@syntaxgym.org.

The kettlebell icon in our logo was made by Freepik from www.flaticon.com.