Author of the publication

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. (2019). cite arxiv:1905.00537. Comment: NeurIPS 2019, super.gluebenchmark.com, updating acknowledgements.

Please choose a person to relate this publication to

To distinguish between persons with the same name, the academic degree and the title of an important publication will be displayed. You can also use the button next to the name to display some publications already assigned to the person.


Other publications of authors with the same name

Embedding Word Similarity with Neural Machine Translation. ICLR (Workshop), 2015.
Know your audience: specializing grounded language models with listener subtraction. EACL, pages 3866-3893. Association for Computational Linguistics, 2023.
Learning to Understand Goal Specifications by Modelling Reward. ICLR (Poster), OpenReview.net, 2019.
SemPPL: Predicting Pseudo-Labels for Better Contrastive Representations. ICLR, OpenReview.net, 2023.
Semantic Exploration from Language Abstractions and Pretrained Representations. NeurIPS, 2022.
Towards mental time travel: a hierarchical memory for reinforcement learning agents. NeurIPS, pages 28182-28195, 2021.
Multimodal Few-Shot Learning with Frozen Language Models. NeurIPS, pages 200-212, 2021.
Specializing Word Embeddings for Similarity or Relatedness. EMNLP, pages 2044-2048. The Association for Computational Linguistics, 2015.
Can language models learn from explanations in context? EMNLP (Findings), pages 537-563. Association for Computational Linguistics, 2022.
Learning Abstract Concept Embeddings from Multi-Modal Data: Since You Probably Can't See What I Mean. EMNLP, pages 255-265. ACL, 2014.