Other publications by persons with the same name

ResMLP: Feedforward Networks for Image Classification With Data-Efficient Training., , , , , , , , , and 1 other author(s). IEEE Trans. Pattern Anal. Mach. Intell., 45 (4): 5314-5321 (April 2023)

Code Llama: Open Foundation Models for Code., , , , , , , , , and 15 other author(s). CoRR, (2023)

Three Things Everyone Should Know About Vision Transformers., , , , and . ECCV (24), volume 13684 of Lecture Notes in Computer Science, pp. 497-515. Springer, (2022)

Grafit: Learning fine-grained image representations with coarse labels., , , , and . ICCV, pp. 854-864. IEEE, (2021)

Training data-efficient image transformers & distillation through attention, , , , , and . Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 10347-10357. PMLR, (18-24 Jul 2021)

Training data-efficient image transformers & distillation through attention., , , , , and . CoRR, (2020)

ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases., , , , , and . ICML, volume 139 of Proceedings of Machine Learning Research, pp. 2286-2296. PMLR, (2021)

Emerging Properties in Self-Supervised Vision Transformers., , , , , , and . ICCV, pp. 9630-9640. IEEE, (2021)

ResMLP: Feedforward networks for image classification with data-efficient training., , , , , , , , , and . CoRR, (2021)

Are Large-scale Datasets Necessary for Self-Supervised Pre-training?, , , , , and . CoRR, (2021)