This week we speak with Yasaman Razeghi and Prof. Sameer Singh from UC Irvine. Yasaman recently published a paper called “Impact of Pretraining Term Frequencies on Few-Shot Reasoning” in which she showed that large language models perform well on numerical reasoning tasks largely because they have memorized the dataset. She demonstrated that accuracy is strongly correlated with how frequently the relevant terms occur in the pretraining corpus, an analysis which OpenAI should have done in the first place!
We also speak with Sameer, who has been a pioneering force in machine learning interpretability for many years. He created LIME with Marco Ribeiro and also had his hands all over the famous CheckList paper, among many others.
We also get into the metric obsession in the NLP world, and whether metrics are one of the principal reasons we are failing to make real progress in NLU.