Professor Singh Awarded Two NSF Grants to Advance Machine Learning

August 22, 2018

Assistant Professor of Computer Science Sameer Singh has been awarded two separate National Science Foundation (NSF) grants that started this summer. Both relate to machine learning but in very different ways.

Increasing Transparency and Trust
The first grant, “Explaining Decisions of Black-Box Models via Input Perturbations,” focuses on explaining the predictions of machine learning models. From the user's perspective, machine learning systems are essentially “black boxes”: they make complex decisions, but users don't know how or why. This lack of understanding will become increasingly problematic as machine learning supports more of our financial, healthcare, technology and defense systems.

To address this issue, Singh and his team are developing algorithms that explain why a classifier makes particular decisions, increasing the ease of use (and trust) of these complex systems through transparency. Furthermore, the team will make its work readily available via publications, open-source software, jargon-free documentation and interactive tutorials/demonstrations to encourage the use of machine learning in novel domains.
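
To give a rough flavor of the perturbation idea behind this grant, the sketch below (a hypothetical illustration, not the team's actual software) drops pieces of an input at random, watches how a black-box classifier's prediction changes, and fits a simple linear surrogate whose weights indicate which pieces mattered most. The toy classifier, the sentence and the library choices are all assumptions made for the example.

# Hypothetical sketch of perturbation-based explanation (not the project's code).
# Assumes NumPy and scikit-learn; the "black box" here is a toy stand-in.
import numpy as np
from sklearn.linear_model import Ridge

def explain_prediction(black_box, words, num_samples=1000, seed=0):
    """Score each word by randomly dropping words and fitting a linear
    surrogate to the black box's responses on the perturbed texts."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(num_samples, len(words)))  # 1 = keep the word
    masks[0] = 1                                                 # include the full sentence once
    texts = [" ".join(w for w, keep in zip(words, row) if keep) for row in masks]
    probs = np.array([black_box(t) for t in texts])              # black-box score per perturbation
    surrogate = Ridge(alpha=1.0).fit(masks, probs)
    return dict(zip(words, surrogate.coef_))                     # weight ~ word importance

# Toy black box: returns a high "positive" score whenever "great" appears.
toy_model = lambda text: 0.9 if "great" in text else 0.2
print(explain_prediction(toy_model, "the movie was great".split()))

In this toy run the word "great" receives by far the largest weight, mirroring the kind of per-prediction explanation the project aims to produce for real classifiers.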

Modeling Multiple Modalities
The second grant, “Modeling Multiple Modalities for Knowledge-Base Construction,” focuses on machine learning over multiple modalities, such as text, images, numbers and databases. In particular, Singh and his team will investigate a novel construction pipeline for knowledge bases, combining textual and relational evidence with numerical, image and tabular data.

To accomplish this, the team will first extract new facts about an entity from a document by combining the different parts (that is, the text, images and tables). Then, the team will develop models to identify missing relations in graphs that contain multimodal facts. Ultimately, the project will initiate a body of research in machine learning and natural language processing that uses unstructured multimodal data to better extract knowledge.
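
As a hypothetical sketch of what scoring a multimodal fact might look like (not the grant's actual models), the example below fuses per-modality feature vectors for each entity into a single embedding and scores a candidate (subject, relation, object) triple with a simple bilinear function; every dimension, projection and feature here is made up for illustration.

# Hypothetical sketch of multimodal knowledge-base scoring (not the team's models).
# Entity embeddings are built by projecting text, image and numeric features
# into a shared space; a DistMult-style product then scores a candidate fact.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # assumed shared embedding size

def embed_entity(text_vec, image_vec, numeric_vec, W_text, W_image, W_num):
    """Fuse modality-specific features into one entity embedding by
    projecting each modality into the shared space and summing."""
    return W_text @ text_vec + W_image @ image_vec + W_num @ numeric_vec

def score_fact(subj, rel, obj):
    """Higher score means the (subject, relation, object) triple looks more plausible."""
    return float(np.sum(subj * rel * obj))

# Random stand-ins for learned projection matrices and raw per-modality features.
W_text, W_image, W_num = (rng.normal(size=(DIM, d)) for d in (300, 512, 8))
subj = embed_entity(rng.normal(size=300), rng.normal(size=512), rng.normal(size=8),
                    W_text, W_image, W_num)
obj = embed_entity(rng.normal(size=300), rng.normal(size=512), rng.normal(size=8),
                   W_text, W_image, W_num)
rel = rng.normal(size=DIM)
print("plausibility score:", score_fact(subj, rel, obj))

In a trained system the projections, relation vectors and features would be learned from data, and high-scoring triples not yet present in the graph would become candidate new facts for the graph-completion step described above.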

— Shani Murray
