Research

We investigate human-centered AI systems with common sense, aiming to contribute to applications for social good.

We perform fundamental research on commonsense AI and investigate its application to challenging domains, informed by empirical insights and cognitive theories.

Commonsense reasoning: Our team studies both commonsense psychology and naive physics. We perform fundamental research on situational awareness, numeracy, and the modeling of other agents, focusing on robust methods that provide a faithful rationale for their reasoning. We explore neuro-symbolic methods such as prototype-based networks, combinations of LLMs with deterministic engines, and reasoning over scene knowledge graphs (see the first sketch below).
Analogy and abstraction: We study analogy and abstraction in AI, inspired by the cognitive mechanisms that let humans generalize. We focus on narratives and lateral thinking puzzles, covering both text and vision. We create benchmarks and develop neuro-symbolic methods that advance the state of the art in abstraction and analogical reasoning, and we develop methods that derive and leverage explicit representations to enable robust and interpretable learning from experience (see the second sketch below).
AI for social good: We investigate how AI can serve individuals and society. We study and develop methods to interpret complex online media, such as internet memes and arguments, whose meaning depends on personal and cultural values and background knowledge. Our methods strive to detect and explain misinformation and hate speech while accounting for this dependence on personal and cultural views. We also work on knowledge-based AI solutions that support sustainable policies, neuro-symbolic models for traffic monitoring, and multimodal reasoning in robotics.
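
As a rough illustration of the LLM-plus-deterministic-engine pattern mentioned under commonsense reasoning, the sketch below is a minimal, hypothetical example rather than one of our published systems: the call_llm function and the canned expression it returns are placeholders for a real model call. The language model proposes a symbolic formalization of a numeracy question, and a small deterministic evaluator computes the final answer, so the symbolic step, not the LLM, supplies the faithful rationale.

```python
import ast
import operator

# Hypothetical placeholder for a language-model call; in a real system this
# would query an LLM. A canned parse keeps the example self-contained.
def call_llm(prompt: str) -> str:
    return "3 * (7 + 5)"

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Deterministically evaluate a parsed arithmetic expression tree."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):   # numeric literals
        return node.value
    if isinstance(node, ast.BinOp):      # +, -, *, /
        return _OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError(f"Unsupported expression: {ast.dump(node)}")

def answer(question: str):
    # 1) The LLM proposes a symbolic formalization of the question.
    expression = call_llm(f"Rewrite as an arithmetic expression: {question}")
    # 2) The deterministic engine computes the answer from that expression.
    return evaluate(ast.parse(expression, mode="eval"))

print(answer("Three boxes each hold seven red and five blue marbles. How many marbles?"))  # 36
```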
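
The second sketch is likewise a toy, hypothetical illustration of learning with explicit relational representations, not a specific published method: two short narratives are encoded as relation triples, and a brute-force structure-mapping search finds the entity correspondence that preserves the most relations. The example data and the overlap score are invented for illustration.

```python
from itertools import permutations

# Toy relational representations of two short narratives (hypothetical data).
source = {("chases", "cat", "mouse"), ("hides", "mouse", "hole")}
target = {("chases", "detective", "thief"), ("hides", "thief", "warehouse")}

def best_mapping(source, target):
    """Brute-force structure mapping: find the entity correspondence that
    preserves the most relations between the source and target narratives."""
    src_entities = sorted({e for _, a, b in source for e in (a, b)})
    tgt_entities = sorted({e for _, a, b in target for e in (a, b)})
    best, best_score = {}, -1
    for perm in permutations(tgt_entities, len(src_entities)):
        mapping = dict(zip(src_entities, perm))
        score = sum((rel, mapping[a], mapping[b]) in target for rel, a, b in source)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

mapping, score = best_mapping(source, target)
print(mapping, score)  # {'cat': 'detective', 'hole': 'warehouse', 'mouse': 'thief'} 2
```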

See our recent publications for more information.