research
How can we build AI systems that complement and extend our ability to make things? I tackle this question using tools from computational cognitive science. I develop online experiments where people build, draw, and create graphics, either alone or in collaboration with others. I then deconstruct these behaviors with statistical analysis and machine learning models.
How do we learn to make things?
A critical aspect of human generative behavior is our ability to learn on the fly. I’ve found that people are extremely fast learners when faced with novel construction tasks, acquiring new skills in just a few minutes of practice. This is partly due to our ability to consolidate experience into new procedural abstractions.
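For a flavor of what this consolidation might look like computationally, here is a minimal sketch (not the model from my work, and every name in it, like find_chunks, consolidate, and the "arch" routine, is invented for illustration): given traces of a builder's actions, we can count recurring action subsequences and chunk frequent ones into a single named procedure.

```python
from collections import Counter

def find_chunks(action_sequences, chunk_len=3, min_count=2):
    """Count every contiguous subsequence of length `chunk_len`
    across a set of action sequences, returning those that recur
    often enough to be worth consolidating into a procedure."""
    counts = Counter()
    for seq in action_sequences:
        for i in range(len(seq) - chunk_len + 1):
            counts[tuple(seq[i:i + chunk_len])] += 1
    return [chunk for chunk, n in counts.items() if n >= min_count]

def consolidate(seq, chunk, name):
    """Rewrite a sequence so each occurrence of `chunk` is replaced
    by a single abstract action `name`: the new procedure."""
    out, i = [], 0
    while i < len(seq):
        if tuple(seq[i:i + len(chunk)]) == tuple(chunk):
            out.append(name)
            i += len(chunk)
        else:
            out.append(seq[i])
            i += 1
    return out

# Example: a builder who repeatedly places two blocks side by side
# and one on top may come to treat that routine as one "arch" move.
trials = [
    ["left", "right", "top", "move"],
    ["move", "left", "right", "top"],
]
for chunk in find_chunks(trials):
    trials = [consolidate(seq, chunk, "arch") for seq in trials]
print(trials)  # [['arch', 'move'], ['move', 'arch']]
```

After consolidation, the learner can plan over "arch" as a single step, which is one (highly simplified) way practice could translate into faster construction.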
This rapid learning poses a challenge for AI systems: how are they meant to keep up with a human partner whose cognitive landscape is constantly changing? Fortunately for us, this is exactly the same challenge faced by humans when they collaborate…
How do we learn to make things together?
While people have many ways of coordinating their behavior, it’s clear that language is a uniquely powerful tool for aligning our goals. In this project, we created an algorithmic model of how collaborators develop linguistic conventions for referring to abstract concepts. This and other related projects leverage programmatic representations to model how people represent and communicate about the world.
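To give a flavor of how convention formation can be modeled algorithmically, here is a toy sketch in the spirit of classic naming-game and reinforcement accounts; it is an assumption-laden illustration, not the model from the project. Two agents play repeated reference games and strengthen word-concept associations whenever communication succeeds, so a shared lexicon emerges.

```python
import random

class Agent:
    """Keeps a weight for every (concept, word) pair; speaks by
    sampling a word for a concept, listens by picking the concept
    most strongly associated with the heard word."""
    def __init__(self, concepts, words):
        self.weights = {(c, w): 1.0 for c in concepts for w in words}
        self.concepts, self.words = concepts, words

    def speak(self, concept):
        ws = [self.weights[(concept, w)] for w in self.words]
        return random.choices(self.words, weights=ws)[0]

    def listen(self, word):
        return max(self.concepts, key=lambda c: self.weights[(c, word)])

    def reinforce(self, concept, word, amount=1.0):
        self.weights[(concept, word)] += amount

concepts = ["tower", "bridge"]
words = ["blip", "zorp", "frav"]
a, b = Agent(concepts, words), Agent(concepts, words)

for _ in range(500):
    speaker, listener = random.sample([a, b], 2)
    target = random.choice(concepts)
    word = speaker.speak(target)
    if listener.listen(word) == target:  # success: entrench the convention
        speaker.reinforce(target, word)
        listener.reinforce(target, word)

# After repeated interaction, the pair converges on a shared lexicon.
print({c: a.speak(c) for c in concepts})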
By studying how people align the way they think and talk about the world, we can develop AI systems that are better able to understand and collaborate with us. The use of language as a tool for coordination is particularly promising given the rise of large language models, which I envision as a link between the flexibility of human cognition and the precision and power of AI.