I’m a cognitive scientist who builds human-centered data systems.
My MIT eye-tracking research (hundreds of citations) taught me how people actually see, remember, and decide.
Today I use those principles to design AI workflows and data platforms that reduce cognitive load and drive action.
Short version: technology should work the way humans think.
At MIT, I didn't just study vision—I mapped it. Tracking thousands of eye movements across visual scenes revealed how humans construct meaning from chaos in milliseconds. We discovered that attention isn't random; it follows predictable patterns based on both visual features and personal experience. This research into human information processing became my foundation for understanding why some data visualizations work instantly while others fail despite being "correct."
At UT Austin's HABLA lab, I discovered the messy reality of applied cognitive science. Working on NIH-funded bilingualism research meant designing experiments that 5-year-olds could complete while generating data rigorous enough for clinical decisions. I built data pipelines that tracked language development patterns across hundreds of children—learning that the best system architecture means nothing if humans can't use it under pressure. This taught me that cognitive principles only matter when they solve real problems for real people.
Running BALEX Healthcare Services revealed why healthcare technology often fails: it's designed for ideal scenarios, not cognitive reality. Nurses making decisions at 3am don't have working memory to spare. Our interfaces needed to work when users were exhausted, stressed, and multitasking. Three years of building systems under these constraints taught me that successful technology doesn't just process data correctly—it presents information in ways that reduce cognitive load when it matters most.
Consulting across industries revealed a universal pattern: technical solutions fail when they ignore human cognition. Whether helping a risk management firm visualize threats or designing medical bill classification systems, the challenge was always the same—how do we present complex information so humans can act on it quickly and accurately? My cognitive science training became invaluable for designing interfaces that guide attention to what matters and hide complexity until it's needed.
Supporting the SEC's data catalog initiative brought everything full circle. Most data discovery fails because systems are organized for machines, not human memory. We built interfaces that mirror how people actually recall information—through association and context, not alphabetical lists. My role wasn't just technical; it was cognitive translation. I helped teams understand that findability isn't about perfect taxonomies—it's about matching how humans naturally organize and retrieve information. The Python tools I built reflected this: simple surfaces hiding sophisticated information architecture designed around human memory patterns.
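A toy version of that idea, with hypothetical dataset names and tags (nothing here is from the actual SEC codebase): link catalog entries by shared context tags and let lookup follow the associations, the way human recall does, instead of walking an alphabetical index.

```python
from collections import defaultdict

# Hypothetical mini-catalog: datasets linked by shared context tags
# rather than stored in one flat alphabetical list.
CATALOG = {
    "quarterly_filings": {"reporting", "finance"},
    "insider_transactions": {"finance", "compliance"},
    "enforcement_actions": {"compliance", "legal"},
    "broker_registrations": {"legal", "registration"},
}

# Invert the catalog: tag -> datasets that carry it.
TAG_INDEX = defaultdict(set)
for dataset, tags in CATALOG.items():
    for tag in tags:
        TAG_INDEX[tag].add(dataset)

def related(dataset: str) -> list[str]:
    """Datasets reachable through any shared tag, ranked by tag overlap."""
    overlap = defaultdict(int)
    for tag in CATALOG[dataset]:
        for neighbor in TAG_INDEX[tag]:
            if neighbor != dataset:
                overlap[neighbor] += 1
    return sorted(overlap, key=lambda d: (-overlap[d], d))

print(related("insider_transactions"))
# -> ['enforcement_actions', 'quarterly_filings'] (shared compliance/finance context)
```

A real catalog buries this under a search UI, but the shape is the point: retrieval follows association and context, not alphabetization.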
The same curiosity that drives my research shows up everywhere else in life.
Long-distance thinking, one mile at a time
Striving to expand knowledge for the public good — Inspired by the stunningly informative visuals at Our World in Data
Translating complex ideas into approachable insights — Explore one of my favorite visual explainers at Distill.pub
"How will a tired human at 3pm on Friday actually use this?" I test interfaces under cognitive load. If it doesn't work when you're distracted, it doesn't work.
"What patterns are hiding in this chaos?" But more importantly: "How do I surface them so humans can see them instantly?" Statistical significance means nothing if humans can't grasp it.
"What decision does this enable?" Every visualization, every model, every interface must answer: what action should someone take based on this? No action, no value.
"How does this scale cognitively?" A system that works for 10 items might break human comprehension at 100. I design for cognitive scalability, not just computational.
Today, I'm applying 15+ years of cognitive insights to the hardest problems in data and AI: How do we make machine learning explainable? How do we build dashboards that actually drive decisions? How do we design human-AI collaboration that amplifies rather than replaces human intelligence?
My recent projects—from knowledge graphs that mirror human memory to workout trackers that understand behavioral patterns—all demonstrate the same principle: technology succeeds when it works the way humans think.
I'm actively exploring the intersection of attention mechanisms in transformers and human visual attention. The parallels aren't coincidental—the same principles that help humans find meaning in visual chaos are now helping machines process language. This convergence is where I want to build next.
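For the curious, the machine side of that parallel is compact. This is a minimal NumPy sketch of standard scaled dot-product attention, the textbook formulation rather than anything novel:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.

    Each query spreads a fixed budget of weight over the keys, loosely
    analogous to how visual attention allocates limited gaze across a scene.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # relevance of every key to every query
    weights = softmax(scores, axis=-1)  # each row sums to 1: a limited budget
    return weights @ V                  # weighted recombination of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```

Every row of the weight matrix sums to one: like gaze, attention here is a scarce resource that must be allocated, which is precisely the constraint my eye-tracking work measured in humans.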
I specialize in the intersection of human understanding and technical execution—turning cognitive science into systems that genuinely serve their users.
Let’s talk about building something meaningful together →