The Cognitive Scientist Who Builds Data Systems

Barbara Hidalgo-Sotelo

I’m a cognitive scientist who builds human-centered data systems.
My MIT eye-tracking research (hundreds of citations) taught me how people actually see, remember, and decide. Today I use those principles to design AI workflows and data platforms that reduce cognitive load and drive action.

Short version: technology should work the way humans think.

The Big Turns

2009

Learning to See How We See

At MIT, I didn't just study vision—I mapped it. Tracking thousands of eye movements across visual scenes revealed how humans construct meaning from chaos in milliseconds. We discovered that attention isn't random; it follows predictable patterns based on both visual features and personal experience. This research into human information processing became my foundation for understanding why some data visualizations work instantly while others fail despite being "correct."

2013

From Lab Theory to Human Impact

At UT Austin's HABLA lab, I discovered the messy reality of applied cognitive science. Working on NIH-funded bilingualism research meant designing experiments that 5-year-olds could complete while generating data rigorous enough for clinical decisions. I built data pipelines that tracked language development patterns across hundreds of children—learning that the best system architecture means nothing if humans can't use it under pressure. This taught me that cognitive principles only matter when they solve real problems for real people.

2015

Healthcare Through a Cognitive Lens

Running BALEX Healthcare Services revealed why healthcare technology often fails: it's designed for ideal scenarios, not cognitive reality. Nurses making decisions at 3am don't have working memory to spare. Our interfaces needed to work when users were exhausted, stressed, and multitasking. Three years of building systems under these constraints taught me that successful technology doesn't just process data correctly—it presents information in ways that reduce cognitive load when it matters most.

2017

The Pattern Recognition Years

Consulting across industries revealed a universal pattern: technical solutions fail when they ignore human cognition. Whether helping a risk management firm visualize threats or designing medical bill classification systems, the challenge was always the same—how do we present complex information so humans can act on it quickly and accurately? My cognitive science training became invaluable for designing interfaces that guide attention to what matters and hide complexity until it's needed.

2021

Making Metadata Human-Friendly

Supporting the SEC's data catalog initiative brought everything full circle. Most data discovery fails because systems are organized for machines, not human memory. We built interfaces that mirror how people actually recall information—through association and context, not alphabetical lists. My role wasn't just technical; it was cognitive translation. I helped teams understand that findability isn't about perfect taxonomies—it's about matching how humans naturally organize and retrieve information. The Python tools I built reflected this: simple surfaces hiding sophisticated information architecture designed around human memory patterns.
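
As a rough illustration of that idea (the names and structure below are simplified stand-ins, not the actual SEC tooling): rank datasets by conceptual overlap and by association with what someone just worked on, instead of matching names alphabetically.

```python
# Illustrative sketch of association-based catalog lookup; names are made up.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    tags: set = field(default_factory=set)      # concepts people reach for ("filings", "10-K")
    related: set = field(default_factory=set)   # datasets commonly used alongside this one

def associative_search(query_tags, recently_used, catalog):
    """Rank by conceptual overlap plus association with recent work,
    instead of alphabetical name matching."""
    def score(rec):
        return len(rec.tags & query_tags) + 0.5 * len(rec.related & recently_used)
    return sorted((r for r in catalog if score(r) > 0), key=score, reverse=True)

catalog = [
    DatasetRecord("filings_10k", {"filings", "annual", "10-K"}, {"company_facts"}),
    DatasetRecord("company_facts", {"fundamentals", "XBRL"}, {"filings_10k"}),
    DatasetRecord("insider_trades", {"ownership", "form-4"}, set()),
]
for rec in associative_search({"filings"}, {"company_facts"}, catalog):
    print(rec.name)   # filings_10k surfaces first: it matches both the concept and the context
```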

Cognitive Principles That Guide My Work

Humans see patterns, not pixels. Design for pattern recognition.
Attention is scarce. Every element must earn its cognitive cost.
Memory is associative. Build systems that think in relationships.
Understanding happens in layers. Progressive disclosure beats information dumps.
Context determines meaning. The same data tells different stories to different viewers.
Cognitive load is cumulative. Simplify everything that isn't the main message.

How Cognitive Science Shapes My Data Science

My MIT training isn't just academic background—it's my competitive advantage. Every project benefits from understanding how humans actually process information:

Experimental Rigor

Every dashboard is an experiment. I build in metrics to track not just what users click, but where their attention goes and what they remember. A/B testing isn't enough—you need to understand the cognitive why.
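
As a simplified sketch of what that instrumentation can look like (the event types and field names are illustrative, not a specific product's API): log dwell time per panel alongside clicks, then ask where attention actually went.

```python
# Hypothetical sketch: treat a dashboard as an experiment by logging more than clicks.
from collections import defaultdict

events = [
    # (user, panel, event_type, seconds_visible)
    ("u1", "revenue_trend", "viewport_dwell", 14.0),
    ("u1", "anomaly_table", "viewport_dwell", 3.5),
    ("u1", "anomaly_table", "click", 0.0),
    ("u2", "revenue_trend", "viewport_dwell", 2.0),
    ("u2", "anomaly_table", "viewport_dwell", 11.0),
]

def attention_share(events):
    """Approximate where attention went: each panel's share of total dwell time."""
    dwell = defaultdict(float)
    for _, panel, etype, seconds in events:
        if etype == "viewport_dwell":
            dwell[panel] += seconds
    total = sum(dwell.values()) or 1.0
    return {panel: round(t / total, 2) for panel, t in dwell.items()}

print(attention_share(events))   # {'revenue_trend': 0.52, 'anomaly_table': 0.48}
```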

Visual Processing Expertise

I know exactly how eyes scan a dashboard: top-left start, diagonal sweep, selective attention to contrast. This isn't theory—I've tracked thousands of scan patterns. I design visualizations that guide eyes to insights in the order they need to understand them.

Attention Architecture

Humans can track 7±2 items in working memory. Period. Every interface I design respects this limit. Complex analyses get broken into digestible chunks. Multi-model AI comparisons show 3 at a time, not 10. Constraints drive clarity.
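
In practice that can be as simple as paginating comparisons into small groups, as in this illustrative sketch (the model names and scores are made up):

```python
# Minimal sketch of respecting working-memory limits: show a long comparison
# three items at a time instead of all at once.
def chunk(items, size=3):
    """Yield successive groups of `size` items for progressive display."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

model_scores = [("model-a", 0.91), ("model-b", 0.89), ("model-c", 0.87),
                ("model-d", 0.85), ("model-e", 0.84), ("model-f", 0.80)]

for page, group in enumerate(chunk(model_scores, size=3), start=1):
    print(f"Comparison view {page}: {group}")
```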

Levels of Understanding

David Marr's levels-of-analysis framework taught me to approach problems at three levels: computational (what's the goal?), algorithmic (what's the process and representation?), and implementational (what's the mechanism?). This framework shapes how I design explainable AI: each level needs its own explanation.
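
One simplified way to carry that framing into code (the class, field, and example strings below are mine, not an established XAI library) is to require every explanation to exist at all three levels and serve each audience the level it actually needs:

```python
# Sketch: every model explanation must be filled in at Marr's three levels.
from dataclasses import dataclass

@dataclass
class LayeredExplanation:
    computational: str     # what goal is the model serving, in the user's terms?
    algorithmic: str       # what process and representation produce the output?
    implementational: str  # what mechanism runs it (features, weights, infrastructure)?

    def for_audience(self, audience: str) -> str:
        """Serve the level a given audience needs."""
        return {
            "executive": self.computational,
            "analyst": self.algorithmic,
            "engineer": self.implementational,
        }[audience]

expl = LayeredExplanation(
    computational="Flags filings likely to need manual review.",
    algorithmic="Gradient-boosted trees rank filings by anomaly score.",
    implementational="XGBoost model, 40 features, retrained nightly.",
)
print(expl.for_audience("analyst"))
```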

Lenses I Bring to Every Problem

The Cognitive Scientist Asks:

"How will a tired human at 3pm on Friday actually use this?" I test interfaces under cognitive load. If it doesn't work when you're distracted, it doesn't work.

The Data Scientist Asks:

"What patterns are hiding in this chaos?" But more importantly: "How do I surface them so humans can see them instantly?" Statistical significance means nothing if humans can't grasp it.

The Consultant Asks:

"What decision does this enable?" Every visualization, every model, every interface must answer: what action should someone take based on this? No action, no value.

The System Builder Asks:

"How does this scale cognitively?" A system that works for 10 items might break human comprehension at 100. I design for cognitive scalability, not just computational.

The Evolution of a Cognitive Data Scientist

[Skills journey visualization]

Each phase built on the last: cognitive foundations enable better data science, which enables better AI systems, which ultimately serve human decision-making.

Now & Next

Today, I'm applying 15+ years of cognitive insights to the hardest problems in data and AI: How do we make machine learning explainable? How do we build dashboards that actually drive decisions? How do we design human-AI collaboration that amplifies rather than replaces human intelligence?

My recent projects—from knowledge graphs that mirror human memory to workout trackers that understand behavioral patterns—all demonstrate the same principle: technology succeeds when it works the way humans think.

I'm actively exploring the intersection of attention mechanisms in transformers and human visual attention. The parallels aren't coincidental—the same principles that help humans find meaning in visual chaos are now helping machines process language. This convergence is where I want to build next.
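
For concreteness, the transformer side of that parallel is scaled dot-product attention: a softmax over query-key similarity spreads a fixed budget of attention across the input, much as visual attention allocates limited resources across a scene. A minimal NumPy sketch:

```python
# Scaled dot-product attention: attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax: a fixed attention budget
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
output, weights = attention(Q, K, V)
print(weights.shape)   # (4, 6): how much each query "looks at" each key
```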

Looking for someone who can make your data systems work the way humans think?
I'm interested in roles where cognitive science meets practical implementation—where understanding how humans process information is as valued as building the systems that serve it.

Let's talk about building something meaningful together →