Look Who's Giving: A Developmental Shift in Attention From Object Movement to Faces
What it is:
A Language and Cognitive Neuroscience Lab study examining how infants parse "give" events using eye-tracking.
What we did:
- Analyzed eye-tracking data from 7- to 11-month-old infants and adults viewing "Give" and "Show" events.
- Quantified gaze-transition strategies (Toy <-> Body, Face <-> Toy, Face <-> Face) to capture the developmental shift from tracking motion paths to linking social agents with objects.
- Confirmed that the shift disappears with inverted (upside-down) controls, indicating that it is driven by semantic interpretation rather than low-level visual features.
- View project on GitHub
- Read our VSS abstract
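The transition quantification described above can be sketched as a simple count over a time-ordered sequence of area-of-interest (AOI) labels. This is an illustrative Python sketch, not the lab's actual pipeline; the AOI labels and function name are hypothetical.

```python
from collections import Counter

def transition_counts(fixations):
    """Count consecutive AOI-to-AOI gaze transitions.

    `fixations` is a time-ordered list of area-of-interest labels,
    e.g. "toy", "body", "giver_face", "recipient_face" (hypothetical
    labels). Repeated fixations on the same AOI are not transitions.
    """
    counts = Counter()
    for prev, curr in zip(fixations, fixations[1:]):
        if prev != curr:
            # store an order-insensitive pair, so Toy -> Body and
            # Body -> Toy both count toward the same Toy <-> Body bin
            counts[tuple(sorted((prev, curr)))] += 1
    return counts

# A 7-month-old-like scan path dominated by Toy <-> Body transitions:
scan_path = ["toy", "body", "toy", "body", "giver_face", "toy"]
print(transition_counts(scan_path))
```

Per-strategy proportions (Toy <-> Body vs. Face <-> Toy vs. Face <-> Face) then fall out by normalizing these counts within each participant.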
Abstract (shortened)
How do pre-linguistic infants carve dynamic “give” events into the argument roles required for language (giver, recipient, object)? We analyzed gaze transitions from 7- to 11-month-old infants and adults viewing “Give” and “Show” videos.
- Younger infants (7 months) prioritized Toy <-> Body transitions, closely following the physical motion of the object.
- Older infants and adults shifted to Face <-> Toy and Face <-> Face transitions, linking agents and objects rather than pure motion paths.
- Inverting the videos eliminated this shift, suggesting the change reflects semantic understanding rather than low-level visual salience.
Taken together, the data reveal a developmental move from tracking the physics of motion to constructing a relational structure that could support verb learning for actions like “give” and “show”.