Research

Communication requires coordination: interlocutors coordinate turn taking, for example, and musicians synchronize their timing. The goal of my research is to understand the mechanisms responsible for this coordination. I take a multidisciplinary approach, drawing on theories of complex systems, cognition, and motor control. My work has required the development of new measures, methods of analysis, and computational models. I have adapted techniques from time-series analysis, dynamical systems, and signal processing, combining them with advanced statistical methods such as dyadic analysis and mixed-effects modeling. Driven by theoretical questions, these methodological developments have already answered several previously open questions, such as how musicians anticipate each other's actions to perform in tight temporal synchrony, which we captured with simple discrete dynamical system models.

Based on our findings about how musicians behave, we are currently developing a computational model of synchronization between two people: a discrete, nonlinear, coupled-feedback dynamical system. The model describes phase adaptation in dyadic temporal coordination (Demos, Layeghi, & Palmer, in preparation). Essentially, it allows the timing of two systems to anticipate each other's actions and remain synchronized even when the direction of coupling between the systems is perturbed (i.e., changed from bidirectional to unidirectional on the fly). Our coupled bidirectional model simulates the behavior of real musicians in a perturbation task better than previous dynamical models or current linear models.
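
To make the modeling approach concrete, the sketch below implements a generic discrete coupled phase-adaptation map in the Kuramoto style. The frequencies, coupling gains, and mid-run perturbation are illustrative assumptions chosen for exposition; this is a minimal sketch, not the specific model of Demos, Layeghi, & Palmer.

```python
import numpy as np

def simulate(n_steps=200, omega=(0.50, 0.52), gain=(0.20, 0.20)):
    """Simulate two coupled discrete phase oscillators ("musicians").

    On each beat, each oscillator advances by its natural frequency
    (radians/step) and corrects toward its partner via sinusoidal
    coupling. Halfway through, one coupling gain is set to zero,
    switching the pair from bidirectional to unidirectional coupling.
    """
    phase = np.zeros((n_steps, 2))
    k = np.array(gain, dtype=float)
    for n in range(n_steps - 1):
        if n == n_steps // 2:  # mid-run perturbation of coupling direction
            k[0] = 0.0         # player A stops adapting to player B
        phase[n + 1, 0] = phase[n, 0] + omega[0] + k[0] * np.sin(phase[n, 1] - phase[n, 0])
        phase[n + 1, 1] = phase[n, 1] + omega[1] + k[1] * np.sin(phase[n, 0] - phase[n, 1])
    return phase

phases = simulate()
# Wrapped phase difference: values near zero indicate tight synchrony.
asynchrony = np.angle(np.exp(1j * (phases[:, 0] - phases[:, 1])))
```

Zeroing one gain mid-run mimics the on-the-fly switch from bidirectional to unidirectional coupling described above: the pair can remain synchronized, but the burden of adaptation shifts entirely to one player.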

Mechanisms of Synchrony through Sound
Synchrony through Sound

While previous studies had shown that people spontaneously coordinate their rhythmic movements when they can see each other (e.g., Richardson et al., 2007), they had failed to show that people do the same when they can only hear each other. Using a complex systems framework, we showed that participants spontaneously coordinate their rhythmic movements based solely on the sound of a partner's actions (Demos, Chaffin, Begosh, Daniels, & Marsh, 2012). Unexpectedly, the addition of music did not increase synchrony. Rather, music acted like a third person in the room, reducing synchrony between the two participants by attracting each of them to synchronize with the music and their partner simultaneously. At the same time, participants who were more in step with the music felt more connected with their partners, suggesting that synchrony with music acts as a kind of social glue, a result that has since been conceptually replicated several times (Lang et al., 2015; Launay et al., 2015).

Synchrony through Sound in Dyslexia

Currently, we are using a variant of the rocking-chair task, in which participants shake maracas, to examine coordination in special populations known to have difficulty with rhythmic processing (e.g., tracking a metronome), such as people with dyslexia. In a multidisciplinary collaboration (neuroscience, social psychology, and speech disorders), we have examined synchronization in pairs of people whose reading ability ranged from good to dyslexic (Demos, Del Tufo, Marsh, Theodore, & Chaffin, in preparation). In the first two studies, poor-to-dyslexic readers did not show the expected deficit in interpersonal synchronization; instead, they synchronized better than those with normal reading ability. In addition, synchronization was better in pairs matched in reading ability than in mismatched pairs. Our results suggest that the underlying problem in dyslexia is hypersensitivity to sound: the difficulties that dyslexic readers have with reading may stem from over-sensitivity to variation between speakers, which makes it difficult to map speech sounds onto phonetic categories. When synchronizing with the sound of a partner shaking a maraca, however, this hypersensitivity is an asset, giving dyslexic readers an advantage.

Coordination of Gesture and Sound

Psychologists have long studied the relationship between sound and gesture (movements of the body that convey meaning). The McGurk effect is a well-known example of how sound and gesture interact: seeing a speaker produce a different syllable than the one heard can alter perception of the sound. Gesture may have similar effects in music performance (Davidson, 2009), but this has been difficult to establish because there was no way to connect musical sounds with the gestures that created them (Demos, 2013). I solved this problem using dynamical systems tools (recurrence quantification analysis, multifractal analysis) and signal processing methods (time-frequency transformations, filtering) in combination with mixed-effects models. I showed that, despite idiosyncratic differences between musicians in gestural style, there was a clear relationship between movement and musical structure (Demos, Chaffin, & Logan, in preparation).
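
To illustrate the flavor of these analyses, the sketch below computes a simple cross-recurrence rate between a movement signal and a loudness contour. The signals, embedding parameters, and radius are hypothetical stand-ins for exposition; the published analyses are considerably more elaborate.

```python
import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay embedding of a 1-D series into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cross_recurrence_rate(x, y, dim=3, tau=2, radius=0.5):
    """Share of embedded (x, y) state pairs closer than `radius` (z-scored inputs)."""
    X = embed((x - x.mean()) / x.std(), dim, tau)
    Y = embed((y - y.mean()) / y.std(), dim, tau)
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return float((dists < radius).mean())

# Hypothetical signals: a 0.5 Hz swaying gesture and a loudness contour
# sharing the same slow pulse, each with measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 12, 600)
gesture = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)
loudness = np.sin(2 * np.pi * 0.5 * t + 0.3) + 0.1 * rng.standard_normal(t.size)
print(cross_recurrence_rate(gesture, loudness))
```

A higher cross-recurrence rate indicates that the two signals repeatedly visit similar states together, which is one way to quantify how closely a performer's movements track the structure of the sound they produce.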