Last updated: 2015.10.05
Master projects

  1. Can rhythm perception in monkeys be probed with EEG and ERP?

    It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a select group of bird species and, somewhat surprisingly, not in more closely related species such as nonhuman primates (cf. [1]). While there is currently no evidence for beat perception in monkeys [1,2,3], rhesus macaques might well be sensitive to regularity in a temporal stimulus. We are now piloting a novel paradigm that allows us to disentangle regularity perception from beat perception, using the mMMN as an index of violated rhythmic expectation. To analyse the measurements currently being collected at the Instituto de Neurobiología, Universidad Nacional Autónoma de México (UNAM), we are looking for a skilled master's student with expertise in Matlab and EEG analysis in both the time and frequency domains.
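    As a minimal sketch of the core computation (in Python rather than Matlab, and with invented toy waveforms instead of real EEG epochs), the MMN can be estimated as the difference between the average deviant and standard event-related potentials:

```python
# Sketch: compute an MMN-like difference wave from simulated epochs.
# All waveforms here are made up; a real analysis would use EEG epochs
# time-locked to standard vs. deviant events in the rhythmic stimulus.

def average_erp(epochs):
    """Average a list of equally long single-trial waveforms (microvolts)."""
    n = len(epochs)
    return [sum(trial[t] for trial in epochs) / n for t in range(len(epochs[0]))]

def mmn_difference_wave(standard_epochs, deviant_epochs):
    """MMN index: deviant ERP minus standard ERP, sample by sample."""
    std = average_erp(standard_epochs)
    dev = average_erp(deviant_epochs)
    return [d - s for d, s in zip(dev, std)]

# Toy example: deviants evoke a more negative deflection mid-epoch.
standards = [[0.0, 1.0, 0.5, 0.0], [0.0, 1.2, 0.7, 0.0]]
deviants  = [[0.0, 0.2, -1.5, 0.0], [0.0, 0.0, -1.7, 0.0]]
mmn = mmn_difference_wave(standards, deviants)
```

    In the actual project this difference wave would of course be computed per channel and per condition, and complemented by frequency-domain measures.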


    - Expertise in analysing EEG, ERP and/or MMN
    - Skilled user of Matlab and statistical software
    - Interest in music and rhythm cognition


    [1] Honing et al. (2012)
    [2] Merchant & Honing (2014)
    [3] Merchant et al. (2015)

    Contact: Prof. dr H. Honing
    Starting date: Winter 2015.

  2. Does pitch processing in music affect pitch processing in language?

    There is currently no general consensus on whether pitch in language and music is processed by domain-specific or shared domain-general processing mechanisms (Patel, 2012a, 2012b; Peretz et al., 2015; Peretz, 2006, 2009). As pitch perception in both domains shows a number of parallels, musicians and speakers of tone languages have often been used as a comparative tool in exploring this dynamic relation (see Asaridou & McQueen (2013) for an overview).

    Tone languages use lexically contrastive pitches (tones) on syllables, characterised by the height (frequency) and contour (direction of movement) of the fundamental frequency (F0), to differentiate word meaning. Mandarin makes use of five different tones: high level, rising, dipping, falling, and neutral. In Mandarin, the monosyllable /ma/ can thus have different meanings depending on the tone attached: with a falling tone, /mà/ means ‘to scold’; with a dipping tone, /mǎ/ becomes ’horse’; while a high level tone, /mā/, changes its meaning to ‘mother’.

    In this project, the influence of pitch from melody on the processing of lexical pitch is investigated. It will assess how the simultaneous processing of pitch in language and music affects lexical processing in native speakers of Mandarin. The data set for this experiment will consist of spoken phrases in Mandarin and short melodies. Behavioural data will be collected, and there is the option of expanding the study into an EEG paradigm.


    − Knowledge of linguistics and music cognition.
    − Knowledge of Mandarin is not required but is an advantage.
    − Experience with conducting psychological experiments.
    − Experience with statistical analyses.


    Asaridou, S. S., & McQueen, J. M. (2013). Speech and Music Shape the Listening Brain: Evidence for Shared Domain-General Mechanisms. Frontiers in Psychology, 4, 321. doi:10.3389/fpsyg.2013.00321

    Patel, A. D. (2012a). Language, Music, and the Brain: A Resource-Sharing Framework. In P. Rebuschat, M. Rohrmeier, J. Hawkins, & I. Cross (Eds.), Language and Music as Cognitive Systems (pp. 204–223). Oxford: Oxford University Press.

    Patel, A. D. (2012b). The OPERA Hypothesis: Assumptions and Clarifications. Annals of the New York Academy of Sciences, 1252(1), 124–128. doi:10.1111/j.1749-6632.2011.06426.x

    Peretz, I. (2006). The Nature of Music from a Biological Perspective. Cognition, 100, 1–32. doi:10.1016/j.cognition.2005.11.004
    Peretz, I. (2009). Music, Language and Modularity Framed in Action. Psychologica Belgica, 49(2&3), 157–175.

    Peretz, I., Vuvan, D., & Armony, J. L. (2015). Neural Overlap in Processing Music and Speech. Philosophical Transactions of the Royal Society B, 370, 20140090.

    Contact: J. Weidema
    Starting date: Winter 2015.

  3. Is timing a more relevant feature than loudness when discriminating between expressive performance styles?

    How we perceive the expressive differences between various performances of the same piece may depend on the auditory features we attend to. In studies analysing and modelling performance styles, loudness and timing are the most commonly used global symbolic features. The literature shows that, in some cases, the differences between performances are more pronounced in the use of timing than in the use of loudness [1], probably as a consequence of cultural constraints such as the stylistic period of the music performed.

    The goal of this project is to verify, by means of behavioural experiments, which of these two features is the better discriminant between performances from a listener’s perspective, and whether the musical period or style being performed plays an important role in this discrimination. The results will also allow us to validate computational models that simulate the average listener’s behaviour when discriminating between performances using these two features.

    While an initial dataset for this study is already available, the first part of the project will focus on designing the experiment and, if necessary, collecting more data. Part of the stimuli used for this experiment will be synthesised, and the analysis of the data and models will be done using statistical packages; prior basic knowledge of a scripting language (Python or R) is therefore a strong plus. Affinity for music cognition, statistics and music performance is a must.
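    As a minimal illustration of the kind of comparison involved (the feature values below are invented, not real performance data), one could quantify how well each feature separates two groups of performances using a simple effect size:

```python
# Sketch: compare how well two global features (timing, loudness)
# discriminate between two groups of performances, using Cohen's d.
# All numbers below are invented for illustration only.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Effect size between two groups, using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled = (((na - 1) * stdev(group_a) ** 2 +
               (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical per-performance feature values for two stylistic periods.
timing_a   = [0.12, 0.15, 0.11, 0.14]   # e.g. mean tempo deviation
timing_b   = [0.25, 0.28, 0.24, 0.27]
loudness_a = [0.50, 0.55, 0.48, 0.52]   # e.g. mean dynamic range
loudness_b = [0.53, 0.57, 0.51, 0.56]

d_timing = abs(cohens_d(timing_a, timing_b))
d_loudness = abs(cohens_d(loudness_a, loudness_b))
```

    The behavioural experiment asks, in effect, whether listeners’ discrimination judgements track the feature with the larger separation.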

    [1] Cheng, E., & Chew, E. (2009)

    Contact: C. Vaquero
    Starting date: Fall 2015.

  4. Prosocial Behaviour at Silent Discos

    Synchronised movement is a driver of social interaction from infancy through adulthood (Phillips-Silver & Keller, 2012; Hove & Risen, 2009), as even non-scientists will have observed from the enduring popularity of social dancing to music. How strong is the effect of synchronised movement relative to other social factors, however, and how strongly do these social factors affect our musical choices when consuming music in a public environment? Silent discos, dance events where multiple channels of music are streamed to participants via wireless headphones, are a tool with great potential for answering these questions. To date, they have been used mostly in controlled laboratory settings (Leman, Demey, et al., 2009; Demey, Muller & Leman, 2013), but we would like to explore their potential as a more ecologically valid research tool. In partnership with the Manchester Museum of Science and Industry, we have obtained 1.5 TB of overhead video footage of actual silent discos, and the goal of this stage is to find strategies and appropriate algorithms for motion tracking and audio synchronisation in this video footage to extract usable research data. If successful, we will have a low-cost research tool that we and other music cognition researchers can employ widely for studying the social psychology of music.
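    As a very rough sketch of the motion-extraction idea (using tiny synthetic grayscale frames instead of real video; a production pipeline would decode the footage with a library such as OpenCV and use more robust tracking):

```python
# Sketch: estimate per-frame motion energy by frame differencing.
# Frames are tiny synthetic grayscale images (lists of pixel rows);
# real footage would be decoded with a video library such as OpenCV.

def motion_energy(prev_frame, next_frame, threshold=10):
    """Count pixels whose intensity changed by more than `threshold`."""
    changed = 0
    for row_prev, row_next in zip(prev_frame, next_frame):
        for p, q in zip(row_prev, row_next):
            if abs(p - q) > threshold:
                changed += 1
    return changed

# Two 3x3 frames: a bright "dancer" pixel moves one position to the right.
frame1 = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
frame2 = [[0, 0, 0], [0, 0, 255], [0, 0, 0]]
energy = motion_energy(frame1, frame2)
```

    A per-frame motion-energy series like this could then be cross-correlated with the music streamed on each headphone channel to estimate who is dancing to what.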

    Required skills:
    - Familiarity with basic machine learning algorithms and software tools (Python, MATLAB, etc.).
    - Experience with or interest in motion tracking.
    - Interest in music cognition and/or social psychology.

    [1] Demey, Muller & Leman, 2013
    [2] Hove & Risen, 2009
    [3] Leman, Demey, et al., 2009
    [4] Phillips-Silver & Keller, 2012

    Contact: dr J.A. Burgoyne
    Starting date: Fall 2015. [position filled]

  5. Is memory for musical tempo indeed absolute?

    One of the ongoing questions in music cognition is what aspects of music are retained in memory. Are musical aspects such as pitch, tempo or scales part of our memory representation of music? The present project focuses on the aspect of tempo, investigating whether there is evidence for absolute tempo representation in songs from oral transmission.

    Research has shown that perceived and imagined tempo are correlated [1], and that tempo is reproduced faithfully when singers are asked to sing the same song repeatedly [2]. When singing popular songs, participants performed them close to the original tempo of the recordings [3].

    For this study, recordings from the Dutch Song Database [4] will be used to find evidence for absolute tempo in oral transmission. The songs’ tempo will be determined with the support of audio analysis software (e.g. Sonic Visualiser [5]), and the resulting tempi will be statistically analysed. Familiarity with audio and statistical analysis techniques, a good ear, and of course an interest in music cognition are therefore requirements for this project.
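    As an illustrative sketch of the tempo computation (with hypothetical beat onset times rather than actual output from Sonic Visualiser):

```python
# Sketch: estimate a song's tempo (BPM) from beat onset times in seconds.
# The onsets below are invented; in practice they would come from
# audio analysis software such as Sonic Visualiser.
from statistics import median

def tempo_bpm(onsets):
    """Median inter-onset interval converted to beats per minute."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    return 60.0 / median(iois)

# A roughly steady performance at about 0.5 s per beat (~120 BPM).
onsets = [0.00, 0.50, 1.01, 1.50, 2.00, 2.51]
bpm = tempo_bpm(onsets)
```

    Comparing such per-recording tempo estimates across different renditions of the same tune family is what would support (or undermine) the absolute-tempo hypothesis.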

    [1] Halpern (1988)
    [2] Bergeson & Trehub (2002)
    [3] Levitin & Cook (1996)
    [4] Dutch Song Database
    [5] Sonic Visualiser

    Contact: B. Janssen
    Starting date: Spring 2015. [project cancelled]

  6. Can zebra finches distinguish between interval-based and beat-based rhythms?

    Most existing animal studies on rhythmic entrainment have used behavioral methods to probe the presence of beat perception, such as tapping tasks (Zarco et al., 2009) or measuring head bobs (Patel et al., 2009). However, if the production of synchronized movement to sound or music is not observed in certain species (such as in nonhuman primates, seals or songbirds; Schachner et al., 2009), this is not evidence for the absence of beat perception. It could well be that, while certain species are not able to synchronize movements to a rhythm, they do have beat induction and, as such, can perceive a beat. With behavioral methods that rely on overt motoric responses, it is difficult to separate the contributions of perception and action.

    Instead of testing for entrainment to isochronous rhythms by measuring overt motoric behavior (Hasegawa et al., 2011), we will therefore use a perceptual task with a Go/No-go paradigm (Heijningen et al., 2012; Hagmann & Cook, 2010) to test directly whether, first, zebra finches can distinguish between regular (isochronous pulse) and irregular rhythms (random intervals), and second, whether they can distinguish between beat-based and interval-based rhythms. A third issue that might be explored (if time permits) is how they do this (cf. Heijningen et al., 2012).
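    The first stimulus contrast can be sketched as follows (the onset sequences are hypothetical; the actual stimuli would be carefully designed sound patterns):

```python
# Sketch: quantify rhythmic regularity as the coefficient of variation
# (CV) of inter-onset intervals. An isochronous pulse has CV = 0; a
# random-interval rhythm has a high CV. Onset times are in seconds.
from statistics import mean, stdev

def ioi_cv(onsets):
    """Coefficient of variation of the inter-onset intervals."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    return stdev(iois) / mean(iois)

isochronous = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]   # regular pulse
irregular   = [0.0, 0.2, 0.9, 1.3, 2.4, 2.5]   # random intervals

cv_regular = ioi_cv(isochronous)
cv_irregular = ioi_cv(irregular)
```

    A bird that learns the Go/No-go discrimination between such sequences would show sensitivity to temporal regularity, without any overt synchronized movement being required.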

    Contact: Prof. dr H. Honing
    Starting date: Spring 2013. [position filled]

  7. Is absolute pitch (AP) indeed widespread among ordinary people?

    Absolute pitch (AP) is the ability to identify or produce isolated tones in the absence of contextual cues or reference pitches. It is evident primarily among individuals who started music lessons in early childhood. Because AP requires memory for specific pitches as well as learned associations with verbal labels (i.e., note names), it represents a unique opportunity to study musical memory.

    AP is thought to differ from other human abilities in its bimodal distribution (Takeuchi & Hulse, 1993): either you have it or you do not [1]. Schellenberg & Trehub (2003) demystified the phenomenon of AP by documenting adults’ memory for pitch under ecologically valid conditions, arguing that the poor pitch memory of ordinary adults is an artifact of conventional test procedures, which involve isolated tones and pitch-naming tasks. They were able to show that good pitch memory is widespread among adults with no musical training [2].

    In the current project, the Liederenbank (see the Meertens Institute [3]) will be used as a source to explore the potential role of AP in the memory of songs transmitted in oral traditions. Since the ‘tunes’ in that database are grouped by tune family and are partly available as sound files, they can serve as empirical support for the ‘AP is widespread’ hypothesis. (Interestingly, this cannot be done on the basis of the available transcriptions, since these are all transposed to the same key.)
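    As a minimal sketch of the pitch comparison involved (with invented F0 values; real values would come from pitch tracking on the database’s sound files):

```python
# Sketch: compare the starting pitch of two renditions of the same tune
# in cents (hundredths of a semitone). Small deviations across singers
# and recordings would be consistent with absolute pitch memory.
from math import log2

def cents(freq_a, freq_b):
    """Pitch distance from freq_a to freq_b in cents (1200 per octave)."""
    return 1200.0 * log2(freq_b / freq_a)

# Invented F0 values (Hz) for the first note of two renditions.
rendition_1 = 220.0
rendition_2 = 226.0
deviation = abs(cents(rendition_1, rendition_2))   # well under a semitone
```

    Aggregating such deviations within each tune family, and comparing them against what chance transpositions would predict, is the kind of analysis this project would involve.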

    - Familiarity with the methods and techniques from computational musicology, especially pitch tracking
    - Familiarity with statistical software
    - Interest in music cognition

    [1] Takeuchi & Hulse (1993)
    [2] Schellenberg & Trehub (2003)
    [3] Liederenbank

    Contact: Prof. dr H. Honing
    Starting date: Spring 2013. [position filled]

  8. [to do]