Last updated: 2015.10.27
Master projects
The Music Cognition Group has several internships available each academic year. Virtually all projects are related to ongoing research supervised by PhD candidates and/or postdocs. Below is an overview of the projects that are still open. Feel free to contact the person listed in the project description directly.

  1. Can rhythm perception in monkeys be probed with EEG and ERP?

    It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a select group of bird species but, somewhat surprisingly, not in more closely related species such as nonhuman primates (cf. [1]). While there is currently no evidence for beat perception in monkeys [1,2,3], rhesus macaques might well be sensitive to regularity in a temporal stimulus. We are now piloting a novel paradigm that allows us to disentangle regularity perception from beat perception, using the MMN (mismatch negativity) as an index of violated rhythmic expectation. To analyse the measurements currently being collected at the Instituto de Neurobiología, Universidad Nacional Autónoma de México (UNAM), we are looking for a skilled master student with expertise in Matlab and in EEG analyses in both the time and frequency domain.
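    The frequency-domain side of such an analysis can be illustrated with a small sketch. The snippet below uses Python with NumPy rather than Matlab, and entirely synthetic data: it computes the spectral amplitude at a hypothetical beat rate relative to the neighbouring frequency bins of an averaged EEG trace, a simple signal-to-noise measure of whether the response follows the stimulation rate. All names, rates and parameters are illustrative, not part of the actual pilot paradigm.

```python
import numpy as np

def spectral_peak_snr(erp, fs, target_hz, n_neighbours=5):
    """Amplitude at a target frequency, relative to the mean amplitude
    of the neighbouring frequency bins (a simple signal-to-noise ratio)."""
    n = len(erp)
    amps = np.abs(np.fft.rfft(erp)) / n          # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))  # bin nearest the target
    lo, hi = max(k - n_neighbours, 1), k + n_neighbours + 1
    neighbours = np.concatenate([amps[lo:k], amps[k + 1:hi]])
    return amps[k] / neighbours.mean()

# Toy data: a 2 Hz "beat-rate" component buried in noise, 8 s at 250 Hz.
fs, beat_hz = 250.0, 2.0
t = np.arange(0, 8.0, 1.0 / fs)
rng = np.random.default_rng(0)
erp = np.sin(2 * np.pi * beat_hz * t) + rng.normal(0.0, 1.0, t.size)
print(spectral_peak_snr(erp, fs, beat_hz))  # a clear peak gives a ratio well above 1
```

    The same comparison would be run per electrode and per condition; in Matlab the equivalent steps are an fft call followed by a manual bin comparison.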


    - Expertise in analysing EEG, ERP and/or MMN
    - Skilled user of Matlab and statistical software
    - Interest in music and rhythm cognition


    [1] Honing et al. (2012)
    [2] Merchant & Honing (2014)
    [3] Merchant et al. (2015)

    Contact: prof. dr H. Honing
    Starting date: Winter 2015.

  2. Absolute pitch and language acquisition

    It has been suggested that babies are born with the ability to use absolute pitch as a tool for segmenting the sound stream, including language (Saffran, 2003; Saffran & Griepentrog, 2001). On this account, the ability would later be replaced by the use of relative pitch, a more sophisticated tool that better suits the demands of speech segmentation. Nevertheless, this claim is based on research using non-linguistic stimulus material.

    In this project, we aim to replicate results from previous research, but using language as stimulus material. To truly understand the impact that language acquisition might have on the use of absolute vs. relative pitch as a segmentation tool, an investigation using linguistic stimulus material should be carried out. Only then can one determine whether absolute pitch serves as an initial tool for the segmentation of all sound, including language, whether it is later replaced by relative pitch, or whether it plays no role in language processing at all.


    − Knowledge of prosody and music cognition.
    − Experience with conducting psychological experiments.
    − Experience with statistical analyses.


    Saffran, J. R. (2003). Absolute pitch in infancy and adulthood: the role of tonal structure. Developmental Science, 6(1), 35–43.

    Saffran, J. R., & Griepentrog, G. J. (2001). Absolute pitch in infant auditory learning: Evidence for developmental reorganization. Developmental Psychology, 37(1), 74–85.

    Contact: dr M. P. Roncaglia-Denissen
    Starting date: Winter 2015.

  3. The effect of learning a second language on musical rhythmic perception: How proficient must it be and how long does it last?

    Previous research suggests that mastering languages with different rhythmic properties enhances musical rhythmic sensitivity (Roncaglia-Denissen, Schmidt-Kassow, Heine, Vuust, & Kotz, 2013). This could be because rhythmic perception in language and in music relies on the same general acoustic features, such as intensity and duration. Sensitivity to different sets of these properties, acquired by mastering languages with different rhythms, may transfer to the music domain as well. However, it is still not clear how high the proficiency level of the second language (L2) must be for individuals to show an increase in musical rhythmic sensitivity. Nor is it known whether such enhanced sensitivity is a permanent cognitive advantage or rather a temporary one, present only as long as the second language is being used.

    In this research project individuals with different levels of L2 proficiency will be investigated in terms of their rhythmic sensitivity in music. Information about participants’ language and music background will be assessed together with their working memory and phonological memory capacities.


    − Knowledge of prosody and music cognition.
    − Experience with conducting psychological experiments.
    − Experience with statistical analyses.


    Roncaglia-Denissen, M. P., Schmidt-Kassow, M., Heine, A., Vuust, P., & Kotz, S. A. (2013). Enhanced musical rhythmic perception in Turkish early and late learners of German. Frontiers in Cognitive Science, 4, 645.

    Contact: dr M. P. Roncaglia-Denissen
    Starting date: Winter 2015.

  4. Does pitch processing in music affect pitch processing in language?

    There is currently no general consensus on whether pitch in language and music is processed by domain-specific or shared domain-general processing mechanisms (Patel, 2012a, 2012b; Peretz et al., 2015; Peretz, 2006, 2009). As pitch perception in both domains shows a number of parallels, musicians and speakers of tone languages have often been used as a comparative tool in exploring this dynamic relation (see Asaridou & McQueen (2013) for an overview).

    Tone languages use lexically contrastive pitches (tones) on syllables, characterised by the height (frequency) and contour (direction or shift) of the fundamental frequency (F0), to differentiate word meaning. Mandarin makes use of five different tones: high level, rising, dipping, falling, and neutral. In Mandarin, the monosyllable /ma/ can thus have different meanings depending on the contour of the tone attached: with a falling tone, /mà/ means ‘to scold’; with a dipping tone, /mǎ/ becomes ’horse’; while a high level tone, /mā/, changes its meaning to ‘mother’.
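    As a toy illustration of how F0 contour separates these categories, the sketch below assigns a contour class to a short F0 track. It is purely illustrative: the thresholds and the four-way split are assumptions made for the example, not a phonetic model of Mandarin.

```python
def classify_tone(f0):
    """Toy classification of a Mandarin tone category from an F0 track (Hz).
    Thresholds are illustrative, not a real tone model."""
    start, low, end = f0[0], min(f0), f0[-1]
    span = max(f0) - min(f0)
    if span < 10:                              # barely any movement: tone 1
        return "level"
    if low < start - 10 and low < end - 10:    # falls then rises: tone 3
        return "dipping"
    if end > start:                            # overall rise: tone 2
        return "rising"
    return "falling"                           # overall fall: tone 4

print(classify_tone([220, 221, 219, 220]))       # level, /mā/ ‘mother’
print(classify_tone([180, 160, 150, 170, 200]))  # dipping, /mǎ/ ’horse’
print(classify_tone([250, 220, 180, 150]))       # falling, /mà/ ‘to scold’
```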

    In this project, the influence of melodic pitch on the processing of lexical pitch is investigated. It will assess how the simultaneous processing of pitch in language and music affects lexical processing in native speakers of Mandarin. The stimulus set for this experiment will consist of spoken phrases in Mandarin and short melodies. Behavioural data will be collected, and there is the option of expanding the study to an EEG paradigm.


    − Knowledge of linguistics and music cognition.
    − Knowledge of Mandarin is not required but is an advantage.
    − Experience with conducting psychological experiments.
    − Experience with statistical analyses.


    Asaridou, S. S., & McQueen, J. M. (2013). Speech and Music Shape the Listening Brain: Evidence for Shared Domain-General Mechanisms. Frontiers in Psychology, 4, 321. doi:10.3389/fpsyg.2013.00321

    Patel, A. D. (2012a). Language, Music, and the Brain: A Resource-Sharing Framework. In P. Rebuschat, M. Rohrmeier, J. Hawkins, & I. Cross (Eds.), Language and Music as Cognitive Systems (pp. 204–223). Oxford: Oxford University Press.

    Patel, A. D. (2012b). The OPERA Hypothesis: Assumptions and Clarifications. Annals of the New York Academy of Sciences, 1252(1), 124–128. doi:10.1111/j.1749-6632.2011.06426.x

    Peretz, I. (2006). The Nature of Music from a Biological Perspective. Cognition, 100, 1–32. doi:10.1016/j.cognition.2005.11.004

    Peretz, I. (2009). Music, Language and Modularity Framed in Action. Psychologica Belgica, 49(2&3), 157–175.

    Peretz, I., Vuvan, D., & Armony, J. L. (2015). Neural Overlap in Processing Music and Speech. Phil. Trans. R. Soc. B, 370, 20140090.

    Contact: J. Weidema, MA
    Starting date: Winter 2015.

  5. Is timing a more relevant feature than loudness when discriminating between expressive performance styles?

    How we perceive the expressive differences between various performances of the same piece may depend on which auditory features we attend to. In studies analysing and modelling performance styles, loudness and timing are the most commonly used global symbolic features. The literature shows that in some cases the differences between performances are more pronounced in the use of timing than in the use of loudness [1], probably as a consequence of cultural constraints such as the stylistic period of the music performed.

    The goal of this project is to verify, by means of behavioural experiments, which of these two features better discriminates between performances from a listener’s perspective, and whether the musical period/style being performed plays an important role in this discrimination. The results of this experiment will allow us to validate computational models so as to better simulate the average listener’s behaviour when discriminating between performances using these two features.

    While an initial dataset for this study is already available, the first part of the project will focus on designing the experiment and, if necessary, collecting more data. Part of the stimuli used for this experiment will be synthesised, and the analysis of the data and models will be done using statistical packages; hence prior basic knowledge of scripting (Python or R) is a strong plus. Affinity with music cognition, statistics and music performance is a must.
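    The intended comparison can be made concrete with a crude sketch. The snippet below (Python, with invented per-note values; feature_separation is a hypothetical helper, not part of any existing model) scores how well each feature separates two performances using a simple pooled-standard-deviation effect size:

```python
import numpy as np

def feature_separation(perf_a, perf_b):
    """Absolute difference between two performances' mean feature values,
    in units of their pooled standard deviation (a crude effect size)."""
    a, b = np.asarray(perf_a, float), np.asarray(perf_b, float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return abs(a.mean() - b.mean()) / pooled_sd

# Hypothetical per-note measurements for two performances of one phrase:
ioi_a = [0.48, 0.52, 0.50, 0.47, 0.53]   # inter-onset intervals (s)
ioi_b = [0.60, 0.64, 0.61, 0.59, 0.66]
loud_a = [62, 64, 63, 65, 61]            # loudness values (illustrative)
loud_b = [63, 65, 62, 64, 63]

print("timing:", feature_separation(ioi_a, ioi_b))
print("loudness:", feature_separation(loud_a, loud_b))
```

    In these invented data, timing separates the two performances far better than loudness; the behavioural experiment asks whether listeners' judgements pattern the same way.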

    [1] Cheng, E., & Chew, E. (2009)

    Contact: C. Vaquero, MSc
    Starting date: Fall 2015.

  6. Prosocial Behaviour at Silent Discos

    Synchronised movement is a driver of social interaction from infancy through adulthood (Phillips-Silver & Keller, 2012; Hove & Risen, 2009), as even non-scientists will have observed from the enduring popularity of social dancing to music. How strong is the effect of synchronised movement relative to other social factors, however, and how strongly do these social factors affect our musical choices when consuming music in a public environment? Silent discos, dance events where multiple channels of music are streamed to participants via wireless headphones, are a tool with great potential for answering these questions. To date, they have been used mostly in controlled laboratory settings (Leman, Demey, et al., 2009; Demey, Muller & Leman, 2013), but we would like to explore their potential as a more ecologically valid research tool. In partnership with the Manchester Museum of Science and Industry, we have obtained 1.5 TB of overhead video footage of actual silent discos, and the goal of this stage is to find strategies and appropriate algorithms for motion tracking and audio synchronisation in this video footage to extract usable research data. If successful, we will have a low-cost research tool that we and other music cognition researchers can employ widely for studying the social psychology of music.
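    Before any sophisticated tracking is attempted, motion in overhead footage can be quantified very simply by frame differencing. The sketch below (Python/NumPy on a synthetic toy clip; real footage would first be read and converted to greyscale with a library such as OpenCV) computes a per-frame motion-energy signal:

```python
import numpy as np

def motion_energy(frames):
    """Mean absolute pixel difference between consecutive greyscale frames:
    a crude per-frame measure of how much movement the camera sees."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Toy footage: 8x8 frames in which a bright 'dancer' patch moves one
# pixel to the right per frame.
frames = np.zeros((4, 8, 8))
for t in range(4):
    frames[t, 2:4, t:t + 2] = 255.0
print(motion_energy(frames))  # nonzero motion between every frame pair
```

    Averaged over regions of the dance floor, a signal like this could be correlated with the music channels to ask who is moving with which stream.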

    Required skills:
    - Familiarity with basic machine learning algorithms and software tools (Python, MATLAB, etc.).
    - Experience with or interest in motion tracking.
    - Interest in music cognition and/or social psychology.

    [1] Demey, Muller & Leman, 2013
    [2] Hove & Risen, 2009
    [3] Leman, Demey, et al., 2009
    [4] Phillips-Silver & Keller, 2012

    Contact: dr J.A. Burgoyne
    Starting date: Fall 2015. [position filled]

  7. Is memory for musical tempo indeed absolute?

    One of the ongoing questions in music cognition is what aspects of music are retained in memory. Are musical aspects such as pitch, tempo or scales part of our memory representation of music? The present project focuses on the aspect of tempo, investigating whether there is evidence for absolute tempo representation in songs from oral transmission.

    Research has shown that perceived and imagined tempo are correlated [1], and that tempo is reproduced faithfully when singers are asked to sing the same song repeatedly [2]. When singing popular songs, participants performed them close to the original tempo of the recordings [3].

    For this study, recordings from the Dutch Song Database [4] will be used to find evidence for absolute tempo in oral transmission. The songs’ tempo will be determined with the support of audio analysis software (e.g. Sonic Visualiser [5]) and the resulting tempos will be statistically analyzed. Therefore, familiarity with audio and statistical analysis techniques, a good ear, and of course interest in music cognition are requirements for this project.
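    The tempo-extraction step can be sketched as follows: given beat-onset times annotated in Sonic Visualiser (or produced by an onset detector), the tempo in beats per minute follows from the inter-onset intervals. The onset values below are invented for illustration:

```python
import numpy as np

def tempo_bpm(onset_times):
    """Estimate tempo (beats per minute) from beat-onset times in seconds,
    using the median inter-onset interval for robustness to outliers."""
    iois = np.diff(sorted(onset_times))
    return 60.0 / np.median(iois)

# Hypothetical annotated onsets for a sung phrase at roughly 100 BPM:
onsets = [0.00, 0.61, 1.20, 1.80, 2.41, 3.00, 3.61]
print(round(tempo_bpm(onsets), 1))
```

    Comparing such estimates across renditions of the same song (within and between singers) is what would count as evidence for or against absolute tempo memory.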

    [1] Halpern (1988)
    [2] Bergeson & Trehub (2002)
    [3] Levitin & Cook (1996)
    [4] Dutch Song Database
    [5] Sonic Visualiser

    Contact: B. Janssen, MA
    Starting date: Spring 2015. [project cancelled]

  8. Can zebra finches distinguish between interval-based and beat-based rhythms?

    Most existing animal studies on rhythmic entrainment have used behavioral methods to probe the presence of beat perception, such as tapping tasks (Zarco et al., 2009) or measuring head bobs (Patel et al., 2009). However, if synchronized movement to sound or music is not observed in certain species (such as nonhuman primates, seals or songbirds; Schachner et al., 2009), this is not evidence for the absence of beat perception. It could well be that certain species, while unable to synchronize their movements to a rhythm, nevertheless have beat induction and as such can perceive a beat. With behavioral methods that rely on overt motor responses, it is difficult to separate the contributions of perception and action.

    Instead of testing for entrainment to isochronous rhythms by measuring overt motor behavior (Hasegawa et al., 2011), we will therefore use a perceptual task with a Go/No-go paradigm (Heijningen et al., 2012; Hagmann & Cook, 2010) to test directly whether, first, zebra finches can distinguish between regular (isochronous pulse) and irregular (random intervals) rhythms, and second, whether they can distinguish between beat-based and interval-based rhythms. A third issue that might be explored (if time permits) is how they do this (cf. Heijningen et al., 2012).

    Contact: prof. dr H. Honing
    Starting date: Spring 2013. [position filled]

  9. Is absolute pitch (AP) indeed widespread among ordinary people?

    Absolute pitch (AP) is the ability to identify or produce isolated tones in the absence of contextual cues or reference pitches. It is evident primarily among individuals who started music lessons in early childhood. Because AP requires memory for specific pitches as well as learned associations with verbal labels (i.e., note names), it represents a unique opportunity to study musical memory.

    AP is thought to differ from other human abilities in its bimodal distribution (Takeuchi & Hulse, 1993): either you have it or you do not [1]. Schellenberg & Trehub (2003) demystified the phenomenon of AP by documenting adults’ memory for pitch under ecologically valid conditions, arguing that the poor pitch memory of ordinary adults is an artifact of conventional test procedures, which involve isolated tones and pitch-naming tasks. They were able to show that good pitch memory is widespread among adults with no musical training [2].

    In the current project the Liederenbank (see the Meertens Institute [3]) will be used as a source to explore the potential role of AP in the memory of songs transmitted in oral traditions. Since the 'tunes' in that database are grouped by tune family and are partly available as sound files, they can serve as empirical support for the 'AP is widespread' hypothesis. (Interestingly, this cannot be done on the basis of the available transcriptions, since these are all transposed to the same key.)
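    The pitch-tracking step could start from something as simple as an autocorrelation-based F0 estimator, sketched below in Python/NumPy. This is a toy version for a clean monophonic signal; in practice a dedicated tool (e.g. Praat, or a YIN-style tracker) would be more robust on field recordings.

```python
import numpy as np

def estimate_f0(signal, fs, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of a monophonic signal
    by picking the strongest autocorrelation peak in a plausible lag range."""
    sig = np.asarray(signal, float) - np.mean(signal)
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)                  # lag search window
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

# Toy check on a 440 Hz sine (A4):
fs = 44100
t = np.arange(0, 0.05, 1 / fs)
print(estimate_f0(np.sin(2 * np.pi * 440 * t), fs))
```

    Run over the available sound files, per-tune pitch histograms could then be compared across renditions within a tune family.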

    - Familiarity with the methods and techniques from computational musicology, especially pitch tracking
    - Familiarity with statistical software
    - Interest in music cognition

    [1] Takeuchi & Hulse (1993)
    [2] Schellenberg & Trehub (2003)
    [3] Liederenbank

    Contact: prof. dr H. Honing
    Starting date: Spring 2013. [position filled]

  10. [to do]