Last updated: 2016.08.21
Master projects
The Music Cognition Group has several internships available each academic year. Virtually all projects are related to ongoing research supervised by PhD candidates and/or postdocs. Below is an overview of the projects that are still open. Feel free to contact the person listed in the project description directly.

  1. What is the role of MMN, P1 and N1 in sensing regularity in rhythm?

    While there is converging evidence that the performance of monkeys is comparable to that of humans in single-interval tasks (such as interval reproduction, categorization, and interception), it differs in multiple-interval tasks (such as rhythmic entrainment, synchronization, and continuation). This observation led to the gradual audiomotor evolution hypothesis [1], which suggests that beat perception and synchronization developed gradually in primates, peaking in humans and present only in limited form in other, nonhuman primates.

    It remains unclear, however, whether nonhuman primates (while apparently insensitive to the beat [2]) are able to predict the next stimulus in an isochronous rhythm, and as such are sensitive to the intrinsic regularity of the stimulus. In other words, nonhuman primates might be able to make temporal predictions based on an isochronous stimulus using interval-based timing, while lacking beat-based timing that allows for sensing a regular pulse in a varying rhythmic sequence [1].

    This literature thesis aims to review the existing scalp-recorded EEG literature in humans and nonhuman primates, focusing on event-related potential (ERP) components that can be considered markers of auditory cortical processing, such as the MMN, P1 and N1, and on the role of isochrony and temporal prediction.

    Requirements:

    - Familiarity with interpreting EEG, ERPs and EPs
    - Interest in music and rhythm cognition

    References:

    [1] Merchant & Honing (2014)
    [2] Honing et al. (2012)

    Contact: prof. dr H. Honing
    Starting date: Fall 2016.

  2. Can rhythm perception in monkeys be probed with EEG and ERP?

    It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a selected group of bird species and, somewhat surprisingly, not in more closely related species such as nonhuman primates (cf. [1]). While there is currently no evidence for beat perception in monkeys [1,2,3], rhesus macaques might well be sensitive to regularity in a temporal stimulus. We are now piloting a novel paradigm that allows us to disentangle regularity perception from beat perception, using the MMN as an index of (violations of) rhythmic expectation. To analyse the measurements currently being collected at the Instituto de Neurobiología, Universidad Nacional Autónoma de México (UNAM), we are looking for a skilled master's student with expertise in Matlab and EEG analyses in both the time and the frequency domain.

    Requirements:

    - Expertise in analysing EEG, ERP and/or MMN
    - Skilled user of Matlab and statistical software
    - Interest in music and rhythm cognition

    References:

    [1] Honing et al. (2012)
    [2] Merchant & Honing (2014)
    [3] Merchant et al. (2015)

    Contact: prof. dr H. Honing
    Starting date: Fall 2016.

  3. Hierarchical representation of non-isochronous metrical structure

    Rhythms induce the percept of a steady pulse or beat. These pulses are grouped into periodic and hierarchically organised patterns of strong and weak beats (Palmer & Krumhansl, 1990; Honing, 2013). Metrical structure has been modeled using generative rules (Lerdahl & Jackendoff, 1983; Longuet-Higgins & Lee, 1984), similar to those used in theoretical models of syntactic structure in language. Such models represent metrical structure as trees. Each level in a tree corresponds to a metrical duration, and the transition to a deeper level reflects the subdivision of that duration into beats of equal duration. For example, the root node of the tree (its top level) may represent the duration of a bar. The next level may subdivide the bar into three pulses, each lasting one third of a bar, as reflected by three child nodes. If each of these pulses is in turn subdivided into two, each child node receives two child nodes, each representing a duration of one sixth of a bar.
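    For illustration only, such a tree could be sketched as a small recursive data structure. This is a minimal sketch, not part of any existing formalism; all names are hypothetical:

```python
from dataclasses import dataclass, field
from fractions import Fraction
from typing import List

@dataclass
class MetricalNode:
    """A node in a metrical tree; its duration is a fraction of the bar."""
    duration: Fraction
    children: List["MetricalNode"] = field(default_factory=list)

    def subdivide(self, n: int) -> None:
        """Split this duration into n child beats of equal duration."""
        self.children = [MetricalNode(self.duration / n) for _ in range(n)]

# Build the example from the text: a bar subdivided into three pulses,
# each of which is subdivided into two.
bar = MetricalNode(Fraction(1))
bar.subdivide(3)
for pulse in bar.children:
    pulse.subdivide(2)

leaf_durations = [leaf.duration
                  for pulse in bar.children
                  for leaf in pulse.children]
print(leaf_durations)  # six leaves, each lasting 1/6 of a bar
```

    Note that this sketch only expresses isochronous subdivision (equal child durations), which is exactly the limitation the project aims to overcome.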

    This formalism can capture the metrical structure of most rhythms of Western-European origin, where bars are usually subdivided into units of equal duration, but it becomes problematic for rhythms where this is not the case. Such rhythms are widespread in, for example, the Balkan region. Hannon and Trehub (2005) have shown that listeners who have been regularly exposed to such meters are, in contrast to listeners without such exposure, sensitive to the non-isochronous structure of these rhythms. This suggests that cognitive models of metrical structure need to encompass non-isochronous meters as well.

    The goal of this project is to come up with a generative formalism of metrical structure that is capable of representing non-isochronous metrical structure. This formalism would ideally be formulated as an extension of existing formalisms. If time allows, this project can be extended to include a rhythm parser for the proposed representation or inclusion of the representation in a probabilistic model of metrical interpretation of rhythms.

    Requirements:

    - Affinity with formal modeling
    - Interest in cognitive modeling
    - Experience with programming is a plus

    Contact: B. van der Weij MSc
    Starting date: Spring 2016. [position filled]

  4. Hooked on Music: Adapting the game for dementia patients

    The aim of this study is to gain a better understanding of how a music recognition game can have a therapeutic effect for dementia patients. Firstly, data from Hooked on Music will be analysed using a multivariate item-response method, which can uncover latent variables representative of the different strategies people use to recognise music. Secondly, the dementia literature will be studied to get an overview of the cognitive abilities and brain regions that are impaired in the three main types of dementia (Alzheimer's disease, vascular dementia, and Lewy body dementia). Finally, the findings from the game will be combined with those from the dementia literature to see whether there is any overlap between the functions that the game taps into and the functions that are (un)impaired in dementia patients. From this, a future experiment will be designed to study how dementia patients respond to music recognition games and how a game can be developed that has a therapeutic effect for them.

    Contact: dr. J.A. Burgoyne
    Starting date: Winter 2015/16. [position filled]

  5. Tracing the degeneration of musical ability in dementia patients

    As the body of neuroscientific literature on the major forms of dementia, as well as on the perception of different musical characteristics, grows, it is important to develop an up-to-date understanding of how we expect musical ability to decline over time in patients with dementia. At the same time, neurological music therapy is growing in prominence and success. This project will begin with a literature review and interviews with practicing music therapists to understand the most effective musical intervention points for dementia patients. In the second half of the project, the intern will seek to observe actual therapeutic interventions with patients and, based on these observations, make recommendations about the most fruitful directions for future research within the Music Cognition Group on musical ability and dementia.

    Contact: dr. J.A. Burgoyne
    Starting date: Winter 2015/16. [position filled]

  6. Absolute pitch and language acquisition

    It has been suggested that babies are born with the ability to use absolute pitch as a tool for the segmentation of the sound stream, including language (Saffran, 2003; Saffran & Griepentrog, 2001). On this account, the ability is later replaced by the use of relative pitch, a more sophisticated tool better suited to the specificities of speech segmentation. This claim, however, is based on research using non-linguistic stimulus material.

    In this project, we aim to replicate results from previous research, but using language as stimulus material. To truly understand the impact that language acquisition might have on the use of absolute versus relative pitch as a sound-stream segmentation tool, an investigation using linguistic stimulus material must be carried out. Only then can one determine whether absolute pitch serves as a first tool for the segmentation of all sound, including language, whether it is replaced by relative pitch, or whether it plays no role in language processing at all.

    Requirements:

    − Knowledge of prosody and music cognition.
    − Experience with conducting psychological experiments.
    − Experience with statistical analyses.

    References:

    Saffran, J. R. (2003). Absolute pitch in infancy and adulthood: the role of tonal structure. Developmental Science, 6(1), 35–43. http://doi.org/10.1111/1467-7687.00250

    Saffran, J. R., & Griepentrog, G. J. (2001). Absolute pitch in infant auditory learning: Evidence for developmental reorganization. Developmental Psychology, 37(1), 74–85. http://doi.org/10.1037/0012-1649.37.1.74

    Contact: dr M. P. Roncaglia-Denissen
    Starting date: Winter 2015. [position filled]

  7. The effect of learning a second language on musical rhythmic perception: How proficient must it be and how long does it last?

    Previous research suggests that mastering languages with different rhythmic properties enhances musical rhythmic sensitivity (Roncaglia-Denissen, Schmidt-Kassow, Heine, Vuust, & Kotz, 2013). This could be because rhythmic perception in language and in music is based on general acoustic features, such as intensity and duration. Sensitivity to different sets of these features, acquired by mastering languages with different rhythms, may transfer to the music domain as well. However, it is still not clear how high the proficiency level of the second language (L2) must be for individuals to show an increase in musical rhythmic sensitivity. Nor is it known whether such enhanced sensitivity is a permanent cognitive advantage or rather a temporary one, present only as long as the second language is being used.

    In this research project individuals with different levels of L2 proficiency will be investigated in terms of their rhythmic sensitivity in music. Information about participants’ language and music background will be assessed together with their working memory and phonological memory capacities.

    Requirements:

    − Knowledge of prosody and music cognition.
    − Experience with conducting psychological experiments.
    − Experience with statistical analyses.

    References:

    Roncaglia-Denissen, M. P., Schmidt-Kassow, M., Heine, A., Vuust, P., & Kotz, S. A. (2013). Enhanced musical rhythmic perception in Turkish early and late learners of German. Frontiers in Psychology, 4, 645. http://doi.org/10.3389/fpsyg.2013.00645

    Contact: dr M. P. Roncaglia-Denissen
    Starting date: Winter 2015. [position filled]

  8. Does pitch processing in music affect pitch processing in language?

    There is currently no general consensus on whether pitch in language and music is processed by domain-specific or shared domain-general processing mechanisms (Patel, 2012a, 2012b; Peretz et al., 2015; Peretz, 2006, 2009). As pitch perception in both domains shows a number of parallels, musicians and speakers of tone languages have often been used as a comparative tool in exploring this dynamic relation (see Asaridou & McQueen (2013) for an overview).

    Tone languages use lexically contrastive pitches (tones) on syllables, characterised by the height (frequency) and contour (direction or shift) of the fundamental frequency (F0), to differentiate word meaning. Mandarin makes use of five different tones: high level, rising, dipping, falling, and neutral. In Mandarin, the monosyllable /ma/ can thus have different meanings depending on the tone attached: with a falling tone, /mà/ means ‘to scold’; with a dipping tone, /mǎ/ becomes ’horse’; while a high level tone, /mā/, changes its meaning to ‘mother’.

    In this project, the influence of melodic pitch on the processing of lexical pitch is investigated. It will assess how the simultaneous processing of pitch in language and music affects lexical processing in native speakers of Mandarin. The stimulus set for this experiment will consist of spoken phrases in Mandarin and short melodies. Behavioural data will be collected, with the option of extending the study to an EEG paradigm.

    Requirements:

    − Knowledge of linguistics and music cognition.
    − Knowledge of Mandarin is not required but is a plus.
    − Experience with conducting psychological experiments.
    − Experience with statistical analyses.

    References:

    Asaridou, S. S., & McQueen, J. M. (2013). Speech and Music Shape the Listening Brain: Evidence for Shared Domain-General Mechanisms. Frontiers in Psychology, 4, 321. doi:10.3389/fpsyg.2013.00321

    Patel, A. D. (2012a). Language, Music, and the Brain: A Resource-Sharing Framework. In P. Rebuschat, M. Rohrmeier, J. Hawkins, & I. Cross (Eds.), Language and Music as Cognitive Systems (pp. 204–223). Oxford: Oxford University Press.

    Patel, A. D. (2012b). The OPERA Hypothesis: Assumptions and Clarifications. Annals of the New York Academy of Sciences, 1252(1), 124–128. doi:10.1111/j.1749-6632.2011.06426.x

    Peretz, I. (2006). The Nature of Music from a Biological Perspective. Cognition, 100, 1–32. doi:10.1016/j.cognition.2005.11.004
    Peretz, I. (2009). Music, Language and Modularity Framed in Action. Psychologica Belgica, 49(2&3), 157–175. doi:http://dx.doi.org/10.5334/pb-49-2-3-157

    Peretz, I., Vuvan, D., & Armony, J. L. (2015). Neural Overlap in Processing Music and Speech. Phil. Trans. R. Soc. B 370: 20140090. doi:http://dx.doi.org/10.1098/rstb.2014.0090

    Contact: J. Weidema, MA
    Starting date: Winter 2015. [position filled]

  9. Is timing a more relevant feature than loudness when discriminating between expressive performance styles?

    How we perceive the expressive differences between various performances of the same piece may depend on the auditory features being listened to. In studies analysing and modelling performance styles, loudness and timing are the most commonly used global symbolic features. The literature shows that in some cases the differences between performances are more significant in the use of timing than in the use of loudness [1], probably as a consequence of cultural constraints such as the stylistic period of the music performed.

    The goal of this project is to verify, by means of behavioural experiments, which of these two features better discriminates between performances from a listener's perspective, and whether the musical period or style being performed plays an important role in this discrimination. The results of this experiment will allow us to validate computational models that simulate the average listener's behaviour when discriminating between performances using these two features.

    While an initial dataset for this study is already available, the first part of the project will focus on designing the experiment and, if necessary, collecting more data. Part of the stimuli used for this experiment will be synthesised, and the analysis of the data and models will be done using statistical packages; hence, prior basic knowledge of a scripting language (Python or R) will be a strong plus. Affinity for music cognition, statistics and music performance is a must.

    [1] Cheng, E., & Chew, E. (2009)

    Contact: C. Vaquero, MSc
    Starting date: Fall 2015. [position filled]

  10. Prosocial Behaviour at Silent Discos

    Synchronised movement is a driver of social interaction from infancy through adulthood (Phillips-Silver & Keller, 2012; Hove & Risen, 2009), as even non-scientists will have observed from the enduring popularity of social dancing to music. How strong is the effect of synchronised movement relative to other social factors, however, and how strongly do these social factors affect our musical choices when consuming music in a public environment? Silent discos, dance events where multiple channels of music are streamed to participants via wireless headphones, are a tool with great potential for answering these questions. To date, they have been used mostly in controlled laboratory settings (Leman, Demey, et al., 2009; Demey, Muller & Leman, 2013), but we would like to explore their potential as a more ecologically valid research tool. In partnership with the Manchester Museum of Science and Industry, we have obtained 1.5 TB of overhead video footage of actual silent discos, and the goal of this project is to find strategies and appropriate algorithms for motion tracking and audio synchronisation in this footage, in order to extract usable research data. If successful, we will have a low-cost research tool that we and other music cognition researchers can employ widely for studying the social psychology of music.
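    As a first step toward quantifying movement in such footage, the amount of motion per video frame can be estimated with simple frame differencing. The following is a minimal sketch with NumPy, assuming frames have already been decoded to greyscale arrays; the toy data and threshold are purely illustrative:

```python
import numpy as np

def motion_energy(frames: np.ndarray, threshold: float = 15.0) -> np.ndarray:
    """Fraction of pixels that changed between consecutive greyscale frames.

    frames: array of shape (n_frames, height, width) with values in 0..255.
    Returns an array of length n_frames - 1.
    """
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return (diffs > threshold).mean(axis=(1, 2))

# Toy example: a static frame repeated once, then a frame in which the
# top half of the pixels has shifted in brightness ("movement").
rng = np.random.default_rng(0)
still = rng.integers(0, 256, size=(48, 64)).astype(np.float64)
moved = still.copy()
moved[:24, :] += 100.0
energy = motion_energy(np.stack([still, still, moved]))
print(energy)  # [0.0, 0.5]: no change, then half the frame changed
```

    A real pipeline would of course decode video with a library such as OpenCV and track individual dancers rather than global motion, but per-frame motion energy of this kind is already enough to test audio synchronisation against the beat of each streamed channel.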

    Required skills:
    - Familiarity with basic machine learning algorithms and software tools (Python, MATLAB, etc.).
    - Experience with or interest in motion tracking.
    - Interest in music cognition and/or social psychology.

    [1] Demey, Muller & Leman, 2013
    [2] Hove & Risen, 2009
    [3] Leman, Demey, et al., 2009
    [4] Phillips-Silver & Keller, 2012

    Contact: dr J.A. Burgoyne
    Starting date: Fall 2015. [position filled]

  11. Is memory for musical tempo indeed absolute?

    One of the ongoing questions in music cognition is what aspects of music are retained in memory. Are musical aspects such as pitch, tempo or scales part of our memory representation of music? The present project focuses on the aspect of tempo, investigating whether there is evidence for absolute tempo representation in songs from oral transmission.

    Research has shown that perceived and imagined tempo are correlated [1], and that tempo is reproduced faithfully when singers are asked to sing the same song repeatedly [2]. When singing popular songs, participants performed them close to the original tempo of the recordings [3].

    For this study, recordings from the Dutch Song Database [4] will be used to find evidence for absolute tempo in oral transmission. The songs’ tempo will be determined with the support of audio analysis software (e.g. Sonic Visualiser [5]) and the resulting tempos will be statistically analyzed. Therefore, familiarity with audio and statistical analysis techniques, a good ear, and of course interest in music cognition are requirements for this project.
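    A simple way to estimate a song's tempo from detected note onsets is to take the median inter-onset interval. Below is a minimal sketch; the onset times are illustrative, and in practice they would come from audio analysis software such as Sonic Visualiser:

```python
import numpy as np

def tempo_bpm(onset_times: np.ndarray) -> float:
    """Estimate tempo in beats per minute from onset times (in seconds).

    Uses the median inter-onset interval, which is robust to the
    occasional missed or spurious onset.
    """
    intervals = np.diff(np.sort(onset_times))
    return 60.0 / float(np.median(intervals))

# Toy example: onsets roughly every 0.5 s, i.e. about 120 BPM, with
# a little timing jitter as would occur in a real sung performance.
rng = np.random.default_rng(1)
onsets = np.arange(0, 10, 0.5) + rng.normal(0, 0.01, size=20)
print(tempo_bpm(onsets))  # close to 120 BPM
```

    Comparing such estimates across renditions of the same song, per tune family, would then show whether tempo is preserved in oral transmission.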

    [1] Halpern (1988)
    [2] Bergeson & Trehub (2002)
    [3] Levitin & Cook (1996)
    [4] Dutch Song Database
    [5] Sonic Visualiser

    Contact: B. Janssen, MA
    Starting date: Spring 2015. [project cancelled]

  12. Can zebra finches distinguish between interval-based and beat-based rhythms?

    Most existing animal studies on rhythmic entrainment have used behavioral methods to probe the presence of beat perception, such as tapping tasks (Zarco et al., 2009) or measuring head bobs (Patel et al., 2009). However, if synchronized movement to sound or music is not observed in a certain species (such as in nonhuman primates, seals or songbirds; Schachner et al., 2009), this is not evidence for the absence of beat perception. It could well be that while certain species are unable to synchronize their movements to a rhythm, they do have beat induction and, as such, can perceive a beat. With behavioral methods that rely on overt motor responses, it is difficult to separate the contributions of perception and action.

    Instead of testing for entrainment to isochronous rhythms by measuring overt motor behavior (Hasegawa et al., 2011), we will therefore use a perceptual task with a Go/No-go paradigm (Heijningen et al., 2012; Hagmann & Cook, 2010) to test directly whether, first, zebra finches can distinguish between regular rhythms (an isochronous pulse) and irregular rhythms (random intervals) and, second, whether they can distinguish between beat-based and interval-based rhythms. A third issue that might be explored, if time permits, is how they do this (cf. Heijningen et al., 2012).

    Contact: prof. dr H. Honing
    Starting date: Spring 2013. [position filled]


  13. Is absolute pitch (AP) indeed widespread among ordinary people?

    Absolute pitch (AP) is the ability to identify or produce isolated tones in the absence of contextual cues or reference pitches. It is evident primarily among individuals who started music lessons in early childhood. Because AP requires memory for specific pitches as well as learned associations with verbal labels (i.e., note names), it represents a unique opportunity to study musical memory.

    AP is thought to differ from other human abilities in its bimodal distribution (Takeuchi & Hulse, 1993): either you have it or you do not [1]. Schellenberg & Trehub (2003) demystified the phenomenon of AP by documenting adults’ memory for pitch under ecologically valid conditions, arguing that the poor pitch memory of ordinary adults is an artifact of conventional test procedures, which involve isolated tones and pitch-naming tasks. They were able to show that good pitch memory is widespread among adults with no musical training [2].

    In the current project the Liederenbank (see the Meertens Institute [3]) will be used as a source to explore the potential role of AP in the memory of songs transmitted in oral traditions. Since the 'tunes' in that database are grouped by tune family and are partly available as sound files, they can serve as empirical support for the 'AP is widespread' hypothesis. (Interestingly, this cannot be done on the basis of the available transcriptions, since these are all transposed to the same key.)
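    Concretely, the analysis would compare the pitch level of different renditions within a tune family: if AP plays a role in oral transmission, renditions of the same tune should cluster around the same absolute pitch rather than drifting freely. A minimal sketch of the Hz-to-semitone conversion this requires (the frequencies are illustrative; real values would come from a pitch tracker run on the sound files):

```python
import math

def hz_to_midi(freq_hz: float) -> float:
    """Convert a frequency in Hz to a (fractional) MIDI note number."""
    return 69 + 12 * math.log2(freq_hz / 440.0)

def pitch_offset_semitones(f1: float, f2: float) -> float:
    """Signed distance in semitones between two sung pitches."""
    return hz_to_midi(f2) - hz_to_midi(f1)

# Toy example: two renditions of the same tune starting on 220 Hz (A3)
# and 233.08 Hz (roughly B-flat 3), about one semitone apart.
print(hz_to_midi(220.0))                       # 57.0, i.e. A3
print(pitch_offset_semitones(220.0, 233.08))   # close to 1.0 semitone
```

    The spread of such offsets within a tune family, compared against the spread expected if starting pitch were chosen at random, would then be the statistical test of the 'AP is widespread' hypothesis.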

    Requirements:

    - Familiarity with the methods and techniques of computational musicology, especially pitch tracking
    - Familiarity with statistical software
    - Interest in music cognition

    [1] Takeuchi & Hulse (1993)
    [2] Schellenberg & Trehub (2003)
    [3] Liederenbank

    Contact: prof. dr H. Honing
    Starting date: Spring 2013. [position filled]


  14. [to do]

www.mcg.uva.nl