MusicStand: Listening to Song Lyrics Using a Map Query Interface
2021
Music is present in numerous forms in our daily lives and is deemed essential to them. Multiple applications have been proposed to let users check into a location and tag that check-in with the song to which they are listening. This is time-consuming and requires considerable voluntary manual tagging effort. One of our major goals is to automatically determine the spatial scope of a song. Our research challenge is how to identify locations in unstructured and badly-cased lyric text (e.g., all caps, camel case, non-cased, studly caps, etc.) that is mostly submitted by volunteers from all over the world. Uncertain casing leads to a severe performance drop when using named entity recognition (NER), and geographical information is often lost due to a failure to correctly identify geographical entities. We overcome this failure by normalizing the lyrics so that information loss is minimized, and propose the MusicStand (http://musicstand.umiacs.io/) framework to process input lyric text in three steps: cleaning, truecasing, and geotagging. MusicStand enables users to explore or search a music collection where the goal is to find and play songs about particular geographic entities (i.e., toponyms) using a map query interface. Note that the collection may be static (e.g., a songbook) or dynamic (e.g., a radio playlist).
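The three-step pipeline (cleaning, truecasing, geotagging) can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the function names, the tiny gazetteer, and the lookup-based truecasing and geotagging rules are all assumptions standing in for the paper's actual components (a real system would use a full geographic database and a trained truecaser/NER model).

```python
import re

# Hypothetical toy gazetteer mapping lowercase toponyms to their proper casing;
# the real framework would consult a full geographic database.
GAZETTEER = {"new york": "New York", "paris": "Paris", "memphis": "Memphis"}

def clean(lyrics: str) -> str:
    """Step 1 (cleaning): strip structural markup like "[Chorus]" and collapse whitespace."""
    lyrics = re.sub(r"\[.*?\]", " ", lyrics)
    return re.sub(r"\s+", " ", lyrics).strip()

def truecase(lyrics: str) -> str:
    """Step 2 (truecasing): normalize uncertain casing so entities become recognizable.
    Toy rule: lowercase everything, then restore proper case for known toponyms."""
    text = lyrics.lower()
    for key, proper in GAZETTEER.items():
        text = text.replace(key, proper)
    return text

def geotag(lyrics: str) -> list[str]:
    """Step 3 (geotagging): extract toponyms (gazetteer lookup here, NER in practice)."""
    return [proper for proper in GAZETTEER.values() if proper in lyrics]

def spatial_scope(raw_lyrics: str) -> list[str]:
    """Run the full pipeline to estimate a song's spatial scope."""
    return geotag(truecase(clean(raw_lyrics)))

print(spatial_scope("[Verse 1] WALKING IN MEMPHIS,  dreaming of NEW YORK"))
# → ['New York', 'Memphis']
```

The point of the sketch is the ordering: geotagging only succeeds after casing is normalized, which is why the all-caps input above would defeat a case-sensitive NER model applied directly to the raw lyrics.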