
Another AcousticBrainz update and a survey

Last year we started working on features to improve the data produced from the recording information that you submit to AcousticBrainz. The first part of this work was a way to create datasets that are used to train high-level models. The next was dataset creation challenges.

We already have a significant number of datasets created by the AcousticBrainz community. The list of public datasets is available at https://beta.acousticbrainz.org/datasets/list. Our experimental challenge, on classifying music with and without vocals, concluded a couple of days ago. You can see the final results at https://beta.acousticbrainz.org/challenges/14095b3b-4469-4e4d-984e-ef5f1a55962c.

Your feedback on high-level data

The latest addition to AcousticBrainz is a way to provide feedback about the high-level output that you can see on summary pages for recordings. After a model is applied to all of the AcousticBrainz data, this feedback lets us understand how well the model performs at a larger scale, which should help us make further improvements to models and their underlying datasets. Keep in mind that you need to be logged in with your MusicBrainz account to see this.

Survey about new features

To help us understand how well new features work for you, we created a survey for you to participate in. If you have used AcousticBrainz, please fill out the survey here: https://goo.gl/forms/Oh3a9INBCCsW2I1i1. It shouldn’t take more than 5 minutes. We’ll keep it open for about a week.

Your feedback is very much appreciated, especially since we don’t have many ways to collect it. Some people come to IRC and tell us about issues they are having, some comment on blog posts or create tickets in JIRA. But at this point we need a better overview of the current state of the project.

Thank you! 🎶

Dataset creation challenges in AcousticBrainz

Datasets are an important part of the AcousticBrainz project. All machine learning models that are used to calculate high-level information about recordings (genre, mood, danceability, etc.; see https://beta.acousticbrainz.org/485bbe7f-d0f7-4ffe-8adb-0f1093dd2dbf for an example) first need to be trained on a dataset. Last year we released a platform that allows people to create and evaluate these datasets within AcousticBrainz. We’ve already seen a number of interesting datasets, and now we want to take this process to the next step and make it more interesting.

Recently we started working on a new feature that allows us to organize dataset creation challenges. These challenges allow us to directly compare datasets created for the same classification tasks: genre, mood, instrumentation, etc. After a challenge ends, we can use the best models on all of the AcousticBrainz data.

Everyone can participate in a challenge, so we invite you to try the current version of the system at https://beta.acousticbrainz.org/! Right now there’s only one challenge related to classification of music with and without vocals, but we might add more later. To participate in a challenge:

  1. Create a dataset manually or by importing it from a CSV file created externally (this can be done from your profile page; see the sketch after this list). Make sure it has the same structure (set of classes: “with vocals”, “without vocals”) as defined in the challenge requirements.
  2. Once you have built the dataset, follow the “Evaluate” link on its page to go to the evaluation page. There, select the challenge that you would like to submit your dataset to (search for “Classifying vocals”).
  3. Wait for results! We’ll probably post an update once we have something interesting to show.
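For the CSV import in step 1, here is a minimal Python sketch of how such a file could be produced. The column layout (one recording MBID per row, followed by its class label) is an assumption on our part; check the import page for the authoritative format. The MBIDs below are placeholders, not real recordings.

    import csv

    # Hypothetical dataset rows: each pairs a recording MBID with one of
    # the two classes required by the challenge. These MBIDs are
    # placeholders, not real recordings.
    rows = [
        ("c0a1b2c3-d4e5-f607-1819-202122232425", "with vocals"),
        ("c0a1b2c3-d4e5-f607-1819-202122232426", "without vocals"),
    ]

    with open("vocals_dataset.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)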

Please keep in mind that this is a very early prototype, so some issues are to be expected. This is why we ask you to try it and tell us what you think. We encourage you to report any problems or make suggestions in JIRA or in the #metabrainz IRC channel (https://wiki.musicbrainz.org/Communication/IRC). Feel free to use IRC or the comments section if you have any questions or thoughts. Thanks!

We have several more useful features coming up later. The big ones are improvements to the dataset editor, an extension of the API for datasets that was added recently, and a way to collect user feedback on high-level data. The dataset editor should become easier to work with, especially when working with large datasets. The API will be useful for people who want to build their own tools on top of core dataset functionality in AcousticBrainz. And finally, user feedback will allow us and other dataset creators to see how their models perform on a much larger scale.

We’re actually really going to take the HTTPS plunge!

Closing in on three years after stating that “We’re going to take the HTTPS plunge!”, we’re actually really going to do it now. 🙂

Most of our sites have forced HTTPS for some time (metabrainz.org, critiquebrainz.org, bookbrainz.org, listenbrainz.org), but there are still a couple of stragglers, notably musicbrainz.org and acousticbrainz.org.

For MusicBrainz, our beta site is now all HTTPS, web service and all. The main, non-beta musicbrainz.org will be going HTTPS-only except for what’s under /ws/ (i.e., the web service) to allow taggers and other programs not currently using HTTPS some transition time. We do not currently have an ETA for when we will make the final jump to HTTPS-only on the MusicBrainz web service, as that partly depends on feedback from our web service users, which leads me to:

If you’re currently using the MusicBrainz web service, please try switching your program to beta.musicbrainz.org, see whether it breaks or not, and let us know the result. We are aware that some Python versions and MusicBrainz libraries do not support our setup, so if your program fails now, it might simply be because its dependencies haven’t been updated yet and you might not need to do anything on your end. However, some programs and libraries might need updates, so the more people test and report back, the better we’ll be able to judge when we can go all-HTTPS-only on musicbrainz.org.
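If you want a quick way to test this by hand, here is a minimal sketch using Python and the requests library (the library choice and the example artist MBID are ours, not a prescribed method). A clean response means your TLS stack validates our certificates; an SSLError points at outdated dependencies.

    import requests

    # Query the beta web service over HTTPS. requests verifies TLS
    # certificates by default, so an SSLError here means the local HTTP
    # stack or its CA bundle needs updating.
    try:
        r = requests.get(
            "https://beta.musicbrainz.org/ws/2/artist/5b11f4ce-a62d-471e-81fc-a69a8278c7da",
            params={"fmt": "json"},
            headers={"User-Agent": "https-test/0.1 (you@example.com)"},
        )
        print("HTTPS OK:", r.status_code)
    except requests.exceptions.SSLError as exc:
        print("TLS verification failed:", exc)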

For AcousticBrainz, we now have a shiny new Let’s Encrypt certificate on https://acousticbrainz.org thanks to our systems administrator Zas! As a result, we are going to start redirecting all HTTP traffic to HTTPS on the AcousticBrainz website, including API queries.

In order to give everyone time to verify that their scripts correctly recognise and validate our Let’s Encrypt certificate, we are going to delay the redirect until July 1, 2016. On this date, any HTTP query will automatically be redirected to HTTPS. We will also enable HSTS, so that compliant browsers will redirect to HTTPS on the client-side.
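As a rough sketch of what changes on July 1, the following snippet (again using Python requests; a hypothetical check, not part of our tooling) shows how to confirm the redirect and the HSTS header once they are live:

    import requests

    # Plain HTTP should answer with a redirect to the HTTPS site...
    r = requests.get("http://acousticbrainz.org/", allow_redirects=False)
    print(r.status_code, r.headers.get("Location"))

    # ...and the HTTPS response should carry an HSTS header telling
    # compliant browsers to stay on HTTPS from then on.
    r = requests.get("https://acousticbrainz.org/")
    print(r.headers.get("Strict-Transport-Security"))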

If you have any questions about either the MusicBrainz or the AcousticBrainz transition, please ask.

AcousticBrainz Update

It’s been over a year since we last posted about AcousticBrainz, but a lot of work has been going on in the background. This post will give an overview about some of the things that we’ve achieved in the last year.

Data contributions

Our last blog post was neatly titled “What do 650,000 audio files look like, anyway?” Back then, we thought that this was a lot of submissions. Little did we know… I’m glad to report that we now have over 3.5 million submissions, of which almost 2 million are for unique MBIDs. This is a great contribution and we’d like to thank everyone who submitted data to us.

Dataset and model building

MusicBrainz coder Gentlecat returned to participate in Google Summer of Code last year and developed a new tool that lets us create datasets and build new computational models. We’re really excited about how this can allow community members to help us increase the quality of the semantic information we provide in AcousticBrainz. We will make another blog post soon explaining how it works.

We presented an academic overview of AcousticBrainz (PDF) at the 16th International Society for Music Information Retrieval (ISMIR) conference in Malaga, Spain. The feedback from the academic community was very encouraging. Many people were interested in the data and wanted to know what they could do with it. We hope that there will be some new projects announced using the data at this year’s conference.

Integration with other data sources

MusicBrainz and AcousticBrainz don’t exist in a vacuum. One important thing that we need to do is interact with other researchers and products in the same field. To that end, we started AcousticBrainz Labs, a showcase of some of the experiments that we’re working on in AcousticBrainz. The first thing we have published is a mapping between AcousticBrainz and the Million Song Dataset, which we hope people will use to compare the two datasets.

Database upgrades and data format changes

We’ve just upgraded to PostgreSQL 9.5 (from 9.3), which allows us to use the new jsonb datatype introduced in PostgreSQL 9.4. This change lets us store feature data more efficiently. We also made some changes to the database schema to let us start creating new data from datasets and computation models.
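To illustrate why jsonb helps, here is a small sketch using psycopg2. The table and column names are invented for the example and do not reflect the real AcousticBrainz schema.

    import psycopg2
    from psycopg2.extras import Json

    # jsonb stores the parsed document in a binary form, so PostgreSQL
    # can query (and index) fields inside it without re-parsing text.
    conn = psycopg2.connect("dbname=ab_example")  # hypothetical database
    with conn, conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS example_features "
            "(id serial PRIMARY KEY, data jsonb)")
        cur.execute(
            "INSERT INTO example_features (data) VALUES (%s)",
            [Json({"tonal": {"key_key": "F", "key_scale": "minor"}})])
        cur.execute(
            "SELECT count(*) FROM example_features "
            "WHERE data->'tonal'->>'key_key' = %s", ["F"])
        print(cur.fetchone()[0])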

One result of these changes is that we are creating a new complete data dump and stopping the old incremental dumps. We are also taking the opportunity to automate the dump process, which is something that a number of people have asked for.

Another change is that the format of the high-level JSON data is changing, to better reflect some of the complexities involved in hosting such a large and varied dataset.

Contribute to AcousticBrainz development

We’re always interested in help from other people to contribute data, code, and ideas to AcousticBrainz. Once again, MetaBrainz is participating in Google Summer of Code, and AcousticBrainz is a possible project to work on. If you’re not a student, you’re still welcome to work with us.

Write to us in a comment, in IRC, or in our new Discourse category and say hi.

What do 650,000 audio files look like, anyway?

Hot on the heels of the first release of AcousticBrainz, containing 650,000 feature files, we are presenting some initial findings based on this dataset.

We thank Emilia GĂłmez (@emiliagogu), an Associate Professor and Senior Researcher at the Music Technology Group at Universitat Pompeu Fabra, for doing this analysis and sharing her results with us. All of these results are based on data automatically computed by our Essentia audio analysis system. Nothing was decided by people. Isn’t that cool?

The MTG recently started the AcousticBrainz (http://acousticbrainz.org/) project in collaboration with MusicBrainz. Data collection started on September 10th, 2014, and since then a total of 656,471 tracks (488,658 unique ones) have been described with Essentia. I have been working with audio descriptors for a while, and I followed the porting of some of my algorithms to Essentia, especially chroma features and key estimation. For that reason, I was curious to take a look at this data. I present here some basic statistics, which I computed with the SPSS statistical software.

WHAT KINDS OF MUSICAL GENRES DO WE HAVE IN THE COLLECTION?

In order to characterize this dataset, I first thought about genre. Essentia includes four different genre models: one trained on data by Tzanetakis (2001), one on a collection compiled at the MTG (Rosamerica), one on the Dortmund dataset, and one on a database of electronic music. Far from agreeing on the kinds of genres in the collection, these models seem to be contradictory! For example, with the Tzanetakis model “jazz” is the most frequently estimated genre, while the proportion of jazz excerpts is very small according to the other models.

[Figure: Genre estimations using the Tzanetakis dataset]

[Figure: Genre estimations using the Rosamerica dataset]

[Figure: Genre estimations using the Dortmund dataset]

[Figure: Genre estimations using the Electronic dataset]

So, in conclusion, we have a lot of jazz (according to the Tzanetakis dataset), electronic music (according to the Dortmund dataset), ambient (according to the Electronic dataset), and an equal distribution of all genres according to the Rosamerica dataset (which does not include a category for electronic music)… Not very clarifying, then! This is definitely something that we will be looking at in more depth.
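For readers who want to reproduce this kind of tally on the public data, here is a rough Python sketch (the analysis above was done in SPSS). The field layout, with highlevel.&lt;model&gt;.value holding the winning class, matches what the recording summary pages display, but treat the exact paths and the directory name as assumptions.

    import json
    from collections import Counter
    from pathlib import Path

    # Count the winning genre per model across a directory of
    # high-level JSON documents (directory name is hypothetical).
    models = ["genre_tzanetakis", "genre_rosamerica",
              "genre_dortmund", "genre_electronic"]
    counts = {m: Counter() for m in models}

    for path in Path("highlevel").glob("*.json"):
        doc = json.loads(path.read_text())
        for m in models:
            value = doc.get("highlevel", {}).get(m, {}).get("value")
            if value:
                counts[m][value] += 1

    for m in models:
        print(m, counts[m].most_common(3))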

WHAT ABOUT MOOD THEN?

For mood characterization, five different binary models were trained and computed on the dataset. We observe a larger proportion of non-acoustic, non-aggressive, and electronic music. It is nice to see that most of the music is neither happy nor sad! From this and the previous analysis, I would conclude that the AcousticBrainz dataset tends towards electronic music.

[Figure: Distribution of acoustic and non-acoustic (e.g. electronic) music]

[Figure: How aggressive our dataset is]

[Figure: The amount of electronic music (compare with the acoustic graph above)]

[Figure: …and if the music is happy or not]

If we check for genre vs. mood interactions, there are some interesting findings. We find that classical is the most acoustic genre and rock is the least acoustic genre (due to its inclusion of electronic instruments):

[Figure: How much music in each genre is acoustic or not]

HOW IS KEY ESTIMATION WORKING?

From a global statistical analysis, we observe that major and minor modes are both well represented, and that the most frequent keys are F minor / Ab major and F# minor / A major. This seems a little strange; A major and E major are very frequent keys in rock music. Maybe there are some issues with this data that need to be looked at.

[Figure: The keys and modes of the tracks in the database]
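If you want to check this yourself on the low-level data, a short Python sketch along the same lines: Essentia’s music extractor reports the estimated key under tonal.key_key and tonal.key_scale, and the directory name here is again an assumption.

    import json
    from collections import Counter
    from pathlib import Path

    # Histogram of estimated (key, mode) pairs across low-level
    # JSON documents (directory name is hypothetical).
    keys = Counter()
    for path in Path("lowlevel").glob("*.json"):
        tonal = json.loads(path.read_text()).get("tonal", {})
        if "key_key" in tonal and "key_scale" in tonal:
            keys[(tonal["key_key"], tonal["key_scale"])] += 1

    for (key, scale), n in keys.most_common(10):
        print(key, scale, n)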

IS THERE A LINK BETWEEN FEATURES AND GENRE?

I wanted to make some plots of acoustic features vs. genre. For example, we observe a low loudness level for classical (cla) music and jazz (jaz), and a high one for dance (dan), hip hop (hip), pop, and rock (roc).

[Figure: The loudness of songs by genre]

Finally, it is nice to see the relation between equal-tempered deviation and musical genre. This descriptor measures the deviation of spectral peaks with respect to equal-tempered tuning. It’s a very low-level feature, but it seems to be related to genre. It is lower for classical music than for other musical genres.

[Figure: Variation from equal-tempered tuning per genre]

We also observe that for electronic music, equal-tempered deviation is higher than for non-electronic/acoustic music. What does this mean? In simple terms, it seems that electronic music tends to ignore the rules of what it means to be “in tune” more than what we might term “more traditional” music.

[Figure: Variation from equal-tempered tuning for songs reported as electronic/non-electronic]

IS THERE A LINK BETWEEN FEATURES AND YEAR?

I was curious to check for historical evolution in some acoustic features. Here are some nice plots of the evolution of the number of pieces per year, along with some of the most relevant acoustic features. We first observe that most of the pieces belong to the period from the 1990s to the present. This may be an artifact of the people who have submitted data to AcousticBrainz, and also of the data that we find in MusicBrainz. We hope that this distribution will spread out as we get more and more tracks.

[Figure: Distribution of release year for the dataset. 0 represents an unknown year]

There does not seem to be a large change of acoustic features as year changes. This is definitely something to look into further to see if any of the changes are statistically significant.

[Figure: Are the loudness wars true? Can you see a trend?]

[Figure: Is music getting faster? It doesn’t look like it]

[Figure: Songs aren’t getting more complex]


We have many more ideas of ways to look at this data, and hope that it will show us some interesting things that we may not have guessed from just listening to it. If you would like to see any other statistics, please let us know! You can download the whole dataset to perform your own analysis at http://acousticbrainz.org/download

Announcing the AcousticBrainz project

MetaBrainz and the Music Technology Group at Universitat Pompeu Fabra are pleased to announce the first public release of the AcousticBrainz project.

http://acousticbrainz.org/

What is AcousticBrainz?
The AcousticBrainz project aims to crowdsource acoustic information for all of the music in the world and make it available to the public. The goal of AcousticBrainz is to provide music technology researchers and open source hackers with a massive database of information about music.

AcousticBrainz uses a state-of-the-art research project called Essentia (http://essentia.upf.edu/), developed over the last 10 years at the Music Technology Group.

Data generated from processing audio files with Essentia is collected by the AcousticBrainz project and made available to the public under the CC0 license (public domain). In the six weeks since its inception, AcousticBrainz contributors have already submitted data for 650,000 audio tracks using pre-release software.

Today we are releasing client programs to submit data to the AcousticBrainz server and our first public release containing audio features for over 650,000 audio files.

What data does it have?
AcousticBrainz contains information called audio features. This information describes the acoustic characteristics of music and includes low-level information (such as tempo and spectral characteristics) and additional high-level descriptors for genres, moods, keys, scales and much more. These features are explained in more detail at http://acousticbrainz.org/sample-data

How can I get it?
You can access AcousticBrainz data via our API. See details at http://acousticbrainz.org/api
We also provide downloadable dumps of the whole dataset. You can download it (all 13 gigabytes!) at http://acousticbrainz.org/download
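As a quick example of using the API from Python, fetching the low-level features for one recording MBID looks roughly like the sketch below. The /api/v1/ endpoint path follows the API documentation page and should be treated as an assumption; if the docs say otherwise, adjust accordingly.

    import requests

    # Fetch submitted low-level features for a single recording MBID.
    mbid = "485bbe7f-d0f7-4ffe-8adb-0f1093dd2dbf"  # an example recording
    r = requests.get(f"https://acousticbrainz.org/api/v1/{mbid}/low-level")
    r.raise_for_status()
    doc = r.json()
    print(sorted(doc.keys()))  # e.g. lowlevel, metadata, rhythm, tonal
    print(doc["tonal"]["key_key"], doc["tonal"]["key_scale"])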

What can I do with it?
We hope that this database will spur the development of new music technology research and allow music hackers to create new and interesting recommendation and music discovery engines. Here are some ideas of things we would like to see:

  • Music discovery
  • Playlist generation
  • Improving the state of the art in genre recognition
  • Analytics on the musical structure of popular music
  • and more!

This is one of the largest datasets of this kind available for research, and the only one of this size that we know of which contains both freely available data as well as the reference source code used to compute the data.

How can I contribute?
If you are a music researcher, you can help us by contributing to the Essentia project. Go to the Essentia homepage to see how you can do this. If you do something cool with the data, let us know. We’d like to start a “made with AcousticBrainz” page where we will showcase interesting projects.

If you have any audio files, we would love for you to contribute audio features to our project. You can do this by downloading our submission clients from http://acousticbrainz.org/download. We provide clients for Windows, Mac, and Linux.

If you find any bugs or errors in the AcousticBrainz stack please let us know! Report issues to http://tickets.musicbrainz.org/browse/AB.

We can’t wait to see what kind of things you will make with our data.

The AcousticBrainz team.