Category Archives: AcousticBrainz

AcousticBrainz downtime: Migrating hosting to our other servers

Today we’re going to migrate the AcousticBrainz service from the standalone server we’ve rented for the past few years to our shared infrastructure at Hetzner. We’ve been prepping for this move for a few weeks now and the actual process has been used before, so we don’t expect the downtime to last more than an hour.

We’re sorry for the upcoming downtime. To keep up with what we’re doing, please follow our progress on Twitter. We hope to start the migration within an hour or two of this blog post going up.


GSoC 2018: More detailed integration of AcousticBrainz with MusicBrainz

A fantastic summer has come to an end, and it’s time to wrap up the GSoC project I have been working on for the last three months (the official GSoC coding period).

Hello people!!

I am Rashi Sah, an undergraduate student at the National Institute of Technology, Hamirpur, India. I have been working on a really cool AcousticBrainz project for MetaBrainz Foundation Inc. as a participant in Google Summer of Code ’18. It has been an amazing experience and I’ve learned a lot over the summer, spending countless days and nights to take the project to completion. I decided to contribute to MetaBrainz in late December, spent some time understanding the project’s codebase, and have been creating pull requests and pushing commits for features, tasks and bug fixes since January 2018. This blog post covers my GSoC experience as a student and the work I’ve done for the program so far.

Before the GSoC program started, I looked for some good first bugs and found some tickets to work on. Then I talked to the AcousticBrainz community members and started contributing. I created some big PRs, mostly adding new features to AcousticBrainz, and also worked on many bug fixes which are already merged into the AcousticBrainz codebase. The new feature PRs include AB-21, AB-98 and AB-298. In mid-February, I started looking for a suitable idea to work on for the GSoC program and to write a proposal for it. As March approached, I discussed the proposal extensively with MetaBrainz community members, especially Alastair, the AcousticBrainz project lead, who helped me greatly by reviewing it and guiding me to improve it. In late April, my proposal for a more detailed integration of AcousticBrainz with MusicBrainz was accepted. During the community bonding period, I mostly continued the work I had already been doing for the previous 3–4 months.

Getting entity information from the MusicBrainz database

The first thing I worked on when the official GSoC coding period began was adding a way to directly access the MusicBrainz database for different entities to the MusicBrainz database module in BrainzUtils (a Python utility package for all of our MetaBrainz projects). I worked on getting artist and release entity information from the MusicBrainz database via a direct connection (see PRs BU-13 and BU-14). Later, I worked on setting up the MusicBrainz server by adding a service to AcousticBrainz’s docker-compose files, allowing us to easily read data directly from the MusicBrainz database in AcousticBrainz (PR AB-334). The major aim of the project was to implement both methods of MusicBrainz database access in AcousticBrainz, in particular importing the MusicBrainz database into AcousticBrainz from scratch, and then to decide which method works better when implementing a particular piece of functionality in AcousticBrainz using MusicBrainz data.
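To give a feel for what this kind of direct database access looks like, here is a minimal sketch using psycopg2. The connection URI and the get_artist_by_mbid helper are hypothetical, not the actual BrainzUtils API, which returns richer serialized data and manages connections centrally.

```python
import psycopg2

# Hypothetical connection settings for a local MusicBrainz database.
MB_DATABASE_URI = "postgresql://musicbrainz@localhost:5432/musicbrainz_db"


def get_artist_by_mbid(mbid):
    """Fetch basic artist information directly from the MusicBrainz database.

    Simplified sketch only: the real BrainzUtils helpers do a lot more work.
    """
    with psycopg2.connect(MB_DATABASE_URI) as connection:
        with connection.cursor() as cursor:
            cursor.execute(
                """
                SELECT artist.gid, artist.name, artist.comment
                  FROM artist
                 WHERE artist.gid = %s
                """,
                (mbid,),
            )
            row = cursor.fetchone()
    if row is None:
        return None
    return {"mbid": row[0], "name": row[1], "comment": row[2]}
```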

Importing the MusicBrainz data into the AcousticBrainz database

MusicBrainz’s database contains a huge number of tables, but I analysed the use cases of MB data in AB and made a list of the tables that we would actually require for our AcousticBrainz integrations. Then I made a PR (AB-338) creating new tables in the AB database under a MusicBrainz schema. Later, I worked on a big PR (AB-340) which imports the MB data corresponding to every recording present in AcousticBrainz’s database and writes it into the tables of the MusicBrainz schema in AB. This PR was really huge, and I had to take care of a lot of integrity constraints and foreign key dependencies.
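The core idea of the importer looks roughly like the sketch below, assuming psycopg2 connections to both databases. The table list, ordering and helper name are illustrative only; the real importer in AB-340 covers many more tables, filters to the recordings present in AB, and handles the constraints far more carefully.

```python
import psycopg2

# Tables are copied in dependency order so that foreign key constraints in the
# musicbrainz schema of the AB database are satisfied (illustrative subset).
TABLES_IN_FK_ORDER = ["area", "artist", "artist_credit", "recording"]


def import_musicbrainz_tables(mb_dsn, ab_dsn):
    """Copy a subset of MusicBrainz tables into AB's musicbrainz schema (sketch)."""
    with psycopg2.connect(mb_dsn) as mb_conn, psycopg2.connect(ab_dsn) as ab_conn:
        with mb_conn.cursor() as mb_cur, ab_conn.cursor() as ab_cur:
            for table in TABLES_IN_FK_ORDER:
                # The real importer only selects rows related to recordings
                # that exist in AcousticBrainz; this sketch copies everything.
                mb_cur.execute("SELECT * FROM {}".format(table))
                for row in mb_cur.fetchall():
                    placeholders = ", ".join(["%s"] * len(row))
                    # ON CONFLICT DO NOTHING (PostgreSQL 9.5+) skips rows that
                    # were already imported on a previous run.
                    ab_cur.execute(
                        "INSERT INTO musicbrainz.{} VALUES ({}) "
                        "ON CONFLICT DO NOTHING".format(table, placeholders),
                        row,
                    )
        # The connection context manager commits the transaction on success.
```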

Update MB data in AB for every new recording added to AB

Another feature I worked on after importing the MB data was updating the MB data present in AB whenever a new recording is added to the AcousticBrainz database (see PR AB-346), by importing the data from MB’s database via the direct connection. While working on a few bug fixes, my mentor Param and I realized that the MB data import was taking a lot more time than expected when I ran the MusicBrainz importer script against full MB data dumps (around 2.8 GB). So I worked on making the MusicBrainz importer more efficient and was able to import the data for a few recordings within seconds (see PR AB-348). I had to dig into each table import and track down the parts of the code that were slowing things down.

To reduce the load on the processor, I added a 5-second sleep to the MusicBrainz importer module before importing data for any new recording (see PR AB-354). During my GSoC period, I learned how important it is to write tests and make them run fast. I wrote tests for almost every script inside the db module. Later, I worked on writing tests for the MusicBrainz importer script (AB-352).
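As a rough illustration of that throttling, the loop looks something like this; the two helper callables are hypothetical placeholders, not the actual importer functions.

```python
import time


def import_new_recordings(get_pending_recordings, import_recording_data):
    """Import MB data for newly added recordings, pausing between each one.

    The 5-second sleep keeps the importer from hogging the processor when a
    burst of new recordings arrives. Both callables are placeholders.
    """
    for mbid in get_pending_recordings():
        time.sleep(5)  # wait before importing data for the next recording
        import_recording_data(mbid)
```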

Apply replication packets to keep MB data in AB updated with the actual MusicBrainz database

Then came another tricky part of this project: updating the MusicBrainz schema data in AB whenever there is any change in the main MusicBrainz database, whether an update or a deletion. MusicBrainz provides hourly replication packets which describe the changes to the database over a specific period. Replication packets are .tar.bz2 archives containing a collection of files, and they can be downloaded via the MetaBrainz API. Lukas Lalinsky, a long-time contributor to MetaBrainz projects, the founder of AcoustID and maintainer of the mbdata Python module, had already worked on applying replication packets to MB data. I made a lot of modifications to his script to apply replication packets to the MusicBrainz schema data, keeping the recordings data present in AcousticBrainz up to date (see AB-350).
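In outline, processing one hourly packet looks like the sketch below. The URL pattern assumes the standard MetaBrainz replication endpoint and a valid access token, and apply_pending_changes is only a placeholder for the actual replay logic adapted from the mbdata-based script.

```python
import tarfile
import tempfile
import urllib.request

# Assumed endpoint layout; the token and the replay callable are placeholders.
REPLICATION_URL = (
    "https://metabrainz.org/api/musicbrainz/replication-{seq}.tar.bz2?token={token}"
)


def apply_replication_packet(seq, token, apply_pending_changes):
    """Download one hourly replication packet and replay it (sketch only)."""
    url = REPLICATION_URL.format(seq=seq, token=token)
    with tempfile.NamedTemporaryFile(suffix=".tar.bz2") as packet:
        with urllib.request.urlopen(url) as response:
            packet.write(response.read())
        packet.flush()
        with tarfile.open(packet.name, "r:bz2") as archive:
            with tempfile.TemporaryDirectory() as workdir:
                archive.extractall(workdir)
                # The extracted files describe inserts, updates and deletes for
                # the replicated tables; replaying them against the musicbrainz
                # schema is the tricky part handled by the adapted script.
                apply_pending_changes(workdir)
```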

Integration with MB database: Use MBID redirect information to get original entity

After working on the direct connection, importing the MusicBrainz data and keeping it updated, it was time to start writing evaluation scripts to decide which method is better for any given integration in AcousticBrainz. I wrote a script that implements an integration of AB with the MB database: it uses the redirect information of an entity and returns the original entity corresponding to the MBID provided (see PR AB-356).
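A minimal sketch of the redirect lookup is shown below, assuming a database cursor that can see the musicbrainz schema tables; the real integration in AB-356 wraps this inside the project’s db module.

```python
def resolve_recording_mbid(cursor, mbid):
    """Return the canonical recording for a possibly redirected MBID (sketch).

    Checks the recording table first and falls back to recording_gid_redirect
    when the MBID has been merged into another recording.
    """
    cursor.execute(
        "SELECT gid, name FROM musicbrainz.recording WHERE gid = %s", (mbid,)
    )
    row = cursor.fetchone()
    if row is not None:
        return {"mbid": row[0], "name": row[1]}

    cursor.execute(
        """
        SELECT recording.gid, recording.name
          FROM musicbrainz.recording_gid_redirect AS redirect
          JOIN musicbrainz.recording ON recording.id = redirect.new_id
         WHERE redirect.gid = %s
        """,
        (mbid,),
    )
    row = cursor.fetchone()
    if row is None:
        return None
    return {"mbid": row[0], "name": row[1]}
```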

Evaluate both methods of MusicBrainz database access in AcousticBrainz

Now for the last, and most important, piece of work of my GSoC period. After implementing both methods, we needed to evaluate them to see which one is more efficient for a specific integration with the MB database. I first wrote an evaluation script which fetches data from the recording and low-level tables. In this case, the difference between the time taken by the two methods turned out to be really large (approx. 70 seconds for around 250+ recordings). So whenever we need to get data from local AB tables as well as MB tables, we will go with the imported-database method, as it turns out to be faster than the other one. Next I tested the MBID redirect integration, where I didn’t find much difference between the two methods (PR AB-357). However, I ran these tests locally; tests in production may yield different results.
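The evaluation script is essentially a timing harness along these lines; the fetch function names in the usage comment are hypothetical stand-ins for the two access methods.

```python
import time


def time_method(fetch, mbids, repeats=3):
    """Return the best wall-clock time for fetching data for a list of MBIDs.

    `fetch` is either the direct-connection implementation or the one reading
    from the imported musicbrainz schema; both are placeholders here.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.monotonic()
        for mbid in mbids:
            fetch(mbid)
        best = min(best, time.monotonic() - start)
    return best


# Usage sketch:
# direct_time = time_method(fetch_via_direct_connection, sample_mbids)
# import_time = time_method(fetch_from_imported_schema, sample_mbids)
# print("direct: %.1fs, imported: %.1fs" % (direct_time, import_time))
```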

All in all, it has been an exciting summer. By now I am familiar with a good part of the AcousticBrainz codebase. I really look forward to adding many more integrations with MB data in AcousticBrainz, and I plan to completely remove AB’s dependency on the web service for accessing the MusicBrainz database, which would be very useful for users.

Details of contributions made

By the end of the GSoC coding period, I had opened a total of 39 PRs: 35 to the AcousticBrainz server, 3 to BrainzUtils and 1 to the AcousticBrainz client, and I had made a total of 135 commits (109 in AB, 9 in BU, 3 in AC and 14 in AB master). Of these, the pull requests created and merged during the official GSoC coding period are the PRs to the AcousticBrainz server and the PRs to BrainzUtils.

These last three months were full of thrills, excitement and plenty of frustration as well. And it doesn’t end here: I’d love to keep contributing and act as a maintainer for the AcousticBrainz project. I believe people should try contributing to open source organizations, as it helps you learn and gain a lot of experience in a short period of time, especially when working through a great program like Google Summer of Code.

I am really happy working with the awesome MetaBrainz community, and the people here are fantastic. I’d love to stay a part of MetaBrainz in the future as well. So, in the end, a big thanks to my mentor Param Singh (without his help and support throughout the program it wouldn’t have been possible for me to reach the final phase of GSoC), to my organization admin Robert Kaye, to AcousticBrainz project lead Alastair Porter, and to all of the MetaBrainz Foundation community members for choosing me as a GSoC student, giving me such a great opportunity, and being very kind and helpful throughout the program. I also want to thank Google for making this all possible. I hope I get a chance to work with you all again!!

Another AcousticBrainz update and a survey

Last year we started working on features to improve the data produced from the information about recordings that you submit to AcousticBrainz. The first part was a way to create datasets that are used to train high-level models. The next was dataset creation challenges.

We already have a significant number of datasets created by the AcousticBrainz community. The list of public datasets is available at https://beta.acousticbrainz.org/datasets/list. A couple of days ago our experimental challenge concluded; it was about classifying music with and without vocals. You can see the final results at https://beta.acousticbrainz.org/challenges/14095b3b-4469-4e4d-984e-ef5f1a55962c.

Your feedback on high-level data

The latest addition to AcousticBrainz is a way to provide feedback about the high-level output that you can see on summary pages for recordings. This feedback should help us understand how well a model performs on a larger scale once it has been applied to all of the AcousticBrainz data, and it should help us make further improvements to models and the underlying datasets. Keep in mind that you need to be logged in with your MusicBrainz account to see this.

Survey about new features

To help us understand how well new features work for you, we created a survey for you to participate in. If you have used AcousticBrainz, please fill out the survey here: https://goo.gl/forms/Oh3a9INBCCsW2I1i1. It shouldn’t take more than 5 minutes. We’ll keep it open for about a week.

Your feedback is very much appreciated, especially considering that we don’t have a lot of ways to collect it. Some people come to IRC and tell us about issues they are having, some comment on blog posts or create tickets in JIRA, but at this point we need a better overview of the current state of the project.

Thank you! 🎶

Dataset creation challenges in AcousticBrainz

Datasets are an important part of the AcousticBrainz project. All machine learning models that are used to calculate high-level information about recordings (genre, mood, danceability, etc.; see https://beta.acousticbrainz.org/485bbe7f-d0f7-4ffe-8adb-0f1093dd2dbf for example) first need to be trained on a dataset. Last year we released a platform which allows people to create and evaluate these datasets within AcousticBrainz. We’ve already seen a number of interesting datasets, and now we want to take this process to the next step and make it more interesting.

Recently we started working on a new feature that allows us to organize dataset creation challenges. These challenges allow us to directly compare datasets created for the same classification tasks: genre, mood, instrumentation, etc. After a challenge ends, we can use the best models on all of the AcousticBrainz data.

Anyone can participate in a challenge, so we invite you to try the current version of the system at https://beta.acousticbrainz.org/! Right now there’s only one challenge, on classifying music with and without vocals, but we might add more later. To participate in a challenge:

  1. Create a dataset manually or by importing it from a CSV file created externally (this can be done from your profile page); a sketch of the expected CSV layout is shown after this list. Make sure it has the same structure (set of classes: “with vocals”, “without vocals”) as defined in the challenge requirements.
  2. Once you have built the dataset, select the “Evaluate” link on its page to go to the evaluation page. There, select the challenge that you would like to submit your dataset to (search for “Classifying vocals”).
  3. Wait for results! We’ll probably post an update once we have something interesting to show.
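If you go the CSV route, a file along the lines of the snippet below should work. The two-column layout (recording MBID followed by class name) and the example MBIDs are assumptions for illustration, so double-check against the import form on your profile page.

```python
import csv

# Assumed layout: one row per recording, "<recording MBID>,<class name>".
# The MBIDs below are placeholders; class names must match the challenge.
rows = [
    ("96685213-a25c-4678-9a13-abd9ec81cf35", "with vocals"),
    ("0dbbe2a4-5ad3-4a67-9e05-c378ee75bd57", "without vocals"),
]

with open("vocals_dataset.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```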

Please keep in mind that this is a very early prototype, so some issues are to be expected. This is why we ask you to try it and tell us what you think. We encourage you to report any problems or make suggestions in JIRA or in the #metabrainz IRC channel (https://wiki.musicbrainz.org/Communication/IRC). Feel free to use IRC or the comments section if you have any questions or thoughts. Thanks!

We have several more useful features coming up later. The big ones are improvements to the dataset editor, an extension of the API for datasets that was added recently, and a way to collect user feedback on high-level data. The dataset editor should become easier to work with, especially when working with large datasets. The API will be useful for people who want to build their own tools on top of core dataset functionality in AcousticBrainz. And finally, user feedback will allow us and other dataset creators to see how their models perform on a much larger scale.

We’re actually really going to take the HTTPS plunge!

Closing in on three years after stating that “We’re going to take the HTTPS plunge!”, we’re actually really going to do it now. 🙂

Most of our sites have forced HTTPS for some time (metabrainz.org, critiquebrainz.org, bookbrainz.org, listenbrainz.org), but there are still a couple of stragglers, notably musicbrainz.org and acousticbrainz.org.

For MusicBrainz, our beta site is now all HTTPS, web service and all. The main, non-beta musicbrainz.org will be going HTTPS-only except for what’s under /ws/ (i.e., the web service), to give taggers and other programs not currently using HTTPS some transition time. We do not currently have an ETA for when we will make the final jump to HTTPS-only on the MusicBrainz web service, as that partly depends on feedback from our web service users, which leads me to:

If you’re currently using the MusicBrainz web service, please try switching your program to beta.musicbrainz.org, see whether it breaks, and let us know the result. We are aware that some Python versions and MusicBrainz libraries do not support our setup, so while your program may fail now, it might simply be because its dependencies haven’t been updated yet and you might not need to do anything on your end. However, some programs/libraries might need updates, so the more people test and report back, the better we’ll be able to judge when we can go all-HTTPS-only on musicbrainz.org.
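If you just want a quick sanity check that your environment can talk to the HTTPS-only beta web service, something like the sketch below works; the artist MBID is only an example, so adapt it (and the User-Agent) to whatever requests your program actually makes.

```python
import requests

# Query the beta web service over HTTPS; an SSL error here suggests your
# Python/OpenSSL stack or client library needs updating.
response = requests.get(
    "https://beta.musicbrainz.org/ws/2/artist/5b11f4ce-a62d-471e-81fc-a69a8278c7da",
    params={"fmt": "json"},
    headers={"User-Agent": "my-https-test/0.1 (me@example.com)"},
)
response.raise_for_status()
print(response.json()["name"])
```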

For AcousticBrainz, we now have a shiny new Let’s Encrypt certificate on https://acousticbrainz.org thanks to our systems administrator Zas! As a result, we are going to start redirecting all HTTP traffic to HTTPS on the AcousticBrainz website, including API queries.

In order to give everyone time to verify that their scripts correctly recognise and validate our Let’s Encrypt certificate, we are going to delay the redirect until July 1, 2016. On this date, any HTTP query will automatically be redirected to HTTPS. We will also enable HSTS, so that compliant browsers will redirect to HTTPS on the client-side.
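Once the redirect and HSTS are live, you can verify both from a script with a couple of lines; this is a small sketch using the requests library.

```python
import requests

# A plain HTTP request should answer with a redirect to the HTTPS site.
plain = requests.get("http://acousticbrainz.org/", allow_redirects=False)
print(plain.status_code, plain.headers.get("Location"))

# The HTTPS response should carry the HSTS header once it is enabled.
secure = requests.get("https://acousticbrainz.org/")
print(secure.headers.get("Strict-Transport-Security"))
```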

If you have any questions about either the MusicBrainz or the AcousticBrainz transition, please ask.

AcousticBrainz Update

It’s been over a year since we last posted about AcousticBrainz, but a lot of work has been going on in the background. This post will give an overview about some of the things that we’ve achieved in the last year.

Data contributions

Our last blog post was neatly titled “What do 650,000 audio files look like, anyway?” Back then, we thought that this was a lot of submissions. Little did we know… I’m glad to report that we now have over 3.5 million submissions, of which almost 2 million are for unique MBIDs. This is a great contribution and we’d like to thank everyone who submitted data to us.

Dataset and model building

MusicBrainz coder Gentlecat returned to participate in Google Summer of Code last year and developed a new tool that lets us create datasets and build new computational models. We’re really excited about how this can allow community members to help us increase the quality of the semantic information we provide in AcousticBrainz. We will make another blog post soon explaining how it works.

We presented an academic overview of AcousticBrainz (PDF) at the 16th International Society for Music Information Retrieval (ISMIR) conference in Malaga, Spain. The feedback from the academic community was very encouraging. Many people were interested in the data and wanted to know what they could do with it. We hope that there will be some new projects announced using the data at this year’s conference.

Integration with other data sources

MusicBrainz and AcousticBrainz don’t exist in a vacuum. One important thing that we need to do is interact with other researchers and products in the same field. To that end, we started AcousticBrainz Labs, a showcase of some of the experiments that we’re working on in AcousticBrainz. The first thing we have published is a mapping between AcousticBrainz and the Million Song Dataset, which we hope people will use to compare these two datasets.

Database upgrades and data format changes

We’ve just upgraded to PostgreSQL 9.5 (from 9.3), which allows us to use the new jsonb datatype introduced in PostgreSQL 9.4. This change lets us store feature data more efficiently. We also made some changes to the database schema to let us start creating new data from datasets and computation models.
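For example, jsonb lets us filter on fields inside the stored feature documents directly in SQL. The query below is only an illustration run through psycopg2; the table and column names are hypothetical, not the exact AcousticBrainz schema.

```python
import psycopg2

# Illustrative only: "lowlevel_json" and its "data" jsonb column are assumed
# names. The #>> operator extracts a nested JSON field as text.
with psycopg2.connect("dbname=acousticbrainz") as conn:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT id
              FROM lowlevel_json
             WHERE data #>> '{metadata,audio_properties,lossless}' = 'true'
             LIMIT 10
            """
        )
        print(cur.fetchall())
```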

One result of this is that we are creating a new complete data dump and stopping the old incremental dumps. We are also taking the opportunity to automate the dump process, which is something that a number of people have asked for.

Another change is that the format of the high level JSON data is changing. This is to better reflect some of the complexities that exist in hosting such a large and varied dataset.

Contribute to AcousticBrainz development

We’re always interested in help from other people contributing data, code, and ideas to AcousticBrainz. Once again, MetaBrainz is participating in Google Summer of Code, and AcousticBrainz is a possible project to work on. If you’re not a student, you’re still welcome to work with us.

Write to us in a comment, in IRC, or in our new Discourse category and say hi.