Category Archives: musicbrainz

GSoC 2018: SpamBrainz – Fighting spam in MusicBrainz using machine learning

Hi, I’m Leo and I spent my summer building and training SpamBrainz, our new solution to fighting spam in MusicBrainz. If you haven’t heard of SpamBrainz before it’s probably because it did not exist before this year’s Summer of Code.

For quite a while now, spam in MusicBrainz has been a growing problem. Often this means editor accounts are created automatically, with descriptions that look not unlike the spam emails most of us get every day, promoting other websites and services.

During last year’s MetaBrainz Summit we discussed possible solutions to this and came up with the Spam Ninja system. Essentially this means that Soon™ there will be a group of editors that receive spam reports and have the ability to delete editors and entities that are nothing but spam.

Now with MusicBrainz having almost two million registered editors, could we really expect the Spam Ninjas to manually check every single one of them in addition to all the new registrations? Obviously not, and this is where SpamBrainz comes in.

SpamBrainz is a machine learning system that looks at all editors and decides whether or not it thinks they are spammers. If it thinks they are, it automatically notifies the Spam Ninjas, who then decide whether or not SpamBrainz was correct.

What’s great about this system is that a human is guaranteed to look at any report and at no point does a computer decide that you’re a spammer and should be banned, because no one wants machines to run the world, right?

Building SpamBrainz

While most GSoC projects involve adding features to existing systems, SpamBrainz is something entirely new, and I had not built anything on this scale before, so I started out by doing tons of research.

When building a machine learning project you should always start by doing some good old statistics first and trying to figure out what matters about your data and how the system could use it. I wrote a couple of Jupyter notebooks (which are great for working with data) to do this.

As I was not working for MetaBrainz at the time and had to respect our privacy policy, I wrote a script to collect the most common values of a couple of different editor attributes, anonymize them, and save them to a report. Using that data I could compare spam and non-spam editors and decide on a set of data points that would be useful for my machine learning model. Yvanzo then ran the script on the live database, and I could happily do my data analysis without compromising user privacy.
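For illustration, anonymizing a value before analysis can be as simple as replacing it with a salted hash, so value frequencies can still be compared without exposing the data itself. This is only a sketch of the general idea; the salt, truncation and report format of the actual script are not shown here.

```python
# Sketch: salted hashing so value frequencies can be analysed without exposing
# the raw data. The salt and truncation length are illustrative choices.
import hashlib
from collections import Counter

SALT = b"replace-with-a-random-secret"

def anonymize(value):
    """Return a short, irreversible token for a raw field value."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:12]

def most_common_values(values, n=20):
    """Count anonymized values and return the n most common ones."""
    return Counter(anonymize(v) for v in values).most_common(n)
```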

Next I built a pretty boring Flask-based API that allows MusicBrainz to queue up editor analysis and training. Quite a few MetaBrainz projects use Python and need to access the MusicBrainz database, so a long time ago someone wise decided to move commonly used code into a repository called brainzutils-python. All I had to do was add some code for accessing editor data through it.
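As a rough illustration of the queueing idea (the real SpamBrainz API differs; the route, names and in-memory queue below are hypothetical):

```python
# Hypothetical sketch of a Flask endpoint that queues an editor for analysis.
# A production setup would use a proper task queue instead of a plain list.
from flask import Flask, jsonify

app = Flask(__name__)
analysis_queue = []

@app.route("/analyze/<int:editor_id>", methods=["POST"])
def queue_analysis(editor_id):
    """Accept an editor ID and queue it for spam analysis."""
    analysis_queue.append(editor_id)
    return jsonify({"status": "queued", "editor_id": editor_id}), 202

if __name__ == "__main__":
    app.run()
```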

In a surprise move by ruaok I was then hired by MetaBrainz as a contractor with a yearly salary of 100g of chocolate. I probably should have negotiated what kind of chocolate but what mattered most was that I could now work with user data without breaching our privacy policy.

But before I could build my Keras model, I had to decide on a final set of input features and write code for preprocessing the data. Only then could I finally get started building and testing models.

The current state-of-the-art SpamBrainz model is Lodbrok, which turned out to work really well, reaching 99% accuracy in detecting spam while misclassifying only 0.2% of real users as spammers. The latter won't be a problem because, after all, a Spam Ninja will still check these reports.
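To give a feel for what such a model looks like in Keras, here is a minimal binary classifier over preprocessed editor features. This is not the actual Lodbrok architecture; the layer sizes and feature count are arbitrary placeholders.

```python
# Minimal Keras binary classifier over preprocessed editor features.
# Not the real Lodbrok model; layer sizes and NUM_FEATURES are placeholders.
from keras.models import Sequential
from keras.layers import Dense, Dropout

NUM_FEATURES = 32  # hypothetical size of the preprocessed feature vector

model = Sequential([
    Dense(64, activation="relu", input_shape=(NUM_FEATURES,)),
    Dropout(0.3),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),  # probability that the editor is spam
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10, validation_data=(X_val, y_val))
```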

Future outlook

Now that GSoC is over I could just disappear with all the money and leave SpamBrainz in its current state but obviously that’s not what I am planning to do.

I would like to work with zas on getting it deployed along with the Spam Ninja system, improve the code documentation, and tackle the remaining problem of online learning (which, as it turns out, isn't as easy as I had thought).

With spam always evolving and spammers already moving to more sophisticated methods than just filling in editor biographies, I'd also like to look into building separate models for other entities.

After all SpamBrainz is just getting started and I’m very much looking forward to continuing our journey towards reducing the spam we all have to endure on MusicBrainz and other MetaBrainz projects.

GSoC 2018: More detailed integration of AcousticBrainz with MusicBrainz

A fantastic summer is coming to an end, and it's time to wrap up the GSoC project I have been working on for the last three months (the official GSoC coding period).

Hello people!!

I am Rashi Sah, an undergraduate student at the National Institute of Technology, Hamirpur. I have been working on a really cool AcousticBrainz project for MetaBrainz Foundation Inc. as a participant in Google Summer of Code '18. It has been an amazing experience and I've learned a lot over the summer, spending countless days and nights to take the project to completion. I decided to contribute to MetaBrainz in late December, spent some time understanding the project's codebase, and have been creating pull requests and pushing commits for many features, tasks and bug fixes since January 2018. This blog post covers my GSoC experience as a student and the work I've done for the program so far.

Before the GSoC program started, I looked for some good first bugs and found some tickets to work on. Then I talked to the AcousticBrainz community members and started contributing. I created some big PRs, mostly adding new features to AcousticBrainz, and also worked on many bug fixes which are already merged into the AcousticBrainz codebase. The new feature PRs include AB-21, AB-98 and AB-298. In mid-February, I started looking for a suitable idea to work on for the GSoC program and to create a proposal for it. As March approached, I discussed the proposal a lot with MetaBrainz community members, especially with Alastair, the AcousticBrainz project lead, who helped me a great deal by reviewing it and guiding me to improve it. In late April, my proposal for a more detailed integration of AcousticBrainz with MusicBrainz was accepted. During the community bonding period, I mostly continued the work I had already been doing for the previous 3–4 months.

Getting entity information from the MusicBrainz database

The first thing I worked on when the official GSoC coding period began was adding a way to directly access the MusicBrainz database for different entities to the MusicBrainz database module in BrainzUtils (a Python utility library shared by our MetaBrainz projects). I worked on getting artist and release entity information from the MusicBrainz database via a direct connection (see PRs BU-13 and BU-14). Later, I worked on setting up the MusicBrainz server by adding a service to AcousticBrainz's docker-compose files, allowing us to easily read data directly from the MusicBrainz database in AcousticBrainz (PR AB-334). The major aim of the project was to implement both methods of MusicBrainz database access in AcousticBrainz, in particular importing the MusicBrainz database into AcousticBrainz from scratch, and then to decide which method works better when implementing a particular functionality in AcousticBrainz using MusicBrainz data.
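As a hedged illustration of the direct-connection approach (the real helpers live in BrainzUtils and have their own interface; the connection string and selected columns here are just examples):

```python
# Sketch of fetching an artist row straight from the MusicBrainz database.
# The connection string is a placeholder; error handling is omitted.
import psycopg2

def get_artist_by_mbid(mbid):
    conn = psycopg2.connect("dbname=musicbrainz_db user=musicbrainz")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT gid, name, comment FROM musicbrainz.artist WHERE gid = %s",
                (mbid,),
            )
            row = cur.fetchone()
            return {"mbid": row[0], "name": row[1], "comment": row[2]} if row else None
    finally:
        conn.close()
```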

Import the MusicBrainz data into the AcousticBrainz database

MusicBrainz's database contains a huge number of tables, but I analysed how MB data is used in AB and made a list of the tables that we would actually require for our AcousticBrainz integrations. Then I made a PR (AB-338) creating the new tables in the AB database under a MusicBrainz schema. Later, I worked on a big PR (AB-340) which imports the MB data corresponding to every recording present in AcousticBrainz's database and writes it into the tables of the MusicBrainz schema in AB. This PR was really huge, and I had to take care of a lot of integrity constraints and foreign key dependencies.
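Stripped to its essence, the importer copies tables in an order that satisfies the foreign keys. The sketch below is heavily simplified: it copies whole tables with placeholder connection strings, whereas the real importer only copies the rows relevant to recordings present in AB and handles many more tables.

```python
# Simplified sketch of copying MusicBrainz tables into the "musicbrainz"
# schema of the AcousticBrainz database. Parents are imported before children
# so that foreign key constraints are satisfied. DSNs are placeholders.
import psycopg2

IMPORT_ORDER = ["artist", "artist_credit", "artist_credit_name", "recording"]

def copy_table(mb_cur, ab_cur, table):
    mb_cur.execute("SELECT * FROM musicbrainz.{}".format(table))
    columns = [desc[0] for desc in mb_cur.description]
    insert = "INSERT INTO musicbrainz.{} ({}) VALUES ({}) ON CONFLICT DO NOTHING".format(
        table, ", ".join(columns), ", ".join(["%s"] * len(columns)))
    for row in mb_cur:
        ab_cur.execute(insert, row)

def run_import():
    mb_conn = psycopg2.connect("dbname=musicbrainz_db")   # placeholder DSN
    ab_conn = psycopg2.connect("dbname=acousticbrainz")   # placeholder DSN
    with mb_conn.cursor() as mb_cur, ab_conn.cursor() as ab_cur:
        for table in IMPORT_ORDER:
            copy_table(mb_cur, ab_cur, table)
    ab_conn.commit()
```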

Update MB data in AB for every new recording added to AB

Another feature I worked on after importing the MB data was updating the MB data present in AB whenever a new recording is added to the AcousticBrainz database (see PR AB-346), by importing the data from MB's database via the direct connection. While working on a few bug fixes, my mentor Param and I realized that the MB data import was taking a lot more time than expected when I ran the MusicBrainz importer script against the full MB data dumps (around 2.8 GB). So I then worked on making the MusicBrainz importer more efficient and was able to import the data for a few recordings within seconds (see PR AB-348). I had to dig into each table import and find the parts of the code that were making things slow.

To reduce the load on the processor, I added a 5-second sleep to the MusicBrainz importer module before importing data for any new recording (see PR AB-354). During my GSoC period, I learned how important it is to write tests and make them run fast. I wrote tests for almost every script inside the db module, and later wrote tests for the MusicBrainz importer script as well (AB-352).
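The pacing itself is straightforward; a sketch, assuming there is some way to list recordings that still need MB data (the two callables below are placeholders for the real functions):

```python
# Sketch of the importer's pacing: pause a few seconds between recordings so
# the importer does not hog the processor. The callables are placeholders.
import time

SLEEP_SECONDS = 5

def import_pending(get_pending_mbids, import_musicbrainz_data):
    for mbid in get_pending_mbids():
        import_musicbrainz_data(mbid)
        time.sleep(SLEEP_SECONDS)  # wait before the next recording
```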

Apply replication packets to keep MB data in AB updated with the actual MusicBrainz database

Then came another tricky part of this project: updating the MusicBrainz schema data in AB whenever there is any change in the actual MusicBrainz database, whether an update or a deletion. MusicBrainz provides hourly replication packets which describe the changes to the database in a specific period. Replication packets are .tar.bz2 archives with a collection of files in them, and they can be downloaded via the MetaBrainz API. Lukas Lalinsky, a long-time contributor to MetaBrainz projects, the founder of AcoustID and the maintainer of the mbdata Python module, had already worked on applying replication packets to MB data. I made a lot of modifications to his script so that replication packets are applied to the MusicBrainz schema data in AB, keeping the data for the recordings present in AcousticBrainz up to date (see AB-350).
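For a rough idea of what fetching a packet involves (the URL pattern and token handling below are assumptions based on how tools like mbslave fetch packets; parsing the archive and replaying its changes is where the real work lies and is not shown):

```python
# Sketch: download one hourly replication packet from the MetaBrainz API.
# URL pattern and token handling are assumptions; applying the packet to the
# musicbrainz schema tables is the hard part and is not shown here.
import requests

PACKET_URL = "https://metabrainz.org/api/musicbrainz/replication-{seq}.tar.bz2"

def fetch_packet(sequence, token):
    response = requests.get(PACKET_URL.format(seq=sequence), params={"token": token})
    if response.status_code == 404:
        return None  # this packet has not been published yet
    response.raise_for_status()
    return response.content  # raw .tar.bz2 archive describing the changes
```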

Integration with MB database: Use MBID redirect information to get original entity

After working on the direct connection and on importing the MusicBrainz data and keeping it updated, it was time to start writing evaluation scripts to decide which method is better for any given integration we apply in AcousticBrainz. I wrote a script implementing an integration of AB with the MB database that uses the redirect information of an entity and returns the original entity corresponding to the MBID provided (see PR AB-356).
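In essence this boils down to a join against the gid_redirect tables; a sketch for recordings, with a placeholder connection string:

```python
# Sketch: resolve a redirected recording MBID to the current recording using
# the recording_gid_redirect table. The DSN is a placeholder.
import psycopg2

def resolve_recording_mbid(mbid):
    conn = psycopg2.connect("dbname=acousticbrainz")
    try:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT recording.gid, recording.name
                  FROM musicbrainz.recording
                  JOIN musicbrainz.recording_gid_redirect
                    ON recording_gid_redirect.new_id = recording.id
                 WHERE recording_gid_redirect.gid = %s
            """, (mbid,))
            return cur.fetchone()  # None if the MBID is not a redirect
    finally:
        conn.close()
```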

Evaluate both methods of MusicBrainz database access in AcousticBrainz

Now on to the last piece of work of my GSoC period, and the most important one as well. After implementing both methods, we really needed to evaluate them to see which one is more efficient for a specific integration with the MB database. I first wrote an evaluation script which fetches data from the recording and low-level tables. In this case, the difference in time between the two methods turned out to be really large (approximately 70 seconds for around 250+ recordings). So whenever we have to get data from local AB tables as well as MB tables, we would go for the imported-database method, as it turns out to be faster. Next I tested the MBID redirect integration, where I didn't find much difference between the two methods (PR AB-357). But I ran these tests locally; tests in production may yield different results.
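The evaluation scripts essentially time the same lookups done both ways; a stripped-down sketch, with the two lookup implementations passed in as placeholders:

```python
# Sketch of timing the two access methods against the same set of MBIDs.
# The two lookup callables stand in for the real implementations.
import time

def time_method(lookup, mbids):
    start = time.time()
    for mbid in mbids:
        lookup(mbid)
    return time.time() - start

def evaluate(mbids, lookup_direct_connection, lookup_local_tables):
    t_direct = time_method(lookup_direct_connection, mbids)
    t_local = time_method(lookup_local_tables, mbids)
    print("direct connection: {:.1f}s, local tables: {:.1f}s".format(t_direct, t_local))
```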

All in all, it has been an exciting summer. By now I am familiar with a good part of the AcousticBrainz codebase. I really look forward to working on many more integrations with MB data in AcousticBrainz, and I plan to completely remove AB's dependency on the MusicBrainz web service, which would be very useful for our users.

Details of contributions made

By the end of the GSoC coding period, I have opened a total of 39 PRs: 35 pull requests to the AcousticBrainz server, 3 to BrainzUtils and 1 to the AcousticBrainz client, and have made a total of 135 commits (109 in AB, 9 in BU, 3 in AC and 14 in AB master). Of these, the pull requests created and merged during the official GSoC coding period are the PRs to the AcousticBrainz server and the PRs to BrainzUtils.

These last three months were full of thrills, excitement and quite a bit of frustration as well. And this doesn't end here; I'd love to keep contributing in the future and act as a maintainer for the AcousticBrainz project. I believe people should try contributing to open source organizations, as it helps you learn and gain a lot of experience in a short period of time, especially when working through a great platform like Google Summer of Code.

I am really happy working with the awesome MetaBrainz community, and the people here are fantastic. I'd love to stay a part of MetaBrainz in the future as well. So, in the end, a big thanks to my mentor Param Singh, without whose help and support throughout the program it wouldn't have been possible for me to reach the end phase of GSoC; to my organization admin Robert Kaye, AcousticBrainz project lead Alastair Porter and all of the MetaBrainz Foundation community members for choosing me as a GSoC student, giving me such a great opportunity, and being very kind and helpful throughout the program; and to Google for making this all possible. I hope I get a chance to work with you all again!!

Picard 2.0.1 released! (Windows and macOS users rejoice)

Note – There are no changes for Linux users, so they can safely skip this release if they want.

Given the massive feedback about the shortcomings of the Windows and macOS versions of Picard, we decided to do a minor release addressing some of the issues with our executables.

As usual, you can find the latest downloads on Picard’s Website.

The change-log is as follows –

Bug-fix

  • [PICARD-1283] – Fingerprinting not working on macOS in Picard 2.0
  • [PICARD-1286] – Error creating SSL context on Windows

Improvement

  • [PICARD-1290] – Improve slow start up times by moving to a non single file exe
  • [PICARD-1291] – Use an installer for Picard 2.x windows exe

Basically, the Windows executable is now a proper installer and some missing SSL dependencies are bundled with it.

The macOS builds also include the missing AcoustID fingerprinting binary.

The startup time for both the Windows and macOS version has been improved as well.

Have fun tagging your files!

samj1912 signing off o/

 

Picard 2.0 released

Hey people, samj1912 here again o/

This time we are announcing the release of a new Picard!

Official MusicBrainz cross-platform music tagger Picard 2.0 is now out, containing many fixes and new features and much needed upgrades!

The last time we put out a major release was more than 6 years ago (Picard 1.0 in June of 2012), so this release comes with a major back-end update. If you’re in a hurry and just want to try it out, the downloads are available from the Picard website.

If you have been following our Picard-related blog posts, you will know that we switched up our dependencies a bit: Picard now requires at least Python 3.5, PyQt 5.7 or newer, and Mutagen 1.37 or newer. A side effect of this dependency bump is that Picard should look better and in general feel more responsive.

A couple of things to note: with Picard 2.0, the Windows builds will be portable standalone binaries. Also, we will only be supporting 64-bit Windows officially, because of a lack of resources to build a 32-bit image. The macOS requirements were also bumped up for the same reasons, with macOS 10.10 being the lowest supported version.

As such, Picard 1.4.2 will be the last version that supports both 32-bit Windows and macOS 10.7–10.10. You can find it in the Picard downloads section as well.

You can find a detailed change-log on the Picard website.

The highlights of this update are –

  • Retina and Hi-DPI display support
  • Improved performance
  • UI improvements

We would like to thank all the contributors from around the world who helped with this release: Laurent Monin, Sophist, Wieland Hoffmann, Vishal Choudhary, Philipp Wolfer, Calvin Walton, David Mandelberg, Paul Roub, Yagyansh Bhatia, Shen-Ta Hsieh, Ville Skyttä, Yvan Rivierre and also all of our translators!

Be aware that downgrading from 2.0 to 1.4 may lead to configuration compatibility issues – ensure that you have saved your Picard configuration before using 2.0 if you intend to go back to 1.4.

Note:  If you are facing errors while tagging releases on Windows, do take a look at this FAQ about SSL errors.

MusicBrainz Search Overhaul

Hello people o/, samj1912 here.

I am extremely glad to announce that we are finally launching our Solr search on the MusicBrainz beta server!

Just a little history before I announce the new features and toys you get to play with:

Solr started as something that could replace our existing search infrastructure. If you have been a MusicBrainz user for a while, you might know that our search has quite an indexing latency: it can take as much as 3 hours for new edits to show up in the search results, in part because updating the search index involved re-indexing the entire database. With the high latency and the resources it took, the current search server left much to be desired.

Another area where our current search fell short was popularity and result ranking. Searching for a famous artist or place returned results that contained a lot of noise and, more often than not, weren't relevant to what the user had in mind.

These were the two major problems that motivated us to shift to a better infrastructure for our search needs.

Thus, MB-Solr was born.

It has been in development for quite some time now. Coding on the project started with Mineo back in 2014 and was carried forward by Jeff Weeksio in GSoC 2015. But due to a lack of development resources and other, more pressing needs, the project was put on hold for a while, until Roman started working on it. However, he left MetaBrainz before he could finish this work, so when I joined the MetaBrainz team, the first and foremost task assigned to me was getting Solr working and ready for production.

After struggling with multiple moving parts and services, tons of issues with maintaining compatibility with our existing web-service API, rowing up and down multi-threading/processing hell, learning just enough about information retrieval to get our search relevance on point and countless hours sifting through Solr documentation to get our Solr cluster fine-tuned and running fast enough to keep up with our web traffic… we are finally here.

I am pretty sure I would’ve rage-quit dozens of times during this last year if I was doing this all alone.

As such, we have our trusty sysadmin Zas to thank for taking care of all the deployment needs and making sure Solr was well-tested (believe me, we toyed with Solr like little kids in a sandbox) and wasn't going to fail and wake him up at 3 AM with red alerts all over. Mineo, Bitmap and Yvanzo were there with much-needed code reviews and help with all things Solr and MusicBrainz. Our style leader Reosarevok and CatQuest helped us test our new search relevance configuration. And of course, we had our BDFL, Rob, overseeing things and whipping them into shape (with chocolate and mismatched socks, of course).

Anyway, here’s what you are here for:

New features/improvements

  • (Almost) Instantaneous search-index updates – Edit something and immediately see it in the search results. Say goodbye to that note you used to see below the search telling you that you have to wait. Who likes waiting anymore – seriously, it’s 2018.
  • Better search results – We wanted to make sure you were getting the right Queen and London as the top result. You can finally link your favorite artist to London, UK as opposed to London, Arkansas. Don’t believe me? Go try it out.
  • Less load on our servers – Meaning we can serve more of your requests, faster. Getting tired of waiting for tagging your bajillion songs in Picard? Well, you still gotta wait, but less so, now that we are better equipped to handle your requests.

What has stayed the same

  • WS/2 Search API – We know you devs hate doing that extra work to maintain your applications’ compatibility with that one site that changes its API on a whim. Well, we wouldn’t want you to spend those hours following that one int-to-float change that broke everything ever. As such, we have worked hard to make sure that Solr doesn’t change any of our WS/2 search schema (see the example request below).
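For instance, a WS/2 search request works exactly as before; a minimal example (the User-Agent string is a placeholder you should replace with your own application's):

```python
# Minimal WS/2 search request; the endpoint and parameters are unchanged by
# the Solr migration. Replace the User-Agent with your own application's.
import requests

response = requests.get(
    "https://musicbrainz.org/ws/2/artist/",
    params={"query": "artist:queen", "fmt": "json", "limit": 5},
    headers={"User-Agent": "example-app/0.1 ( example@example.org )"},
)
for artist in response.json()["artists"]:
    print(artist.get("score"), artist["name"])
```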

What’s gone

  • WS/1 Search API – We deprecated WS/1 back in 2011. With the new search servers in place, there are only three words for those still using it seven years after its deprecation: ‘poof, it’s gone’. The service still works on our main website, but its search functionality will be phased out soon, and the entire service will be discontinued in August 2018 as announced earlier.

Now, you must be thinking there is some catch, some slip. Well, so do I, which is why we are releasing this beta for you to test the heck out of our new search over at the MusicBrainz beta site. If you haven’t used it before, worry not: it has all your personalizations and all our cool music metadata from the main site, so you should feel at home. (Note: the MusicBrainz beta site works on live data. Any edits you make on the beta site will also be reflected on the main site.)

So please! Go check it out!

If you feel you aren’t getting what we promised, want more of those shiny new features, or think this blog was too long or read like a TV commercial, feel free to complain at our Solr ticket tracker. You get your promised features bug-free and our devs get to earn their living. It’s a win-win.

Happy testing!

Picard 2.0 beta2 announcement

Hello people,

Thank you so much for reporting bugs in our Picard 2.0.0beta1 release. We fixed most of the critical bugs that you guys and gals reported. You can find the beta2 release with the fixes here – Picard 2.0.0.beta2

If you have been following our Picard related blogs, you will know that we decided to release a new stable version of Picard before the beginning of the summer.

To help us, advanced users, translators and developers are encouraged to test this beta release and report any issues they run into.

Note – If any of you are seasoned Windows/macOS devs and have experience with PyInstaller, we need some help with PICARD-1216 and PICARD-1217. We also need some help with code signing Picard for OSX. Hit us up on #metabrainz on freenode for more information. We will be very grateful for any help that you may offer!

A simplified list of changes made since 1.4 can be read here.

Be aware that downgrading from 2.0 to 1.4 may lead to configuration compatibility issues – ensure that you have saved your Picard configuration before using 2.0 if you intend to go back to 1.4.

Our next major challenge: Fixing the MusicBrainz site design for an improved user experience

Back in 1998, when I started playing with Perl and wrote the CD Index (the precursor to MusicBrainz), I was learning web development and had little understanding of web design. The tools I was using were primitive at the time, and the results were cringeworthy and have not withstood the test of time.

Fast forward some 18 years and we’ve arrived at the current MusicBrainz site design. There have been minor facelifts over time, and a bigger one when we released NGS back in 2011, but really the site design hasn’t changed much, and we’ve kept gluing features and new bits of data onto the crappy design, leaving us with the mess of a user experience we know as the modern MusicBrainz.

Our community has been asking us to improve the UX for a long time. We need to:

  • Empower our community with better tools for developing, editing and viewing the magnificent data that we have.
  • Build a stronger foundation for further development, interaction, and extension of our projects in the future.
  • Make our projects more welcoming to newcomers by lowering the learning curve while keeping the workflow of advanced editors intact.

Fortunately for us, Chhavi [a design student from IIT, India] has become an active contributor to the MetaBrainz projects. She has been studying our sites and how we work as a team and has volunteered to drive the process to fix the UI and the user experience issues on the MusicBrainz site. She has proposed a part of this work as her Google Summer of Code project.

Our overall goal as a team is to create a design system which will help the designers and developers stay in sync, give a more unified theme to our projects, and make it easier for new contributors to join our projects. This will also make it much easier for our developers to address your requests for features/bug fixes faster in the future.

We are not barging into your online lives and trying to make our sites pretty; instead, we are focusing on the real experiences you have with them. We held long, detailed conversations during our last summit in Barcelona, where Chhavi was also present, and discussed many of the concerns that might be running through your head while you read this. As part of this initiative, we have been interviewing a number of key members of our project to understand what we and our users really need from this revamp. We have also kept track of community discussions around this topic. From this we decided that our users fall into three broad categories:

  1. There are those who contribute to code and understand database tech.

  2. Experienced/advanced MusicBrainz editors who don’t understand database tech.

  3. New users, who feel hopelessly lost in the current scenario.

To make all this research/discussion/feedback available for everyone to go through, we have started a Design issue type in Jira that tracks all of the design-related tickets for MusicBrainz. The most notable tickets, which show mock-ups of future MusicBrainz pages, include:

When you look at these pages, please keep in mind that we’re trying to clean up the clutter and make things simple and clean, easier to understand for experienced editors and new ones alike. The data that we have should be presented in a way that makes sense. It should also expose the gaps and holes it presently has, so that people can fill them. And the data should be the binding link that lets us exploit the full potential of our other projects, such as ListenBrainz or CritiqueBrainz.

We are not trying to fluff things up and make them look pretty; prettiness might come with the simplicity that we are chasing. Having user flows that do not hamper speed and that make our lives easier is our utmost goal.

That said, we are happy to receive feedback on the upcoming designs as well as the process. If you have any, please post your comments to the appropriate tickets in Jira linked above. We’re currently getting some pressing dev tasks out of the way before we start the actual implementation of the redesign. Once our team is ready to work on this, we will publish more blog posts about how this project will unfold and how it will impact our users.