Category Archives: Uncategorized

Welcome to the team, Elizabeth!

I’m very happy to announce that we have a brand new Supporter Catalyst on our team. Elizabeth Bigger, AKA Quesito, joined our team at the beginning of the year and is now coming up to speed.

Her duties include making contact with any supporters who sign up on the MetaBrainz site and sorting out any questions they may have about working with a quirky organization like MetaBrainz. She’ll also be reaching out to established customers to make sure that they are on the right support level and that things are working smoothly for them.

I also anticipate her helping out with other tasks, such as putting on our annual summit and other events we may hold at our office in Barcelona.

Welcome on board, Elizabeth!

Server update, 2016-12-19

This release features code from GCI student dpmittal, who fixed four of the tickets below under our mentorship. One of those tickets was for displaying the excellent artist icons that former GCI student (and current mentor) gcilou created. Those icons are displayed to the left of the name at the top of artist pages (examples: person, group, choir, orchestra, character, other). Nice work, gcilou and dpmittal! We also have various fixes and improvements thanks to chirlu and Zastai, listed below.

The git tag is v-2016-12-19.

Sub-task

  • [MBS-4159] – Vimeo relationship under the External links section

Bug

  • [MBS-7009] – Exception if replication type is slave but no data in replication_control
  • [MBS-8268] – Ratings (stars) display does not update on its own
  • [MBS-9117] – CD Stub track count not serialized correctly

New Feature

  • [MBS-8359] – Add “Guess Case” function for Event names

Task

  • [MBS-8870] – Add Setlist.fm links to the sidebar

Improvement

  • [MBS-1352] – Different icon for Unknown/Person/Group on Artist pages
  • [MBS-8542] – Blacklist Jaikoz from making barcode edits

Replication update: Do you have a DB that could help us?

In my post from yesterday I talked about our continuing struggle to fix our replication stream. Overnight we learned two new things:

  1. The lengthy hard drive recovery process has failed and yielded no useful results. 😦
  2. We have a working DB diff program in place that allows us to create missing replication packets from two DBs at known packet numbers (a rough sketch of the idea follows).
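
To give a rough idea of how such a diff works, here is a minimal sketch. It is only an illustration of the approach, not our actual tool, and it assumes both snapshots have been loaded into a single PostgreSQL cluster under the hypothetical schema names snapshot_old and snapshot_new (the artist table is used as the example):

    -- snapshot_old / snapshot_new are hypothetical schema names for the two loaded dumps
    -- Rows that are new or changed in the newer snapshot
    SELECT * FROM snapshot_new.artist
    EXCEPT
    SELECT * FROM snapshot_old.artist;

    -- Rows that were deleted, or the old versions of changed rows
    SELECT * FROM snapshot_old.artist
    EXCEPT
    SELECT * FROM snapshot_new.artist;

Repeating this for every replicated table gives the row-level changes between the two packet numbers, which is the raw material from which the missing replication packets can be reconstructed.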

#2 is a great step forward in fixing our replication stream, but we still need a copy of a replicated database from a specific point in time. If you have a replicated database, please read on and see if you can help us:

  1. Is your database at replication packet 99847? The way to find out your current replication sequence is to look in slave.log in your server directory or to issue the query
    select current_replication_sequence from replication_control;

    at the SQL prompt.

  2. Do you have a complete replicated MusicBrainz database, including the cover_art_archive schema? (If you’re not sure, see the check sketched after this list.)
  3. Are you willing to make a dump of the DB and send it to us?
  4. Do you have a fast internet connection to make #3 possible?
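
If you want to double-check #2, the queries below are one quick way to do it. The table name cover_art_archive.cover_art is assumed from the standard Cover Art Archive schema, so adjust it if your setup differs:

    -- Is the cover_art_archive schema present in this database?
    SELECT EXISTS (
        SELECT 1
          FROM information_schema.schemata
         WHERE schema_name = 'cover_art_archive'
    ) AS has_caa_schema;

    -- Rough sanity check that the schema actually contains data
    -- (cover_art_archive.cover_art is assumed to be the main CAA table)
    SELECT count(*) FROM cover_art_archive.cover_art;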

If you’ve answered yes to all of the above, please send email to support at metabrainz dot org.

Thanks!

Move to NewHost and Replication Update

It has been a long week since our move to the new hosting provider in Germany. In the grand scheme of things, the move across the Atlantic worked out fairly well: the new servers are performing well, the site is more stable, and we have a modern infrastructure for most of our projects.

However, such moves are not without problems. The most significant one we encountered was the failure to copy two small replication packets off the old servers, and we didn’t notice it until after the server in question had been decommissioned. Oops.

And thus began a recovery effort that is almost worthy of a bad Hollywood B-movie plot. Between my traveling and the team finishing the most critical migration bits, it took 2 days for us to realize the problem and find a volunteer to fetch the drives from the broken server. Only in a small and wealthy place such as San Luis Obispo could a stack of recycled servers sit in an open container for 2 days and not be touched at all. My friend collected the drives and immediately noticed that they had been damaged in the recycling process, which isn’t surprising. And we can consider ourselves really lucky that these drives didn’t contain private data, since drives that do are physically destroyed!

Since then, my friend has been working with Linux disk recovery tools to try to recover the two replication packets from the drive. Given that he is working with a 1TB drive, the recovery process takes a while and must run to completion before we can attempt to pull any data off it. For now, we wait.

At the same time, we’re actively cobbling together a method to regenerate the lost packets. In theory it is possible, but it involves stupidly heroic efforts. We’re expending that effort, but so far it has borne no fruit.

In the meantime, those of you who use our replicated (Live Data) feed have the following choices:

  1. If you need data updates flowing again as soon as possible, we strongly recommend importing a new data set. A new data dump is available and fresh replication packets are being put out, so you can do this whenever you’re ready (a quick sanity check for your new mirror follows this list).
  2. If the need for updates is not urgent yet and you’d rather not reload the data, sit tight. We’re continuing our stupidly heroic efforts to recover the replication packets.
  3. Chocolate: It really makes everything better. It may not help with your data problems, but at least it takes the edge off.
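
If you do go the re-import route, here is one quick way to confirm that your fresh mirror is past the gap. The column names are taken from the standard replication_control table, so double-check them against your own schema:

    -- On a freshly imported mirror, the replication sequence should be well past the lost packets
    SELECT current_schema_sequence,
           current_replication_sequence,
           last_replication_date
      FROM replication_control;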

We’re terribly sorry for the hassle in all of this! Our geek pride has been sufficiently dinged that our chocolate coping mechanisms will surely cause us to put on a pound or two.

Stay tuned!

UPDATE 1: The first recovery examination has not located the files, but my friend will do a second pass tomorrow and turn over any file fragments that might allow us to reconstruct them. That won’t be for another 8 hours or so.

Server update, 2016-10-24

Today we have more URL cleanup and general link fixes from chirlu & yvanz. Thanks to those two again. 🙂 This’ll be the last musicbrainz-server release before we move to our new hosting facility. But that move should be done within two weeks, so hopefully the next release won’t be delayed. Thanks again for your patience.

The git tag is v-2016-10-24.

Bug

  • [MBS-7164] – URL cleanup doesn’t allow iTunes “Song” download links
  • [MBS-9044] – iTunes: cleanup “geo.itunes” and block/cleanup “linkmaker.itunes” links

Improvement

  • [MBS-6314] – iTunes sidebar links should specify country
  • [MBS-8802] – Clean up iTunes audiobook and podcast URLs
  • [MBS-9079] – Clean up Wikidata URLs with “/entity”

Task

  • [MBS-8269] – Disallow Commons gallery pages and categories
  • [MBS-9078] – Improve URL cleanup for Wikimedia Commons
  • [MBS-9087] – Link to current VIAF URL format

New MusicBrainz test VM available

There is a new test VM available for anyone who would like to try the latest, possibly not fully debugged, build. I’m not sure why this VM is nearly 20GB larger than the previous one while containing roughly the same stuff, but that is what we’re stuck with for this test. I’ll try harder to minimize the size for the final build.

Grab the VM here.

Read these Usage tips.

IMPORTANT: Please ignore the usage tips published on the wiki; they do not apply to this release. For the next release I’ll try to match more of the characteristics of the last version. Do read the usage tips linked above!

File a bug here.

Server update, 2016-08-29

This is another small release containing URL relationship fixes, so I’ll let the changelog speak for itself. 🙂 Thanks again to yvanz and chirlu for their contributions. The git tag is v-2016-08-29.

Bug

  • [MBS-8793] – Query string is not removed from Twitter links
  • [MBS-9048] – Untouched obsolete URL relationships trigger errors in the editing process
  • [MBS-9057] – ISE for artist with PayPalMe URL