Replication update: Do you have a DB that could help us?

In my post from yesterday I talked about our continuing struggle to fix our replication stream. Overnight we learned two new things:

  1. The lengthy hard drive recovery process has failed and yielded no useful results. 😦
  2. We have a working DB diff program in place that allows us to create missing replication packets from two DBs at known packet numbers.

#2 is a great step forward in fixing our replication stream, but we’re missing a specific replicated database at a specific point in time. If you have a replicated database, please read on and see if you can help us:

  1. Is your database at replication packet 99847? The way to find out your current replication sequence is to look in slave.log in your server directory or to issue the query
    select current_replication_sequence from replication_control;

    at the SQL prompt.

  2. Do you have a complete replicated MusicBrainz database, including the cover_art_archive schema?
  3. Are you willing to make a dump of the DB and send it to us?
  4. Do you have a fast internet connection to make #3 possible?
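If you have shell access to your replica, the check in step 1 can be scripted. Here is a minimal sketch; the database name `musicbrainz_db` is an assumption (the default for a typical replicated setup), so adjust it to match your installation:

```shell
#!/bin/sh
# The packet number we are looking for (from the post above).
needed=99847

matches_needed() {
  # Succeeds (exit 0) when the given sequence is exactly the one we need.
  [ "$1" = "$needed" ]
}

# On a live replica you would obtain the value with something like
# (the database name is an assumption -- use whatever yours is called):
#   seq=$(psql -At musicbrainz_db -c \
#     "SELECT current_replication_sequence FROM replication_control;")
seq="${MB_SEQ:-99847}"  # illustrative fallback so the sketch runs without psql

if matches_needed "$seq"; then
  echo "match: this database is at packet $needed"
else
  echo "no match: this database is at packet $seq"
fi
```

If it matches, a plain `pg_dump` of the database is one way to produce the dump asked for in step 3.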

If you’ve answered yes to all of the above, please send email to support at metabrainz dot org.

Thanks!

Move to NewHost and Replication Update

It has been a long week since our move to the new hosting provider in Germany. Our move across the Atlantic worked out fairly well in the grand scheme of things. The new servers are performing well, the site is more stable and we have a modern infrastructure for most of our projects.

However, such moves are not without problems. While we didn’t encounter many, the most significant was our failure to copy two small replication packets off the old servers. We didn’t notice this until after the server in question had been decommissioned. Oops.

And thus began a recovery effort almost worthy of a bad Hollywood B-movie plot. Between my traveling and the team finishing the most critical migration bits, it took two days for us to realize the problem and find a volunteer to fetch the drives from the broken server. Only in a small, wealthy place like San Luis Obispo could a stack of recycled servers sit in an open container for two days untouched. My friend collected the drives and immediately noticed that they had been damaged in the recycling process, which isn’t surprising. We can consider ourselves really lucky that these drives didn’t contain private data — the drives that did have been physically destroyed!

Since then, my friend has been working with Linux disk recovery tools to try to recover the two replication packets from the drive. Since it is a 1TB drive, the recovery process takes a while and must run to completion before we can attempt to pull any data off it. For now we wait.

At the same time, we’re actively cobbling together a method to regenerate the lost packets. In theory it is possible, but it involves heroic efforts of stupidity. And we’re expending that effort, but so far, it bears no fruit.

In the meantime, for all of the people who use our replicated (Live Data) feed — you have the following choices:

  1. If you need data updates flowing again as soon as possible, we strongly recommend importing a new data set. A new data dump is available and fresh replication packets are being put out, so you can do this whenever you’re ready.
  2. If the need for updates is not urgent yet and you’d rather not reload the data, sit tight. We’re continuing our stupidly heroic efforts to recover the replication packets.
  3. Chocolate: It really makes everything better. It may not help with your data problems, but at least it takes the edge off.
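For option 1, the reimport boils down to fetching the latest full dump and loading it with the server's own tools. The sketch below is hedged: the mirror path, dump file names, and script paths (`admin/InitDb.pl`, `admin/cron/slave.sh`) are assumptions based on a typical musicbrainz-server checkout, so check them against your installation. It defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Dry run by default: set DRY_RUN=0 to actually execute the commands.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Fetch the latest full export (mirror path is an assumption).
run wget -r ftp://ftp.musicbrainz.org/pub/musicbrainz/data/fullexport/

# 2. Recreate the database and import the dump files
#    (script name and flags are assumptions -- see your checkout's docs).
run ./admin/InitDb.pl --createdb --import mbdump*.tar.bz2 --echo

# 3. Resume pulling hourly replication packets (assumed cron entry point).
run ./admin/cron/slave.sh
```

Running it once with the default `DRY_RUN=1` lets you review the exact commands before committing to a multi-hour import.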

We’re terribly sorry for the hassle in all of this! Our geek pride has been sufficiently dinged that our chocolate coping mechanisms will surely cause us to put on a pound or two.

Stay tuned!

UPDATE 1: The first recovery pass has not located the files, but my friend will do a second pass tomorrow and hand over any file fragments that might allow us to reconstruct them. That won’t be for another 8 hours or so.

Final sprint towards NewHost

So tomorrow is the day when the old servers at Digital West will be put to rest. Our developers and system administrator have been hard at work over the last few weeks (and especially the last few days!) getting everything ready for a hopefully smooth transition to hosting everything at Hetzner in Germany.

The vast majority of our sites and services are in fact already being served from Germany, but the largest one—musicbrainz.org—remains in the US for another night.

Our transition of the final services, such as musicbrainz.org, has already started, but the very last sprint of moving the remaining bits over will start tomorrow, November 8th, at around 6 AM EST / noon UTC / 13:00 CET and will follow the plan laid out in this Google document:
https://docs.google.com/document/d/1MgqZ4hiKC0MZJ400ZCOD8JlcjBjh4GNpakauf51YKQQ/edit?usp=sharing

Expect some downtime, plenty of read-only time, and general wonkiness as we get the last gears set in place to hopefully keep us running smoothly for the next several years to come!

Server update, 2016-10-24

Today we have more URL cleanup and general link fixes from chirlu & yvanz. Thanks to those two again. 🙂 This’ll be the last musicbrainz-server release before we move to our new hosting facility. But that move should be done within two weeks, so hopefully the next release won’t be delayed. Thanks again for your patience.

The git tag is v-2016-10-24.

Bug

  • [MBS-7164] – URL cleanup doesn’t allow iTunes “Song” download links
  • [MBS-9044] – iTunes: cleanup “geo.itunes” and block/cleanup “linkmaker.itunes” links

Improvement

  • [MBS-6314] – iTunes sidebar links should specify country
  • [MBS-8802] – Clean up iTunes audiobook and podcast URLs
  • [MBS-9079] – Clean up Wikidata URLs with “/entity”

Task

  • [MBS-8269] – Disallow Commons gallery pages and categories
  • [MBS-9078] – Improve URL cleanup for Wikimedia Commons
  • [MBS-9087] – Link to current VIAF URL format

Server update, 2016-10-10

With a looming deadline for switching to our new hosting facilities (early November), we didn’t have any release last month, and this one is several days late. But we’ve finally got a release out, with some good contributions from chirlu and yvanz, especially in the area of URL validation and cleanup. Much of yvanz’s work went into refactoring our URL cleanup code to make it stricter and more robust. Thanks, chirlu & yvanz! You can view the complete changelog below.

The git tag is v-2016-10-10.

Bug

  • [MBS-8744] – Block Generasia URLs from Releases
  • [MBS-9069] – Submitting cover art with a long comment fails silently

Improvement

  • [MBS-8796] – Set a referrer policy for HTTPS pages
  • [MBS-9012] – New Discogs image URLs need cleaning up into release URLs
  • [MBS-9053] – Clean up commons.m.wikimedia.org into the non-mobile site
  • [MBS-9072] – Verify that tport is an integer

Task

  • [MBS-5733] – Allow “otherdatabases” to be validated in URLCleanup
  • [MBS-5736] – Test that the JavaScript catches bad URL entries
  • [MBS-6378] – Add tests for validationRules constraints in URLCleanup.js

New MusicBrainz test VM available

There is a new test VM available for anyone who would like to try the latest, possibly not fully debugged, VM. I’m not sure why the VM is nearly 20GB larger than the previous one, while containing roughly the same stuff, but that is what we’re stuck with for this test. I’ll try harder to minimize the size for the final build.

Grab the VM here.

Read these Usage tips.

IMPORTANT: Please ignore the usage tips published on the wiki — they do not apply to this release. For the next release I’ll try and match more of the characteristics of the last version. Do read the usage tips above!

File a bug here.

Server update, 2016-08-29

This is another small release containing URL relationship fixes, so I’ll let the changelog speak for itself. 🙂 Thanks again to yvanz and chirlu for their contributions. The git tag is v-2016-08-29.

Bug

  • [MBS-8793] – Query string is not removed from Twitter links
  • [MBS-9048] – Untouched obsolete URL relationships trigger errors in the editing process
  • [MBS-9057] – ISE for artist with PayPalMe URL