A recap of my Google Summer of Code 2025 project to build Libretto, a native Matrix chat archiver for MetaBrainz, detailing the journey, the achievements, and what’s next.
Hello! I’m Jade Ellis, AKA JadedBlueEyes. You might know me from my project with MetaBrainz last year – if not, I’m happy to have the chance to introduce myself. I’m an undergraduate Computer Science student at the University of Kent in England, a music enthusiast and (in my spare time) a climber.
The Setting
In September 2024, MetaBrainz switched from IRC to Matrix as our primary form of communication. Matrix is a more feature-rich alternative to IRC, with capabilities like replies, edits, and reactions, while still being open source and aligning with the principles of our project.
When MetaBrainz primarily used IRC, we had a piece of software called BrainzBot. This was a multi-functional Python app that, most importantly, created a web-accessible archive of all messages in the MetaBrainz channels. Thanks to the bridges between IRC and Matrix, BrainzBot continued to trundle along, but it couldn’t understand modern features like edits, replies, or media. The code itself was also becoming decrepit—a fork of an abandoned project, showing its age.
This led to my GSoC project: to build a replacement for BrainzBot’s archival function – a chat archiver that natively understands and preserves Matrix’s rich features.
I’m Hemang Mishra (hemang-mishra on IRC and hemang-mishra on GitHub). I’m currently a pre-final year student at IIIT Jabalpur, India. This summer, I had the opportunity to participate in Google Summer of Code with MetaBrainz. My mentor for the program was Jasjeet Singh (jasje on IRC).
I contributed to ListenBrainz Android, where I worked on revamping the onboarding experience, improving login, adding listening-app selection for listen submission, integrating Listening Now, and setting up app updates. The journey has been both exciting and full of learning, and I’m truly grateful for this opportunity.
Project Overview
ListenBrainz is a powerful platform that helps track listening history, share music tastes, and build a community around music.
The main goals of my project were:
Revamping onboarding – introducing users to the app’s core features and handling permissions with clear rationale.
Improving login – replacing simple web pages with a custom Compose-based UI, and experimenting with the DOM tree of the web page to automate form submissions and token extraction in the background.
Listen submission apps – prompting users during onboarding to select which apps to collect listens from, preventing unwanted submissions.
Listening Now integration – adding “Listening Now” into BrainzPlayer.
App updates – enabling updates for both Play Store and non-Play Store (F-Droid or sideloaded) releases.
What I did
Community Bonding Period
During the community bonding period, I worked on Figma designs for the project. These designs went through several iterations with aerozol, which really helped refine the final look and flow. Alongside this, I explored some newly released libraries, such as the new Nav3 API. This API provided deeper access to the backstack, which turned out to be crucial in creating smoother animations and handling tricky edge cases throughout the onboarding process.
Coding Period
Onboarding Revamp
Revamping the onboarding experience with smoother and more intuitive designs was one of the most important parts of the project. Onboarding is the very first interaction a user has with the app, and it needs to clearly introduce the core features.
To achieve this, I implemented a HorizontalPager for seamless transitions between screens.
Core Challenges
The biggest challenge was handling navigation across different scenarios. For example:
A new user going through onboarding for the first time.
A returning user who is already logged in.
Permissions that might already be granted (in which case, those screens should be skipped).
Most importantly, handling backward movement—allowing the user to go back smoothly after completing onboarding.
Solution
To solve this, I designed a queue system that works alongside the existing backstack (a stack). This queue is initialized right at the start, keeping all edge cases in mind. Whenever a user presses the back button, the corresponding screen is added back into this queue, ensuring backward navigation is handled effectively:
fun onboardingNavigationSetup(dashBoardViewModel: DashBoardViewModel) {
    if (!dashBoardViewModel.appPreferences.onboardingCompleted) {
        onboardingScreensQueue.addAll(
            listOf(
                NavigationItem.OnboardingScreens.IntroductionScreen,
                NavigationItem.OnboardingScreens.LoginConsentScreen,
                NavigationItem.OnboardingScreens.LoginScreen,
                NavigationItem.OnboardingScreens.PermissionScreen,
                NavigationItem.OnboardingScreens.ListeningAppScreen
            )
        )
    }
    // Handling all edge cases here
}
For efficient and scalable permission handling, I created a Permission Enum that contains all permission-related logic in one place. This way, adding a new permission only requires updating the enum, not the UI. There’s also an option to skip explicit mentions if a permission doesn’t need to be shown on screen.
enum class PermissionEnum(
    val permission: String,
    val title: String,
    val permanentlyDeclinedRationale: String,
    val rationaleText: String,
    val image: Int,
    val minSdk: Int,
    val maxSdk: Int? = null
) {}
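To make this concrete, here’s a minimal, self-contained sketch of how one entry could slot in. The entry name, strings, placeholder image value, and the isApplicable helper are illustrative stand-ins, not the app’s actual code:
import android.os.Build

// Illustrative sketch only; the real enum lives in the app with its own
// entries, strings, and drawable resources.
enum class PermissionEnum(
    val permission: String,
    val title: String,
    val permanentlyDeclinedRationale: String,
    val rationaleText: String,
    val image: Int,
    val minSdk: Int,
    val maxSdk: Int? = null
) {
    NOTIFICATION_LISTENER(
        permission = "android.permission.BIND_NOTIFICATION_LISTENER_SERVICE",
        title = "Notification access",
        permanentlyDeclinedRationale = "Enable notification access from system settings.",
        rationaleText = "Needed to detect songs playing in other apps.",
        image = 0, // placeholder; the app passes a real drawable resource id
        minSdk = Build.VERSION_CODES.JELLY_BEAN_MR2 // notification listeners exist since API 18
    );

    // All permission logic can live here, e.g. whether it applies on this device.
    fun isApplicable(sdk: Int = Build.VERSION.SDK_INT): Boolean =
        sdk >= minSdk && (maxSdk == null || sdk <= maxSdk)
}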
Improved Login
The login flow in the earlier versions of the app was… let’s just say functional but not friendly. It relied entirely on MusicBrainz authentication: it redirected the user to the ListenBrainz settings page and extracted tokens from there. It worked, but it wasn’t the smoothest experience, especially because users had to manually go through WebViews.
What Changed
The core authentication process is still the same, but I completely rebuilt the UI using Jetpack Compose. The big improvement is that now, instead of forcing users through clunky WebViews, those WebViews are handled quietly in the background with JavaScript.
What the user sees is just a clean Compose-based login screen, while all the redirects and token extractions happen invisibly behind the scenes.
Handling the Login Flow
I created a sealed class to represent the different states of login. Since I was handling so many background events with JavaScript, having clear states was the only way to manage everything gracefully:
sealed class LoginState {
    data object Idle : LoginState()
    data class Loading(val message: String) : LoginState()
    data object SubmittingCredentials : LoginState()
    data object AuthenticatingWithServer : LoginState()
    data object VerifyingToken : LoginState()
    data class Error(val message: String) : LoginState()
    data class Success(val message: String) : LoginState()
}
The flow now looks like this:
User starts at https://musicbrainz.org/login and submits credentials. (At this point, they’re not authenticated with ListenBrainz yet.)
They’re redirected to a consent screen. Since I already show my own consent screen inside the app, I quietly skip past this step by redirecting to https://listenbrainz.org/login/musicbrainz. Now the user is authenticated to ListenBrainz as well.
After this, the user is automatically redirected to the ListenBrainz home page, and I redirect to https://listenbrainz.org/settings.
From there, the app extracts the auth tokens directly.
Here’s a snippet of the logic:
private fun handleListenBrainzNavigation(view: WebView?, uri: Uri) {
    when {
        // Step 1: Redirect to login endpoint
        !hasTriedRedirectToLoginEndpoint -> {
            Logger.d(TAG, "Redirecting to login endpoint")
            hasTriedRedirectToLoginEndpoint = true
            view?.loadUrl("https://listenbrainz.org/login/musicbrainz")
        }
        // Step 2: Navigate to settings to get token
        !hasTriedSettingsNavigation -> {
            Logger.d(TAG, "Navigating to settings page")
            hasTriedSettingsNavigation = true
            view?.postDelayed({ view.loadUrl("https://listenbrainz.org/settings") }, 2000)
        }
        // Step 3: Extract token from settings page
        uri.path?.contains("/settings") == true -> {
            onLoad(Resource.loading())
            Logger.d(TAG, "Extracting token from settings page")
            view?.postDelayed({
                extractToken(view)
            }, 2000)
        }
    }
}
Smoother User Experience
To make things feel more transparent, the UI now shows exactly what’s happening:
when credentials are being submitted,
when the server is authenticating,
when the token is being extracted,
and when validation succeeds or fails.
I even added a timeout option so if something goes wrong (like a network hiccup), users don’t just sit there forever — they can report the issue or retry.
Automating the WebView
To streamline login without leaving the app, we automate interactions inside a WebView. This lets us securely handle the real MusicBrainz website, detect when pages are ready, and programmatically manage inputs and redirects, all while keeping the process seamless for the user.
When the user taps “Login,” the app opens the official MusicBrainz login URL inside a WebView. This is the real website displayed within the app, not a fake screen. Using the WebView ensures a secure, familiar login experience while allowing the app to interact programmatically with the page as needed.
Each time a webpage finishes loading in the WebView, the onPageFinished callback triggers. This acts as a clear signal that the page is fully ready for interaction. By listening to this event, the app knows exactly when to proceed with the next step, like injecting scripts or monitoring page redirects.
After the page loads, JavaScript is injected into the DOM (Document Object Model). This allows the app to interact programmatically with page elements, such as filling in the username, entering the password, or clicking the “Login” button. It simulates a user’s actions while keeping everything automated and seamless.
Once login completes, MusicBrainz redirects through intermediary pages until reaching the authorization success screen. At this point, the injected script captures the authorization code or token directly from the DOM. This process stays fully automated while still relying on the real website, ensuring authentication is secure and standards-compliant.
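As a rough sketch of how these pieces fit together (this is not the app’s exact client; the class and callback names are mine, and loginScript is the script shown just below):
import android.webkit.WebView
import android.webkit.WebViewClient

// Once the login page finishes loading, inject the automation script.
class AutomatedLoginClient(
    private val loginScript: String,
    private val onResult: (String?) -> Unit
) : WebViewClient() {
    override fun onPageFinished(view: WebView, url: String) {
        super.onPageFinished(view, url)
        if (url.contains("musicbrainz.org/login")) {
            // Runs inside the page's DOM; the string the script returns is
            // delivered asynchronously to the callback.
            view.evaluateJavascript(loginScript) { result -> onResult(result) }
        }
    }
}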
Here’s a little glimpse of the script that runs in the background. It automatically fills in the login form and submits it, so users never have to deal with the raw MusicBrainz login page themselves:
val loginScript = """
    (function(){
        try {
            var formContainer = document.getElementById('page');
            if (!formContainer) return "Error: Form container not found";
            var usernameField = document.getElementById('id-username');
            var passwordField = document.getElementById('id-password');
            if (!usernameField) return "Error: Username field not found";
            if (!passwordField) return "Error: Password field not found";
            usernameField.value = '$username';
            passwordField.value = '$password';
            var form = formContainer.querySelector('form');
            if (!form) return "Error: Form not found";
            form.submit();
            return "Login submitted";
        } catch (e) {
            return "Error: " + e.message;
        }
    })();
""".trimIndent()
Consent Screen
To stay consistent with ListenBrainz itself, I also implemented a consent screen inside the app. The content of this screen is fetched directly from the ListenBrainz website, so it’s always up to date and doesn’t require hardcoding.
Listening Apps Selection
One of the biggest issues in the earlier version of the app was that the listening apps selection wasn’t part of onboarding at all. This led to a lot of unwanted submissions and confusion.
On top of that, there were a few more headaches:
Permissions for reading notifications and handling battery optimization weren’t explained well.
Users weren’t given the choice to completely disable listen submission at the time of onboarding.
The list of available apps wasn’t even ready at startup, which felt clunky.
The Fix
To solve this, I redesigned the onboarding flow. Now, right at the start, users are:
Introduced to what Listen Submission is.
Asked if they want to enable it.
Prompted for the necessary permissions in a clean, step-by-step manner.
This way, users are in control from the beginning.
Behind the scenes, I used two DataStore preferences to keep things clear:
val listeningWhitelist: DataStorePreference<List<String>>
val listeningApps: DataStorePreference<List<String>>
listeningApps → all the apps that the system recognizes as music apps.
listeningWhitelist → the smaller list of apps the user actually chooses to allow.
So the decision of what counts as a “listenable” app is left completely to the user.
How We Detect Music Apps
Fetching music apps reliably turned out to be trickier than it looks. I ended up using a two-step approach:
Check for specific media services – "android.media.browse.MediaBrowserService" and "android.media.session.MediaSessionService". These are good indicators that an app is a music player.
Check the app category – if (category == ApplicationInfo.CATEGORY_AUDIO || category == ApplicationInfo.CATEGORY_VIDEO). This helps catch media apps that don’t explicitly expose the services above.
Even with these two checks, some apps still slip through. So as a fallback, I also query all installed apps using an intent:
val intent = Intent(Intent.ACTION_MAIN).apply {
    addCategory(Intent.CATEGORY_LAUNCHER)
}
packageManager.queryIntentActivities(intent, 0).forEach { resolveInfo ->
    try {
        val appInfo = packageManager.getApplicationInfo(resolveInfo.activityInfo.packageName, 0)
        apps.add(appInfo)
    } catch (e: Exception) {
        // Log.e takes a tag as well as a message (TAG is defined in the class)
        Log.e(TAG, "Could not fetch ApplicationInfo for ${resolveInfo.activityInfo.packageName}")
    }
}
From there, users can open a bottom sheet that lists all apps, search through them, and select multiple at once if needed. This wasn’t possible earlier, and it makes the experience much smoother.
Permissions and Rationale
For listen submission to work properly, the app needs two critical permissions:
Read Notifications – permission = "android.permission.BIND_NOTIFICATION_LISTENER_SERVICE". This lets ListenBrainz detect songs from other apps and submit them automatically. Without it, automatic tracking simply won’t work.
Ignore Battery Optimization – permission = "android.settings.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS". With optimization enabled, background submissions might fail or get delayed. Disabling it ensures that listens are sent reliably in the background.
In the new onboarding, these permissions are explained with clear rationale screens, so users understand why they’re being asked. If permissions are denied or permanently declined, the app guides the user gently, instead of just failing silently.
Listening Now Integration into BrainzPlayer
For anyone new to ListenBrainz, Listening Now is a feature that shows what a user is currently playing in real time. I wanted to bring this into the existing BrainzPlayer so users can see their live playback sync right inside the app.
To get this working, I first used the already available socket repository. To initialize the state, I made a simple API call:
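For reference, the public ListenBrainz endpoint that returns this data is the playing-now lookup:
GET https://api.listenbrainz.org/1/user/{user_name}/playing-now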
This returns the currently playing track for a user. Once I had that, I set up a connection with the ListenBrainz socket API and kept it alive so the player stays in sync with any new listens.
For connecting to the socket, we did something like this:
private val socket: Socket = IO.socket(
    "https://listenbrainz.org/",
    IO.Options.builder().setPath("/socket.io/").build()
)
We then listen for three main events: "connect", "listen", and "playing-now".
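A rough sketch of that wiring, using the socket created above (the subscribe handshake and handler bodies are illustrative, not the app’s exact code):
import io.socket.client.Socket

fun startListening(socket: Socket, username: String, onEvent: (String, Any?) -> Unit) {
    socket.on(Socket.EVENT_CONNECT) {
        // Subscribe to this user's events; the exact payload is an assumption.
        socket.emit("json", username)
    }
    socket.on("playing-now") { args -> onEvent("playing-now", args.firstOrNull()) }
    socket.on("listen") { args -> onEvent("listen", args.firstOrNull()) }
    socket.connect()
}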
On top of just showing the track, I wanted to make the UI feel alive. I used the Palette library to extract colors from the album art and apply them as dynamic backgrounds. This was inspired by the mobile version of the web player. To do this, I created a util function so it could be reused anywhere in the app. The bitmaps were fetched using Coil and then passed into Palette:
val palette = Palette.from(bitmap).generate()
val lightColor = palette.vibrantSwatch?.rgb ?: palette.mutedSwatch?.rgb
So the player background now shifts its mood based on the song you’re listening to.
The integration itself sits on top of the existing BackdropScaffold, which exposes two states: concealed and revealed. One important rule we made was that if BrainzPlayer is already playing a song, it overrides the Listening Now screen.
I also animated the bottom app bar so it feels smooth when switching between modes. Figuring out the right animation logic took me a bit of trial and error. At first, I tied AnimatedVisibility’s targetState directly to the current state of the BackdropScaffold, but that didn’t behave correctly. After some fiddling, I realized I should be checking the target value instead of the current value, which finally made the transitions smooth.
That little switch from currentState to targetValue made all the difference in getting animations to feel natural.
App Updates
This was honestly the trickiest part of the entire project for me. The main challenge was that the app can be installed in three different ways — through the Play Store, F-Droid, or by sideloading. Each of these had to be handled differently, so I had to think carefully about the update flow.
The first step was figuring out where the app was installed from. For that, I used the package manager to check the installer package name:
val installerPackageName = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
    val installSourceInfo = packageManager.getInstallSourceInfo(packageName)
    installSourceInfo.installingPackageName
} else {
    packageManager.getInstallerPackageName(packageName)
}
If the installer package name was com.android.vending or com.google.android.feedback, I knew the app came from the Play Store. Otherwise, I had to treat it as a non-Play Store install.
Play Store Updates
For Play Store installs, I used the Play Core API, which luckily makes in-app updates a lot easier. I added three main functionalities around flexible updates:
suspend fun checkPlayStoreUpdate(activity: ComponentActivity): Boolean
suspend fun startPlayStoreFlexibleUpdate(
    activity: ComponentActivity,
    onUpdateProgress: (Int) -> Unit,
    onUpdateDownloaded: () -> Unit,
    onUpdateError: (String) -> Unit
): Boolean
suspend fun completePlayStoreFlexibleUpdate(activity: ComponentActivity): Boolean
Checking for updates – I used AppUpdateManager to get an appUpdateInfo object. Since the API is callback-based, I wrapped it with suspendCancellableCoroutine so I could work with it more cleanly in a flow-based setup (see the sketch after this list).
Starting the update – here I used appUpdateManager.startUpdateFlowForResult() to kick things off.
Completing the update – finally, appUpdateManager.completeUpdate() is used to finish the update process once everything’s downloaded.
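Here’s a minimal sketch of that callback-to-coroutine bridge for the first step; the extension function name and the simplified error handling are mine, not the app’s:
import com.google.android.play.core.appupdate.AppUpdateManager
import com.google.android.play.core.install.model.UpdateAvailability
import kotlinx.coroutines.suspendCancellableCoroutine
import kotlin.coroutines.resume

// Convert the Task-based appUpdateInfo callback into a suspend call.
suspend fun AppUpdateManager.isUpdateAvailable(): Boolean =
    suspendCancellableCoroutine { cont ->
        appUpdateInfo
            .addOnSuccessListener { info ->
                cont.resume(info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE)
            }
            .addOnFailureListener { cont.resume(false) }
    }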
This flow gives a smooth, native experience for users updating through the Play Store.
Non-Play Store Updates
Now, this part was… tricky. 😅 For non-Play Store installs (F-Droid or sideloaded), I had to come up with a completely custom flow.
I used the GitHub API to check if a newer version was available. Once a release is found, the app compares it with the current version. If the new version is higher, the user is prompted to update. I also added an option for users to opt into pre-releases, which meant I had to fetch all releases and then run my comparison logic.
Once the user agrees to update, I trigger a download using the Download Manager. To track progress, I set up a BroadcastReceiver. One issue I hit was: what if the user leaves the app mid-download? To handle that, I cached the download ID locally. On the next startup, the app checks if an update was already in progress or if the APK was already downloaded.
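A condensed sketch of that download flow (cacheDownloadId is a hypothetical persistence helper standing in for the real caching logic):
import android.app.DownloadManager
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.net.Uri

// Hypothetical helper: persist the ID (e.g. in DataStore) so a restarted
// app can resume tracking this download.
fun cacheDownloadId(context: Context, id: Long) { /* ... */ }

fun startApkDownload(context: Context, apkUrl: String): Long {
    val manager = context.getSystemService(Context.DOWNLOAD_SERVICE) as DownloadManager
    val request = DownloadManager.Request(Uri.parse(apkUrl))
        .setNotificationVisibility(DownloadManager.Request.VISIBILITY_VISIBLE)
        .setDestinationInExternalFilesDir(context, null, "update.apk")
    val downloadId = manager.enqueue(request)
    cacheDownloadId(context, downloadId)
    return downloadId
}

// Fires when DownloadManager finishes a download; compare the ID against
// the cached one before moving on to the install step.
class DownloadCompleteReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val id = intent.getLongExtra(DownloadManager.EXTRA_DOWNLOAD_ID, -1L)
        // If id matches the cached download, proceed to install.
    }
}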
Another important piece here was install permissions. On Android O and above, apps need explicit permission to install other apps. So I added a check:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
    val packageManager = getApplication<Application>().packageManager
    val hasInstallPermission = packageManager.canRequestPackageInstalls()
    _uiState.update {
        it.copy(isInstallPermissionGranted = hasInstallPermission)
    }
} else {
    // For older versions, permission is granted by default
    _uiState.update {
        it.copy(isInstallPermissionGranted = true)
    }
}
If the permission isn’t granted, I show a dialog prompting the user to enable it. Once granted, the update can proceed.
Finally, for the actual installation, I trigger an install intent:
val intent = Intent(Intent.ACTION_VIEW).apply {
    setDataAndType(uri, "application/vnd.android.package-archive")
    addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
}
This flow ensures that no matter how the app was installed, users get proper update support. And honestly, getting this part to work smoothly felt super rewarding because I had to juggle so many edge cases.
I also made sure updates are checked both at app startup and through a manual “Check for Updates” option in settings, so users always have control.
One important piece that’s still pending is thorough testing of the app updates, especially for the Play Store side. This is a bit tricky since it requires creating release APKs through a Play Console account.
Apart from that, I’d also like to work on my post-GSoC plans:
Adding a feature to delete unwanted listens directly from the app (which isn’t available in the Android app yet).
Building a playlist search feature to make it easier for users to find playlists quickly.
Final Thoughts
I’m truly grateful for this amazing opportunity and for the constant guidance from jasje, whose mentorship made a huge difference throughout the program.
This experience taught me how to write professional-quality code—code that puts the user’s perspective first and pays attention to the smallest details. It also gave me a clearer picture of how industry-level software works and how to contribute to it effectively.
Finally, I want to thank the entire MetaBrainz community for their warmth, support, and encouragement during this journey. I really hope our users enjoy these updates as much as I enjoyed building them!
Hello, my name is Shaik Junaid (IRC nick fettuccinae and fettuccinae on GitHub). I’m an undergrad computer science student from MGIT, Hyderabad, India. My project focused on adding a central notification system for MetaBrainz.
Project Overview
This project’s idea was suggested to me by mentor @ruaok (AKA mayhem on IRC). I submitted my proposal on the MetaBrainz Forum, got it reviewed by @kartikohri13 (AKA lucifer on IRC), and finally got selected for GSoC 2025.
A centralized notification management system will let various MetaBrainz projects send notifications to users without rewriting boilerplate code. It will also keep users informed about the latest events and new features across projects. This is a goal bigger than the scope of a single GSoC project. To keep it reasonable, my project focused on implementing REST APIs, hosted on metabrainz.org, to manage notifications and user preferences for notifications. Additionally, I integrated the system with ListenBrainz to demonstrate its functionality.
I started contributing to MetaBrainz in January 2025. I picked a few tickets from the Jira board, solved a few bugs, added an option for admins to block a user from spamming listens, and added a feature for users to track their listen-import status.
My PRs from the pre-community-bonding period can be found here.
Coding Period
Phase 1
1. I started my coding period by creating a notification table and a user_preference table and their respective ORMs.
Schema for the notification table:
id INTEGER GENERATED BY DEFAULT AS IDENTITY,
musicbrainz_row_id INTEGER NOT NULL,
project notification_project_type NOT NULL,
read BOOLEAN DEFAULT FALSE,
created TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
expire_age SMALLINT NOT NULL, -- in days.
important BOOLEAN DEFAULT FALSE,
email_id TEXT UNIQUE,
subject TEXT,
body TEXT,
template_id TEXT,
template_params JSONB,
notification_sent BOOLEAN DEFAULT FALSE
Schema for the user_preference table:
id INTEGER GENERATED BY DEFAULT AS IDENTITY,
musicbrainz_row_id INTEGER UNIQUE,
user_email TEXT UNIQUE,
digest BOOLEAN DEFAULT FALSE,
digest_age SMALLINT -- in days.
2. DB functions: I worked on adding database functions for these tables.
Next, I wrote tests for these functions and found some edge-case bugs. For example, in mark_read_unread(), if the unread_ids tuple was empty, the function would raise an SQLException because None was passed instead of an empty tuple. Even though I was incredibly slow at writing tests, I have found a new appreciation for Test-Driven Development.
3. Views: I worked on adding endpoints for these notification functions. I looked into the MeB and LB repos to align my coding style for these endpoints, and added the following endpoints.
Endpoints:
/<int:user_id>/fetch: Projects can fetch notifications for user <user_id>, with pagination supported via offset and count.
/<int:user_id>/mark-read: Projects can mark notifications for user <user_id> read or unread, with notification IDs in the body of the request.
/<int:user_id>/delete: Projects can delete notifications for user <user_id>, with notification IDs in the body of the request.
/send: Projects can use this endpoint to send out notifications to users.
/<int:user_id>/digest-preference: Projects can set the digest preference for user <user_id>.
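As a purely illustrative example of how a project might call the send endpoint (the host, path prefix, payload fields, and token handling are my assumptions based on the schema above, not the final API):
import requests

# access_token comes from the OAuth2 client credentials flow described below.
response = requests.post(
    "https://metabrainz.org/notifications/send",  # assumed URL
    headers={"Authorization": f"Bearer {access_token}"},
    json=[{
        "musicbrainz_row_id": 42,
        "project": "listenbrainz",
        "subject": "Your import has finished",
        "body": "Your Spotify import completed successfully.",
        "important": False,
        "expire_age": 30,
    }],
)
response.raise_for_status()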
I spent quite some time reading up on how to use requests_mock to mock HTTP requests, as well as other testing techniques, and added tests for these endpoints.
4. Authentication: I had very little idea about OAuth2 implementation. I had both metabrainz and listenbrainz running locally, and thankfully the network tab in the browser’s inspect tools was useful. It took me a while, but I was finally able to understand how the various endpoints needed to be secured. Then I coded a decorator which uses the client credentials OAuth2 flow to obtain access tokens on the client side (in this case, another server) to properly authorize with the central notification APIs.
I wanted to test invalid tokens in one place and found a very hacky way to do so.
5. Sending Notifications: Now, for the final piece of the puzzle, I used the existing implementation to send mail through brainzutils (a Python library with common utilities for various MetaBrainz projects). A shortcoming of BrainzUtils is that it only supports sending plain-text e-mails, not HTML e-mails.
I added a NotificationSender class to immediately send important notifications directly to the user. Non-important emails respect the digest preferences of the user. Once sent, the notification records in the database are marked as sent. I also used Redis cache to store the notifications which failed to deliver.
6. Cron Jobs: I looked into the documentation of runit to understand how cron jobs were scheduled in the LB repo. I added cron jobs that fire every day to delete expired notifications and send digest notifications. I tested these cron jobs by running the production image locally, and was really happy when I saw that “hello 123 123” in the cron logs.
7. Integration into ListenBrainz: I passed the mid-term evaluation and moved into the ListenBrainz repo to integrate the new notification system.
I created a notification sender function in ListenBrainz to create a notification using the endpoints created in the first phase. The function also automatically obtains an access token from the MeB OAuth2 provider, caches it in Redis and refreshes it if expired. This token is sent in the Authorization header of the HTTP requests to create the notifications. I then added tests for these functions.
I replaced the instances where e-mails were sent out using BrainzUtils to use send_notification instead.
As I had MeB and LB docker containers running side by side, I felt very happy when I sent a notification from my local LB instance and it correctly generated a token from MeB, and then the notification showed up in the MeB container logs.
8. Notification Settings: The only frontend component in this project (Finally! I can add some images 🙂 ).
I added a page where users can set their preferences for receiving notifications.
I added the respective endpoints, which first fetch the digest data from MeB.org; if the user changes their preference, it is saved by sending a POST request to MeB.org’s <user_id>/digest-preference endpoint.
Currently, all of the features mentioned are implemented and (almost) merged. This project lives in the metabrainz-notification branch of both repositories. It can be deployed to production after user data is migrated to metabrainz.org (PR by @kartikohri13 here), giving us the “user” table for foreign keys and user e-mails.
Future
Although the project has met all of the expected outcomes, there are still a lot of features I’d like to work on. Some of them are:
Integration into remaining MetaBrainz projects
Integrating MB-mail into the notification system to send HTML e-mails.
A /notifications endpoint in MeB.org for users who need non-important e-mails sent to them immediately.
Creating templates for non-important notifications and sending them to users, instead of just rendering them on the user feed.
Conclusion
It was a really great experience participating in GSoC 2025 with MetaBrainz. I have learned so many new things this summer while working on my project: mainly understanding how web development works in a large org with a lot of moving pieces, Docker containers, GitHub CI/CD, REST APIs, authorization frameworks, and the essence of open source software.
I’m thankful to my mentor Robert Kaye (@ruaok) for guiding me through this project and for his insightful reviews. I am also thankful to @kartikohri13 (AKA lucifer) and @saliconpetiller (AKA monkey) for their incredible support.
It’s been a pleasure working with you all! Thank you for an incredible summer.
I am Granth Bagadia (holycow23 on IRC), an undergraduate Computer Science student at Birla Institute of Technology and Science (BITS), Pilani. This summer, I had the opportunity to participate in Google Summer of Code 2025 with MetaBrainz, where I worked on introducing advanced user statistics visualizations for ListenBrainz.
I was mentored by Ansh Goyal (ansh on IRC), Kartik Ohri (lucifer on IRC), and Nicolas Pelletier (monkey on IRC). This post summarizes my project, its outcomes, and my experience over the course of the program.
Project Overview
ListenBrainz already provided some listening statistics, but these were limited in scope and depth. My project set out to design and implement advanced statistics that could offer users more meaningful insights into their listening habits. Since ListenBrainz is a user-centric platform, the idea was to create features that would let listeners explore their behavior from multiple perspectives. My original proposal focused on introducing a few key statistics.
The core statistics included:
Genre Trends – showing what genres a user listens to at different hours of the day.
Era Statistics – highlighting which musical eras dominate a user’s listening history.
Artist Evolution – tracking how much a user listens to specific artists over time.
Together, these features enrich the user experience, helping listeners discover patterns and reflect on habits.
To support these, I built a complete statistics pipeline. At the data layer, Spark jobs ingest large volumes of listens and apply transformations such as genre classification, temporal bucketing into eras, and aggregations of listens by artist. These Spark jobs write processed statistics back into a dedicated stats database. A Flask API layer then exposes the aggregated results in a request–response fashion.
On the frontend, React and TypeScript components consume these APIs and render interactive visualizations with Nivo. These charts allow drill-down and time-series exploration, ensuring that users not only see their top genres, eras, and artists but also understand how these evolve over time and context. The combined design delivers both scalability and accessibility: Spark and Flask handle the heavy lifting, while the frontend presents clear, engaging dashboards.
Pre-Community Bonding
I actually began contributing to ListenBrainz in January 2025, primarily working on the frontend side of the project. Most of my early contributions focused on improving the user interface, fixing bugs, and adding visualizations for statistics that were already available in the backend. You can find the complete list of my merged pull requests here.
Before the official community bonding period, I started experimenting with a simpler statistic that did not require a Spark backend. This was a Top Artists with Album Breakdown visualization, which used Python transformations over existing data to show the top artists I listened to, bifurcated into the albums they belonged to. This helped me get comfortable with the ListenBrainz data and also provided users with an immediate insight into their listening patterns. My work for this was merged in PR #3170.
Community Bonding
During the community bonding period, I worked closely with my mentors to refine the scope of the statistics and finalize the features that would be implemented. We discussed multiple approaches and agreed on the set of statistics that would provide both value to users and be feasible within the timeframe. I also prepared frontend mockups to demonstrate how the new stats might look and feel for users.
Setting up the development environment proved to be an important part of this phase. Getting Apache Spark running locally required extra effort, and since I did not have immediate access to the full database, I relied on a development server (wolf) for initial development and testing. By the end of the community bonding period, I was familiar with the ListenBrainz stack and ready to move into coding with a clear roadmap in place.
Coding Period
Before jumping into implementation, I spent time understanding the use cases behind each statistic and exploring the data that was already available in the ListenBrainz stack. This step helped me validate that the statistics I was designing would actually be meaningful for end users. For example, by looking at available artist, genre, and release year data, I realised that we could derive richer patterns without any major backend changes.
With this clarity, I moved on to preparing mockups (like the one shown below), which served as a bridge between the raw data and the final user-facing visualizations. These mockups not only made it easier to align expectations with my mentors but also ensured that every statistic addressed a clear user need before I started coding. These mockups provided a clear vision of what the user experience should look like and acted as a reference point for both backend and frontend development.
Artist Evolution: Stream/area chart showing how listening to each artist evolves over time.
Genre Trends: Donut/radial chart breaking down genres by hour of day for the user.
Era Statistics: Bar chart by decade with zoom to see individual years (all 10 years in the era).
The next step was to create the base SQL queries for each of the three statistics. Below are trimmed versions of the queries to highlight their core logic.
Note: The below queries are shown in user‑specific form (they group by a single user). A sitewide variant simply removes user_id from SELECT/GROUP BY and aggregates across all users.
1) Artist Evolution — “How many times did I listen to each artist over time?”
SELECT user_id,
DATE_TRUNC('month', listened_at) AS time_unit,
artist_mbid,
artist_credit_name,
COUNT(*) AS listen_count
FROM listens
JOIN recording_artist USING (recording_mbid)
GROUP BY user_id, time_unit, artist_mbid, artist_credit_name;
Note: The DATE_TRUNC granularity ('month' in this example) varies depending on the stats_range of the statistic. It can be truncated to day, day of the week, month, or year as required.
Groups listens by time unit and artist so we can draw a time‑series of how much you listened to each artist over time.
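Per the note above, the sitewide variant is the same query with the user dimension dropped:
SELECT DATE_TRUNC('month', listened_at) AS time_unit,
       artist_mbid,
       artist_credit_name,
       COUNT(*) AS listen_count
FROM listens
JOIN recording_artist USING (recording_mbid)
GROUP BY time_unit, artist_mbid, artist_credit_name;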
2) Genre Trend — “Which genres do I listen to at different hours of the day?”
SELECT user_id,
genre,
EXTRACT(HOUR FROM listened_at) AS hour_of_day,
COUNT(*) AS listen_count
FROM listens
LEFT JOIN genres USING (recording_mbid)
WHERE genre IS NOT NULL
GROUP BY user_id, genre, hour_of_day;
Note: A sitewide Genre Trend isn’t meaningful since users are in different time zones—UTC hours don’t map to local hours consistently, and ListenBrainz lacks reliable time zone data. With such data, localized and accurate aggregates would be possible.
Surfaces patterns like “more jazz late at night, more pop in the morning” for the individual user.
3) Era Trend — “From which release years is the music I listen to?”
SELECT user_id,
first_release_date_year AS year,
COUNT(*) AS listen_count
FROM listens
LEFT JOIN release USING (release_mbid)
LEFT JOIN release_groups USING (release_group_mbid)
WHERE first_release_date_year IS NOT NULL
GROUP BY user_id, year;
Counts listens by original release year to show which musical eras dominate your history.
Running Spark locally posed challenges due to memory limitations, which led me to switch to using a development server (wolf) for more reliable execution. Once the environment was stable, I iteratively developed the queries and implemented each statistic one by one. I started with Genre Trends, then moved to Era Statistics, and finally Artist Evolution. This staged approach ensured that each feature was independently functional before progressing further. Throughout the coding period, I interacted with my mentors both asynchronously on Element and synchronously on Google Meet. The Meet sessions were especially helpful during tricky debugging and setup issues, allowing us to resolve blockers faster and keep development moving smoothly.
With Spark pipelines producing results, I turned back to the frontend, implementing the corresponding UI components for each of the three statistics in the same order. This required making adjustments to the ListenBrainz interface so that the new visualizations would fit seamlessly with the existing design. Alongside the implementation, I also wrote frontend tests and Spark tests, again following the same sequence: genre first, then era, and finally artist evolution.
By the end of the coding period, all three statistics were implemented end‑to‑end and shipped to the UI. My work can be seen through the following pull requests:
The pull request for Genre Activity Statistics has already been successfully merged into the ListenBrainz codebase, marking the completion of that feature. The pull requests for Era Activity Statistics and Artist Evolution Activity Statistics are also nearing completion, with only the final rounds of review and testing remaining before integration.
Overall Experience
This summer has been an incredible journey, and I’m deeply grateful to my team at MetaBrainz and the Google Summer of Code organizers. Throughout this experience, I’ve had the unique opportunity to contribute to open source and work on real-life projects. It’s rewarding to see my work on advanced statistics now live in production!
Along the way, I picked up several important technical skills. I became much more comfortable with Git (handling branches, rebases, and reviews) and Docker for setting up reproducible environments. On the backend side, I improved at writing queries and working with Spark jobs to process large amounts of listening data. On the frontend, I gained hands-on experience with React and data visualization libraries like Nivo, which taught me how to turn raw statistics into clear, interactive charts. These learnings not only helped me complete the project but will also stay with me for future work.
Just as importantly, I also learnt how to work in a community-driven environment — discussing ideas openly, writing code that others would review, and collaborating under the guidance of mentors. This experience taught me the value of clear communication, iteration, and flexibility when working on an open-source project with a distributed team.
I would like to thank Ansh, Monkey, and Lucifer for their constant support and feedback throughout the project. Whether it was over Element chats or a quick Google Meet call for deeper debugging, their guidance was invaluable in overcoming challenges and shaping the final outcome. I am also grateful to the MetaBrainz community for being welcoming and collaborative at every stage of the project.
Finally, I am thankful to Google and the MetaBrainz Foundation for providing me with this wonderful opportunity to learn, contribute, and grow as a developer.
I am Suvid Singhal (suvid on Matrix), an undergraduate Computer Science student at Birla Institute of Technology and Science (BITS), Pilani. I took part in Google Summer of Code 2025 and have been contributing to the MetaBrainz Foundation since December 2024. My GSoC project was to develop a file-based listening history importer for ListenBrainz, mentored by Lucifer and Monkey.
Project Overview
ListenBrainz is a platform to track your music habits, discover new music, and share your music taste with the community. A feature I missed after creating my ListenBrainz account and connecting Spotify was the ability to see my complete Spotify listening history. My project addresses this gap by allowing users to export their extended streaming history from Spotify and import it into ListenBrainz. Additionally, users can import backups from their old ListenBrainz accounts. With this foundation ready, it will be simpler to add support for more file importers in the future, making the transition to ListenBrainz easier.
The project also delivers a frontend for users to create imports and view their status, along with comprehensive backend and frontend tests.
My Work
Firstly, I worked on the backend. I created the API endpoints to create a new import, view existing imports and cancel pending imports. Before building these endpoints, I had to make some changes to the database schema to store and manage import information. This ensured that both individual import details and the complete list of imports could be retrieved efficiently.
POST /1/import-listens
This is the most important endpoint for the importer. It accepts the service the file comes from, plus start and end dates to filter the listens for import. The endpoint uses token-based auth, so users can also access it directly through the API.
Auth Implementation
user = validate_auth_header(fetch_email=True, scopes=["listenbrainz:submit-listens"])
if mb_engine and current_app.config["REJECT_LISTENS_WITHOUT_USER_EMAIL"] and not user["email"]:
    raise APIUnauthorized(REJECT_LISTENS_WITHOUT_EMAIL_ERROR)
if user["is_paused"]:
    raise APIUnauthorized(REJECT_LISTENS_FROM_PAUSED_USER_ERROR)
This snippet validates the request’s auth header and required scope, then enforces extra restrictions. It rejects submissions from users without an email (if configured) and from users whose accounts are paused. In short, it ensures only authorized, active, and properly set up users can submit listens.
Upon successful validation and authentication, a background import task is created.
This query creates a background task after creating an import successfully:
query = "INSERT INTO background_tasks (user_id, task, metadata) VALUES (:user_id, :task, :metadata) ON CONFLICT DO NOTHING RETURNING id"
result = db_conn.execute(text(query), {
    "user_id": user["id"],
    "task": "import_listens",
    "metadata": json.dumps({"import_id": import_task.id})
})
A background task processor that runs in a separate process will soon pick this task up for processing.
Background Task Processor
There are two importers, one each for the Spotify and ListenBrainz exports. The common functions performed by each importer are:
Unzipping the file and checking for zip-bomb attacks (see the sketch after this list)
Finding the relevant files in the zip archive
Processing the file contents to extract listens
Parsing the listens
Submitting listens to RabbitMQ queue in batches
The importers for different services just differ in the parsing part as the file formats may be different for every service.
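As a simplified sketch of the zip-bomb check in the first step (the size ceiling is an illustrative constant, not the project’s actual limit):
import zipfile

MAX_UNCOMPRESSED_SIZE = 1 << 30  # 1 GiB, illustrative

def check_archive_safety(path):
    """Reject archives whose declared uncompressed size is suspiciously large."""
    with zipfile.ZipFile(path) as archive:
        total = sum(info.file_size for info in archive.infolist())
        if total > MAX_UNCOMPRESSED_SIZE:
            raise ValueError("archive expands too large; possible zip bomb")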
ListenBrainz requires three fields at minimum for a valid listen submission: the timestamp of the listen, the track artist name, and the track name. The Spotify data archives do not contain the track artist name, only the album artist name, which can often differ. To obtain the correct track artist name, we use the Spotify identifiers in the data archive: using these identifiers, we look up metadata in an internal cache, then fall back to retrieving the data from the Spotify metadata API.
GET /1/import-listens/<import_id>
Fetches details about a single import. This is used when showing information about a specific import or when refreshing its progress on the importers page. It helps track the current state of an ongoing import.
GET /1/import-listens/list
Fetches details about all imports. This is used to display the full list of past imports on the import listens page, giving users an overview of their entire import history.
POST /1/import-listens/cancel/<import_id>/
This is used to cancel a specific import that is in progress. It also deletes the uploaded listening history file after successfully canceling the import.
Code to delete an import:
def delete_import_task(import_id):
    """ Cancel the specified import in progress """
    user = validate_auth_header()
    result = db_conn.execute(
        text("DELETE FROM user_data_import WHERE user_id = :user_id AND id = :import_id AND metadata->>'status' IN ('waiting') RETURNING file_path"),
        {"user_id": user["id"], "import_id": import_id}
    )
    row = result.first()
    if row is not None:
        db_conn.execute(
            text("DELETE FROM background_tasks WHERE user_id = :user_id AND (metadata->>'import_id')::int = :import_id"),
            {"user_id": user["id"], "import_id": import_id}
        )
        Path(row.file_path).unlink(missing_ok=True)
        db_conn.commit()
        return jsonify({"success": True})
    else:
        raise APINotFound("Import not found or is already being processed.")
Frontend
This is the final UI that I implemented.
The form submit button is disabled if no file is selected or an import is in progress.
The blue box shows the import in progress, with progress text and a refresh button to refresh the status. Clicking on the “Details” button reveals additional details about the import. The cancel option is available if the import has not already started.
Testing
This was the most frustrating part for me personally. This also taught me how to think about testing though. I wrote some of the backend tests which were then improved by Lucifer.
Frontend tests were mostly written by me, but I faced a lot of challenges trying to make them pass. I encountered an issue with React Testing Library when testing file uploads and had to resort to a hack for one specific test.
The Current State and Future Scope
Currently, the importer supports only two services: Spotify and ListenBrainz. It can be further expanded thanks to the modular class-based structure; new implementations only need to add the logic for parsing the file format of the specific service.
Final Thoughts
As someone who listens to music for at least 3-4 hours a day, a service like ListenBrainz is a godsend. Working on it gave me a sense of satisfaction that I have contributed something meaningful to the application I care for.
Looking forward, I am excited to see people using the feature and migrating to ListenBrainz easily. I plan to continue fixing bugs and adding new features to ListenBrainz, and I would be happy to contribute further to the MetaBrainz Foundation.
Working on this project was a very nice learning experience, and it is the largest codebase I have worked with to date. It is hard to find such experience even in many internships. I worked on a feature that will be used by thousands of people, so thorough testing was required. The mentors were very supportive and encouraged good practices; they pushed me to think like a user and take ownership, which is what real development is about.
Working with my fellow GSoCers was also a great experience. We helped each other, built a strong bond, and I also made some wonderful new friends along the way. Overall, it was a very nice learning experience. It was a summer well spent 🙂
Thanks for taking the time to read, and I hope that you learnt something from this!
I am Mohammad Amanullah (AKA m.amanullah7 on IRC and mAmanullah7 on GitHub), a final-year student at the National Institute of Technology Agartala, India, and alongside that a diploma-level student at the Indian Institute of Technology Madras, India. I was thrilled to be selected as a contributor for the Google Summer of Code (GSoC) 2025 program. My project focused on integrating music streaming from Funkwhale and Navidrome, and was mentored by Lucifer and Monkey.
Let’s start 🙂
Project Overview
ListenBrainz has a number of music discovery features that use BrainzPlayer to facilitate track playback. BrainzPlayer (BP) is a custom React component in ListenBrainz that uses multiple data sources to search for and play a track. As of now, it supports Spotify, YouTube, Apple Music, and SoundCloud as music services. It would be useful for BrainzPlayer to also support music streaming web apps like Navidrome and Funkwhale, so that users could stream their private collections on ListenBrainz as well. For those unfamiliar, Funkwhale and Navidrome are self-hosted music servers that implement the Subsonic API, a widely adopted standard for streaming and managing personal music libraries.
Before you proceed further, listen to a song and explore new services so you can feel more when you read the rest of the blog! Check out your Connect services page 🎶
Let’s Deep Dive into My Coding Journey!
Welcome back to the blog after exploring new services and listening to a song! I started my coding journey during the community bonding period, which I spent exploring, learning, discussing, creating mockups, and finalizing the backend and frontend flow with aerozol, monkey, and lucifer!
The UI that we finally decided upon after many iterations!
Integrate music streaming from Funkwhale
Funkwhale supports OAuth2 in addition to Subsonic API. As OAuth2 is more secure, I decided to integrate Funkwhale using OAuth2. The main challenge here, unlike centralized services (Spotify, Apple Music), is that any user can host their own Funkwhale instance. Each server acts as its own OAuth provider and an app needs to be created for it dynamically. The following flowchart explains the various steps taken to connect a Funkwhale server to ListenBrainz.
Flowchart of connecting a Funkwhale server to ListenBrainz
The initial database schema had a single table which included both the server details (host_url, client_id, client_secret) and the token details (access_token, refresh_token, token_expiry). This was problematic as either the server app details would need to be duplicated for multiple users or a new app would be created for each user. Hence in the next iteration, I split it into two tables: funkwhale_servers, funkwhale_tokens to avoid redundancies.
Database schema for Funkwhale auth tables
CREATE TABLE funkwhale_servers (
id INTEGER GENERATED ALWAYS AS IDENTITY,
host_url TEXT NOT NULL UNIQUE,
client_id TEXT NOT NULL,
client_secret TEXT NOT NULL,
scopes TEXT NOT NULL,
created TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
CREATE TABLE funkwhale_tokens (
id INTEGER GENERATED ALWAYS AS IDENTITY,
user_id INTEGER NOT NULL,
funkwhale_server_id INTEGER NOT NULL,
access_token TEXT NOT NULL,
refresh_token TEXT NOT NULL,
token_expiry TIMESTAMP WITH TIME ZONE NOT NULL
);
Flowchart explaining how a song is played through Funkwhale in BrainzPlayer
I added the option to connect a Funkwhale server in Connect services; the user only needs to input the URL of their server.
Note: Funkwhale currently has a bug in the authentication workflow. If you are not logged into your Funkwhale server before pressing “Connect to Funkwhale”, rather than being redirected to the login page you will be presented with an authentication error! You can look at this issue for more details.
UI before connecting Funkwhale / UI once Funkwhale is connected and ready for playback
I added the FunkwhalePlayer component to BrainzPlayer by taking reference from the existing BrainzPlayer architecture and DataSourceType interface to understand how to properly integrate with the existing services. It detects when a listen originates from Funkwhale, handles both direct track URLs and search based matching, and manages authenticated audio streaming.
I also created a custom icon component for both Funkwhale and Navidrome that emulates the FontAwesome icons exported from react-fontawesome. It helps us avoid messy and hardcoded styles for the icon.
Once connected, ensure that the Funkwhale service is activated and set to the desired priority in the BrainzPlayer settings page.
All set! You are ready to play your Funkwhale collection on ListenBrainz! Enjoy 🎶
One of the complex parts was handling Funkwhale’s multi-artist format variations. Funkwhale’s track search API does not work well if we try to filter tracks by artist name, so we only filter by track name in the API request and manually filter the results on the artist name. The track matching algorithm was particularly complex because Funkwhale supports multiple artist credit formats: some tracks use the legacy single-artist format, and some newer instances use multi-artist credits with joinphrases. I implemented a normalization system that handles both formats and also removes accents for better matching, and uses a fallback strategy: an exact artist-and-title match first, then artist-filtered results, then any playable track.
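A simplified sketch of that normalization (not the exact implementation):
// Strip combining accents, case, and surrounding whitespace before matching.
function normalizeName(name: string): string {
  return name
    .normalize("NFD")
    .replace(/[\u0300-\u036f]/g, "")
    .toLowerCase()
    .trim();
}

// e.g. normalizeName("Beyoncé ") === "beyonce"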
For audio streaming, Funkwhale requires an access token. I implemented a system that fetches the audio as an authenticated blob and creates an object URL, and feeds that to the HTML5 audio element. I also implemented automatic token refresh so that the user does not experience interruptions in playback.
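A rough sketch of that streaming path (function and parameter names are assumptions):
// Fetch the stream with the token, wrap it in an object URL, and hand it
// to the HTML5 audio element.
async function playAuthenticatedTrack(
  streamUrl: string,
  accessToken: string,
  audio: HTMLAudioElement
): Promise<void> {
  const response = await fetch(streamUrl, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const blob = await response.blob();
  audio.src = URL.createObjectURL(blob);
  await audio.play();
}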
Present Status and Future Improvements
The entire implementation for the Funkwhale integration is contained in the following PR: Integrate music streaming from Funkwhale. The PR has already been reviewed, merged, and deployed to production.
As future improvements, it would be useful to allow users to connect multiple Funkwhale servers to their ListenBrainz account. More thorough unit and integration tests would also help prevent regressions.
Integrate music streaming from Navidrome
Navidrome does not support OAuth2 currently. Effort is underway to add API key auth support, but as of now the insecure Subsonic auth is the only viable option. It involves storing the user’s password safely in the database and then using md5(password + salt) as a token for authentication.
Flowchart detailing how to connect Navidrome to ListenBrainz.
Storing passwords in cleartext in the database is not safe, hence I used Fernet (symmetric encryption). The user’s password is encrypted before storage and only decrypted with the key when needed to generate API authentication tokens. This ensures that even if the database is compromised, the passwords remain secure. Following a similar pattern to Funkwhale, I created two tables; reusing navidrome_servers means we don’t need to save the server URL each time, and it will also be useful if we later upgrade to storing OAuth IDs, tokens, and scopes.
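A minimal sketch of that encrypt/decrypt round trip with the cryptography library (in production the key comes from server configuration, never generated per run):
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustrative; the real key is loaded from config
fernet = Fernet(key)

token = fernet.encrypt(b"user-password")   # what gets stored in the database
password = fernet.decrypt(token).decode()  # decrypted only when needed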
Database schema for Navidrome auth
CREATE TABLE navidrome_servers (
id INTEGER PRIMARY KEY,
host_url TEXT UNIQUE
);
CREATE TABLE navidrome_tokens (
user_id INTEGER,
navidrome_server_id INTEGER,
username TEXT,
encrypted_password TEXT -- Fernet-encrypted password
);
The frontend implementation followed the same DataSourceType pattern as Funkwhale and the other music services, but there was a major difference in how authentication is handled. Instead of maintaining an access token like an OAuth-based implementation, the player generates fresh MD5 authentication parameters for each API request using the user’s stored credentials.
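In Subsonic terms, that means computing the standard u/t/s parameters on every call. A sketch, assuming an md5 helper such as the “md5” npm package (Web Crypto has no MD5):
import md5 from "md5";

// Subsonic token auth: t = md5(password + salt), sent with the salt `s`
// and username `u` on each request.
function subsonicAuthParams(username: string, password: string) {
  const salt = Math.random().toString(36).slice(2, 12); // random per request
  return { u: username, t: md5(password + salt), s: salt };
}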
Flowchart of playing a song using Navidrome in ListenBrainz
Connecting to a Navidrome server requires three user inputs (host_url, username, and password), unlike Funkwhale, which only needs the host_url. Users can also edit the credentials later.
UI before connecting / UI once Navidrome is connected
As with Funkwhale, you can activate Navidrome playback and set its priority in the BrainzPlayer settings page.
All set! You are ready to play your Navidrome collection on ListenBrainz! Enjoy 🎶
The track matching was more straightforward for Navidrome, as the Subsonic API provides a better search endpoint. The search3 endpoint allows us to query by both track and artist name simultaneously, and also returns simple, well-structured results that are easier to parse than Funkwhale’s multi-format artist credits.
Audio streaming was significantly simpler than Funkwhale because Navidrome’s stream URLs include authentication parameters directly in the query string. This means I could set the HTML5 audio element’s src directly to the authenticated stream URL without needing to fetch the audio as a blob.
Current Status and Future Improvements
The Navidrome integration code is contained within the following PR: Integrate music streaming from Navidrome. It is currently pending review, following which it will be merged and deployed soon.
As future improvements, OAuth2 or API Key auth support can be added for Navidrome once available. The ability to connect multiple servers and more tests would be useful as well.
Testing
It was my first time writing tests, but I was able to write basic tests for the frontend as well as the backend. The existing tests were easy to read and served as a great reference! In the future, I will add more functional tests.
Overall GSoC Experience
This summer has been an incredible journey for me working with MetaBrainz, and I’m deeply grateful to GSoC for this amazing opportunity. Contributing to ListenBrainz and implementing both the Funkwhale and Navidrome music service integrations has been both challenging and rewarding, and it’s great to see my work now live in production for users worldwide. Being a part of MetaBrainz is an incredible feeling. I’m gonna miss Monday meetings for sure. I will keep fixing bugs and contributing other improvements.
Throughout this journey, I have learned so many things. I am now more comfortable with Git and GitHub. Initially, I didn’t have much TypeScript knowledge, but in this period I worked on my skills, tried, failed, asked for help when stuck, and finally finished the implementation. I have also become much more comfortable with Docker, as well as with things like OAuth2 integration and music streaming implementation.
I would like to thank monkey, lucifer, and aerozol a lot for helping me throughout this period, guiding me, and constantly supporting me. Whether it was the MetaBrainz chat or the PR reviews, I always received detailed feedback, help, and suggestions. NGL monkey, I was sure I wasn’t going to be able to help create the Navidrome icon, but God had other plans.
I built some cool stuff this summer, and it’s going to be used by people all over the world. Thank you to everyone else who helped and guided me throughout this journey! I hope you will all enjoy listening to songs with more services.
We’re excited to announce that the MetaBrainz Foundation has been accepted into Google’s Summer of Code program for 2025! Summer of Code has been instrumental (pun intended) in the development of our projects and the growth of our team over the years, so we’re pleased to be part of it for another round.
Ready to rock this summer coding with us? Start by carefully reading the terms for contributors. If you are eligible, go ahead and take a look at our Summer of Code landing page, where you can find the project ideas that we have listed for this year. Our landing page will also tell you what we require of our participants and how to pick up a project.
A very important note: We will not be considering any proposals from contributors who have not reached out to us before March 31.
Good luck to all who are interested in participating!
PS: If you’re feeling particularly adventurous, check out this entirely optional link for some extra motivation.
It really bugged me that it proved impossible to finish the huge BookBrainz importer project last year.
Fortunately MetaBrainz (and Google) gave me the chance to continue working on my 2023 project during this Summer of Code, thank you!
Our goal is still to import huge external datasets into the BookBrainz database schema.
Last year I worked on the backend services to transform and insert simple entities into the database.
This year’s goal was to support importing multiple related entities and exposing the imported data on the website.
We can now import entities (on the backend), which can be reviewed and approved by our users with ease.
If you want to know the full story, I recommend starting with my previous blog post to learn more about the existing importer infrastructure and last year’s problems.
Or just read on if you are only interested in the advanced stuff which I did this year.
I am Ashutosh Aswal (IRC nick yellowhatpro), a Computer Science grad from PEC University, India. This is my second time contributing to MetaBrainz as a GSoC contributor; unlike last time, when I contributed to the ListenBrainz Android app, this year I took on the challenge of learning a new language and stack (Rust and Postgres) to create this delightful project, Melba, which stands for MusicBrainz’s External Links wayBack machine Archiver.
As the name suggests, the project saves external webpages linked in the MusicBrainz database to the Internet Archive using the Wayback Machine API. Let me walk you through the making of Melba.
Hello! My name is Rimma Kubanova (AKA rimskii on IRC and rimma-kubanova on GitHub). I’m an undergraduate Computer Science student at Nazarbayev University in Astana, Kazakhstan. My inspiration to participate in Google Summer of Code came from seeing my seniors’ experiences. I began contributing to MetaBrainz because I felt their goals and technologies aligned perfectly with my interests and skills.
After making my first contributions, I decided to apply to GSoC, and to my delight, my proposal was accepted!
Proposal
ListenBrainz generates music recommendation playlists based on a user’s listening history and habits. These playlists can be enjoyed directly in ListenBrainz and automatically exported to the user’s Spotify account. However, currently, ListenBrainz only supports exporting to Spotify, which limits the user experience.
My project focused on expanding this functionality by integrating support for exporting these playlists to other external music services like SoundCloud and Apple Music. Additionally, I proposed adding an import feature to allow users to bring their playlists from these services into ListenBrainz.