Music Recommender Systems (MRS) are essential components of online music platforms, offering users personalized music recommendations from vast catalogs. Despite their convenience, there is growing concern over the fairness and explainability of these systems, as they may inadvertently reinforce biases, limit exposure to diverse content, and operate as "black boxes" whose decision-making processes are opaque to users. Studies suggest that the recommendation quality of Recommender Systems (RS), including MRS, may vary with user characteristics such as gender, age, or country of origin, potentially limiting access to quality content and reinforcing demographic-specific filter bubbles. Besides fairness, explaining why certain music items are recommended is important for maintaining user trust and for supporting engineers in debugging the recommendation process. Explainability and fairness are closely intertwined: explanations can aid in identifying potential biases and injustices within these systems, although transparent explanations alone do not guarantee fair or unbiased outcomes.

This thesis makes several substantial contributions to research on fairness and explainability in RS for the music domain. Part I is devoted to assessing the disparate effectiveness of popular recommendation algorithms trained on large music datasets across different user groups. Focusing on users' gender and personality, we uncover and measure the unfairness of most of these algorithms, demonstrating that they provide a different quality of service to different user groups based on these characteristics. In Part II, we explore the role and methods of explainability in MRS by drawing connections to explainable AI and RS research, and we propose two novel explanation methods: ProtoMF, which explains recommendations through prototypical users and items and their similarities to real users and items, and LEMONS, which pinpoints the most meaningful parts of the audio behind track recommendations. Lastly, in Part III we jointly consider fairness and explainability in prototype-based RS models, proposing a debiasing method that makes recommendation predictions less dependent on users' protected attributes, such as age and gender.
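
For illustration only, the sketch below shows the general prototype-similarity idea referenced above: users and items are represented by their similarities to learned prototypes, and those similarity vectors double as the explanation of a recommendation. This is a minimal, hypothetical PyTorch sketch, not the thesis implementation; the class and parameter names, dimensions, cosine similarity, and linear projections are all assumptions.

    # Illustrative sketch of a prototype-based scorer (all names/dimensions assumed).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PrototypeScorer(nn.Module):
        def __init__(self, n_users, n_items, dim=64, n_user_protos=20, n_item_protos=20):
            super().__init__()
            self.user_emb = nn.Embedding(n_users, dim)
            self.item_emb = nn.Embedding(n_items, dim)
            # Learned prototypes living in the same embedding space as users/items.
            self.user_protos = nn.Parameter(torch.randn(n_user_protos, dim))
            self.item_protos = nn.Parameter(torch.randn(n_item_protos, dim))
            # Projections so a similarity vector can be matched against the other side.
            self.item_to_user_space = nn.Linear(dim, n_user_protos, bias=False)
            self.user_to_item_space = nn.Linear(dim, n_item_protos, bias=False)

        def forward(self, user_ids, item_ids):
            u = self.user_emb(user_ids)   # (B, dim)
            i = self.item_emb(item_ids)   # (B, dim)
            # Similarity of each user/item to every prototype (cosine here, by assumption).
            u_sim = F.cosine_similarity(u.unsqueeze(1), self.user_protos.unsqueeze(0), dim=-1)
            i_sim = F.cosine_similarity(i.unsqueeze(1), self.item_protos.unsqueeze(0), dim=-1)
            # Score combines "which prototypes this user/item resembles" with the other side;
            # u_sim and i_sim are the human-readable explanation vectors.
            score = (u_sim * self.item_to_user_space(i)).sum(-1) \
                  + (i_sim * self.user_to_item_space(u)).sum(-1)
            return score, u_sim, i_sim

    # Usage: the largest entries of u_sim / i_sim indicate which prototypical
    # users/items drove a given recommendation.
    model = PrototypeScorer(n_users=1000, n_items=5000)
    scores, u_sim, i_sim = model(torch.tensor([0, 1]), torch.tensor([10, 20]))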