This article analyzes explainability, first, as a requirement of reliability, an ethical principle that serves as a foundation for the governance of artificial intelligence (AI). From this perspective, the concepts of AI governance, reliability, and explainability are examined in light of their evolution, mainly within the framework of the European Union. Second, we study whether explainable artificial intelligence is relevant to liability regimes for damages caused by AI systems. This allows us to reflect on the importance of incorporating explainability into the new liability schemes that will emerge to address damages caused by AI.