Almost all practical software systems today are configurable. Their huge configuration spaces, usually of size exponential in the number of configuration options or features, make design, analysis, and explanation challenging tasks. In this talk, I will introduce the notion of “feature causality” to support the explainability of configurable systems. Inspired by the seminal definition of actual causality by Halpern and Pearl, feature causes capture configuration decisions that are reasons for fulfilling functional and non-functional system properties, e.g., safety requirements and quality of service, respectively. I will present various methods to explicate such reasons, e.g., based on the well-established notions of responsibility and blame. By means of an evaluation on a wide range of configurable software systems, including community benchmarks and real-world systems, I will demonstrate the feasibility of this approach for identifying root causes, estimating the effects of configuration options, and detecting feature interactions.