Relevance-based explanation is a scheme in which partial assignments to Bayes network variables serve as explanations (abductive conclusions). We allow variables to remain unassigned in an explanation as long as they are irrelevant to it, where irrelevance is defined in terms of statistical independence. Equating irrelevance with exact independence leads to problems when events are almost statistically independent but still intuitively irrelevant; using approximate independence alleviates this problem. Interesting properties of approximate independence are discussed, as well as an algorithm based on these properties. Another issue arises with multiple-valued variables: their presence in the system, especially when subsets of values correspond to natural types of events, causes the overspecification problem to resurface. Generalizing assignments to allow disjunctive assignments solves this problem. We define generalized independence-based (GIB) explanations as maximum-posterior-probability independence-based generalized assignments (GIB MAPs). GIB assignments are shown to have properties that ease the design of algorithms for computing GIB MAPs. One such algorithm is discussed here, along with suggestions for adapting other algorithms to compute GIB MAPs. Additionally, both approximate independence and GIB explanations are useful constructs for algorithms that approximate marginal distributions by enumerating high-probability explanations.
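As a minimal illustrative sketch (not taken from the paper itself), one common way to formalize approximate independence is to require the joint probability of two events to lie within a small tolerance of the product of their marginals; the function name and the absolute-difference threshold below are assumptions chosen for illustration:

```python
def approx_independent(p_xy: float, p_x: float, p_y: float,
                       eps: float = 0.01) -> bool:
    """Return True if events X=x and Y=y are approximately independent,
    here taken to mean |P(x, y) - P(x) * P(y)| <= eps.

    This is one possible formalization, used only as a sketch; the paper's
    exact definition of approximate independence may differ.
    """
    return abs(p_xy - p_x * p_y) <= eps


# Exactly independent events pass:
print(approx_independent(0.25, 0.5, 0.5))          # True
# Nearly independent events pass with a small tolerance:
print(approx_independent(0.251, 0.5, 0.5))         # True
# Strongly dependent events fail:
print(approx_independent(0.5, 0.5, 0.5))           # False
```

Under such a definition, a variable whose assignment changes the posterior of the explanation only negligibly can be left unassigned, which is the intuition the abstract appeals to when exact independence proves too strict.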
- abductive and probabilistic reasoning