DAG With Omitted Objects Displayed (DAGWOOD): A framework for revealing causal assumptions in DAGs
Directed acyclic graphs (DAGs) are frequently used in epidemiology as a guide to assess causal inference assumptions. However, DAGs show the model as assumed, not the assumption decisions themselves. We propose a framework that reveals these hidden assumptions, both conceptually and graphically. The DAGWOOD framework combines a root DAG (representing the DAG in the proposed analysis), a set of branch DAGs (representing alternative hidden assumptions to the root DAG), a graphical overlay (representing the branch DAGs over the root DAG), and a ruleset governing them. All branch DAGs follow the same rules for validity: they must 1) change the root DAG, 2) be a valid, identifiable causal DAG, and either 3a) require a change in the adjustment set to estimate the effect of interest, or 3b) change the number of frontdoor paths. The set of branch DAGs corresponds to a list of alternative assumptions, each of which must be justifiable as negligible or non-existent. A graphical overlay displays these alternative assumptions on top of the root DAG. We define two types of branch DAGs: exclusion restrictions and misdirection restrictions. Exclusion restrictions add a single- or bi-directional arc between two existing nodes in the root DAG (e.g., direct pathways and colliders), while misdirection restrictions represent alternative pathways that could be drawn between objects (e.g., reversing the direction of causation for a controlled confounder, turning that variable into a collider). Together, these represent all single-change assumptions to the root DAG. The DAGWOOD framework 1) makes explicit and organizes important causal model assumptions, 2) reinforces best DAG practices, 3) provides a framework for critically evaluating causal models, and 4) can be used in iterative processes for generating causal models.
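As an illustration of the branch-DAG ruleset, the sketch below enumerates candidate exclusion restrictions for a toy root DAG and applies rules 1 and 2 (the branch must change the root DAG and remain a valid acyclic graph). This is not the authors' implementation: the root DAG and function names are hypothetical, only single-headed arcs are enumerated (bi-directional arcs and misdirection restrictions are omitted), and rule 3 (adjustment-set or frontdoor-path changes) is left to the analyst or a causal-inference library.

```python
# Minimal sketch (assumed, not from the paper) of enumerating candidate
# exclusion-restriction branch DAGs with networkx.
from itertools import permutations

import networkx as nx


def candidate_exclusion_restrictions(root_dag: nx.DiGraph):
    """Yield branch DAGs formed by adding one missing arc between
    existing nodes, keeping only those that remain acyclic."""
    for u, v in permutations(root_dag.nodes, 2):
        if root_dag.has_edge(u, v):
            continue  # rule 1: the branch must change the root DAG
        branch = root_dag.copy()
        branch.add_edge(u, v)
        if nx.is_directed_acyclic_graph(branch):  # rule 2: valid DAG
            yield (u, v), branch
        # Rule 3 (change in adjustment set or number of frontdoor
        # paths) would be checked here against the analysis of interest.


# Hypothetical root DAG: confounder C affects exposure X and outcome Y.
root = nx.DiGraph([("C", "X"), ("C", "Y"), ("X", "Y")])
for edge, branch in candidate_exclusion_restrictions(root):
    print("candidate branch DAG adds arc:", edge)
```

Each printed arc corresponds to one alternative assumption the analyst would need to justify as negligible or non-existent before accepting the root DAG.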