Another way to evaluate the power of a repeated-measures assessment system is to assess how much variance it can explain in follow-up assessments or in some other outcome variable of interest. In 2010, McIntosh, Lyons, Jordan, and Wiener [37] created a predictive model of running away. It has been cited by Lyons [38] for the proposition that the CANS is “able to predict outcomes of various program types” (p. 74). Yet the model explained only 14.5% of the outcome variance (adjusted), and the most important predictors in the model were not CANS items (e.g., age). The sensitivity and specificity tradeoffs were also poor. This suggests the CANS has poor predictive validity.
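To make these figures concrete, the sketch below shows how adjusted explained variance and the sensitivity/specificity trade-off are typically computed for a predictive model of a binary outcome such as running away. The data, predictors, model, and 0.5 classification threshold are all illustrative assumptions, not McIntosh et al.'s actual analysis.

```python
# Minimal sketch (hypothetical data): computing adjusted explained
# variance and sensitivity/specificity for a binary-outcome model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, r2_score

rng = np.random.default_rng(0)
n, p = 500, 6                       # hypothetical sample: 500 youths, 6 predictors
X = rng.normal(size=(n, p))         # stand-ins for CANS items, age, etc.
y = rng.binomial(1, 1 / (1 + np.exp(-0.4 * X[:, 0])))  # weak true signal

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]

# Proportion of outcome variance explained by the predicted probabilities,
# then adjusted for model size: R^2_adj = 1 - (1 - R^2) * (n - 1) / (n - p - 1)
r2 = r2_score(y, prob)
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Sensitivity/specificity trade-off at an (assumed) 0.5 threshold
tn, fp, fn, tp = confusion_matrix(y, (prob >= 0.5).astype(int),
                                  labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)        # true positives / all actual positives
specificity = tn / (tn + fp)        # true negatives / all actual negatives
print(f"adjusted R^2 = {r2_adj:.3f}, "
      f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

An adjusted R² near 0.145, as reported, means roughly 85% of the outcome variance remains unexplained, and raising the sensitivity of such a model generally comes at the cost of specificity (and vice versa).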
Similarly, in 2008, Sieracki, Leon, Miller, and Lyons [47] found that the CANS could explain less than one percent (1%) of the outcome variance attributable to the provider and only five percent (5%) attributable to the client. The authors knew how poor these results were: just the year before, two of them (Lyons and Leon) had published a study of a different tool that reported eight percent (8%) for the provider and seventeen percent (17%) for the client (Lutz, Leon, Martinovich, Lyons, & Stiles, 2007). This, again, suggests that the CANS has poor predictive validity.
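Claims about the percentage of outcome variance "attributable to the provider" are conventionally derived from a multilevel variance decomposition. The sketch below, on simulated data, shows the standard approach with a two-level random-intercept model; Sieracki et al. may have used a more elaborate multilevel specification, so this is only an illustration of the technique, not a reproduction of their analysis.

```python
# Minimal sketch (simulated data): partitioning outcome variance between
# providers and clients via a random-intercept model and the intraclass
# correlation (ICC).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_providers, clients_per = 40, 15
provider = np.repeat(np.arange(n_providers), clients_per)

# Simulate ~1% of total variance at the provider level, 99% residual.
provider_effect = rng.normal(0, np.sqrt(0.01), n_providers)[provider]
residual = rng.normal(0, np.sqrt(0.99), n_providers * clients_per)
df = pd.DataFrame({"provider": provider,
                   "outcome": provider_effect + residual})

m = smf.mixedlm("outcome ~ 1", df, groups=df["provider"]).fit()
var_provider = float(m.cov_re.iloc[0, 0])   # between-provider variance
var_residual = m.scale                      # within-provider (client-level) variance
icc = var_provider / (var_provider + var_residual)
print(f"share of outcome variance attributable to provider: {icc:.1%}")
```

On this reading, the comparison is direct: under the same kind of decomposition, the CANS accounted for under 1% (provider) and 5% (client) of outcome variance, versus 8% and 17% for the instrument in Lutz et al. (2007).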