Richard A. Epstein

Research Fellow at the University of Chicago’s Chapin Hall

Dr. Epstein’s publications on CANS are a quintessential example of how the CANS authors use deception to give the instrument an aura of validity. For example, in his 2015 CANS publication [08], he justifies the use of CANS in his study by writing:

 

“Previous research demonstrates that the CANS has adequate inter-rater and internal consistency reliability and concurrent, discriminant, and predictive validity (Anderson, Lyons, Giles, Price, & Estle, 2003[03]; Epstein, Bobo, Cull, & Gatlin, 2011[07]; Epstein, Jordan, Rhee, McClelland, & Lyons, 2009[06]; He, Lyons, & Heinemann, 2004[14]; Leon, Lyons, & Uziel-Miller, 2000 [J]; Leon et al., 2000 [L]; Park, Jordan, Epstein, Mandell, & Lyons, 2009[40]).”

 

Sounds impressive, but these citations do not support the claim. The articles exist, but [06], [14], [J], [L], and [40] have nothing to do with CANS psychometric research. [06] and [40] are Epstein’s own work, so he made a very conscious choice to perpetrate this fraud. [07] is his work as well; however, it does not test the reliability or validity of CANS. The closest it comes to reporting on CANS’s validity is to repeat even more false citations.

 

In [07], Epstein writes: “Previous research demonstrates this measure to have adequate inter-rater and internal consistency reliability (Anderson et al., 2003[03]; Epstein et al., 2009[06]; Leon et al., 2000 [J]; Leon et al., 1999 [L]; Lyons et al., 2002; Lyons et al., 2004[27]), and concurrent, discriminate, and predictive validity (Epstein et al., 2009[06]; Leon et al., 1999 [L]; Lyons et al., 2001 [N]; Lyons et al., 1997 [P]; Lyons et al., 2000a[S],b[21]; Lyons et al., 2004[27]; Park et al., 2009[40]).” In addition to the false citations listed in [08], he adds the following studies that have nothing to do with CANS: [P], [S], and [21].

 

Enjoy reading [27]; it’s one of Lyons’s most deceptive “publications.” It’s a book chapter, not a peer-reviewed article. We’ll profile it more carefully later. Look at all the vapor-ware statistics starting on page 10 under “Validity,” then look for citations supporting this work: there are none. On page 4, he, his wife, and another Chapin Hall “expert,” Dana Weiner, write that “high concurrent and predictive validity” are a must for CANS. Yet on page 11, in Table 17.3, even if we are to believe the data are real, those pitiful concurrent validity statistics are “small,” not “high.” Also take a look at the similarly uncited “predictive validity” section that follows.

 

Lyons, another Chapin Hall expert, is smart. He has been an endowed chair of a department and the editor of a journal. He knows what “predictive validity” is, yet he chooses to deceive the reader. Here are the facts. Lyons wrote in his own book what this type of validity means: “Predictive validity refers to whether a measure can predict events and states in the future that are relevant to the construct measured” (Lyons, 2009, p. 76). However, look at how he uses the term on page 11 of his book chapter: he is “predicting” (here meaning merely correlating) CANS scores against three levels of care measured at the same point in time.

 

The only peer-reviewed primary-source research cited by Epstein is [03], because that’s the only peer-reviewed study of any version of CANS relating to psychometrics, and we’ve already discussed the deception and poor study design here: http://kickthecans.net/untested-reliability/

 

The hyperlinks for each article listed above will take you to the actual articles so you can see the deception for yourself.

 

Epstein saved his most impressive scientific fraud for last: his 2015 paper on placement disruption [08]. After you get past the deception of claiming CANS is valid when the data say otherwise, you get to the heart of the study. He collapses the highest levels of care (residential and inpatient) into one level and defines “disruption” as being placed in a higher level of care. He then uses a CANS placement algorithm that places more than 34% of abused and neglected children in this highest level of care (cf. Chor et al., 2013 [B], Table 2, CANS Algorithm recommendation RTC-Total (34.3%), p. 875). Lo and behold, if you follow the recommendations of CANS and place more than a third of the kids in the highest level of care, they can never “disrupt.” Of course, this major study limitation is acknowledged nowhere in the design or conclusions.

 

Note that Chor is another Chapin Hall “expert” at making false CANS validity citations.

 

This is the type of deception that gets careless and unethical graduate students kicked out of school. The University of Chicago appears to be a magnet for folks who enjoy falsifying the scientific record while drawing large salaries. Chapin Hall is full of them. We’ll be profiling more of them, including Lyons, Chor, and Weiner, soon.