Reliability, validity and treatment sensitivity of the Schizophrenia Cognition Rating Scale

https://doi.org/10.1016/j.euroneuro.2014.06.009

Abstract

Cognitive functioning can be assessed with performance-based assessments such as neuropsychological tests and with interview-based assessments. Both assessment methods have the potential to assess whether treatments for schizophrenia improve clinically relevant aspects of cognitive impairment. However, little is known about the reliability, validity and treatment responsiveness of interview-based measures, especially in the context of clinical trials. Data from two studies were utilized to assess these features of the Schizophrenia Cognition Rating Scale (SCoRS). One of the studies was a validation study involving 79 patients with schizophrenia assessed at 3 academic research centers in the US. The other study was a 32-site clinical trial conducted in the US and Europe comparing the effects of encenicline, an alpha-7 nicotinic receptor agonist, to placebo in 319 patients with schizophrenia. The SCoRS interviewer ratings demonstrated excellent test-retest reliability in several different circumstances, including those that did not involve treatment (ICC > 0.90) and during treatment (ICC > 0.80). SCoRS interviewer ratings were related to cognitive performance as measured by the MCCB (r = −0.35), and demonstrated significant sensitivity to treatment with encenicline compared to placebo (P < .001). These data suggest that the SCoRS has potential as a clinically relevant measure in clinical trials aiming to improve cognition in schizophrenia, and may be useful in clinical practice. The weaknesses of the SCoRS include its reliance on informant information, which is not available for some patients, and reduced validity when the patient's self-report is the sole information source.

Introduction

Cognitive impairment in schizophrenia has traditionally been assessed with performance-based cognitive measures (Chapman and Chapman, 1973). Many of these measures were derived from tests developed to assess neurocognitive function for the identification of strengths and weaknesses in patients with brain dysfunction or intellectual impairment, or for examining the effects of aging (Spreen and Strauss, 1998). More recently, tests measuring highly specific cognitive processes, often developed for neuroimaging paradigms, have been utilized as well (Barch et al., 2009). However, there are multiple practical constraints on the assessment of cognition conducted exclusively with performance-based tests. Most clinicians who might wish to evaluate the severity of cognitive impairment in their patients with schizophrenia do not have the required expertise and resources to conduct meaningful performance-based assessments. Furthermore, the interpretation of the clinical relevance of changes in performance-based measures is not immediately accessible to non-experts, including clinicians, consumers, and family members, and may require different approaches or supplemental assessments with greater face validity. Finally, there is no consensus among experts as to how much change on neuropsychological tests is clinically meaningful.

Regulatory bodies such as the United States Food and Drug Administration (FDA) and the European Medicines Agency (EMA) support the use of cognitive performance measures as primary endpoints in clinical trials for the treatment of cognitive impairment in schizophrenia. However, they have also noted the absence of face validity of performance-based cognitive measures as one of the reasons they require that a pharmacologic treatment also demonstrate efficacy on an endpoint with greater clinical meaning to clinicians and consumers. These indices could include performance-based measures of functional capacity or interview-based assessments of clinically relevant and easily detectable cognitive change (Buchanan et al., 2005, Buchanan et al., 2011). In addition, assuming that some treatments become available, clinicians will need an assessment that they can utilize to evaluate cognitive change in their patients in situations where performance-based cognitive tests are not practically available. Interview-based assessments have the potential to meet these requirements.

Several interview-based measures of cognition are available. The two that have been utilized the most in large-scale studies with adequate methods, such as the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) project, have been the Schizophrenia Cognition Rating Scale (SCoRS) and the Cognitive Assessment Interview (CAI). These measures examine cognitive functioning through questions about functionally relevant, cognitively demanding tasks. As a result, they measure cognitive functioning from a different perspective than performance-based assessments, and a full overlap with performance-based measures is not expected.

We will focus in this paper on research recently completed with the SCoRS. Information on the SCoRS' psychometric properties and its relationships to cognitive functioning and to other measures of functional capacity can be found in a variety of peer-reviewed publications, including Keefe et al. (2006), Green et al. (2008), and Harvey et al. (2011). Overall, the strengths of the SCoRS are its brief administration time, requiring about 15 min per interview (Keefe et al., 2006, Green et al., 2008); its relation to real-world functioning (Keefe et al., 2006); good test-retest reliability; and correlations with at least some performance-based measures of cognition (Keefe et al., 2006). However, several challenges remain. Due to the difficulties that patients with schizophrenia have with reporting accurate information regarding cognition and everyday functioning (Bowie et al., 2006, Sabbag et al., 2011; also Durand et al., this issue), the validity of the SCoRS and its correlations with performance-based measures of cognition may depend upon the availability of an informant. Since some patients with schizophrenia may not have people who know them well (Patterson et al., 1996, Bellack et al., 2007), requirements for informant information may reduce the practicality of the SCoRS.

It is important to determine the contexts in which informant information is required and whether there are circumstances where it is not. Also, while the US FDA has expressed general acceptance of interview-based measures of cognition as secondary endpoints in clinical trials for drugs to improve cognitive impairment in schizophrenia (Buchanan et al., 2005, Buchanan et al., 2011) and the SCoRS in particular is being used as a co-primary endpoint in phase 3 registration clinical trials (www.clinicaltrials.gov, accessed May 9, 2014), the effect of treatment on the SCoRS is not well known. Finally, if SCoRS and similar measures are to be useful for clinical applications, it may be helpful to begin to gather information on the reliability and sensitivity of specific items of the SCoRS for the purposes of reducing the length of the assessment down to its crucial components.

In this paper, we will address the following questions about the SCoRS:

  1. What is the structure of the SCoRS items? Do the items measure a single factor or multiple factors? Based upon correlations with cognitive performance measures such as the MATRICS Consensus Cognitive Battery (MCCB), assessment of the reliability of items, and treatment responsiveness, are there opportunities for data reduction?

  2. What is the relative benefit of informant information, given the potential time and resource cost and the unavailability of reliable informants?

     a. What is the relative reliability of different sources of information?

     b. What is the relative association of data from different sources with cognitive performance measures such as the MCCB?

     c. What is the relative sensitivity to treatment of data from the different sources?

  3. Are there differences in the reliability, validity and sensitivity of the SCoRS based upon geographical region and level of expertise and experience with the instrument?


Experimental procedures

In order to increase the sample size and the diversity of conditions in which SCoRS data were evaluated, we combined data from two very different studies. One study was an academic study conducted at three research centers in the United States. The second study was a phase 2 clinical trial conducted in the United States and Europe.

Validation study: 76 patients with DSM-IV schizophrenia were assessed in the context of a multi-center validation study of functional capacity outcome measures,

Analyses

Demographic information for patients from the two studies is presented. For the purposes of these analyses, the treatment study data includes only subjects from the intent-to-treat population (N=307).

In the treatment study, seven sequential assessments with the SCoRS over both pre-treatment visits (Day −14, Day −7, and twice on Day −4) and post-treatment visits (Day 28, Day 56 and Day 77) allowed an assessment of the test-retest reliability of this measure in the context of treatment and in the
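The snippet above reports test-retest reliability as ICCs but does not show the computation. For illustration, a common estimator for reliability of repeated ratings across visits is the Shrout and Fleiss ICC(2,1) (two-way random effects, absolute agreement, single measure). The sketch below is an assumption about the general approach, not the paper's actual analysis code; the function name and data layout are illustrative.

```python
import numpy as np

def icc_2_1(ratings):
    """Shrout & Fleiss ICC(2,1): two-way random-effects, absolute-agreement,
    single-measure ICC for an (n subjects x k occasions) matrix of ratings."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-occasion means
    # ANOVA mean squares
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between occasions
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For a seven-visit design like the treatment study's, `ratings` would be a subjects-by-visits array of SCoRS global ratings; values near 1 indicate that subjects retain both their rank order and their absolute level across visits, while systematic shifts between occasions (e.g., practice or treatment effects) lower the absolute-agreement ICC.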

Demographics and baseline performance

Demographic and baseline characteristics of patients from the two studies are given in Table 1. Despite considerable differences in the aims and design of the two studies, their demographics were similar. The largest discrepancies were in the percentages of Caucasian participants: 18% Caucasian (Hispanic) and 34% Caucasian (non-Hispanic) in the Validation study, versus 7% Caucasian (Hispanic) and 59% Caucasian (non-Hispanic) in the Treatment study. Mean performance on the first administration of the SCoRS items

Discussion

The SCoRS is an interview-based rating scale of cognitive impairment and related functioning in patients with schizophrenia. Previous work had suggested that the SCoRS has the potential to be used as a clinical measure of cognitive treatment response and to be used in clinical trials as a co-primary measure with functional relevance due to its good reliability and its relation to real-world functioning and performance-based measures of cognition. However, given its acceptance by the US FDA as a

Role of funding source

One of the studies was sponsored by NIMH SBIR grant 5R44MH084240-03. The other study was funded by FORUM Pharmaceuticals.

Contributors

Richard Keefe conceptualized the paper, designed the statistical analysis plan, and wrote the manuscript. Vicki Davis designed the statistical analysis plan, conducted statistical analyses and wrote the manuscript. Nathan Spagnola designed the statistical analysis plan, conducted statistical analyses and wrote the manuscript. Dana Hilt designed one of the studies. Nancy Dgetluck designed the statistical analysis plan and conducted data analysis for one of the studies. Stacy Ruse oversaw the

Conflict of interest

Dr. Richard Keefe currently or in the past 3 years has received investigator-initiated research funding support from the Department of Veterans Affairs, Feinstein Institute for Medical Research, GlaxoSmithKline, National Institute of Mental Health, Novartis, Psychogenics, Research Foundation for Mental Hygiene, Inc., and the Singapore National Medical Research Council. He currently or in the past 3 years has received honoraria, served as a consultant, or advisory board member for Abbvie,

Acknowledgment

We thank Cathy Lefebvre, who assisted with the preparation of the manuscript.
