Introduction

The prevalence of sexual problems in adults is high, ranging from 30 % to 50 %. In the U.S. National Health and Social Life Survey (in-person interviews in 1992), 43 % of women and 31 % of men aged 18–59 who had been sexually active in the past year self-reported at least one sexual problem.1 The U.S. National Social Life, Health, and Aging Project (in-person interviews in 2005–06) found that over half of sexually active adults aged 57–85 reported a sexual problem lasting several months or more.2 Despite the high prevalence of problems, assessing sexual function during clinic visits is not commonplace. Limited patient–provider communication about sexual matters has been documented in primary care,3,4 ob/gyn,5 cardiology,6 and oncology.7,8 Patients can be reluctant to initiate discussions with their providers about sexual function, preferring that providers broach the topic.7,9 Providers can also be reluctant to raise the subject,10 especially if they feel they lack the knowledge or skills needed to address this issue,11 yet there is some evidence that a pre-visit questionnaire can promote patient–provider discussions of sexual dysfunction.12 Given the availability of treatments and referral options for sexual problems, the routine assessment of sexual concerns might reduce barriers to discussing this issue while also providing a means to assess longitudinal changes in an important area of health. Therefore, the availability of easily administered and interpreted tools to assess sexual concerns could improve clinical care and standardize data collection efforts for research.

Multiple validated instruments are available for self-assessment of sexual function, but validated instruments designed for research (e.g., the International Index of Erectile Function,13 the Female Sexual Function Index,14 and the Patient-Reported Outcomes Measurement Information System® Sexual Function and Satisfaction [PROMIS® SexFS] measure19) are typically too long to be practical in general clinical practice settings. Brief self-assessment of sexual problems in a clinical context has the potential to improve clinical care by tracking trends in sexual problems over time and facilitating patient–provider communication about sexual function. Very brief assessments may also reduce missing data, which is desirable in both the clinical and research context. Therefore, our objective was to develop and validate a single-item clinical screener that would capture common sexual problems and concerns for men and women. This work was conducted in conjunction with the development of version 2.0 of the PROMIS SexFS measure and informed by members of the Scientific Network on Female Sexual Health and Cancer (http://cancersexnetwork.org), an international interdisciplinary network of clinicians and researchers.

Methods

We created three single-item screeners, which were informed by the self-report items currently used by several members of the Scientific Network on Female Sexual Health and Cancer in clinic intake forms. This included items used in oncology, gynecology, and sexual medicine clinics. We solicited and incorporated additional feedback on the screeners from other members of this group as well as from members of the PROMIS Sexual Function domain group.

We tested and refined the screeners in conjunction with qualitative and psychometric testing for version 2.0 of the PROMIS SexFS. Qualitative testing included two rounds of cognitive interviews (n = 7 and n = 11) with participants recruited through physician letters and the clinical trials website of the Duke University Health System. Interviews were conducted in person by an interviewer of the same sex as the participant.15 The specific cognitive interview methodology used is described in detail elsewhere.16 Participants in psychometric testing were members of the GfK KnowledgePanel®, which comprises a probability-based sample of U.S. mailing addresses weighted to provide a valid representation of the U.S. population. For those selected who do not have Internet access, GfK supplies a computer and Internet service.17 Participants completed an online self-report survey covering basic sociodemographic and health concepts, the PROMIS SexFS, and the clinical screeners. All screeners were asked of all participants. Additional details about study design are provided in the appendix. The institutional review board of the Duke University Health System approved this study, and participants provided informed consent.

Clinical Screeners

The three screeners, administered in order during item testing, were as follows: 1) a yes/no item with no recall period asking about any sexual problems or concerns (“general screener”), 2) a yes/no item asking whether, in the past 12 months, the person had experienced any problems for 3 months or more, with a detailed list of example problems (“long list screener”), and 3) an item that was identical to the long list except that it moved the examples of problems from the item stem (i.e., the question itself) to the response options and included the instruction to “check all that apply” (“checklist screener”). Respondents who chose “some other problem or concern” in the checklist screener were asked to specify the problem (open-ended response).

PROMIS SexFS

To provide evidence for the validity of the single-item screeners, we related responses on the screeners to robust sexual function scores as measured by the PROMIS SexFS, a comprehensive measure designed for research.18–20 The PROMIS SexFS version 2.0 includes 17 domains. For individuals who had been sexually active in the past 30 days, we used the domains most closely corresponding to the response options in the checklist screener for comparisons (shown in Table 4). For each PROMIS SexFS domain, a higher score represents more of that domain; for example, a higher Erectile Function score reflects better erectile function, and a higher Vaginal Discomfort score reflects greater vaginal discomfort. PROMIS SexFS domain scores are expressed on a T-score metric in which a score of 50 corresponds to the U.S. general population average for sexually active adults and has a standard deviation of 10.21 Participants who had not been sexually active with a partner during the past 30 days were asked to provide the reasons for lack of activity. Again, we used the response options related to sexual function and satisfaction that most closely corresponded to the content in the checklist screener.
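The T-score metric can be illustrated with a short calculation. This is a sketch only: actual PROMIS scores are produced by item-response-theory calibrated scoring, not this simple linear formula, and the function names here are ours.

```python
# Illustration of the T-score metric used for PROMIS SexFS domains:
# mean 50 and SD 10 in the reference population (sexually active
# U.S. adults). A sketch; real PROMIS scoring uses IRT calibration.

def z_to_t(z: float) -> float:
    """Convert a standardized (z) score to the T-score metric."""
    return 50 + 10 * z

def t_to_z(t: float) -> float:
    """Convert a T-score back to a standardized (z) score."""
    return (t - 50) / 10

# A T-score of 40 lies one standard deviation below the
# population average; a T-score of 50 is exactly average.
print(z_to_t(-1.0))  # 40.0
print(t_to_z(40.0))  # -1.0
```

On this metric, a group whose mean domain score is 45 sits half a standard deviation below the general-population average for that domain.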

Our analyses also utilized items from the PROMIS SexFS Bother Regarding Sexual Function domain that asked how bothered the person was by their sexual function in key domains. These items were reported on a five-point scale ranging from “not at all” to “very much.”

Analytic Approach

We used means and standard errors (SEs) to summarize continuous and ordinal variables, and frequencies and percentages for discrete variables, with complex survey sample design weighting applied. Assuming that the problems endorsed on the checklist screener represented the true set of problems that people had experienced, we explored the discrepancies between the responses to the long list and checklist screeners in three ways. We hypothesized that discrepancies might be explained by 1) the number of problems someone had, where people who said “yes” to the long list screener would indicate having more problems, 2) the specific problems that people had, where people who said “yes” to the long list screener would indicate having a problem other than “no interest,” and/or 3) the level of bother associated with any problems, where people who said “yes” to the long list would be more bothered by their problem.
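The design-weighted point estimates described above can be sketched as follows. The actual analyses used SAS survey procedures, which also account for the complex sample design when computing standard errors; this sketch, with made-up data and illustrative variable names, shows only how the weighted mean and weighted percentage are formed.

```python
import numpy as np

# Sketch of design-weighted point estimates (illustrative data only).
# The paper's analyses used SAS survey procedures; here we show just
# the weighted mean and weighted percentage, not design-based SEs.

scores = np.array([48.0, 55.0, 42.0, 60.0])  # e.g., PROMIS domain T-scores
endorsed = np.array([1, 0, 1, 0])            # e.g., endorsed a screener item
weights = np.array([0.8, 1.2, 1.0, 1.5])     # survey design weights

# Weighted mean: sum of weight * value divided by sum of weights.
weighted_mean = np.sum(weights * scores) / np.sum(weights)

# Weighted percentage endorsing: same formula on a 0/1 indicator.
weighted_pct = 100 * np.sum(weights * endorsed) / np.sum(weights)

print(round(float(weighted_mean), 2))
print(round(float(weighted_pct), 1))
```

Unweighted summaries would weight every respondent equally; the design weights restore representativeness of the U.S. population.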

We evaluated the construct validity of the checklist screener in two ways. First, among sexually active participants, we estimated the difference in means and its 95 % confidence interval for each domain, hypothesizing that endorsing a problem on the checklist screener would be associated with worse functioning on the corresponding PROMIS SexFS domain. Since 10 points is equivalent to 1 standard deviation in score, by adapting standard criteria used for evaluating effect sizes,22 we considered two points a small difference, five points a medium difference, and eight points a large difference. Second, among non-sexually active participants, we tested the difference in the percentage of men and women who endorsed each reason using chi-square, hypothesizing that endorsing a problem on the checklist screener would be associated with a higher likelihood of choosing the corresponding reason for not having sexual activity with a partner in the past month.
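The effect-size thresholds above follow directly from the T-score metric: because 10 points equal 1 standard deviation, differences of 2, 5, and 8 points correspond to standardized effect sizes of 0.2, 0.5, and 0.8 (small, medium, large). A minimal sketch, with a function name and labels of our own choosing:

```python
# Classify a T-score mean difference using the thresholds in the text:
# 2 / 5 / 8 points on a 10-point-per-SD metric correspond to small /
# medium / large effects (0.2 / 0.5 / 0.8 SD). Sketch only; the label
# for sub-threshold differences ("negligible") is ours.

def classify_difference(points: float) -> str:
    d = abs(points)
    if d >= 8:
        return "large"
    if d >= 5:
        return "medium"
    if d >= 2:
        return "small"
    return "negligible"

print(classify_difference(6.3))   # medium
print(classify_difference(-9.0))  # large
```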

We used SAS software version 9.2 (SAS Institute, Cary, NC) and a two-tailed significance level of α = 0.05 for all assessments. All statistics were adjusted for the sample design.

Results

Table 1 shows the sample characteristics, weighted to represent the U.S. population. Table 2 shows the responses to the three single-item screeners as well as the number of sexual problems or concerns that men and women endorsed in the checklist screener. Lack of interest (27 %) was the most prevalent concern for women; difficulty with erection (16 %) was the most prevalent concern for men. About 5 % of men and women wrote in an “other” response, some of which could be recoded into existing categories. Most of the responses that were not recoded related to partner issues (e.g., lack of partner, partner’s health, feeling unattractive, or lack of attraction to partner).

Table 1 Weighted Sample Characteristics* (n = 3515)
Table 2 Prevalence of Sexual Problems for Men and Women Based on Single-Item Screeners (n = 3515)

Missing data were minimal for all three screeners, 2–3 % for women and 4–5 % for men. There were large differences in the prevalence of sexual problems or concerns among the screeners. While 15 % of men and 10 % of women endorsed the general screener, 17 % of men and 20 % of women endorsed the long list screener, and 30 % of men and 38 % of women endorsed at least one problem on the checklist screener. Below we describe inconsistencies in responses; additional analyses are presented in the appendix.

Explaining Inconsistencies in Responses

First, there were differences in the number of problems people reported. Women and men who answered “yes” to the long list screener (vs. “no”) endorsed a greater number of problems on the checklist screener (mean 2.5 vs. 1.3 problems for women and 1.8 vs. 1.2 for men; both P < 0.0001).

Second, there were differences in the specific problems that were endorsed on the checklist screener. Women who answered “yes” to the long list (vs. “no”) were more likely to indicate problems with lubrication (40 % vs. 12 %), pain (37 % vs. 11 %), orgasm (36 % vs. 11 %), sexual enjoyment (38 % vs. 7 %), or anxiety (15 % vs. 6 %). This was not the case for “no interest,” which was endorsed similarly by women answering either “yes” or “no” on the long list screener (72 % vs. 69 %). Men who answered “yes” to the long list (vs. “no”) were more likely to indicate problems with erectile function (71 % vs. 30 %), orgasm (23 % vs. 12 %), or anxiety (27 % vs. 16 %). There were only small differences for interest (39 % vs. 33 %), pain (6 % vs. 7 %), and sexual enjoyment (6 % vs. 4 %).

Third, there were differences in the level of bother associated with problems. Women who answered “yes” to the long list screener (vs. “no”) were more bothered by their level of interest (mean score of 4.4 vs. 3.7 on a 5-point scale, P < 0.0001), by pain (3.1 vs. 2.4, P < 0.01), and by orgasm (3.3 vs. 2.7, P = 0.02). The difference in bother scores for lubrication was smaller and not significant (2.7 vs. 2.5, P = 0.3). Men who answered “yes” to the long list screener (vs. “no”) were more bothered by their level of interest (4.0 vs. 3.6, P = 0.04) and by erectile difficulties (3.5 vs. 2.8, P < 0.0001). The difference in bother scores for orgasm was not significant (3.6 vs. 3.1, P = 0.07). The PROMIS SexFS Bother Regarding Sexual Function domain does not include questions about bother regarding the lack of enjoyment or anxiety concepts.

Construct Validity

We examined construct validity separately for men and women who were and were not sexually active in the past month. Table 3 presents the PROMIS SexFS scores and differences in mean scores comparing individuals who did and did not endorse the general screener and the long list screener. Sexually active women who said they had a sexual problem on the general or long list screener had lower function on the PROMIS SexFS compared to women who said they did not have a sexual problem (i.e., answered "no" to either screener). The differences were medium to large and statistically significant. Likewise, sexually active men who said they had a sexual problem or concern on the general or long list screener had lower function compared to men who said they did not have a sexual problem or concern (i.e., answered "no" to either screener). The differences were large and statistically significant in all domains.

Table 3 PROMIS SexFS Scores and Differences in Mean Scores for Sexually Active Men and Women Who Did and Did Not Endorse the General and Long List Screeners

Table 4 presents the PROMIS SexFS scores and mean differences in scores comparing individuals who did and did not endorse each problem on the checklist screener. Sexually active women who endorsed a specific problem on the checklist screener had decreased function, on average, in the corresponding domain of sexual function on the PROMIS SexFS. All differences were medium or large and statistically significant. Likewise, sexually active men who endorsed a specific problem on the checklist screener had decreased function, on average, in the corresponding domain of sexual function as measured by the PROMIS SexFS. All differences were medium or large and statistically significant, with the exception of not enjoying sex, likely due to the very small sample size for that response on the checklist screener (n = 19).

Table 4 PROMIS SexFS Scores and Differences in Mean Scores for Sexually Active Men and Women Who Did and Did Not Endorse Problems on Checklist Screener

Table 5 shows the relationship between endorsing a problem on the checklist screener and endorsing the same response as a reason for not having sexual activity with a partner in the past 30 days. Among men and women who had not been sexually active in the past 30 days, those who endorsed a specific problem on the checklist screener were more likely to endorse that same reason in response to why they had not had sexual activity with a partner in the past 30 days, with the exception of enjoyment for men (small sample size, as above) and anxiety for women.

Table 5 Relationship Between Checklist Endorsement and Reason for Not Having Sexual Activity with a Partner

Discussion

In a representative sample of U.S. adults, the prevalence of sexual problems or concerns that were self-reported in an online survey was quite different depending on how the question was asked. When asked as a global yes/no question, with no recall period and no examples of common sexual problems, 1 in 10 women and 1 in 7 men reported having a sexual problem or concern. When men and women were asked to report specific sexual problems or concerns over the past year, with response options in a checklist style, we found that roughly 1 in 2.5 women and 1 in 3 men reported at least one sexual problem. The discrepancy between the general yes/no screener and the other (long list and checklist) screeners was not unexpected, given the differences in recall period and specificity. However, the discrepancy between the long list and checklist screeners, which were identical in wording but different in format, was striking. The discrepancy appears to be related to the number and type of sexual problems the person had, as well as how bothered he or she was, for many of the domains.

Limitations

The main limitation of this study is the potential for order effects. The screener questions were administered after the PROMIS SexFS items; participants had already answered many questions regarding specific aspects of sexual function by the time they were asked the more general screener questions. Yet the prevalence of reporting a sexual problem on the checklist screener, the last one administered to the sample, was within the range found in other national surveys that asked similar questions. The order of administration of the three screeners was not randomized, so it is possible that order magnified or diminished the differences we observed between the items. Because the items within the individual PROMIS SexFS domains were randomized, we were able to test for order effects between items in that context. We tested item pairs in the domains of interest, erectile function, and vaginal discomfort, and found no evidence of order effects (effect sizes 0.002–0.09, P > 0.51).

Recommendation

All of the screeners tested showed evidence for basic validity and had minimal missing data, but they varied substantially in the number of people who endorsed them. A major consideration in determining which screener to recommend for routine use in clinical practice is the extent to which responses to the screener help guide the patient’s clinical care. Using this rationale, the checklist screener is the best choice. It was endorsed by a greater number of participants compared to other ways of asking the question, and thus could facilitate patient–provider communication about sexual problems for a larger group of people. For instance, providers might be more likely to raise the issue with their patients if they are aware of patients’ concerns prior to the visit. Moreover, the checklist screener format allows for efficient identification of specific problems over time and can help to guide the selection of specific interventions. Finally, in the context of identifying and treating patients with sexual concerns, the risk of over-identifying patients who report concerns on the screener but are not bothered by them or do not prioritize them is small compared with the risk of missing patients with true problems, further supporting the more specific checklist approach.

Nevertheless, in the context of an existing “review of systems” style intake form, where the inclusion of the checklist screener would prove overly burdensome or infeasible for other reasons, we recommend that the general screener, or specifically the phrase “sexual problems or concerns,” be included as an available field for both men and women. While this screener does not have the specificity to identify particular sexual problems that could inform treatment options, it does identify which patients might benefit from further discussion with the provider, and is therefore preferable to omitting the item entirely.

After analyses, we tested modifications to the wording of the checklist screener in a second round of cognitive interviews. Based on the results, we 1) changed “no interest in sexual activity” to “wanted to feel more interest in sexual activity” in order to incorporate a sense of bother to the response, and 2) clarified the meaning of “erection difficulties.” The final recommended screener is displayed in Table 6.

Table 6 Recommended Clinical Screener

Conclusions

We developed and validated a single-item screener to capture common sexual concerns that men and women had experienced over the past year. Our understanding of the sexual side effects of medical treatments would be improved if this screener were routinely used and analyzed. Adoption of this item across clinical sites would also facilitate multi-site research efforts to improve sexual outcomes for patients.