Introduction

Adaptive decision-making involves successfully predicting appetitive outcomes, as well as selectively approaching rewards (Fields and Margolis, 2015). So far, research on neurochemical modulators of value-based decision-making in humans has centered on contributions of the dopamine, noradrenaline and serotonin systems (eg, Rogers, 2011). Nonetheless, maladaptive decision-making is commonly observed in clinical populations with disrupted μ-opioid receptor (MOR) function, eg, substance dependence (Lubman et al, 2009) and chronic pain (Apkarian et al, 2004). Further, evidence from rodent models demonstrates μ-opioid modulation of important subprocesses of value-based decision-making (Laurent et al, 2015). In rats, MOR agonism primarily enhances preference for high-value rewards, as indicated by measures of consumption (DiFeliceantonio et al, 2012) as well as by effort exerted to achieve rewards (eg, progressive ratio schedules; Zhang et al, 2003). Blockade of MORs has the opposite effects (Cleary et al, 1996). Overall, evidence from pharmacological manipulation studies indicates that the rodent MOR system regulates preference and valuation in the food, sexual (Mahler and Berridge, 2012) and social (Moles et al, 2004) domains.

In humans, the MOR system is best known as a modulator of pain and pleasure (eg, Leknes and Tracey, 2008). Molecular imaging has demonstrated endogenous MOR activity during pain relief and certain types of reward (eg, Boecker et al, 2008; Colasanti et al, 2012; Henriksen and Willoch, 2008; Hsu et al, 2013). Hence, MOR signaling in humans likely affects value-based decision-making by altering the value of rewards and punishments. For instance, substance abusers’ preference for drug cues above all other rewards may be mediated by MOR system dysfunction (Ghitza et al, 2010). Preliminary psychopharmacological evidence from our lab showed that in healthy men, MOR agonism increased, and antagonism decreased, reward motivation and preference, specifically for high-value stimuli, ie, the sweetest drink (Eikemo et al, 2016) and the most attractive opposite-sex faces (Chelnokova et al, 2014; Chelnokova et al, 2016). These findings align well with previous evidence that MOR antagonism reduces preference for high-value food rewards (high-calorie foods; Ziauddeen et al, 2013).

Here, we investigated how the MOR system modulates value-based choice in healthy humans. In a psychopharmacological study with a three-way repeated-measures design, 30 opioid-naive men received the MOR agonist morphine (10 mg per-oral), the non-selective opioid antagonist naltrexone (50 mg), and placebo on 3 separate days. Since only the MOR is strongly affected by both drugs, we assume that behaviors influenced bidirectionally reflect drug effects on MOR and not other opioid receptors. Participants completed a two-alternative signal detection task with asymmetric rewards (Figure 1). The task is designed to induce a considerable response bias towards the most frequently rewarded response option (Pizzagalli et al, 2005). The response bias is markedly reduced in patients with disruptions of reward processing caused by, eg, depression (Pizzagalli et al, 2008b).

Figure 1

Experimental task. Participants were presented with schematic faces and instructed to identify which of two alternative mouths was shown (eg, short or long mouth). We developed three equally ambiguous stimulus sets for use in the three sessions to avoid any learning effects (Supplementary Figure S1). Unknown to the participant, correct identification of one of the stimulus alternatives led to a monetary reward (NOK1, ~15 cents) three times more often than the other stimulus (75 vs 25% reward probability). The most frequently rewarded response option is considered the high-value stimulus. Non-rewarded and incorrect trials were followed by a fixation cross. Altogether 300 trials were divided into three blocks. The differences between the face stimuli have been inflated for illustrative purposes.


To assess MOR system involvement in the processes underpinning value-based choice, we fitted trial-by-trial accuracy and reaction time data with a Bayesian implementation of the drift–diffusion model (DDM) of decision-making. The DDM is a well-established computational model used to study cognitive components of two-alternative decisions (Ratcliff and McKoon, 2008). It has provided valuable information about how decision subprocesses are influenced by drugs (Pedersen et al, in press; Van Ravenzwaaij et al, 2012), by psychopathology (Banca et al, 2015), and by task properties such as time restriction or reward schedule (Mulder et al, 2012; Wiech et al, 2014).

The DDM describes the decision process as a gradual accumulation of the difference in evidence for the two choice options (Ratcliff and McKoon, 2008). The decision begins at a starting point (z) between two decision boundaries; the boundary separation parameter (a) quantifies the speed-accuracy trade-off (Figure 2). The efficiency of evidence accumulation is captured by the drift rate parameter (v). A response is initiated once sufficient evidence has been accumulated to reach one of the two decision boundaries.
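
For intuition, a single DDM trial can be simulated by Euler–Maruyama integration of the diffusion process. The following minimal sketch (in Python, the language of the HDDM toolbox used in our analysis) is purely illustrative; the parameter values are arbitrary and not those estimated in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm_trial(v, a, z, t, dt=0.001, noise=1.0):
    """Simulate one drift-diffusion trial.

    v: drift rate; a: boundary separation; z: relative starting point
    (0 < z < 1, 0.5 = unbiased); t: non-decision time in seconds.
    Returns (choice, rt), where choice is 1 if the upper boundary was
    reached and 0 if the lower boundary was reached.
    """
    x = z * a                  # absolute starting position between 0 and a
    rt = t                     # non-decision time added up front
    while 0.0 < x < a:
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        rt += dt
    return (1 if x >= a else 0), rt

# Illustrative parameters: positive drift and a starting point shifted
# towards the upper boundary, mimicking a bias for the high-value option.
trials = [simulate_ddm_trial(v=0.3, a=2.0, z=0.6, t=0.3) for _ in range(1000)]
```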

Figure 2

The drift–diffusion model of decision-making. The model assumes that relative evidence for two decision alternatives is accumulated until a decision boundary is reached. The figure illustrates key parameters of the model. The drift rate (v) represents how much evidence is on average accumulated per time unit, and can indicate task difficulty, ability or effort exerted in the task. The jagged lines represent sample paths (the upward sloping line for a correct response; the downward sloping path for an erroneous response). The starting point (z), also called bias, provides information about whether the decision maker favors one option before trial onset. The figure here shows no bias. The boundary separation (a) indicates the speed-accuracy trade-off: the larger the separation, the more decision makers prefer accuracy over speed. The parameter t captures non-decision time needed for stimulus processing and response execution. The figure also shows typical response time distributions (error and correct trials) over a large number of choices.


We expected the starting point parameter to reflect the response bias induced by the skewed reward schedule of the present task (Mulder et al, 2012). Further, since MOR drugs modulate both preference and motivation in rodents, we hypothesized that MOR manipulations in healthy humans would modulate the shift in starting point and/or the efficiency of the evidence accumulation. We reasoned that behaviors modulated by the endogenous MOR system in the healthy human brain would show bidirectional drug modulation, as indicated by the linear contrast (morphine>placebo>naltrexone).

Materials and methods

Study Design and Participants

The study was conducted in a double-blind, placebo-controlled manner. Tasks and drug conditions were pseudo-randomized and counterbalanced. Participants were 32 healthy men with normal or corrected-to-normal vision. One participant tested positive on the opiate urine screening (MOP Opiate300 Test Strip; SureScreen Diagnostics Ltd, Derby, UK) and another failed to complete all sessions, yielding a final group of n=30 (mean age 26.9 years, range 20–36). Exclusion criteria included: history of or current psychiatric/medical illness, prior drug dependence/addiction, use of medication other than antihistamines and contraceptives, and complex allergies. Alcohol and drug use were assessed during screening (Supplementary Information).

Time Line

Participants were tested on three different days with a minimum inter-session interval of seven days. Each session lasted approximately 3 h, during which participants completed a battery of experimental tasks. An optimal test interval (60–120 min after drug intake) was deduced by comparing the times of maximal bioavailability and the half-lives of oral morphine and naltrexone. The monetary reward task was conducted on average ~90 min after drug administration. Participants were reimbursed 400–500 NOK ($60–70) per session, depending on task performance. The experimental procedures were approved by the Regional Ethics Committee (2011/1337/REK Sør-ØstD). Participants were informed that they could withdraw from the study at any time. Participants were asked to abstain from alcohol from the evening before each test session, to abstain from eating for an hour before testing, and not to drive a vehicle for 6 h after drug administration. Participants were not allowed to consume caffeine or nicotine during the test session.

Drug Administration

Morphine is an opioid agonist with high affinity for the μ-receptor. To minimize subjective drug effects, we chose 10 mg per-oral morphine (Morfin, Nycomed Pharma, Asker, Norway). Previous reports have shown only weak subjective effects even of larger doses (eg, Zacny and Lichtor, 2008). Naltrexone is a non-selective opioid antagonist with high affinity for μ- and κ-opioid receptors. We used 50 mg per-oral naltrexone (Adepend, Orpha-Devel, Purkersdorf, Austria), a standard dosage that blocks more than 90% of MORs (Weerts et al, 2013). Both drugs reach a plasma concentration plateau ~1 h after intake (eg, Lugo and Kern, 2002; Verebey et al, 1976). Placebo pills were cherry-flavored breath mints that were visually matched to the morphine and naltrexone pills. A small amount of the flavored placebo pills was added to the drug dosages to avoid recognition of the medication by taste. Participants were asked to swallow, rather than chew, the pills.

Control Measurements

Subjective state ratings (including mood: happiness, anxiousness, irritability, and feeling good) were collected (i) before drug ingestion (baseline); (ii) 60 min after drug ingestion; (iii) ~40 min into testing; and (iv) after completion of all tasks, using electronic visual analog scales implemented in MATLAB R2012a (MathWorks, Natick, USA). A motor-coordination task was performed ~100 min after drug administration (Supplementary Information).

Value-Based Decision-Making Task

Reward behavior was tested with a two-alternative forced-choice task (adapted from Pizzagalli et al, 2005; Figure 1). In each of 300 trials, a schematic face with no mouth is presented for 500 ms, followed by a brief presentation of one of two mouth stimuli (100 ms). The two stimuli differ slightly in one dimension, eg, length. The task is to correctly identify the mouth stimulus displayed in each trial. The two mouth stimuli are equiprobable and presented in a random order within each block. Participants are informed that correct responses can sometimes, but not every time, lead to a monetary reward. The reward message is presented immediately after a rewarded trial and replaced by a fixation cross after 1750 ms. Incorrect and unrewarded trials are followed by a fixation cross. Unknown to the participant, the task is based on a skewed reinforcement schedule: one of the two stimuli is associated with a larger probability of reward (75% when the correct answer is provided) than the other stimulus (25% reward probability following a correct trial). The most frequently rewarded response option is considered the high-value option. Participants performed a different version of the task in each session to avoid learning effects (see Supplementary Information, Supplementary Figure S1).
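
As a concrete illustration of this reinforcement schedule, the sketch below generates a 300-trial sequence with the 75 vs 25% reward probabilities described above. It is a simplification for exposition, not the actual E-Prime implementation, and the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 300

# The two mouth stimuli are equiprobable; 'rich' denotes the high-value
# (75% reward probability) and 'lean' the low-value (25%) stimulus.
# A reward is only delivered when the response is also correct.
stimulus = rng.permutation(np.repeat(['rich', 'lean'], n_trials // 2))
reward_prob = np.where(stimulus == 'rich', 0.75, 0.25)
reward_if_correct = rng.random(n_trials) < reward_prob
```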

Behavioral data from this task have largely been assessed within a signal detection framework using block-wise accuracy (aggregated in three 100-trial blocks). While the bias (criterion) in a traditional signal detection framework is calculated from the hit rates for the two response options, the DDM integrates both accuracy and reaction time data in the estimation of the decision parameters. The behavioral biases induced by the task can be reflected in the drift rate (v) and the starting point (z), parameters reflecting the efficiency of evidence accumulation and the a priori preference for one response option, respectively. A key advantage of a DDM analysis is that it clearly separates information processing efficiency from the speed-accuracy trade-off, whereas the discriminability parameter in the signal detection framework conflates the two. The tasks were presented on a 20″ PC monitor using E-Prime software (version 2.0; Psychology Software Tools, Inc., Pittsburgh, Pennsylvania, USA).
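
For comparison with that earlier approach, block-wise signal detection measures can be computed from trial counts as sketched below (after Pizzagalli et al, 2005). The 0.5 added to every cell, which keeps the logarithms defined when a count is zero, follows common practice; the exact correction used in earlier pipelines may differ.

```python
import numpy as np

def sdt_measures(rich_correct, rich_incorrect, lean_correct, lean_incorrect):
    """Response bias (log b) and discriminability (log d) from trial
    counts for the high-value ('rich') and low-value ('lean') stimuli."""
    rc, ri = rich_correct + 0.5, rich_incorrect + 0.5
    lc, li = lean_correct + 0.5, lean_incorrect + 0.5
    log_b = 0.5 * np.log((rc * li) / (ri * lc))   # bias towards 'rich'
    log_d = 0.5 * np.log((rc * lc) / (ri * li))   # discriminability
    return log_b, log_d
```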

Control Data Analysis

To test the effects of the drug manipulations on motor function and mood, we used Bayesian hierarchical implementations of generalized linear models in Stan via RStan (Stan Development Team, 2014; see Supplementary Information for details).

Drift–Diffusion Model Analysis

Reaction time and accuracy data were fitted with the DDM of decision-making. We used the HDDM toolbox (Wiecki et al, 2013), which allows for hierarchical modeling of DDM parameters in a Bayesian framework. To capture the within-subjects design of our experiment, we used a regression approach to model effects of the drug manipulation and of learning over blocks in each session. Because the aim of the current study was to examine the bidirectional effects of the drugs relative to the placebo condition, we directly incorporated the relevant contrasts into the regression model (three drug conditions × three blocks; see Supplementary Information for details).

The DDM included the drift rate (v), starting point (z), boundary separation (a), and non-decision time (t) parameters. The drift rate parameter is often interpreted as an index of task difficulty. However, in a within-subjects design with constant stimulus quality as employed here, drift rate can be interpreted as the efficiency of the signal processing, reflecting attention allocated or effort exerted during the task. Because we had no hypotheses concerning the non-decision time parameter of the DDM, t, this parameter was kept constant across conditions. Additional DDM parameters describing trial-by-trial variation in non-decision time, drift rate, and bias were not tested because parameter recovery experiments showed that these are difficult to estimate reliably on the individual level, and because their estimation is computationally expensive (Wiecki et al, 2013).
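
A minimal sketch of how such a regression DDM can be specified with the HDDM toolbox is shown below. The file name, column names, factor coding and sampler settings are our assumptions for illustration, not the study's exact model specification (see Supplementary Information for that).

```python
import hddm

# One row per trial; assumed columns: 'rt' (s), 'response'
# (1 = high-value option chosen, 0 = low-value option), 'subj_idx',
# 'drug' (morphine/placebo/naltrexone) and 'block' (1-3).
data = hddm.load_csv('decision_task.csv')

# Hierarchical regression DDM: drift rate (v), starting point (z) and
# boundary separation (a) vary by drug and block; non-decision time (t)
# is held constant across conditions, as described above.
model = hddm.HDDMRegressor(
    data,
    ["v ~ C(drug, Treatment('placebo')) * C(block)",
     "z ~ C(drug, Treatment('placebo')) * C(block)",
     "a ~ C(drug, Treatment('placebo')) * C(block)"],
    include=['z'])

model.sample(5000, burn=1000)  # posterior sampling via MCMC
```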

To estimate contrasts of model parameters in a hierarchical manner, we estimated a group-level mean and SD of model parameters, which served as priors for individual level parameters. The effects reported are group-level means. Contrasts for drug effects on DDM parameters are presented as posterior density plots that represent posterior beliefs about parameter values (Figure 5), commonly used for reporting results from Bayesian analyses. To visualize if and how well the model and the estimated parameters captured the data, we performed a posterior predictive check (Gelman et al, 2014), using the fitted model parameters to simulate data and then comparing these to the observed data in order to look for systematic discrepancies (Supplementary Information). The final DDM analysis included data from 27 participants (for details about data exclusion and Bayesian estimation see Supplementary Information).
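
The contrast summaries reported below (posterior means, 90% HDIs, and posterior probabilities such as p(M>P)) can be computed from the MCMC samples as in this generic sketch; it illustrates the computation, not the exact analysis script.

```python
import numpy as np

def hdi(samples, mass=0.90):
    """Narrowest interval containing `mass` of the posterior samples."""
    s = np.sort(np.asarray(samples))
    n = int(np.ceil(mass * len(s)))
    widths = s[n - 1:] - s[:len(s) - n + 1]
    lo = int(np.argmin(widths))
    return s[lo], s[lo + n - 1]

def summarize_contrast(samples_a, samples_b):
    """Posterior summary of a difference, eg morphine minus placebo."""
    delta = np.asarray(samples_a) - np.asarray(samples_b)
    return {'mean': float(delta.mean()),
            'hdi_90': hdi(delta),
            'p(a>b)': float((delta > 0).mean())}
```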

Results

Control Measures

As expected, given that drug doses were chosen to minimize subjective drug effects, Bayesian regression analyses revealed no credible drug effects on mood (Figure 3; see Supplementary Information and Supplementary Table S3 for details). Importantly, blinding was successful (34% correct guesses at debriefing in the third session), indicating that administration of morphine and naltrexone was not associated with sedation, subjective 'high' or other effects likely to affect task performance. Further, comparable performance across drug conditions on a motor-coordination control task indicated that neither drug impaired the ability to perform the task (Supplementary Table S4).

Figure 3

No credible drug effects were found for any of the mood types. For illustration, average ratings with within-subjects SEM are presented for all measurements throughout the test phase (x-axis values refer to minutes after drug ingestion).


Descriptive Statistics

Inspection of the means and variances of the accuracy and reaction time data for each drug condition indicated the presence of the expected behavioral bias towards the most frequently rewarded stimulus. Overall, participants were faster and more accurate on high-reward stimulus trials (Figure 4).

Figure 4

Descriptive data. Average accuracy (a) and reaction time (b) data presented by drug condition and stimulus type (high- and low-reward probability) for the 27 participants included in the final DDM analysis. Error bars represent within-subject SEM.


Drift–Diffusion Model Results

Baseline task performance

To evaluate how the DDM parameters would reflect the behavioral effects of the task, we first examined performance in the placebo condition (block-wise results are presented in Supplementary Information, Supplementary Figure S2, and Supplementary Table S1). A shift in the starting point of the evidence accumulation (z) was evident in the first block and increased over time. This result replicates, using computational modeling methodology, the key finding of previous signal detection analyses of this task, namely that participants increasingly learn to prefer the most frequently rewarded response option (Pizzagalli et al, 2005). Estimation of the other relevant parameters revealed that both the drift rate (v) and boundary separation (a) decreased over time, ie, from the first 100 trials (block 1) until the completion of the 300 trials, likely reflecting task fatigue. Note that comparable fatigue effects were also observed in the two drug conditions.

Drug effects on decision parameters

The DDM revealed bidirectional drug effects on the rate of evidence accumulation, that is, the drift rate (v): opioid antagonism with naltrexone decreased the drift rate compared to the placebo condition (mean difference Δ (90% HDI)=−0.17 (−0.29, −0.058), mean Cohen’s d=−0.46). The posterior probability of a lower drift rate in the naltrexone condition was 0.99 (p(N<P)=0.99). In contrast, drift rate was increased following MOR agonism with morphine (Δ=0.126 (−0.015, 0.268), p(M>P)=0.93, d=0.30). The comparison of posterior distributions for the M>P>N contrast shows that the drift rate following MOR agonism was higher than after MOR antagonism with a posterior probability of 0.99 (Figure 5).

Figure 5

Drug effects on drift–diffusion model parameters. (a) Posterior density plots of the primary contrast morphine>naltrexone (M>N) for the three main DDM parameters (rows 1–3). Posterior density plots represent the probability of parameter values given the prior, model and data. The horizontal lines below each density plot indicate the 90% highest density intervals (HDIs) calculated from the posterior distribution. HDIs can be thought of as Bayesian analogs to confidence intervals (CIs); however, HDIs also provide information about the probabilities of the possible parameter values. The 90% HDI contains the 90% most credible values given the prior information, model, and data. Contrasts with at least 90% of the posterior distribution on either side of zero are considered credibly different and are marked with shaded distributions. (b) Posterior density plots showing posterior distributions of group-level mean parameter estimates for drift rate, starting point and boundary separation for both drugs contrasted with placebo (P) estimates. (c) Group-level DDM parameters of interest in the three drug conditions; midpoints mark the mean parameter estimate, error bars are 90% HDIs of the means. Vertical lines illustrate between-subject variation for each drug condition in the estimation of DDM parameters. (d) Bivariate posterior density distributions of the contrasts of posterior distributions for group-level drift–diffusion parameter estimates. Contrasts show naltrexone–placebo (N–P) and morphine–placebo (M–P) for starting point bias and drift rate posterior distributions. The figure illustrates that MOR antagonism and agonism have opposite effects on starting point (response bias) and drift rate compared to placebo.


A similar drug effect pattern was found for the starting point parameter (z). Across all drug conditions, the starting point was shifted towards the high-value option. Morphine increased the shift of the starting point towards the high-reward boundary compared to the placebo control condition (Δ=0.041 (−0.004, 0.084), p(M>P)=0.94). While the starting point only tended to be less skewed towards the high-reward option in the naltrexone condition compared to placebo (Δ=−0.032 (−0.092, 0.03), p(N<P)=0.81), it was clearly decreased with naltrexone compared to morphine (Δ=0.073 (−0.008, 0.147), p(M>N)=0.94).

We found no credible support that the MOR system influences boundary separation (Figure 5), as evidenced by comparable parameter estimates for naltrexone and placebo (Δ=0.01 (−0.075, 0.054), p(N<P)=0.61, d=0.09) and for morphine and placebo (Δ=0.038 (−0.040, 0.095), p(M>P)=0.76, d=0.34).

In sum, the computational modeling results support MOR system modulation of value-based decision-making through both a shift of the starting point towards the stimulus associated with high reward probability and a change in the rate of evidence accumulation (Figure 5d and Supplementary Table S1).

Discussion

Results from bidirectional drug manipulation of MORs provide evidence for MOR system modulation of value-based choice in healthy humans. As hypothesized, MOR blockade reduced the propensity to modulate behavior as a function of reward probability, primarily through a lower evidence accumulation rate compared to placebo. Conversely, MOR system activation with a non-sedative dose of morphine led to a higher drift rate. Morphine also shifted the starting point of each decision closer to the high-value boundary. Indeed, the agonist and antagonist appear to ‘pull’ these two decision subprocesses in opposite directions relative to placebo.

When people develop a preference towards one response option, this is usually reflected in a shifted starting point of the decision process. The bidirectional MOR modulation of starting point observed here indicates human MOR system involvement in preference for high-value (high-probability) rewards. This finding is in line with rodent studies demonstrating that MOR stimulation increases while MOR blockade decreases preference specifically for high-value rewards such as sucrose, fat, and sex cues (Mahler and Berridge, 2012). Task-induced changes in starting point have previously been associated with brain activity in regions of a fronto-parietal network, including several MOR-rich cingulate and prefrontal areas (Henriksen and Willoch, 2008; Mulder et al, 2012).

MOR modulation of preference for the high-value response option was also manifest in changes in evidence accumulation efficiency (drift rate). Compared to placebo, morphine increased and naltrexone decreased accumulation efficiency in the task. Since stimulus quality was kept constant across the counterbalanced tasks in the three sessions, we interpret the changes in drift rate as reflecting the motivation to obtain reward. Higher motivation to earn money should increase attention and effort exerted in the task, thereby enhancing the efficiency of evidence accumulation. In principle, changes in drift rate could also result from drug effects on visual acuity, ie, from enhanced or disrupted overall stimulus perception. However, we found no evidence of drug effects on vision: MOR manipulations primarily affected accuracy for the high-reward option, and eye-hand coordination performance was comparable across drug conditions. The interpretation of drift rate changes as reflecting altered attention and effort is supported by rodent findings that MOR manipulation modifies the value an animal places on a reinforcer (DiFeliceantonio et al, 2012). Indeed, infusion of a MOR agonist into the nucleus accumbens increased rats’ willingness to expend effort to obtain a reward, as measured by a progressive ratio schedule (Zhang et al, 2003). Furthermore, injecting μ-opioid drugs directly into mesolimbic structures such as the nucleus accumbens increases rats’ incentive motivation (Berridge et al, 2009).

Notably, the present MOR drug effects on value-based choice occurred without credible changes in mood, subjective ‘high’, alertness or motor coordination. Debriefing revealed that participants performed at chance level when asked to identify the drugs at the end of the three sessions. Hence, the drug effects on value-based choice are likely to reflect MOR modulation of decision subprocesses rather than indirect consequences of subjective drug effects. Since both the starting point and the drift rate were modulated bidirectionally, we assume that the effects are driven by the only receptor known to be strongly affected by both drugs (the MOR, not the κ-opioid receptor).

The computational modeling approach used here yields mechanistic information above and beyond what can be gleaned from descriptive data and signal detection analysis. Behavioral biases can have systematic effects on the response time distributions for errors and correct responses; these are captured by the DDM, but would be missed in analyses limited to accuracy data. The estimation of DDM parameters enabled assessment of drug effects on the latent psychological processes that make up the decision process. As a sequential sampling model of decision-making, the DDM has not only been shown to describe behavioral results accurately (Ratcliff and McKoon, 2008); studies linking the model to neurophysiological (Gold and Shadlen, 2007; Shadlen and Newsome, 2001) and neuroimaging (Basten et al, 2010) mechanisms of decision-making also indicate that it can describe neurobiological mechanisms underpinning binary decisions.

In the healthy human brain, endogenous opioids are released to regulate pain (Henriksen and Willoch, 2008) and rejection (Hsu et al, 2013). Preliminary molecular imaging evidence also indicates opioid release during certain rewards: social acceptance (Hsu et al, 2013), exercise ‘high’ (Boecker et al, 2008) and positive mood induction (Koepp et al, 2009). We have previously reported evidence that bidirectional MOR manipulations affected explicit liking and wanting of high-value rewards in the taste (Eikemo et al, 2016) and face perception (Chelnokova et al, 2014; Chelnokova et al, 2016) domains. Here, we corroborate and extend these findings by showing MOR modulation of implicit motivation and preference. Indeed, debriefing confirmed that the task’s skewed reward schedule remained unknown to participants even after three test sessions. Further, we extend the evidence of MOR reward modulation to a secondary reinforcer (money). These results support a general role of MORs across reward domains. Indeed, brain regions richly innervated by MORs, such as the ventral striatum, amygdala and perigenual anterior cingulate cortex, have been consistently implicated in human neuroimaging studies of monetary and other rewards (McClure et al, 2004).

The current results in healthy participants also align well with studies showing impaired value-based decision-making across substance-use disorders (Lubman et al, 2009; Paulus, 2007) and with the growing evidence for dysregulation of MOR function across addictions (Ghitza et al, 2010; Mick et al, 2015). Indeed, addiction is characterized by a maladaptive preference for certain activities, such as drug abuse or pathological gambling, at the cost of all other rewards. Such exaggerated preference for drug cues has been shown to predict relapse to heroin use (Lubman et al, 2009). Thus, the increase in preference for high-value reward observed here after acute morphine administration may represent an innocuous precursor to the maladaptive drug preference commonly observed after chronic MOR stimulation with opioids, stimulants or alcohol (Colasanti et al, 2012; Mitchell et al, 2012). Notably, sustained MOR antagonist treatment has led to improvements in craving and other pathological behaviors across addictions (Lobmaier et al, 2011).

The meso-cortico-limbic dopamine (DA) and limbic MOR systems are co-located in the nucleus accumbens, ventral pallidum, and amygdala, and are thought to play complementary and central roles in reward processing (Berridge et al, 2009). Thus, the present effects on effort exerted in the reward task may in part reflect MOR modulation of mesolimbic DA signaling (Johnson and North, 1992). The reward task used here also involves implicit learning of reward probabilities, which likely recruits dopaminergic neurotransmission (Pessiglione et al, 2006). Indeed, DA antagonism was previously shown to reduce learning of and preference for the high-value reward in this task (Pizzagalli et al, 2008a). However, evidence from substantial rodent opioid research points to important, independent contributions of the MOR system to different aspects of reward processing, notably to reward valuation and ‘liking’ (eg, Berridge et al, 2009; Hnasko et al, 2005; Laurent et al, 2015). Accordingly, we speculate that MOR system modulation of value is the main mechanism underpinning the observed drug effects on starting point and drift rate.

Some limitations of the present study warrant consideration. Only male participants were tested, as opioid drugs have been shown to interact with female cyclic hormones (Smith et al, 1998). Since the present findings mirror results from rodent studies, we consider it likely that future studies will demonstrate similar MOR modulation of value-based decision-making in women. Although we did not explicitly control for intake of nicotine or caffeine in the present analysis, the within-subjects design renders systematic influences of these substances unlikely. Although it is not a limitation as such, we wish to stress that the present results indicate a reduction, not an elimination, of reward responsiveness after naltrexone treatment. At 50 mg per-oral, this opioid antagonist induces a high degree of MOR blockade in the healthy brain (>90% of endogenous receptors; Weerts et al, 2013). Value-based choice is likely influenced by the MOR system in interaction with, eg, the noradrenaline, serotonin, dopamine, and endocannabinoid systems (Rogers, 2011). Understanding how these systems together contribute to aversive and appetitive processes in healthy humans and in psychopathology remains an important task for the future.

In sum, these results suggest that the human MOR system guides value-based choice by tuning decision-making towards high-value rewards. Further, the findings support a role for the human MOR system in promoting adaptive strategies such as effort expenditure and preference for highly valuable rewards across stimulus domains. Knowledge of the basic affective mechanisms of the MOR system in healthy humans is a necessary foundation for the development of improved, targeted treatments for the millions of patients whose MOR systems have been disrupted by chronic pain or substance dependence.

Funding and disclosure

The study was supported by grants from the Norwegian Research Council (ES455867) and the South-Eastern Norway Regional Health Authority (2013053). The study is indirectly supported by Aleris Healthcare, Norway, financing the research position of Frode Willoch at the University of Oslo. The authors declare no conflict of interest.