
Bioorganic & Medicinal Chemistry

Volume 24, Issue 24, 15 December 2016, Pages 6401-6408

From lead optimization to NDA approval for a new antimicrobial: Use of pre-clinical effect models and pharmacokinetic/pharmacodynamic mathematical modeling

https://doi.org/10.1016/j.bmc.2016.08.034

Abstract

Because of our current crisis of resistance, particularly in nosocomial pathogens, the discovery and development of new antimicrobial agents has become a societal imperative. Changes in regulatory pathways by the Food and Drug Administration and the European Medicines Agency place great emphasis on the use of preclinical models, coupled with pharmacokinetic/pharmacodynamic analysis, to rapidly and safely move new molecular entities with activity against multi-resistant pathogens through the approval process and into the treatment of patients. In this manuscript, we display the use of the murine pneumonia system and the Hollow Fiber Infection Model and show how mathematical analysis of the data arising from these models contributes to a robust choice of dose and schedule for Phase 3 clinical trials. These data and their proper analysis act to de-risk the conduct of Phase 3 trials for anti-infective agents. These trials are the most expensive part of drug development. Further, given the seriousness of the infections treated, they represent the riskiest element for patients. Consequently, these preclinical model systems and their proper analysis have become a central part of accelerated anti-infective development. A final contention of this manuscript is that it is possible to embed these models, and in particular the Hollow Fiber Infection Model, earlier in the drug discovery/development process. Examples of ‘dynamic driver switching’ and the impact of this phenomenon on clinical trial outcome are provided. Identifying dynamic drivers early in drug discovery may lead to improved decision making in the lead optimization process, resulting in the best molecules transitioning to clinical development.

Introduction

We currently find ourselves in the midst of a crisis of resistance. Gram-negative pathogens have been highly successful in leveraging multiple resistance mechanisms in combination so as to render our most potent antibiotics virtually useless. This is seen most clearly in the treatment of serious infections in the ICU. It is not rare to see pathogens such as Pseudomonas aeruginosa, Acinetobacter spp. or Enterobacteriaceae such as Escherichia coli or Klebsiella that possess multiple resistance mechanisms, including new β-lactamases that hydrolyze a large number of our agents, along with porin deletions and efflux pump overexpression. The advent and spread of these pathogens has explicit costs, but also hidden costs. Many of our most important advances in modern medicine implicitly depend upon the ability to treat nosocomially-acquired infections. Examples include oncologic chemotherapy, immunotherapy, organ transplants and invasive surgical procedures.

We need new antibiotics for the clinician’s therapeutic armamentarium. Many large pharmaceutical companies have moved out of the anti-infective discovery arena, leaving biotech companies to bear the brunt of the effort in early discovery/development.

In the last decade, the discipline of pharmacokinetics/pharmacodynamics has grown in sophistication. Regulatory agencies have made it plain that this type of analysis is an expected part of the registration package. The FDA currently has an end-of-Phase 2b meeting at which sponsors are expected to provide a dose justification for their choice of dose and schedule for a Phase 3 trial. Likewise, the European Medicines Agency has a strong emphasis on these types of studies as necessary for approval of a new chemical entity.

Perhaps even more importantly, agents with activity against multi-resistant pathogens have been given access to accelerated approval tracks. This has come to pass in no small part because of reliance on these types of pre-clinical evaluations.

In the first section of this review, an example of using an animal model to help in identifying an appropriate dose for a serious infection (in this instance, pneumonia caused by Pseudomonas aeruginosa) will be set forth. A second example will also be explored for the Hollow Fiber Infection Model (HFIM). Finally, some ways to employ the HFIM earlier in the drug discovery and development process (Lead Identification/Lead Optimization and Candidate Selection) will be set forth.

Our laboratory examined the carbapenem meropenem in a neutropenic murine model of P. aeruginosa pneumonia.1 The neutropenic model was chosen because it provides the clearest insight into ‘bug versus drug’. The initial purpose of the study was to identify a linkage between drug exposure and the ability of that exposure to both kill organisms and, importantly, suppress amplification of less-susceptible populations. Previous work by Tam et al.2 demonstrated that Time > MIC (or, alternatively, Cmin/MIC ratio) was the pharmacodynamically linked index for meropenem in P. aeruginosa. Consequently, because of the rapid clearance of meropenem in the mouse, all regimens were administered on a 4-hourly basis over 24 h.

Because the infection is in the alveolar space, we chose to document the concentration–time profile of meropenem both in plasma and in Epithelial Lining Fluid (ELF) in infected mice. The other endpoints examined were the total number of organisms in murine lung at time zero (therapy initiation) and 24 h later, as well as the number of less-susceptible organisms in the total population at these times, determined by plating lung samples on plates containing three times the baseline meropenem MIC of the organism. This concentration was chosen because it is more than one doubling dilution above the MIC but less than four times the MIC, the typical MIC change seen when the parent organism down-regulates OprD, the main porin channel for carbapenems.3, 4

As part of the process of evaluation, our laboratory generally fits a mathematical model to all system outputs from all time points simultaneously. The enabling equations are given in the primary reference.1 We looked at a no-treatment control cohort as well as cohorts where 50–450 mg/kg every 4 h was administered. The fit of the model to the data was quite acceptable. The model fit after the Bayesian step is displayed for the different system outputs (plasma concentration, ELF concentration, total bacterial burden, less-susceptible bacterial burden) in Table 1.

The first piece of important information is how well meropenem penetrates to the alveolar space. This is displayed in Figure 1. In the mouse, alveolar penetration by meropenem is approximately 40%. However, as in man, there is true between-subject variability in penetration, so a Monte Carlo simulation was performed. This demonstrated a wide range of penetration estimates, with an interquartile range of 21–74%.
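The penetration estimate is the ratio of ELF to plasma exposure across simulated subjects. A minimal sketch of this kind of Monte Carlo calculation is shown below; the log-normal dispersion parameters are hypothetical stand-ins, not the values estimated in the primary reference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical log-normal exposure distributions; the real dispersions
# come from the population PK model fit in the primary reference.
auc_plasma = rng.lognormal(mean=np.log(100.0), sigma=0.30, size=n)  # mg*h/L
auc_elf    = rng.lognormal(mean=np.log(40.0),  sigma=0.55, size=n)  # mg*h/L

# Per-subject penetration ratio (ELF AUC / plasma AUC)
penetration = auc_elf / auc_plasma

q25, q50, q75 = np.percentile(penetration, [25, 50, 75])
print(f"median penetration {q50:.0%}, IQR {q25:.0%}-{q75:.0%}")
```

The wide interquartile range arises naturally from the product of two between-subject variance terms, which is why a point estimate of penetration can be misleading for dose selection.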

The reason for the study was identification of the linkage between exposure and Pseudomonas cell kill, as well as between exposure and resistance suppression. These endpoints are displayed in Figure 2. Nosocomial pneumonia is one of the most difficult infections for the clinician to treat. In order to have a high probability of an acceptable clinical response, larger bacterial kills are required.5 As seen in Figure 2, a 2 Log10(CFU/g) cell kill was attained when the meropenem concentration in ELF was above the MIC for 32% of the dosing interval, and a 3 Log10(CFU/g) kill required meropenem ELF concentrations to exceed the MIC for 50% of the dosing interval. This latter exposure target also suppressed resistant subpopulations.
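These targets are expressed as the fraction of the dosing interval during which the ELF concentration exceeds the MIC. Under the simplifying assumptions of bolus-like input and mono-exponential decline (the study itself used the fitted population PK model), %T > MIC can be computed in closed form; the peak, half-life, and MIC below are purely illustrative.

```python
import math

def pct_time_above_mic(cmax, mic, half_life_h, tau_h):
    """Percent of the dosing interval tau with C(t) = cmax * exp(-k*t) above MIC.
    Assumes IV-bolus-like input and one-compartment elimination."""
    if cmax <= mic:
        return 0.0
    k = math.log(2) / half_life_h
    t_above = math.log(cmax / mic) / k      # time for C to decay to the MIC
    return min(t_above, tau_h) / tau_h * 100.0

# Illustrative numbers only (not from the study): peak ELF concentration
# 16 mg/L, MIC 1 mg/L, 1 h half-life, dosing every 8 h.
pct = pct_time_above_mic(16, 1, 1.0, 8.0)
print(f"%T > MIC = {pct:.0f}%")   # meets the 50% (3-log kill / suppression) target
```

With a short half-life, each doubling of the peak buys only one extra half-life of time above the MIC, which is why prolonged infusions rather than larger doses are the usual route to higher %T > MIC.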

Now, in order to employ these targets in man, it was necessary to understand the ELF penetration of meropenem in patients. In this instance, we had studied a population of patients with ventilator-associated bacterial pneumonia (VABP).6 In Figure 3, we see that the median penetration from the Monte Carlo simulation is 25%. However, the range is greater than that seen in the mouse, which, given the physiological alterations in these patients, is not surprising. The interquartile range is 9–70%.

Finally, we need to bridge to man. This will allow direct estimation of the target attainments (targets derived from the murine model) in patients with the pathophysiologic process to be treated (VABP). This is displayed in Figure 4.

The dose and schedule employed (2 g every 8 h as a prolonged infusion) are the most intense approved by the FDA or EMA. As can be seen, the 2 Log10(CFU/g) bacterial kill target is attained approximately 90% of the time when the Pseudomonas MIC value is 1 mg/L. In order to estimate how well this will perform in a clinical trial setting, it is only necessary to take an expectation over the MIC value distribution and the target attainment probability. For the 2 Log10(CFU/g) bacterial kill target, this expectation is 76%; for the 3 Log10(CFU/g) kill and resistance suppression targets, it is approximately 65%. It should be noted that this prediction regarding resistance suppression is concordant with the data from Luyt et al.7 Given that VABP is quite difficult to treat and that the patients are quite ill, it is no surprise that failure rates are often in the 30–60% range. It should be recognized that we calculated target attainment for a 2 g dose, while the most commonly used dose of meropenem is 1 g. This again makes these results quite credible. Consequently, employing such an approach is one way for drug developers to de-risk Phase 3 trials, where the bulk of the expense lies.

Identifying the exposure targets for bacterial cell kill and resistance suppression and then using Monte Carlo simulation for the candidate dose and schedule of drug administration allows taking the expectation over the MIC distribution, providing an estimate of expected response for that candidate regimen. When the calculation provides an expected attainment of the 2 Log kill target of, for example, 40%, it would not be wise to proceed to a clinical trial. If, however, the expected target attainment exceeds 90%, the risk of trial failure is much lower.
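The expectation is simply the frequency-weighted average of the per-MIC target attainment probabilities. A sketch with hypothetical numbers follows; neither the MIC frequencies nor the attainment probabilities are from the study.

```python
import numpy as np

# Hypothetical MIC distribution (fraction of clinical isolates at each MIC)
# and per-MIC probabilities of target attainment from a Monte Carlo
# simulation of the candidate regimen. Both vectors are illustrative only.
mics          = np.array([0.25, 0.5, 1.0, 2.0, 4.0])    # mg/L
isolate_freq  = np.array([0.30, 0.30, 0.20, 0.15, 0.05])
pta_2log_kill = np.array([1.00, 0.98, 0.90, 0.40, 0.05])

# Expectation over the MIC distribution: sum of freq_i * PTA_i
expected_attainment = float(isolate_freq @ pta_2log_kill)
print(f"expected target attainment: {expected_attainment:.1%}")
```

The result (about 84% with these invented inputs) is the single number a developer would weigh against a go/no-go threshold such as the 90% figure mentioned above.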

The HFIM is a somewhat newer approach to help in the process of developing new antimicrobial agents. Here, we will use the drug linezolid, an oxazolidinone, as an example. The pathogen is Mycobacterium tuberculosis, the causative agent of tuberculosis (TB). With the global emergence of Multi-Drug-Resistant (MDR) and eXtensively-Drug-Resistant (XDR) TB, the rapid development of new agents is a matter of high priority. Because of the acuteness of the problem, the oxazolidinones were evaluated quickly in patients with few to no other treatment options.8 While the results were quite promising, long-term use of linezolid resulted in substantial toxicity. In the example below, we examined the effect of linezolid on TB cell kill and resistance suppression, but also examined the linkage of linezolid exposure to mitochondrial toxicity, as measured by assaying Cytochrome c Oxidase Complex 4.9

An advantage of the HFIM is that any concentration–time profile can be generated. Consequently, one does not have animal system pharmacokinetic profiles forced onto the evaluation. Another advantage is that the duration of the experiment can be considerably longer. We routinely run our TB experiments for weeks to months. A disadvantage is that there is absolutely no immune function present, making the results conservative.

In this evaluation, our laboratory decided to evaluate the most common clinical dose of linezolid for TB, 600 mg daily. It should be appreciated that when a specific dose of drug is administered to a large patient population, a range of exposures will be achieved because of between-patient variability in important pharmacokinetic parameter values. Consequently, we simulated exposures for the Area Under the concentration–time Curve (AUC) at the mean exposure associated with 600 mg, at exposures +1 and +2 standard deviations above the mean (equivalent to the mean exposures for 858 and 1116 mg), and at 1 standard deviation below the mean (equivalent to the mean exposure for 342 mg). We also studied a no-treatment control. The 600 mg dose was also fractionated as 300 mg 12 hourly. This approach has the advantage of providing an exposure range and, because we know the percentage points of the distribution, insight into the impact for a population of patients. We can also examine the impact of schedule of administration on cell kill.
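Under the assumption of dose-proportional (linear) pharmacokinetics, the simulated AUCs map directly back to equivalent doses. In the sketch below, the roughly 43% coefficient of variation is inferred from the dose equivalents quoted in the text, not taken from a published population PK model.

```python
# Assuming linear PK, an AUC offset of n standard deviations corresponds
# to an equivalent dose of mean_dose * (1 + n * CV). The 0.43 CV here is
# back-calculated from the quoted dose equivalents (342/858/1116 mg),
# not a parameter from the source study.
mean_dose_mg = 600.0
cv = 0.43   # between-patient SD of AUC as a fraction of the mean

for n_sd in (-1, 0, 1, 2):
    dose_equiv = mean_dose_mg * (1 + n_sd * cv)
    print(f"{n_sd:+d} SD -> equivalent dose {dose_equiv:.0f} mg")
```

This mapping is what lets a handful of hollow-fiber arms stand in for the spread of exposures a single labeled dose would produce across a patient population.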

In Figure 5, we display the cell kill achieved with the different exposures (Panel A) and the amplification of a less linezolid-susceptible population (Panel B). Panel A demonstrates that the mean exposure for 600 mg drives a 1.7 Log10(CFU/ml) decline in bacterial burden over three weeks. The exposures corresponding to +1 and +2 standard deviations both drive 3 Log10(CFU/ml) declines, while the −1 standard deviation exposure drove only a 0.5 Log decline. Somewhat surprisingly, the 300 mg 12-hourly regimen produced better bacterial cell kill than the 600 mg daily regimen.

All active regimens drove amplification of a less linezolid-susceptible population. At first glance it seems puzzling that the 600 mg mean-exposure regimen drove the greatest amplification. This, however, is consistent with the understanding of selective pressure driving resistance. Lower exposures cause less amplification because there is less selective pressure. At higher exposures, amplification is generally less because these greater exposures have an effect on the less-susceptible population itself. The result was described by our laboratory as an ‘inverted U’ phenomenon.10 This is seen in Figure 6. What is clear is that even though the number of resistant organisms declines with increasing exposure, even the highest exposure still allows resistance amplification.

The HFIM also allows development of relationships between exposure and concentration-driven toxicity. For linezolid, the most common toxicity is hematologic and is caused by mitochondrial toxicity.9, 11 Our laboratory employed K562 cells, a human erythromyeloblastoid leukemia cell line (ATCC CCL-243) with platelet-like properties. As in the effect studies, we exposed these cells to differing amounts of linezolid. Over 16 days, we integrated the amount of Cytochrome c Oxidase Complex 4 assayed from each regimen (dependent variable) and related this to either the linezolid trough concentration or the linezolid AUC as the independent variable. This relationship is shown in Figure 7. The linezolid trough concentration is more closely linked to toxicity than is the AUC. It should be noted that this was done on the basis of only two iso-exposure regimens (600 mg daily and 300 mg every 12 h).

When this is linked to the observation about linezolid-induced bacterial cell kill, in which 300 mg 12-hourly performed better than 600 mg once daily, it is clear that for linezolid, effect and toxicity are closely linked. One cannot simply drive up the dose without inducing substantially greater toxicity. Likewise, splitting the dose to a 12-hourly schedule will generate better effect, but again at the price of increased toxicity and, likely, decreased adherence, which is critical in the therapy of TB.

The use of preclinical models such as the murine model of Gram-negative pneumonia and the HFIM for M. tuberculosis has proven valuable in identifying doses and schedules that optimize cell kill, suppress resistance and minimize toxicity. As new drugs are developed for ‘bad bugs’ and the findings are extended to populations of patients through mathematical modeling and Monte Carlo simulation, the ultimate Phase 3 trial design can be optimized and the Phase 3 process de-risked.

However, in both examples, these approaches were applied late in the drug development process. This raises the question as to whether this type of approach can be inserted into the process much earlier, perhaps at the lead-identification/lead optimization stage. We will review two publications in which findings from mathematical models used to analyze HFIM data point to a possible pathway for doing so.

In an evaluation of the anti-influenza agent zanamivir, it was noted that the measure of drug exposure best linked to viral suppression was the trough concentration.12 This was in contradistinction to oseltamivir, another anti-influenza drug of the same class (neuraminidase inhibitor), where the index most closely linked to viral suppression was AUC.13 The impact of once-daily versus twice-daily dosing on influenza virus suppression is shown in Figure 8.12 The differences are most clearly perceived at the standard clinical dose of zanamivir of 600 mg every 12 h. When this is administered as 1200 mg once daily, at the time of maximal antiviral suppression (hours 48–72), there is a greater than 1 Log10(PFU/ml) difference between once- and twice-daily administration in favor of the twice-daily schedule. We hypothesized that the half-life of zanamivir (2.5 h) relative to oseltamivir (8 h) played a causative role in the dynamic driver switching from AUC/EC50 to Trough/EC50 (or Time > EC50).

In order to test this hypothesis, we used one of the strengths of the HFIM: the ability to mimic any half-life. We then administered the zanamivir exposure with a 2.5 versus an 8 h half-life, with the AUC held constant.12 This required that the volume of distribution become larger with the longer half-life, so that the AUC values would match. The impact of changing half-life, and also the impact of once- versus twice-daily dosing with the ‘standard’ zanamivir half-life, is shown in Figure 9.14
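The volume adjustment follows from the one-compartment identities AUC = dose/CL and t½ = ln(2)·V/CL: matching AUC at a fixed dose pins clearance, so a longer half-life forces a proportionally larger volume of distribution. A short sketch with an illustrative dose and AUC (not values from the study):

```python
import math

# One-compartment identities: AUC = dose / CL, t_half = ln(2) * V / CL.
# Holding dose and AUC fixed pins clearance, so stretching the half-life
# from 2.5 h to 8 h scales the volume of distribution by 8 / 2.5 = 3.2x.
dose_mg = 600.0           # illustrative dose, not from the study
auc     = 60.0            # mg*h/L, held constant across both arms
cl      = dose_mg / auc   # L/h, fixed by the matched AUC

for t_half in (2.5, 8.0):
    v = t_half * cl / math.log(2)   # required volume for this half-life
    print(f"t1/2 {t_half} h -> V = {v:.1f} L (CL = {cl:.1f} L/h)")
```

In the hollow-fiber rig the same logic is implemented physically: the diluent flow rate (clearance) is held constant while the central reservoir volume is enlarged to slow the concentration decline.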

In Panels A and B, it is clear that there is an impact of dose fractionation when the half-life of 2.5 h is used, while this impact vanishes with the use of the 8 h half-life. This clearly displays the reason for the switch from AUC/EC50 as the dynamic driver for oseltamivir (which has an 8 h clinical half-life) to Trough/EC50 for zanamivir, which has a 2.5 h half-life.

We then employed a large mathematical model to gain insight into this switch. The enabling equations can be found in the original manuscript.12 The model demonstrated that the maximal effect of zanamivir allowed suppression of 99% of rounds of viral replication. The trough concentrations for once-daily versus twice-daily or thrice-daily dosing with the 2.5 h half-life drove viral suppression of 72% versus 93% and 96%, respectively. This explains the impact of dose fractionation with this half-life and explains why the dynamic driver for zanamivir is Trough/EC50. Alternatively, when the 8 h half-life was employed, the trough concentrations drove viral suppression at trough in excess of 96% for all dose fractionation regimens, explaining why, with the longer half-life, AUC/EC50 is the dynamic driver.

When we began to understand that there could possibly be ‘dynamic driver switching’ as a function of half-life, it became obvious that this could have an impact in the clinical trial circumstance. An underappreciated phenomenon is that there exists true between-patient variability in important pharmacokinetic parameters. Differences in the model parameters will drive differences in volume of distribution, clearance and, hence, half-life.

Raltegravir was the first integrase inhibitor and is dosed twice daily. Preclinical data suggested that raltegravir could be dosed once daily.15 The sponsor performed a large, randomized, double-blind, controlled trial of once- versus twice-daily dosing of raltegravir.16 The results of the trial demonstrated that the daily regimen was not non-inferior, despite the strong preclinical data (target non-inferiority margin of 10%, with an observed 95% confidence interval of −10.7% to −0.8%).

We re-examined the pharmacodynamics of raltegravir17 and compared once- and twice-daily dosing in the light of our findings with zanamivir (see above). We simulated both regimens with 2, 3, 4, and 8 h half-lives. The underlying hypothesis was that between-patient variability would result in a substantial number of patients with half-lives different from the mean, and that this would have an impact on antiviral activity, particularly in the once-daily administration experiments. The results are displayed in Figure 10.

In this figure, for each half-life examined (2, 3, 4 and 8 h), the effect seen with daily dosing is connected by a blue line to the effect seen with 12-hourly dosing. As the half-life increases from 2 to 3 and then to 4 h, the slope of the line connecting the 12-hourly and once-daily dosing groups becomes progressively shallower, indicating less and less adverse effect of once-daily dosing. When an 8 h half-life was examined, there was virtually no difference between daily and twice-daily dosing. This caused us to perform a Monte Carlo simulation with the distribution of half-lives observed for raltegravir. We found that approximately 6.6% of simulated subjects had a half-life of less than 4 h. The margin of difference in the clinical trial was −5.7%; it is likely that the distribution of half-lives caused a switch in the dynamic driver from AUC/EC90 to % time > EC90 (identical to trough/EC90). Consequently, this raises an important principle. When half-life is short, it is likely that time > EC90 (or time > MIC for bacteria) will be the dynamic driver. As half-life increases, AUC/EC90 (or AUC/MIC for bacteria) takes over as the dynamic driver. Therefore, when calculating dose and schedule for clinical trials, it is important to recognize that dynamic switching can take place and to recognize the impact of such a switch for a large population of patients. This also has obvious implications for the development of regimens for children, as they often have higher clearances and shorter half-lives than adults.
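The fraction of patients falling below a critical half-life can be estimated by simulating the between-patient half-life distribution. The sketch below assumes a log-normal distribution with a hypothetical median and spread; the actual raltegravir parameter estimates come from the population analysis cited above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical log-normal between-patient half-life distribution
# (median 8 h, ~46% log-scale SD). These are illustrative parameters,
# not the fitted raltegravir population PK estimates.
half_life = rng.lognormal(mean=np.log(8.0), sigma=0.46, size=n)

# Fraction of simulated subjects below the 4 h threshold at which
# once-daily dosing began to lose effect in Figure 10
frac_short = float(np.mean(half_life < 4.0))
print(f"fraction of simulated subjects with t1/2 < 4 h: {frac_short:.1%}")
```

With these invented parameters the short-half-life fraction lands near the 6.6% figure quoted in the text, illustrating how a modest tail of the pharmacokinetic distribution can account for a clinically meaningful efficacy margin.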

One of the real difficulties in employing pharmacodynamic approaches earlier in the discovery process is that there is substantially less information about any potential compound or class of compounds. Yet, it would be important to drive the lead identification/lead optimization process in a way that results in the best choices.

What are the determinative factors for microbiological effect? One is a measure of the potency of the drug for the pathogen in question, whether this is referred to as an MIC, an EC90, or some other metric. Another is protein binding, which almost always (though not invariably) reduces apparent potency as binding increases. Binding may also have an impact on effect-site penetration or on the rapidity of effect-site penetration. Finally, the pharmacokinetic profile of the drug (volume of distribution, clearance, half-life) will have a major impact on the viability of the drug. For example, ultra-short half-life drugs (<1 h) tend to be difficult to develop and are shied away from by companies. Most of these factors, such as the MIC and the protein binding, can be rapidly ascertained. The pharmacokinetic profile is a virtual black box early in the discovery/development process. Allometric species scale-up is possible, but this generally occurs much later in the process.

The understanding that dynamic switching can occur as a function of half-life leads to the possibility of employing the HFIM much earlier in the discovery process. Small amounts of drug can be employed to perform pharmacodynamic experiments with empirical half-lives. Very short half-life drugs (circa 1–2 h) will almost always be driven by time above a threshold (or, equivalently, trough/MIC ratio). When the half-life is ≥8 h, AUC/MIC has the highest likelihood of being the dynamic driver. Between these two points, experimentation is critical. Aminoglycosides and fluoroquinolones are examples of classes of agents where AUC/MIC is the dynamic driver even though half-lives are relatively short. Consequently, performing short free-drug exposure experiments with a range of half-lives and a range of exposures can provide substantial insight into rank-ordering candidate molecules. Understanding the exposures that produce good cell kill and resistance suppression across a range of possible half-lives will be helpful in sorting among congeners. This depends upon knowing the potency measure (MIC) and the effect of protein binding prior to the HFIM experiments.
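The triage rule described here can be stated as a simple heuristic. The cut-points below paraphrase the text, and the class exceptions it notes (aminoglycosides, fluoroquinolones) are exactly why the middle region always requires experiment rather than assumption:

```python
def likely_dynamic_driver(half_life_h: float) -> str:
    """Heuristic triage paraphrasing the text: very short half-lives tend to
    be time-above-threshold (trough/MIC) driven, long half-lives AUC/MIC
    driven, and the window in between needs HFIM dose-fractionation
    experiments (class exceptions such as fluoroquinolones exist)."""
    if half_life_h <= 2.0:
        return "Time > threshold (trough/MIC)"
    if half_life_h >= 8.0:
        return "AUC/MIC"
    return "indeterminate: run HFIM dose-fractionation experiments"

for t in (1.0, 4.0, 10.0):
    print(f"t1/2 {t} h -> {likely_dynamic_driver(t)}")
```

Such a rule is only a starting prior for designing the dose-fractionation arms of an HFIM study, never a substitute for them.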

While the examples here are viral, there have been bacterial examples, such as the quinolones, where shorter-half-life agents were Time > MIC driven (and were stopped in development), while the better-known agents are AUC/MIC driven. The mechanism behind dynamic switching has not been fully elucidated. I speculate that the mean residence time of the molecule on its target has a central role to play. Drug at the effect site serves as an impedance factor, keeping drug on the target site. With shorter-half-life agents, there is a greater tendency for the drug to dissociate from the target and for the pathogen to return to replication.

Most frequently, new chemical series are derived to optimize potency. At other times, driving to lower clearances and longer half-lives may be the best path forward; this is the case when time > MIC is the dynamic driver. At still other times, driving to lower protein binding, to lessen the impact on cell kill and speed penetration to the infection site, may be the best path. HFIM experiments can rapidly deconvolute these issues. As more information becomes available, these experiments can be repeated over longer durations as the number of candidates is winnowed down. In this way, better decisions can be taken early in the discovery/development process to obtain the best molecules to answer our current crisis of resistance.

References and notes (17)

  • J.J. Eron et al., Lancet Infect. Dis. (2011)
  • G.L. Drusano et al., Antimicrob. Agents Chemother. (2011)
  • V.H. Tam et al., Antimicrob. Agents Chemother. (2005)
  • J.P. Quinn et al., J. Infect. Dis. (1986)
  • B.S. Margaret et al., J. Antimicrob. Chemother. (1989)
  • G.L. Drusano et al., J. Infect. Dis. (2014)
  • T.P. Lodise et al., Antimicrob. Agents Chemother. (2011)
  • C.E. Luyt et al., Antimicrob. Agents Chemother. (2014)
