Open Access: This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Current toxicity protocols relate measures of systemic exposure (i.e., AUC, Cmax), as obtained by non-compartmental analysis, to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Simulation scenarios mimicking toxicology protocols in rodents were evaluated. To account for differences in pharmacokinetic properties, hypothetical drugs with varying disposition characteristics were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as the area under the concentration versus time curve (AUC), peak concentration (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e., AUC, Cmax and TAT) than for non-compartmental analysis, irrespective of group or treatment duration. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
The online version of this article (doi:10.1007/s10928-015-9413-5) contains supplementary material, which is available to authorized users.
Keywords: Toxicokinetics, NOAEL, Safety margin, Model-based drug development

The purpose of toxicokinetic studies in the evaluation of safety pharmacology and toxicity is the prediction of the risk that exposure to a new chemical or biological entity represents to humans [1, 2]. Understanding of the relationships between drug exposure, target engagement (i.e., activation or inhibition) and downstream biological effects of a given physiological pathway can provide insight into the mechanisms underlying both expected and ‘unexpected’ toxicity [3] (Fig. 1). In addition, the use of a mechanism-based approach has allowed better interpretation of time-dependencies in drug effect, which are often observed following chronic exposure to a drug (e.g., delayed toxicity) [4, 5].
Fig. 1 Diagram displaying the contribution of toxicokinetics and pharmacology to the characterisation of target-related adverse events and safety risk assessment. The circle depicting target efficacy highlights the role of information on primary target engagement for safety risk assessment. Data on target efficacy are usually obtained during in vitro and in vivo screening. The arrow indicates that inferences can be made about safety and risk based on evidence from drug exposure and organ-specific toxicity data. Reprinted with permission from Horii 1998 [3]
Despite the increased attention to the importance of toxicokinetics in drug discovery and during the early stages of clinical development, the extrapolation and prediction of a safe exposure range in humans from preclinical experiments continues to be one of the major challenges in R&D (Fig. 2) [6]. Irrespective of the choice of experimental protocol, a common practice in toxicology remains the assessment of empirical safety thresholds, in particular the no observed adverse effect level (NOAEL), a qualitative indicator of acceptable risk. Even though support for the existence of thresholds has been argued on biological grounds [7–9], the NOAEL has been used to establish safe exposure levels in humans. In fact, this threshold represents a proxy for another threshold, i.e., the underlying no adverse event level (NAEL).
Fig. 2 General toxicity data generated to support early clinical trials are gathered in the pre-IND/CTX stage. After IND/CTX submission, the regulatory agency will confirm whether adequate evidence of safety has been generated for human trials. Parameters derived from toxicokinetic data, such as the NOAEL, play a key role in the approval of protocols for first-time-in-human studies. IND/CTX investigational new drug application, NDA new drug application, TK toxicokinetic study. Reprinted with permission from Horii 1998 [3]
The definition of the NOAEL varies from source to source [6]. Its calculation involves determining the lowest observed adverse effect level (LOAEL), i.e., the lowest dose level at which adverse events are recorded; the NOAEL is the dose level immediately below it. If no LOAEL is found, the NOAEL cannot be determined. Usually, in the assessment of the LOAEL, measures of systemic exposure are derived, such as the area under the concentration versus time curve (AUC) and peak concentration (Cmax), which serve as the basis for the maximum allowed exposure in dose escalation studies in humans [10]. The aforementioned practices in safety and toxicity evaluation are driven by regulatory guidance [11, 12]. The scope of these guidances is to ensure that data on the systemic exposure achieved in animals are assessed in conjunction with dose level and its relationship to the time course of toxicity or adverse events (Fig. 2). Another important objective is to establish the relevance of these findings for clinical safety, as well as to provide information aimed at the optimisation of subsequent non-clinical toxicology studies.
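As a minimal sketch of this decision logic (the dose levels and adverse-event flags below are purely illustrative, not data from any study):

```python
# Hypothetical dose groups (mg/kg) paired with whether adverse events (AEs)
# were observed in that group.
groups = [(10, False), (30, False), (100, True), (300, True)]

# LOAEL: lowest dose level at which adverse events are recorded.
loael = min((dose for dose, ae in groups if ae), default=None)

# NOAEL: highest AE-free dose below the LOAEL; undefined when no LOAEL exists.
noael = None
if loael is not None:
    candidates = [dose for dose, ae in groups if not ae and dose < loael]
    noael = max(candidates) if candidates else None

print(loael, noael)  # 100 30
```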
Whilst the scope and intent of such guidance have been well described since its introduction by the ICH in 1994, much less attention has been paid to requirements for the analysis and interpretation of the data. In fact, precise details on the design of toxicokinetic studies, or on the statistical methods for calculating or estimating the endpoints of interest, are not specified [13–15]. Instead, the assessment of exposure often takes place in satellite groups, which may not necessarily present the (same) adverse events or toxicity observed in the main experimental group. This is done because interferences associated with blood sampling procedures may affect toxicological findings; for the same reason, blood sampling for pharmacokinetics is often sparse [16]. Such practice also diverges from modelling efforts in environmental toxicology, a field in which deterministic, physiologically-based pharmacokinetic models have long been used [17, 18].
As a consequence, safety thresholds are primarily derived from inferences about the putative pharmacokinetic profiles in the actual treatment group. Furthermore, these thresholds rely on the accuracy of composite profiles obtained from limited sampling in individual animals. Composite profiles consist of pooled concentration data, averaged per time point under the assumption that inter-individual differences are simply residual variability, rather than intrinsic differences in pharmacokinetic processes [19]. Pharmacokinetic parameters such as the area under the concentration versus time curve (AUC) and observed peak concentration (Cmax) can then be derived either from the composite profile or by averaging individual estimates from serial profiles in satellite animals when frequent sampling schemes are feasible. Given that the parameters of interest are expressed as point estimates, within- and between-subject variability, as well as uncertainty in estimation, are not accounted for. In addition, pharmacokinetic data generated from different experiments are not evaluated in an integrated manner, whereby drug disposition (e.g., clearance) can be described mechanistically, or at least compartmentally, in terms of both first- and zero-order processes. This is further complicated by another major limitation in the way exposure is described by naïve pooling approaches, namely the impossibility of accurately deriving parameters such as cumulative exposure, which may be physiologically more relevant for late-onset or cumulative effects (e.g., lead toxicity, aminoglycosides) [20, 21]. Time spent above a threshold concentration may also bear greater physiological relevance for drugs which disrupt homeostatic feedback mechanisms. Such parameters cannot be described by empirical approaches due to limitations in sampling frequency.
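The non-compartmental summary measures discussed above can be sketched from a composite (pooled-mean) profile as follows. The sampling times, concentrations and threshold are illustrative, and the linear trapezoidal rule stands in for whatever NCA implementation is actually used; note how the sparse grid makes the time-above-threshold (TAT) estimate crude:

```python
# Composite profile: pooled mean concentration per sampling time (illustrative).
times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]   # h
conc  = [0.0, 8.0, 10.0, 7.0, 3.0, 1.0]  # mg/L

# AUC over the sampled interval by the linear trapezoidal rule.
auc = sum((t1 - t0) * (c0 + c1) / 2
          for (t0, c0), (t1, c1) in zip(zip(times, conc),
                                        zip(times[1:], conc[1:])))

# Cmax: highest observed (pooled) concentration.
cmax = max(conc)

def time_above(times, conc, thr):
    """Time above threshold, with linear interpolation at threshold crossings."""
    total = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times, conc), zip(times[1:], conc[1:])):
        if c0 >= thr and c1 >= thr:          # whole segment above threshold
            total += t1 - t0
        elif c0 >= thr or c1 >= thr:         # crossing: interpolated fraction
            total += (max(c0, c1) - thr) / abs(c1 - c0) * (t1 - t0)
    return total

tat = time_above(times, conc, thr=5.0)
print(auc, cmax, tat)  # 33.0 10.0 2.6875
```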
By contrast, population pharmacokinetic-pharmacodynamic methodologies have the potential to overcome most of the aforementioned problems. Whilst the application of modelling in the evaluation of efficacy is widespread and well established across different therapeutic areas [22–24], current practices have undoubtedly hampered the development of similar approaches for the evaluation of adverse events, safety pharmacology and toxicity. In addition to integrating knowledge from a biological and pharmacological perspective, population models provide the basis for the characterisation of different sources of variability, allowing the identification of between-subject and between-occasion variability in parameters [25]. These random effects are not merely statistical descriptors; they can be used for inference about the mechanisms underlying adverse events and toxicity. In fact, recent advances in environmental toxicology have shown the advantages of PBPK/PD modelling as a tool for quantifying target organ concentrations and the dynamic response to arsenic in preclinical species [26].
The aim of this investigation was therefore to assess the relative performance of model-based approaches as compared with the empirical methods currently used to analyse toxicokinetic data. We show that modelling is an iterative process which allows further insight into relevant biological processes as well as into data gaps, providing the basis for experimental protocol optimisation. We illustrate these concepts by exploring a variety of scenarios in which hypothetical drugs with different disposition properties are evaluated.
Using historical reference data from a range of non-steroidal anti-inflammatory compounds for which pharmacokinetic parameter estimates were known in rodents, a model-based approach was used to simulate the outcomes of a 3-month study protocol in which toxicokinetic data for three hypothetical drugs were evaluated. Non-steroidal anti-inflammatory compounds were selected as a paradigm for this analysis because of the mechanisms underlying both short- and long-term adverse events, as well as the evidence for a correlation between drug levels and the incidence of such events in humans. In fact, a relationship has been identified between the degree of inhibition of cyclooxygenase at the maximum plasma concentration (Cmax) of individual non-steroidal anti-inflammatory drugs and the relative risk (RR) of upper gastrointestinal bleeding/perforation [27].
The impact of differences in drug disposition on the bias and precision of typical measures of systemic exposure was explored in three scenarios: one-compartment pharmacokinetics with linear elimination, one-compartment pharmacokinetics with nonlinear (Michaelis–Menten) elimination, and two-compartment pharmacokinetics. Parameter values for each scenario are shown in Table 1. In all simulation scenarios, residual variability was set to 15 %. For the purposes of this exercise, we have assumed that the models used as reference show no misspecification. In addition, we have considered a homogeneous population of rodents, avoiding the need to explore covariate relationships in any of the models.
Table 1 Pharmacokinetic models used to assess the impact of varying disposition properties on the estimation of safety thresholds
Model A: One-compartment model (1 CMT)

| Parameter | Pop estimate | BSV (%) |
|---|---|---|
| KA (h⁻¹) | 13.46 | 50 |
| V (ml/kg) | 49.4 | 16 |
| CL (ml/h) | 2.72 | 20 |
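A simulation of Model A along these lines can be sketched as follows. The population parameters are those of Table 1; the dose, sampling times and number of animals are hypothetical, and the reported BSV coefficient of variation is treated as the standard deviation on the log scale (a common approximation for log-normally distributed parameters):

```python
import math
import random

# Population parameters for Model A (Table 1).
KA, V, CL = 13.46, 49.4, 2.72              # h^-1, ml/kg, ml/h
BSV = {"KA": 0.50, "V": 0.16, "CL": 0.20}  # CV used as log-scale SD (approximation)
RES = 0.15                                 # 15 % proportional residual variability
DOSE = 1000.0                              # hypothetical dose (amount/kg)

def simulate_subject(times, rng):
    # Log-normal between-subject variability on each parameter.
    ka = KA * math.exp(rng.gauss(0.0, BSV["KA"]))
    v  = V  * math.exp(rng.gauss(0.0, BSV["V"]))
    cl = CL * math.exp(rng.gauss(0.0, BSV["CL"]))
    ke = cl / v
    profile = []
    for t in times:
        # One-compartment model, first-order absorption (Bateman function).
        c = DOSE * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))
        c *= 1.0 + rng.gauss(0.0, RES)     # proportional residual error
        profile.append(max(c, 0.0))
    return profile

rng = random.Random(1)                     # seeded for reproducibility
times = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0]  # h, hypothetical sampling scheme
profiles = [simulate_subject(times, rng) for _ in range(10)]
```

In this setup the "true" typical exposure is known analytically (e.g., AUC = DOSE/CL for the typical subject), so summary measures recovered by non-compartmental analysis or mixed effects estimation can be compared against reference values, as done in this study.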