Speeding the Iterative Drug Design Process

In this paper, we explore issues in the effective use of modern pharmaceutical data mining applications. We discuss the development and capabilities built into the Discovery 360° application to address these issues as part of Wyeth Pharmaceuticals’ Next Generation IT project (NextGen). Wyeth found that simplifying data access and integrating data analysis and collaboration tools improves both the quality and effectiveness of discovery research and best leverages the intellectual assets of the scientific team.

The Benefits of Modeling and Simulation in Drug Development

Modeling and simulation (M&S) can influence every phase of the drug development process, including the commercial decision of whether to bring a specific drug to market at all. One of its most important contributions is that it carries knowledge and wisdom forward from one phase to the next, and from one indication to the next, in terms of both successes and failures. M&S transforms data into information and information into knowledge.

Acknowledging the increasingly high costs and inefficiencies in the drug development process, the US Food and Drug Administration (FDA) launched the Critical Path Initiative (CPI) in 2004 to drive innovation in the scientific processes through which medical products are developed, evaluated, and manufactured. Specifically, the CPI called for an aggressive and collaborative effort to create a new generation of predictive tools, such as M&S, for informing crucial drug development decisions. Issues that the FDA wanted to address included first-in-human (FIH) dosing, better understanding of safety and efficacy, linking biomarkers to outcomes, optimizing trial design, and addressing special populations such as pediatrics.

Learning From Failure: Leveraging Modeling and Simulation for Pediatric Drug Development Success

Historically, most pediatric medications have not been examined in clinical trials, owing to the ethical and practical challenges of testing drugs in children. Children make up 40 percent of the world’s population, yet regulatory agencies have approved only 10 percent of the drugs on the market for pediatric use.1 Without a proper clinical evidence base, pediatricians are left with imprecise dosing and therapeutic approaches, and off-label, experiential prescribing continues.
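
One concrete example of how modeling can inform pediatric dosing when dedicated trial data are lacking is allometric scaling of clearance. The sketch below is illustrative only; the adult clearance value and body weights are hypothetical, and real pediatric M&S adds maturation and other covariates on top of this.

```python
# A minimal sketch (hypothetical adult values) of allometric scaling, one
# widely used modeling technique for projecting pediatric clearance, and
# hence dose, from adult data when dedicated pediatric trials are lacking.
def pediatric_clearance(adult_cl_l_per_h: float, weight_kg: float) -> float:
    # Standard allometric exponent of 0.75 on body weight, 70 kg reference adult.
    return adult_cl_l_per_h * (weight_kg / 70.0) ** 0.75

for weight in (10, 20, 40):  # example pediatric body weights in kg
    print(f"{weight:>2} kg child: CL ~= {pediatric_clearance(20.0, weight):.1f} L/h")
```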

To address this urgent need, both the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) now require pediatric trial plans for new drugs, known respectively as the Pediatric Study Plan (PSP) and the Pediatric Investigation Plan (PIP). Together with the Best Pharmaceuticals for Children Act (BPCA) and the Pediatric Research Equity Act (PREA), these regulatory requirements are moving the pendulum towards safer, more effective medicines for children. Between 2007 and 2013, 469 pediatric studies were completed under BPCA and PREA, and by August 2014, 526 labeling changes had been made.2 Since 2007, around 300 products have had label changes approved for pediatric safety, efficacy, or dosing in the European Union.2

PBPK Modeling in Regulatory Review, Product Labeling and Safety Monitoring

Physiologically-based pharmacokinetic (PBPK) modeling can address various questions raised in drug development and regulatory review, and is used most extensively to predict and quantify the extent of drug-drug interactions (DDIs) from both in vitro and clinical data. This assists with dose selection and the design of clinical studies as well as informing decisions relating to product labeling. Here we present some recent examples of the role of PBPK modeling and how the Simcyp® Simulator has been used in the regulatory approvals process.

Modeling and simulation was used extensively by Janssen® Pharmaceuticals, Inc. and FDA reviewers in the development of ibrutinib (Imbruvica™), indicated for the treatment of patients with mantle cell lymphoma (MCL). Models built using in vitro data were validated against clinical data on the observed effects of both a strong CYP3A4 inhibitor and a strong inducer on ibrutinib exposure. Simulations then predicted the effects of a moderate CYP3A4 inducer and of strong, moderate, and weak CYP3A4 inhibitors on ibrutinib exposure, as well as the impact of dose staggering and dose adjustment.2
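
The Simcyp Simulator uses full dynamic PBPK models. As a much simpler illustration of how inhibitor exposure translates into a predicted change in victim-drug AUC, the sketch below implements the basic mechanistic static model for reversible enzyme inhibition; all parameter values are hypothetical and are not taken from the ibrutinib review.

```python
# A minimal sketch of the basic static model for reversible enzyme inhibition,
# far simpler than the dynamic PBPK models described above. fm is the fraction
# of victim-drug clearance via the inhibited enzyme; [I] and Ki are inhibitor
# concentration and potency. All values below are hypothetical.
def auc_ratio(fm: float, inhibitor_conc: float, ki: float) -> float:
    # AUC ratio = 1 / (fm / (1 + [I]/Ki) + (1 - fm))
    return 1.0 / (fm / (1.0 + inhibitor_conc / ki) + (1.0 - fm))

# Victim drug ~90% cleared by CYP3A4; strong vs. weak inhibitor:
print(f"strong inhibitor: {auc_ratio(0.9, 5.0, 0.1):.1f}-fold AUC increase")
print(f"weak inhibitor:   {auc_ratio(0.9, 0.1, 1.0):.2f}-fold AUC increase")
```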

Physiologically-based Modeling Supports Drug Development Decisions, Regulatory Interactions and Drug Labeling

Today’s powerful and actively evolving computational tools enable sponsors and regulators to understand potential drug characteristics and subject responses earlier in development, with greater certainty. Model-based approaches support timely, confident decisions across the development and regulatory life cycle by gathering disparate sources of information about a drug, its competitors, target disease and patients into a mathematical knowledge framework. That framework outlines the candidate’s risk-benefit profile and quantifies uncertainties at each stage in development.

In model-based drug development (MBDD), scientists apply these models to explore new chemistries, extrapolate from in vitro properties to in vivo behaviors, and understand sources of variability in dose-exposure and dose-exposure-response relationships—making it possible to predict results for unstudied doses, formulations, populations, concomitant medications, and more (see the sketch after the list below). Drug sponsors and regulators use these tools to:

  • speed discovery of safe and effective new compounds,
  • identify nonviable candidates earlier in development,
  • optimize dosing, sampling schemes and trial designs,
  • anticipate drug interactions and subpopulation effects,
  • evaluate—and often avoid—the need for additional trials.
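
As a concrete instance of the dose-exposure-response prediction mentioned above, a simple Emax model fit to data at studied doses can project the expected response at unstudied ones. The parameters and "studied" doses below are hypothetical.

```python
# A minimal sketch (hypothetical parameters) of a dose-response Emax model,
# the kind of relationship MBDD uses to predict results at unstudied doses.
def emax_response(dose: float, e0: float = 5.0, emax: float = 80.0,
                  ed50: float = 50.0) -> float:
    # E = E0 + Emax * dose / (ED50 + dose)
    return e0 + emax * dose / (ed50 + dose)

for dose in (25, 50, 100, 150):  # suppose only 25 mg and 100 mg were studied
    print(f"{dose:>3} mg -> predicted response {emax_response(dose):.1f}")
```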

Drug Development and Label Optimization Using Proven Biosimulation Methodology

The approved drug label is the official description of a drug product; it includes what the drug is used for, who should take it, side effects, instructions for use, and safety information for clinicians and patients. For drug companies, the label is the culmination of years of work and millions, if not billions, of dollars. Every element, word, and comma in the label affects the potential patient population that can benefit from the new drug, while detailing any associated risks, including remaining “silent” where information is not available. In other words, what is included in or excluded from the label will affect the overall profit potential of the drug.

While biosimulation has been an important element in drug development for some time, its impact over the past two years with regard to labeling has been profound. Specifically, FDA’s acceptance of Physiologically-based Pharmacokinetic (PBPK) modeling and simulation has impacted key label elements in more than twenty cases, driving down R&D costs and timelines, and increasing the likelihood of both clinical trial and regulatory success.

Achieving Tactical and Strategic Benefits of PKS

An accelerating pipeline of candidate drugs is emerging from discovery, waiting for in vivo evaluation of PK before they can be pushed through to later stages of development. The ever-increasing cost of clinical development is forcing more elaborate and intensive evaluations of PK/PD at earlier stages of development. The FDA Critical Path Initiative calls for greater use of PK/PD modeling. As a result, leading companies are turning to technology to improve productivity and leverage scarce scientific talent in PK/PD analysis, modeling, and reporting. This document covers the PK/PD data problem and the benefits of a clinical pharmacology data repository.

Managers of PK/PD data confront three growing and critical problems:

  • How to provide high-quality, regulatory-compliant storage and analysis of PK data
  • How to optimize PK/PD modeling and simulation workflows
  • How to speed data transfer between users in different departments or across organizations for faster, more efficient collaboration without compromising compliance

Return on Investment of a Compliant Clinical Data Repository

An accelerating pipeline of drug candidates is emerging from discovery, waiting for efficient and fast evaluation of their pharmacokinetics before they can be pushed through to the next stages of drug development. The ever-increasing cost of clinical studies is forcing more elaborate and intensive evaluations of pharmacokinetic (PK) and pharmacodynamic (PD) characteristics at earlier and earlier stages of development. The FDA Critical Path Initiative has called for more use of PK/PD modeling. As a result, leading companies are turning to technology to improve productivity and leverage scarce scientific talent in PK/PD analysis and reporting. This document highlights the return on investment (ROI) of an online PK/PD study data repository.

Typically, companies without PK/PD data repositories operate in a mixed environment, copying Microsoft® Excel® files into Phoenix WinNonlin and storing data and derived parameters on ad hoc file systems. This makes later archival, retrieval, and integration difficult, and it makes cooperation with external partners inefficient because data cannot be shared easily. Creating compliant final reports requires extensive manual intervention.

Choosing the Right Software for PK/PD Analysis

The methods used to characterize the pharmacokinetics (PK) and pharmacodynamics (PD) of a compound can be inherently complex and sophisticated. PK/PD analysis is a science that requires a mathematical and statistical background, combined with an understanding of biology, pharmacology, and physiology. PK/PD analysis guides critical decisions in drug development, such as optimizing the dose, frequency, and duration of exposure, so getting these decisions right is paramount. Selecting the tools for making such decisions is equally important. Fortunately, PK/PD analysis software has evolved greatly in recent years, allowing users to focus on analysis rather than on algorithms and programming languages.

Nobel laureate Barbara McClintock famously said, “They were so intent on making everything numerical that they frequently missed seeing what was there to be seen” (Gabrielsson and Weiner, 2000). This paper focuses on key considerations when selecting a software package for PK/PD analysis: one that demystifies the art and science of mathematical modeling and allows a scientist to “see what needs to be seen.” A robust software solution should be easy to use and address the three main parts of the PK/PD workflow: data management, analysis, and reporting.

Introducing Population Pharmacokinetic Analysis Into Your Early Drug Development Efforts

Population pharmacokinetic analysis has become a key tool for clinical pharmacology experts working with data from human subjects. In the recent past, new drug registrations relied on pharmacokinetic information from healthy volunteers, in whom intensive PK sampling could be performed. Population pharmacokinetic techniques were developed to examine possible dosage adjustments for patients and other subgroups (e.g., the elderly, children, and individuals with compromised liver function). Clinical researchers began to use these techniques in therapeutic drug monitoring and during the drug development process. Population pharmacokinetic techniques can accommodate the sparse blood sampling designs common to therapeutic treatment settings and large Phase 3 clinical trials. Further development of population pharmacokinetic techniques has focused on trial simulation and optimization, as well as on supporting dosage recommendations for target patient populations in the absence of dedicated clinical studies. Although many of these applications come late in the drug development effort, the principles and techniques are also applicable in early drug development when making the transition from nonclinical studies to first-in-human clinical trials.

Population pharmacokinetics is based on the principle that the concentration-time profile for each subject can be described with a mathematical model. Systemic drug concentrations (C) are a function of time (t) and a set of PK parameters (θ), plus residual error (ε), as shown in Equation 1:

C = f(t, θ) + ε     (Equation 1)
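
To make Equation 1 concrete, the sketch below simulates sparse concentration-time data for a few subjects from a one-compartment oral model, with log-normal between-subject variability on the parameters and additive residual error. All parameter values are hypothetical.

```python
# A minimal sketch of Equation 1: C = f(t, theta) + eps, with f a
# one-compartment oral-absorption model and hypothetical parameter values.
import numpy as np

def f(t, ka, ke, V, dose=100.0):
    # Concentration under first-order absorption (ka) and elimination (ke).
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(1)
t = np.array([0.5, 1, 2, 4, 8, 12, 24])       # sparse sampling times (h)
typical = {"ka": 1.0, "ke": 0.1, "V": 30.0}   # population typical values
for subject in range(3):
    theta = {k: v * rng.lognormal(sigma=0.2) for k, v in typical.items()}
    eps = rng.normal(scale=0.05, size=t.size)  # residual error
    conc = f(t, **theta) + eps
    print(f"subject {subject + 1}: {np.round(conc, 2)}")
```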

Model-based Drug Development for Generic Products

As a generic drug manufacturer, your goal in product development is to demonstrate bioequivalence of your product to the branded product. Often, a straightforward trial-and-error process based on the experience of the formulation team has been successful. As innovators plan ahead to protect their product lines, however, they have made this task more difficult. Whether by utilizing unique and patented delivery technology, or by modifying the dissolution and absorption characteristics of the active ingredient through novel formulation, the innovator may confound the efforts of generic formulators.

These challenges may be overcome with a Model-Based Drug Development approach. The Food and Drug Administration’s (FDA) Center for Drug Evaluation and Research (CDER) has identified in vitro-in vivo correlation (IVIVC) as an opportunity in its Critical Path Initiative for Generic Drugs.

The development of an IVIVC model for steering formulation development decisions and selecting candidate formulations for bioequivalence trials is within the reach of most generics companies. The approach and technical details involved, however, may be unfamiliar to those outside pharmaceutical development circles. Addressing that knowledge gap and providing appropriate tools would go a long way toward speeding generic development and increasing success rates in bioequivalence trials.
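
The core of a Level A IVIVC is a point-to-point relationship between the fraction of drug dissolved in vitro and the fraction absorbed in vivo (e.g., obtained by deconvolution of plasma data). The sketch below fits that relationship as a straight line; the data are invented for illustration.

```python
# A minimal sketch (invented data) of a Level A IVIVC: a linear point-to-point
# relationship between fraction dissolved in vitro and fraction absorbed in vivo.
import numpy as np

frac_dissolved = np.array([0.10, 0.30, 0.55, 0.80, 0.95])  # in vitro
frac_absorbed = np.array([0.08, 0.27, 0.50, 0.76, 0.93])   # in vivo
slope, intercept = np.polyfit(frac_dissolved, frac_absorbed, 1)
residuals = frac_absorbed - (slope * frac_dissolved + intercept)
r2 = 1 - (residuals**2).sum() / ((frac_absorbed - frac_absorbed.mean())**2).sum()
print(f"Fabs = {slope:.2f} * Fdiss + {intercept:.2f}  (R^2 = {r2:.3f})")
```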

QRPEM – A New Standard of Accuracy, Precision, and Efficiency in NLME Population PK/PD Methods

Summary: A new accurate-likelihood EM estimation method, QRPEM (Quasi-Random Parametric Expectation Maximization), has been introduced in the current release of Phoenix NLME. The method belongs to the same general class of parametric EM methods as IMPEM in NONMEM 7, MCPEM in S-ADAPT, and SAEM in MONOLIX, S-ADAPT, and NONMEM 7. QRPEM is distinguished by its use of low-discrepancy (also called ‘quasi-random’) Sobol sequences as the core sampling technique in the expectation step, as opposed to the stochastic Monte Carlo sampling techniques used in the other EM methods. The theoretical best-case accuracy for QR sampling is an error that decays as N^(-1), where N is the number of samples. This represents a significant advantage over the slower N^(-1/2) error decay rate characteristic of stochastic sampling. The problems typically encountered in the population PK/PD NLME domain are characterized by relatively low dimensionality and a high degree of smoothness in the function being sampled. This is known to be the ideal case for applying QR techniques and suggests that the best-case N^(-1) behavior may in fact be achievable.
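
The difference between the two decay rates can be seen in a toy numerical experiment: estimating a smooth, low-dimensional integral with plain Monte Carlo versus scrambled Sobol points (here via SciPy's qmc module). This illustrates the sampling principle only, not Phoenix NLME's internals.

```python
# A toy comparison of Monte Carlo vs. quasi-random (Sobol) sampling error on a
# smooth 2-D integrand with known mean 1.0, the setting the text identifies as
# ideal for QR techniques. Illustrative only.
import numpy as np
from scipy.stats import qmc

def g(x):
    # Smooth test function on [0, 1]^2; each factor integrates to 1.
    return np.prod(np.sin(np.pi * x) * np.pi / 2, axis=1)

rng = np.random.default_rng(0)
for m in (6, 8, 10, 12):                 # N = 2**m samples
    n = 2**m
    mc_err = abs(g(rng.random((n, 2))).mean() - 1)
    qr = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m=m)
    qr_err = abs(g(qr).mean() - 1)
    print(f"N={n:5d}  MC error={mc_err:.1e}  QR error={qr_err:.1e}")
```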

A second distinguishing feature of QRPEM is its use of the SIR (sampling-importance-resampling) algorithm to greatly improve computational efficiency for models where fixed effects cannot be driven by a simple EM update based on the estimated mean and covariance matrix of each subject’s posterior distribution. These include commonly occurring cases such as non-linear covariate models, compound additive-proportional residual error models, standalone fixed effects, and, in general, cases where so-called ‘mu-modeling’ is not or cannot be used to specify all structural parameters. In such cases, a computationally expensive auxiliary log-likelihood optimization is introduced to drive the iterative updating of these fixed effects. The SIR algorithm allows this optimization, which tends to be the largest computational expense in the overall estimation procedure, to be greatly reduced in size and complexity, with consequent large reductions in overall run time.
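
For readers unfamiliar with SIR, the sketch below shows the generic algorithm named in the text (not Phoenix NLME's implementation): draw from a broad proposal distribution, weight each draw by the ratio of target to proposal density, then resample in proportion to the weights.

```python
# A minimal sketch of sampling-importance-resampling (SIR) with a known 1-D
# density standing in for a subject's posterior. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
target = stats.norm(loc=1.0, scale=0.5)      # stand-in "posterior" density
proposal = stats.norm(loc=0.0, scale=2.0)    # broad proposal distribution

draws = proposal.rvs(size=5000, random_state=rng)
weights = target.pdf(draws) / proposal.pdf(draws)   # importance weights
weights /= weights.sum()
resampled = rng.choice(draws, size=1000, replace=True, p=weights)
print(f"resampled mean = {resampled.mean():.3f}, sd = {resampled.std():.3f}")
```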

Streamline CDISC Electronic Data Submissions: Technology Solution for CDISC PK Study Data

Through our collaborations with regulatory agencies and partnerships with major global nonprofits, we at Certara continuously strive to provide up-to-date solutions that help streamline the drug development process for our clients. Beginning at the end of 2016, in a continuing effort towards international harmonization, the US Food and Drug Administration (FDA), the Japan Pharmaceuticals and Medical Devices Agency (PMDA), and other regulatory agencies will require submission of electronic pre-clinical and clinical data in CDISC standard formats. Other agencies, including the European Medicines Agency (EMA), the Korea Ministry of Food and Drug Safety (MFDS), and the China Food and Drug Administration, are also considering the use of CDISC standards.

The Clinical Data Interchange Standards Consortium (CDISC) is a global, multidisciplinary, non-profit organization that has established standards to support the acquisition, exchange, submission, and archiving of pre-clinical and clinical data. Since its formation in 1997, CDISC’s mission has been “to develop and support global, platform-independent data standards that enable information system interoperability, to improve medical research, and related areas of health care”. CDISC provides data standards for the entire clinical trial process, from source through analysis and reporting, culminating in regulatory submission. This includes standards for Trial Design and Protocol Information, CRF and Subject Data, Analysis Datasets, and Regulatory Submissions.
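
For illustration, the sketch below assembles a small slice of an SDTM PC (pharmacokinetic concentrations) domain with pandas. The column names follow the published SDTM standard; the study, subject, and values are invented.

```python
# A minimal, illustrative slice of a CDISC SDTM PC (pharmacokinetic
# concentrations) domain. Column names follow the SDTM standard; the
# records themselves are invented.
import pandas as pd

pc = pd.DataFrame({
    "STUDYID":  ["ABC-101"] * 3,
    "USUBJID":  ["ABC-101-0001"] * 3,
    "PCTESTCD": ["DRUGX"] * 3,             # short name of the analyte
    "PCSTRESN": [0.0, 41.2, 18.7],         # numeric result in standard units
    "PCSTRESU": ["ng/mL"] * 3,
    "PCTPT":    ["Pre-dose", "1 h post-dose", "8 h post-dose"],
})
print(pc.to_string(index=False))
```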


Certara is a CDISC Registered Solutions Provider.

Engaging the Public in the Clinical Research Process

The issue of transparency and disclosure of clinical trial data has grown in importance over the past few years. Clinical trials are essential to developing new therapies for patients, but the individuals who participate in those trials put themselves at risk. In return, the medical community has an ethical obligation to disclose clinical trial information and create transparency around the data. These steps are critical for increasing trust between the public and the industry.

In addition to the ethical mandate for transparency, there is also a business case for treating study volunteers as partners in medical advancement. In surveys of the factors that motivate participation in clinical research, volunteers most frequently cite reasons such as wanting to “learn about their disease” and “feel part of a community.” The overwhelming majority have positive experiences that they might be willing to share with others, except for one pervasive issue: 90% of clinical trial volunteers expect to be told the overall results of their trial, yet most never hear anything back from the sponsor or research center after the last study visit, leaving many wondering whether their participation mattered or was appreciated. The persistent perception that clinical research volunteers are “guinea pigs” rather than people may contribute to low levels of participation in medical research. In fact, the average study has to run for twice its planned duration to meet the enrollment target.1 Increasing the public’s trust in the clinical trial process may help increase participation and retention rates, which should help lower the overall cost of drug development.

Integrating Regulatory Writing and Modeling and Simulation into the Drug Development Process

As the adoption of modeling and simulation for drug development expands, so too does the need to integrate it into the documentation submitted to regulatory agencies. Model-informed drug development (MIDD) includes physiologically-based pharmacokinetic (PBPK) and pharmacokinetic-pharmacodynamic (PK/PD) modeling. This methodology is emerging not just as an internal decision-making aid that reduces expense and time, but also as an ethical imperative to minimize the number of subjects exposed to newly tested therapies.

The use of MIDD to bridge traditional studies with robust modeling that integrates prior knowledge has led to improved predictions, rationale, and justification for formulation, dosing, and study design decisions. There are also recent examples of regulatory agencies accepting it in place of clinical trials for some drug-drug interactions (Shepard et al. 2015) and thorough QT studies (Polak et al. 2015), thus reducing the number of subjects exposed to investigational treatments. Indeed, using MIDD to avoid performing a clinical study can easily save six months to a year and $500K to $1M. There is growing confidence among health authorities that MIDD strengthens traditional approaches by enabling more comprehensive evaluation, minimizing the need for certain studies, and, ultimately, lowering risks to patients. This trend has been confirmed through numerous regulatory guidance documents (US Food and Drug Administration, 2012) and publications (Sinha et al. 2014).

Status of Quantitative Systems Pharmacology Modeling in the Pharmaceutical Industry: A Consortium Survey of the What, When and How

A primary cause of failures in pharmaceutical research and development (R&D) has been attributed to lack of efficacy (Hay et al. 2014), suggesting a lack of understanding of therapeutic target biology and its relevance to disease progression or modulation. Quantitative systems pharmacology (QSP) promises to increase the probability of success in R&D by bridging scientific gaps between disciplines to enable target validation (Sorger et al., 2011). In 2014, a group of pharmaceutical industry representatives from the Simcyp consortium, along with scientists from Certara, initiated discussions on forming a QSP consortium with the objective of developing, validating, and sharing pre-competitive QSP models. The idea of an industry-sponsored QSP consortium managed by Certara was well received by the Simcyp member companies, and here we present the results of a survey conducted recently to assess the QSP landscape in the industry. The survey was sent to all 33 consortium members, and 21 companies returned the questionnaire, a response rate of about 64%. A poster of the results was presented at the New York Academy of Sciences meeting on this topic in May 2015.

Best Practices in Drug Development Modeling and Simulation

The use of modeling and simulation (M&S) in drug development has evolved from a research nicety into a regulatory necessity. Today, M&S is leveraged to some extent across most development programs to understand and optimize key decisions related to safety, efficacy, dosing, special populations, and more. Further, M&S’s share of the typical drug development program is growing as advances in computing power and in our understanding of the biological sciences propel both the need for and the value of the technology.

This paper provides a compilation of best practices to systematically leverage the many benefits of M&S across a drug development program.

Model-based Meta-analysis: An Innovative Methodology Comes of Age

Making the right choices in drug development often means the difference between getting a new medication to patients and seeing it end up in the scrap heap of failed programs. If we are to progress beyond making decisions based on little more than gut feeling, we must rely on evidence to guide us. According to David Sackett and Gordon Guyatt, founders of evidence-based medicine, “medical care and clinical decision making must be based on results (evidence) from empirical quantitative and systematic research.” Drug development decisions are usually made with in-depth quantitative analysis of internal data from the drug candidate and a comprehensive, but less quantitative, review of public data or data from other candidates. While internally generated data is crucial, many important decisions cannot be made with internal data alone.

Model-based meta-analysis (MBMA) integrates internal and external drug development data to inform proprietary commercial and R&D decisions. The insights gained via MBMA support the design of less costly and more precise trials, with an eye toward achieving commercial success for both the drug and the portfolio.
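
A minimal sketch of the random-effects pooling at the heart of any meta-analysis (the DerSimonian-Laird estimator) appears below, using invented effect estimates from three trials. A full MBMA layers dose, time, and covariate models on top of this basic idea.

```python
# A minimal sketch of DerSimonian-Laird random-effects pooling with invented
# per-trial effect estimates; full MBMA extends this with dose, time, and
# covariate models across compounds and trials.
import numpy as np

effects = np.array([0.42, 0.31, 0.55])  # per-trial treatment-effect estimates
se = np.array([0.10, 0.08, 0.15])       # their standard errors

w = 1 / se**2                           # fixed-effect (inverse-variance) weights
fixed = (w * effects).sum() / w.sum()
q = (w * (effects - fixed) ** 2).sum()  # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (q - (len(effects) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1 / (se**2 + tau2)               # random-effects weights
pooled = (w_re * effects).sum() / w_re.sum()
print(f"pooled effect = {pooled:.3f}, between-trial tau^2 = {tau2:.4f}")
```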

The Emergence of Quantitative Systems Toxicology

Quantitative systems toxicology (QST), a subset of systems biology, is the integration of classical toxicology with quantitative analysis of the large networks of molecular and functional changes that occur across multiple levels of biological organization. A goal of QST is to characterize adverse drug reactions (ADRs) by describing modes of action as adverse outcome pathways and perturbed networks, rather than through conventional empirical end points and animal-based testing. Since QST sits at the juncture of systems biology, toxicology, and chemistry, it is important to understand systems biology and the transformational role it plays in how biological systems are investigated.

Drug Asset Evaluation: Increasing “Probability of Success” of a Deal

Partnering and acquiring drug product candidates is an essential part of the pharmaceutical industry. While emerging companies are focused on drug discovery and early development, they often lack the funding or experience to manage the complex, expensive, and time-consuming path to regulatory approval. The go-forward path for these promising drug programs is to work with venture capital, private equity, or established pharma companies.

The process of due diligence enables understanding and quantifying the technical and commercial probabilities of success at each stage of drug development. This informs the relative risk and valuation for each investment.

Read this white paper to learn why Certara—with >200 experts in drug development, clinical pharmacology, and biosimulation—is the right strategic partner to bolster your ability to select the best investment opportunities.
