January 14, 2026

Pharmacokinetic/pharmacodynamic (PK/PD) analyses are vital to modern drug development, yet they remain inefficient. Manual processes, disjointed workflows, and heavy documentation requirements often slow analyses and divert scientists from higher-value work. The result is wasted time, increased risk of errors, and delayed decisions that ripple across development timelines. Certara experts hosted a webinar highlighting where bottlenecks occur most often and how cloud-based and AI-enabled approaches can help reduce their impact.

Data preparation

One major bottleneck is the manual assembly and cleaning of datasets. Building an analysis-ready dataset often takes more time than the analysis itself; in many projects, most of the time budgeted for analysis is actually spent on cleaning and preparing the data.

This process is also error-prone and iterative. As new data arrive or issues are uncovered, teams must repeatedly refine the dataset.

Quality control (QC) of incoming data adds further drag. Ongoing studies frequently send updated datasets, and checking each update for errors or format issues is time-consuming. Automating dataset QC can save substantial time by catching problems early. Otherwise, scientists must manually perform QC and re-formatting, delaying the start of analysis.
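
As an illustration of what that automation can look like, the sketch below runs a fixed set of checks on each incoming dataset transfer. It is a minimal Python example assuming a NONMEM-style dataset layout; the column names and rules are illustrative, not a fixed standard.

```python
import pandas as pd

# Minimal dataset QC sketch; assumes NONMEM-style columns (illustrative only).
REQUIRED = ["ID", "TIME", "DV", "AMT", "EVID"]

def qc_report(path: str) -> list[str]:
    """Return a list of issues found in an incoming dataset update."""
    df = pd.read_csv(path)
    issues = []

    missing = [c for c in REQUIRED if c not in df.columns]
    if missing:
        return [f"missing required columns: {missing}"]

    if df.duplicated(subset=["ID", "TIME", "EVID"]).any():
        issues.append("duplicate ID/TIME/EVID records")
    if (df["TIME"] < 0).any():
        issues.append("negative TIME values")
    if df.loc[df["EVID"] == 0, "DV"].isna().any():
        issues.append("observation rows with missing DV")
    if not df.groupby("ID")["TIME"].apply(lambda t: t.is_monotonic_increasing).all():
        issues.append("TIME not non-decreasing within subject")

    return issues

# Example use on a hypothetical transfer:
# for problem in qc_report("pk_dataset_v3.csv"):
#     print(problem)
```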

Taken together, these inefficiencies mean scientists spend disproportionate time preparing “analysis-ready” datasets rather than analyzing the data.

Fragmented workflows

PK/PD workflows often rely on specialized software, such as LIMS or SAS for data management, Phoenix for non-compartmental analysis (NCA), NONMEM for population modeling, and R for visualization. When these tools aren’t well integrated, workflows become fragmented and inefficient. Teams must perform manual hand-offs between platforms, re-enter or convert data, and manage multiple versions of files.
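
Much of that hand-off work is glue code of the kind sketched below: a hypothetical Python example that converts a SAS export of SDTM-style concentration data into a modeling-ready CSV. The file names and column mapping are assumptions for illustration, not part of the case study that follows.

```python
import pandas as pd

# Hypothetical hand-off script: SAS export -> modeling-ready CSV.
# File names and SDTM-style column names are assumptions for illustration.
raw = pd.read_sas("pc_domain.sas7bdat")

nm = (
    raw.rename(columns={"USUBJID": "ID", "PCTPTNUM": "TIME", "PCSTRESN": "DV"})
       .loc[:, ["ID", "TIME", "DV"]]
       .assign(MDV=lambda d: d["DV"].isna().astype(int))   # flag missing observations
)

# '.' is a common code for missing values in NONMEM-style datasets
nm.to_csv("pk_input.csv", index=False, na_rep=".")
```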

A case study at Charles River Laboratories illustrates this challenge. The company’s seven global sites each operated with their own systems and procedures, resulting in inconsistent methods and no integration across PK analysis workflows. Because different sites even used different software versions, documenting compliance (for example, with 21 CFR Part 11) became extremely time-consuming. Scientists spent excessive time searching for data and proving data provenance across systems. This siloed, non-centralized setup drained resources and introduced risks such as version control mistakes or data loss.

The solution was to implement a unified, cloud-based platform to connect tools and data sources. Centralizing data in a single repository and standardizing software eliminated data handling inefficiencies and gave scientists faster access to their data.

More broadly, effective workflows require integration across computational tools. However, in practice many organizations still struggle with disparate, legacy systems. Fragmentation forces extra manual work and slows the cycle of model building and analysis.

Analysis

Even after data are prepared, inefficiencies during analysis can slow progress. One common challenge with modeling and simulation is the lack of automation in model development and diagnostics. Building a population PK/PD model is often an iterative, labor-intensive process. It can include testing different structures and covariates, running multiple simulations, and reviewing diagnostics, often by manual scripting.
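
To make that iteration concrete, the sketch below shows a single model-comparison step in Python: a base one-compartment model versus a weight-on-clearance alternative, fitted by naive least squares to simulated data and compared by AIC. It is an illustration of the manual scripting pattern, not a replacement for a pharmacometric estimation tool; the model, data, and selection criterion are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy covariate-comparison step (illustrative only): IV bolus, one compartment.
rng = np.random.default_rng(1)
dose = 100.0                                                   # mg
times = np.tile([0.5, 1, 2, 4, 8, 12, 24], 20).astype(float)  # h, 20 subjects
weight = np.repeat(rng.normal(70, 12, 20), 7)                  # kg

# Simulate "observed" concentrations with clearance scaling with body weight
cl_true = 4.0 * (weight / 70) ** 0.75
v_true = 50.0
obs = dose / v_true * np.exp(-cl_true / v_true * times) * np.exp(rng.normal(0, 0.15, times.size))

def base_model(t, cl, v):
    return dose / v * np.exp(-cl / v * t)

def cov_model(X, cl70, v, theta):
    t, wt = X
    cl = cl70 * (wt / 70) ** theta
    return dose / v * np.exp(-cl / v * t)

def aic(resid, k):
    n = resid.size
    return n * np.log(np.sum(resid**2) / n) + 2 * k

p_base, _ = curve_fit(base_model, times, obs, p0=[4, 50])
p_cov, _ = curve_fit(cov_model, (times, weight), obs, p0=[4, 50, 0.75])

print(f"AIC base: {aic(obs - base_model(times, *p_base), 2):.1f}")
print(f"AIC weight-on-CL: {aic(obs - cov_model((times, weight), *p_cov), 3):.1f}")
```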

Emerging techniques are beginning to address this, with AI-driven approaches to accelerate model development and hypothesis generation. Faster model-building reduces bottlenecks and helps teams reach a stable, predictive model more quickly. For example, new hybrid tools that combine machine learning with traditional pharmacometrics have already demonstrated shorter development cycles, highlighting the potential to ease this bottleneck.

TFL generation and reporting

Once analysis is complete, results must be communicated through tables, figures, and listings (TFLs) for reports or regulatory submissions. Producing these outputs is often onerous. Many scientists rely on ad-hoc tools such as Excel or custom scripts to tabulate parameters or create graphs. Each table or figure may require unique formatting and verification.

Because these steps are manual, they are prone to transcription errors. This not only consumes valuable time but also introduces quality and compliance risks if mistakes are carried through to regulatory documents.
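
The sketch below shows the kind of ad-hoc tabulation script described here: a hypothetical Python example that summarizes NCA parameters by dose group as geometric mean (CV%). The dataset, column names, and table number are assumptions, not outputs from any referenced tool.

```python
import numpy as np
import pandas as pd

# Hypothetical ad-hoc TFL script: summarize NCA parameters by dose group.
rng_cmax, rng_auc = np.random.default_rng(0), np.random.default_rng(1)
nca = pd.DataFrame({
    "SUBJID": range(1, 13),
    "DOSE": [10] * 6 + [50] * 6,
    "CMAX": rng_cmax.lognormal(3.0, 0.3, 12),
    "AUCLAST": rng_auc.lognormal(5.5, 0.35, 12),
})

def geo_mean_cv(x):
    """Geometric mean with geometric CV%, a common PK summary."""
    logx = np.log(x)
    gm = np.exp(logx.mean())
    gcv = np.sqrt(np.exp(logx.var(ddof=1)) - 1) * 100
    return f"{gm:.1f} ({gcv:.0f}%)"

table = (
    nca.groupby("DOSE")[["CMAX", "AUCLAST"]]
       .agg(geo_mean_cv)
       .rename(columns={"CMAX": "Cmax (ng/mL)", "AUCLAST": "AUClast (ng*h/mL)"})
)
table.index.name = "Dose (mg)"
table.to_csv("table_14_2_1.csv")   # each such table is then formatted and QC'd by hand
print(table)
```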

Challenges in TFL creation

TFL creation takes too long and relies on too much manual work.

  • Complex studies, tight timelines
    Increasing demands with shrinking windows to deliver.
  • Too many repetitive tasks
    Templates and scripts don’t adapt – users get stuck bridging gaps.
  • Difficulties sharing work
    Inconsistent approaches make it difficult to review someone else’s work.
  • Manual edits increase mistakes
    Manual edits drain time, increase risk and require extensive QC.
  • High training burden
    Teams must learn technical tools instead of focusing on science.
These challenges show up at each stage of the TFL workflow:

  • Create standards
    Varying technical expertise and tool preferences make standards difficult to maintain.
  • Apply standards
    Data reformatting is required because standards do not adapt to study-specific differences.
  • Fine tune
    Adjusting TFLs after creation is challenging due to the complex steps required to ensure accuracy.
  • QC review
    Varied tools and skill sets make review difficult.

The lack of automation in report generation also contributes to reporting lag, which can impact program timelines. Recognizing this inefficiency, new solutions are emerging to automate PK/PD TFL generation. Certara’s Phoenix TFL Studio, for example, streamlines visualization and generates TFLs from source data. This eliminates intermediate data handling and reduces preparation time by up to half, delivering publication-ready outputs that meet FDA and EMA formatting standards. Adoption of automation in this area will let scientists get back to the science and spend less time creating detailed TFLs.

Regulatory documentation

Model-informed drug development operates under strict regulatory standards for traceability and reproducibility. These regulations create a heavy documentation burden. Every step – data sources, data cleaning decisions, model code, model validation, results – must be recorded for audit trails, quality control, and submission dossiers. Tracking all the data, metadata, assumptions, and results in an analysis is tedious, error-prone, and time-consuming. This low-value work pulls scientists away from actual science.

In practice, teams often rely on manual record-keeping. They write model description documents, maintain versioned files, and copy results into Word or PowerPoint. These processes are inefficient and increase the risk of transcription errors or lost information.

The costs become clear when analyses involve multiple updates. Each update can require regenerating tables and narratives for the regulatory report. This forces modelers to manually reconstruct what has changed, why, and with what impact.

Automated audit log capture and workflow standards aim to alleviate this burden by systematically recording each step of the analysis process, reducing errors, and improving knowledge transfer.
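
As a rough illustration of automated audit capture, the sketch below wraps each analysis step in a Python decorator that records when the step ran and a fingerprint of its inputs to a shared log. The log fields and file names are assumptions, not a description of any specific product’s audit trail.

```python
import functools
import hashlib
import json
import time

AUDIT_LOG = "analysis_audit_log.jsonl"   # hypothetical shared log file

def audited(step):
    """Record what ran, when, and a hash of its inputs for each analysis step."""
    @functools.wraps(step)
    def wrapper(*args, **kwargs):
        record = {
            "step": step.__name__,
            "started_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "inputs_sha256": hashlib.sha256(repr((args, kwargs)).encode()).hexdigest(),
        }
        result = step(*args, **kwargs)
        record["status"] = "completed"
        with open(AUDIT_LOG, "a") as fh:
            fh.write(json.dumps(record) + "\n")
        return result
    return wrapper

@audited
def fit_base_model(dataset_path: str) -> str:
    # placeholder for the real estimation call
    return f"results for {dataset_path}"

fit_base_model("pk_dataset_v3.csv")
```

A production version would also capture software versions, output hashes, and user identity, but even this minimal structure replaces ad-hoc note-keeping with a reproducible record.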

Overall, regulatory documentation remains a significant source of inefficiency in PK/PD workflows. Streamlining it through standards and automation is essential to improving both efficiency and compliance.

Communication and collaboration

Effective communication between clinical, statistical, and regulatory teams is critical in pharmacometrics. Yet breakdowns and silos often introduce inefficiencies. One common issue is miscommunication between analysts and data programmers. Requirements for dataset assembly or analysis may be misunderstood, leading to extensive back-and-forth. Each cycle of questions and clarifications wastes time. These exchanges show how poor initial communication can create lengthy rework cycles.

Challenges also extend across functional teams. Analysts must communicate modeling results to clinical and regulatory stakeholders. Yet translating complex scientific outputs into actionable insights is not straightforward.

When communication channels are open and processes are aligned, scientific output reaches decision-makers much faster. Without that infrastructure, organizations risk duplication, delays, and misaligned objectives.

Breaking PK/PD analysis bottlenecks

Scientists spend too much time on non-scientific tasks – assembling datasets, managing files, and formatting reports – rather than on analysis and interpretation.

The operational cost shows up as weeks lost to rework and coordination. The scientific cost is just as high. Every hour scientists spend formatting tables is an hour not spent answering key pharmacological questions. Inefficient workflows ultimately delay dose selection, trial design, and go/no-go decisions, slowing down the development of new therapies.

The response is clear: automation, standardization, and integration. Automated TFL/report generators and standardized documentation frameworks are already cutting reporting times and halving the effort needed for outputs. By addressing pain points in data preparation, toolchain integration, analysis, documentation, and communication, scientists can focus on science and deliver insights faster to guide critical development decisions.

Sources:

González Sales et al., R Journal (2021) – “Assembling Pharmacometric Datasets in R – The puzzle Package” (journal.r-project.org)

Certara Case Study (2023) – “Enhancing Pharmacokinetics Workflows at Charles River” (certara.com)

Certara Phoenix TFL Studio (2025) – Product description of the TFL automation module (certara.com)

Wilkins et al., CPT: PSP (2017) – “Thoughtflow: Workflow Definition to Support MID3” (pure.amsterdamumc.nl)

Favia et al., Commun. Biology (2021) – “QSPcc reduces bottlenecks in model simulations” (nature.com)

Accelerating pharmacokinetic analysis with AI and the cloud

Phoenix Cloud integrates our AWS-hosted solutions for analysis and modeling with cloud-native modules for data management, publication-ready graphing, GenAI-powered reporting, and more.

Learn more about Phoenix Cloud

Fred Mahakian

Senior Director of Product, Certara

Fred Mahakian is a seasoned Senior Director of Product at Certara, leading the portfolio for Pharmacometrics (PMX) software products. With over two decades of experience, he has driven innovation at companies like Oracle, BillGO, and Siebel Systems, spearheading the launch of new products and the scaling of mission-critical systems.

Kristin Follman, PhD

Principal Research Scientist

Dr. Kristin Follman is a Principal Research Scientist at Certara, where she is a member of the team building the TFL module. She obtained a PhD in Pharmaceutical Sciences from the University at Buffalo where her work focused on drug transporters in the treatment of overdose and renal impairment. Prior to joining the software division, Kristin worked as a consultant at Certara for 5 years where her area of expertise was quantitative clinical pharmacology, with a focus on translational PK/PD modeling and simulation.

Schedule a demo of Phoenix Cloud