Pharmacometrics Skills Competition MIDD Gran Prix

Sitting around the conference room table in Certara’s Raleigh office, sketching out a pitch for a pharmacometrics contest at a national meeting, none of us were overly confident in its viability. Mark Lovern had been talking about the importance of communication skills for pharmacometricians: for them to influence critical drug development decisions, they must be able to explain their modeling results in terms that a clinical team can understand. I’m a lifelong academic, not directly involved in drug development, but I agree on the importance of scientific communication. I often have to convince busy physicians to collaborate on my clinical studies and assure the IRB statisticians that I’m not harming participants just because the study outcomes do not involve p-values. Alan Forrest, my UNC-Chapel Hill colleague, and Nathan Teuscher—both teachers at heart, in addition to being outstanding scientists—were on board as well. (Full disclosure: I completed my MS at SUNY Buffalo under Alan’s supervision, and Nathan was the teaching assistant in my pharmaceutics class at the University of Michigan… more years ago than either of us is willing to admit.) We thought we had a great idea and were excited about developing our pitch for a contest that would involve significant pre-conference technical work, with the top teams presenting at the meeting to show off their communication skills.

But would busy professionals and students actually participate, assuming that ASCPT, our chosen pitch recipient, let us run the contest at their meeting? What would the session itself look like, and would people attend? There were many unknowns. We all agreed that it was important to highlight the need for pharmacometricians not only to solve the technical challenges, but also to explain what they did to an interdisciplinary team. Talking in thetas, etas, and objective functions, as we do amongst ourselves, is not the way to convince the clinical team of your recommendations and conclusions. And if we can’t influence the decisions being made, what are we doing, anyway?

ASCPT was thrilled with our idea, which I discovered when I checked my junk folder the day before the proposal notification deadline, as my last step before sending a cranky email their way about the proposal’s status. (Phew, dodged a bullet there.) Sharon Swan, ASCPT Chief Executive Officer, asked us to set up a conference call to discuss the session. After much internal discussion, we christened the contest the “Pharmacometrics Skills Competition Model-Informed Drug Development (MIDD) Gran Prix.” ASCPT offered their considerable resources to help us execute the contest, from hosting the data set and receiving reports to facilitating report scoring and planning the meeting session down to the minute, all through a series of conference calls over the six months before the meeting.

Mark and Alan created the technical problem, with help from Shuhua Hu at Certara in simulating the clinical trial data. Alan gets a certain gleam in his eye when he has the opportunity to get creative with constructing a problem and its corresponding data set. That gleam strikes fear into many of our hearts. The problem was a challenge indeed, incorporating many of Alan’s favorite pet peeves about how modelers handle data: restrictive protein binding, the influence of renal function on clearance, and active metabolites driving an indirect response model.
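For readers who don’t model for a living, an indirect response model describes a drug (or, here, its metabolite) that doesn’t change the clinical response directly, but instead stimulates or inhibits the processes that produce or remove it. A textbook form, shown purely as an illustration and not as the actual competition model, with metabolite concentration Cm(t) stimulating response production, is:

$$\frac{dR}{dt} = k_{\text{in}}\left(1 + \frac{E_{\max}\, C_m(t)}{EC_{50} + C_m(t)}\right) - k_{\text{out}}\, R(t)$$

where k_in and k_out are the zero-order production and first-order loss rates of the response R, and Emax and EC50 describe the metabolite’s stimulatory potency. Layer restrictive protein binding (only unbound drug is active) and renally driven clearance on top of that, and you can see why Alan’s data sets inspire fear.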

Thirty-three teams received the data set; fifteen turned in reports. Thanks to the editorial staff of ASCPT’s PSP journal, the reports were scored in a double-blind fashion to select the teams to present at the meeting. No one who helped plan the session was involved in the report evaluation and scoring process, aside from developing the rubric. ASCPT helped us assemble an all-star cast to be our mock clinical team and session judging panel, consisting of Jill Fiedler-Kelly (Cognigen Corp), Carl Peck (UCSF Center for Drug Development Sciences), Richard Lalonde (retired, formerly Pfizer), Issam Zineh (US Food and Drug Administration), and France Mentré (University of Paris Diderot).

The session, held in the early morning hours of the last day of the meeting, was a lot of fun and very interesting. The four top teams were Team Maryland from the FDA/University of Maryland, Leiden PMX from Leiden University, the Genentech Modelers (GEMS), and the Certara Supermodels. All four analyzed the same data, but they came to slightly different interpretations and made different recommendations for how to move Drug D forward in its development, highlighting just how complex drug development decisions can be.

Three of the four teams wanted to continue clinical development and proposed several different ways to gather more information about the drug in additional Phase 2 studies. Though the specifics differed, they all wanted pharmacokinetic and safety data at higher doses, more information about the mechanism of action, and further exploration of the other factors driving the observed variability in response. The Certara team, however, recommended stopping development, as their analysis indicated that the drug was unlikely to improve upon the standard of care. Dr. Carl Peck, one of the judges, appreciated their willingness to make this suggestion, as such a recommendation is not always in the best interest of the employee making it. (Though in the case of Certara, one could argue that consultants can make such recommendations with a bit more impunity than direct employees of a company with an ailing drug candidate.)

Part of the original idea was to use the data set to probe the strengths and weaknesses of various modeling software packages, so teams were allowed to use any software they felt could support the analysis. Nearly all of the teams used Phoenix WinNonlin for the initial noncompartmental analysis; R for data management, graphics, and simulation; and NONMEM 7 for the population analysis. The winning professional team, the Certara Supermodels, used Phoenix NLME for their population analysis, and another very creative team used closed-form PK, PK/PD, and time-dependent dose-response models within SAS 9.2 for their evaluation. Interestingly, no team recovered the exact model that was simulated to generate the data set, regardless of software. Again, this shows the complexity of pharmacometrics and bears out the adage that “all models are wrong, but some are useful.”

At the session itself, the team representatives did a great job of keeping the conversation focused on the decisions, and on how their models supported those decisions, without getting overly technical. In fact, both trainee teams developed R Shiny-based dosing tools that allowed non-modelers to input the necessary information and obtain a dosing recommendation for a patient. The future is bright, indeed. Way to take advantage of a great tool to facilitate broader applications of pharmacometrics, Team Maryland and Leiden PMX! Our panel of esteemed expert judges asked tough questions that tested the teams’ communication skills. For my part, as the session MC, I channeled my best impersonation of Will Ferrell as “Ricky Bobby” from Talladega Nights (the man is a comic genius, people!) and had more fun at 7 am on a Saturday than I may ever have had.
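To give a flavor of what such a tool looks like, here is a minimal sketch of a Shiny dosing app. The inputs (body weight and creatinine clearance) and the dosing rule are hypothetical placeholders of my own, not either team’s actual model; the point is how little code it takes to put dosing logic in front of a non-modeler.

```r
# Minimal sketch of a Shiny dosing tool, in the spirit of the trainee teams' apps.
# The covariates and the dosing rule below are hypothetical placeholders,
# NOT the teams' actual models.
library(shiny)

ui <- fluidPage(
  titlePanel("Drug D dosing tool (illustrative sketch)"),
  numericInput("wt",   "Body weight (kg)",              value = 70,  min = 30, max = 200),
  numericInput("crcl", "Creatinine clearance (mL/min)", value = 100, min = 10, max = 180),
  textOutput("dose")
)

server <- function(input, output) {
  output$dose <- renderText({
    # Hypothetical exposure-matching rule: scale a 100 mg reference dose by
    # body size and renal function relative to a typical 70 kg patient.
    dose <- 100 * (input$wt / 70) * (input$crcl / 100)
    sprintf("Recommended dose: %.0f mg once daily", dose)
  })
}

shinyApp(ui = ui, server = server)
```

A clinician never sees a theta or an eta; they see two fields and a number.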

We plan to make the Pharmacometrics Skills Competition an annual ASCPT event, perhaps highlighting different aspects of clinical pharmacology in subsequent years. You know the pharmacogenomics people want in on the fun! Over the summer, the planning group will reconvene to design the next iteration for the 120th ASCPT Annual Meeting, to be held in Washington, DC in March 2019.

Want to learn more about communication skills for pharmacometricians? Take a look at this webinar by Drs. Peter Bonate and Stacey Tannenbaum.


About the author

By: Julie Dumond