October 23, 2025
Ready to start using GenAI in your medical and regulatory writing? Watch this webinar for everything you need to know.
Data Science and AI Client Solutions Architect
Ian Kerman is a Data Science and AI Client Solutions Architect at Certara, where he leads initiatives at the intersection of artificial intelligence, data science, and life sciences. With over 15 years of industry experience, Ian brings deep expertise in machine learning, MLOps, and scientific informatics, helping life sciences organizations translate complex data into actionable insights. At Certara, he spearheads advanced R&D efforts in large language models, user experience design, and integrated biological and chemical data knowledge systems.
Before joining Certara, Ian held leadership roles at LabVoice and BIOVIA (Dassault Systèmes), where he led cross-functional teams to deliver AI-powered solutions, voice-enabled lab assistants, and custom data platforms for pharma and biotech customers. His work has contributed to innovations in computational drug discovery, antibody developability prediction, and laboratory automation.
Ian is also an experienced educator and advocate for scientific collaboration. He has developed and delivered technical training programs, mentored students on AI-focused research projects, and co-founded the Data Science and AI Topical Interest Group with the Society for Laboratory Automation and Screening (SLAS). A frequent speaker at industry conferences, Ian combines technical depth with a passion for advancing AI in the life sciences.
Ian earned an M.S. in Computer Science, focusing on Machine Learning, from the Georgia Institute of Technology and an M.S. in Biology, alongside undergraduate degrees in Bioinformatics and Molecular Biology from the University of California, San Diego.
FAQs
What are the biggest risks when using AI prompts in life sciences?
Because outputs may influence research, clinical decisions, or regulatory submissions, the main risks are hallucinations (incorrect claims), bias (misrepresentation of under-studied populations), and safety/regulatory noncompliance. Use constrained prompts, require citations, and set guardrails (e.g. “If uncertain, respond ‘I don’t know’” or “If nothing is found, respond with ‘None’”) to mitigate risk.
When AI is used in these and other critical areas, it is also essential that a human reviewer always checks and verifies the AI's output.
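As a minimal sketch of what such guardrails can look like in practice (assuming the OpenAI Python SDK; the model name, prompt wording, and drug name are illustrative placeholders, not a recommended configuration):

```python
# Minimal guardrailed prompt sketch, assuming the OpenAI Python SDK (v1+).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Guardrails live in the system prompt: require citations and give the
# model an explicit fallback instead of letting it guess.
system_prompt = (
    "You are a medical writing assistant. "
    "Cite a source for every factual claim. "
    "If you are uncertain, respond with 'I don't know'. "
    "If no relevant information is found, respond with 'None'."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    temperature=0,   # reduce run-to-run variation
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Summarize the known safety profile of Drug X."},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a human reviewer must still verify this output
```

Setting the temperature to 0 also reduces run-to-run variation, which makes human review and downstream prompt testing more tractable.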
How do I evaluate if my prompt is “good enough” in scientific tasks?
You can assess prompts using metrics such as accuracy (correctness against a reference), consistency (stable outputs across repeated runs), reproducibility, and error rate (e.g. the frequency of hallucinations). For critical tasks, maintain a “prompt testing dataset” of known inputs and expected outputs so you can compare performance whenever the prompt or the underlying model changes.
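A minimal harness over such a dataset might look like the sketch below; ask_model() is a hypothetical wrapper around whatever LLM call your team uses, and the study names and expected answers are illustrative placeholders:

```python
# Minimal prompt-testing harness sketch over a "prompt testing dataset".
from collections import Counter

def ask_model(prompt: str) -> str:
    # Hypothetical wrapper around your LLM call; replaced here with a
    # canned answer so the harness itself runs end to end.
    return "Phase 2"

# Known inputs paired with expected outputs (the prompt testing dataset).
test_cases = [
    ("Which phase is study ABC-123 in?", "Phase 2"),
    ("What is the primary endpoint of study ABC-123?", "Overall survival"),
]

N_RUNS = 5  # repeat each case to estimate consistency across runs

for question, expected in test_cases:
    answers = [ask_model(question) for _ in range(N_RUNS)]
    accuracy = sum(a == expected for a in answers) / N_RUNS
    # Consistency here = share of runs that produced the most common answer.
    consistency = Counter(answers).most_common(1)[0][1] / N_RUNS
    print(f"{question!r}: accuracy={accuracy:.0%}, consistency={consistency:.0%}")
```

Re-running this harness after every prompt or model change gives you a like-for-like comparison instead of anecdotal spot checks.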
Can I reuse prompts across life sciences subdomains (e.g. oncology, immunology)?
Yes – with caution. While many prompting principles carry over (clarity, providing examples, adding constraints), you will need to adapt the context, add domain-specific vocabulary, and modify or add constraints (e.g. safety thresholds, biomarker ranges). Always test reused prompts on domain-specific validation cases to confirm they remain reliable.
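One common pattern is a shared base prompt with domain-specific slots, as in the sketch below; the vocabulary and constraints shown are illustrative examples, not validated terminology or thresholds:

```python
# Sketch: one base prompt, adapted per life sciences subdomain.
BASE_PROMPT = (
    "You are a {domain} writing assistant. "
    "Prefer this terminology: {vocabulary}. "
    "Apply these constraints: {constraints}. "
    "If uncertain, respond with 'I don't know'.\n\n"
    "Task: {task}"
)

# Domain-specific vocabulary and constraints (illustrative placeholders).
DOMAINS = {
    "oncology": {
        "vocabulary": "RECIST 1.1, progression-free survival, objective response rate",
        "constraints": "flag any dose above the stated MTD for human review",
    },
    "immunology": {
        "vocabulary": "cytokine, ACR20, immunogenicity",
        "constraints": "flag out-of-range biomarker values for human review",
    },
}

def build_prompt(domain: str, task: str) -> str:
    # Fill the shared template with the subdomain's vocabulary and constraints.
    return BASE_PROMPT.format(domain=domain, task=task, **DOMAINS[domain])

print(build_prompt("oncology", "Draft a results summary from the attached table."))
```

Keeping the shared instructions in one template and isolating the domain-specific pieces makes it easier to validate each subdomain's prompt separately.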
Experience the future of regulatory writing with CoAuthor
Book a no-obligation demo to see how CoAuthor can revolutionize your regulatory writing processes.