B&B Faculty Shein-Chung Chow, PhD publishes book on Quantitative Methods for Traditional Chinese Medicine Development 

Nov. 20, 2015

In recent years, many pharmaceutical companies and clinical research organizations have focused on the development of traditional Chinese (herbal) medicines (TCMs), both as alternative treatments for critical or life-threatening diseases and as pathways to personalized medicine.  Dr. Chow's new book is the first devoted entirely to the design and analysis of TCM development from a Western perspective, i.e., evidence-based clinical research and development.  It not only provides a comprehensive summary of innovative quantitative methods for developing TCMs but also serves as a useful desk reference for principal investigators involved in personalized medicine, connecting the pharmaceutical industry, regulatory agencies, and academia.

The book covers the statistical issues encountered at the various stages of pharmaceutical/clinical development of a TCM.  It explains regulatory requirements; product specifications and standards; and statistical techniques for the evaluation of TCMs, validation of diagnostic procedures, and consistency testing.  It also contains a full chapter of case studies and addresses critical issues in TCM development and frequently asked questions from a regulatory perspective.  The book is published by Chapman and Hall/CRC Press, Taylor & Francis, New York.


AMIA Spotlight:  Jessica Tenenbaum, PhD

Nov. 9, 2015

Dr. Jessica Tenenbaum is featured this week on the American Medical Informatics Association (AMIA) website.  She discusses her choice to work in informatics as it “has been the perfect combination of my passion for biology, and making people’s health better. It also utilizes my love of computer science, programming, and thinking about data and algorithms, and getting to write code.”

Dr. Tenenbaum's research focuses on informatics infrastructure to support translational research and precision medicine.  She serves as the data science lead for the Alzheimer's Disease Metabolomics Consortium (ADMC) and provides domain expertise for the Data Standards Coordinating Center under the NIH's BD2K initiative.  She is also interested in enabling precision medicine through genomic clinical decision support and the incorporation of genomic data into the electronic health record.  Click here to see the full article.

Ross Prentice visits B&B Department

Nov. 5, 2015

The B&B Department was honored to host Ross Prentice, Professor of Biostatistics and former Director of the Public Health Sciences Division at Fred Hutchinson Cancer Research Center, as the fifth annual Distinguished Speaker.

Dr. Prentice gave two lectures.  In the first, "Multivariate Failure Time Data Analysis Methods," he presented a new nonparametric estimator of the bivariate survivor function, with applications to genetic epidemiological studies and group-randomized clinical trials.  His second lecture, "Women's Health Initiative: History, Contributions and On-Going Research," highlighted his impressive career in cancer research and epidemiology.

B&B Faculty Laine Thomas receives 2-year AHRQ grant to study matching methods for comparative effectiveness studies of longitudinal treatments

Nov. 2, 2015

The problem of causal inference from observational data becomes particularly challenging when treatments are initiated over longitudinal follow-up.  Robins and colleagues contributed various methods, including marginal structural models (MSMs) and g-estimation of structural nested models (SNMs).  Matching methods for time-dependent treatments offer a convenient alternative.  In essence, the idea is to create pairs of comparable patients, one of whom initiates treatment at time T and one who does not: both are eligible for treatment at time T and have a similar covariate history, where that history may include the full, time-dependent information available before time T.  This creates a pseudo-experiment for every treated patient.  The pairs can be combined into a matched sample and analyzed using standard methods for matched data.  On the surface, this approach produces clinically intuitive results that look just like those of randomized trials or observational studies of baseline treatment, including hazard ratios and survival functions.  That is a tremendous advantage; however, there are important complexities in how to define comparable and eligible patients over time.  Thus, many variations have been proposed for specific applications, with unique terminology such as sequential stratification matching, propensity score matching with time-dependent covariates, and balanced risk set matching.
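To make the pseudo-experiment idea concrete, here is a minimal sketch of risk-set matching in Python.  It is not any single one of the published algorithms named above: it uses synthetic data and nearest-neighbor matching on a single current covariate as a stand-in for matching on a full covariate history or a time-dependent propensity score, and all variable names are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic long-format data: one row per patient per follow-up interval.
# treated_at = interval at which the patient initiates treatment
# (np.inf if never treated); x = a time-dependent covariate.
n_patients, n_periods = 200, 10
rows = []
for pid in range(n_patients):
    p_never = 0.5
    treated_at = rng.choice(
        [np.inf, *range(1, n_periods)],
        p=[p_never, *[(1 - p_never) / (n_periods - 1)] * (n_periods - 1)],
    )
    x = 0.0
    for t in range(n_periods):
        x += rng.normal()  # covariate evolves over time
        rows.append((pid, t, x, treated_at))
df = pd.DataFrame(rows, columns=["id", "t", "x", "treated_at"])

# Sequential risk-set matching: at each initiation time T, pair every
# patient who starts treatment at T ("case") with a not-yet-treated
# patient who is still at risk at T and has the closest covariate value.
matches = []
used_controls = set()
for T in sorted(df.loc[np.isfinite(df["treated_at"]), "treated_at"].unique()):
    at_risk = df[(df["t"] == T) & (df["treated_at"] >= T)]
    cases = at_risk[at_risk["treated_at"] == T]
    pool = at_risk[(at_risk["treated_at"] > T) & ~at_risk["id"].isin(used_controls)]
    for _, case in cases.iterrows():
        if pool.empty:
            break  # no eligible control left at this time point
        j = (pool["x"] - case["x"]).abs().idxmin()  # nearest neighbor on x
        control_id = int(pool.loc[j, "id"])
        matches.append({"time": T, "case_id": int(case["id"]), "control_id": control_id})
        used_controls.add(control_id)
        pool = pool.drop(index=j)  # match without replacement

matched = pd.DataFrame(matches)
print(matched.head())
print(f"{len(matched)} matched pairs formed")
```

In a real analysis, the matching distance would summarize the covariate history, for example through a time-dependent propensity score, and the resulting matched pairs would then be analyzed with standard methods for matched survival data.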

A researcher seeking to understand and apply these methods will encounter several challenges.  Even within biostatistics and epidemiology, related methods use distinct jargon, so they are not easily found by searching.  In addition, the existing articles are framed around solving a specific problem rather than in terms of generalizable principles and alternatives.  Aim 1 will address this gap by conducting a comprehensive review of longitudinal matching methods in order to identify the relevant alternatives, advantages, disadvantages, and choices that should be considered explicitly in all longitudinal matched analyses.

Statistical methods for time-dependent confounding are underutilized and poorly appreciated in clinical research.  This gap is exemplified by the fact that propensity score matching with time-dependent covariates has been referenced in only one cardiovascular publication, which did not apply the method but recommended it for future studies.  Similarly, a rigorous statistical paper on sequential stratification matching appeared in the Journal of the American Statistical Association in 2009, but does not appear to have been referenced beyond studies conducted by the original authors, their colleagues, and ourselves.  Perhaps the most powerful way to promote the use of statistical methods is to show that they matter in real applications.  In Aim 2, we will use data sources available at the Duke Clinical Research Institute, such as the Framingham Heart Study, to assess the impact of longitudinal matching methods compared to naïve methods and alternative strategies for time-dependent confounding.

Enthusiastic response to the Duke-Industry Statistics Symposium

Oct. 25, 2015

On Oct. 22-23, more than 160 participants from across the country attended the Duke-Industry Statistics Symposium, held at the Trent Semans Center on the Duke campus.

The first day of the event led off with four well-attended half-day short courses.  On Friday, Biostatistics Department Chair Liz Delong and Rene Kubiak, Head of US Statistics at Boehringer Ingelheim, gave welcoming remarks.  Dr. Yi Tsong of the Center for Drug Evaluation and Research (CDER) at the Food and Drug Administration gave the keynote address, "Duality of significance tests and confidence intervals in drug development."

The welcome and keynote were followed by nine sessions offering 30 presentations, plus a poster session and a closing panel discussion.  More than 15 posters were presented at the Symposium.  Prizes for best poster were awarded to Hyang Kim of Parexel and Laine Thomas of the B&B Department.  Two B&B master's students, Tongrong Wang and Meng Chen, also received poster awards.

B&B Faculty Terry Hyslop's interdisciplinary proposal, "Research Colloquia in Big Data," selected for funding

September 21, 2015

The Duke University School of Medicine provides support for interdisciplinary colloquia to bring together basic science, translational and clinical faculty members with common interests in a biomedical problem or area.  Dr. Hyslop’s proposal was chosen for the 2015-2016 cycle.  

Momentum has been building on campus around a T32 application in big data that encompasses the Schools of Medicine, Engineering, and Arts and Sciences.  Forty-four faculty from across these schools have joined the effort to train computational scientists in biostatistics, statistics, computational biology, and electrical and computer engineering.

As PI of the T32 application, Dr. Hyslop will establish bimonthly colloquia to bring together researchers interested in big data, with the goal of establishing partnerships and working groups to develop new grant applications around methodologies that leverage big data, particularly to deepen our understanding of human disease.  Medical partners with an interest in big data from DCI, DCRI, and other appropriate Centers and Institutes will be invited to participate.  This application is the first of its kind to blend computational faculty from across the Medical School and University sides of campus and unite students from these departments in a unique interdisciplinary environment.