Duke CTSA Biostatistics Seminars: Not Just for Statisticians

Jan 27, 2016

The Duke Translational Medicine Institute (DTMI) Biostatistics Core is featured this week on the DTMI website.  The Biostatistics Core is housed in the Department of Biostatistics and Bioinformatics.  The Core is a team of faculty and staff biostatisticians with diverse and extensive experience in conducting a broad range of research projects. It supports an interdisciplinary network of clinical investigators conducting research at Duke by providing expertise in the implementation of statistical methodology. 

It is essential for clinical researchers to understand basic statistical concepts and know how to collaborate with statisticians. The Biostatistics Core offers seminars, workshops, office hours, and training programs focused on developing these interdisciplinary skills.  The Biostatistics Core's monthly applied seminar exposes statisticians and clinicians to new statistical methodology and resources. Presenters include Duke staff and faculty, as well as guests from other universities. Past seminars have included a REDCap demonstration by DOCR and an introduction to survey design by SSRI. Last fall, Dr. Sujit Ghosh, Professor of Statistics at NC State and Deputy Director of SAMSI, presented “A (small) Bag of Bayesian Tools for Biostatisticians.” Dr. Andrzej Kosinski from Duke gave a talk on generalized estimating equations (GEE).

The next seminar will be Tuesday, February 9th, from 3-4pm in Hock Plaza 11025, where Dr. Maren Olsen will present "Association between bariatric surgery and long-term survival in veterans: an illustration of sequential stratification matching." Seminars are typically held the second Tuesday of the month from 3-4pm in Hock Plaza 11025 and are open to anyone in the Duke community.

Masters Students Present to the Biostatistics Core

Jan. 15, 2016

As part of their internships with the Biostatistics Core, second-year students Zinan Chen, Xin Liu, and Victor Poon presented their work from the past semester.  Zinan Chen presented "Unfavorable versus favorable intermediate risk in prostate cancer," Xin Liu discussed "Perspectives on Clergy Health Problems," and Victor Poon examined "Analysis of Pulmonary Hypertension in Patients with Mitral Valve Surgery."

Zinan Chen has been working with Lauren Howard of the Biostatistics Core on Dr. Stephen Freedland's prostate cancer research team. In her presentation, she introduced a project that used survival analysis.  She validated a new classification method for intermediate-risk prostate cancer by comparing cancer outcomes across the proposed groups.  The method allows doctors to group patients based on their baseline characteristics and provide better, more targeted care.

In the study Victor Poon completed, "Analysis of Pulmonary Hypertension in Patients with Mitral Valve Surgery," survival analysis was used to examine differences in expected time to death between patients with and without pulmonary hypertension prior to mitral valve surgery. Preliminary results suggest that patients with pulmonary hypertension before surgery have lower survival than those without.
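Survival comparisons of this kind typically rest on estimating a survival curve in each group. As a hedged illustration only (not the study's actual analysis, and with invented toy data), a minimal Kaplan-Meier estimator can be sketched in Python:

```python
# Minimal Kaplan-Meier sketch. The function and the toy data below
# are illustrative assumptions, not the study's real analysis.

def kaplan_meier(times, events):
    """Return the survival curve as (time, survival probability) steps.

    times  - observed follow-up times
    events - 1 if a death was observed at that time, 0 if censored
    """
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            # Multiply by the conditional probability of surviving past t.
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        # Everyone observed at time t (death or censoring) leaves the risk set.
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

# Toy group: deaths at times 1 and 3, censoring at times 2 and 4.
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0]))  # → [(1, 0.75), (3, 0.375)]
```

Fitting one curve per group (with versus without pre-surgical pulmonary hypertension) and comparing them is the basic shape of the analysis described above; a formal comparison would add something like a log-rank test.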

Xin Liu has been doing her internship as a collaborative biostatistician with the Clergy Health Initiative team, funded by The Duke Endowment and part of the Duke Divinity School. She discussed an observational study based on the Clergy Health Initiative longitudinal survey data collected in 2014. She used hierarchical modelling to evaluate the effect of taking an international Sabbath day on clergy members' physical health, mental health, and spiritual vitality. Her presentation interpreted the findings and described how she handled the challenges of missing data and confounding variables.

All Master of Biostatistics degree candidates complete a practicum, which usually includes an internship.  This allows students to develop their analytic ability, biological knowledge, and communication skills. The practicum is typically completed during the summer after the first year, but can be completed during the second year.

B&B Faculty Shein-Chung Chow, PhD publishes book on Quantitative Methods for Traditional Chinese Medicine Development 

Nov. 20, 2015

In recent years, many pharmaceutical companies and clinical research organizations have been focusing on the development of traditional Chinese (herbal) medicines (TCMs) as alternatives for treating critical or life-threatening diseases and as pathways to personalized medicine.  Dr. Chow’s new book is the first entirely devoted to the design and analysis of TCM development from a Western perspective, i.e., evidence-based clinical research and development. It not only provides a comprehensive summary of innovative quantitative methods for developing TCMs but also serves as a useful desk reference for principal investigators involved in personalized medicine, connecting the pharmaceutical industry, regulatory agencies, and academia.

This book covers all of the statistical issues encountered at various stages of pharmaceutical/clinical development of a TCM. It explains regulatory requirements; product specifications and standards; and various statistical techniques for evaluation of TCMs, validation of diagnostic procedures, and testing consistency. It also contains an entire chapter of case studies and addresses critical issues in TCM development and FAQs from a regulatory perspective.  This book is published by Chapman and Hall/CRC Press, Taylor & Francis, New York.  


AMIA Spotlight:  Jessica Tenenbaum, PhD

Nov. 9, 2015

Dr. Jessica Tenenbaum is featured this week on the American Medical Informatics Association (AMIA) website.  She discusses her choice to work in informatics as it “has been the perfect combination of my passion for biology, and making people’s health better. It also utilizes my love of computer science, programming, and thinking about data and algorithms, and getting to write code.”

Dr. Tenenbaum’s research focuses on informatics infrastructure to support translational research and precision medicine. She serves as the data science lead for the Alzheimer’s Disease Metabolomics Consortium (ADMC) and provides domain expertise for the Data Standards Coordinating Center under the NIH’s BD2K initiative. She is also interested in informatics infrastructure to enable precision medicine through genomic clinical decision support and the incorporation of genomic data in the electronic health record.

Ross Prentice visits B & B Department

Nov. 5, 2015

The B&B Department was honored to host Ross Prentice, Professor of Biostatistics and former Director of the Public Health Sciences Division at the Fred Hutchinson Cancer Research Center, as the fifth annual Distinguished Speaker.

Dr. Prentice gave two lectures. In the first, “Multivariate Failure Time Data Analysis Methods,” he presented a new nonparametric estimator of the bivariate survivor function, with applications to genetic epidemiological studies and group-randomized clinical trials.  His second lecture, “Women's Health Initiative: History, Contributions and On-Going Research,” highlighted his impressive career in cancer research and epidemiology.

B&B Faculty Laine Thomas receives two-year AHRQ grant to study matching methods for comparative effectiveness studies of longitudinal treatments

Nov. 2, 2015

The problem of causal inference from observational data becomes particularly challenging when treatments are initiated over longitudinal follow-up.  Robins and colleagues contributed various methods, including marginal structural models (MSMs) and g-estimation of structural nested models (SNMs).  Matching methods for time-dependent treatments offer a convenient alternative.  In essence, the idea is to create pairs of comparable patients, one of whom initiates treatment at time T and one who does not: both are eligible for treatment at time T and have similar covariate histories, where that history may include the full, time-dependent information available before time T.  This creates a pseudo-experiment for every treated patient.  The pairs can be combined into a matched sample and analyzed using standard methods for matched data.  On the surface, this approach produces clinically intuitive results that look just like those of randomized trials or observational studies of baseline treatment, including hazard ratios and survival functions.  That is a tremendous advantage; however, there are important complexities in how to define comparable and eligible patients over time.  Thus, many variations have been proposed for specific applications, with unique terminology such as sequential stratification matching, propensity score matching with time-dependent covariates, and balanced risk set matching.
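The pairing idea described above can be sketched in a few lines of code. The following is a hypothetical illustration, not the grant's actual algorithm: the patient records, the scalar summary of covariate history, and the caliper rule are all invented for the example.

```python
# Hypothetical sketch of risk-set matching for a time-dependent
# treatment: each treated patient is paired with a patient who was
# still untreated, and hence eligible, at the treatment time.

def risk_set_match(patients, caliper=1.0):
    """Pair each treated patient with an eligible, similar control.

    `patients` is a list of dicts with:
      - 'id': identifier
      - 'treat_time': time treatment started, or None if never treated
      - 'covariate': scalar summary of covariate history (an assumption;
        in practice this would be a time-dependent propensity or history)
    A control is eligible for a patient treated at time T if the control
    is still untreated at T (treat_time is None or later than T).
    """
    matches = []
    used = set()
    treated = [p for p in patients if p['treat_time'] is not None]
    # Match earlier initiators first, when their risk sets are largest.
    for t in sorted(treated, key=lambda p: p['treat_time']):
        T = t['treat_time']
        eligible = [
            c for c in patients
            if c['id'] != t['id'] and c['id'] not in used
            and (c['treat_time'] is None or c['treat_time'] > T)
        ]
        # Closest covariate history within the caliper, if any.
        close = [c for c in eligible
                 if abs(c['covariate'] - t['covariate']) <= caliper]
        if close:
            best = min(close, key=lambda c: abs(c['covariate'] - t['covariate']))
            used.add(best['id'])
            matches.append((t['id'], best['id'], T))
    return matches

patients = [
    {'id': 'A', 'treat_time': 2.0, 'covariate': 5.0},
    {'id': 'B', 'treat_time': None, 'covariate': 5.2},
    {'id': 'C', 'treat_time': 6.0, 'covariate': 4.9},
    {'id': 'D', 'treat_time': None, 'covariate': 9.0},
]
print(risk_set_match(patients))  # → [('A', 'C', 2.0), ('C', 'B', 6.0)]
```

Note that patient C serves as a control for A (being still untreated at time 2) and later appears as a treated patient in its own pair; allowing later-treated patients into earlier risk sets is exactly the kind of design choice, along with eligibility and similarity definitions, that the variations named above handle differently.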

A researcher seeking to understand and apply these methods will encounter various challenges.  Even within the fields of biostatistics and epidemiology, related methods use distinct jargon, so they are not easily found by searching.  In addition, the existing articles are framed around solving a specific problem, not in terms of generalizable principles and alternatives.  Aim 1 will address this gap by conducting a comprehensive review of longitudinal matching methods in order to identify the relevant alternatives, advantages, disadvantages, and choices that should be considered explicitly in all longitudinal matched analyses.

Statistical methods for time-dependent confounding are under-utilized and poorly appreciated in clinical research.  This gap is exemplified by the fact that propensity score matching with time-dependent covariates has been referenced in only one cardiovascular publication, which did not apply the method but recommended it for future studies.  Similarly, a rigorous statistical paper on sequential stratification matching appeared in the Journal of the American Statistical Association in 2009, but does not appear to have been referenced beyond studies conducted by the original authors, their colleagues, and ourselves.  Perhaps the most powerful way to promote the use of statistical methods is to show that they matter in real applications.  In Aim 2, we will use the data sources available at the Duke Clinical Research Institute, such as the Framingham Heart Study, to assess the impact of longitudinal matching methods when compared to naïve methods and alternative strategies for time-dependent confounding.

Enthusiastic response to the Duke-Industry Statistics Symposium

Oct. 25, 2015

On Oct. 22-23, more than 160 participants from across the country attended the Duke-Industry Statistics Symposium, held at the Trent Semans Center on the Duke campus.

The first day of the event led off with four well-attended half-day short courses.  On Friday, Biostatistics Department Chair Liz DeLong and Rene Kubiak, Head of US Statistics at Boehringer Ingelheim, gave welcoming remarks.  Dr. Yi Tsong of the Center for Drug Evaluation and Research (CDER) of the Food and Drug Administration gave the keynote address, "Duality of Significance Tests and Confidence Intervals in Drug Development."

The welcome and keynote were followed by nine sessions offering 30 presentations, a poster session, and a closing panel discussion.  More than 15 posters were presented at the symposium.  Prizes for best poster were awarded to Hyang Kim of Parexel and Laine Thomas of the Biostatistics Department.  Two Biostatistics master's students, Tongrong Wang and Meng Chen, also received poster awards.

B&B Faculty Terry Hyslop’s interdisciplinary proposal, “Research Colloquia in Big Data,” selected for funding

September 21, 2015

The Duke University School of Medicine provides support for interdisciplinary colloquia to bring together basic science, translational and clinical faculty members with common interests in a biomedical problem or area.  Dr. Hyslop’s proposal was chosen for the 2015-2016 cycle.  

Momentum has been built on campus around a T32 application in big data that encompasses the Schools of Medicine, Engineering, and Arts and Sciences.  Forty-four faculty from across these schools have engaged to train computational scientists in biostatistics, statistics, computational biology, and electrical and computer engineering. 

As PI of the T32 application, Dr. Hyslop will establish bi-monthly colloquia to bring together researchers interested in big data with the goal of establishing partnerships and working groups in the development of new grant applications around methodologies leveraging big data, particularly to deepen our understanding of human disease.  Medical partners with interest in big data from DCI, DCRI and other appropriate Centers and Institutes will be invited to participate.  This application is the first of its kind to blend computational faculty from across the Medical School and University sides of campus and unite students from these departments in a unique inter-disciplinary environment.