INCORPORATING GENOMICS INTO LYMPHOMA CLASSIFICATION CAN IMPROVE CARE

By Lynne Lederman

The incorporation of genomics into the classification and treatment decision-making process for lymphomas is possible; however, many obstacles have kept sequencing out of standard practice, according to Sandeep Dave, MD.

Dave, professor of medicine and director of the Cancer Genetics and Genomics Program at Duke University in Durham, North Carolina, discussed genomically informed lymphoma classifications at the Association for Molecular Pathology (AMP) 2020 Annual Meeting and Expo.1 He described the history of genomic discovery, using diffuse large B-cell lymphoma (DLBCL) as an example, and proposed a new paradigm for translation that uses the same genomics for both discovery and clinical testing while overcoming many of the current barriers to incorporating genomics into the clinic.

Describing a patient from a decade ago, he noted that the patient had DLBCL with good risk factors, and was expected to do well with R-CHOP (rituximab [Rituxan] plus cyclophosphamide, doxorubicin, vincristine, and prednisone), a standard-of-care treatment that is still used. Nevertheless, the patient died within 3 months of treatment.

“This is a central conundrum in DLBCL, and many other cancers as well, which is that you can have 2 patients whose tumors are essentially identical under the microscope, yet when you treat them with exactly the same therapy, one patient can go on to complete remission and a cure, whereas the other patient does not respond at all,” Dave said.

Genomic technologies have transformed biologic measurements in cancer in the last couple of decades from single measurements to genome-wide next-generation sequencing (NGS).

This parallels changes in computing over the last several decades, as technology went from mainframe computers occupying whole rooms to supercomputers to smartphones, each step compressing the same computing power into a smaller space. Costs have also decreased as computing power increased, although even that trend is dwarfed by the revolution witnessed in genomic sequencing: the earliest genomes were sequenced at a cost of $100 million and took years, whereas a genome can now be sequenced in a day or so for about $1000.

The output of genomic data from patients has gone from lists of genes that defy interpretation, to color heat maps showing heterogeneous levels of gene expression that Dave called a “random hodge-podge,” which, when rearranged by hierarchical clustering, begin to reveal clearly definable groups of patients.
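
To make the clustering step concrete, the sketch below rearranges a simulated expression matrix with hierarchical clustering so that patient groups become visible. The data, the distance metric (correlation), and the linkage method (average) are illustrative assumptions, not a description of the original analysis.

```python
# Minimal sketch of hierarchical clustering of gene expression data.
# The expression matrix here is simulated, not real patient data.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Simulated log-expression matrix: 50 patients x 200 genes,
# with two hidden patient groups that differ in the first 20 genes.
expr = rng.normal(size=(50, 200))
expr[:25, :20] += 2.0

# Cluster patients using correlation distance and average linkage,
# a common pairing for expression heat maps.
link = linkage(expr, method="average", metric="correlation")
groups = fcluster(link, t=2, criterion="maxclust")
print(groups)  # reordering heat map rows by these labels reveals the 2 groups
```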

An early leading application of genomics in DLBCL, which had been thought to be one disease, was its division into 2 subgroups: activated B-cell (ABC) DLBCL, which has a much worse survival outcome, and germinal center B-cell (GCB) DLBCL.2 There are many other ways to stratify DLBCL, though, including by International Prognostic Index and by CD20 positivity. “I want to point out that the good news is that all of these approaches work, and that is also the bad news, which is that there is not one single way to classify DLBCL. It all boils down to the types of phenotypes you are trying to explore or potentially to individual treatments,” Dave said.

Ultimately, cancer is a genetic disease. Every patient with cancer has 2 genomes: the genome they were born with and the tumor genome, which has acquired events that allow the tumor to develop and function independently of the patient. NGS has emerged as the powerful new tool of choice for examining these events in the tumor.

A fundamental application of NGS is to identify the portions of the tumor genome that do not match the reference genome and that represent new mutations associated with tumor development and survival. Dave’s group looked at over 1000 patients with DLBCL and showed that a number of recurrent mutations characterize the disease and that no 2 patients take the same path to developing DLBCL.3
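
The core idea of flagging positions where the tumor sequence departs from the reference can be shown with a toy example. Real workflows align millions of reads and use dedicated somatic variant callers; the sequences and the mismatch position below are invented purely for illustration.

```python
# Toy illustration of the idea behind somatic variant detection:
# report positions where the tumor sequence disagrees with the reference.
# (Real pipelines operate on aligned sequencing reads, not toy strings.)
reference = "ACGTACGTACGT"
tumor     = "ACGTACGAACGT"  # single substitution at 0-based position 7

variants = [
    (pos, ref_base, tum_base)
    for pos, (ref_base, tum_base) in enumerate(zip(reference, tumor))
    if ref_base != tum_base
]
print(variants)  # [(7, 'T', 'A')]
```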

One approach to connecting genetic events with survival in DLBCL is to combine the mutations into larger subgroups. This has the advantage of revealing, at a broader level, what drives the disease’s heterogeneity while creating larger, more homogeneous groups. Doing this, Dave’s group found that although patients with ABC DLBCL have a worse outcome overall, patients with CREBBP mutations within ABC DLBCL actually do much better than those without, whereas patients with KLHL14 mutations do much worse than other patients with ABC DLBCL. This illustrates the level of complexity that exists beyond cell of origin, arising from the interaction of different biological effects and different mutations.
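
A hedged sketch of how survival might be compared between mutation-defined subgroups is shown below, using the lifelines package on simulated data. The mutant/wild-type split mirrors the CREBBP example above, but the numbers are random and carry no clinical meaning.

```python
# Illustrative comparison of survival between mutation-defined subgroups
# within a cohort, on simulated data (no real patients are modeled here).
# Requires the lifelines package: pip install lifelines
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 100
mutant = rng.integers(0, 2, size=n).astype(bool)            # hypothetical mutation status
time = rng.exponential(scale=np.where(mutant, 60.0, 30.0))  # months, simulated
event = rng.integers(0, 2, size=n).astype(bool)             # True = death observed

kmf = KaplanMeierFitter()
for label, mask in [("mutant", mutant), ("wild-type", ~mutant)]:
    kmf.fit(time[mask], event_observed=event[mask], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

# Log-rank test for a survival difference between the two subgroups.
result = logrank_test(time[mutant], time[~mutant],
                      event_observed_A=event[mutant],
                      event_observed_B=event[~mutant])
print("log-rank p-value:", result.p_value)
```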

“Where is the genomic smartphone in my pocket that replicates the supercomputer of a few decades ago?” Dave commented, noting that with about 400 genes in his institution’s panel, the cost of targeted DNA panel sequencing is actually rising, although this partly reflects other costs, such as development and analysis. Still, he wondered why genomics has not yet supplanted other technologies.

Not all labs, however, have the expertise and infrastructure to do clinical sequencing well. “The typical turnaround time is 2 to 3 weeks, and in terms of diagnostic workup, this is an eternity,” Dave pointed out. Most panels take a year to implement clinically, which means they are outdated or out of sync with research findings that emerge at a faster rate.

It is important to distinguish DLBCL from Burkitt lymphoma (BL); the 2 share clinical characteristics but require different clinical approaches. As a postdoctoral fellow in 2006, Dave published gene expression profiles showing that DLBCL and BL are distinct in their expression of over 200 genes.4 He noted that the paper was cited over 1000 times but applied clinically “exactly zero times, which is disappointing. The question is, why is there widespread acceptance of the findings but so little movement forward in spite of that?”

He pointed to the current paradigm for the translation of clinical assays. Genomics is widely accepted as a discovery method, but it is complex, slow, and expensive. Developing a clinical assay requires transferring markers selected via genomic discovery to a clinically validated method, whether qPCR, immunostaining, NanoString, or another platform, and then validating these assays on new sets of cases. The whole process is slow and requires a daunting amount of work.

“We propose a new paradigm for translation, which is to use the same genomics for both discovery and clinical testing,” Dave said. “When you make these discoveries using genomics you then apply them in patients using the exact same platform, so you know the measurement characteristics from your patients, expression, translocations, copy number, etc., are exactly the same parameters that you would expect to find when you apply them clinically, because these assays are identical.”

Barriers to realizing this paradigm are the same barriers that have slowed the translation of genomics more broadly, and they come down to 3 distinct problems: 1) assays for processing samples are complex; 2) analytical approaches for sequencing data are complex and require highly trained personnel; and 3) data archiving and retrieval remain expensive and challenging.

DNA and RNA assays provide important information, but at the assay level they have distinct workflows that are completely independent of the analysis. “We sought to address these problems in a direct fashion,” Dave said. “Our solution was to generate a unified workflow for sequencing DNA and RNA. Importantly, this has to work with formalin-fixed, paraffin-embedded tissue, which is the worldwide standard for tissue preservation and has benefits in the clinical setting of low cost and maintaining tissue architecture.” The 2 assays are combined into a single user-friendly workflow that goes from tumor sample to sequencing library in 8 hours. Automated quality assurance controls embedded within the software operate throughout the process and do not require the stopping and starting that would slow the process down.
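
As a rough illustration of what embedded, non-blocking quality assurance might look like in software, the hypothetical sketch below records QC flags as samples move through a shared DNA/RNA library-prep path rather than halting the run. The step names, thresholds, and classes are invented for illustration and are not drawn from the actual assay.

```python
# Hypothetical sketch of a unified DNA/RNA library-prep workflow with
# embedded QC checkpoints that flag problems without stopping the run.
from dataclasses import dataclass, field

@dataclass
class Sample:
    sample_id: str
    nucleic_acid: str                 # "DNA" or "RNA"
    yield_ng: float
    qc_flags: list = field(default_factory=list)

def qc_check(sample: Sample, step: str, min_yield_ng: float) -> None:
    """Record a QC flag instead of halting, so the workflow keeps moving."""
    if sample.yield_ng < min_yield_ng:
        sample.qc_flags.append(f"{step}: low yield ({sample.yield_ng} ng)")

def prepare_library(sample: Sample) -> Sample:
    # Shared steps for FFPE-derived DNA and RNA in a single workflow.
    qc_check(sample, "extraction", min_yield_ng=50)
    # ... fragmentation, adapter ligation, and amplification would go here ...
    qc_check(sample, "library", min_yield_ng=10)
    return sample

libraries = [prepare_library(Sample("PT-01", "DNA", 80.0)),
             prepare_library(Sample("PT-01", "RNA", 8.0))]
for lib in libraries:
    print(lib.sample_id, lib.nucleic_acid, lib.qc_flags or "QC passed")
```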

Automating all possible analytical steps and connecting bioinformatic analysis with the assays allows bioinformaticians to do what they were trained to do: find where patients’ genomic differences may make a difference in their care.

The solution from Dave’s lab for managing infrastructure was to move all analytical functions to the cloud. The process is completely scalable, with direct connections to the sequencer, while enforcing the highest levels of HIPAA compliance through data encryption. This should be cost saving in the long run.
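
One way client-side encryption could look is sketched below, encrypting results before any data leave the institution. This is an assumption-laden illustration, not a description of the lab’s actual compliance infrastructure.

```python
# Minimal sketch of encrypting results before upload to cloud storage.
# Illustrative only; requires the cryptography package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # in practice, held by a key-management service
cipher = Fernet(key)

report = b'{"sample_id": "PT-01", "findings": "example placeholder"}'
encrypted = cipher.encrypt(report)  # only ciphertext would leave the institution
print(cipher.decrypt(encrypted) == report)  # True
```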

He noted that an automated process can be used to compare patient data with larger databases, which could provide risk features for the patient based on population data, guide treatment, and determine whether early enrollment in a clinical trial might be appropriate for the patient.

“Genomics is currently an afterthought or the last thing we do for [patients with] lymphoma,” Dave said. Genomics represents the cheapest technology on a “per gene” basis. Every major institution has sequencing technology, most of which is underutilized. The current turnaround times are too long, however, and there is a need for newer approaches for incorporating DNA and RNA sequencing into the diagnostic workup.

He pointed out that genomic data can provide diagnostic as well as prognostic information. “Every clinical trial should attempt to incorporate genomics to understand response patterns,” he stressed.

“Biology remains a humbling pursuit,” Dave concluded. “There is a lot we don’t know. Even as we define genomic subgroups of patients, we see that there are patients that don’t conform to assigned subgroups. We know there are additional factors and it behooves us to measure these factors. The seamless connection of genomic assays, data, and bioinformatics software provides new opportunities for collaboration.”

Story originally published in Targeted Oncology.

