Brion, Eliott
[UCL]
External radiotherapy treats cancer by pointing a source of radiation (either photons or protons) at a patient who is lying on a couch. While it is used in more than half of all cancer patients, this treatment suffers from two major shortcomings. First, the target sometimes receives less radiation dose than prescribed, and healthy organs receive more of it. Although some dose to healthy organs is inevitable (since the beam must enter the body), part of it is due to poor management of anatomical variations during treatment. As a consequence, the tumor can fail to be controlled (possibly leading to decreased quality of life or even death) and secondary cancers can be induced in the healthy organs. Second, the slowness of treatment planning escalates healthcare costs and reduces doctors' face-to-face time with their patients.

Coupled with steady improvement in the quality of the medical images used for treatment planning and monitoring, deep learning promises to offer fast and personalized treatment for all cancer patients sent to radiotherapy. Over the past few years, computation capabilities, as well as digitization and labeling of images, have been increasing rapidly. Deep learning, a brain-inspired statistical model, now has the potential to identify targets and healthy organs on medical images with unprecedented speed and accuracy. This thesis focuses on three aspects: slice interpolation, CBCT transfer, and multi-centric data gathering.

The treatment planning image (called computed tomography, or CT) is volumetric, i.e., it consists of a stack of slices (2D images) of the patient's body. The current radiotherapy workflow requires contouring the target and healthy organs on all slices manually, a time-consuming process. While commercial suites propose fully automated contouring with deep learning, their use for contour propagation remains unexplored. In this thesis, we propose a semi-automated approach to propagate the contours from one slice to another. The medical doctor, therefore, needs to contour only a few slices of the CT, and those contours are automatically propagated to the other slices. This accelerates treatment planning (while maintaining acceptable accuracy) by allowing neural networks to propagate knowledge efficiently.

In radiotherapy, the dose is not delivered at once but in several small doses called fractions. The poorly measured anatomical variation between fractions (e.g., due to bladder and rectal filling and voiding) hampers dose conformity. This can be mitigated with the Cone Beam CT (CBCT), an image acquired before each fraction which can be considered a low-contrast CT. Today, targets and organs at risk can be identified on this image with registration, a model making assumptions about the nature of the anatomical variations between CT and CBCT. However, this method fails when these assumptions are not met (e.g., in the case of large deformations). In contrast, deep learning makes few assumptions. Instead, it is a flexible model that is calibrated on large databases. More specifically, it requires annotated CBCTs for training, and those labels are time-consuming to produce. Fortunately, large databases of contoured CTs exist, since contouring CTs has been part of the workflow for decades. To leverage such databases we propose cross-domain data augmentation, a method for training neural networks to identify targets and healthy organs on CBCT using many annotated CTs and only a few annotated CBCTs.
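The abstract itself contains no code; as a rough illustration of the training setup it describes (one segmentation network fed mixed batches drawn from many annotated CTs and only a few annotated CBCTs), a minimal PyTorch sketch could look like the following. The toy network, tensor shapes, and dataset sizes are illustrative placeholders, not the architecture or data used in the thesis.

```python
# Minimal sketch: one segmentation network trained on mixed batches of many
# annotated CTs and a few annotated CBCTs. All sizes and the toy model are
# placeholders, not the thesis implementation.
import torch
from torch import nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Placeholder 2D slices (1 channel) with binary organ masks.
n_ct, n_cbct, hw = 200, 5, 64                      # many CTs, few CBCTs (toy numbers)
ct_set   = TensorDataset(torch.randn(n_ct, 1, hw, hw),   torch.randint(0, 2, (n_ct, hw, hw)))
cbct_set = TensorDataset(torch.randn(n_cbct, 1, hw, hw), torch.randint(0, 2, (n_cbct, hw, hw)))

# Both modalities feed the same loader, so every batch can mix CT and CBCT slices.
loader = DataLoader(ConcatDataset([ct_set, cbct_set]), batch_size=8, shuffle=True)

# Toy fully convolutional segmenter (stand-in for a U-Net-style model).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                           # 2 classes: background / organ
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(2):                             # short demo loop
    for image, mask in loader:
        optimizer.zero_grad()
        loss = criterion(model(image), mask.long())
        loss.backward()
        optimizer.step()
```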
Since contouring a few CBCTs may already be challenging for some hospitals, we investigate two other methods, domain adversarial networks and intensity-based data augmentation, that do not require any annotations for the CBCTs. All these methods rely on the principle of sharing information between the two image modalities (CT and CBCT).

Finally, training and validating deep neural networks often requires large, multi-centric databases. These are difficult to collect due to technical and legal challenges, as well as inadequate incentives for hospitals to collaborate. To address these issues, we apply TCLearn, a federated Byzantine agreement framework, to our use case. This framework is shown to share knowledge between hospitals efficiently.
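For intuition only, the sketch below shows a generic federated round in which each hospital trains on its private data and only model weights are aggregated; it uses plain federated averaging and does not reproduce TCLearn's actual Byzantine-agreement protocol or model validation steps. The model, dataset shapes, and hyperparameters are hypothetical.

```python
# Minimal sketch of one federated round: each hospital trains locally and only
# model weights (never patient images) leave the site. Plain federated averaging
# is shown for illustration; TCLearn's Byzantine-agreement logic is not included.
import copy
import torch
from torch import nn

def local_update(model, data, steps=10, lr=1e-3):
    """Train a copy of the shared model on one hospital's private data."""
    local = copy.deepcopy(model)
    optimizer = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    images, masks = data
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(local(images), masks).backward()
        optimizer.step()
    return local.state_dict()

def federated_average(state_dicts):
    """Average the parameters returned by the participating hospitals."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# Toy global model and per-hospital datasets (random placeholders).
global_model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 2, 1))
hospitals = [(torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8, 64, 64))) for _ in range(3)]

for round_idx in range(5):
    updates = [local_update(global_model, data) for data in hospitals]
    global_model.load_state_dict(federated_average(updates))
```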
Bibliographic reference | Brion, Eliott. Deep learning for organ segmentation in radiotherapy: federated learning, contour propagation, and domain adaptation. Supervisors: Lee, John; Macq, Benoit
Permanent URL | http://hdl.handle.net/2078.1/244327