1
Ameliorate Immune Suppression During Radiation Therapy to Advanced Stage Lung Cancer via a Predictive Algorithm - Krishni Wijesooriya (1), Cam Nguyen (1), Grant Williams (1), Marco Baessler (1), Constance Hodge (1), John Morgan (1), Ryan Gentzler (1), Nilanga Liyanage (1)
Multiple authors have demonstrated that radiation-induced immune suppression in advanced-stage lung cancer can lead to reductions in overall survival and disease-free survival. We have developed a predictive model to determine patient-specific immune suppression in lung RT. We employed this model to demonstrate the feasibility of proactively reducing immune suppression in a replanning study.
2
A Pipeline for Generating Longitudinal Head and Neck Cancer Radiotherapy Datasets for Machine Learning - Kayla O'sullivan-Steben (1), Odette Rios-Ibacache (1), Luc Galarneau (1) (2), John Kildea (1) (2)
AI and Big Data have promised to revolutionize radiotherapy. However, there is a lack of readily available large-scale datasets needed to train AI models. Therefore, we developed an image processing pipeline that automatically sorts, registers, and preprocesses DICOM images and DICOM-RT files for use in downstream machine learning and analysis tasks. We focused on the use case of head and neck cancer (HNC), where patients often undergo anatomical changes that necessitate replanning. These anatomical changes can potentially be modeled and predicted with sufficient longitudinal imaging data. Our Python-based pipeline was validated on 370 HNC patients, resulting in a comprehensive HNC dataset containing 5,402 CBCTs, 530 planning CTs, and related radiotherapy data. Using this dataset, we can produce various image-related metrics and have begun training deep learning algorithms for HNC replanning predictions. Overall, our pipeline is a useful tool for collecting and utilizing longitudinal real-world radiotherapy datasets for machine learning.
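As an illustration of the sorting step described above, the sketch below groups DICOM files by patient, modality, and study date using pydicom; the folder names are hypothetical and this is not the authors' pipeline code.

```python
# Minimal sketch of a DICOM sorting step (illustrative only, not the authors' pipeline):
# group files by patient, modality (CT, RTSTRUCT, RTDOSE, ...) and study date so that
# CBCTs, planning CTs and RT files can be registered and preprocessed downstream.
import shutil
from pathlib import Path

import pydicom


def sort_dicom(src_dir: str, dst_dir: str) -> None:
    for path in Path(src_dir).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # read header only
        patient = getattr(ds, "PatientID", "unknown")
        modality = getattr(ds, "Modality", "unknown")
        date = getattr(ds, "StudyDate", "00000000")
        out = Path(dst_dir) / patient / f"{date}_{modality}"
        out.mkdir(parents=True, exist_ok=True)
        shutil.copy(path, out / path.name)


# sort_dicom("raw_exports", "sorted")  # hypothetical folder names
```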
3
Comparing Two Methods of Using Toxicity Outcomes to Study Dose-Response Mapping of Bladder and Rectum in Prostate Cancer Patients Undergoing Radiotherapy - Tanuj Puri, Marcel Van Herk (1), Eliana Vasquez Osorio (1), Tiziana Rancati (2), Petra Seibold (3), Eliana Gioscio (2), Andrew Green (4), Andrew P. Morris (5), Jane Shortall (1), Sarah Kerns (6), David Azria (7), Marie-Pierre Farcy-Jacquet (8), Jenny Chang-Claude (3), Alison Dunning (9), Maarten Lambrecht (10), Barbara Avuzzi (11), Dirk De Ruysscher (12), Elena Sperk (13), Christopher Talbot (14), Adam Webb (14), Ana Vega (15), Liv Veldeman (16), Barry Rosenstein (17), Catharine West (1), Ananya Choudhury (1), Peter Hoskin (1), Alan McWilliam (1)
The study aims to assess the differences between two methods of using late toxicities, namely with and without baseline normalization, for identifying high-risk sub-regions on dose-response maps (DRMs) of the bladder and rectum in prostate cancer (PCa) patients undergoing radiotherapy. The choice of toxicity analysis method, with or without baseline normalization, influences the identification of high-risk sub-regions in DRMs of OARs. Subtracting baseline from post-treatment toxicity grades weakens the observed dose-response relationship of the bladder and rectum in prostate cancer patients undergoing radiotherapy. This influence is primarily attributed to variations in the number of events available for the analysis of each toxicity, as other variables were fixed within the DRM methodology. Further efforts are required to enhance the robustness of the methodology, ensuring the generation of more reliable and reproducible results. These results may have implications for other OARs in other cancer types.
4
Demonstration of Current Best Practice for Prognostic Model Validation Using the Manchester Score - Charlie Cunniffe (1), Gareth Price (1), Matthew Sperrin (1), Fiona Blackhall (1) (2)
There is a belief in medical research that predictive and prognostic models could aid with difficult decisions in the clinic. The rise of electronic health records makes it easier to acquire data and results in datasets including patients under-represented in trials and, hence, more difficult to make decisions for. Many models have been developed for this purpose; however, very few have seen clinical use. One reason is the lack of thorough validation during development and in external target populations. Current internal and external validation best practices are demonstrated using the Manchester score. This early small-cell lung cancer (SCLC) prognostic model saw clinical use despite not undergoing validation in the 36 years since its development. The model continues to separate patients by predicted survival, and following updates and validation presented here, it can be used to make 6 and 12-month survival predictions for individual patients with SCLC.
5
A Technological Pipeline for Detecting Radiation-Induced Heart Damage Mechanisms and Preventing Major Cardiac Adverse Events - Alessandro Cicchetti (1), Alessandra Catalano (1), Silvia Meroni (2), Anna Cavallo (2), Albina Allajbej (3), Chiara Veronese (4), Paola Vallerio (4), Claudia Sangalli (3)
The heart is believed to be the most critical normal structure responsible for competing events after RT in locally advanced lung cancer patients. A better knowledge of the tolerance of heart substructures to RT is essential to defining a safe dose escalation protocol and improving patients' overall survival. Here, we combine a retrospective study to determine dose tolerances for the heart substructures through a prediction model of survival-limiting toxicity and a prospective analysis to investigate the pathophysiological process leading to chronic symptoms and major adverse cardiac events. In silico applications will be adopted to (i) design a decision-making tool for advanced RT patient selection and (ii) test the mechanisms of cardiac dysfunction.
6
A Non-Invasive and Self-Managed Microscope for Predicting Radiation-Induced Acute Toxicities in Cancer Patients - Alessandra Catalano (1), Sophie Veronique Materne (1), Fabio Badenchini (2), Eliana La Rocca (3), Eliana Gioscio (1), Francesco Pisani (1), Luca Possenti (1), Barbara Avuzzi (4), Barbara Noris Chioda (4), Carlotta Giandini (4), Riccardo Roy Colciago (4), Maria Carmen De Sanctis (4), Tommaso Giandini (5), Alessandro Cicchetti (1), Tiziana Rancati (1)
Microcirculation is the terminal vascular network of the systemic circulation. Damage to microcirculation caused by radiotherapy (RT) contributes to toxicity development in a manner that has not yet been clarified. This work aims to investigate quantitatively the role of microcirculation in predicting acute toxicities after breast and prostate cancer RT. We enrolled 247 patients and assessed each patient's baseline microvasculature health status (MVHS) before RT using a microscope coupled to the GlycoCheck™ software. The system records videos showing the live movement of red blood cells in the microvessels and computes the MVHS value for the overall health of microcirculation. We fitted a logistic model for the association between MVHS and acute toxicities, and we found a quantitative relationship between microvasculature health status and radio-sensitivity for breast and prostate cancer patients. The information obtained from the sublingual microscope could help personalise predictive models for toxicity and tailor them to the functional status of each patient.
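A minimal sketch of the kind of univariable logistic fit mentioned above, using statsmodels on synthetic data; the variable names and values are placeholders, not the study's data or analysis code.

```python
# Illustrative univariable logistic model of acute toxicity (0/1) on baseline MVHS.
# Synthetic data only; not the study's analysis code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
mvhs = rng.normal(2.0, 0.5, 200)                  # synthetic baseline MVHS values
p_tox = 1.0 / (1.0 + np.exp(-(mvhs - 2.0)))       # assumed true relationship for the demo
toxicity = (rng.random(200) < p_tox).astype(int)  # 1 = acute toxicity observed

X = sm.add_constant(mvhs)                         # intercept + MVHS
result = sm.Logit(toxicity, X).fit(disp=False)
print(result.summary())
print("Odds ratio per unit MVHS:", float(np.exp(result.params[1])))
```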
7
Whole-Body Organ Dose Assessment of Cancer Patients Undergoing External Beam Radiotherapy Using Body-Size-Dependent Phantom Library for Secondary Cancer Risk Prediction - Kyeong Hyeon Sung (1), Bo-Wi Cheon (1), Hyung-Joo Choi (1), Soo Min Lee (1), Saerom Seong (1), Yoon Seo (1), Donghyouk Kim (1), Hyun Joon Choi (2), Chul Hee Min (1)
Intensity-modulated radiotherapy (IMRT), while effective for conformal dose irradiation, may inadvertently expose out-of-field areas to radiation due to scattering and leakage doses from the treatment head, potentially leading to secondary cancer. In this study, we evaluated whole-body organ doses to predict the risk of secondary cancer using body-size-dependent tetrahedral mesh-based computational phantoms (TM phantoms) with our developed TET2DICOM software. TET2DICOM converts TM phantoms into DICOM format, enabling their integration into commercial treatment planning systems (TPS). We utilized the MONACO TPS for whole-body organ dose assessment. All selected TM phantoms, with BMIs similar to those of the patients, were successfully transformed into DICOM format. A comparison of average doses at the target volume between patient CT images and TM phantom CT images revealed an error rate of 1.2% to 2.6%. After verifying the method through this comparative analysis, it was possible to quantitatively calculate the whole-body organ dose.
8
Data Preparation Pipeline for Artificial Intelligence in Radiation Oncology: Lessons From a Large Prospective Imaging Trial - Hannah Thomas (1), Amal Joseph Varghese (1), Manu Mathew (1), Devadhas Devakumar (2), Aparna Irodi (3), Joel T (1), Timothy Peace (1), Henry Finlay Godson (1), Rajesh Isiah (1), Simon Pavamani (1), Balu Krishna Sasidharan (1)
The integration of Artificial Intelligence (AI) into clinical practice demands robust data preparation pipelines, especially in Radiation Oncology where challenges in data quality and access persist. This work presents a bottom-up approach for AI implementation in Indian Radiation Oncology, focusing on a prospective observational trial for predicting locoregional recurrence in head and neck cancer. The data preparation pipeline encompasses patient consenting, clinical data collection, image acquisition, annotation, curation, de-identification, and storage. The workflow, established in 2020, has successfully imaged over 1100 patients. Data preparation involves metadata analysis, automated quality control, and integration with open-source platforms like the Orthanc DICOM server and XNAT. While providing practical insights for data preparation for AI research, the work emphasizes the importance of tailored frameworks for specific healthcare contexts, outlining current practices and future considerations in the evolving landscape of AI in Radiation Oncology.
9
Towards Gaussian Processes Modelling to Study the Late Effects of Radiotherapy in Children and Young Adults With Brain Tumours. - Angela Davey (1), Arthur Leroy (1), Eliana M. Vasquez Osorio (1), Kate Vaughan (1), Peter Clayton (2), Marcel Van Herk (1), Mauricio A Alvarez (1), Martin McCabe (1) (3), Marianne Aznar (1)
Survivors of childhood cancer need life-long monitoring for side-effects from radiotherapy. However, longitudinal data from routine monitoring is often infrequently and irregularly sampled, and subject to inaccuracies. Due to this, measurements are often studied in isolation, or simple relationships (e.g., linear) are used to impute missing timepoints. In this study, we investigated the potential role of Gaussian Processes (GP) modelling to make population-based and individual predictions, using insulin-like growth factor 1 (IGF-1) measurements as a test case. With training data of 23 patients with a median (range) of 4 (1-16) timepoints we identified a trend within the range of literature reported values. In addition, with 8 test cases, individual predictions were made with an average root mean squared error of 31.9 (10.1 – 62.3) ng/ml and 27.4 (0.02 – 66.1) ng/ml for two approaches. GP modelling may overcome limitations of routine longitudinal data and facilitate analysis of late effects of radiotherapy.
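A minimal sketch of GP regression on sparse, irregular timepoints, assuming an RBF-plus-noise kernel and synthetic IGF-1 values; it is not the study's model or data.

```python
# Illustrative Gaussian Process fit to one patient's irregularly sampled IGF-1 values,
# giving a predictive mean and uncertainty at unobserved timepoints. Synthetic data only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.array([0.5, 1.0, 2.5, 4.0, 7.0]).reshape(-1, 1)   # years since RT (synthetic)
igf1 = np.array([210.0, 230.0, 260.0, 240.0, 200.0])     # ng/ml (synthetic)

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=10.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, igf1)

t_new = np.linspace(0.0, 8.0, 50).reshape(-1, 1)
mean, std = gp.predict(t_new, return_std=True)            # predictive mean and std dev
```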
10
Transfer Learning for the Detection of Lung Lesions in 18F FDG PET/CT Images Using GraphNet - Maryam Fallahpoor (1), Biswajeet Pradhan (1)
Early detection of lung cancer relies on advanced imaging techniques such as PET/CT scans, with 18F FDG being one of the most commonly used radiotracers in PET/CT imaging. Deep learning has been demonstrated to yield promising results in automated lung lesion detection. Challenges including data scarcity, computational cost, and the need for robust models capable of detecting small and complex patterns hinder fully training deep learning models for accurate lung lesion detection. To address these challenges, this study employs sixty 18F FDG PET/CT images for small and complex lung lesion detection using transfer learning with a GraphNet model, alongside accurate lung segmentation and co-registration performed using Python-based code for pre-processing. We achieved an accuracy of 82% and a sensitivity of 78.79%, which shows the potential of transfer learning with GraphNet for identifying small lesions in PET/CT scans. This approach may also help to detect radiation-induced complex lesions, offering valuable insights for treatment monitoring.
Keywords: Lung cancer; Deep learning; PET/CT; Transfer learning; GraphNet
11
Explainable Artificial Intelligence for the Prediction of Non-Small Cell Lung Carcinoma Recurrence - Lorna Tu (1) (2), Herve Choi (1) (2), Haley Clark (1) (3), Samantha Lloyd (1) (2)
Despite careful radiotherapy treatment planning, recurrence may occur in patients with non-small cell lung carcinoma (NSCLC). If clinicians can identify patients at high risk of recurrence, treatment plans could be tailored with risk-adapted approaches. Although artificial intelligence (AI) can predict patient outcomes, it often does not explain its underlying decision-making process and therefore is not widely adopted in clinical practice. In this retrospective study, we use explainable AI methods to predict post-stereotactic ablative radiotherapy NSCLC recurrence in 74 patients treated across a Canadian province and interpret the results. The eXtreme Gradient Boosting model outperformed the random forest and Explainable Boosting Machine models, achieving an accuracy of 0.73, F1-score of 0.5, and area under the curve of 0.65. Radiomic features were considered to be the most important features, whereas clinical features were not as important overall. This work shows the utility of explainable AI in the context of recurrence prediction.
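The sketch below illustrates one common way to pair a gradient-boosted classifier with SHAP feature attributions, as a stand-in for the explainability workflow described above; the data are synthetic and this is not the study's code.

```python
# Illustrative: train an XGBoost recurrence classifier and inspect feature importance with SHAP.
# Synthetic features/labels; a stand-in for the radiomic + clinical features in the study.
import numpy as np
import shap
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(74, 20))                                  # 74 patients, 20 features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=74) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

explainer = shap.TreeExplainer(model)                          # per-feature contributions
shap_values = explainer.shap_values(X_te)                      # one row per test patient
```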
12
Optimising Physics-Informed CRNNs for Longitudinal Volumetric Medical Image Data - Denis Page, Eliana M. Vasquez Osorio (1), Robert Chuter (1) (2), Andrew Green (1) (3), Alan McWilliam (1)
Anatomical changes during radiotherapy can make highly conformal treatment plans, calculated on pre-treatment anatomy, sub-optimal or unsafe to deliver. Daily images of patients are compared against the treatment plan to evaluate delivery. Where delivery is deemed unsafe, re-planning will occur, which, in a reactive workflow, can draw finite resources from elsewhere or may cause delays in patient treatment. In this work, we present the first use of a physics-informed convolutional recurrent neural network for longitudinal volumetric medical image data. This work aims to optimise the hyperparameters in our model for medical image data. The best performing model produced anatomical predictions at the next five treatment fractions with a mean squared error of 0.077 on images normalised between 0 and 1.
13
Is Comorbidity Associated With Interruption of Radiotherapy in Patients With HNSCC Cancer? - Golnoosh Motamedi-Ghahfarokhi (1), Azadeh Abravan (1), Eliana M. Vasquez Osorio (1), Marcel Van Herk (1) (2), James Price (2), Hitesh Mistry (1), Gareth Price (1) (2)
In cancer management, timely completion of radiotherapy (RT) is crucial, as interruptions impact overall survival. We investigated whether comorbidity burden is associated with RT completion in a cohort of 1,877 patients with head and neck squamous cell carcinoma (HNSCC) who underwent curative-intent RT with/without chemotherapy. Multinomial logistic regression explored associations between treatment completion categories, comorbidities, and overall comorbidity scores. Patients with moderate and severe overall comorbidity scores had poorer survival than those with no comorbidity. Based on the ΔAIC analysis, accounting for each comorbidity category has the most pronounced effect on predicting treatment completion (ΔAIC > 9). The risk of not completing treatment increased for patients with a moderate comorbidity score and for patients with cardiovascular comorbidities. The risk of late treatment completion increased for patients with both respiratory and cardiovascular comorbidities. We found that comprehensive consideration of each comorbidity category significantly enhanced the accuracy of predicting treatment completion.
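A minimal sketch of a multinomial logistic regression with an AIC comparison between nested models, using statsmodels on synthetic data; the outcome coding and variable names are assumptions, not the study's.

```python
# Illustrative: multinomial logistic regression of treatment-completion category on
# comorbidity burden, comparing models with and without the comorbidity term via AIC.
# Synthetic data; not the study's analysis code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "comorbidity_score": rng.integers(0, 3, n),   # 0 = none, 1 = moderate, 2 = severe
    "age": rng.normal(65, 10, n),
})
df["completion"] = rng.integers(0, 3, n)          # 0 = on time, 1 = late, 2 = not completed

X_full = sm.add_constant(df[["comorbidity_score", "age"]])
X_base = sm.add_constant(df[["age"]])
m_full = sm.MNLogit(df["completion"], X_full).fit(disp=False)
m_base = sm.MNLogit(df["completion"], X_base).fit(disp=False)
print("Delta AIC:", m_base.aic - m_full.aic)      # a large positive value favours the fuller model
```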
14
i-SART: A Prototype AI-Based Tool to Enhance Patient Safety in Radiotherapy - Anastasia Sarchosoglou (1), Natalia Silvis-Cividjian (2), Yijing Zhou (2), Periklis Papavasileiou (1), Athanasios Bakas (1), Evangelos Pappas (1)
In the complex field of radiotherapy (RT), ensuring patient safety is crucial. The implementation of proactive risk management aids RT departments in systematically addressing hazards in their systems. Disseminating information about potential failures has been recognized as paramount to improving patient safety globally. This paper presents i-SART, a novel web application designed to facilitate proactive risk management and share knowledge about potential failure modes (FMs) in a timely and efficient way. Based on the Failure Modes and Effects Analysis methodology, i-SART currently features a database of 419 FMs. The innovation lies in the integration of AI techniques engaging RT professionals in interactive learning by generating discussions about potential failures and safety measures. This is a work in progress and will be released soon for user evaluation. We anticipate i-SART to effectively promote proactive learning in FMs, enhance patient safety, and pave the way for advanced AI-based risk management in RT.
15
Prediction of Acute Pulmonary Toxicity Events With 3D Convolutional Neural Networks From Radiotherapy Dose Maps - Pedro Juan Soto Vega (1), Vincent Bourbonne (2) (3), Wistan Marchadour (4), Gustavo Andrade-Miranda (1), François Lucia (2) (5), Martin Rehn (2), Ulrike Schick (1) (6), Dimitris Visvikis (7), Franck Vermet (8) (9), Mathieu Hatt (1)
Predicting toxicity events in radiation therapy (RT) is highly beneficial for managing patients effectively. Identifying patients who are at high risk of experiencing toxicity early on during their treatments can help in taking measures to reduce the risk of adverse events. Recent works in a related application, namely acute pulmonary toxicity (APT) in lung cancer patients treated by RT, have demonstrated high accuracy in predicting such an event using dose map features processed by a multilayer perceptron network. Thus, motivated by the success of convolutional neural networks (CNN) in learning semantically rich representations directly from images, this work investigates the suitability of CNN architectures for predicting APT directly from dose maps. Our results demonstrate the ability of some CNN models to predict APT from planning dose maps with an area under the receiver operating characteristic curve of up to 81%. However, most of the architectures and configurations under evaluation led to unsatisfactory accuracy, as only shallower architectures using resized dose maps as inputs were able to train models with good accuracy on the testing set.
16
Deep Learning-Based Tumor Shape Variation Prediction of Lung Cancer Patient's Pre-Radiation Treatment Response - Sangwoon Jeong (1), Kyungmi Yang (2) (3), Youngyih Han (2) (3), Hee Chul Park (2) (3), Sang Hee Ahn (2)
Tumors decrease in size and change in shape as radiation therapy progresses. Changes in tumor size are checked during treatment through Cone-beam Computed Tomography (CBCT). We predict CBCT images with deep learning using treatment plan information. A total of 376 data sets were used; each data set consisted of a planning CT (pCT), dose map, RT structure, and CBCT. The study used a modified pix2pix model: the generator takes the pCT, dose map, and RT structure as inputs and outputs a predicted CBCT, while the discriminator distinguishes actual from predicted CBCTs. For the PSNR, SSIM, and FID of the test sets, the averages were 10.03, 0.25, and 359.90, and the best performances were 14.11, 0.36, and 290.80, respectively. This study predicted tumor changes over the course of treatment using treatment plan information. Prediction of tumor changes can be used as a reference for deciding when to change treatment plans.
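As a small illustration of the image-similarity metrics quoted above, the sketch below computes PSNR and SSIM between an actual and a predicted slice using scikit-image; the arrays are synthetic placeholders and the FID computation is not shown.

```python
# Illustrative PSNR/SSIM evaluation of a predicted CBCT slice against the actual one.
# Synthetic arrays; not the study's evaluation code.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
actual = rng.random((256, 256)).astype(np.float32)   # placeholder CBCT slice in [0, 1]
predicted = np.clip(actual + 0.05 * rng.standard_normal((256, 256)).astype(np.float32), 0.0, 1.0)

psnr = peak_signal_noise_ratio(actual, predicted, data_range=1.0)
ssim = structural_similarity(actual, predicted, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```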
17
Deep Learning Based Three-Dimensional Radiation Dose Prediction With Dose Level Guidance - a Spine SBRT Example - Tushar Shinde (1), Gregory Arthur Szalkowski (1), Cynthia Chuang (1), Lei Wang (1), Lianli Liu (1), Scott Soltys (1), Erqi Pollom (1), Elham Rahimy (1), Quoc-Anh Ho (1), Neelan Marianayagam (2), David Park (2), Ulas Yener (2), Antonio Meola (2), Hao Jiang (3), Mingli Chen (4), Zi Yang (1) (4), Weiguo Lu (4), Xuejun Gu (1) (4)
Deep learning models have emerged as promising tools for predicting radiation doses, but their application to spine stereotactic body radiotherapy (SBRT) encounters challenges like critical structure proximity and dose heterogeneity, resulting in sub-optimal clinical accuracy. We introduce an innovative workflow for three-dimensional radiation dose prediction. Our goal is to enhance the accuracy of 3D dose prediction tasks using dose-level guidance. We propose a two-stage approach: a dose level prediction model employing nnU-Net and a voxel-wise dose prediction model utilizing a cascaded 3D U-Net. The first model predicts clinically relevant dose levels, whereas the second incorporates the weighted predicted dose levels as an additional input to generate precise voxel-wise radiation dose predictions. We trained and evaluated these models using a dataset of 102 spine SBRT plans, comparing them to a baseline cascaded 3D U-Net. Our comprehensive evaluation involved metrics such as Dice score, dosimetric indices (D2, D98, and Dmean for planning target volumes (PTVs) and D0.035cc for the spinal cord), as well as clinical indices (conformity index, Paddick conformity index, and gradient index). The results showed superior performance compared to the baseline approach, demonstrating the potential of dose level guidance to boost the performance of radiation dose prediction tasks.
18
Deep Learning for in Vivo EPID Dosimetry Classification: Relating Gamma Analysis and DVH Metrics for Lung Cancer Patients - Femke Vaassen (1), Sebastiaan Nijsten (1), Richard Canters (1), Frank Verhaegen (1), Cecile Wolfs (1)
A deep learning classification model was trained on 2D γ-maps to relate γ-analysis to clinically relevant dose volume histogram (DVH) metrics for in vivo treatment verification using EPID dosimetry. The aim of the model was to classify γ-maps into two categories: ‘error' and ‘no error'. Measured and predicted portal dose images for lung cancer patients were compared per beam using γ-analysis. Ground truth classification was based on DVH metrics from 3D dose recalculations on CBCT. The γ-maps and their corresponding labels were used as input for convolutional neural networks. Results showed that training performance was good but indicated overfitting, as seen by lower performance on the validation set. This study showed that applying deep learning to relate 2D γ-results to DVH metrics has the potential for implementation of decision protocols in clinical practice, but the network architecture implemented here is unable to produce useful models.
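For reference, the standard γ-index definition (Low et al.) underlying the per-beam comparison reads as follows; the exact acceptance criteria used in the study are not restated here.

```latex
% Gamma index at a measured point r_m, comparing against evaluated points r_e,
% with distance-to-agreement criterion \Delta d and dose-difference criterion \Delta D.
\Gamma(\mathbf{r}_e, \mathbf{r}_m) =
  \sqrt{\frac{\lVert \mathbf{r}_e - \mathbf{r}_m \rVert^{2}}{\Delta d^{2}}
      + \frac{\bigl[D_e(\mathbf{r}_e) - D_m(\mathbf{r}_m)\bigr]^{2}}{\Delta D^{2}}},
\qquad
\gamma(\mathbf{r}_m) = \min_{\mathbf{r}_e} \Gamma(\mathbf{r}_e, \mathbf{r}_m),
\qquad \text{pass if } \gamma(\mathbf{r}_m) \le 1 .
```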
19
Can Input Reconstruction Be Used to Directly Estimate Uncertainty of a Dose Prediction U-Net Model? - Margerie Huet Dastarac (1), Dan Nguyen (2), Steve Jiang (2), John Lee (1), Ana Barragan-Montero (1)
Artificial Intelligence is gaining momentum in fields like radiation therapy. However, there is a lack of reliable and easy-to-implement quality assurance tools. To address this problem, we present a direct uncertainty estimation method for dose prediction models using the input reconstruction error. Our method achieves a higher correlation between uncertainty and prediction loss than the state of the art and allows easier identification of out-of-distribution data.
20
Adaptive Radiotherapy Dose Prediction on Head and Neck Cancer Patients With a Multi-Headed U-Net Deep Learning Architecture - Hui-Ju Wang (1), Austen Maniscalco (1), David Sher (1), Mu-Han Lin (1), Steve Jiang (1), Dan Nguyen (1)
Adaptive radiotherapy (ART) optimizes radiation treatment based on the changing anatomy and tumor during the treatment course. Current deep-learning prediction models do not fully utilize pre-existing ART information, which can result in a loss of physician/patient-tailored intents. To address this, we introduce a Multi-Headed U-net (MHU-net) that incorporates intents and compares it to a standard single-headed U-net (SHU-net). The MHU-net was trained to predict adaptive session dose distributions using data from the adaptive session as the primary head input and pre-plan as the secondary head input. We found high agreement between ground truth dose and MHU-net, resulting in a statistically significant reduction in MSE from 0.0048 (SHU-net) to 0.0038 (MHU-net) on 10 test patients (p-val = 0.006). In conclusion, our study demonstrates the feasibility of incorporating a pre-plan as the physician's preference for the patient to tailor the dose prediction further.
21
Investigation of Unsupervised Race-Based Predictive Modeling Strategies to Improve Prediction Power of Tumor Recurrence in Head and Neck Cancer Patients in Diverse Racial Groups - Hassan Bagher-Ebadian (1) (2) (3), Marissa V. Gilbert (1), Farzan Siddiqui (1), Benjamin Movsas (1), Kundan Thind (1) (2) (4), Indrin J. Chetty (3) (5)
Purpose: This study investigates the predictive value of different race-based models constructed from radiomics and clinical factors, to predict tumor recurrence at three years for head/neck cancer patients. Methods: Two hundred and two patients (78 African-American and 124 Caucasian), treated with definitive radiation/chemoradiation (70 Gy, 35 fractions), were studied. A five-fold nested cross-validation (NCV) technique was used to evaluate forty-five unsupervised Kohonen Self-Organizing Maps (KSOMs; topology 5×5; five folds for nine different models corresponding to different combined cohorts and information modalities) constructed from twenty-six one-hot-encoded clinical factors and 441 radiomics features extracted from the patients' gross tumor volume on planning CT images. Results: NCV-based unsupervised predictive performances for the African-American and Caucasian-based models were 0.712±0.048 and 0.633±0.066 for clinical factors, 0.736±0.069 and 0.678±0.074 for radiomics, and 0.756±0.088 and 0.693±0.093 for combined radiomics and clinical factors. Conclusion: Although radiomics can serve as a useful tool for distinguishing differences in outcome between diverse patient groups, we note this study does not offer causal associations.
22
Prediction of Stress in Radiation Therapy Patients Using Artificial Intelligence and Biological Signals - Sangwoon Jeong (1), Hongryull Pyo (2) (3), Won Park (2) (3), Youngyih Han (2) (3)
Patients undergoing radiation therapy can experience stress because of fear of the treatment. Stress causes muscles to contract, which can reduce the accuracy of the patient setup. In this study, we used biological signals to identify stress during radiation therapy and artificial intelligence to predict it. Stress was calculated by analyzing biological signals measured before and during radiation therapy. We compared various artificial intelligence models to determine which were best optimized for stress prediction. Our findings indicate that over 90% of patients experience stress during treatment and that artificial intelligence can predict this stress with over 80% accuracy. This study is pivotal for identifying patients requiring stress reduction before therapy, potentially enhancing the precision of cancer radiation therapy.
23
Robust Models for Esophageal Toxicity Prediction From Radiation Dose Maps - Matthew Gil (1) (2), William Nailon (2) (3), Stephen Harrow (2), Paul Murray (1), Stephen Marshall (1)
Esophageal toxicity is a common group of adverse events caused by radiotherapy treatment in the chest region. Previous work has shown that dose-based prediction models are possible, but there has been a lack of testing of the robustness of these models. We used patient dose information and outcomes (N=397) from the RTOG-0617 clinical trial to create esophageal toxicity prediction models using an ANN and LSBoost, achieving AUCs of 0.707 and 0.691 for the prediction of grade ≥ 3 and grade ≥ 2 esophageal toxicity, respectively. An MAE of 0.884 was achieved when predicting exact toxicity grades. The robustness of the models was evaluated by applying random noise to the test sets and observing the degradation of performance. Methods to improve robustness, including L2 regularisation, SMOTE, random noise-based data augmentation, and ensemble model approaches, were shown to be effective for different models.
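The sketch below illustrates the noise-based robustness check described above with a generic scikit-learn classifier on synthetic features; it is a stand-in for the ANN/LSBoost models and trial data used in the study.

```python
# Illustrative robustness check: perturb test features with increasing Gaussian noise
# and track the degradation of the toxicity-prediction AUC. Synthetic data only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(397, 30))                          # e.g. dose-map derived features
y = (X[:, 0] + rng.normal(size=397) > 0).astype(int)    # synthetic toxicity labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
for sigma in [0.0, 0.1, 0.2, 0.5]:
    X_noisy = X_te + rng.normal(scale=sigma, size=X_te.shape)
    auc = roc_auc_score(y_te, model.predict_proba(X_noisy)[:, 1])
    print(f"noise sigma = {sigma:.1f}  AUC = {auc:.3f}")
```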
24
Habitat Imaging Based on Voxel-Wise GLCM Joint Energy Clustering: Prediction of Disease-Free Survival in Localized Soft-Tissue Sarcoma. - Benoît Allignet (1) (2), Benjamin Leporq (2), Amine Bouhamama (1), Frank Pilleul (1), Alexandra Meurgey (1), Vaz Gualter (1), Marie Pierre Sunyach (1), Waisse Waissi (1), Olivier Beuf (2)
We evaluated whether the disease-free survival (DFS) of localized soft-tissue sarcoma (STS) patients can be predicted by tumor habitats based on a single voxel-wise radiomics map. The GLCM joint energy map was calculated on fat-suppressed T1-weighted contrast-enhanced MRI performed before neoadjuvant radiotherapy. Habitats were individualized using the k-means clustering method. The median relative volume of habitat 1 (RVH1, reflecting high heterogeneity) was 56.1% and was used to dichotomize the sample. Between 2010 and 2020, 149 soft-tissue sarcoma patients were retrospectively included. In this real-life cohort, the RVH1 group was independent of MRI acquisition parameters. After a median follow-up of 46.3 months, 4-year DFS was significantly shorter in the RVH1-high group compared to the RVH1-low group (52.5% vs 71.5%, p=0.029). In multivariate analysis, RVH1-high and FNCLCC grade 3 remained significantly correlated with shorter DFS (both HR=1.8; p=0.046 and p=0.033, respectively). Habitat imaging based on voxel-wise radiomics may be a valuable tool for evaluating intra-tumor heterogeneity to predict prognosis and/or to personalize treatment.
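A simplified, patch-wise sketch of the habitat idea (GLCM energy followed by k-means clustering) on a synthetic 2D slice; the study computes a voxel-wise 3D map, so this is only conceptually related, not the authors' implementation.

```python
# Illustrative: compute GLCM energy per image patch and cluster patches into two habitats.
# Synthetic 2D slice and patch grid for brevity; the paper uses a voxel-wise 3D map.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
slice_img = (rng.random((128, 128)) * 63).astype(np.uint8)   # quantized MRI slice (64 levels)
patch = 16
energy = []
for i in range(0, 128, patch):
    for j in range(0, 128, patch):
        glcm = graycomatrix(slice_img[i:i + patch, j:j + patch],
                            distances=[1], angles=[0], levels=64,
                            symmetric=True, normed=True)
        energy.append(graycoprops(glcm, "energy")[0, 0])      # GLCM (joint) energy

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.array(energy).reshape(-1, 1))                          # habitat assignment per patch
```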
25
Improving Dose Volume Histogram (DVH) Based Analysis of Clinical Outcomes Using Modern Statistical Techniques. A Systematic Answer to Multiple Comparisons Concerns - Mirek Fatyga (1), Jiuyun Hu (2), Jing Li (3)
Dose Volume Histogram (DVH) indices remain the backbone of treatment plan optimization in radiation therapy (RT). Optimization constraints are derived from clinical outcomes studies which search for DVH indices that are predictors of toxicity. Conventional studies treat DVH indices as independent candidate predictors, selecting indices that meet a fixed statistical significance threshold. This process leads to inconsistent results among studies and multiple comparisons concerns. We present a more systematic approach to toxicity modelling, using a knowledge-constrained Generalized Linear Model (KC-GLM). This approach identifies one or more dose thresholds beyond which DVH indices become truly predictive and generates Normal Tissue Complication Probability (NTCP) models which are more interpretable and likely more generalizable. We propose a hybrid model for clinical use, in which a single-index NTCP model in the predictive dose range establishes optimization constraints, with the full model used for final NTCP verification.
26
A Deep Learning-Driven Framework for Automatic Target Volume Localization and Delineation in Spinal Metastases Stereotactic Body Radiotherapy - Zi Yang (1), Mahdieh Kazemimoghadam (2), Gregory Szalkowski (1), Cynthia Chuang (1), Lei Wang (1), Lianli Liu (1), Scott Soltys (1), Erqi Pollom (1), Elham Rahimy (1), Hao Jiang (3), David Park (4), Amit Persad (4), Yusuke Hori (4), Jie Fu (1), Ignacio Romero (1), Laszlo Zalavari (1), Mingli Chen (2), Weiguo Lu (2), Xuejun Gu (1)
Stereotactic body radiotherapy (SBRT) for spinal metastases requires accurate target delineation to achieve optimal plan quality. However, delineation of the target volume is a labor-intensive process and subject to inter- and intra-observer variations. To address the clinical needs and improve the spine SBRT workflow, we propose a deep learning-driven framework for the auto-localization and delineation of the target volume for spine SBRT, using multi-modal imaging. Consisting of a vertebrae segmentation network and a target volume segmentation network, the proposed workflow can localize the lesion on images based on the clinical chart diagnosis and further generate clinical target volumes (CTV) with a Dice score of 0.86 on the independent testing set. Additionally, this framework is supported by a comprehensive web platform, allowing efficient patient database management and user interactions such as review and revision. Our proposed framework can be a promising tool to streamline the spine SBRT treatment workflow.
27
On the Implementation and Evaluation of Loss Functions for Robust Multiple Anatomy Segmentation on CT Images - Chengyin Li (1), Rafi Ibn Sultan (1), Hassan Bagher-Ebadian (2), Yao Qiang (1), Kundan Thind (2), Indrin Chetty (3)
In the realm of CT-based medical image segmentation, the selection of a loss function significantly influences deep neural network training. Loss functions, such as Cross Entropy (CE), Dice, Boundary, and Top-K, excel in specific areas but have inherent limitations for tissue segmentation. Our motivation arises from the understanding that relying solely on a single loss function can introduce bias into models. We implemented combinations of the CE, Dice, Boundary, and Top-K loss functions, employing both loss-level linear combination and model-level ensembling, on two large CT dataset cohorts to investigate model accuracy for auto-segmentation of normal tissues at different anatomic locations. Through extensive experiments on institutional and public datasets, consistent observations emerged: (1) linear combination of loss functions did not exhibit a clear advantage over single-loss methods; (2) ensemble-based methods demonstrated a 2% - 7% average increase in DSC scores, and appreciable reductions in Hausdorff distances (HDs) and Average Surface Distances (ASDs), compared to both linear-combination and single-loss methods; and (3) the ensemble approach with optimized weights generally characterized finer details in predicted masks, as evidenced by qualitative analyses. This study provides quantitative results on the accuracy of different loss functions employed in machine learning networks for automatic medical image segmentation. Enhanced accuracy was achieved using an ensemble approach with optimized model weights, compared to the use of a single loss function or a linear combination of loss functions.
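A minimal PyTorch sketch of a loss-level linear combination (weighted cross-entropy plus soft Dice), one of the strategies compared above; the weights and implementation details are illustrative, not the authors' code.

```python
# Illustrative loss-level linear combination for multi-class segmentation:
# weighted sum of cross-entropy and a soft Dice loss.
import torch
import torch.nn.functional as F


def soft_dice_loss(logits, target, eps=1e-6):
    """logits: (N, C, ...) raw scores; target: (N, ...) integer class labels."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    target_1h = F.one_hot(target, num_classes).movedim(-1, 1).float()
    dims = tuple(range(2, probs.ndim))
    intersection = (probs * target_1h).sum(dims)
    union = probs.sum(dims) + target_1h.sum(dims)
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()


def combined_loss(logits, target, w_ce=0.5, w_dice=0.5):
    return w_ce * F.cross_entropy(logits, target) + w_dice * soft_dice_loss(logits, target)


# Example call on a synthetic 3D patch:
# logits = torch.randn(2, 4, 32, 32, 32); target = torch.randint(0, 4, (2, 32, 32, 32))
# loss = combined_loss(logits, target)
```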
28
Safeguarding Clinical AI-Based Auto-Segmentation: Open-Source Models as Quality Assurance Against Catastrophic Failures - Daniel Alexander (1), Rafe McBeth (1)
The purpose of this work is to propose the use of an open-source whole-body CT segmentation model as a secondary independent check for validating outputs from a commercial auto-segmentation tool in radiation therapy planning. Evaluating uncertainty and comparing contours in 52 prostate cancer cases, we observe robust overall reliability with notable challenges in rectum contouring. Agreement is strong between the independent and clinical models for bladder and femoral heads (>0.9 DSC for >97% of patients), with moderate performance for the prostate. Advocating for an automated secondary model check during contour generation, we emphasize the need for verification of model output for the continued quality assurance of the planning workflow, aiming to enhance accuracy and confidence in AI-assisted radiation therapy planning.
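The sketch below shows the basic form such a secondary check could take: compute the Dice similarity coefficient between clinical and independent contours and flag structures below a tolerance. The threshold and function names are illustrative assumptions, not the clinical policy described above.

```python
# Illustrative secondary check: flag structures where clinical and independent
# open-source contours disagree beyond a Dice tolerance. Masks are boolean arrays
# on the same grid; the 0.9 tolerance is an assumption for the demo.
import numpy as np


def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def check_structure(name: str, clinical_mask: np.ndarray, independent_mask: np.ndarray,
                    tolerance: float = 0.9) -> None:
    dsc = dice(clinical_mask, independent_mask)
    status = "OK" if dsc >= tolerance else "REVIEW"
    print(f"{name}: DSC = {dsc:.3f} -> {status}")
```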
29
Scripted Dosimetric Validation of AI Auto-Contouring in Prostate Radiotherapy - Daniel Warren (1), Adam Chalkley (1), Ian Stronach (1)
Artificial Intelligence (AI) contouring tools have the potential to optimise radiotherapy workflows and increase standardisation. Verification of these tools remains an important issue, complicated by rapid development cycles and explainability issues. The dosimetric impact of AI tools is of primary clinical importance, but models have largely been validated via geometric measures. In this study, a simple scripted planning process was applied to fifty prostate cases using the original clinical contours and four AI auto-contouring products, with unedited structures used to produce planning target volumes. Dosimetric clinical goals were assessed for the auto-contoured plans and compared to standard geometric measures. Commonly used geometric measures, such as Dice, were found to be less useful surrogates for dosimetric measures than a simple metric that took into account the plan geometry (auto-contoured prostate length in the z direction), which correlated with a systematic underdose when assessed on the clinical contours. This work suggests that a simple scripted planning approach to assessing auto-contouring output may help identify clinically relevant dosimetric changes.
30
Automatic Target Segmentation and Beam Setup for Electron Skin Radiotherapy - Rik Westendorp (1), Miranda Beumer (2), Tristan Van Heijst (1), Kelvin Ng Wei Siang (3) (4)
Accurate superficial target segmentation on a CT scan is a challenging task, since the 2D segmentations must accurately follow the 3D surface curvatures projected onto orthogonal planes. For radiation oncologists this is a considerable time investment. We present a technique to automatically segment a target from a wire placed on the patient's skin using a CT scan. This method furthermore allows automatic beam and couch angle determination as well as placement of the treatment plan reference point as defined by fiducial markers. Multiple wire materials were considered (Al, Cu, Sn) to define the CTV borders on the skin surface; of these, Sn proved to be superior with respect to detectability on CT. A phantom study was performed to assess the properties of the wires. The performance of the detection and segmentation algorithm was tested and evaluated on 25 patients. The algorithm correctly detected all fiducial markers, wires and other objects.
31
Implementing a Simple, Dynamic and Controllable AI Delineation Tool in Clinical Settings - Nis Sarup (1), Maximilian Konrad (1), Marie Louise Madsen Lange Sevdal (1), Simon Long Krogh (1), Christian Rønn Hansen (1) (2), Irene Hazell (1), Ebbe Laugaard Lorenzen (1), Carsten Brink (1)
This paper describes some of the development, decision-making processes, and outcomes of implementing a locally hosted service for automatic segmentation of medical scans in cancer radiation therapy, using in-house-trained models that comply with national and international guidelines. By integrating this service, we significantly reduce the time from scanning to treatment planning, as the system automates tasks traditionally performed manually, thereby reducing the need for manual editing. We detail the service's architectural structure, discussing critical decisions in its design and addressing the constraints posed by the hospital's broader IT infrastructure. Our results highlight that automatic segmentation reduces workload, and we shed light on practical considerations and challenges of embedding such technological solutions in a clinical setting.
32
Comparative Analysis of Deep Learning Models for Flap Segmentation in Head and Neck Computed Tomography Images: A Generalization Study - Abir Fathallah (1), Alice Blache (2), Zacharia Mesbah (3), Romain Modzelewski (3), Juliette Thariat (4), Mathieu Hatt (1)
Segmentation of flaps plays a pivotal role in the treatment planning of radiotherapy for patients with head and neck cancer. Accurate and robust flap segmentation could significantly enhance the accuracy and safety of radiotherapy. In this work, a comparative study was carried out using two distinct models for fully automated medical image segmentation: the nnUNet framework, leveraging U-Net convolutional neural network architectures, and MedSAM, founded on transformers. Both models were previously trained and validated using a well-curated GORTEC (Groupe d'Oncologie Radiothérapie Tête Et Cou) prospective multicentric cohort. The present work focuses on the external validation of the trained models in a "real-life" external cohort. As already observed in the previous study, the nnUNet outperformed MedSAM (mean Dice scores of 74 for nnUNet versus 55 for MedSAM). The performance of nnUNet was lower than in the test set of the previous study, although the drop in performance is mostly due to the fact that the method works well on a majority of cases but completely fails in some cases that suffer from artifacts or contain flaps with previously unseen or rare locations/characteristics. Our future work will focus on improving the performance of the segmentation in these cases.
33
A Robust Auto-Segmentation Approach to Accelerate Adaptive MRI-Guided Radiotherapy for Pancreatic Cancer Patients - Mehdi Shojaei (1), Bjoern Eiben (1), Jamie McClelland (2), Simeon Nill (1), Alex Dunlop (1), Uwe Oelfke (1)
Pancreatic cancer radiotherapy is challenging due to anatomical changes between fractions and organ-at-risk (OAR) radiation toxicity. MR-Linacs facilitate daily treatment plan adaptation to these changes based on session MR images. For this, target and OAR contours on these images are required, which involves labor-intensive manual contouring. Therefore, auto-contouring is highly desirable in an online adaptive MR-Linac workflow. One major challenge in training auto-segmentation models remains the availability of sufficient data. We investigated an auto-segmentation approach using nnU-Net, with generative adversarial network (GAN)-based augmentations to improve performance. Balanced 3DVane images from multiple fractions of 9 patients were used for auto-segmentation. An abdominal CT dataset was utilized for GAN inference, generating 325 images. Compared to conventional augmentation, the GAN-based augmentation increased the average Dice score from 0.86 to 0.88 and reduced the average surface distance from 4.46 mm to 2.78 mm across 8 OARs. Our method can address time-intensive manual contouring in an adaptive MR-Linac workflow.
34
An Automated Framework of Accurate Dominant Intraprostatic Lesion Delineation Based on Multimodal Images for MRI-Guided Adaptive Prostate SBRT - Zirong Li (1), Xiao Liu (1), Qichao Zhou (1), X. Sharon Qi (2)
Prostate cancer is the most common non-skin cancer in men. Despite early diagnosis and intervention, the recurrence rate remains high after primary treatment. Recurrences most frequently occur at the site of the dominant intraprostatic lesion (DIL). Dose escalation to the DIL may improve local tumor control. However, accurate delineation of the DIL based on on-board MR images poses a challenge for MRI-guided prostate adaptive therapy. We propose an edge-enhancement confidence algorithm based on multimodal images that simultaneously utilizes the DIL position information from pre-treatment PET images and the alignment information derived from daily MR images to accurately identify DIL areas. The effectiveness of the framework was evaluated on prostate cancer patients treated with SBRT on a low-field MRI system. The obtained DIL confidence images account for daily motion on daily MR images, improving the accuracy and robustness of target delineation and dose escalation for adaptive radiotherapy.
35
A National Infrastructure for Sharing AI Segmentation Tools - Simon Long Krogh (1), Ruta Zukauskaite (1), Ebbe Lorenzen (1), Nis Sarup (1), Maximilian Konrad (1), Janne Nørlykke Drudgaard (1), Mette Klüver-Kristensen (1), Eva Samsøe (2), Mohammad Farhadi (2), Thomas Hinz-Berg Johansen (2), Carsten Brink (1), Christian Rønn Hansen (1) (3) (4)
Introduction: The development of new automatic contouring solutions has escalated rapidly in recent years. Implementing these requires resources and specialised skills, limiting access for resource-sparse sites. This work explores a standardised infrastructure for sharing automated delineation technologies among centres in a manner so easy to integrate that any site can access it. Materials and methods: The proposed solution is based on the national RT dose plan storage infrastructure, DcmCollab, sharing a head and neck OAR model. Results: A set of extensions to the DcmCollab system was implemented, and the test setup was evaluated over the course of seven months, processing 116 patients. The participating clinicians reported a potential time saving of 30%-40%. Conclusion: The proposed infrastructure was implemented and appears to be a viable means of sharing an automated delineation solution. Before full clinical integration, legal aspects must be considered, especially MDR compliance in the EU.
36
Medical Image Segmentation Assisted With Clinical Inputs via Language Encoder in A Deep Learning Framework - Hengrui Zhao (1), Biling Wang (1), Deepkumar Mistry (1), Jing Wang (1), Michael Dohopolski (1), Daniel Yang (1), Weiguo Lu (1), Steve Jiang (1), Dan Nguyen (1)
Medical image segmentation plays a crucial role in clinical imaging services, with automatic segmentation techniques being pivotal for enhancing diagnostic accuracy and saving time. Despite their importance, current auto-segmentation methods exhibit suboptimal accuracy, marked by a significant gap between automated results and the ground truth. Traditional methods rely heavily on the image data alone, while clinicians typically utilize additional contextual information beyond the images for accurate contouring of targets and organs. To bridge this gap, our study introduces an innovative approach that incorporates clinical inputs into the auto-segmentation model. We collected clinical data that could potentially alter the segmentation, translated it into meaningful text, and employed a transformer-based text encoder to convert these texts into distinct vectors. These vectors effectively scale the features in our auto-segmentation model, allowing it to incorporate and benefit from multi-modal clinical information. This methodology significantly enhances segmentation quality, as demonstrated in our application of prostate segmentation for radiation therapy, where the delineation of the prostate target involves integrating further clinical findings and physician preferences. Our experiments focused on five key factors that influence the segmentation of prostate targets for radiation therapy, with results showing that our proposed method not only outperforms the baseline model lacking clinical input but also surpasses current state-of-the-art methods. This advancement in auto-segmentation technology holds the potential to revolutionize diagnostic processes, offering a more accurate and efficient tool for clinicians.
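A conceptual sketch of one way such conditioning can be wired up: a transformer text encoder produces an embedding of the clinical note, which is mapped to channel-wise scaling factors for an image feature map (FiLM-style). The encoder choice, layer shapes, and scaling scheme are assumptions for illustration, not the paper's architecture.

```python
# Illustrative text-conditioned feature scaling for a segmentation network.
# The pretrained encoder name and the scaling scheme are assumptions for the demo.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text_encoder = AutoModel.from_pretrained("bert-base-uncased")


class TextScaling(nn.Module):
    def __init__(self, text_dim: int = 768, n_channels: int = 64):
        super().__init__()
        self.to_scale = nn.Linear(text_dim, n_channels)

    def forward(self, image_features: torch.Tensor, notes: list[str]) -> torch.Tensor:
        # image_features: (N, C, D, H, W) feature map from the segmentation backbone
        tokens = tokenizer(notes, return_tensors="pt", padding=True, truncation=True)
        text_vec = text_encoder(**tokens).last_hidden_state[:, 0]    # [CLS] embedding, (N, text_dim)
        scale = torch.sigmoid(self.to_scale(text_vec))                # (N, C) channel-wise factors
        return image_features * scale.view(*scale.shape, 1, 1, 1)
```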
37
Use of Complementary Multispectral Data for Improving the Accuracy of Deep Learning Solution for Bioluminescence Tomography Reconstruction in Glioblastoma Tumor Models - Behzad Rezaeifar (1) (2), Cecile Wolfs (1), Natasja Lieuwes (3), Rianne Biemans (3), Brigitte Reniers (2), Ludwig Dubois (3), Frank Verhaegen (1)
We previously proposed an AI-based solution for obtaining the location and shape of a glioblastoma tumor in a preclinical rat model using information obtained from bioluminescence imaging (BLI). The utilization of spectral data in bioluminescence tomography (BLT) reconstruction through conventional model-based algorithms has demonstrated notable advantages. Therefore, in this study, we investigate how spectral data can be employed in the deep-learning approach. For this purpose, a novel multi-channel 3D U-Net deep learning model is proposed, concurrently incorporating bioluminescence skin fluence (BSF) and two additional input channels comprising spectral data obtained by applying specific optical filters to the detected BSF. Results indicate enhanced performance, as evidenced by a median Dice Similarity Coefficient (DSC) of 74 ± 12% between the predicted tumor and the ground-truth CT-based tumor contour, compared to 61% for the previous full-optical-spectrum solution, affirming the efficacy of incorporating spectral data in the deep-learning approach.
38
Influence of Electron Transport Algorithm on Monte Carlo Based Treatment Planning - Grisel Mora Paula (1), A Eldib (2), J Li (2), Charles Ma (2)
Purpose: To investigate the effect of the global electron cutoff energy (ECUT) on the calculation accuracy of Monte Carlo based treatment planning. Systematic studies on heterogeneous phantoms are performed to determine how ECUT affects the dose uncertainty and consequently the dose volume histogram (DVH) and dose mass histogram (DMH). Methods: Square phantoms made of materials of different densities are included in this study. Dose calculations (6 MV IMRT with two beams) are performed using the MCSIM code with the same number of histories and two different ECUT settings (521 keV and 700 keV). DVHs and DMHs are calculated using the MCSHOW code. Results: The uncertainty in the average dose calculated for the low-density region (0.001 g/cm3) is about 2% with ECUT=521 keV and 9% with ECUT=700 keV. For regions with densities of 0.3 g/cm3, the uncertainty is about 1% in both the 521 keV and 700 keV calculations. We observe differences of up to 20% in the total target DVH between 521 keV and 700 keV, while the DMH values differ by less than 5% between the two ECUT settings. Conclusions: DVHs for targets including large (about 50%) low-density regions may be significantly affected by the ECUT value used with the Monte Carlo algorithm. In contrast, the DMH is not affected by a high ECUT in treatment plan evaluation, and thus we can use a high ECUT and the DMH to improve the efficiency and accuracy of Monte Carlo based treatment planning.
39
Monte Carlo Photons Simulation From Alpha-Emitter Radionuclides - David Sarrut (1) (2) (3), Ane Etxebeste (3), Jean Létang (3)
We present a Monte Carlo virtual source model for generating photons emitted during the decay chain of alpha-emitter radionuclides. The model, integrated into GATE 10, utilizes Geant4 databases to extract photon emission lines from the decaying daughters. Employing the Bateman equations for decay chain abundances and activities within a specified time range, it generates the photons' energy and temporal distribution. By avoiding the need to simulate the entire decay chain, it reduces the computation time compared to a reference ion source. The source model can be applied to any alpha-emitter radionuclide in the Geant4 database. It facilitates the investigation of single-photon emission computed tomography (SPECT) imaging of photon emissions from alpha-emitters.
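For reference, the standard Bateman solution for a linear decay chain is the general form of the abundance calculation referred to above; the model's implementation details are not restated here.

```latex
% Number of atoms of the n-th chain member at time t, for a chain
% A_1 -> A_2 -> ... -> A_n with decay constants \lambda_i, starting from
% N_1(0) atoms of the parent only; the activity follows as A_n(t) = \lambda_n N_n(t).
N_n(t) = N_1(0) \left( \prod_{i=1}^{n-1} \lambda_i \right)
         \sum_{i=1}^{n} \frac{e^{-\lambda_i t}}
         {\prod_{\substack{j=1 \\ j \neq i}}^{n} \left( \lambda_j - \lambda_i \right)}
```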
40
GMC [gɪ́mɪk] Then and Now: (r)evolution of a One-Variable Proton Monte Carlo Dose Algorithm. - Nicolas Depauw (1), Michael Williams (1), Benjamin Clasie (1), Kyung-Wook Jee (1), Thomas Madden (1), Hanne Kooy (1)
GMC [gɪ́mɪk] is a one-variable Monte Carlo dose algorithm for proton therapy. Its accuracy and performance were presented at the ICCR in Melbourne in 2013. Here, we present the current state of GMC after porting to a GPU architecture. Compared with the original version, we demonstrate a substantial speed gain while preserving the system's accuracy. Features now include dose-averaged LET and BioDose computations, support for PBS and SOBP fields, as well as RBE modeling based on the McNamara model with arbitrary alpha-beta ratios. The DICOM data processing and loading steps for transparent integration in the clinical workflow take the major part of the total running time, while the computation itself runs on the order of seconds, regardless of patient size, number of beams, prescription, etc. A live demo may be performed.
41
A Full GATE/Geant4 Model of a Helical Radiotherapy Device - Sheraz Blas (1), Luc Simon (2)
Radiation treatment relies on Treatment Planning Systems (TPS) for precise dose calculations. However, challenging cases, such as strong heterogeneities or complex target movements, need reference calculations such as a Monte Carlo model. This study introduces a Monte Carlo model of a helical radiotherapy device (Accuray Radixact). The GATE platform v9.1 simulates the linac head using data from the manufacturer. The model is validated by comparing measured and simulated dose distributions. A distinctive feature is its consideration of all accelerator element movements, including individual Leaf Open Times. The model will find application in diverse contexts.
42
Assessing Neutron-Induced Carcinogenesis Risk in High-Energy Radiation Therapy: a Monte Carlo and Track Structure Study Using Realistic Neutron Spectra and DNA Damage Clusters - Sachin Dev (1), Felix Mathew (1), John Kildea (1)
High-energy photon radiation therapy (> 8 MeV) can lead to unintended patient exposure to secondary photoneutrons, posing a risk for iatrogenic cancers. Existing studies to evaluate the potential of photoneutrons to induce carcinogenesis have been performed solely with flat neutron spectra irradiating the ICRU-4 sphere. Therefore, to replicate a real-world cell irradiation scenario, our study models an in-vitro cell irradiation geometry exposed to realistic photoneutron spectra encountered in high-energy external-beam photon radiotherapy and aims to evaluate the resulting neutron-beam RBE. As a first step, we have quantified the secondary particle spectra in the ICRU-4 sphere for monoenergetic neutron beams of initial energies 1 keV, 1 MeV, and 10 MeV. An in-house nuclear DNA model and an algorithm for recording clustered DNA damage will be used to calculate DNA damage yields. The neutron RBE obtained from the simulation will be validated by performing in-vitro irradiations of B-lymphoblastoid cells with a 6 MV photon beam used as reference radiation and with the photoneutron spectra arising from a 15 MV linac.
43
Impact of Temporal Structure of Electron, Proton and Photon Beams on H2O2 Concentration Using the Monte Carlo Simulation Toolkit TOPAS-nBio - Mustapha Chaoui (1), Othmane Bouhali (2), Tayalati Yahya (1) (3)
FLASH radiation therapy is an innovative modality that relies on the delivery of ultra-high dose rates (UHDR), above a threshold of 40 Gy/s, in a pulsed beam structure. This modality reduces normal tissue complications without compromising tumor control, termed the "FLASH effect". However, the mechanism governing this effect remains elusive, prompting ongoing debate about plausible mechanisms. The effect may depend on early physical-chemical and biochemical events. The main aim of this study is to track the yields of water radiolysis products, particularly the toxic reactive oxygen species hydrogen peroxide (H2O2). TOPAS-nBio Monte Carlo track structure simulations were conducted across diverse conditions, such as dose rates, numbers of pulses, and particle beam energies. The H2O2 yields exhibited a notable linear increase with the delivered dose, and the dose rate significantly affected the H2O2 yield for all three particle beams. These H2O2 yields can be ranked: UHDRphoton > UHDRproton > UHDRelectron > CONVelectron.
44
Recent Progress in GMicroMC Package for Microscopic Monte Carlo Simulations - Youfang Lai (1), Yuting Peng (1), Meiying Xing (2), Yujie Chi (3), Xun Jia (1)
Fundamentally understanding biological responses to ionizing radiation is of crucial importance. Mechanistic modeling by microscopic Monte Carlo (MC) simulation of the radiation physics and chemistry processes triggered by radiation, as well as the process that generates initial DNA damage, plays an important role. However, such simulation is often very time consuming, limiting the scope of simulations that are affordable. Our group has been developing an open-source, high-performance microscopic MC simulation package, gMicroMC, based on graphics processing units (GPUs). This paper summarizes our recent progress in modeling the radiation transport process and DNA damage calculations for various DNA structures. To demonstrate the utility of this tool, we also present two example applications, namely radical production in electron FLASH irradiation and the relative biological effectiveness (RBE) of heavy ions calculated by considering DNA damage of various complexities.
45
Log File-Based Patient Specific QA Using GPU-Accelerated Monte Carlo Simulation for Proton Pencil Beam Line Scanning Therapy - Chanil Jeon (1), Sung Hwan Ahn (2), Jungwook Shin (3), Hoyeon Lee (4), Youngyih Han (5) (1)
Monte Carlo (MC) simulation is one of the most accurate dose calculation methods in radiation therapy applications. However, its long calculation time is a major drawback that makes it difficult to use MC simulation in practical applications. In this study, we developed a log file-based MC dose calculation application using GPUs for proton pencil beam line scanning therapy. The beam parameters of the simulation were determined using beam measurement data for the clinically used energy range. To validate the log file-based GPU MC method, eight patient-specific quality assurance plans were used. The two-dimensional absorbed dose at a depth of 2 cm in a water phantom from the log file- and GPU-based MC was compared with that of the log file- and CPU-based MC calculation using gamma evaluation with a 2%/2mm criterion and a 5% dose threshold. The average, maximum, minimum, and standard deviation of the 2%/2mm gamma pass rates were 99.6%, 100%, 97.9%, and 0.55%, respectively.
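For readers unfamiliar with the gamma evaluation quoted above, the following is a minimal sketch of a 2%/2mm planar gamma comparison using the open-source pymedphys library. It is only an illustration of the type of comparison described; the dose arrays, grid spacing and dose threshold are placeholders, not the authors' data or code.

```python
# Illustrative 2%/2mm gamma comparison of two planar dose distributions.
import numpy as np
import pymedphys

# Hypothetical planar doses at 2 cm depth on a 1 mm grid.
dose_reference = np.random.rand(100, 100) * 2.0      # e.g. CPU-based MC result (Gy)
dose_evaluation = dose_reference * 1.005             # e.g. GPU-based MC result (Gy)
axes = (np.arange(100) * 1.0, np.arange(100) * 1.0)  # (y, x) coordinates in mm

gamma = pymedphys.gamma(
    axes, dose_reference,
    axes, dose_evaluation,
    dose_percent_threshold=2,     # 2% dose difference
    distance_mm_threshold=2,      # 2 mm distance-to-agreement
    lower_percent_dose_cutoff=5,  # ignore points below 5% of the maximum dose
)

valid = ~np.isnan(gamma)
pass_rate = np.mean(gamma[valid] <= 1) * 100
print(f"2%/2mm gamma pass rate: {pass_rate:.1f}%")
```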
46
Evaluation of a New Radiosurgery System (infinityARC) Using Geant4 - Patrick Johan Ngontie Ngounou (1) (2), Philippe Despres (1) (2), Yannick Lemaréchal (1) (2)
In this study, Geant4 simulations were used to evaluate the infinityArc (iArc) radiosurgery system currently under development. This new system is equipped with a robotic arm that allows for the precise and safe movement of the equipment's head around the patient while delivering the beam. The radiation from 22 60Co sources passes through a collimation system towards the isocenter. To meet FDA approval standards, a comprehensive study protocol utilizing Geant4 simulations has been established, adhering to the IEC 60601-2-11:2013 and NCRP 102 guidelines, which define the performance criteria to be achieved. Monte Carlo simulations were used to calculate the dose distribution in a spherical water phantom and to assess leakage radiation around the equipment's head. The dose profiles and isodose curves will be compared with physical measurements once the prototype is completed. Preliminary results suggest that the shielding geometry is effective. Nevertheless, further simulations and in-depth investigations are required to characterize the system more fully.
47
A Fast 3D Range-Modulator Delivery Approach: Sensitivity of the Resulting Dose Distribution on Potential Setup Misalignments - Yuri Simeonov (1), Miriam Krieger (2), Michael Folkerts (2), Gerard Paquet (2), Pierre Lansonneur (2), Petar Penchev (1), Klemens Zink (1) (3), Uli Weber (4)
3D range modulators (3DRMs), optimized for a single energy and an individual tumor shape, are a promising and viable solution for FLASH dose delivery in particle therapy. In combination with a high-intensity accelerator, irradiation time can be reduced tremendously. However, a 3DRM is an additional beam-modifying device in the beam line, and as such it is important to understand and analyze the impact of potential misalignments on the dose distribution. For this purpose, a reference Monte Carlo simulation and five additional modified simulations were performed and compared. The modifications included modulator translation and rotation. The results show that the SOBP transitions to a broad, Gaussian-like distribution with increasing RM rotation, whereas the sharp distal dose edge is laterally shifted. Precise positioning is therefore important. However, increasing the pin base can mitigate the SOBP dose deterioration to a large extent. Even complex 3D modulators with long pins are robust against 0.5° rotations or 1.5 mm shifts.
48
Oxygen Effect on Tumor Response to Irradiation: Insights From GATE Simulation Studies - Zakaria Ait Elcadi (1), Othmane Bouhali (1)
The oxygen effect significantly influences tumor cell response to irradiation, particularly in cases of tumor hypoxia. This study, using the GATE simulation code, evaluates dose distribution in a spherical tumor within the right lung under varying oxygen concentrations. The Oxygen Enhancement Ratio (OER) is employed to quantify the impact of oxygen pressure on radiation response. Two phases of investigation involve simulations of an XCAT phantom with a 1 cm diameter spherical tumor, applying oxygen levels from 5 to 80 mmHg. Results demonstrate a correlation between deposited energy, OER values, and oxygen pressure. The study reveals that regions with low oxygen pressure require higher doses to achieve comparable effects, emphasizing the need for tailored treatments based on tumor oxygenation. This research sets the groundwork for future studies on physiological factors influencing hypoxic tumors and their spatial relationship to varying tissue densities.
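As an illustration of how an OER weighting can be applied, the sketch below uses the classical Alper–Howard–Flanders form of the OER as a function of oxygen partial pressure. The abstract does not state which OER model was used, so the model choice and the parameter values (m, K) are generic textbook assumptions rather than the study's settings.

```python
# Illustrative oxygen-enhancement-ratio (OER) weighting over the pO2 range
# considered in the study (5-80 mmHg). Parameters m and K are assumed values.
def oer(p_o2_mmHg, m=3.0, k_mmHg=3.0):
    """OER relative to anoxia: 1 at pO2 = 0, approaching m at full oxygenation."""
    return (m * p_o2_mmHg + k_mmHg) / (p_o2_mmHg + k_mmHg)

physical_dose = 2.0  # Gy, hypothetical dose to a well-oxygenated voxel
for p_o2 in [5, 10, 20, 40, 80]:  # mmHg
    # Dose needed at this pO2 to match the effect of `physical_dose` in full oxygen.
    dose_needed = physical_dose * 3.0 / oer(p_o2)
    print(f"pO2 = {p_o2:>2} mmHg: OER = {oer(p_o2):.2f}, "
          f"equivalent dose ~ {dose_needed:.2f} Gy")
```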
49
Benchmark of Geant4 Hadronic Models for Carbon-Therapy Within the CLINM Project - Marie Vanstalle (1), Lévana Gesson (1), Claire Reibel (1), Nicolas Arbor (1), Aurélia Arnone (1), Christian Finck (1), Catherine Galindo (1), Stéphane Higueret (1), Thé-Duc Lê (1), Arshiya Sood (1), Quentin Raffy (1), Philippe Peaupardin (1), Marco Pullia (2)
Optimization of hadrontherapy treatments implies careful consideration of the secondary particles produced by ion beam fragmentation in the patient. Unfortunately, these processes are still poorly reproduced by the software used for treatment planning, and research projects are needed to improve the multi-scale calculations of secondary particle production and the associated radiobiological effects. In order to accurately replicate biological responses to ionizing radiation, understanding both the physical and chemical effects of ions is crucial. The CLINM project aims to characterize secondary particles from ion fragmentation on tissues and their associated chemical effects, comparing simulation and experimental results. In this context, an experiment was conducted at the CNAO hadrontherapy center, where the physical properties (energy, charge) of the fragments produced by the interaction of an incident 12C ion beam with a tissue-equivalent target were measured with a ΔE-E telescope. This work presents the results obtained and compares them with simulations performed with Geant4.
50
Monte-Carlo Modeling of Intratumoral Radionuclide Distribution in TAT: Comparison to Experiment - Victor Levrague (1), Mario Enrique Alcocer-Avila (2), Etienne Testa (2), Michaël Beuve (2), Rachel Delorme (1)
In the context of predicting biological effects in Targeted Alpha Therapy, nano- and micro-dosimetry are essential because of the heterogeneous dose deposition at the cell level. Combining CPOP, to create complex spheroidal multicellular geometries, and Geant4, to perform Monte Carlo simulations, cell survival can be calculated through the NanOx biophysical model for a given low-energy ion irradiation. The objectives are to investigate the impact of intratumoral radionuclide heterogeneities on cell survival with our model, and to compare the results with measurements and MIRDcell. At 1% cell labeling, MIRDcell and our model overestimate the measured cell survival by several orders of magnitude. For 10% and 100% labeling, our predictions reached an R² of up to 0.89 for lognormal distributions. This study highlights the consistency of our model's predictions with those of MIRDcell and the importance of considering realistic radionuclide distributions. It also highlights the difficulty of reproducing in vitro experiments as faithfully as possible.
51
Monte Carlo Modeling of Elekta VERSA HD for Out-of-Field Dose Characterization - Maxime Jacquet (1), Thomas Baudier (1), Eric Deutsch (2), Charlotte Robert (2), David Sarrut (1)
Modern radiotherapy has significantly improved cancer patient survival rates, yet challenges remain in optimizing both tumor response and quality of life. Out-of-field doses potentially affect patient overall survival, but they are not well estimated by conventional treatment planning systems. We propose here the development of a full Monte Carlo simulation model of the Elekta VERSA HD accelerator to assess the out-of-field deposited doses. We introduce an original Compton splitting method aiming to compensate for the slow simulation convergence in the out-of-field region. Results emphasize that achieving a 5% statistical precision on the far out-of-field deposited dose requires 200 days of simulation on a single thread. The proposed variance reduction technique led to a 1.4-fold efficiency gain. This model will be used to generate out-of-field dose maps on a large database of patients that will be used to train a deep learning model for quick dose calculation.
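The 1.4-fold efficiency gain quoted above is the standard Monte Carlo figure of merit used to compare variance reduction techniques, epsilon = 1/(sigma^2 * T). The short sketch below only illustrates that definition; the uncertainty and runtime values are placeholders, not the study's numbers.

```python
# Monte Carlo efficiency figure of merit: eps = 1 / (sigma^2 * T), where sigma
# is the relative statistical uncertainty of the scored quantity and T the
# computation time. Equal precision reached in less time means higher efficiency.
def efficiency(relative_sigma, time_s):
    return 1.0 / (relative_sigma ** 2 * time_s)

analog = efficiency(relative_sigma=0.05, time_s=3600.0)            # analog simulation
splitting = efficiency(relative_sigma=0.05, time_s=3600.0 / 1.4)   # with Compton splitting

print(f"Efficiency gain from variance reduction: {splitting / analog:.2f}x")
```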
52
DINo: Deep Learning Intelligence for Nuclear Reactions - Lévana Gesson (1) (2), Greg Henning (1), Uli Weber (2), Marie Vanstalle (1)
In radiation therapy, precise dose calculations are crucial for treatment plan improvement, relying on advanced algorithms based on Monte Carlo simulations. Persistent discrepancies between experimental data and hadronic models in particle therapy highlight the need for improved accuracy. This study introduces DINo (Deep learning Intelligence for Nuclear reactions), a deep learning algorithm designed to explore solutions for these challenges. DINo uses partial proton nuclear reaction cross-sections to extrapolate total cross-sections, mitigating limitations in existing models. Focusing on protons, which are extensively represented in the TENDL nuclear data library, DINo is a dense neural network that learns from the TENDL-2019 model values and predicts total cross-sections for unexplored initial energies. Preliminary results indicate that DINo has superior predictive capabilities compared to the TENDL model, promising enhanced precision in nuclear reaction data and refined dose calculations for particle therapy.
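To make the "dense neural network" idea concrete, here is a hedged PyTorch sketch of a small fully connected network mapping partial cross-sections (plus incident energy) to a total cross-section. The layer sizes, input channels, and the toy training data are assumptions chosen for illustration; they do not reproduce the published DINo architecture or the TENDL-2019 training set.

```python
# Illustrative dense network for cross-section regression (assumed architecture).
import torch
import torch.nn as nn

class TotalCrossSectionNet(nn.Module):
    def __init__(self, n_partial_channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_partial_channels + 1, 64),  # partial cross-sections + energy
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                        # predicted total cross-section
        )

    def forward(self, x):
        return self.net(x)

model = TotalCrossSectionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for library-derived training samples.
x = torch.rand(32, 9)                    # 8 partial cross-sections + energy (arbitrary units)
y = x[:, :8].sum(dim=1, keepdim=True)    # toy target: sum of the partial channels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```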
53
Deformable Voxel Geometry for Photon and Electron Monte Carlo Dose Calculations - Björn Zobrist (1), Hannes A Loebner (1), Jenny Bertholet (1), Daniel Frei (1), Werner Volken (1), Florian Amstutz (1), Marco F M Stampanoni (2), Peter Manser (1), Michael K Fix (1)
Purpose: To account for dynamically deforming anatomy during photon and electron dose calculation by implementing and validating a deformable voxel geometry for Monte Carlo dose calculations: DefVoxMC. Materials and Methods: DefVoxMC is implemented in the Swiss Monte Carlo Plan framework and validated for non-rigid deformations with photon and electron beams using a compression test and a Fano test. DefVoxMC is used for a clinically motivated case with breathing motion, and the dose differences between the reference (static) and deformed situations are assessed. Results: The compression and Fano tests were successful. For the clinically motivated case, areas with dose differences of up to 1.4 Gy for electron beams and 6.9 Gy for photon beams are observed between the static and deformed situations with a prescribed dose of 50 Gy. Conclusion: DefVoxMC is successfully implemented and validated for photons and electrons and can be used to account for deformations in clinically motivated cases.
54
Development of a Low-Dose Radiotherapy Machine for Osteoarthritis: Design Optimization Based on Monte Carlo Simulation - Hyojun Park (1), Jung-In Kim (1), Sang Hoon Heo (2), Song Yi Heo (2), Minjae Choi (1), Ju Yeol Shin (1), Chang Heon Choi (1)
This study aims to optimize the design of a dedicated low-dose radiotherapy machine for osteoarthritis. Monte Carlo (MC) simulation with the GEometry ANd Tracking 4 (Geant4) toolkit was employed for the optimization. First, the appropriate source for osteoarthritis treatment was defined by comparing the physical characteristics of different radionuclides currently used in radiotherapy. Next, the geometries of the collimator, housing, and source were optimized. The collimator and housing were optimized in terms of thickness by evaluating the transmission rate of photons from the source for different tungsten thicknesses. The source, which had a cylindrical shape, was optimized in terms of its length and diameter. Finally, the dose distribution produced by the dedicated machine was evaluated for different rotation steps. The results are expected to contribute to the development of a compact and simple treatment machine for osteoarthritis.
55
A GPU-Based Patient-Specific CT Dosimetry Tool Combining Monte Carlo Calculations and ML-Based Segmentations - Ronan Lefol (1), Yannick Lemaréchal (1), Jonathan Boivin (2), Philippe Despres (1)
The continuous growth of Computed Tomography (CT) examinations underscores the pressing need for enhanced personalized dose estimation in medical imaging. While patient-specific Monte Carlo simulations are recognized as the gold standard, they remain prohibitively time-consuming and unfit for massive, population-scale calculations. This study showcases fast dose-to-organ extraction facilitated by the GPU Monte Carlo Dose (GPUMCD) software and automated segmentation tools for CT acquisitions. Doses simulated within anthropomorphic phantoms were within 7% of measured values. Mean organ dose variations of up to 50% from the cohort average were observed for identical scanning parameters. The pipeline has a runtime of 5 to 7 minutes, with 140 s for the segmentation and 70 s for GPUMCD. This work demonstrates the ability to generate DICOM-compliant dose distributions, segmentations, and dose reports within minutes. This pipeline lays the groundwork for swift, personalized CT dosimetry and facilitates large-scale dose-to-organ studies.
56
Prompt-Gamma Timing With a Track-Length Estimator in Proton Therapy - Jean Létang (1), Oreste Allegrini (2), Etienne Testa (2)
We present a variance reduction technique (VRT) for Geant4/GATE Monte Carlo (MC) simulations, tailored to the simulation of prompt-gamma (PG) radiation caused by proton inelastic processes in the patient. This technique is based on a track-length estimator to generate 3D sources of PG energy and emission-time yields. Results show minor discrepancies with respect to analog MC. A gain in efficiency of about 50 is reported for the detection of PG.
57
Accuracy Comparison Between GATE and TOPAS for 6MV Varian CLINAC IX - Zakaria Ait Elcadi (1), Othmane Bouhali (1)
Monte Carlo simulations in radiation therapy often face CPU timing challenges, prompting the development of various Geant4-based platforms for optimization. Our study compares GATE and TOPAS, both Geant4-rooted, aiming to guide users toward platforms offering optimal CPU tools and high precision. The assessment involves CPU timing and electron beam configuration comparisons: firstly, exploring platform-specific physics, phase space, and variance reduction techniques; secondly, scrutinizing electron beam configurations using stringent criteria. Results show a 7% efficiency edge for GATE over TOPAS in CPU timing, attributed to distinct techniques. However, electron beam assessments reveal discrepancies in mean energy and spot size. TOPAS shows better agreement with measured percentage depth dose points due to its advanced physics configuration. This analysis emphasizes GATE's computational efficiency and TOPAS's electron source modeling precision. It stresses the importance of platform-specific capabilities when selecting radiation therapy simulation tools.
58
Identification of Suitable Probing Locations for a Proton Radiography-Based Quality Control of CBCT-Based Synthetic-CTs - Måns Lundberg (1) (2), Arturs Meijers (2), Antony Lomax (2) (3), Damien Weber (2) (4) (5), Kevin Souris (6), Antje Knopf (1)
Proton Radiography (PR) can be used as a quality control tool for Cone-Beam CT-based synthetic CTs (sCTs) by comparing PR simulations in the sCT with PR measurements acquired behind the patient. The sensitivity of proton beams to density variations necessitates spatially invariant PR probing locations, minimizing the influence of confounding factors. We investigated whether the energy bandwidth in an area around a PR probing location is predictive of its robustness towards confounding factors. To assess this robustness measure, positioning errors were applied to the probing locations and the resulting range shifts were evaluated. We demonstrated that the energy bandwidth of an area surrounding a probing location correlates with the magnitude of the resulting range shift when simulating positioning errors. Considering an allowed range shift uncertainty of 0.6%, we located a minimum of 19 suitable PR probing locations for sCT quality control in each of the five investigated patients.
59
Can Low-Cost Magnetic Resonance Distortion Phantoms Be 3D Printed? - Haley Clark (1)
Expanding use of diagnostic MR fusion, MR simulation, and MR-linacs in radiotherapy has increased awareness of the quality assurance challenges associated with spatial distortions. While precise and effective commercial phantoms exist, they are costly and can present challenges when monitoring multiple MR scanners. In this work, we evaluate whether extremely low-cost 3D printed phantoms exhibit the spatial accuracy needed to monitor spatial distortions. Using several printed variations, we find that clinically sufficient accuracy is achievable at a cost up to four orders of magnitude lower than that of a commercial phantom.
60
External Validation of an AI-Based Tool for the Early Detection of Radiotherapy Treatment Planning Errors-An Updated Multicentric Study - Petros Kalendralis (1), Samuel Luk (2), Alan Kalet (3), Andre Dekker (1), Inigo Bermejo (1) (4), Tomasz Piotrowski (5) (6) (7), Adam Ryczkowski (5) (6), Joanna Kazmierska (5) (8)
Bayesian Networks (BNs) represent a graphical modelling approach derived from the field of artificial intelligence (AI). They have demonstrated promise in assisting with the identification of errors in radiotherapy treatment plans. This study externally validated the BN developed with data from the University of Washington (UW), the University of Vermont (UVM), and the MAASTRO clinic in the Netherlands, using an independent dataset from the radiotherapy department of the Greater Poland Cancer Centre (GPCC) in Poznan. We used a dataset of 378 Volumetric Modulated Arc Therapy (VMAT) treatment plans, consisting of 802 treatment arcs. The results showed that the BN trained on the three centres (MAASTRO, UVM, and UW) outperformed (AUC: 78%) the BNs trained on single centres (UW: 66% and MAASTRO: 75%). Future steps include the construction of BNs applied to specific treatment modalities and tumour locations to create a more robust model.
61
Automatic Contour Quality Assurance (CAT-QA) to Speed up Online Adaptive Radiotherapy - Lisa Fankhauser (1) (2), Andreas Smolders (1) (2), Renato Bellotti (1) (2), Damien Charles Weber (1) (3) (4), Antony John Lomax (1) (2), Francesca Albertini (1)
One of the bottlenecks in the implementation of online adaptive radiotherapy is the time required to review and approve daily contours. In response to this challenge, our study introduces an automated Contour Quality Assurance (CAT-QA) workflow aimed at enhancing efficiency. The CAT-QA workflow uses geometric and dosimetric tests to identify automatically generated contours that would benefit from manual review. It is split into two phases: an offline phase conducted before the treatment and an online phase conducted after acquiring the daily image. Across the nine cases used for testing, a maximum of three structures were selected for manual revision. Using CAT-QA ensures that the initial prescription is followed and avoids dose increases in OARs due to uncertainties in the propagated contours. In conclusion, although more validation is required, the proposed workflow could enable safe use of auto-contouring for adaptive radiotherapy while reducing the time spent on manual contouring.
62
Development of an Automated Radiotherapy Daily QA System - Marcus Tyyger (1), Matthew Carlisle (1), Jack P. C. Baldwin (1), Richard Homer (1), Phillip Rixham (1), Steve Weston (1)
Purpose: Develop an EPID-based daily constancy check system, which does not require a phantom and can reduce the overall quality assurance (QA) burden. Materials and Methods: Daily images were acquired across 12 linear accelerators for 6 and 10 MV FFF and flattened beams. Local computing infrastructure was set up to automatically transfer data and analyse images. Results were made visible through a webserver for online and offline analysis. Image analysis was validated against standard QA measurements. Results: The median output error was 0.1% (range: -0.9%, 1.7%). The median symmetry and flatness errors were less than 0.1% (range: -2.3%, 2.0%) and -0.6% (range: -3.1%, 3.3%), respectively. Conclusion: In-house development can provide solutions for RT departments where commercial solutions are unsuitable or unavailable. The developed system can be employed as a daily output constancy check, and trend analysis of beam characteristics enables a potential reduction of monthly QA tasks.
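For context, the sketch below shows simple flatness and symmetry calculations of the kind such a daily constancy system could track from an EPID profile. Definitions vary between protocols; the variants used here (max/min over the central 80% of the field, point-by-point left/right difference) are common choices and not necessarily the ones implemented by the authors.

```python
# Illustrative beam-profile constancy metrics from a 1D EPID profile.
import numpy as np

def central_region(profile):
    """Return the central 80% of a crossline/inline profile."""
    n = len(profile)
    lo, hi = int(0.1 * n), int(0.9 * n)
    return np.asarray(profile[lo:hi], dtype=float)

def flatness(profile):
    p = central_region(profile)
    return 100.0 * (p.max() - p.min()) / (p.max() + p.min())

def symmetry(profile):
    p = central_region(profile)
    left, right = p, p[::-1]
    return 100.0 * np.max(np.abs(left - right) / ((left + right) / 2.0))

profile = np.ones(512) + np.random.normal(0, 0.005, 512)  # hypothetical flat profile
print(f"Flatness: {flatness(profile):.2f}%  Symmetry: {symmetry(profile):.2f}%")
```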
63
QATrack+: An Open-Source Routine Quality Assurance Platform - Randle Taylor (1), James Kerns (1)
QATrack+ is an open-source quality assurance platform able to record data, attach relevant files, and perform custom calculations. The software is extensible and allows users to write their own scripts for advanced data processing. Service events, reporting, and charting are all built in and work seamlessly with data inputs. An advanced permissions system also supports specialized roles and multi-user environments.
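As an example of the custom calculations mentioned above, QATrack+ composite ("calculated") tests are defined by short Python snippets that read other tests via their macro names and assign the final value to `result`. The macro names below (raw_reading, temperature, pressure) are hypothetical; they are set explicitly here so the sketch runs standalone, whereas in QATrack+ they would be supplied by the test list.

```python
# Sketch of a QATrack+-style composite test calculation (temperature/pressure
# corrected output). In QATrack+, only the last two lines would be entered as
# the calculation procedure; the inputs come from other tests' macro names.
raw_reading = 1.002   # nC, would be supplied by the "raw_reading" test
temperature = 21.5    # deg C
pressure = 101.1      # kPa

ktp = ((temperature + 273.15) / 295.15) * (101.33 / pressure)
result = raw_reading * ktp   # value recorded for the composite test
print(result)
```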
64
Isocenter Deviation Assessment and Model for Halcyon Radiotherapy Systems - Linda Lankinen (1) (2), Antti Kulmala (3), Jouko Lehtomäki (1), Ari Harju (1)
The radiation isocenter position of a Halcyon radiotherapy system was measured, and the deviation from the intersection of the rotation axes (the ideal isocenter) was implemented in a Monte Carlo (MC) model. With a test VMAT plan, the isocenter deviation was assessed by analyzing the difference between the film measurement and the MC-simulated dose distribution. The implementation of the deviation improved the agreement between the MC model and the measured dose distribution, especially in steep dose gradient regions. While the treatment system delivery met all isocenter-related specifications, the improvements demonstrated by modeling the isocenter displacements could lead to more accurate dose predictions for treatments where steep dose gradients are needed around small targets.
65
Big Data Methods for CT Doses Analysis and Monitoring - Pierre-Luc Asselin (1) (2) (3), Philippe Despres (1) (2) (3), Jonathan Boivin (2), Yannick Lemaréchal (1), Gabriel Couture (1), Samuel Ouellet (3)
Computed Tomography (CT) is the principal contributor to ionizing radiation exposure of medical origin for the Canadian population. The Canadian Computed Tomography Survey by Health Canada provides guidance on the safe use of radiation-emitting devices, notably by proposing Diagnostic Reference Levels (DRLs). Doses associated with studies can be compared to DRLs, but this remains a periodic, one-off procedure that might fail to capture the reality of clinical activities. There is a need to monitor the use of ionizing radiation in CT more closely, as increasingly demanded by authorities. This project aims to implement big data methods to provide CT dose visualization and analytic tools to managers and technical personnel. Dashboards were deployed to provide an overview of radiation usage in CT, creating opportunities to study clinical practice trends and improve health care. The technology used was found to be well adapted to massive datasets without interfering with clinical activities.
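The sketch below illustrates the kind of dose-indicator monitoring such dashboards build on: comparing per-protocol median CTDIvol values to DRLs with pandas. Column names, protocols and DRL values are hypothetical placeholders, not the Health Canada survey values or the authors' schema.

```python
# Minimal dose-indicator summary: per-protocol median CTDIvol vs. a DRL.
import pandas as pd

exams = pd.DataFrame({
    "protocol": ["head", "head", "chest", "abdomen", "chest"],
    "ctdi_vol_mGy": [52.0, 61.0, 9.5, 14.2, 11.0],
})
drl = {"head": 60.0, "chest": 12.0, "abdomen": 16.0}   # illustrative DRLs (mGy)

report = exams.groupby("protocol")["ctdi_vol_mGy"].median().rename("median_CTDIvol").to_frame()
report["DRL"] = report.index.map(drl)
report["above_DRL"] = report["median_CTDIvol"] > report["DRL"]
print(report)
```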
66
Patient-Specific Quality Assurance in Radiotherapy: Prediction With Deep Hybrid Learning for VMAT Plans on Rapid Arc and IMRT Plans on Halcyon Machines - Noémie Moreau (1) (2), Christine Boutry (1), Laurine Bonnor (1), Cyril Jaudet (1), Nadia Falzone (3), Alain Batalla (1), Laetitia Lechippey (1), Cindy Bertaut (4), Aurélien Corroyer-Dulmont (1) (2)
Arc therapy in radiotherapy (RT) allows for better conformation of the dose deposition but requires patient-specific quality assurance (QA). The aim of this study was to develop a predictive model of Delta4 QA outcomes based on RT-plan complexity indices to reduce the QA workload. Methods: From 1632 RT-VMAT plans, six complexity indices were extracted. Machine learning (ML) and deep hybrid learning (DHL, for complex tumor locations, i.e., breast, pelvis, and head and neck) models were developed to predict QA-plan compliance. Results: The ML model achieved 100% specificity and 98.8% sensitivity for non-complex RT plans (brain and thorax tumor locations). For more complex RT plans, the sensitivity of the ML model dropped to 87%, whereas the innovative DHL method achieved 100% sensitivity and 97.78% specificity. Conclusion: QA results were predicted with high accuracy. A similar approach was used to predict intensity-modulated RT QA compliance for breast tumor treatment with the Halcyon machine; these results will also be presented at the conference.
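The following is a hedged sketch of the machine-learning part of such a workflow: a classifier trained on per-plan complexity indices to predict QA compliance, reporting sensitivity and specificity. The classifier choice, synthetic data, and labelling rule are illustrative only and do not reproduce the authors' ML or DHL models.

```python
# Toy QA-compliance classifier from six plan complexity indices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n_plans = 1632
X = rng.random((n_plans, 6))                                   # six complexity indices per plan
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.random(n_plans) > 0.9).astype(int)  # 1 = QA fails (toy rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

y_pred = clf.predict(X_test)
sensitivity = recall_score(y_test, y_pred)                 # failing plans correctly flagged
specificity = recall_score(y_test, y_pred, pos_label=0)    # passing plans not flagged
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```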
67
AutoMRISimQA: an Automated System for Daily Quality Control of a 3T MRI Simulator - Aitang Xing (1) (2) (3), Gary Goozee (1) (2) (3), Gary Liney (1) (2), Sankar Arumugam (1) (2) (3), Shrikant Deshpande (1) (2) (3), Anthony Espinoza (1) (2) (3), Alison Gray (1) (2) (3), Vasilis Kondilis (1), Doaa Elwadia (1) (2), Robba Rai (1) (2), Phillip Chlap (3), Lois Holloway (1) (2) (3)
A software system named AutoMRISimQA was developed to monitor the daily performance of a wide-bore 3T MRI scanner designed and dedicated to radiotherapy simulation. AutoMRISimQA monitors the performance of the MRI simulator not only by using image quality indices such as signal-to-noise ratio (SNR), uniformity, ghosting, and contrast, but also by quantitatively performing a quick check of geometric accuracy and of the external lasers. It was implemented into the daily clinical workflow in 2013 and has been used for more than 10 years. It was also seamlessly integrated with QATrack+, allowing continuous monitoring of the consistency of the MRI simulator's performance.
68
Analysis of a Decade's Worth of Oncology Information System Data From a Large US Healthcare System - Christian Velten (1) (2), Ping Yan (1) (2), Madhur Garg (1), Wolfgang Tomé (1) (2)
Over a decade of oncology information system data were analyzed to investigate practice patterns and changes with respect to the treatment planning process across several sites. Evaluated metrics included physician compliance with submitting target contours and prescriptions, the time available for treatment planning including quality assurance and physics checks, the overall evolution of the number of plans treated, and the treatment planning and physics check workload. Changes in practice patterns, e.g. the impact of the COVID-19 pandemic and its variation in the type of treatments performed at different sites, as well as the increasing number of patient plans treated with decreasing time available, were identified. Interventions, e.g. increased time allotments and the implementation of a treatment planning progress monitoring system, were also identified in the data. Leveraging big data can generate insights and drive proactive rather than reactive workflow changes.
69
NoPAUSE: Feasibility of a General-Purpose, Physician-Initiated Automatic VMAT Treatment Planning System for Curative Treatments. - Nick Chng (1), Stacy Miller (1), Robert Olson (1) (2), Maryam Golshan (1), Fred Cao (1), Quinn Matthews (1)
The Northern Plan Automation Services (NoPAUSE) project is a collection of in-house applications and scripts designed to improve efficiency and quality in radiotherapy treatment planning. The Treatment Planning Automation System (TPAS) is one of these services, and enables fully automatic VMAT plans to be produced within our Eclipse/ARIA environment. It supports a physician-initiated workflow that allows contouring, planning, and approval to occur in one encounter. This workflow is deployed at three centres within our institution for rapid-palliative VMAT. The purpose of this work is to investigate extending our system to curative protocols, beginning with prostate, rectum, and H&N. Retrospective clinical cases were used to tune the system. After tuning, a further five validation cases for each site were used to compare auto-plans to the manual clinical ones. For each site, TPAS produced clinically-acceptable plans that compared favorably to the matched retrospective control, and demonstrated comparable quality from a DVH perspective.
70
Fast Many-Scenario Robust IMPT Planning Through Constrained Dose-Mimicking - Franziska Knuth (1), Michelle Oud (1), Wens Kong (1), Hazem Nomer (1), Joep Van Genderingen (1), Linda Rossi (1), Steven Habraken (2) (3), Ben Heijmen (1), Sebastiaan Breedveld (1)
Patient setup variations and proton range uncertainties need to be taken into account in robust treatment planning for intensity modulated proton therapy (IMPT). In this study, a two-step approach is proposed that aims at fast generation of IMPT plans that are robust over many robustness scenarios. In the first step, automatic multi-criterial optimization (MCO) with Erasmus-iCycle is used to generate a plan that is robust for 13 scenarios. In the following dose-mimicking step, this plan is converted into a plan that takes 53 scenarios into account. The potential of this approach is demonstrated for 14 oropharyngeal cancer patients. Compared to the initial plans generated with the 13 scenarios, the converted plans show much improved target coverage robustness for the final 53 scenarios, similar to plans directly generated with 53-scenario-based automatic MCO. The two-step approach with dose mimicking was also a factor of 10.7 faster than direct automatic MCO, with dose-mimicking times of on average 1.4 min. The proposed dose mimicking is applicable to various treatment sites, providing a practical solution for enhancing IMPT plan robustness with significant time savings compared to direct robust optimization.
71
A Complete Portfolio for Automated Treatment Planning for Whole Breast Radiotherapy - Hana Baroudi, Leonard Che Fru (1), Deborah Schofield (1), Dominique Roniger (1), Donald Hancock (1), Kent Gifford (1), Tucker Netherton (1), Joshua Niedzielski (1), Adam Melancon (1), Manickam Muruganandham (1), Simona Shaitelman (1), Sanjay Shete (1), Melissa Mitchell (1), Laurence Court (1)
This study aims to develop automated 3D conformal radiation therapy planning for the treatment of the intact breast, considering individual patient factors and available resources. In total, three automated treatment approaches were created. These comprise conventional tangents for whole-breast treatment and two options for additional regional lymph node treatment: wide tangent photon fields with a supraclavicular node (SCLV) field, and regular photon tangents with a matched field to treat the internal mammary nodes plus a SCLV field. Each approach includes the choice of a single- or two-isocenter setup (with couch rotation) to account for the full range of patient sizes. All algorithms begin by generating automated contours for the breast CTV, regional lymph nodes, and organs at risk using in-house nnU-Net deep learning models. Treatment fields are then automatically generated, and gantry angles and field shapes are optimized, ensuring coverage of the target contours while limiting dose to nearby organs. The algorithms were integrated into the RayStation treatment planning system and then tested on 10 whole-breast patients for clinical acceptability. Evaluation criteria included ensuring adequate target coverage and adherence to dose constraints for normal structures. For the treatment of the breast alone, the algorithms successfully replicated clinical plans, meeting dosage goals for all 10 cases. For breast and regional lymph node treatment, 5 out of the 10 plans met all preferred objectives. For the remaining plans, axillary node coverage or lung or heart dose constraints were not met, indicating that a different treatment approach should be considered. This study showcases the feasibility of a comprehensive automated treatment planning model for whole breast radiotherapy.
72
DeepTuning: Tune Radiotherapy Plan Dose Distribution in Real Time by Deep Learning - Lin Ma (1), Huan Liu (2), Mingli Chen (3), Weiguo Lu (3)
Radiotherapy treatment planning is a collaborative effort: physicians inform dosimetrists of their desired trade-offs, while dosimetrists provide the best achievable plans. Multiple rounds of consultation between physicians and dosimetrists may be needed to tune plans and select one with the desired trade-off. To mitigate this back-and-forth, we propose a new framework called DeepTuning. It uses a deep learning model to predict a distribution of dose distributions with different trade-offs from contours. The predicted doses can be tuned in real time by manipulating the deepest layer of the model. We validated this method using a prostate VMAT dataset. The model can generate plans ranging from prioritizing PTV coverage to prioritizing rectum sparing. The DeepTuning framework has the potential to empower physicians to tune dose distributions immediately after contouring and choose a desired dose distribution. Since the trade-off selection occurs before dosimetry planning, the back-and-forth can be minimized, improving the efficiency of the planning workflow.
73
An Open Source Multi-Criteria Optimization Framework for Radiotherapy - Tobias Becher (1) (2) (3), Amit Ben Antony Bennan (1) (2), Remo Cristoforetti (1) (2) (3), Niklas Wahl (1) (2)
We present an implementation of a Pareto plan navigation framework in the open-source treatment planning toolkit matRad. It expands on the existing weighted-sum approach and allows for the calculation and navigation of Pareto surfaces. Furthermore, we introduce a lexicographic approach as an alternative to the weighted-sum method.
74
Automating Physics Plan Checks With Monaco Scripting - Hill Paul (1)
The Scripting Application Programming Interface (API) of the Monaco Treatment Planning System (TPS) allows code to be written which accesses many of the capabilities of Monaco and much of the data of the treatment plans. The API utilizes User Interface (UI) automation to provide access to Monaco functionality, and many of the functions of the Monaco UI are available through the API. A script has been created to automate a number of elements of the physics check of treatment plans. It performs checks on the validity of the dose calculation parameters, the treatment beam parameters, the details of the treatment couch assignment, and the values of the inverse planning optimizer inputs. The script is in clinical use and is executed by the dosimetrists prior to plan approval by the radiation oncologist. It has led to efficiency improvements in treatment planning, as defects can be identified and rectified early in the workflow.
75
Zero Click Treatment Preparation: From CT to Plan - Geert Wortel (1), Rita Simões (1), Johan Trinks (1), Peter Remeijer (1), Eugène Damen (1), Mark Gooding (2), Tomas Janssen (1)
Automation of segmentation and treatment planning is an important development in radiotherapy. While this automation can save time, the auxiliary workflow steps are still time-consuming and typically involve multiple hand-overs between personnel, hampering an efficient process. To create a fully automated workflow, segmentation and planning need to be embedded within an automation framework. In this work we describe a fully automated workflow covering the complete treatment preparation chain from CT to RTPlan export. All key components of this workflow are currently in clinical use. Starting from the CT, auto-segmentation is performed by Mirada WorkflowBox. The CT and resulting structure set are sent to Pinnacle, which, in combination with our automation framework FAST, creates and exports the RTPlan and Dose. We discuss the implications of full automation of all auxiliary steps on the architecture of the workflow.
76
Optimization in Radiotherapy Integrating a Multidisciplinary Framework With GEANT4 - Elisabetta Sergi (1), Mara Severgnini (2), Enrico Rigoni (1), Francesco Longo (3) (4)
Radiotherapy (RT) is a complex treatment that requires optimization to deliver the right dose to the target while minimizing side effects. This work proposes a novel approach that combines optimization techniques from other engineering fields with Monte Carlo (MC) simulations to optimize RT plans. The coupling of GEANT4, an MC simulation toolkit, with modeFRONTIER, an optimization platform, enabled the efficient identification of optimal linac parameters for tumor treatment. This approach demonstrated improved accuracy and personalized treatment plans for RT, offering promising advancements in medical physics.
77
Development of Bolus Optimization System for Electron Conformal Therapy - Sang-Won Kang (1), Woong Cho (2), Dongsan Kang (3), Moo-Jae Han (1), Jin-Beom Ching (1)
The aim of this study is to develop a bolus optimization system for electron conformal therapy (ECT). To build the optimization system, an electron dose calculation engine and an optimization algorithm are required. Thus, Hogstrom's dose calculation algorithm and a backtracking line search were implemented and applied in the optimization system. The accuracy of the developed dose calculation algorithm was verified by a 94.97% passing rate in a gamma evaluation against a commercially available program. When evaluating the dose along the target line with the optimized bolus applied, the average dose and root mean square error were 95.23% and 15.17, respectively. In this study, we evaluated the developed bolus optimization system for ECT. Although some limitations remain, the system has shown the feasibility of applying the optimization approach in clinical practice. In a future study, we will improve the performance of the system and incorporate 3D printing technology into it.
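For readers unfamiliar with the backtracking line search named above, here is a generic sketch of the Armijo backtracking rule driving a simple gradient descent. The toy quadratic objective stands in for the real system, which would instead evaluate the electron dose calculation for a candidate bolus; it is not the authors' implementation.

```python
# Generic backtracking (Armijo) line search applied to a toy bolus-thickness objective.
import numpy as np

def backtracking_line_search(f, grad, x, direction, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo sufficient-decrease condition holds."""
    alpha = alpha0
    fx, gx = f(x), grad(x)
    while f(x + alpha * direction) > fx + c * alpha * gx.dot(direction):
        alpha *= rho
    return alpha

# Toy objective: squared deviation of a surrogate "dose" from a target value.
target = np.full(5, 95.0)
f = lambda t: np.sum((100.0 - t - target) ** 2)      # t = bolus thickness vector
grad = lambda t: -2.0 * (100.0 - t - target)

t = np.zeros(5)
for _ in range(50):                                   # simple gradient descent loop
    d = -grad(t)
    step = backtracking_line_search(f, grad, t, d)
    t = t + step * d
print("optimized thicknesses:", np.round(t, 2))
```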
78
A Personalized Automatic Radiotherapy Plan Scoring Mechanism Based on Multi-Organ Individually-Objective Optimization - Zirong Li (1), Zhenjiang Li (2), X. Sharon Qi (3), Kuo Li (2), Huashan Sheng (1), Zhigang Yuan (1), Yong Yin (2), Qichao Zhou (1)
Radiation therapy plan optimization is a time-consuming and very complex process, and may not always result in an ultimately optimal solution. It is difficult to uniformly define what constitutes a high-quality plan when multiple complex standards need to be balanced. We propose an individualized, automatic, and objective plan evaluation method based on multi-organ individually-objective optimization (MIO). The proposed plan evaluation method requires no manual intervention and captures overall plan quality using a plan scoring system. The scoring system is constructed based on the achievable dose limit for the relevant OARs. In double-blind comparative experiments, the consistency between the results of the manual evaluation and the automatic plan evaluation system was 0.963. The proposed mechanism significantly reduces manual review time: the MIO optimization takes an average of 33 seconds, and the evaluation process takes an average of 17 seconds.
79
A Python Script to Analyse Treatment Planning Throughput - Ezhlalan Ramalingam (1)
The aim of this study is to estimate radiotherapy treatment planning throughput from data extracted from our Oncology Information System (Mosaiq, OIS). We have developed an in-house Python script to analyse the data exported from the OIS. The Quality Check List (QCL) data captured by our treatment planning team between April 2021 and March 2022 and from April 2022 to March 2023 were used in the analysis. The QCL data include, but are not limited to, planning categories such as radical, palliative, and re-planning. The OIS data were exported in an Excel file format, preprocessed to eliminate extraneous information, and saved as a Comma Separated Value (CSV) file. Tasks like image fusion, contouring, and the reconstruction of previous treatments are inherent components of the planning process and have been excluded from the analysis. Measures of central tendency and dispersion, such as the mean, median, mode, and standard deviation, have been documented for the daily, weekly, and monthly treatment planning throughput. While the throughput distributions of individual planners deviate from normality, the collective probability distributions of all planners in the study display nearly normal (symmetrical) distributions. The selection of the specific time period (April to March) for analysis aligns with our center's leave calendar, ensuring the inclusion of annual leave taken by the planners involved in the study. From April 2022 to March 2023, an average of 3.9 planners generated a mean of 9.5 treatment plans per day. Similarly, the mean number of treatment plans produced per week by an average of 6.8 planners was 45.6, and the mean number produced per month by an average of 7.9 planners was 197.7. With a total of 249 working days, averaging 4.8 days per week, the daily throughput can be approximately scaled to weekly (9.5 × 4.8 = 45.6), and the weekly figure can be approximately scaled to monthly (45.6 × 4.33 = 197.5). From April 2021 to March 2022, an average of 3.9 planners generated a mean of 7.7 treatment plans per day. Similarly, the mean number of treatment plans produced per week by an average of 7.3 planners was 35.9, and the mean number produced per month by an average of 8.8 planners was 146.8. With a total of 230 working days (with approximately 3 weeks lost due to a cyber-attack), averaging 4.7 days per week, the daily throughput can be approximately scaled to weekly, and the weekly figure to monthly, as outlined earlier. Due to variations in individual planners' work patterns, their throughput also needs to be analysed independently.
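The sketch below illustrates the kind of pandas analysis such a script performs on the QCL export: filtering out non-planning tasks and summarizing daily, weekly and monthly plan counts. The column names ("completed", "category") and filter values are hypothetical placeholders for the actual Mosaiq export fields.

```python
# Throughput summary from a QCL CSV export (hypothetical column names).
import pandas as pd

qcl = pd.read_csv("qcl_export.csv", parse_dates=["completed"])
qcl = qcl[~qcl["category"].isin(["image fusion", "contouring", "reconstruction"])]

daily = qcl.groupby(qcl["completed"].dt.date).size()
weekly = qcl.groupby(qcl["completed"].dt.to_period("W")).size()
monthly = qcl.groupby(qcl["completed"].dt.to_period("M")).size()

for name, series in [("daily", daily), ("weekly", weekly), ("monthly", monthly)]:
    print(f"{name}: mean={series.mean():.1f}, median={series.median():.1f}, "
          f"std={series.std():.1f}")
```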
80
Dynamic Collimator Rotation Treatment Plan Optimization for O-Ring Treatment Systems - Fix Michael (1), Werner Volken (1), Daniel Frei (1), Silvan Mueller (1), Gian Guyer (1), Daniel Schmidhalter (1), Peter Manser (1)
In intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT), static collimator angles are used per field or arc. While dynamic collimator rotation during beam-on has been investigated for C-arm linear accelerators, it has not been investigated for O-ring ETHOS/Halcyon treatment systems. This work aims at developing an optimization framework for treatment plans with dynamic collimator rotation during beam-on for ETHOS/Halcyon systems. A previously developed in-house hybrid direct aperture optimization algorithm was extended, first with a Monte Carlo (MC) model of the ETHOS/Halcyon treatment head including the double-stack multi-leaf collimator to calculate MC beamlet dose distributions, and second with options for dynamic collimator rotation optimization for IMRT and VMAT. The framework was tested on a clinically motivated case, showing improved dose sparing of organs at risk using dynamic collimator rotation compared with static collimator angle treatment plans, while target coverage remained the same.
81
Development of Dynamic Collimator Mixed Beam Radiotherapy to Explore the Dosimetric Benefits of Non-Coplanar and Mixed Photon-Electron Beam Setups - Chengchen Zhu (1), Gian Guyer (1), Jenny Bertholet (1), Silvan Mueller (1), Hannes A Loebner (1), Marco F M Stampanoni (2), Michael K Fix (1), Peter Manser (1)
Dynamic collimator mixed beam radiotherapy (colli-DYMBER) is a novel treatment technique that uses non-coplanar table angles, dynamic gantry and collimator rotation, mixed photon-electron beams/arcs, and intensity modulation. The aim of this work is to solve the beam angle optimization (BAO) problem dosimetrically using a direct aperture optimization (DAO)-based pathfinding strategy that iteratively adds promising apertures to form paths. We show how the developed dosimetrically motivated pathfinding strategy can be used to investigate the dosimetric benefit of using non-coplanar beam setups and/or mixed photon-electron modality by comparing colli-DYMBER plans with non-coplanar photon-only plans (dynamic collimator trajectory radiotherapy, colli-DTRT), coplanar mixed beam plans (coplanar-DYMBER), and coplanar photon-only plans (coplanar-VMAT) for three clinically motivated cases. The dosimetric benefit observed by incorporating non-coplanar/mixed beam setups varied for the considered cases.
82
Dosimetry for Head and Neck Radiotherapy Using Personalized Quantitative MRI - Laura Sayaque (1), Benjamin Leporq (2), Charlène Bouyer (3) (4), Frank Pilleul (1) (4), Vincent Gregoire (3), Olivier Beuf (1)
Radiotherapy for head and neck cancers is challenging due to the complexity and heterogeneity of the anatomy. In order to improve this process and make it more accurate, MRI-based planning has been assessed in several studies. However, compared to the brain and prostate, the head and neck area may show anatomical and tumor-type variations from one patient to another, leading to limitations in existing solutions, which are mostly based on learning. To address this issue, a direct physics-based method is proposed in this work to obtain quantitative information, through the proton density, which would lead to more personalized treatment. This method uses an ultra-short echo time sequence to measure the MR signal almost without any decay. The dosimetry on a reference CT and on the synthetic CT (sCT) generated with this method were compared for nasopharyngeal, oropharyngeal, laryngeal, and oral cavity squamous cell carcinoma patients treated by radiotherapy. A 125 HU MAE was obtained between the CT and the sCT, and a dose difference of 1.5% of the 70 Gy prescription dose was computed as the mean of the planning target volumes' median dose. The maximum dose for several organs at risk showed a dose difference below 1% for 5 out of 6 organs. The mean global gamma index for a 3%/3mm criterion was 0.21 ± 0.05 with a 93 ± 2.8% gamma pass rate. The proton density method made it possible to generate a synthetic CT with only one dedicated sequence, giving results comparable to CT-based dosimetry and to the literature for head and neck radiotherapy.
83
Online Adaptive Proton Therapy for Esophageal Cancer: Adaptation Rate and Automated Strategies Comparison - Camille Draguet (1) (2), Pieter Populaire (2) (3), Macarena Chocan Vera (1), Karin Haustermans (2) (3), John A. Lee (1), Edmond Sterpin (1) (2), Ana Barragan-Montero (1)
Adaptive radiotherapy shows potential in enhancing patient outcomes through the acquisition of longitudinal images during treatment, facilitating the management of anatomical changes that occur during treatment. Given the resource and time cost of adaptive treatment, careful consideration is given to the selection of patients who might benefit from it. The goal of this work is twofold. First, it investigates the sensitivity of the proton therapy adaptation rate to various residual setup errors in patients with esophageal cancer. Second, four different fully automatic strategies for online adaptive planning are implemented and compared. These strategies combine deformable registration and deep learning based methods, both for contouring and planning. Our preliminary results, on a single patient, serve as a proof of concept for the feasibility of the presented planning strategies, which are compared to a fully manual approach in terms of timing and target coverage (D98%).
84
IOeRT Conventional and FLASH TPS Implementation Exploiting Fast GPU Monte Carlo: the Case of Breast Cancer - Gaia Franciosini (1) (2), Angelica De Gregorio (2) (3), Massimo Di Francesco (4), Fabio Di Martino (5) (6) (7) (8), Giuseppe Felici (4), Jake Harold Pensavalle (4) (5) (9), Michela Marafini (2) (10), Annalisa Muscato (2) (11), Vincenzo Patera (1) (2), Philip Poortmans (12) (13), Adalberto Sciubba (14), Angelo Schiavi (1) (2), Marco Toppi (1) (2), Giacomo Traini (2), Antonio Trigilio (3) (14), Alessio Sarti (1) (2)
Breast cancer is widely treated with IOeRT (Intra-Operative electron RadioTherapy) in the case of partial breast irradiation treatments; one of the main limitations of this technique is the absence of a Treatment Planning System (TPS). In this contribution, we present an online IOeRT TPS developed using a custom fast GPU-based Monte Carlo code and an ultrasound imaging system. This tool aims to provide the best irradiation strategy (electron beam energy, applicator position, and bevel angle) within the limited amount of time available during surgery, and to facilitate the optimization of dose prescription and delivery to the target volume. Ultrasound-based input was employed to compute dose maps for different irradiation strategies, and a quantitative comparison was performed. The system demonstrated the capability to identify the optimal strategy within a few minutes. The impact of ultra-high dose rate irradiations was modelled as well, integrating the benefits of minimally invasive surgery techniques.
85
AI-Guided Unattended Plan Generation - Christian Velten (1) (2), Michael Bowers (3), William Martin (1), Madhur Garg (1), Julie Shade (3), Todd McNutt (3) (4), Wolfgang Tomé (1) (5)
Dose volume histogram predictions from a custom random forest ensemble model were integrated into an automated heuristic planning engine for unattended plan generation. Optimization objectives are derived from the prediction of the 5% quantile doses at given volumes. Two optimization iterations were performed separated by an intermediate dose calculation. IMRT and VMAT plans were generated in batch mode for fifty previously treated prostate cancer patients and compared to the clinically approved and treated plans. Doses covering large portions of bladder and rectum were substantially lower compared to reference, with 9 Gy and 7 Gy reductions in D80%, respectively. Prescription dose coverage of target volumes was not compromised, and homogeneity was comparable. Combining personalized dose predictions with an automated planning engine has the potential to efficiently generate clinically acceptable patient-specific treatment plans in a standardized way. This would decrease planning staff workloads and have the potential to increase radiotherapy access in low- and middle-income countries, and underserved areas.
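To make the quantile-based objective derivation concrete, the sketch below shows how per-member DVH predictions from an ensemble could be reduced to a 5% quantile dose used as an optimization objective. The structure name, dose-volume metric and numbers are hypothetical placeholders, not the authors' model outputs.

```python
# Turning ensemble DVH predictions into an optimization objective value.
import numpy as np

# Hypothetical predicted D80% (Gy) for the bladder from each member of the ensemble.
ensemble_d80_bladder = np.random.normal(loc=30.0, scale=3.0, size=200)

# 5% quantile: a dose level that 95% of the ensemble members predict is achievable.
objective_d80 = np.quantile(ensemble_d80_bladder, 0.05)
print(f"Bladder D80% objective: {objective_d80:.1f} Gy")
```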
86
Exploring VHEE Treatment Efficacy Through In-Silico Investigations in Conventional and FLASH Irradiation Regimes - Annalisa Muscato (1) (2), Angelica De Gregorio (2) (3), Daniele Carlotti (3) (4), Michele Fiore (4), Gaia Franciosini (1) (2), Teresa Insero (4), Valerio Marè (4), Sara Ramella (4), Alessio Sarti (1) (2), Angelo Schiavi (1) (2), Marco Schwarz (5), Vincenzo Patera (1) (2)
Radiotherapy (RT) is crucial for treating deep-seated tumors; options include photon RT, particle therapy (PT) with protons or heavier ions like 12C. Very High Energy Electron (VHEE) beams (100-200 MeV) were proposed for deep tumors, but clinical implementation faced challenges. Recent advancements in compact C-band electron acceleration and FLASH radiotherapy have renewed the interest in this option. FLASH, demonstrated in pre-clinical studies, reduces healthy tissue toxicity while maintaining cancer-killing efficacy whenever the dose rates are radically increased (~40 Gy/s) with respect to current conventional radiotherapy (~0.1 Gy/s or less). This study explores the potential impact of electron beam therapy for the treatment of deep-seated tumors focusing on the stereotactic treatment of pancreatic cancer. Results are presented and compared against conventional radiotherapy performed with a VMAT approach.
87
Impact of Artificial Intelligence Solution Aiming to Accelerate MRI Acquisition on the Therapeutic Care in Oncology in Radiology and Radiotherapy Departments - Raphaëlle Lemaire (1), Charlotte Raboutet (2), Thomas Leleu (3), Cyril Jaudet (1), Loïse Dessoude (3), Ferdinand Missohou (3), Yoann Poirier (2), Pierre-Yves Deslandes (4), Joëlle Lacroix (2), Ilyass Moummad (5), Stéphane Bardet (6), Juliette Thariat (3), Dinu Stefan (3), Aurélien Corroyer-Dulmont (1) (7)
The objective of this prospective study was to evaluate, in routine clinical conditions, the impact of a commercially available solution (SubtleMR®) that halves the acquisition time on the detectability of brain lesions in radiology and radiotherapy. Methods: T1 MRIs of 33 patients with brain metastases were analyzed. Quickly acquired images have an acquisition matrix divided by two, which halves the acquisition time. The visual quality and lesion detectability of the AI images were evaluated by radiologists and radiation oncologists, together with pixel intensity and lesion size. Results: Analysis of lesion detectability shows a specificity of 1 and a sensitivity of 0.92 and 0.77 for radiology and radiotherapy, respectively. Lesions undetected on the AI images had a diameter
88
Reconstruction Toolkit (RTK) V2, an Insight Toolkit (ITK) Module for Tomographic Reconstruction - Simon Rit (1), Sébastien Brousmiche (2), Julien Finet (3), Gregory C. Sharp (4), Philipp Steininger (5)
Since its first presentation at ICCR 2013, the Reconstruction Toolkit (RTK) has become a reference software for tomographic reconstruction of computed tomography (CT) images. In this period, RTK has become a remote module of the Insight Toolkit (ITK), thus enabling improved continuous integration and Python wrapping. New functionalities have also been developed, including simulation of Forbild phantoms, handling of cylindrical detectors and new iterative reconstruction algorithms for conventional CT, single photon emission CT (SPECT), four-dimensional (4D) CT and spectral CT. This article summarizes these eleven years of developments to today's RTK v2.
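The Python wrapping mentioned above can be used as in the minimal sketch below: defining a circular cone-beam geometry and running an FDK reconstruction of a projection stack through RTK's ITK interface. The file names, geometry parameters and output grid are placeholders; the RTK example scripts should be consulted for a complete, validated pipeline.

```python
# Minimal RTK-through-ITK sketch: circular geometry + FDK reconstruction.
import itk
from itk import RTK as rtk

ImageType = itk.Image[itk.F, 3]

# Circular trajectory: source-to-isocenter 1000 mm, source-to-detector 1536 mm (assumed).
geometry = rtk.ThreeDCircularProjectionGeometry.New()
n_proj = 360
for i in range(n_proj):
    geometry.AddProjection(1000.0, 1536.0, i * 360.0 / n_proj)

projections = itk.imread("projections.mha", itk.F)   # stack of cone-beam projections

# Empty output volume that FDK will fill in.
constant = rtk.ConstantImageSource[ImageType].New()
constant.SetSize([256, 256, 256])
constant.SetSpacing([1.0, 1.0, 1.0])
constant.SetOrigin([-127.5, -127.5, -127.5])
constant.SetConstant(0.0)

fdk = rtk.FDKConeBeamReconstructionFilter[ImageType].New()
fdk.SetInput(0, constant.GetOutput())
fdk.SetInput(1, projections)
fdk.SetGeometry(geometry)
fdk.Update()

itk.imwrite(fdk.GetOutput(), "fdk_reconstruction.mha")
```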
89
The Automatic Body Composition (ABC) Toolkit - Donal McSweeney (1) (2), Craig Jones (1) (2), Michael Brown (1), Ashwin Sachdeva (1) (2), Noel Clarke (2), Marcel Van Herk (1) (2), Alan McWilliam (1) (2)
Body composition, muscle mass in particular, is an established prognostic factor in oncology as it provides complementary information to performance status about patient fitness. Manual assessment is time-consuming, so current practice does not use information about body composition, even though it could be extracted from routine CT or MRI scans. Instead, clinical practice still relies on broad and subjective criteria, like performance status and BMI, for patient stratification and monitoring. To facilitate large-scale research and promote eventual integration into routine clinical care, we have developed the Automatic Body Composition (ABC) Toolkit. ABC was designed with flexibility in mind; it provides a browser-based front end for use by those without programming experience and exposes a set of APIs for simple integration into research and clinical workflows. It includes AI-based segmentation and efficient quality assurance. ABC is currently available for collaborative projects upon request.
90
Validation of an Open Source Contouring Tool - Haley Clark (1)
While radiotherapy increases in sophistication, and software complexity continues to grow, there remains a lack of validated open source software for routine radiotherapy tasks. This work presents a validation of the geometric accuracy, positional precision, and interoperability of the open source DICOMautomaton manual contouring subsystem. Using 2D and 3D contour arrangements, we demonstrate that DICOMautomaton is sufficiently accurate for radiotherapy purposes.
91
OSIRIS-RT: a Minimum Data Set for Data Sharing and Interoperability in Radiation Oncology - Vincent Le Texier (1), Thomas Baudier (2), Sébastien Guihard (3), Thomas Lacornerie (4), David Pasquier (5), Charlotte Robert (6)
Purpose: Radiation oncology is a highly specialized field requiring an accurate and clinically relevant treatment summary. There is a clear need for standardization of radiation therapy data to facilitate care coordination. Methods: For a year, the OSIRIS (Interoperability and data sharing of clinical and biological data in oncology) group based in France has worked on structuring radiotherapy data and identifying technical solutions for collecting and sharing them. The group used a multidisciplinary approach that included weekly scientific and technical meetings to foster a national consensus on a minimal data set. Results: The resulting set and event-based data model, which is able to capture radiation therapy treatments, was built with 32 clinical and 60 imaging items. The group made it compatible with the HL7 Fast Healthcare Interoperability Resources (FHIR) format to maximize interoperability. The OSIRIS set was reviewed, approved by a National Plan Strategic Committee, and freely released to the community. A proof-of-concept study is underway in France to put the OSIRIS set and Common Data Model into practice using a national, multi-centric cohort of 10,000 patients. Conclusion: Using a national and bottom-up approach, the OSIRIS group has defined a model including a minimal set of radiation therapy data that can be used to accelerate data sharing. The model relies on clear and formally defined terminologies and, as such, may also benefit the larger radiotherapy community.
92
Harmonizing Multicentric T1-Weighted Magnetic Resonance Imaging of Children With Ependymomas: Can Deep Learning Based Image Synthesis Help? - Soumia Bengoufa (1), Fatima Tensaouti (2) (3), Pedro Juan Soto Vega (1), Stéphanie Bolle (4), Alexandre Escande (5), Xavier Muracciole (6), Claire Alapetite (7), Line Claude (8), Julie Leseur (9), Aymeri Huchet (10), Jérôme Doyen (11), Georges Noel (12), Stéphane Supiot (13), Valerie Bernier-Chastagner (14), Julien Welmant (15), Pierre Leblond (16), Dimitris Visvikis (17), Vincent Jaouen (17), Anne Laprie (2) (3), Mathieu Hatt (17)
This work was carried out within the Ependymomics project, focusing on the analysis of multi-omics data of children and teenagers with ependymoma. In this paper, we present preliminary results of the harmonization of multicentric brain T1 MRIs from 14 sites in France. We aimed to mitigate the heterogeneity of image properties using an unsupervised deep learning GAN-based CUT model. Using T1-weighted MRI (n=217 subjects), we tested the model (i) using different references, and (ii) with a single or multiple slices as input references. Qualitative evaluation of the harmonized images, as well as quantitative image metrics, shows that the method efficiently reduced the inherent variability of the images, with before and after variance values of 2.81e11 and 1.15e3, respectively. Our next step will be to evaluate the impact of these harmonization strategies on tumor-derived radiomics, and the resulting improvements in the performance of radiomics models predicting patient outcomes (progression-free survival).
93
Towards a Fully Automatic Workflow for Investigating the Dynamics of Lung Cancer Cachexia During Radiotherapy Using Cone Beam Computed Tomography - Lars Daenen (1) (2), Wouter Van De Worp (3), Behzad Rezaeifar (4), Joël De Bruijn (5), Peiyu Qiu (3), Justine Webster (3), Ramon Langen (3), Frank Verhaegen (4), Cecile Wolfs (4)
Cachexia is a devastating condition, often occurring during cancer, that diminishes cancer treatment effects and increases mortality. Currently, there are no methods to accurately monitor cachexia during cancer treatment. Therefore, we present a proof-of-concept for monitoring cachexia based on cone beam computed tomography (CBCT) images acquired during radiotherapy treatment. Data from 140 stage III non-small cell lung cancer patients were used. Two deep learning models, CycleGAN and CUT, were used for unpaired training of CBCT-to-CT conversion. A U-Net was used for automatic pectoralis muscle segmentation. Both CycleGAN and CUT restored Hounsfield unit fidelity and reduced artefacts in CBCT, with CUT displaying the best performance. The U-Net model achieved a DSC of 0.88 on a hold-out test set. This study shows a proof-of-concept for automatic monitoring of the pectoralis muscle area during radiotherapy treatment based on CBCT, which can serve as a surrogate for cachexia progression.
94
Performance Evaluation of a Pix2Pix Synthetic X-Ray Image Generator for Preclinical Applications - Vasileios Eleftheriadis (1)
In this work, we evaluate a pix2pix conditional generative adversarial network as a potential solution for generating adequately accurate synthesized morphological X-ray images by translating standard photographic images of mice. Continuing our previous work, we expanded our dataset to 940 paired photographic/X-ray mice images and used data augmentation and Transfer Learning techniques to improve the performance and generalization of our model. We explore how the expansion of the training dataset affects the models' performance and use five image quality metrics to quantitatively evaluate the models' performance in the present dataset and compare the results to visual inspection by experts to assess the metrics' ability to evaluate image-to-image translation tasks. In conclusion, a combination of metrics is essential for generated image quality evaluation, and we propose an ensemble of the used metrics as an essential methodological step to evaluate the performance of image-to-image deep learning models.
95
Enhancing CBCT Image Quality for Radiotherapy Using Multi-Channel CycleGAN for CBCT-to-CT Synthesis - Chelsea Sargeant (1), Robert Chuter (1) (2), Jane Shortall (1), Andrew Green (3), Jacqui Bridge (2), Phil Fendall Amaro (2), Rachael Bailey (2), Alan McWilliam (1)
Deep learning-based image synthesis techniques, like cycle-consistent adversarial networks (cycleGANs), have improved the image quality of cone-beam computed tomography (CBCT) scans for radiotherapy by generating synthetic CTs (sCTs). By leveraging multi-channel input to provide the cycleGAN with a broader range of information during training, our approach aims to preserve soft-tissue contrast and reduce artefacts. The results demonstrate the successful generation of high-fidelity sCT images, with multi-channel models outperforming single-channel input across various image similarity metrics. Furthermore, our approach demonstrates an ability to generalise to an unseen test dataset of CBCT scans acquired from prostate cancer patients. A qualitative assessment, conducted by three expert radiographers, shows promising results in terms of overall image quality and artefact reduction and offers valuable insight into the clinical utility of sCTs. We emphasise the need for standardized qualitative assessments and downstream task evaluations to ensure robust insights.
96
Fast and Automated Dose Verification Procedure for Reinforced Online Adaptive MRI-Guided RT - Teo Stanescu (1)
This work presents a dose verification procedure tailored for online adaptive MRI-guided RT workflows specific to the Unity MR-Linac system. The methodology was implemented as a fully automated pipeline. The pipeline input consisted of adapt-to-shape (ATS) plan and 3D image data acquired for treatment verification (VER) and data collected during beam-on (BON). The automated analysis relies on deformable image registration (DIR) between ATS and VER/BON images to propagate contours and perform dosimetric analysis. 100 treatment fractions were investigated for liver, pancreas, oligometastases and lung SBRT cancer patients. 620 DIR-mapped contours were evaluated by expert reviewers, and 98% of them were found to be clinically acceptable. The results showed a significant correlation between DICE values and intra-fraction motion. The 3D dose verification tool on VER/BON datasets detected significant dosimetric changes which were not apparent during the online treatment sessions. This analysis holds potential as a valuable tool for assessing plan quality.
97
Evaluating Treatment Delivery Performance of SCINTIX Biology-Guided Radiotherapy Under Non-Periodic In-Treatment Motions - Chenyang Shen (1), Thomas Banks, Sagar Ghimire, Yang Kyun Park, Andrew Godley, Steve Jiang, Bin Cai
The RefleXion SCINTIX system uniquely enables novel biology-guided radiotherapy (BgRT), providing real-time treatment guidance via on-board positron emission tomography (PET). By directly targeting tumor response via PET, SCINTIX has the potential to substantially enhance the accuracy and efficacy of radiotherapy. This technology was only recently released for clinical usage, however, so its performance when dealing with in-treatment patient motion has not been fully explored. In this paper we report a novel study to investigate the performance of SCINTIX under discrete motions using an in-house developed fluorodeoxyglucose phantom. Ion chamber and film were used to measure point dose and 2D dose, respectively, under various scenarios of different discrete motions. The results show good agreement between delivered dose and planned dose, illustrating the efficacy of SCINTIX in resolving discrete motions during treatment delivery.
98
Optimizing the Contrast to Noise Ratio for Tumor Tracking in the Lung Using Dual Energy Imaging Strategies for Soft Tissue Enhancement - Nawal Alqethami (1), Wentao Xie (1), Prasannakumar Palaniappan (1), Felix Ginzinger (2), Philipp Steininger (2), Marco Riboldi (1)
This study optimizes soft-tissue contrast in x-ray-based imaging for enhanced lung tumor tracking, using Dual Energy (DE) imaging with fast energy switching. DE cone-beam CT projections were acquired at (80-120 kV) and (60-120 kV), experimenting with different energy gaps. Tumor inserts of varied materials underwent deformable registration to correct frame misalignment, followed by weighted logarithmic subtraction (WLS) using multiple weighting factors and various image filters to reduce noise, including a median filter (MF) at different sizes and anti-correlated noise reduction (ACNR). Results indicated superior performance at (60-120 kV) for bone suppression and soft-tissue enhancement. Optimal weighting factors were identified as (0.75-0.8) for (80-120 kV) and (0.65-0.7) for (60-120 kV). Smaller energy ranges required smaller MF sizes (11 pixels for 80-120 kV and 9 pixels for 60-120 kV), with ACNR+MF achieving optimal noise suppression. This study demonstrates the efficacy of DE imaging and WLS for enhanced lung tumor tracking capabilities.
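For reference, the weighted logarithmic subtraction applied above follows the standard dual-energy form, in which a bone-suppressed (soft-tissue) image is obtained from the high- and low-energy projections with a weighting factor w (the optimal values quoted in the abstract):

```latex
I_{\mathrm{WLS}} = \ln\!\left(I_{H}\right) - w\,\ln\!\left(I_{L}\right)
```

where $I_{H}$ and $I_{L}$ are the high- and low-kV projection intensities and $w$ is tuned to cancel the bone signal.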
99
Detection and Correction of Rigid Patient Motion in SPECT With Exponential Data Consistency Conditions - My Hoang Hoa Bui (1), Antoine Robert (1), Simon Rit (1), Etxebeste Ane (1)
Targeted radionuclide therapy has gained an increasingly important role in cancer treatment in recent years, and molecular imaging techniques such as SPECT can be employed to plan and monitor the treatment. However, rigid patient motion degrades the quality of SPECT images and may impact treatment planning. It has recently been proposed to use exponential data consistency conditions — equations describing the redundancy of exponential SPECT measurements — to detect and correct for patient motion. This study aimed at developing a data consistency-based method to estimate and correct for rigid patient motion during SPECT acquisitions. The proposed method was evaluated on Monte Carlo simulated data of a liver patient. The method led to a significant improvement in image quality. An activity recovery coefficient above 95% with respect to images reconstructed from motion-free projections was obtained across all tested cases. In particular, for a motion in the middle of the acquisition, the activity recovery coefficient improved from 66% (non-corrected projections) to 99% (motion-corrected projections).
100
Performance Evaluation of Virtual Laser-Based Surface-Guided Radiation Therapy Setup System Using Humanoid Head Phantom - Jeong-Woo Lee (1) (2), Byoung-Moon Park (1), Yong-Ki Bae (1), Min-Young Kang (1), Semie Hong (1) (2)
We aimed to verify patient setup accuracy using a combination of virtual laser-based SGRT systems (LUNA 3D, LAP, Germany). The experiments were divided into (1) skin-surface guidance with the virtual laser and (2) mold-surface guidance with the optical laser. The humanoid head phantom setups were analyzed in terms of vertical, lateral, and longitudinal displacements (mm) and rotation (yaw, in degrees). In the virtual laser-based SGRT results, the displacement in the lateral direction was less than 0.5 mm, the displacement in the longitudinal direction was more pronounced, and the vertical direction showed a difference of about 1 mm. The mold-based optical laser system showed vertical and longitudinal errors of up to 2 mm. An SGRT system with a virtual laser can provide accurate alignment information over a large area as well as the area of interest defined before treatment, supporting treatment accuracy and a stable treatment setup. Keywords: SGRT, virtual laser, optical laser, head phantom
101
Preliminary Study on Residual Motion Evaluation During Breath Hold SBRT - Weihua Mao (1), Amol Narang (1)
Breath hold has been widely used to limit patient respiratory motion during radiation therapy; however, residual motion has been reported by many groups, making it essential to assess these daily residual motions. At our institution, fiducial markers are implanted to locate the target for pancreatic cancer treatment. A new method is presented to locate fiducial positions on every projection image during cone-beam computed tomography (CBCT) scans for patient setup or positioning verification. Data from three pancreatic cancer patients were studied retrospectively. Every patient exhibited residual motion during breath hold maneuvers, with an average drifting range of 6.5 mm observed across 34 CBCT scans. Residual motion within a single CBCT scan ranged up to 10.6 mm, and significant drifting occurred between breath holds. This method does not need any additional equipment or deliver any extra radiation dose to patients. It provides important information on daily residual motions and is invaluable for planning margin adjustments towards online adaptive radiation therapy.
102
Preliminary Evaluation of a Software Application for Target Tracking Without Fiducials During Cyberknife Treatment of Pancreas - James Bedford (1), Simeon Nill (1), Uwe Oelfke (1)
This study aims to visualize and track the pancreatic planning target volume (PTV) without fiducials, based on orthogonal Cyberknife kV images, using the in-house software CKImage v1.1. Rigid registration is used to determine the offset of the PTV projected onto the image planes from that defined on digitally reconstructed radiographs. The method is applied to a high-contrast test object to verify the performance of the software, and also retrospectively applied to four patients with pancreatic cancer. The median accuracy of tracking with CKImage for the 177 image pairs acquired of the high-contrast test object is 0.8 mm (range 0.0 – 21.8 mm). The median accuracy of tracking for the 1249 image pairs acquired of the pancreas patients is 4.1 mm (range 0.2 – 16.1 mm). There is much scope for improvement in the method, but the novel imaging approach shows promise for minimally invasive treatment of pancreas cancer.
103
Super Resolution Dose Calculation Via Implicit Neural Representation Based Blind Deconvolution - Siqi Ye (1), Sheng Liu (1), Yu Gao (1), Lei Xing (1)
In radiation therapy, achieving high-resolution (HR) dose distributions is crucial but time-intensive, posing a significant bottleneck in the clinical workflow of treatment planning. A common workaround is to first calculate a dose distribution at low resolution and then refine it using super-resolution (SR) techniques. Building upon this approach, we introduce a novel super-resolution (SR) technique for fast and high-quality dose calculation. The proposed SR technique leverages a cutting-edge implicit neural representation (INR) network to learn a continuous function that accurately represents the target HR dose distribution image. In such a way, our model can produce dose maps at virtually any resolution. Unlike previous methods which assume a pre-known blurring kernel, we tackle a blind deconvolution problem where the blurring kernel is unknown and thus is estimated together with the HR image. We tested our method on a pancreas SBRT plan using the MRIdian MRgRT system, demonstrating significant improvements in image quality over an existing INR-based SR method that relies on predefined kernels. Our technique offers rapid SR dose calculation, achieving results in milliseconds.
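As a generic illustration of an implicit neural representation for dose (not the authors' implementation, and omitting the blind-deconvolution kernel estimation), the sketch below fits a coordinate-based MLP with sinusoidal activations to a low-resolution dose grid and then allows querying at arbitrary coordinates. Layer sizes and the omega_0 frequency are illustrative assumptions.

```python
# Generic coordinate-MLP (SIREN-style) sketch: represent a dose map as a
# continuous function f(x, y, z) -> dose. Illustrative only.
import torch
import torch.nn as nn

class Sine(nn.Module):
    def __init__(self, omega0=30.0):
        super().__init__()
        self.omega0 = omega0
    def forward(self, x):
        return torch.sin(self.omega0 * x)

class DoseINR(nn.Module):
    def __init__(self, in_dim=3, hidden=256, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), Sine()]
            d = hidden
        layers += [nn.Linear(d, 1)]
        self.net = nn.Sequential(*layers)
    def forward(self, coords):
        return self.net(coords)

def fit(coords, dose, steps=2000, lr=1e-4):
    """coords: (N, 3) normalized voxel centres; dose: (N, 1) low-resolution dose values."""
    model = DoseINR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(coords) - dose) ** 2)
        loss.backward()
        opt.step()
    return model

# After fitting, the network can be queried on a denser coordinate grid to
# produce a higher-resolution dose map at virtually any resolution.
```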
104
The Impact of Training Dataset Size and Model Size on the Accuracy of Deep Learning Dose Prediction - Joep Van Genderingen (1), Dan Nguyen (2), Franziska Knuth (1), Hazem Nomer (1), Luca Incrocci (1), Abdul Sharfo (1), Uwe Oelfke (3), Steve Jiang (2), Linda Rossi (1), Ben Heijmen (1), Sebastiaan Breedveld (1)
Deep learning has demonstrated the capability of predicting realistic radiotherapy dose distributions. Collecting and curating clinical data is a labour-intensive and time-consuming task, making large, curated datasets scarce for training dose prediction models. Partly due to the lack of training data, model size has never been fully explored in previous studies. In this study, we investigated how training dataset size and model size impact 3D dose prediction performance for prostate cancer. For a curated dataset of 1250 contoured planning CT-scans, ground truth dose distributions were fully automatically generated using Erasmus-iCycle, guaranteeing consistent trade-offs. Three HD U-Nets with two, four and six pooling layers were trained with different sized datasets (50/250/1000 plans) for dose prediction. Mixed precision was applied to optimize GPU VRAM usage. Differences were most notable for the 3D dose distributions, where details improved with increasing dataset and model size. For evaluated plan parameters, this dependency was less visible.
105
Evaluation of Organ Dose Reduction Effectiveness in Liver 3D/4D Computed Tomography Simulation Through Radiation Shielding: an Anthropomorphic Phantom Study - Jongin Oh (1), Minsik Lee (2), Bonghyo Lee (3), Doohyun Lee (4) (1), Min-Young Kang (5), Semie Hong (5) (1), Jun-Bong Shin (2), Jeong-Woo Lee (5) (1)
This study aimed to evaluate the effectiveness of radiation shielding in reducing organ doses during liver 3D/4D simulation computed tomography (Sim-CT) before radiation therapy. A tunnel-shaped acrylic structure was constructed to support the shield, and two lead equivalents of 1 and 3 mmPb were used to confirm the dose reduction effect of radiation shielding. Optically stimulated luminescence dosimeters (OSLDs) were inserted at 11 locations, including the thyroid, within an anthropomorphic phantom. Under clinical conditions, liver 3D/4D Sim-CT was repeated three times, and radiation shielding was applied to analyze the dose-reducing effects. The use of a self-produced tunnel-type shielding tool is effective in reducing unnecessary effective dose to normal tissue outside the examination site during liver 3D/4D Sim-CT.
106
Deep Learning for Instantaneous Estimation of the Out-of-Field Dose in External Beam Photon Radiotherapy. A Proof of Concept - Nathan Benzazon (1) (2), Mohammed El Aichi (1) (2), Alexandre Carré (1) (2), François De Kermenguy (1) (2), Stéphane Niyoteka (1) (2), Pauline Maury (1) (2), Julie Colnot (1) (2) (3), Meissane M'hamdi (1) (2), Cristina Veres (1) (2), Rodrigue Allodji (4), Florent De Vathaire (4), David Sarrut (5), Neige Journy (4), Claire Alapetite (6), Vincent Gregoire (7), Eric Deutsch (1) (2), Ibrahima Diallo (1) (2), Charlotte Robert (1) (2)
Doses deposited outside the treatment field during external photon-beam radiotherapy, known as out-of-field doses, are thought to promote radiation-induced cancers and hematological toxicities. These low doses are not currently estimated on a routine clinical basis, due to the lack of a suitable solution. With this proof of concept, we demonstrate the value of deep learning for developing a tool enabling rapid, routine-compatible out-of-field dose estimation. For this purpose, a 3D U-Net, taking as inputs the in-field dose computed by the treatment planning system and the patient's anatomy, was trained to predict out-of-field dose maps, using a dataset of 3151 pediatric whole-body dose maps estimated with analytical methods as ground truth. Results consistent with the literature were obtained (RMSD of 0.28, 0.41 and 0.32 cGy.Gy-1 for the training, validation and testing sets), arguing in favor of the use of neural networks for out-of-field dose estimation.
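A minimal sketch of the input/output structure described above: a 3D U-Net taking the in-field dose and the patient anatomy as two input channels and predicting a whole-body out-of-field dose map. MONAI's generic UNet and the tensor shapes are stand-in assumptions; the authors' architecture and training details differ.

```python
# Sketch: two-channel 3D U-Net for out-of-field dose prediction (illustrative only).
import torch
from monai.networks.nets import UNet

net = UNet(
    spatial_dims=3,
    in_channels=2,          # channel 0: TPS in-field dose, channel 1: CT anatomy
    out_channels=1,         # out-of-field dose map (e.g. cGy per prescribed Gy)
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)

in_field_dose = torch.rand(1, 1, 64, 64, 64)   # placeholder tensors
ct_volume = torch.rand(1, 1, 64, 64, 64)
x = torch.cat([in_field_dose, ct_volume], dim=1)
pred_out_of_field = net(x)                      # shape (1, 1, 64, 64, 64)
```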
107
Artificial Intelligence Dose Calculation Engine Based Pencil-Beam for Intensity Modulated Radiation Therapy - Zirong Li (1), Yaoying Liu (2) (3), Xuying Shang (2) (3), Huashan Sheng (1), Wei Zhao (2), Gaolong Zhang (2), Qichao Zhou (1), Shouping Xu (2) (3)
The study proposed a data-generation method for training a deep learning (DL) model to boost the dose accuracy of the pencil beam (PB) algorithm to a level comparable to Monte Carlo (MC). The study involved only 43 clinical cases but generated 11,278 pairs of beam dose data for training, validation, and testing. The DL model showed a strong ability to convert PB dose to MC dose, improving the GPRs (1mm/1%) from 83.73±6.55%, 84.28±6.55% and 82.72±6.75% to 96.07±2.22%, 96.17±1.90% and 95.31±2.80% for the training, validation and testing data, respectively. Moreover, the DL model is extremely fast, with a calculation time of 180 ms. The study successfully realized fast and accurate photon dose calculation with a DL model.
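As an aside on how gamma passing rates such as those reported above are typically evaluated, the sketch below uses the open-source pymedphys gamma implementation on two 3D dose grids; the grids are random placeholders and the 1%/1 mm criteria simply mirror the abstract, so this is not the study's own evaluation code.

```python
# Sketch: global gamma passing rate between a reference (e.g. MC) dose grid and
# an evaluated (e.g. PB or DL-corrected) dose grid using pymedphys.
import numpy as np
import pymedphys

# Common voxel-centre coordinates for both grids (mm).
axes = (np.arange(0, 100, 2.0), np.arange(0, 100, 2.0), np.arange(0, 100, 2.0))
dose_reference = np.random.rand(50, 50, 50) * 2.0                     # placeholder dose (Gy)
dose_evaluation = dose_reference + 0.01 * np.random.randn(50, 50, 50)  # slightly perturbed copy

gamma = pymedphys.gamma(
    axes, dose_reference,
    axes, dose_evaluation,
    dose_percent_threshold=1,
    distance_mm_threshold=1,
    lower_percent_dose_cutoff=10,
)
valid = ~np.isnan(gamma)
passing_rate = 100.0 * np.mean(gamma[valid] <= 1)
print(f"gamma passing rate (1%/1 mm): {passing_rate:.2f}%")
```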
108
Accurate Selection of Calculation Grid Size for Radiation Plans With Small Brain Tumors in Intracranial Stereotactic Radiotherapy - Tagreed Alalawi (1), Mazen Al-Gaoud (1), Elham Rashaidi (1), Nesreen Shorbaji (1)
Purpose/Objective(s): The main purpose of this study was to evaluate the effect of different calculation grid sizes (GS) on the accuracy of the calculated dose for small brain lesions in intracranial stereotactic radiosurgery (ICRS); to investigate changes in the quality of volumetric modulated arc therapy (VMAT) plans as the calculation grid size changes; and to evaluate dosimetric parameters such as coverage of the planning target volume by 95% of the prescribed dose (PTV95), sparing of organs at risk (OARs), and the homogeneity index (HI), conformity index (CI) and gradient index (GI) for each treatment plan. Materials/Methods: This retrospective study used the planning CT images of 16 patients with small brain lesions treated with a radiosurgery approach between Jan 2021 and Jul 2023, with a prescribed dose of 22.5 Gy in five fractions. Eight treatment plans were generated for each patient using the Varian Eclipse analytical anisotropic algorithm (AAA) for dose calculation and the photon optimizer (PO) for optimization. First, four plans were created with the same monitor units (MUs) and with grid sizes from 1 mm to 4 mm. Second, another set of four plans was created with normalization to the target mean and grid sizes from 1 mm to 4 mm. Each plan contained two complete arcs with two different collimator angle sets. A 6 MV flattening-filter-free (6FFF) beam was used for all plans to exploit the high dose rate of 1400 MU/min, which allows a limited number of beams to deliver such a high dose per fraction and minimizes the total treatment time. Results: For the first four plans, with no normalization (NN), CI and GI decreased significantly as the GS increased, whereas for the second group, with target-mean (TM) normalization, CI and GI were almost unaffected by the grid size. The homogeneity index was unaffected by GS for both normalization types. There were also no significant dosimetric differences in OAR maximum doses as the grid size changed, for either normalization type. Finally, the target volume covered by 95% of the prescribed dose was affected significantly by the grid size: as the grid size increased, coverage decreased in the NN plans, whereas there was no change in target coverage when plans were normalized to the target mean, even with different GS. Conclusion: Selection of the calculation grid size significantly influences the quality of the treatment plans in terms of conformity, gradient index and target coverage. For accurate calculation of treatment plans, it is recommended to use the smallest possible grid size.
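For context, commonly used definitions of the reported indices are given below; the abstract does not state which exact variants were used, so these are the widely cited Paddick conformity and gradient indices and an ICRU 83-style homogeneity index, shown only as an assumed reference:

```latex
\mathrm{CI}_{\mathrm{Paddick}} = \frac{\mathrm{TV}_{\mathrm{PIV}}^{2}}{\mathrm{TV}\cdot\mathrm{PIV}},
\qquad
\mathrm{GI} = \frac{\mathrm{PIV}_{50\%}}{\mathrm{PIV}},
\qquad
\mathrm{HI} = \frac{D_{2\%} - D_{98\%}}{D_{50\%}}
```

where TV is the target volume, PIV the prescription isodose volume, TV_PIV their intersection, and PIV_50% the volume enclosed by half the prescription isodose.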
109
Development and Evaluation of Novel Parameters for Describing Anatomical Changes and Predicting Radiotherapy Replanning for Head and Neck Cancer Patients - Odette Rios-Ibacache (1), James Manalad (1), Aixa X. Andrade Hernandez (2), Kayla O'sullivan-Steben (1), Emily Poon (1), Luc Galarneau (1) (3), John Kildea (1) (3)
Head and neck cancer patients undergoing radiotherapy are prone to anatomical changes in non-rigid structures due to weight loss, inflammation, and tumor shrinkage. Across several treatment fractions, these changes may become significant enough to render the initial treatment plan invalid and require treatment replanning. However, replanning often demands a large amount of resources and may be poorly timed in radiotherapy workflows with competing priorities. To address this, we aim to identify and generate parameters extracted from three-dimensional radiotherapy structures that can describe anatomical alterations and can inform the decision to replan. These parameters are intended to be used in the development of a machine learning classification model to predict whether and when patients should undergo replanning, taking into account the progression of this information over time.
110
To Calibrate or Not to Calibrate: a Decision Support Approach Using Microcontroller-Based Tiny Machine Learning - Nicolas Boussion (1) (2), Lou Bosc (1), Gaëlle Goasduff (1), Yoann Jegat (1), Anne-Sophie Lucia (1), Emmanuelle Martin (1), Laetitia Lann (1), Dimitris Visvikis (1) (2)
A decision support tool is proposed to determine whether it is reasonable to calibrate a linear accelerator with an ionization chamber given recent changes in atmospheric pressure. The response of an ionization chamber is indeed sensitive to variations in atmospheric pressure, which can be very unstable in certain regions at some periods of the year. For this purpose, a neural network was trained on a two-year local weather-station database classified using tools from statistical process control. The originality of the method lies in the use of machine learning code explicitly implemented for deployment on a microcontroller. More specifically, the training was done using a TensorFlow neural network implemented on a standard desktop computer. The obtained model was then deployed on an IoT Arduino board by means of the TinyML library. The resulting device is low-cost and low-energy and has an accuracy of 0.834 for classifying atmospheric pressure.
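A minimal sketch of the workflow described above, assuming a small dense classifier on pressure-trend features; the feature definitions, network size and labels are illustrative assumptions, and the TensorFlow Lite conversion shown is the step that typically precedes deployment with TinyML-style libraries on an Arduino-class microcontroller.

```python
# Minimal sketch: train a tiny classifier on atmospheric-pressure features and
# convert it to a TensorFlow Lite flat buffer for microcontroller deployment.
import numpy as np
import tensorflow as tf

# X: e.g. recent pressure readings / trends from the weather-station database;
# y: 1 = "recalibration reasonable now", 0 = "wait", labels derived from SPC rules.
# Random placeholders stand in for the real two-year database.
X = np.random.rand(1000, 8).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

# Convert to a .tflite model; the resulting byte array can then be embedded in
# the microcontroller firmware (e.g. as a C array) for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("pressure_classifier.tflite", "wb").write(tflite_model)
```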
111
Software-Assisted Optimisation of Linear Accelerator Energy Consumption - Genotan Reggian (1), Catherine Stanford-Edwards (1)
In the pursuit of environmental sustainability within healthcare, this study introduces a pioneering software-based approach to optimise the energy consumption of linear accelerators. Through a comprehensive analysis at the South West Wales Cancer Centre, the energy utilisation of various linac models was scrutinised, as well as that of stereotactic ablative body radiotherapy (SABR) and hypofractionated radiotherapy (HFRT) treatment methods. The findings reveal that SABR and HFRT treatments exhibit significantly lower energy consumption compared to conventional methods, indicating the potential for substantial energy savings. Furthermore, the newer Agility and Versa HD Elekta linacs had significantly decreased energy consumption compared to the older Precise linac, demonstrating that replacement of linac technology can also lead to substantial energy savings. Further research and collaboration targeted at sustainability in radiotherapy can facilitate the optimisation of treatment delivery while maintaining a commitment to environmental responsibility.
112
Experiment of Proof-of-Principle on C-Arm CT/SPECT System for In-Vivo Dose Verification in Online Image-Guided Adaptive Brachytherapy - Saerom Sung (1), Kim Hyemi (2), Min Chul Hee (1), You Sei Hwan (2), Choi Hyun Joon (2)
Traditional methodologies often fail to accommodate anatomical shifts or source movement, compromising treatment efficacy. In addressing the limitations of current brachytherapy, particularly the challenge of precise dose delivery and minimizing radiation exposure to normal tissues, this study experimentally validates the use of an integrated C-arm CT/SPECT system for real-time adaptive, image-guided brachytherapy. Utilizing a DRTECH detector, our prototype demonstrated significant advancements by accurately distinguishing between materials within a PMMA-Agar phantom and tracking Co-60 source movements with high precision. Experimental findings revealed the system's capacity for real-time modifications, with the in-house developed image intensity enhancement algorithm reducing the full width at half maximum of the Co-60 source to 3.80 mm and 2.91 mm along the x and y directions, respectively. This result shows the system's potential to surmount the prevailing challenges in brachytherapy by ensuring precise radiation delivery and enhancing patient safety and treatment outcomes through adaptive imaging and dose monitoring capabilities.
113
Deep Reinforcement Learning for Custom Applicator Design in Personalised High Dose Rate Brachytherapy - Cedric Loiseau (1), Philippe Berejny (1), Marlon Silva (2), Juliette Thariat (2), Alain Batalla (1)
Brachytherapy is a radiotherapy technique that uses radiation from a radioactive source to treat a tumour. In personalised brachytherapy, the source is guided by an applicator adapted to the patient's anatomy and the positions of the source (PoS). The dwell times (DT) associated with the PoS are the spatial and temporal parameters to be optimised, which govern the dose distribution. This distribution must comply with dosimetric constraints: coverage of the tumour and minimum dose to organs at risk. Usually, meeting the dosimetric and geometric constraints of applicator manufacture makes the definition of PoS and DT long and laborious. Our project exploits deep reinforcement learning models and 3D printing technology to design custom applicators. For a simplified case of a virtual patient, we have demonstrated the feasibility of our model, trained on virtual data, to propose PoS by handling both spatial and temporal optimisation to meet the constraints.
114
Multiphysics Modeling and Simulations for Y-90 Liver Radioembolization - Emilie Roncali (1), Carlos Ruvalcaba (1), Gustavo Costa (2)
Yttrium-90 (90Y) radioembolization is a form of liver cancer therapy in which radioactive microspheres are injected into the liver arterial bloodstream via a catheter, delivering localized radiation to the tumor cells. Accurate dosimetry for treatment planning has been shown to improve patient outcomes but is not implemented, with current standard-of-care dosimetry practices in liver radioembolization relying on generic models that do not accurately consider individual patient anatomy, vasculature, and tumor burden. To address this limitation, we present an approach combining computational fluid dynamics (CFD) and Monte Carlo simulation. Patient cone-beam CT (CBCT) images are collected to extract the liver vasculature, which is then used to simulate blood and microsphere transport through CFD in OpenFOAM. The volumetric distribution of microspheres and the 90Y absorbed dose distribution in the liver are then computed with the Geant4-based application GATE. The absorbed dose can then be used to optimize the dose distribution for personalized treatment planning.
115
Angle Dependence on Positional Radioisotope Source Using Monte Carlo Simulation in Brachytherapy - Moo-Jae Han (1), Sang-Won Kang (1), Woong Cho (2), Dongsan Kang (3), Jin-Beom Jung (1)
In this study, Monte Carlo simulation was employed to investigate the angle dependence of tracking the location of a radioisotope (RI). The analysis revealed margin-of-error values for the top 10% of each slope, with measurements of 5.90% at 0°, 8.08% at 30°, and 20.90% at 60°. The total absorbed dose ratio was observed at 83.77% (30°) and 53.36% (60°) relative to 0° (100%), exhibiting a tendency to decrease with increasing slope. Notably, the maximum value for all gradients was consistently observed at pixel No. 9 for 30°, with a corresponding 7.24% decrease at pixel No. 10. The study highlights a substantial degree of angle dependence, leading to the recommendation that a proper distance of at least 1 cm be maintained between the RI and the dosimeters to minimize these effects.
116
Partial-Ring PET Image Correction Using Implicit Neural Representation Learning - Shubhangi Makkar (1) (2), Siqi Ye (3), Marina Béguin (2), Günther Dissertori (2), Jan Hrbacek (1), Antony Lomax (1) (2), Keegan McNamara (1) (2), Christian Ritzer (2), Damien C. Weber (1) (4) (5), Lei Xing (3), Carla Winterhalter (1)
This work introduces a deep learning-based approach to enhance image quality in partial-ring PET scanners, which often suffer from significant artifacts due to incomplete angular field of view. Our method utilizes implicit neural representation (INR) learning with coordinate-based multilayer perceptrons (MLPs) featuring sinusoidal activations. These MLPs parameterize partial-ring PET images, training the patient-specific network to predict fully sampled images from partially sampled ones. This approach was validated using twenty digital brain phantoms, Monte Carlo simulated Derenzo phantom, and experimentally acquired hot-rod phantom data from the partial-ring PETITION PET scanner. The results show that the INR learning method significantly improves image quality, achieving a PSNR above 30dB and SSIM over 0.95 for all cases. This method presents a promising solution for enhancing PET images from partial-ring scanners.
117
State-of-the-Art of CT Enhancement Methods for RT Planning: a Review - Özgür Özer (1) (2) (3), Juliette Thariat (1) (2), Alexandre Haut (3), David Gibon (3)
Computed tomography (CT) uses X-rays and is the primary material for defining the ROIs delineated by radiation oncologists for radiotherapy (RT) planning and radiotherapy quality assurance (RTQA). Additionally, doses are calculated from the CT electron densities of tissues, but CT quality varies. CTs often exhibit blur and noise, or artefacts stemming from metals, which can be remedied by super-resolution (SR) and metal artifact reduction (MAR) models, respectively. A literature search on AI models that can exploit the latest computer vision algorithms and combine both tasks is presented. An interesting approach is based on a content-consistent super-resolution (CCSR) model, which utilises Stable Diffusion. Both tasks can be addressed by this model by simulating both SR and MAR distortions in the training data.
118
Artificial-Intelligence Enhanced Image Reconstruction for Proton Range Verification - Lena Setterdahl (1), Ilker Meric (1), William Lionheart (2), Sean Holman (2)
Proton therapy is a cancer treatment method that utilizes proton beams to irradiate cancerous tissue, with a key objective of minimizing the dose to surrounding healthy tissue. Its full potential is limited by range uncertainties stemming from causes such as variations in patient anatomy and CT Hounsfield unit conversion uncertainties. In 2023, the NOVO collaboration introduced the NOVCoDA system, utilizing fast neutrons and prompt gamma rays emitted along the beam's path to detect beam range shifts. However, limitations of NOVCoDA, such as measurement uncertainties, pose challenges to achieving the desired millimeter range-shift detection precision. In general, conventional methods like MLEM and MAP result in noisy or less informative reconstructed images for proton range verification. In this work, we propose a hybrid approach, using images predicted by an encoder-decoder network (EDN) trained on Monte Carlo data to guide MAP reconstruction. Tested on a single case, the MAP-EDN algorithm yields an image with nearly 4 times smaller mean squared error than the implemented list-mode MAP. Further exploration is warranted to assess the utility of the MAP-EDN algorithm for range-shift identification.
119
A Tool for Individualised Dose Mapping Uncertainty Quantification in the Reirradiation Setting: Head and Neck Cancer - Chelmis Muthoni Thiong'o (1)
Deformable image registration plays an important role in the assessment of prior radiation doses in the reirradiation setting. However, the effect of geometric uncertainties on the mapping of doses from previous treatments onto the reirradiation planning scan remains a key challenge, limiting its clinical application. We propose a novel tool for individualised dose-mapping uncertainty quantification in the reirradiation setting for head and neck cancer. The tool includes a graphical user interface and creates a report using a commercially available treatment planning system. We demonstrate the use of this tool in a dataset of eight patients, evaluating both the brainstem and the parotid glands. Of 1524 registrations across all patients' organs at risk, 87.6% met the AAPM TG-132 threshold of 0.3 cm and were used in dose mapping. However, this was highly patient- and organ-specific: for example, 100% of registrations for the brainstem were plausible, while 91.7% were plausible for each of the parotids.
120
Head and Neck Deformable Registration Using Self-Attention With Positional Encoding - Elizabeth McKenzie (1), Taman Upadhaya (1), Hengjie Liu (2), Ke Sheng (3)
This study proposes a completely attention-based deformable image registration (DIR) model which we call AmorWarp. An attention-based model allows one to incorporate long-range dependencies in registration techniques. Large motion is not well handled in DIR, and we endeavor to harness attention's unique abilities to improve DIR in the setting of large head-and-neck motion. Our resulting model demonstrates improved robustness for both inter- and intra-patient DIR relative to an established CNN model and a B-spline technique. In the setting of large motion, our method reduced the average 95% Hausdorff distance across 17 contours from 17.12mm (pre-registration) to 6.88mm for intra-patient registrations, and from 15.52mm to 7.92mm for inter-patient registrations. Our method reduced the average target registration error across 4 landmarks from 9.80mm to 3.37mm. Our attention-based model performed similarly to more established methods, and we hope that this proof of concept initiates further DIR applications of this powerful technique.
121
Robustness of Radiomics Features to Reconstruction Kernel Batch Effect: How Bad Is the Situation? - Elfried Salanon (1), Aditya Apte (1), Anqi Fu (1), Usman Mahmood (1), Belkhatir Zehor (1), Amita Shukla-Dave (1), Joseph Deasy (1)
The aim of this study was to evaluate the impact of the reconstruction-kernel batch effect on radiomic features extracted from CT scans. A total of 282 radiomic features were extracted with CERR from CT scans of a 3D-printed phantom. Batch effects were assessed via principal component analysis (PCA) and correlation measures. Robustness was measured using the concordance correlation coefficient (CCC) and the Pearson correlation coefficient, with a CCC threshold of ≥ 0.9. Statistical analysis was performed using R software. PCA identified two clusters: Standard, ASIR, ASIRV, and Soft kernels in one, and Lung and Bone kernels in the other. A gradient from ASIR10 to ASIR50 and ASIRV1 to ASIRV5 was observed in the distance to the Standard kernel. Correlation matrices revealed little change among the ASIR, ASIRV, and Standard kernels but significant changes for the Bone and Lung kernels. ComBat-based correction improved robustness, particularly for first-order statistics, mitigating batch effects among the ASIR kernels and the Standard kernel. Forty robust features were identified. However, challenges persist in harmonizing the Bone and Lung kernels. The robustness of central-tendency parameters, in contrast to the lack of robustness of dispersion parameters, suggests a non-constant and nonlinear batch effect influencing specific image regions.
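For reference, Lin's concordance correlation coefficient used for the robustness threshold above is defined, for a feature measured under two reconstruction kernels with values $x$ and $y$, as:

```latex
\mathrm{CCC} = \frac{2\,\rho\,\sigma_{x}\sigma_{y}}{\sigma_{x}^{2} + \sigma_{y}^{2} + \left(\mu_{x} - \mu_{y}\right)^{2}}
```

where $\rho$ is the Pearson correlation and $\mu$, $\sigma$ denote the means and standard deviations of the two measurement sets.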
122
The Prognostic Value of Foundational Radiomics and Clinical Features in Locally Advanced Non-Small Cell Lung Cancer in NRG Oncology Trial RTOG 0617 - Upadhaya Taman (1), Elizabeth M. McKenzie (1), Indrin J. Chetty (1), Katelyn M. Atkins (1)
This study investigated the feasibility of a foundational radiomics technique for the downstream task of predicting overall survival in patients with locally advanced non-small cell lung cancer (NSCLC). In an analysis of 449 patients from NRG Oncology/Radiation Therapy Oncology Group (RTOG) 0617, univariate Cox regression identified 241 foundational radiomic and 19 clinical features as significant (p-value ≤ 0.05) independent predictors of survival. Subsequently, the performance of popular predictive models, such as the Least Absolute Shrinkage and Selection Operator (LASSO) in conjunction with a generalized linear model (GLM), Random Forest (RF), and Support Vector Machine (SVM), was evaluated using 5-repeat 10-fold cross-validation (n=278) and in an unseen independent test set (n=140), yielding an AUC of 0.67 (95% confidence interval [CI], 0.64-0.70) and 0.64 (95% CI, 0.54-0.73) for the foundational radiomics model employing SVM, and 0.66 (95% CI, 0.73-0.68) and 0.67 (95% CI, 0.58-0.76) for the clinical features model employing LASSO/GLM, respectively. When the foundational radiomics features were combined with the clinical features, the AUC of a RF model increased to 0.74 (95% CI, 0.72-0.77) for the validation dataset and 0.67 (95% CI, 0.58-0.76) for the unseen independent dataset. This suggests that radiomics signatures provide complementary information to clinical features for the prediction of survival outcomes in patients with locally advanced NSCLC.
123
Leveraging Hybrid FLT-PET/MR Imaging to Distinguish Tumour Progression From Radionecrosis in Brain Metastasis Patients Post-Radiosurgery - Michael Maddalena (1), Adam Farag (2), Noha Sinno (1), Paula Alcaide-Leon (2), Benjamin Schultz (3), Catherine Coolens (1)
An estimated 20% of stereotactic radiosurgery (SRS) patients relapse and experience tumour progression (TP) within 6 months of treatment. SRS patients may also be afflicted by radionecrosis (RN), a radiation-induced tissue injury virtually indistinguishable from TP on routine treatment follow-up scans. Invasive histopathology remains the only “gold-standard” TP/RN diagnostic method: non-invasive image-based protocols are urgently required to stratify patients while minimizing harm. Our study proposes leveraging FLT-PET/MR to achieve superior diagnostic accuracy versus historical image-based protocols. Preliminary results suggest that the dynamic FLT-PET modelling-derived kinetic parameters Ki and Pf, as well as 10 MR-derived radiomic features, can significantly distinguish TP from RN. However, a larger sample is required to confirm the generalizability of these findings. Our study underscores the potential of FLT-PET/MR in classifying metastatic brain lesions and suggests a path towards non-invasive diagnostic protocols to enhance patient stratification and improve outcomes in brain metastasis patients post-SRS.
124
Exploring How Prostate Cancer Treatment Choices Affect Radiomic Features: A Look at Healthy Tissue vs. Dominant Lesion in MR-Guided Radiotherapy - Aaron G. Rankin (1), Swinton Martin (1) (2), Jim Zhong (3) (4), Priya Maitre (2) (5), Angela Davey (1), Marianne Aznar (1), Ananya Choudhury (1) (2), Alan McWilliam (1)
In this work, we contrast feature values acquired from different treatment groups for patients receiving MR-guided radiotherapy. A total of 69 patients with localised prostate cancer were included in this study. We explored the impact of different fractionation schemes and androgen deprivation therapy on radiomic feature values acquired from on-treatment T2W images. Using conventional radiomic approaches, assessing robustness against contour variation and correlation with volume, we identified 41 of 95 features as redundant across treatment groups. An in-house longitudinal methodology for determining representative feature trajectories was used to identify 8 features that were then used for further analysis in assessing relationships between feature values within the dominant lesion and the entire prostate.
125
Towards Fast Plan Adaptation for Proton Arc Therapy - Benjamin Roberfroid (1), Ana Barragan (1), John A. Lee (1), Edmond Sterpin (1) (2) (3)
Current approaches for dose optimization of proton arc therapy (PAT) plans suffer from optimization times ranging from several minutes to hours. In the context of adaptive proton therapy, we aim to speed up the plan reoptimization process. To achieve this, instead of optimizing a new PAT plan from scratch for each adaptive fraction, we propose to update the initial spot positions and their corresponding energies to the new anatomy of the day. Following this, we re-optimize only the most pertinent spot weights (less than 50% of all the spots). This results in fewer parameters to tune and thus in accelerated plan adaptation. Our fast-adaptation method showed very similar dose quality for the 3 evaluated plans when compared to plans fully re-optimized from scratch, while substantially decreasing the optimization time.
126
On Computational Challenges in Proton Therapy Treatment Planning - Mohammadhossein Mohammadisiahroudi (1), Zou Wei (2), Lei Dong (2), Yuriy Zinchenko (3), Tamás Terlaky (1)
In radiation therapy treatment planning, finding the optimal machine configuration, to obtain maximal coverage of the tumor while minimizing damage to critical and healthy organs, is an important optimization problem. Proton therapy is a new technology enabling high-precision treatment of tumors close to critical organs. However, treatment planning for proton therapy comes with computational challenges arising from the large number of decision variables and the sensitivity of the treatment to patient movement. In this research, we experiment with different optimization models for proton therapy and evaluate their performance. To tackle large-scale problems, we developed sparsifying and size-reduction techniques, including cropping CTs and prolonging voxels in the interior of organs. In addition, we develop an efficient successive relaxation method for the mixed-integer formulation that provides an optimal plan achieving the prescribed DVH constraints. Numerical results show a high level of conformality to DVH goals.
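As a generic, heavily simplified illustration of the kind of optimization problem described (not the authors' formulation, which involves mixed-integer DVH constraints and successive relaxation), the sketch below minimizes a quadratic deviation from the prescription over non-negative spot weights through a precomputed dose-influence matrix, with a mean-dose bound standing in for an OAR constraint. The cvxpy modeling library, the random matrices, and the numeric bounds are assumptions for illustration only.

```python
# Toy spot-weight optimization sketch (illustrative only).
# D: dose-influence matrix (voxels x spots); x: non-negative spot weights.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_tumor, n_oar, n_spots = 200, 150, 80
D_tumor = rng.random((n_tumor, n_spots))        # placeholder influence matrices
D_oar = 0.3 * rng.random((n_oar, n_spots))
prescription = 60.0                              # Gy, uniform target dose for this toy example

x = cp.Variable(n_spots, nonneg=True)
objective = cp.Minimize(cp.sum_squares(D_tumor @ x - prescription))
constraints = [cp.sum(D_oar @ x) / n_oar <= 20.0]   # mean OAR dose bound (Gy)
problem = cp.Problem(objective, constraints)
problem.solve()

print("optimal objective:", problem.value)
print("mean target dose:", float(np.mean(D_tumor @ x.value)))
```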
127
Fast Robust Dose Evaluation for IMPT Using Deep Learning - Hazem Nomer (1), Franziska Knuth (1), Joep Van Genderingen (1), Dan Nguyen (2), Margriet Sattler (1), Uwe Oelfke (3), Steve Jiang (2), Linda Rossi (1), Ben Heijmen (1), Sebastiaan Breedveld (1)
In robust IMPT treatment planning for head-and-neck cancer, we explored three DL strategies for dose predictions in different robustness scenarios. Using a single model for all scenarios, DL strategy 3 demonstrated the best correspondence to ground truth plans for 78 test patients. Achieving within 1%-point agreement for 91% of low-dose CTVs and 80% of high-dose CTVs, it outperformed strategies 1 (8% and 0%) and 2 (56% and 20%). Employing wish-list driven MCO, high-quality 9-scenario ground truth plans were automatically generated for a total of 392 patients used in training and testing of the three strategies. This integrated approach provides efficient and accurate assessment of IMPT dose distribution robustness, completing the process in under one.
128
A Neural Network-Based Dose Calculation Engine for Proton Spots Dose: a Clinical Implement Study - Xuying Shang (1), Yaoying Liu (1), Zishen Wang (2), Nan Li (2), Chunfeng Fang (2), Yue Zou (2), Wei Zhao (1), Xiaoyun Le (1), Gaolong Zhang (1), Shouping Xu (3)
A fast, accurate dose calculation algorithm can benefit planning and QA in proton therapy with pencil beam scanning. Recently, a study proposed a neural network (NN)-based dose calculation method for proton spots to accelerate calculation speed. However, a method was lacking to apply the NN approach to spot dose calculation with complex configurations, such as specific energies, range shifters, isocenters, and scanning locations. This study presents a detailed method to calculate the spot dose of clinical plans. In the NN training phase, a total of 119,244 pairs of MC spot doses and CT arrays were generated. The results showed high accuracy of the NN: the gamma passing rates (GPRs, 3mm/2%) of testing spots were 98.76%. The global GPR for 2 untrained patient doses was 97.76%, with a mean time cost of 0.5 s. We successfully established a framework for using the NN to calculate complex plan doses.
129
Inverse Robustness to Control Uncertainties in the RBE Parameter in Proton Therapy Planning - Mara Schubert (1)
There is no applicable model for including variable relative biological effectiveness (RBE) in proton therapy planning. Instead, a constant RBE factor of 1.1 is used, risking over- or underestimation of radiation effects. We present inverse robustness as a model to maximize the robustness of plans with regard to variations in the RBE parameter, without relying on an RBE model. Instead, our approach maximizes the range of RBE values a plan covers while the dose objectives stay within a predefined limit. We applied the concept to two example cases. Our results show that it is possible to slightly shift the dose distribution to gain robustness against deviations of the RBE parameter from 1.1 for some structures, without significantly changing the treatment plan. With the results of the inverse robust optimization, one can evaluate the robustness of a plan with regard to RBE variations and identify structures that require additional measures against them.
130
JulianA.jl - A Flexible Open Source Julia Package for Pencil Beam Scanning Proton Therapy - Renato Bellotti (1) (2), Tony Lomax (1) (2), Andreas Adelmann (3), Jan Hrbacek (2), Damien C. Weber (2) (4) (5)
We present JulianA.jl, an open source Julia package tailored to the needs of proton radiotherapy. JulianA accelerates proton radiotherapy research in multiple ways. First, it enriches the proton radiotherapy community with the full power of the Julia ecosystem for scientific computing. This allows researchers to use efficient implementations of the latest algorithms to improve treatment success and efficiency. Second, JulianA provides a flexible and modular toolkit of common functionality that relieves researchers from reinventing the wheel. Third, it includes a powerful, fully automatic spot weight optimisation algorithm for proton radiotherapy. Fourth, its code is easily readable, usable and extensible by domain experts with limited programming skills, and approachable even for students with a limited time budget. We demonstrate how a treatment plan of clinical relevance can be generated in only 19 min with just 38 lines of code, fully automatically and without providing objective weights or training data. Finally, we demonstrate JulianA's general utility for proton radiotherapy research with an application in deep learning.
131
Machine Learning for Dose Prediction in Carbon Ion Therapy - Miriam Schwarze (1), Leo Thomas (1), Hans Rabus (1), Ali Kamal (1) (2), Pichaya Tappayuthpijarn (1) (2)
Fast and accurate dose prediction in carbon ion therapy is crucial for optimizing treatment efficacy and minimizing adverse effects by precisely targeting tumors while sparing healthy tissues. This study explores machine learning (ML) models for accelerated dose prediction in carbon ion therapy. While GANs and transformer models have been previously introduced for dose prediction in other therapies, this work introduces the diffusion model as a novel approach. Trained on high-quality datasets from Geant4 simulations, these models exhibit promising performance, showing potential for both precision and speed in predicting dose distributions in mixed radiation fields of therapeutic carbon ions.
132
Monte Carlo Verification of Proton Treatment Planning - Klara Badraoui Cuprova (1)
At the Proton Therapy Center in Prague, we have developed a method for independent calculation and verification of proton treatment plans. Using the Monte Carlo method, the dose calculation algorithms of the local treatment planning system, RayStation 12A, can be verified, since explicit modelling of particle transport provides the highest calculation accuracy. The linear energy transfer distribution in the patient can be simulated alongside. Our project is built on the open-source software GATE, a Monte Carlo simulation toolkit for medical physics applications. The high precision and accuracy of this powerful method come at the price of a large computational cost. We reduce the computation time through parallel computing with HTCondor, an open-source high-throughput computing system. Currently we compute on 324 CPU cores. The workflow is fully automated and we are in the phase of integrating the system into clinical practice.
133
Investigating Associations Between Radiomics Features and Biomarkers in an Orthotopic Non-Small Cell Lung Cancer Mouse Model - Gabriel Giampa (1) (2), Ranjan Yadav (1) (2), Liming Wang (1) (3), John Kildea (1) (2), Martin Vallières (4), Norma Ybarra (1) (2) (3)
The aim of this study is to investigate whether there exists a correlation between radiomics features and histological expression of biomarkers in an orthotopic non-small cell lung cancer (NSCLC) mouse model. Tumours were induced in thirty-two athymic male mice by implanting a human NSCLC line in their lungs. The mice were split into four treatment groups. Radiomics features were extracted from cone-beam computed tomography images taken over the course of two weeks of treatment. The mice were then euthanized and their tumours are currently undergoing immunohistochemical staining to detect several biomarkers. Thus far, many extracted features show significant variations across treatment groups, even at the beginning of treatment. Further, many features show strong linear correlations with other features. Future work will account for these findings with appropriate normalization of features by a baseline value, and with appropriate feature selection protocols.
134
RadiSeq: A Whole-Genome DNA Sequencing Simulation Pipeline to Investigate Radiation-Induced Carcinogenesis - Felix Mathew (1) (2), John Kildea (1) (2)
DNA sequencing holds significant promise for probing the biophysical mechanisms underlying radiation carcinogenesis. However, experimental implementation is hampered by cost and time constraints. To address this, we present a computational pipeline for simulating radiation exposure and subsequent whole-genome DNA sequencing of human cells. Utilizing the TOPAS-nBIO Monte Carlo framework, we constructed a single-cell geometric nuclear DNA model for radiation-exposure simulations. Subsequently, we developed "RadiSeq", an application in C++, to enable bulk-cell and single-cell whole-genome DNA sequencing simulations of radiation-exposed cell models. RadiSeq exhibits acceptable performance, with early validation results showing promise. Upon validation completion, we believe that RadiSeq will be a useful tool for further investigations into the mechanisms of radiation carcinogenesis.
135
Towards Patient-Specific Computational Models of Microvasculature to Investigate Radiation Toxicity - Sophie Materne (1) (2), Luca Possenti (1), Alessandro Cicchetti (1), Francesco Pisani (1), Alessandra Catalano (1), Piermario Vitullo (2), Paolo Zunino (2), Tiziana Rancati (1)
The microvasculature plays an essential role in the microenvironment, supplying nutrients and drugs and facilitating immune cell trafficking. In previous studies, microvascular health status was predictive of radiotherapy toxicities in different cohorts of patients. In this study, we present a computational workflow that generates synthetic microvascular networks leveraging data from a sublingual microscope collected in a cohort of 63 Head and Neck Cancer patients. The workflow facilitates the development of personalized computational models to analyze the role of the microvasculature in radiation toxicity. The synthetic networks replicate the microvascular density and vessel diameters of the patients, enabling mechanistic simulations of microvascular flow and drug/nutrient delivery. The results demonstrate the capability of the computational framework to generate relevant microvascular scenarios, with blood flow parameters aligning with physiological ranges. This model will serve as a valuable tool for mechanistically evaluating how the microvasculature shapes the microenvironment, contributing to current models of RT toxicity in diverse patient cohorts.
136
Detectability of Iodine Contrast Agent With Ultra-Low Concentration by Photon-Counting CT: A Simulation Study - Xiaoyu Hu (1), Yuncheng Zhong (1), Kai Yang (2), Xun Jia (1)
To investigate the contrast-to-noise ratio (CNR) of iodine contrast agent at low concentrations in CT images acquired with a photon-counting detector (PCD), we developed a simulation pipeline to model a multi-channel PCD CT and an energy-integrating detector (EID) CT. A 2D digital phantom was generated with iodine solution inserts of 0.25 to 10 mgI/mL concentrations as the object, and 1/2/3-channel PCD and EID CTs were simulated to compute the CNR of these iodine solutions under a 120 kV x-ray tube voltage at various CT dose levels. Numerical results show that PCD CT outperforms EID CT in terms of CNR in general. Furthermore, the multi-channel PCD CT provides extra degrees of freedom in the energy dimension for finding the optimal channel, by which an improved CNR can be obtained. When the CTDIfree-air is 3.3 mGy, the PCD CNR averaged over all iodine inserts was 1.26 to 2.13 times greater than the CNR from EID CT. When the CTDIfree-air is 98.7 mGy, the CNR from 1-channel PCD is close to that of EID CT, but the 2/3-channel PCD CT obtains about 17% higher CNR.
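The contrast-to-noise ratio referred to throughout is conventionally computed from ROI statistics of the iodine insert and the surrounding background:

```latex
\mathrm{CNR} = \frac{\left|\,\mu_{\mathrm{iodine}} - \mu_{\mathrm{background}}\,\right|}{\sigma_{\mathrm{background}}}
```

where $\mu$ denotes the mean reconstructed value in each region of interest and $\sigma_{\mathrm{background}}$ the standard deviation (noise) in the background region.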
137
Modeling the Geometric Distortion and Fan Beam Imaging of an On-Rails Computed Tomography Scanner for 2D/3D Registration - Giovanni Fattori (1), Riccardo Via (1), Antony John Lomax (1) (2), Sairos Safai (1)
Intraoperative CT scanners are increasingly present in high-precision radiotherapy units, enabling the acquisition of tomographic images of the treatment geometry. While they provide high-quality images, geometric uncertainties most often preclude the use of on-rails scanners as imagers for patient position verification. We studied the distortion of volumetric images and topograms while modelling the fan-beam projective geometry for the calculation of digitally reconstructed topogram (DRT) images in the context of 2D/3D registration. Distortion models parameterized by comparison with laser tracker measurements enabled image-based localization with errors below 0.3 mm along individual directions in 3D. Finally, in a phantom study we evaluated the accuracy of a new 2D/3D positioning approach based on orthogonal CT topogram image pairs, reporting comparable accuracy, in conformity with clinical practice guidelines.
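The similarity metric and optimizer behind the 2D/3D approach are not given in the abstract; as a minimal illustration of intensity-based matching between a DRT and a measured topogram, the hypothetical Python sketch below searches integer-pixel shifts under a normalized cross-correlation metric. A real implementation would also handle rotations, sub-pixel refinement, and the distortion model described above.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def register_translation(drt, topogram, search_px=5):
    """Brute-force search for the integer-pixel shift of `topogram` that
    best matches the digitally reconstructed topogram `drt` under NCC."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search_px, search_px + 1):
        for dx in range(-search_px, search_px + 1):
            shifted = np.roll(np.roll(topogram, dy, axis=0), dx, axis=1)
            score = ncc(drt, shifted)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_score, best_shift  # similarity, (row shift, column shift)
```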
138
Unraveling Interstitial Fluid Pressure: Analyzing Drug Transport Through Cross Voxel Exchange Model and Dynamic Contrast Enhanced MRI - Yeyoung Kim (1) (2), Noha Sinno (2), Michael Milosevic (2) (3) (4), Catherine Coolens (1) (2) (4)
Cancer treatment encounters challenges due to tumour heterogeneity, leading to unpredictable patient outcomes. Interstitial fluid pressure (IFP) is reported to play a role in drug resistance in solid tumours. To examine the relationship between IFP and drug transport, the novel Cross-Voxel eXchange Model (CVXM) was applied. CVXM produced various image-derived transport parameter maps, such as extravasation, convection, and diffusion maps. The fitted results were compared to the Tofts model. The CVXM convection map was converted to a pressure gradient map, which was then compared with actual IFP measurements obtained using a microtip pressure transducer. The characteristic IFP gradient, with elevated flow velocity near the periphery and reduced flow at the tumour centre, was found in both the CVXM convection maps and the actual IFP measurements. CVXM provides unparalleled information for understanding the pharmacokinetics of drugs in the tumour microenvironment, which will help move us closer towards personalized cancer treatment.
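For reference, the standard (non-extended) Tofts model against which the CVXM fits were compared expresses the tissue contrast-agent concentration C_t(t) in terms of the plasma concentration C_p(t), the volume transfer constant K^trans, and the extravascular extracellular volume fraction v_e:

```latex
C_t(t) = K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\, e^{-k_{ep}\,(t-\tau)}\, d\tau,
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}
```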
139
Bladder Cancer Hypoxia: Combining OE-MRI and Genomics for Biomarker Discovery - Rekaya Shabbir (1), Brian A Telfer (1), Ben Dickie (1), Muhammad Babur (1), Kaye Williams (1), Catharine ML West (1), Ananya Choudhury (1) (2), Tim AD Smith (1)
Introduction: Patients with hypoxic bladder tumours benefit from hypoxia modification added to radiotherapy, but no biomarkers are used clinically for patient stratification. We aimed to investigate the potential of OE-MRI to identify hypoxia in xenografts derived from muscle-invasive bladder cancer (MIBC). Methods: MIBC cells were inoculated into nude mice. Mice were imaged 1 h after receiving a pimonidazole injection, using an Agilent 7T 16 cm-bore magnet interfaced to a Bruker Avance III console and a T2-TurboRARE sequence with a dynamic MPRAGE acquisition. Dynamic Spoiled Gradient Recalled Echo images were acquired with a 0.1 mmol/kg Gd-DOTA injection. Tumours were harvested and sectioned, and RNA was extracted for transcriptomic analysis. Results: Hypoxia scores were higher for pimo-high vs pimo-low samples (p=0.0002). Expression of known hypoxia-inducible genes was higher: CA9 (p=0.012) and SLC2A1 (p=0.012). Conclusion: Hypoxic regions determined using OE-MRI were greater in the larger tumour. OE-MRI reflects other measures of hypoxia and was successfully implemented in MIBC-derived xenografts, alongside differences in their transcriptomic profiles.
140
Revealing Head and Neck Tumors' Spatial Heterogeneity Through a Quantitative MRI Protocol: Initial Findings - Walid Dandachly (1), Benjamin Leporq (1), Laura Sayaque (1), Chloé Miran (2), Benoit Allignet (1) (2), Frank Pilleul (1) (3), Olivier Hamelin (3), Charlène Bouyer (2) (3), Vincent Gregoire (2), Olivier Beuf (1)
Currently, radiotherapy planning is performed on CT images, which offer limited tissue contrast and lack functional information about the tumor. The use of MRI in this context presents a high potential for improvement: MRI is known for its superior capability to provide clear visualization of organs at risk and to offer valuable tumor characterization. For these reasons, MRI is considered a valuable tool for improving the accuracy and effectiveness of radiotherapy planning. In this study, a new MRI protocol was introduced that includes quantitative mapping of 13 distinct functional parameters. By exploiting IVIM theory, blood perfusion and the oxygen extraction fraction will be mapped alongside other important parameters, allowing us to reveal heterogeneity inside the tumor volume. Our aim is to use the resultant maps to accurately characterize the tumor and identify its status.
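For context on the IVIM-based mapping mentioned above, the conventional bi-exponential IVIM model relates the diffusion-weighted signal S(b) at diffusion weighting b to the perfusion fraction f, the pseudo-diffusion coefficient D*, and the tissue diffusion coefficient D (the specific fitting strategy used in this protocol is not described in the abstract):

```latex
\frac{S(b)}{S(0)} = f\, e^{-b D^{*}} + (1 - f)\, e^{-b D}
```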