oai:repositori.upf.edu:10230/33180
2017-11-09T10:04:03Z
urn:hdl:10230/33180
Ultrasound segmentation for vascular network reconstruction in twin-to-twin transfusion syndrome
Perera Bel, Enric
Master's thesis: Master in Computational Biomedical Engineering
In this work we present a placenta and vessel segmentation method for a medical application for Twin-to-Twin Transfusion Syndrome (TTTS). TTTS is a fetal disease that occurs in monochorionic twin pregnancies and can be fatal if left untreated. It is currently treated with fetoscopic laser coagulation. This treatment greatly improves prognosis, but it still presents some risks, since the intervention is critical to avoiding the risk of abortion. It can therefore benefit from image segmentation techniques for surgery planning and guidance. Placenta segmentation is not easy due to the high variability in its location and shape, so semiautomatic methods have shown the best results for ultrasound (US) segmentation. We implement one of them, the random walker (RW) algorithm, and include it in a graphical user interface for medical use. Thirty-one sets of US and Doppler US images were available for this study, but four were discarded due to poor gradient quality between tissues. Individual segmentation of the placenta and vessels is performed from different images (US and Doppler US, respectively), as well as combined multi-label segmentation (Doppler US). The implemented method is compared with previous studies and is modified to accelerate its computation using a graphics processing unit (GPU). We show that this algorithm offers good boundary adherence in US images for both placenta and vessel segmentation, especially in regions with high tissue gradients, but it is dependent on, and sensitive to, the protocol followed for the manual initialization, which is consistent with the literature. We also observe that single-label and multi-label segmentation yield similar results, more so for the vessels than for the placenta. The GPU implementation shows faster computation rates but needs more iterations to converge to a solution than the already optimized CPU implementation. Nevertheless, using a high-end graphics card accelerates the overall computation, and there is still room for improvement.
The RW algorithm had already been used for placenta segmentation, and we have validated its accuracy. However, no gold standard exists for this procedure, so we plan to include more methods in the medical application so that the clinician can choose the approach that best fits each anatomy and set of image characteristics.
In this project we collaborate closely with BCNatal | Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu, Universitat de Barcelona). We aim to create a surgery planning and tracking tool that can be used to improve the prognosis of fetoscopic laser coagulation and that can later be extended to other surgeries.
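As an illustration of the RW approach this abstract describes, the following is a minimal sketch using scikit-image's `random_walker`; the file name and seed coordinates are hypothetical, and this is not the thesis implementation.

```python
# Minimal sketch of random-walker US segmentation, assuming a 2D ultrasound
# slice in "us_slice.png" and manually placed seeds (hypothetical names).
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import random_walker

image = img_as_float(io.imread("us_slice.png", as_gray=True))

# Seeds: 0 = unlabeled, 1 = placenta, 2 = background. In the thesis these
# come from manual initialization in the GUI; here they are placed at
# arbitrary illustrative coordinates.
seeds = np.zeros(image.shape, dtype=np.uint8)
seeds[120:125, 200:205] = 1   # inside the placenta
seeds[10:15, 10:15] = 2       # background tissue

# beta controls sensitivity to intensity gradients: higher values make the
# walker respect tissue boundaries more strongly.
labels = random_walker(image, seeds, beta=130, mode="cg")
```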
2017-07
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/33180
eng
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
info:eu-repo/semantics/openAccess
Atribución-NoComercial-SinDerivadas 3.0 España
oai:repositori.upf.edu:10230/42812
2019-11-08T13:39:59Z
urn:hdl:10230/42812
Patient stratification using unsupervised clustering and radiomics
Company Se, Georgina
Master's thesis: Master in Computational Biomedical Engineering
Tutor: Karim Lekadir
Medical care is currently provided in clinical practice according to the "one-size-fits-all" approach, through which all patients suffering from the same symptoms and diseases receive the same treatment. Despite its wide use, this approach has reached its maximal performance, as a treatment that performs well for the majority of the population may not be suitable for a specific patient or subgroup of patients. Alternative approaches have therefore been proposed, including so-called personalized medicine. However, this patient-specific paradigm has yet to find its way into clinical practice, as it makes clinical decision making highly complex. Consequently, stratified medicine was proposed to provide medical care on a subgroup basis. Specifically, the population of diseased individuals is stratified into subgroups according to patient characteristics, disease manifestations and treatment responses. Treatments are then adapted to the subgroup to which a given patient belongs, thus potentially optimizing recovery. However, this approach depends on the definition of the subgroups in question, which is not trivial.
To address this from a computational point of view, a potential solution is to apply unsupervised clustering. Yet the most commonly used techniques are limited by the fact that they require the number of clusters as input, which varies between diseases and is often not known in advance. This thesis aims to implement and validate a recently proposed clustering technique called "Cancer integration via multi-kernel learning" (CIMLR). While it was developed for oncology, this study applies it for the first time to cardiac stratification and tests its applicability on a group of hypertensive patients. The phenotypes used as input for the CIMLR-based stratification are cardiac radiomics data, a new type of imaging features that describe a range of shape, size, intensity and texture characteristics of the organs. The results show that the obtained clusters are clinically meaningful and correspond to key lifestyle information of the patients. This indicates the future potential of the approach for image-based stratified medicine in cardiology.
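CIMLR itself is distributed as an R/MATLAB package; as an illustration of the underlying idea (combining several kernels over the feature matrix and clustering the learned similarity), here is a minimal sketch with scikit-learn using a random stand-in for the radiomics matrix. This simple kernel average is not the CIMLR algorithm, which additionally learns the kernel weights and the number of clusters.

```python
# Sketch of the multi-kernel clustering idea behind CIMLR (not CIMLR
# itself): average Gaussian kernels at several bandwidths over a
# hypothetical radiomics matrix, then spectrally cluster the similarity.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))          # stand-in radiomics features

# Kernels at multiple scales capture structure at different granularities;
# CIMLR learns their weights, here we simply average them.
gammas = [0.01, 0.1, 1.0]
K = sum(rbf_kernel(X, gamma=g) for g in gammas) / len(gammas)

model = SpectralClustering(n_clusters=3, affinity="precomputed",
                           random_state=0)
clusters = model.fit_predict(K)
```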
2019-07
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/42812
eng
https://creativecommons.org/licenses/by-nc-nd/3.0/es/
info:eu-repo/semantics/openAccess
Reconeixement-NoComercial-SenseObraDerivada 3.0 Espanya
oai:repositori.upf.edu:10230/42813
2019-11-08T13:45:51Z
urn:hdl:10230/42813
Evaluating a nonlinear interdependence measure in electroencephalographic recordings from epilepsy patients
Espinoso Palacín, Anaïs
Master's thesis: Master in Computational Biomedical Engineering
Tutor: Ralph Gregor Andrzejak
The analysis of neuronal recordings is important for the understanding of the human brain. Epilepsy is a neurological disorder characterized by synchronous neuronal activity in the brain, which causes seizures. This synchrony can also be found in seizure-free intracranial electroencephalographic (EEG) recordings. Using nonlinear interdependence measures, we can characterize the dynamical interdependencies in the epileptic brain. Various methods exist to identify interactions between different brain areas, such as cross-correlation or linear coherence. However, these methods may not be optimal because they are not sensitive to nonlinear interdependencies. In this thesis, we use a rank-based nonlinear interdependence measure (L) able to detect the direction and strength of coupling between two dynamics. However, L can be affected by noise or by cross-correlation of the underlying dynamics. To enhance the specificity of the approach, we apply a surrogate correction (ΔL) to test the results against a specific null hypothesis. We apply this technique to EEG signals measured at different spatial scales of neuronal organization (micro- and macrocontacts), stages of the sleep-wake cycle and hemispheres. A hemisphere can be focal, where seizures originate, or nonfocal, where there is no evidence of seizures.
We first evaluate our measure with bivariate model dynamics (stochastic and deterministic), where ΔL is shown to identify the correct direction of coupling. Then, we apply ΔL to 960 seizure-free EEG signals from 3 patients. We obtain the interdependence between macrocontacts, between microcontacts, and across macro- and microcontacts. In general, results show higher interdependence values for the focal hemisphere than for the nonfocal hemisphere. This higher interdependence is consistent across all patients when ΔL is applied between macro- and microcontacts. Regarding the stages of the sleep-wake cycle, the deepest sleep stages present the largest differences between focal and nonfocal hemispheres.
In conclusion, we found that ΔL shows promise for localizing the focal hemisphere without the presence of seizures. This is clinically important, since seizures are a health-impairing phenomenon and a potential therapy for epilepsy patients is the surgical resection of the area that produces them. This thesis provides further evidence that the analysis of multichannel EEG recordings using nonlinear techniques can be useful for diagnostic purposes.
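To make the rank-based measure concrete, here is a simplified sketch of an L-style nonlinear interdependence in the spirit of the measure used above: state-space reconstruction by delay embedding, then comparing the distance ranks in X of the nearest neighbors found in Y. It omits the Theiler correction and the ΔL surrogate step, so it is an illustration of the idea, not the thesis code; all parameter values are illustrative.

```python
# Simplified sketch of a rank-based nonlinear interdependence L(X|Y):
# if Y's neighbors are also close in X, Y carries information about X.
import numpy as np
from scipy.spatial.distance import cdist

def delay_embed(x, dim=5, tau=5):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def L_measure(x, y, k=5, dim=5, tau=5):
    X, Y = delay_embed(x, dim, tau), delay_embed(y, dim, tau)
    N = len(X)
    dX, dY = cdist(X, X), cdist(Y, Y)
    np.fill_diagonal(dX, np.inf)          # exclude self-matches
    np.fill_diagonal(dY, np.inf)
    ranksX = dX.argsort(axis=1).argsort(axis=1) + 1   # 1-based ranks in X
    nnY = dY.argsort(axis=1)[:, :k]                   # k neighbors in Y
    Gk_cond = np.array([ranksX[i, nnY[i]].mean() for i in range(N)])
    G, Gk = (N + 1) / 2.0, (k + 1) / 2.0  # chance-level and minimal mean rank
    return float(np.mean((G - Gk_cond) / (G - Gk)))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 2) + 0.5 * rng.normal(size=2000)   # y driven by x
print(L_measure(y, x))   # near 1 for strong coupling, near 0 for none
```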
2019-07
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/42813
eng
https://creativecommons.org/licenses/by-nc-nd/3.0/es/deed.ca
info:eu-repo/semantics/openAccess
Reconeixement-NoComercial-SenseObraDerivada 3.0 Espanya
oai:repositori.upf.edu:10230/42814
2019-11-08T13:52:07Z
urn:hdl:10230/42814
Left Atrium blood flow characterization using 4D Flow MRI and CFD
Galán González, Agustín
Master's thesis: Master in Computational Biomedical Engineering
Tutors: Óscar Cámara Rey, Andrea Guala
Left atrial hemodynamics have attracted interest due to their role in the pathophysiology of thrombus formation in patients with atrial fibrillation. Computational fluid dynamics (CFD) has proven to be a valuable tool for studying blood flow dynamics, while 4D Flow MRI is gaining importance for its ability to capture three-dimensional flow in vivo. The objective of this study was to assess the differences in left atrial blood flow between the in vivo flow velocities provided by a 4D Flow MRI acquisition of a real patient and CFD simulations with boundary conditions derived from the same data. In the first part of the study, a pipeline for registration and visualization of left atrial flow from 4D Flow MRI velocity data was created. In the second part, we performed CFD simulations using boundary conditions derived from the 4D Flow data. The comparison of the two techniques revealed aspects of left atrial flow that were not captured by the CFD simulations, which relied on conventional assumptions about flow conditions such as laminarity, but it also demonstrated the capabilities of 4D Flow MRI as a tool for validating CFD models of the left atrium (LA). These results further support that the combination of these techniques offers advantages relevant to both research and clinical applications.
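The kind of voxel-wise comparison this study performs can be illustrated with a short sketch: given the MRI and CFD velocity fields resampled onto a common grid, compute speed maps and their time-averaged difference. Array shapes and file names are hypothetical, not the study's data.

```python
# Sketch of a voxel-wise comparison between a 4D Flow MRI velocity field
# and a CFD field resampled to the same grid (hypothetical file names).
import numpy as np

# Shape: (time, z, y, x, 3), velocity components in m/s.
v_mri = np.load("v_4dflow.npy")
v_cfd = np.load("v_cfd_resampled.npy")

speed_mri = np.linalg.norm(v_mri, axis=-1)   # velocity magnitude per voxel
speed_cfd = np.linalg.norm(v_cfd, axis=-1)

# The time-averaged absolute difference highlights regions where the
# simplified CFD flow assumptions diverge from the measured flow.
diff_map = np.abs(speed_mri - speed_cfd).mean(axis=0)
print("mean speed discrepancy [m/s]:", float(diff_map.mean()))
```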
2019-07
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/42814
eng
https://creativecommons.org/licenses/by-nc-nd/3.0/es
info:eu-repo/semantics/openAccess
Reconeixement-NoComercial-SenseObraDerivada 3.0 Espanya
oai:repositori.upf.edu:10230/42815
2019-11-08T14:17:32Z
urn:hdl:10230/42815
Characterization of patients with heart failure with preserved ejection fraction using machine learning
Martí Castellote, Pablo-Miki
Master's thesis: Master in Computational Biomedical Engineering
Tutors: Bart Bijnens, Sergio Sánchez-Martínez and Scott D. Solomon
The pathophysiological complexity and heterogeneity of the heart failure with preserved ejection fraction (HFpEF) syndrome are not fully captured by clinical guidelines, which oversimplify the condition in order to standardize diagnosis, thus leading to suboptimal diagnoses.
Machine learning (ML) tools have proven useful for finding therapeutically homogeneous patient subclasses, which can potentially improve the prognosis of HFpEF.
The aim of this project is to find archetypal patients and obtain information about different mechanistic processes underlying the worsening of the HFpEF syndrome. This goal is in line with the personalized medicine paradigm, allowing for patient-specific treatments instead of the one-size-fits-all approach used in the current clinical guidelines.
More precisely, this thesis presents the analysis of a cohort of patients with HFpEF from the Treatment of Preserved Cardiac Function Heart Failure with an Aldosterone Antagonist (TOPCAT) clinical trial. Clinical and demographic descriptors are used, as well as complex echocardiographic descriptors of cardiac function, such as the trans-mitral inflow Doppler trace, the aortic outflow Doppler trace and full-cycle traces of regional left ventricular strain assessed by 2D speckle tracking on the apical 4-chamber view.
The analysis is based on the reduction of data dimensionality through manifold learning, followed by regression and clustering. First, unsupervised Multiple Kernel Learning for dimensionality reduction (MKL-DR) makes it possible to combine the information from the different descriptors into a space of reduced dimensionality. Regressing the input descriptors as a function of the newly computed dimensions then makes it easy to interpret how the variance is embedded in the new representation. Lastly, clustering in the resulting space yielded three homogeneous patient groups, which help resolve the heterogeneity of the overall population.
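The interpretation-and-clustering steps can be sketched as follows, with a random stand-in for the MKL-DR embedding (which is not reproduced here) and hypothetical descriptor data; this illustrates the pipeline's structure rather than the thesis implementation.

```python
# Sketch of the post-embedding steps: regress each input descriptor on the
# embedding coordinates for interpretation, then cluster the embedding.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
embedding = rng.normal(size=(200, 2))     # stand-in for the MKL-DR output
descriptors = rng.normal(size=(200, 10))  # stand-in clinical descriptors

# Per-descriptor regression coefficients show how each descriptor's
# variance is laid out along the embedding dimensions.
coefs = np.array([
    LinearRegression().fit(embedding, descriptors[:, j]).coef_
    for j in range(descriptors.shape[1])
])

# Clustering the embedding yields the patient subgroups.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
```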
2019-07
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/42815
eng
https://creativecommons.org/licenses/by-nc-nd/3.0/es
info:eu-repo/semantics/openAccess
Reconeixement-NoComercial-SenseObraDerivada 3.0 Espanya
oai:repositori.upf.edu:10230/42968
2019-11-25T11:40:20Z
urn:hdl:10230/42968
Deep learning surrogate of computational fluid dynamics for thrombus formation risk in the left atrial appendage
Morales Ferez, Xabier
Master's thesis: Master in Computational Biomedical Engineering
Tutor: Oscar Camara Rey
Atrial fibrillation (AF) is the most common clinically significant arrhythmia, often severely disrupting cardiac hemodynamics and drastically increasing the risk of thromboembolic events. Around 90% of intracardiac thrombus formation in AF patients takes place in the left atrial appendage (LAA). Such thrombi have been related to blood stasis, which at the moment can only be assessed through noisy imaging data from transesophageal echocardiography (TEE) at a single point in space and time, vastly oversimplifying the characterization of the complex 4D nature of blood flow patterns. Alternatively, attempts have been made to relate LAA morphology to the risk of thrombus formation, with some studies suggesting a reduced risk of thrombosis in chicken-wing morphologies. Nonetheless, such classification of the LAA morphology has been found to be highly inconsistent and subjective, and it excludes several fundamental morphological parameters such as ostium size and pulmonary vein (PV) orientation, among others.
More recently, computational fluid dynamics (CFD) has been employed on the left atrium (LA), seeking to assess the risk of thrombogenesis more quantitatively. CFD has proven to be an invaluable tool in establishing a mechanistic relation between patient-specific organ morphology and its characteristic hemodynamics. In fact, it has long been applied to other human tissues, such as the coronary arteries, cerebral aneurysms and the aorta, enabling early diagnosis and risk assessment of various cardiovascular diseases. Nevertheless, traditional CFD methods are renowned for their large memory requirements and long computing times, which severely hinder their suitability for time-sensitive clinical applications.
Hence, this thesis seeks to harness the potential of deep learning (DL) by developing a deep neural network (DNN) with the objective of generating a fast and accurate surrogate of CFD, capable of instantaneously evaluating the risk of thrombus formation in the LAA. Although DL has already revolutionized fields such as data processing, DNNs have only recently begun to be employed in high-dimensional, complex dynamical systems such as fluid dynamics. In fact, to our knowledge, this study represents the first successful implementation of a DL surrogate of CFD analysis in a structure as complex as the LAA, which had previously been attempted only in the aorta. For this purpose, two DL architectures have been designed and trained, which receive a specific LAA geometry as input and accurately predict its corresponding endothelial cell activation potential (ECAP) map, a parameter linked to the risk of thrombosis. The first approach is based on a simple fully connected feedforward network, while the second also embeds unsupervised learning. A statistical shape model (SSM) of the LAA was created to generate the training dataset, encompassing 210 virtual shapes, on which CFD simulations were performed to obtain the ground-truth ECAP maps. Once trained, the final DL networks accurately predicted the ECAP distributions, with an average error of 4.72% for the simple fully connected network and 5.75% for the unsupervised learning model. Most importantly, the ECAP predictions were obtained quasi-instantaneously, orders of magnitude faster than conventional CFD. Therefore, this study is among the first to demonstrate the feasibility and potential of DL models as accurate and substantially faster surrogates of CFD, potentially enabling future real-time assessment of thrombogenesis risk in the LAA.
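The first (fully connected) architecture can be sketched in PyTorch as a mapping from flattened mesh-node coordinates to per-node ECAP values. Layer sizes, node count and the random training batch below are hypothetical stand-ins, not the thesis configuration.

```python
# Sketch of a fully connected CFD surrogate: LAA node coordinates in,
# per-node ECAP values out; all dimensions are illustrative.
import torch
import torch.nn as nn

N_NODES = 1024                       # nodes in an SSM-registered LAA mesh

model = nn.Sequential(
    nn.Linear(N_NODES * 3, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, N_NODES),         # one ECAP value per mesh node
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

coords = torch.randn(32, N_NODES * 3)   # batch of flattened geometries
ecap_true = torch.randn(32, N_NODES)    # stand-in CFD ground-truth maps

optimizer.zero_grad()
loss = loss_fn(model(coords), ecap_true)
loss.backward()
optimizer.step()
```

Once trained on SSM-generated shapes with CFD-derived targets, a forward pass of such a network is effectively instantaneous, which is the speed advantage the thesis exploits.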
2019-07
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/42968
eng
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
info:eu-repo/semantics/openAccess
Atribución-NoComercial-SinDerivadas 3.0 España
oai:repositori.upf.edu:10230/45528
2020-10-20T08:59:47Z
urn:hdl:10230/45528
Reconstruction and validation of electrocardiographic imaging: Inverse problems
Borràs Argemí, Marta
Master's thesis: Master in Computational Biomedical Engineering
Tutor: Judit Chamorro Servent
A considerable number of sudden unexpected cardiac deaths occur every year in developed countries. Non-invasive techniques to identify patients at risk, provide accurate diagnosis and guide ablation therapy are currently under study. One of them is electrocardiographic imaging (ECGI), a non-invasive imaging modality used to reconstruct cardiac electrophysiological data on the heart and to map cardiac electrical excitation in relation to the heart's anatomy. The solution of the ECGI inverse problem (or signal reconstruction) depends on the specification of the relationship between the potential sources on the cardiac surface and the potentials measured on the body surface (the forward problem). Despite the success of ECGI in recent years, the understanding and treatment of many cardiac diseases is not yet feasible without an improvement of the inverse problem solution. In this work, we first compare two configurations of the method of fundamental solutions (MFS), a meshless method for the forward problem. Afterwards, we transfer and adapt four inverse problem methods to the ECGI setting: the algebraic reconstruction technique (ART), random ART, ART Split Bregman (ART-SB) and the range-restricted generalized minimal residual (RRGMRES) method. We test all these methods with data from the Experimental Data and Geometric Analysis Repository (EDGAR) and compare their solutions with the reference heart potentials recorded in EDGAR and with a solution computed by the generalized minimal residual (GMRES) iterative method. Isochrone activation maps are also computed and compared. The results show that ART reaches the most stable solution and, in many cases, returns the best reconstruction. Differences between ART and random ART are almost negligible, and they are followed in accuracy by RRGMRES, ART-SB and finally GMRES (which performs the worst reconstructions). The RRGMRES method provides the best reconstruction in terms of morphology in some cases, but it proves less stable than ART across datasets. In conclusion, we show that the proposed ART, random ART and RRGMRES methods improve on GMRES, which had been the best suggested inverse problem method when using the MFS.
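The core ART iteration is the classical Kaczmarz sweep: project the current estimate onto each equation of the linear system in turn. Here is a minimal sketch for a generic system A x = b, with random stand-ins for the MFS transfer matrix and torso potentials; random ART corresponds to shuffling the row order in each sweep.

```python
# Sketch of the ART (Kaczmarz) iteration for A x = b, where A stands in
# for the MFS transfer matrix, b for body-surface potentials and x for
# the cardiac-surface sources; the data here are synthetic.
import numpy as np

def art(A, b, n_sweeps=50, relax=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):           # one projection per equation
            a_i = A[i]
            x += relax * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(120, 80))
x_true = rng.normal(size=80)
b = A @ x_true

print(np.linalg.norm(art(A, b) - x_true))     # residual error after ART
```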
2020
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/45528
eng
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
info:eu-repo/semantics/openAccess
Atribución-NoComercial-SinDerivadas 3.0 España
oai:repositori.upf.edu:10230/46602
2021-02-26T14:15:06Z
urn:hdl:10230/46602
Development of a protocol aimed at the creation of patient-specific knee geometries and atlases from magnetic resonance images including cartilaginous tissue mapping based on quantitative magnetic resonance image signals
Stefanizzi, Fabio
Master's thesis: Master in Computational Biomedical Engineering
Tutors: Simone Tassani, Jérome Noailly
This work does not stand alone; it is part of a larger national project called HOLOA, which stands for Holistic Osteoarthritis. In the HOLOA project, the study of osteoarthritis (OA) begins with the clinical information of each patient and ends with multiphysics simulations of the specific OA knee joints. Nowadays, a large number of people suffer from OA, a joint disease that particularly affects the knee. The disease leads to partial or complete loss of the articular cartilage. Because of the pain experienced, the motor functions of the patient become limited, leading to medical treatments that, in most cases, include arthroplasty surgery. The most widespread OA classification method relies on radiography to detect the loss of cartilage; however, radiography can only detect an already ongoing loss. An early diagnosis could instead help slow or stop cartilage loss and avoid surgery. This can be achieved with newer medical imaging methodologies, including magnetic resonance imaging (MRI) and quantitative MRI. The first gives us information on the geometry of the joint, which we exploit for the first purpose of this study: to create patient-specific 3D knee models. To achieve this goal, we label each image pixel with a biological meaning, a process called segmentation. Once an MRI is segmented and validated by a trained expert, it represents a training OA knee image called an atlas. This work, which lies within the first part of the HOLOA project, aims to create a procedure leading to the creation of such an atlas. To this end, we develop two protocols. In the first, we segment the principal bones and a trained expert then completes and validates the labels. In the second, we perform the complete segmentation and the expert only has to validate the quality of our outcomes. To make this process replicable, we also provide a guideline for reproducing our results. The second imaging method, quantitative MRI, is used to obtain information on the composition of the cartilage. Once a representative 3D knee model is created, we need to assign biochemical information to its cartilaginous tissues to make the model realistic. This is the second purpose of this work, and it is achieved with a procedure called mapping, discussed in the second section of this work. From the whole project, we hope to obtain useful information linking tissue composition to the progressive degeneration caused by OA within reasonable clinical times. The ultimate goal is to provide an effective standard method for early diagnosis and better treatment.
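The atlas idea described here (transferring expert-validated labels onto a new patient scan) can be sketched with SimpleITK: register the atlas MRI to the patient image, then resample the atlas labels with the resulting transform. File names, transform type and optimizer settings are illustrative assumptions, not the thesis protocol, which additionally involves expert completion and validation of the labels.

```python
# Sketch of atlas-based label transfer with SimpleITK (hypothetical files).
import SimpleITK as sitk

fixed = sitk.ReadImage("patient_knee.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("atlas_knee.nii.gz", sitk.sitkFloat32)
atlas_labels = sitk.ReadImage("atlas_labels.nii.gz")

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(fixed, moving)

# Nearest-neighbour interpolation keeps the label values integral.
warped_labels = sitk.Resample(atlas_labels, fixed, transform,
                              sitk.sitkNearestNeighbor, 0)
```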
2018-07
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/46602
eng
https://creativecommons.org/licenses/by-nc-sa/4.0/
info:eu-repo/semantics/openAccess
Reconeixement-NoComercial-CompartirIgual 4.0 Internacional (CC BY-NC-SA 4.0)
oai:repositori.upf.edu:10230/58091
2023-10-18T14:14:45Z
urn:hdl:10230/58091
Enhancing Deep Learning CT pancreas segmentation with Test-Time Augmentation and Merging Techniques to Leverage Inter-rater variability and uncertainty
Sastre García, Blanca
Master's thesis: Master in Computational Biomedical Engineering
Tutors: Miguel Ángel González Ballester, Meritxell Riera i Marín, Javier García López
In recent years, Deep Learning (DL) models have achieved state-of-the-art performance in automatic segmentation of medical images. However, to obtain good results, large and accurately annotated datasets are required for training. In practice, especially in the field of medicine, acquiring such datasets is challenging as they need to be manually annotated by experts, which is a time-consuming task that requires specific skills. Consequently, these datasets often include noisy labels, where the structures of interest are not well-defined and also present variability among annotations performed by different experts. This introduces uncertainty in the predictions obtained by neural network models, which needs to be identified, measured, and addressed.
In this project, a pipeline is proposed to improve the outputs of a 3D U-Net model trained on a noisy dataset generated from the public Computed Tomography (CT) pancreas dataset provided by the Medical Segmentation Decathlon (MSD) Challenge. Once the noisy model is trained, the Test-Time Augmentation (TTA) technique is applied to each test image, generating a set of 50 images with different rotation angles together with their corresponding automatic segmentations produced by the noisy model. To obtain a consensus label from them, different merging algorithms are used (intersection, union, majority voting and STAPLE), and the aleatoric uncertainty is computed as the variance among them. Finally, in areas of high uncertainty, a relabeling of the noisy output is performed. Each label is compared with the clean label in terms of the Dice coefficient (DC) to evaluate whether any merging algorithm or the relabeled output improves on the noisy model's output. A sketch of the TTA-and-merging step follows below.
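This sketch illustrates the TTA-and-merging step: rotate the volume, segment each variant, rotate the predictions back, then majority-vote and take the per-voxel variance as an uncertainty map. The `segment` callable stands in for the trained noisy 3D U-Net, and the in-plane rotation range is an assumption, not the thesis configuration.

```python
# Sketch of test-time augmentation with rotation, majority voting and a
# variance-based aleatoric uncertainty map; `segment` is a hypothetical
# stand-in for the trained noisy model.
import numpy as np
from scipy.ndimage import rotate

def tta_segment(image, segment, n_aug=50, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_aug):
        angle = rng.uniform(-15, 15)
        aug = rotate(image, angle, axes=(1, 2), reshape=False, order=1)
        pred = segment(aug)                         # binary mask, same shape
        # Rotate the prediction back into the original frame.
        preds.append(rotate(pred.astype(float), -angle, axes=(1, 2),
                            reshape=False, order=0))
    preds = np.stack(preds)
    majority = (preds.mean(axis=0) > 0.5).astype(np.uint8)   # consensus label
    uncertainty = preds.var(axis=0)                 # aleatoric uncertainty map
    return majority, uncertainty
```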
The employed 3D U-Net model, with the selected hyperparameters and the available dataset, achieves a DC of 0.6301 over the test set when trained with the clean labels. When the noisy dataset is used for training, the performance, as expected, decreases to 0.6050. Results show no significant difference in DC among the different merging algorithms, with a maximum of 0.6423 for the STAPLE method and a minimum of 0.5740 for the intersection. However, the STAPLE method incorporates the model's variability into its predictions, resulting in a more comprehensive output and better performance than the clean model.
Regarding the relabeled output, it does not improve on the noisy output, yielding a DC of 0.5940, but it is 78% similar to the output obtained by the clean model. This shows the value of the relabeling process in refining the output of a noisy model, bringing it closer to the results obtained when training with correct labels.
The main limitations of this project include the difficulty of acquiring large and accurately annotated datasets, the high computational cost of training DL models, and the complex and variable size and shape of the pancreas across patients. More work is needed to draw robust conclusions, for instance by evaluating the results on a larger test set and by applying this methodology to a different dataset.
2023-10-18
info:eu-repo/semantics/masterThesis
http://hdl.handle.net/10230/58091
eng
https://creativecommons.org/licenses/by-nc-nd/3.0/es/deed.es
info:eu-repo/semantics/openAccess
Attribution-NonCommercial-NoDerivs 3.0 Spain