Deep Learning For Computational Imaging
Download Deep Learning For Computational Imaging PDF/ePub or read online books in Mobi eBooks. Click the Download or Read Online button to get the Deep Learning For Computational Imaging book now. This website allows unlimited access to, at the time of writing, more than 1.5 million titles, including hundreds of thousands of titles in various foreign languages. If the content is not found or appears blank, please refresh this page.
Deep Learning For Computational Imaging
Author : Reinhard Heckel
language : en
Publisher:
Release Date : 2025-04-30
Deep Learning For Computational Imaging was written by Reinhard Heckel. It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released on 2025-04-30 in the Mathematics category.
This textbook offers an introduction to deep learning for solving inverse problems. It introduces deep neural networks and deep neural network-based signal and image reconstruction techniques, and discusses robustness aspects, how to evaluate and test different methods, and data-centric aspects.
Computational Imaging Through Deep Learning
Author : Shuai Li (Ph.D.)
language : en
Publisher:
Release Date : 2019
Computational Imaging Through Deep Learning was written by Shuai Li (Ph.D.). It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released in 2019.
Computational imaging (CI) is a class of imaging systems that uses inverse algorithms to recover an unknown object from physical measurements. Traditional inverse algorithms in CI obtain an estimate of the object by minimizing the Tikhonov functional, which requires explicit formulations of the forward operator of the physical system as well as prior knowledge about the class of objects being imaged. In recent years, machine learning architectures, and deep learning (DL) in particular, have attracted increasing attention from CI researchers. Unlike traditional inverse algorithms in CI, the DL approach learns both the forward operator and the objects' prior implicitly from training examples. It is therefore especially attractive when the forward imaging model is uncertain (e.g. imaging through random scattering media) or when the prior on the class of objects is difficult to express analytically (e.g. natural images). In this thesis, the application of DL approaches in two different CI scenarios is investigated: imaging through a glass diffuser and quantitative phase retrieval (QPR), where an Imaging through Diffuser Network (IDiffNet) and a Phase Extraction Neural Network (PhENN) are experimentally demonstrated, respectively. This thesis also studies the influence of the two main factors that determine the performance of a trained neural network: network architecture (connectivity, network depth, etc.) and training example quality (spatial frequency content in particular). Motivated by the analysis of the latter factor, two novel approaches, the spectral pre-modulation approach and the Learning Synthesis by DNN (LS-DNN) method, are successively proposed to improve the visual quality of the network outputs. Finally, the LS-DNN-enhanced PhENN is applied to a phase microscope to recover the phase of a red blood cell (RBC) sample. Furthermore, through simulation of the learned weak object transfer function (WOTF) and an experiment on a star-like phase target, we demonstrate that our network has indeed learned the correct physical model rather than performing something trivial such as pattern matching.
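For reference, the Tikhonov functional mentioned above has the standard regularized least-squares form (a generic statement with assumed notation, not copied from the thesis):

\hat{f} = \arg\min_{f} \; \lVert H f - g \rVert_2^2 + \alpha \lVert f \rVert_2^2,

where H is the explicit forward operator of the physical system, g the physical measurement, and \alpha the regularization weight. The DL approach described in the abstract instead absorbs both H and the object prior implicitly into the trained network.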
Combining Learning And Computational Imaging For 3d Inference
Author : Xinqing Guo
language : en
Publisher:
Release Date : 2018
Combining Learning And Computational Imaging For 3d Inference was written by Xinqing Guo. It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released in 2018.
Acquiring the 3D geometry of a scene is a key task in computer vision. Applications are numerous, from classical object reconstruction and scene understanding to the more recent visual SLAM and autonomous driving. Recent advances in computational imaging have enabled many new solutions to the problem of 3D reconstruction. By modifying the camera's components, computational imaging optically encodes the scene, then decodes it with tailored algorithms.

This dissertation focuses on exploring new computational imaging techniques, combined with recent advances in deep learning, to infer the 3D geometry of the scene. In general, our approaches can be categorized into active and passive 3D sensing.

For active illumination methods, we propose two solutions. First, we present a multi-flash (MF) system implemented on a mobile platform. Using the sequence of images captured by the MF system, we can extract the depth edges of the scene and further estimate a depth map on a mobile device. Next, we show a portable immersive system that is capable of acquiring and displaying high-fidelity 3D reconstructions using a set of RGB-D sensors. The system is based on the structured light technique and is able to recover the 3D geometry of the scene in real time. We have also developed a visualization system that allows users to dynamically visualize the event from new perspectives at arbitrary time instances in real time.

For passive sensing methods, we focus on light field based depth estimation. For depth inference from a single light field, we present an algorithm tailored for barcode images. Our algorithm analyzes the statistics of raw light field images and conducts depth estimation at real-time speed for fast refocusing and decoding. To mimic the human vision system, we investigate the dual light field input and propose a unified deep learning based framework to extract depth from both the disparity cue and the focus cue. To facilitate training, we have created a large dual focal stack database with ground truth disparity. While the above solution focuses on fusing depth from focus and stereo, we also exploit combining depth from defocus and stereo, with an all-focus stereo pair and a defocused image of one of the stereo views as input. We have adopted the hourglass network architecture to extract depth from the image triplets. We have then studied and explored multiple neural network architectures to improve depth inference. We demonstrate that our deep learning based approaches preserve the strengths of the focus/defocus cue and the disparity cue while effectively suppressing their weaknesses.
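As a point of reference for the focus cue used above, a classical (non-learned) depth-from-focus baseline simply picks, per pixel, the focal-stack slice with the strongest local contrast. The sketch below is such a baseline under our own assumptions (function name, focus measure, and window size are illustrative), not the dissertation's network:

```python
# Classical depth-from-focus baseline: score each focal slice by local Laplacian
# energy and take, per pixel, the index of the sharpest slice.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, win=9):
    """stack: (num_slices, H, W) grayscale focal stack -> per-pixel slice index map."""
    focus = np.stack([uniform_filter(laplace(s.astype(np.float64)) ** 2, size=win)
                      for s in stack])
    return np.argmax(focus, axis=0)
```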
Deep Learning Optics For Computational Microscopy And Diffractive Computing
Author : Bijie Bai
language : en
Publisher:
Release Date : 2023
Deep Learning Optics For Computational Microscopy And Diffractive Computing was written by Bijie Bai. It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released in 2023.
The rapid development of machine learning has transformed conventional optical imaging processes, setting new benchmarks in computational imaging tasks. In this dissertation, we delve into the transformative impact of recent advancements in machine learning on optical imaging processes, focusing on how these technologies revolutionize computational imaging tasks. Specifically, this dissertation centers on two major topics: deep learning-enabled computational microscopy and all-optical diffractive networks. Optical microscopy has long served as the benchmark technique for diagnosing various diseases. However, its reliance on high-end optical components and accessories, necessary to adapt to various imaging samples and conditions, often limits its applicability and throughput. Recent advancements in computational imaging techniques utilizing deep learning methods have transformed conventional microscopic imaging modalities, delivering both enhanced speed and superior image quality without introducing extra complexity to the optical systems. In the first topic of this dissertation, we demonstrate that the deep learning-enabled image translation approach can significantly benefit a wide range of applications in microscopic imaging. We start by introducing a customized system for single-shot quantitative polarization imaging, capable of reconstructing comprehensive birefringence maps from a single image capture, which offers enhanced sensitivity and specificity in diagnosing crystal-induced diseases. Using these quantitative birefringence maps as a baseline, we employ deep learning tools to convert phase-recovered holograms into quantitative birefringence maps, thereby improving the throughput of crystal detection with simplified system complexity. Extending this concept of deep learning-enabled image translation, we also explore its applications in histopathology. Our technique, termed "virtual histological staining", transforms unstained biological samples into visually rich, stained-like images without the need for chemical agents. This innovation minimizes costs, labor, and diagnostic delays, and opens up new possibilities in the histopathology workflow. The evolution of deep learning tools not only facilitates optical image analysis and processing, but also provides guidance in the design and enhancement of optical systems. The second topic of this dissertation is the development and application of diffractive deep neural networks (D2NN). Developed with deep learning, D2NNs execute given computational tasks by manipulating light diffraction through a series of engineered surfaces, completing the computation at the speed of light propagation with negligible power consumption. Based on this framework, many novel computational tasks can be executed in an all-optical way, beyond the capabilities of traditional optics design approaches. We introduce several all-optical computational imaging applications based on D2NNs, including class-specific imaging, class-specific image encryption, and unidirectional image magnification and demagnification, demonstrating the versatility of this promising framework.
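For intuition about the D2NN framework mentioned above, the following is a minimal conceptual sketch of a diffractive network forward pass, assuming phase-only layers and scalar angular-spectrum propagation; the layer count, spacing, wavelength, and pixel pitch are illustrative placeholders, not the dissertation's designs:

```python
# Sketch of a diffractive deep neural network (D2NN) forward pass: trainable
# phase-only surfaces separated by free-space propagation, read out as intensity.
import torch

def angular_spectrum(field, dist, wavelength, dx):
    """Scalar free-space propagation of a complex field over distance `dist`."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2 * torch.pi * torch.sqrt(arg.clamp(min=0.0))
    transfer = torch.polar((arg > 0).to(kz.dtype), kz * dist)  # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)

class DiffractiveNet(torch.nn.Module):
    def __init__(self, n_layers=5, size=200, dist=0.03, wavelength=0.75e-3, dx=0.4e-3):
        super().__init__()
        self.phases = torch.nn.ParameterList(
            [torch.nn.Parameter(torch.zeros(size, size)) for _ in range(n_layers)])
        self.dist, self.wavelength, self.dx = dist, wavelength, dx

    def forward(self, field):                      # complex input wavefront
        for phi in self.phases:
            field = angular_spectrum(field, self.dist, self.wavelength, self.dx)
            field = field * torch.polar(torch.ones_like(phi), phi)  # phase-only layer
        field = angular_spectrum(field, self.dist, self.wavelength, self.dx)
        return field.abs() ** 2                    # detector records intensity
```

Training such a model would then minimize a loss between the output intensity pattern and a task-specific target, for example concentrating energy on a detector region assigned to each class.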
Deep Learning Enabled Computational Imaging In Optical Microscopy And Air Quality Monitoring
Author : Yichen Wu
language : en
Publisher:
Release Date : 2019
Deep Learning Enabled Computational Imaging In Optical Microscopy And Air Quality Monitoring was written by Yichen Wu. It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released in 2019.
Exponential advancements in computational resources and algorithms have given birth to a new paradigm in imaging that relies on computation to digitally reconstruct and enhance images. These computational imaging modalities have enabled higher resolution, larger throughput and/or automatic detection capabilities for optical microscopy. An example is the lens-less digital holographic microscope, which enables snapshot imaging of volumetric samples over a wide field-of-view without using imaging lenses. Recent developments in the field of deep learning have further opened up exciting avenues for computational imaging, offering unprecedented performance thanks to the capability to robustly learn content-specific complex image priors. This dissertation introduces a novel and universal modeling framework for deep learning-based image reconstruction to tackle various challenges in optical microscopic imaging, including digital holography reconstruction and 3D fluorescence microscopy. Firstly, auto-focusing and phase recovery in holographic reconstruction are conventionally challenging and time-consuming to perform digitally. A convolutional neural network (CNN) based approach was developed that solves both problems rapidly in parallel, enabling extended depth-of-field holographic reconstruction with time complexity significantly improved from O(mn) to O(1). Secondly, to fuse the advantages of the snapshot volumetric capability of digital holography and the speckle- and artifact-free image contrast of bright-field microscopy, a CNN was used to transform across microscopy modalities, from holographic image reconstructions to their equivalent high-contrast bright-field microscopic images. Thirdly, 3D fluorescence microscopy generally requires axial scanning. A CNN was trained to learn the defocus behavior of fluorescence and digitally refocus a single 2D fluorescence image onto user-defined 3D surfaces within the sample volume, which extends the depth-of-field of fluorescence microscopy by 20-fold without any axial scanning, additional hardware, or a trade-off in imaging resolution or speed. This enables high-speed volumetric imaging and digital aberration correction for live samples. Based on deep learning-powered computational microscopy, a hand-held device was also developed to measure particulate matter and bio-aerosols in the air using the lens-less digital holographic microscopic imaging geometry. This device, named c-Air, demonstrates accurate, high-throughput and automatic detection, sizing and classification of particles in the air, which opens new opportunities in deep learning based environmental sensing and personalized and/or distributed air quality monitoring.
Deep Learning With Physical And Power Spectral Priors For Robust Image Inversion
Author : Mo Deng (Ph. D.)
language : en
Publisher:
Release Date : 2020
Deep Learning With Physical And Power Spectral Priors For Robust Image Inversion was written by Mo Deng (Ph.D.). It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released in 2020.
Computational imaging is the class of imaging systems that utilizes inverse algorithms to recover unknown objects of interest from physical measurements. Deep learning has been used in computational imaging, typically in the supervised mode and in an end-to-end fashion. However, treating the machine learning algorithm as a mere black box is not the most efficient approach: the measurement formation process (a.k.a. the forward operator), which depends on the optical apparatus, is known to us, so it is inefficient to require the neural network to learn, even in part, the system physics. Likewise, prior knowledge of the class of objects of interest can be leveraged to make the training more efficient. The main theme of this thesis is to design more efficient deep learning algorithms with the help of physical and power-spectral priors. We first propose the Learning to Synthesize by DNN (LS-DNN) scheme, a dual-channel DNN architecture, with each channel designated to the low- and high-frequency band respectively, that splits, processes, and subsequently learns to recombine the low and high frequencies for better inverse conversion. Results show that the LS-DNN scheme largely improves reconstruction quality in many applications, especially in the most severely ill-posed cases. In this application, we implicitly incorporate the system physics through data pre-processing, and the power-spectral prior through the design of the band-splitting configuration. We then propose to use the Phase Extraction Neural Network (PhENN) trained with a perceptual loss, based on feature maps extracted from pre-trained classification neural networks, to tackle the problem of phase retrieval under low-light conditions. This essentially transfers the knowledge, i.e. features relevant to classification and thus corresponding to human perceptual quality, to the image-transformation network (such as PhENN). We find that the commonly defined perceptual loss needs to be refined for low-light applications, to avoid strengthened "grid-like" artifacts and achieve superior reconstruction quality. Moreover, we investigate empirically the interplay between the physical and content priors in using deep learning for computational imaging. More specifically, we investigate the effect of training examples on the learning of the underlying physical map and find that using training datasets with higher Shannon entropy better guides the training to correspond to the system physics, so that the trained model generalizes better to test examples disjoint from the training set. Conversely, if more restricted examples are used for training, the training can be undesirably guided to "remember" to produce outputs similar to those in the training set, making cross-domain generalization problematic. Next, we also propose to use deep learning to greatly accelerate optical diffraction tomography; unlike previous approaches that involve iterative optimization algorithms, we present significant progress towards recovering 3D refractive index (RI) maps from a single-shot angle-multiplexing interferogram. Last but not least, we propose to use cascaded neural networks to incorporate the system physics directly into the machine learning algorithm, while leaving the trainable architectures to learn to function as the ideal proximal mapping associated with efficient regularization of the data. We show that this unrolled scheme significantly outperforms the end-to-end scheme in low-light imaging applications.
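As a rough illustration of the unrolled, physics-incorporating cascade described in the last point, the sketch below alternates a data-consistency gradient step built from a known linear forward operator with a small trainable CNN standing in for the learned proximal mapping. This is a generic unrolling template under our own assumptions, not the thesis's exact architecture or operators:

```python
# Generic sketch of physics-informed unrolling: alternate a known-physics gradient
# step with a small trainable CNN acting as a learned proximal operator.
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Small CNN standing in for the learned proximal mapping."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)                        # residual refinement

class Unrolled(nn.Module):
    def __init__(self, forward_op, adjoint_op, n_iters=6):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op      # known system physics
        self.step = nn.Parameter(torch.full((n_iters,), 0.1))
        self.prox = nn.ModuleList([ProxNet() for _ in range(n_iters)])
    def forward(self, y):
        x = self.At(y)                                # physics-based initialization
        for k, prox in enumerate(self.prox):
            grad = self.At(self.A(x) - y)             # data-consistency gradient
            x = prox(x - self.step[k] * grad)         # learned proximal step
        return x
```

Here `forward_op` and `adjoint_op` are callables implementing a known linear measurement model and its adjoint on (batch, 1, H, W) tensors; training fits only the per-iteration step sizes and the proximal CNNs.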
Data Driven Sparse Computational Imaging With Deep Learning
Author : Robiulhossain Mdrafi
language : en
Publisher:
Release Date : 2022
Data Driven Sparse Computational Imaging With Deep Learning was written by Robiulhossain Mdrafi. It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released in 2022.
Typically, inverse imaging problems deal with the reconstruction of images from sensor measurements, where the sensors can take the form of any imaging modality such as camera, radar, hyperspectral, or medical imaging systems. In an ideal scenario, we could reconstruct the images by applying an inversion procedure to these sensors' measurements, but practical applications face several challenges: the measurement acquisition process is heavily corrupted by noise, the forward model is not exactly known, and non-linearities or unknown physics of the data acquisition play a role. Hence, the perfect inverse function needed for immaculate image reconstruction is not exactly known. To this end, in this dissertation, I propose an automatic sensing and reconstruction scheme based on deep learning within the compressive sensing (CS) framework to solve computational imaging problems. Here, I develop a data-driven approach to learn both the measurement matrix and the inverse reconstruction scheme for a given class of signals, such as images. This approach paves the way for end-to-end learning and reconstruction of signals with the aid of cascaded fully connected and multistage convolutional layers with a weighted loss function in an adversarial learning framework. I also extend this analysis by introducing data-driven models that classify directly from compressed measurements through joint reconstruction and classification. I develop a constrained measurement learning framework and demonstrate the higher performance of the proposed approach on typical image reconstruction and hyperspectral image classification tasks. Finally, I also propose a single data-driven network that can acquire and reconstruct images at multiple rates of signal acquisition. In summary, this dissertation proposes novel methods for data-driven measurement acquisition for sparse signal reconstruction and classification, for learning measurements under constraints imposed by the hardware requirements of different applications, and for producing a common data-driven platform for learning measurements to reconstruct signals at multiple rates. This dissertation opens the path to learned sensing systems. Future research can use these proposed data-driven approaches as pivotal factors to accomplish task-specific smart sensors in several real-world applications.
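A minimal sketch of the core idea above, jointly learning a measurement matrix and an inverse reconstruction network end-to-end, might look like the following; the image size, number of measurements, and decoder shape are illustrative assumptions, and the dissertation's adversarial, multistage-convolutional design and weighted loss are not reproduced:

```python
# Minimal sketch of jointly learning a measurement matrix and a reconstruction
# network for image-like signals within a compressive sensing setup.
import torch
import torch.nn as nn

class LearnedSensing(nn.Module):
    def __init__(self, n=32 * 32, m=256):             # m/n is the sensing rate
        super().__init__()
        self.measure = nn.Linear(n, m, bias=False)     # learned measurement matrix
        self.decode = nn.Sequential(                   # learned inverse mapping
            nn.Linear(m, 1024), nn.ReLU(),
            nn.Linear(1024, n))
    def forward(self, x):                              # x: (batch, 1, 32, 32)
        y = self.measure(x.flatten(1))                 # compressed measurements
        return self.decode(y).view_as(x)

model = LearnedSensing()
x = torch.rand(8, 1, 32, 32)
loss = nn.functional.mse_loss(model(x), x)             # trained end-to-end on image pairs
```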
Computational Imaging And Sensing In Diagnostics With Deep Learning
Author : Calvin Brown
language : en
Publisher:
Release Date : 2020
Computational Imaging And Sensing In Diagnostics With Deep Learning was written by Calvin Brown. It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released in 2020.
Computational imaging and sensing aim to redesign optical systems from the ground up, jointly considering both hardware/sensors and software/reconstruction algorithms to enable new modalities with superior capabilities, speed, cost, and/or footprint. Often, systems can be optimized with targeted applications in mind, such as low-light imaging or remote sensing in a specific spectral regime. For medical diagnostics in particular, computational sensing could enable more portable, cost-effective systems and in turn improve access to care. In the last decade, the increased availability of data and cost-effective computational resources, coupled with the commodification of neural networks, has accelerated and expanded the potential of these computational sensing systems. First, I will present my work on a cost-effective system for quantifying antimicrobial resistance, which could be of particular use in resource-limited settings, where poverty, population density, and lack of healthcare infrastructure lead to the emergence of some of the most resistant strains of bacteria. The device uses optical fibers to spatially subsample all 96 wells of a standard microplate without any scanning components, and a neural network identifies bacterial growth from the optical intensity information captured by the fibers. Our accelerated antimicrobial susceptibility testing system can interface with the current laboratory workflow and, when blindly tested on patient bacteria at UCLA Health, was able to identify bacterial growth after an average of 5.72 h, as opposed to the gold-standard method requiring 18-24 h. The system is completely automated, avoiding the need for a trained medical technologist to manually inspect each well of a standard 96-well microplate for growth. Second, I will discuss a deep learning-enabled spectrometer framework using localized surface plasmon resonance. By fabricating an array of periodic nanostructures with varying geometries, we created a "spectral encoder chip" whose spatial transmission intensity depends upon the incident spectrum of light. A neural network uses the transmitted intensities captured by a CMOS image sensor to faithfully reconstruct the underlying spectrum. Unlike conventional diffraction-based spectrometers, this framework is scalable to large areas through imprint lithography, conducive to compact, lightweight designs, and, crucially, does not suffer from the resolution-signal strength tradeoff inherent to grating-based designs.
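For intuition about how an encoder-chip spectrometer recovers a spectrum from transmitted intensities, here is a simple linear-model baseline (ridge regression against a calibrated response matrix) rather than the neural network described above; the names and shapes are illustrative assumptions:

```python
# Linear-model intuition for an encoder-based spectrometer: if T[i, j] is the
# calibrated transmission of encoder i at wavelength j, the measured intensities
# are approximately y = T @ s, and a regularized least-squares solve recovers s.
import numpy as np

def reconstruct_spectrum(y, T, alpha=1e-3):
    """y: (n_encoders,) intensities; T: (n_encoders, n_wavelengths) calibration."""
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y)
```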
Deep Learning For Biomedical Image Reconstruction
Author : Jong Chul Ye
language : en
Publisher: Cambridge University Press
Release Date : 2023-10-12
Deep Learning For Biomedical Image Reconstruction was written by Jong Chul Ye and published by Cambridge University Press. It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released on 2023-10-12 in the Technology & Engineering category.
Discover the power of deep neural networks for image reconstruction with this state-of-the-art review of modern theories and applications. The background theory of deep learning is introduced step-by-step, and by incorporating modeling fundamentals this book explains how to implement deep learning in a variety of modalities, including X-ray, CT, MRI and others. Real-world examples demonstrate an interdisciplinary approach to medical image reconstruction processes, featuring numerous imaging applications. Recent clinical studies and innovative research activity in generative models and mathematical theory will inspire the reader towards new frontiers. This book is ideal for graduate students in Electrical or Biomedical Engineering or Medical Physics.
Physics Informed Deep Learning For Terahertz Computational Imaging Phase Depth And Occluded Object Reconstruction
Author : Mingjun Xiang
language : de
Publisher:
Release Date : 2025
Physics Informed Deep Learning For Terahertz Computational Imaging Phase Depth And Occluded Object Reconstruction was written by Mingjun Xiang. It is available in PDF, TXT, EPUB, Kindle, and other formats, and was released in 2025.