Caption: Reconstructed 3D medical image.
Idea and Introduction: Much of current 3D medical image reconstruction demands significant computational power and tends to produce only a "pretty picture" that adds little diagnostic information (source). Since edges often carry the important detail in an image, an edge-based approach to medical image reconstruction aims to increase the diagnostic potential of 3D medical images while reducing the computational cost of reconstruction. Note that medical images are typically acquired as 2D slices that sweep through a volumetric portion of the patient, so 3D views can be built from these 2D slices.
Summary: I devised an image processing algorithm, combining edge detection, median filtering, unsharp masking, and histogram normalization, that preprocesses the images for 3D reconstruction. The 3D reconstruction itself was performed with MATLAB's volshow() function using a maximum intensity projection. I tested the processing on a sample lung CT phantom. Although several improvements are still needed, this project demonstrated that rapid 3D medical imaging is possible.
The code written for this project is on my GitHub.
The fundamental step of this approach to 3D image reconstruction is to run a medical image through an edge detection algorithm. However, it is not ideal to use edge detection alone, since the image contains noise as well as edges that are not "easy" to detect. The goal is to increase the true positive rate while decreasing the false positive rate for detected edges. Therefore, both a low-pass filter (noise reduction) and a high-pass filter (sharpening) are run on the image along with the edge detection algorithm.
The noise reduction method used is median filtering, which replaces each pixel with the median intensity of its local neighborhood. This method was chosen because it is not computationally heavy.
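As a rough illustration, the median filtering step might look like this in MATLAB (a minimal sketch; the file name and the 3×3 window size are assumptions, not necessarily what my code uses):

```matlab
% Read a grayscale slice and apply a 3x3 median filter.
% medfilt2 replaces each pixel with the median of its neighborhood,
% suppressing salt-and-pepper style noise at low computational cost.
I = im2double(imread('slice.png'));   % 'slice.png' is a hypothetical input slice
I_denoised = medfilt2(I, [3 3]);      % the 3x3 window size is an assumption
```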
The sharpening method used is unsharp masking, a commonly used technique in which a blurred copy of the image is subtracted from the original and the difference is added back to emphasize fine detail.
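A minimal sketch of the sharpening step, assuming MATLAB's built-in imsharpen (which performs unsharp masking); the Radius and Amount values are illustrative only, and the sketch continues from the denoised slice above:

```matlab
% Unsharp masking: boost detail by adding back the difference between the
% image and a blurred copy of itself. imsharpen does this directly; the
% Radius and Amount values here are illustrative assumptions.
I_sharp = imsharpen(I_denoised, 'Radius', 2, 'Amount', 1);

% Equivalent manual form: original + amount * (original - blurred).
blurred  = imgaussfilt(I_denoised, 2);
I_manual = I_denoised + 1.0 * (I_denoised - blurred);
```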
Another aspect to consider for the overall image processing algorithm is which edge detection algorithm to use. Four common options were considered: Sobel, Prewitt, Laplacian, and Canny.
I decided to use the Sobel operator because it has a short computation time. The Sobel operation simply convolves the image with a pair of gradient kernels and is a reasonably accurate edge detector. While the operator can be sensitive to noise, the image first undergoes noise reduction (through the median filter), so noise is not a major issue.
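For reference, a sketch of the Sobel step, showing both the explicit gradient kernels and MATLAB's built-in edge() call (which thresholds the gradient into a binary edge map); this continues from the denoised slice above:

```matlab
% Sobel kernels for horizontal and vertical intensity gradients.
Gx = [-1 0 1; -2 0 2; -1 0 1];
Gy = Gx';

% Gradient magnitude from explicit convolution with the two kernels...
mag = hypot(conv2(I_denoised, Gx, 'same'), conv2(I_denoised, Gy, 'same'));

% ...or the built-in call, which thresholds the gradient into a binary edge map.
E = edge(I_denoised, 'sobel');
```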
The next step is to decide the order in which the overall algorithm should apply these steps. Since there are three operations (noise reduction, edge detection, and sharpening), there are six possible orderings the image could undergo: every permutation of the three steps.
The figure below shows the results of all six orderings:
Caption: A test of all six processing orders outlined above.
The above processes were run on this image:
Caption: The original image undergoing edge detection.
The second image in the first row shows that performing noise reduction first, then edge detection, then sharpening produces the best edge-detected image. This image also has the highest resolution and the largest number of correct edges, reducing missed edges (false negatives) and raising the true positive rate of the edge detection.
For comparison, this image shows the result of applying only the Sobel edge detector:
Caption: The original image undergoing edge detection solely through the Sobel operator.
By contrast, this image shows the result of noise reduction, edge detection, and sharpening applied in that order:
Caption: The original image after noise reduction, edge detection, and sharpening.
Thus the process of noise reduction, then edge detection, then sharpening was chosen as the final edge detection pipeline.
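Putting the chosen order together, here is a minimal sketch of a slice-processing function. This is not my exact implementation; in particular, keeping the Sobel gradient magnitude as a grayscale edge map (so the sharpening step has intensity detail to work on) is an assumption:

```matlab
function E = processSlice(I)
% processSlice  Noise reduction -> edge detection -> sharpening, in that order.
% Sketch only: the parameters and the use of the grayscale gradient magnitude
% (rather than a binary edge map) are assumptions.
    I = im2double(I);
    I = medfilt2(I, [3 3]);             % 1. noise reduction (median filter)
    [mag, ~] = imgradient(I, 'sobel');  % 2. edge detection (Sobel gradient)
    E = imsharpen(mat2gray(mag));       % 3. sharpening (unsharp mask)
end
```

This per-slice function is the building block that is later applied to every 2D slice before the volume is assembled.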
I initially thought that running my edge detection pipeline on sample medical images would be enough to generate a good 3D model. That was not the case. The 3D volume was generated with MATLAB's volshow() function using a maximum intensity projection.
Caption: Initial 3D volume generation attempt.
No major details are visible in the generated volume. This is because the processed images did not carry enough information: edges alone are simply not enough for the volume generation.
My first attempt to counteract this was to overlay the edge-detected images on the original images using a simple pixel-wise average of intensity values. However, this averaging lowers the overall intensity of the image, which reduces its brightness and contrast. The figure below shows the effect of this overlay and the resulting loss of brightness and contrast.
Caption: Edge detection (of the original image) overlaid on the original image using pixel-wise averaging.
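A sketch of the pixel-wise averaging overlay described above; the halving of intensities is exactly what causes the brightness and contrast loss:

```matlab
% Pixel-wise averaging of the original slice and its processed edge map.
% I_original is the unprocessed slice; E is its edge map (see processSlice above).
% Each output pixel becomes (original + edges) / 2, so the overall intensity,
% and with it the brightness and contrast, drops.
overlay = (im2double(I_original) + double(E)) / 2;
```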
After overlaying the edge-detected image, a histogram equalization is applied. Histogram equalization spreads the histogram of an image over a set intensity range (generally 0 to 255) to increase its contrast; the contrast increases because equalization pushes the intensity distribution as close to uniform as possible.
The histogram equalization is also made adaptive by searching for the equalization value that gives the best contrast.
The histogram equalization is done with the following steps:
The identification of the best equalization value is conducted as follows:
This intensity value is chosen so that the least amount of data is lost when the image with the calculated histogram replaces an image having the ideal histogram.
However, after the adaptive equalization was conducted, the brightness of the image did not increase, so a "non-adaptive" equalization was applied again with an equalization value of 216. This value was chosen because it is about 85% of the maximum image intensity, producing an image that is bright but not "too" bright. The figures below show this equalization process.
As shown by the images below, the adaptive equalization increased the contrast of the overlaid image, and the renormalization to 216 then raised its brightness without losing that added contrast.
Caption: Original image.
Caption: Edge detected image.
Caption: Edge-detected image overlaid on the original image.
Caption: Adaptive equalization of the above image.
Caption: Renormalized image (to 85% of the maximum image intensity) of the above image.
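A hedged sketch of the equalization and renormalization steps, assuming MATLAB's histeq. The adaptive search for the best equalization value is omitted, and interpreting the "equalization value of 216" as rescaling the maximum intensity to 216 is my assumption:

```matlab
% Convert the overlaid image to 8-bit and equalize its histogram.
overlay8 = im2uint8(overlay);
eq = histeq(overlay8);   % spreads intensities toward a flat histogram, raising contrast

% "Non-adaptive" renormalization, interpreted here (an assumption) as rescaling
% so the brightest pixel lands at 216, i.e. roughly 85% of the 8-bit maximum.
renorm = uint8(double(eq) * (216 / double(max(eq(:)))));
```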
The final step of the processing algorithm was thresholding out the noise. Because of the equalization, the signal and noise components of the image become well separated in intensity, so the noise can be thresholded out at an intensity value just below the maximum intensity of each processed image.
The following images show this thresholding applied to a slice of a lung phantom image.
Caption: The original image.
Caption: The thresholded image.
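A minimal sketch of the noise thresholding, continuing from the renormalized slice above; the margin below the slice maximum is an assumed value, not the exact one used:

```matlab
% Zero out everything below a cutoff just under the slice maximum.
% After equalization the signal sits near the top of the intensity range,
% so this removes most of the residual noise.
margin = 5;                          % assumed margin in 8-bit intensity levels
cutoff = max(renorm(:)) - margin;
thresholded = renorm;
thresholded(thresholded < cutoff) = 0;
```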
After completing the development of the processing algorithm, the 3D volume was generated as follows: each 2D slice is run through the full processing pipeline described above, the processed slices are stacked into a volume, and the volume is rendered with volshow() using a maximum intensity projection.
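A hedged sketch of that process, assuming the slices are DICOM files in a local folder. processFullPipeline is a hypothetical wrapper around all the slice-processing steps above, and the volshow() option name for maximum intensity projection varies between MATLAB releases, so treat it as illustrative:

```matlab
% Process every slice and stack the results into a 3D array.
files  = dir('lung_phantom/*.dcm');               % hypothetical DICOM folder
slices = cell(1, numel(files));
for k = 1:numel(files)
    raw = mat2gray(dicomread(fullfile(files(k).folder, files(k).name)));
    % processFullPipeline is a hypothetical wrapper around the full slice
    % pipeline described above (denoise, edge, sharpen, overlay, equalize,
    % renormalize, threshold).
    slices{k} = processFullPipeline(raw);
end
V = cat(3, slices{:});

% Render the stacked slices with a maximum intensity projection.
volshow(V, 'Renderer', 'MaximumIntensityProjection');
```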
The following images show the results of the volume generation for a lung phantom from the Cancer Imaging Archive (TCIA):
Caption: A view of the generated 3D volume.
Caption: A view of the generated 3D volume.
The following video shows the 3D volume being generated:
Caption: A video of the 3D volume generation.
And finally, this video shows the generated 3D volume being viewed:
Caption: A video of the generated 3D volume being viewed.
Positives:
Negatives:
Future Improvements: