Friday, June 15, 2018

Week 1

My clinical mentor is Dr. Ajay Gupta, a professor of radiology. This week I shadowed him in the neuroradiology reading room and watched how he and the other doctors read magnetic resonance images and make diagnoses.

Firstly, after several doctors introduced themselves in the reading room, I was surprised by how long the training is to become a qualified doctor. A medical student needs four years to earn a bachelor's degree and four more years in medical school to earn an MD, and then still needs about four years in the hospital as a resident before becoming an attending physician. I was also astonished by their understanding of the images. After going through thousands of images of healthy and unhealthy people, they can map images to anatomical structures much better than I can. Although I learned a lot about the generation of MR images and plenty of anatomy in my undergraduate study, I don't really know what most of our organs look like under these imaging tools (except perhaps the brain, since that's my research interest). They can point out the positions of the 12 cranial nerves on MR images, which I had never even noticed before. They can also pick up very inconspicuous details (for example, a hyperintensity of one or two pixels, or a small asymmetry between the left and right brain) and judge whether that area is normal, while I can't even distinguish these features from the background.

Secondly, I learned about the difference between clinical MRI and research MRI. Before I went into the reading room, I thought that for a patient only one or two sequences would be acquired, depending on the disease, just as in research we often have only paired T1 and T2 images. But in the reading room I found that around 20 sequences are used in one MR exam. Basically, they scan the whole body (or whole brain) with one or two sequences, and for certain regions of interest they use additional sequences to get more detailed information. A contrast agent is also sometimes given to enhance the images. These images are then registered (I don't know the exact method they use) so they can be shown at the same time on the doctor's workstation.
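I don't know what registration method the clinical software actually uses, but a toy illustration of the basic idea — recovering a simple translation between two images of the same anatomy via FFT-based cross-correlation — might look like this (everything here is a made-up example using only numpy, not the real clinical algorithm):

```python
import numpy as np

def estimate_shift(fixed, moving):
    """Estimate the integer (row, col) translation aligning `moving`
    to `fixed`, using FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size back to negative values.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Toy "anatomy": a bright square on a dark background.
fixed = np.zeros((64, 64))
fixed[20:30, 25:35] = 1.0
# A second "sequence" showing the same anatomy, translated.
moving = np.roll(np.roll(fixed, -5, axis=0), 3, axis=1)

print(estimate_shift(fixed, moving))  # → (5, -3)
```

Real clinical registration has to handle rotation, scaling, and even non-rigid deformation across sequences, so this translation-only sketch is only the very first step of the idea.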

After I watched the doctors read through several exams, I found the procedure quite repetitive and tiring, so I started thinking about designing an algorithm to handle those images automatically. Their long training in reading images of healthy and unhealthy bodies is a lot like supervised learning, and the end goals — image classification (healthy or not) and segmentation (regions of interest) — are standard problems in machine learning. Deep convolutional neural networks can match or even exceed human performance on some image classification and segmentation benchmarks, so if we could collect enough images together with the doctors' diagnoses, we could train a neural network to help doctors with these repetitive tasks and reduce their burden. The biggest obstacles here may be data privacy and security.
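As a toy sketch of that supervised-learning analogy: below, a single sigmoid neuron (the tiniest possible stand-in for a CNN) is trained on synthetic "scans" where the abnormal ones contain one small hyperintense pixel, like the subtle findings the radiologists spot. All the data and parameters here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scan(abnormal):
    """Synthetic 8x8 'scan': background noise, plus one hyperintense
    pixel at a random location if abnormal."""
    img = rng.normal(0.0, 0.1, size=(8, 8))
    if abnormal:
        r, c = rng.integers(0, 8, size=2)
        img[r, c] += 2.0  # the one-pixel hyperintensity
    return img.ravel()

# "Doctor-labeled" training set: 200 scans, alternating normal/abnormal.
y_train = np.array([i % 2 for i in range(200)], dtype=float)
X_train = np.stack([make_scan(y) for y in y_train])

# One sigmoid neuron trained by gradient descent on the labeled scans.
w, b = np.zeros(64), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))   # predicted probabilities
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

# Evaluate on fresh, unseen scans.
y_test = np.array([i % 2 for i in range(100)], dtype=float)
X_test = np.stack([make_scan(y) for y in y_test])
pred = (1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5
print("held-out accuracy:", (pred == y_test).mean())
```

A real system would use a deep convolutional network on full 3-D volumes and far more expert-labeled data, but the shape of the problem is the same: labeled examples in, diagnosis out.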

On Friday, I attended our first immersion term meeting. Dr. Prince introduced more imaging tools, such as X-ray, CT, ultrasound, and MRI, and showed images of different diseases (like bone fractures).
