Machine Learning: Tensorflow Deep Learning Tutorial – “The heart size is normal.”

TensorFlow is an open source machine learning library developed and released by Google, built specifically for training neural networks to detect and decipher patterns and correlations. I would not say the software is analogous to human reasoning; its behaviour is more akin to teaching a dog to follow commands. However, the more data you have, the smarter your software becomes, making your deep neural network smarter than a dog, slightly dumber than a human, but probably more consistent than both. Today's tutorial will be based on reading chest radiographs.


The chest radiograph is easy to learn but hard to master. The basics of radiology are embedded within the chest radiograph itself, and it tests the reader on evaluating soft tissues, bones, air and sometimes fluid.

The very first lesson taught to new radiology trainees with regard to the chest radiograph is the heart size. Most people use a cardiothoracic ratio of 50-55% to determine whether the heart is enlarged, provided the film is performed PA erect. More experienced radiologists incorporate heart morphology and depth of inspiration, and sometimes commit to calling hearts on AP images normal in size or enlarged. We will do the same: we will teach our deep neural network how to read chest radiographs, and then discuss the practical implications of improving the software to assess heart size better.
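
As a quick worked example of the cardiothoracic ratio rule (the measurements here are illustrative, not taken from the dataset used later):

# Cardiothoracic ratio (CTR): maximal transverse cardiac diameter divided by
# maximal internal thoracic diameter, measured on a PA erect film.
cardiac_width_cm = 14.0    # illustrative measurement
thoracic_width_cm = 30.0   # illustrative measurement

ctr = cardiac_width_cm / thoracic_width_cm
print("CTR = %.2f" % ctr)                                # CTR = 0.47
print("Enlarged" if ctr > 0.5 else "Normal heart size")  # 50% cut-off; some readers use up to 55%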

Using TensorFlow requires a basic understanding of Linux/Mac, the command line, and a good knowledge of Python. This setup is not covered for Windows, but it can be run under a virtual machine on Windows, for which I recommend Ubuntu/Lubuntu. I have run the software on both Mac and Linux and will show images of both. I used Docker on Mac and pip on Linux; there are many other ways to install TensorFlow apart from Docker and pip. As there are more Mac users at the time of writing, this tutorial will cover using Docker on Mac.

Installation

First, install Docker. Download the DMG and drag the icon to your applications folder. Give the software rights to run and Docker should load in the background.


Next, open terminal and run the following command.

docker run -it gcr.io/tensorflow/tensorflow:latest-devel

You should see “root@xxxxxxx#”

Start Python by simply typing python.

python
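
To confirm that TensorFlow is importable inside the container, you can run a quick check in the interpreter (the version string will depend on the image you pulled):

import tensorflow as tf
print(tf.__version__)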

Exit Docker by pressing Ctrl+D twice (once to leave Python, once to leave the container).

Sample Chest Radiographs to teach your software

Remember, just like teaching a dog how to sit, the software requires you to tell it what ‘sit’ actually means. A few folders have to be created, and their names naturally serve as the classification labels.

Go to your home folder and create a folder called ‘tf_files’.
Next, download the samples from here. The link contains a zip file with a folder called ‘xray’. Place that folder inside the ‘tf_files’ folder.

The ‘xray’ folder contains two folders: ‘normal’, consisting of 27 chest radiographs with normal heart sizes, and ‘large’, consisting of 29 chest radiographs with large hearts. The samples are extracted from https://openi.nlm.nih.gov/.
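
As a quick sanity check of the folder layout (a small sketch assuming the container paths used later in this tutorial, i.e. /tf_files/xray/normal and /tf_files/xray/large):

import os

xray_dir = "/tf_files/xray"
for label in sorted(os.listdir(xray_dir)):
    label_dir = os.path.join(xray_dir, label)
    if os.path.isdir(label_dir):
        # expect 'large': 29 images and 'normal': 27 images
        print("%s: %d images" % (label, len(os.listdir(label_dir))))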

Downloading the full Tensorflow samples

Run Docker

docker run -it -v $HOME/tf_files:/tf_files gcr.io/tensorflow/tensorflow:latest-devel

Retrieve the training code. (This step is outdated: the retrain.py and HeartSize.py files are now included with the tf_files download and should run under tensorflow==1.15.)

cd /tensorflow
git pull

Training your software

Now that you have the Tensorflow training code, it is time to train your software.

python tensorflow/examples/image_retraining/retrain.py \
 --bottleneck_dir=/tf_files/bottlenecks \
 --how_many_training_steps 500 \
 --model_dir=/tf_files/inception \
 --output_graph=/tf_files/retrained_graph.pb \
 --output_labels=/tf_files/retrained_labels.txt \
 --image_dir=/tf_files/xray

This will take “some” time. The retrained graph and labels will be written to ‘retrained_graph.pb’ and ‘retrained_labels.txt’ in the ‘tf_files’ folder, as specified by the --output_graph and --output_labels flags above. You may adjust the --how_many_training_steps value to cut down the time; an example with fewer steps is shown below.
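
For example, a quick trial run with fewer steps would look like this (same paths as above, only the step count changed):

python tensorflow/examples/image_retraining/retrain.py \
 --bottleneck_dir=/tf_files/bottlenecks \
 --how_many_training_steps 100 \
 --model_dir=/tf_files/inception \
 --output_graph=/tf_files/retrained_graph.pb \
 --output_labels=/tf_files/retrained_labels.txt \
 --image_dir=/tf_files/xray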

Using the trained code

There is a specific Python script to run for classification. Fortunately for you, I have put it in my Dropbox and you may download it here. Place it in your ‘tf_files’ folder.

In Docker, run the command

python /tf_files/HeartSize.py /tf_files/xray/normal/images.jpg

The higher score for ‘normal’ means the software recognises the heart size as normal rather than large, which receives a very low score.

This runs the Python file on one of the images we trained it with. You can also give it a new image to read.
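
For reference, scripts like HeartSize.py usually follow the standard TensorFlow 1.x pattern for classifying an image with a graph retrained by retrain.py: load the labels and the retrained graph, feed in the JPEG bytes, and print the softmax score for each label. The sketch below illustrates that pattern; treat it as an approximation rather than the exact contents of the downloaded file.

import sys
import tensorflow as tf

image_path = sys.argv[1]  # e.g. /tf_files/xray/normal/images.jpg

# read the raw JPEG bytes
image_data = tf.gfile.FastGFile(image_path, 'rb').read()

# labels written by retrain.py, one per line (here: 'large' and 'normal')
label_lines = [line.rstrip() for line in tf.gfile.GFile("/tf_files/retrained_labels.txt")]

# load the retrained Inception graph
with tf.gfile.FastGFile("/tf_files/retrained_graph.pb", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # 'final_result:0' is the softmax output node added by retrain.py
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})

    # print every label from highest to lowest score
    for node_id in predictions[0].argsort()[::-1]:
        print('%s (score = %.5f)' % (label_lines[node_id], predictions[0][node_id]))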

Further improvements regarding reading heart size

A multilayered AI could first classify images as suboptimal (AP, supine or inadequate inspiratory effort) or optimal. Optimal images can then be easily classified as large or normal. Suboptimal images will be evaluated further, and in cases which are ambiguous, or where the ‘large’ and ‘normal’ scores are too close, we can classify them as ‘unable to assess’.

A simple equation would be:

HeartSizeDifference = HeartLargeScore - HeartNormalScore. If the difference falls between -0.3 and 0.3, a text is generated: “The heart size cannot be accurately assessed in this projection.”
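
Expressed in code, the rule could look something like this (a sketch: the 0.3 threshold and the wording of the two confident reports are illustrative and would need validation on real data):

def heart_size_report(large_score, normal_score, threshold=0.3):
    """Turn the two softmax scores into a report sentence."""
    difference = large_score - normal_score
    if -threshold < difference < threshold:
        # scores too close to call
        return "The heart size cannot be accurately assessed in this projection."
    # illustrative wording for the confident cases
    return "The heart is enlarged." if difference > 0 else "The heart size is normal."

print(heart_size_report(large_score=0.85, normal_score=0.15))  # confident call
print(heart_size_report(large_score=0.55, normal_score=0.45))  # too close to call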

Running in Linux

Realistically, the software is only viable on a Linux server with the assistance of an Nvidia GPU to speed up the process. As most RIS software runs on Windows with no or only a weak Nvidia graphics card, a better option is to transfer the images to a Linux server, generate the results there, and send the results back to the RIS using macro software.


Remember to activate the TensorFlow environment if you are using the pip installation.

Questions?

Feel free to ask any questions in the comments section or if you encounter any difficulties.

– Dr Poh Pei Ghim


2 Responses to Machine Learning: Tensorflow Deep Learning Tutorial – “The heart size is normal.”

  1. Paul says:

    Hi,

    Great stuff! I tried it on a small (25/77) training set of normal/cardiomegaly in dogs (I am a veterinarian) and it got pretty good accuracy (92% with 500 iterations), but I have the impression it is overfitting the data, because accuracy goes down to 65% on my validation set (20 normal + 20 cardiomegaly) and it gets worse when I increase the number of iterations.
    I know I don’t have enough cases, but I am working on it! Any hints on how to improve the training?
    Paul

  2. Dr Poh says:

    Hi Paul.

    It is true that the data is overfitted, but at 65% it means the system is ‘working’ and not just guessing at random. Overfitting reduces validation accuracy, so that is a good guide to the number of iterations you need.

    The best way to refine accuracy is to add more training images. Training images may be hard to obtain but working with other vets with common interests would be a good start.

    You can also improve accuracy by adding distortions. Adding --random_crop, --random_scale and --random_brightness to the training script will distort the images slightly, which artificially increases the number of training images. However, this can significantly increase training time, so do this only in the later phase, when you are sure the settings are optimised. An example command is shown below.
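
    For example (the percentage values here are only illustrative; tune them for your own data):

    python tensorflow/examples/image_retraining/retrain.py \
     --bottleneck_dir=/tf_files/bottlenecks \
     --how_many_training_steps 500 \
     --model_dir=/tf_files/inception \
     --output_graph=/tf_files/retrained_graph.pb \
     --output_labels=/tf_files/retrained_labels.txt \
     --image_dir=/tf_files/xray \
     --random_crop 10 \
     --random_scale 10 \
     --random_brightness 10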
