
Artificial Intelligence Techniques in Nuclear Cardiology


The term 'artificial intelligence' (AI) was coined in 1956 by Professor John McCarthy at the Massachusetts Institute of Technology (MIT). While the ambitious goal of 'making computers think and behave like humans' is still largely unrealized (to date no computer has ever passed the Turing test—defined as the capability to engage undetected in a natural language conversation with a human party), human behavior has been rather effectively emulated by computers in game playing, as well as in select scientific and medical applications. For the purposes of this article, a software application will be considered 'intelligent' if it follows different courses of action based on the result of evaluation(s), e.g. expressed by 'if...then' statements in the code. By this criterion, an algorithm that filters an image using a filter of predefined characteristics would not qualify as intelligent, whereas an algorithm that tailors the amount and type of filtration to image quality and count statistics would.
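As a toy illustration of this distinction, the following sketch (Python; the count threshold and smoothing widths are arbitrary values chosen purely for illustration) adapts the amount of Gaussian smoothing to the image's total counts, whereas a non-'intelligent' filter would apply the same kernel to every study.

```python
from scipy import ndimage

def adaptive_smooth(image, count_threshold=2.0e6, sigma_noisy=2.0, sigma_clean=1.0):
    """Illustrative 'intelligent' filter: the degree of smoothing is chosen
    from the image's count statistics instead of being fixed in advance.
    All numeric parameters are arbitrary, for illustration only."""
    total_counts = float(image.sum())
    # Low-count (noisy) studies receive heavier smoothing than high-count studies.
    sigma = sigma_noisy if total_counts < count_threshold else sigma_clean
    return ndimage.gaussian_filter(image, sigma=sigma)

# A fixed-characteristics filter, by contrast, would always use the same sigma.
```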

A Schematic View of Nuclear Cardiology Imaging

Figure 1 presents a schematic view of the steps involved in the performance, analysis, and interpretation of a nuclear cardiology study—in this case a gated perfusion single photon emission computed tomography (SPECT) study. A standardized acquisition protocol produces a number of sets of projection images that are then filtered and reformatted into tomographic images by choosing appropriate reconstruction limits and by identifying the location of the left ventricle (LV) long axis in the three-dimensional (3-D) space. Perfusion quantification requires sampling of the LV myocardium's uptake at a number of locations, comparing it with the expected or 'normal' uptake pattern and producing numerical and graphical information as to the portion of the myocardium that is hypoperfused. Function quantification requires tracking (or modeling) the epicardial and endocardial surfaces of the LV as they move during the cardiac cycle, so as to obtain dynamic measurements of the LV from which both global (ejection fraction and diastolic parameters) and regional (myocardial wall motion and thickening) parameters of LV function can be derived. Parameters other than perfusion and function can be similarly quantified by determining the LV shape, its relative uptake compared with other organs (lung/heart ratio (LHR)) or its size under different study conditions (transient ischemic dilation ratio (TID)). Study assessment is a complex process that starts with quality control and the identification of imaging artifacts (often related to photon attenuation, patient/organ motion and gating errors) and involves ascribing perfusion and function abnormalities to specific coronary vascular territories, correcting the automated software where/if necessary, combining perfusion and functional information, and interpreting it in the appropriate clinical context.

The final output is a report that contains information on the individual patient's demographics and clinical history, the type of nuclear procedure performed, the quantitative, semi-quantitative and visual data that resulted from it, and an answer to the clinical question that caused the patient to be referred, with a prognostic statement included (if possible and appropriate).

It is apparent that each of the blocks and sub-blocks in Figure 1 accomplishes tasks that require some degree of 'thinking', whether it is distinguishing the LV from other body structures, deciding whether a certain feature or behavior is normal or abnormal, or making sure that the reported data is internally consistent. The two main classes of AI techniques that have traditionally been used to help scientists automate many of those tasks are now discussed.

Expert Systems and Neural Networks

An expert system (more appropriately referred to as a rule-based system) is a software application that uses rules to store knowledge, and 'rule chaining' to apply that knowledge to solving problems. A typical example is the fact that most adult human hearts comprise an ellipsoidally shaped LV with a cavity volume of 20-500 ml, even in severely pathological cases. Software that aims to identify the LV can therefore safely discard candidate structures that are too small, too large, or inappropriately shaped.1 A rule is generally expressed by 'if...then' clauses—if a specific condition is verified (or not verified), then a specific action will be taken. Key to building an effective expert system is using an appropriate representation to describe the task to be accomplished—in other words, if the rules are correctly chosen and combined, the expert system is usually able to perform its job well. The main advantage of the expert system approach is that if a scientist or clinician fully understands the problem at hand, and is capable of distilling his/her reasoning in solving it into a coherent set of rules, then the expert system can apply those rules in a consistently faster and more reproducible manner. The main disadvantage is that the problem must be simple enough to be defined by a reasonably small number of rules. The more rules that are introduced, the higher the likelihood that conflicts may result—it is of course possible to assign a priority to each rule, but as their numbers grow, the sheer number of interactive combinations may make the problem intractable, even when the entire decisional structure is visualized using a 'decision tree'.
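A minimal sketch of such a rule-based screen is given below; the shape test and its numeric limits are illustrative assumptions, not values taken from any published software.

```python
def plausible_lv_candidate(volume_ml, long_axis_mm, short_axis_mm):
    """Toy rule-based screen in the spirit of the text: discard candidate
    structures that are too small, too large, or implausibly shaped.
    The numeric limits are illustrative assumptions, not validated values."""
    rules = [
        20.0 <= volume_ml <= 500.0,           # cavity volume in the physiological range
        long_axis_mm >= short_axis_mm,        # ellipsoid: long axis at least as long as short
        long_axis_mm / short_axis_mm <= 2.5,  # aspect ratio not grossly elongated
    ]
    return all(rules)
```

Each rule is an explicit, inspectable statement, which is precisely what makes conflicts easy to trace while the rule set remains small.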

A neural network is a 'black box' approach that aims to replicate the human performance of a task by analyzing a large number ('training set') of examples of how an expert practically accomplishes that task. In engineering terms, enough input-output sample pairs are provided to permit the derivation of the 'transfer function' of the black box, so that the right output can be generated upon presentation of a new input. An example of this is software that automatically determines thresholds for semi-quantitative LV segmental perfusion or function scoring by maximizing its own agreement (kappa statistics) with expert scores in a sample patient population, and subsequently applies those thresholds to new patients.2
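The sketch below illustrates the training idea under simplifying assumptions: segmental uptake values and expert scores are supplied as arrays, Cohen's kappa is computed directly, and three cut-points defining a 0-3 score are found by exhaustive search so as to maximize agreement with the expert. The search grid and the 0-3 scale are assumptions made for illustration and do not reproduce the cited method.

```python
import numpy as np
from itertools import combinations

def cohen_kappa(a, b, n_classes):
    """Cohen's kappa between two integer score arrays."""
    cm = np.zeros((n_classes, n_classes))
    for i, j in zip(np.asarray(a), np.asarray(b)):
        cm[i, j] += 1
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_expected = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

def fit_score_thresholds(uptake, expert_scores, candidate_cuts):
    """Exhaustively search for three uptake cut-points (defining scores 0-3,
    where lower uptake means a more abnormal score) that maximize kappa
    agreement with expert segmental scores."""
    best_cuts, best_kappa = None, -1.0
    for cuts in combinations(sorted(candidate_cuts), 3):
        auto = 3 - np.digitize(uptake, cuts)   # low uptake -> high (abnormal) score
        k = cohen_kappa(expert_scores, auto, 4)
        if k > best_kappa:
            best_cuts, best_kappa = cuts, k
    return best_cuts, best_kappa
```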

The main advantage of neural networks is that, conceptually, they can be used to solve complex problems that are either not completely understood or not suitable for being described by a clear set of rules. The main disadvantage is that the approach 'I cannot quite explain how I do it, but watch me do it and you'll figure it out' is intrinsically limited. The expert trainer may not show the trainee a wide enough sample of cases, or may provide wrong interpretations. (Admittedly, this limitation is not unique to neural networks, as the wrong rules could be chosen in the expert system approach; however, at least the rules would be clearly spelled out, and could be more readily identified and corrected.) More importantly, computational speeds limit neural networks to a small number of electronic nodes grouped in a few layers, and their design is still widely regarded as a form of art. Replicating the complexity of the human brain using a neural network is impossible with current technology, because the human brain consists of more than 10^11 neurons, each of them with 10^5 connections, and its super-massive parallelization would not be achievable by any existing computer.

Case-based reasoning programs can be considered to be a sub-category of neural networks, because they also base their decisions on a sample set of inputs and corresponding outputs. However, no attempt to identify the transfer function of the decisional process is made, and the output corresponding to the sample input that most closely resembles the one at hand is chosen. It is obvious that the number and variety of input/output sets from which to choose is extremely critical to the success of this approach.
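In its simplest form, case-based reasoning reduces to a nearest-neighbor lookup, as in the short sketch below; Euclidean distance over a feature vector is an assumption made here for illustration, and real systems may use far more elaborate similarity measures.

```python
import numpy as np

def case_based_output(new_case, case_library):
    """Case-based reasoning sketch: return the stored output of the library
    case whose feature vector lies closest (Euclidean distance) to the new case.
    `case_library` is a list of (feature_vector, output) pairs."""
    features = np.array([f for f, _ in case_library], dtype=float)
    distances = np.linalg.norm(features - np.asarray(new_case, dtype=float), axis=1)
    _, output = case_library[int(np.argmin(distances))]
    return output
```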

AI in Nuclear Cardiology—A Brief Review

The great majority of AI techniques employed in nuclear cardiology are of the expert system type and, as Figure 1 suggests, can be applied to solving a wide variety of problems.

Immediately following cardiac SPECT acquisition, various quality control tasks can be accomplished entirely through rule-based software. For example, patient and/or heart motion has been detected by algorithms that track the LV center of mass or its epicardial boundary in the projection datasets, much like a human operator could by visual review of the rotating projection images.3 Gating errors and arrhythmia-induced anomalies have also been identified from automated analysis of the relative projection image counts and count patterns in the various intervals of a gated SPECT study.4
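A minimal sketch of the center-of-mass idea follows: it flags projection frames whose axial (vertical) center of mass deviates from the median by more than a tolerance, much as an expert would notice the heart 'jumping' in the rotating display. The array layout and the one-pixel tolerance are assumptions for illustration, not the published algorithms.

```python
import numpy as np
from scipy import ndimage

def detect_axial_motion(projections, max_shift_pixels=1.0):
    """Rule-based motion check: compute the axial center of mass of each
    projection frame and flag frames whose center deviates from the median
    by more than a (purely illustrative) tolerance.
    `projections` has shape (n_frames, rows, cols)."""
    centers = np.array([ndimage.center_of_mass(frame)[0] for frame in projections])
    deviation = np.abs(centers - np.median(centers))
    return np.where(deviation > max_shift_pixels)[0]   # indices of suspect frames
```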

Both the reconstruction/reorientation process and the quantification performed on the resulting tomographic images are based on isolating or 'segmenting' the heart from neighboring structures. The simplest form of image segmentation is thresholding, i.e. setting to zero all the pixels below a certain fraction of the image's maximal pixel count so as to reduce or eliminate extra-cardiac activity. Since the heart is often not the 'hottest' structure in the image, rule-based approaches to segmentation have been devised using adaptive thresholding or knowledge of the expected location, size, and shape of the heart. For example, heuristic criteria could require that the LV be in the upper right area of a projection image, have a size within the physiological range, or present a doughnut-like shape with a reasonable aspect ratio in a short axis image.1,5 In gated studies, isolation of the LV cavity or myocardium can also be effected by identifying and clustering pixels whose count value changes the most during the cardiac cycle, based on the assumption that count variations are a consequence of motion of activity-containing structures. Figure 2 shows an example of segmentation of the LV within a projection image.
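The fragment below sketches this kind of rule-based segmentation: threshold at a fraction of the maximal count, label connected regions, and keep the first region whose size falls within a plausible range. The threshold fraction and size limits are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_lv_candidate(image, threshold_fraction=0.5, min_pixels=50, max_pixels=2000):
    """Rule-based segmentation sketch: threshold, label connected regions,
    and keep a region of plausible LV size. All parameters are illustrative."""
    mask = image >= threshold_fraction * image.max()
    labels, n_regions = ndimage.label(mask)
    for label in range(1, n_regions + 1):
        size = int(np.count_nonzero(labels == label))
        if min_pixels <= size <= max_pixels:
            return labels == label          # boolean mask of the candidate LV
    return None                             # no region satisfied the rules
```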

Expert systems based on edge detection or pattern recognition algorithms are commonly employed to precisely outline the boundaries of the LV cavity or the myocardium. Edge detection may involve the Gaussian fitting of count profiles across the myocardium,2,5 moment analysis,6 gradient analysis of the count distribution in the myocardium7 or in the LV cavity,8 and the partial volume effect,9 sometimes with the assumption of fixed myocardial thickness.10 Rules may also include the constraint that the LV myocardial mass should be constant throughout the cardiac cycle,5 or strategies to ensure that edge detection succeeds even in the presence of severe perfusion defects—specifically, expert systems will often 'fill' discontinuities based on the geometry and smoothness of the immediately adjacent areas with normal (or less abnormal) perfusion.5
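A hedged sketch of the Gaussian-profile idea is shown below: a Gaussian is fitted to a count profile sampled across the myocardial wall, the fitted center is taken as the mid-myocardial point, and the endocardial and epicardial edges are placed a fixed number of standard deviations to either side. That multiplier is an assumption made for illustration, not a value from the cited methods.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def myocardial_center_and_edges(profile, k=1.0):
    """Fit a Gaussian to a count profile sampled across the myocardium and
    return (mid-wall position, inner edge, outer edge), with the edges placed
    at center +/- k*sigma. The factor k is an illustrative assumption."""
    x = np.arange(len(profile), dtype=float)
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)), 2.0, float(profile.min())]
    (amplitude, center, sigma, offset), _ = curve_fit(gaussian, x, profile, p0=p0)
    sigma = abs(sigma)
    return center, center - k * sigma, center + k * sigma
```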

The partial volume effect is particularly helpful in the assessment of systolic myocardial thickening from gated SPECT imaging, since it can be assumed that myocardial 'brightness' (in the image) and actual myocardial thickness are linearly proportional, and that the apparent myocardial 'brightening' between diastole and systole is a good proxy for myocardial thickening. Recently proposed rule-based algorithms in the area of myocardial perfusion SPECT are capable of 'intelligently' combining quantitative information derived from prone and supine studies,11 or registering image volumes so that a common set of contours can be applied to rest and stress (or to serial stress) studies.12
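Under that linearity assumption, percent thickening can be estimated directly from end-diastolic and end-systolic segmental counts, as in the brief sketch below (a simplification for illustration, not the full published approach).

```python
import numpy as np

def percent_thickening(counts_end_diastole, counts_end_systole):
    """Partial-volume-based sketch: for walls thinner than roughly twice the
    system resolution, apparent counts scale approximately linearly with
    thickness, so the diastole-to-systole count increase serves as a proxy
    for systolic wall thickening (returned in percent per segment)."""
    ed = np.asarray(counts_end_diastole, dtype=float)
    es = np.asarray(counts_end_systole, dtype=float)
    return 100.0 * (es - ed) / ed
```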

The automatic interpretation of nuclear cardiology studies is a complex and difficult task, and it is perhaps for this reason that a variety of expert systems, neural networks, and case-based reasoning approaches have been attempted in this area.13 The problem most frequently studied has been the identification of the presence and location of perfusion defects from perfusion SPECT images, or rather from a parametric representation of those images (polar maps, Fourier decomposition, or segmental uptake), which makes the input data more computationally manageable, particularly for neural network purposes.14 An interesting expert system approach has been described, using 253 heuristic ('if...then') rules created by experts in order to best match SPECT perfusion assessment to angiographic data—of note, some of the rules contain information on patient body size and probable attenuation artifacts, suggesting that patient history and other (non-perfusion) nuclear information may be incorporated into the diagnostic software in the future.15 Currently, it is fair to say that AI techniques should only be regarded as a 'second expert', and can provide computer-assisted, rather than completely automatic, interpretation of nuclear cardiology studies. The key function of a report generator is that of:

  • accepting input data such as patient demographics, medical history, and the qualitative and quantitative results of the nuclear and possibly stress electrocardiographic (ECG) study;
  • appropriately combining that input data; and
  • generating as output diagnostic and prognostic information, expressed in a format and a language that can be readily understood by the referring physician.

In addition to providing the input-output transfer function, an 'intelligent' function of an expert system for report generation is ensuring the consistency of the input data.

When a reviewing clinician modifies the results of the quantitative algorithm (for example, segmental perfusion scores could be modified if suspected to be secondary to attenuation artifacts) or contributes his/her own qualitative assessments, it is the report generator's task to determine if human intervention has rendered the data internally inconsistent. If that is the case, conflicting items of information can be flagged, appropriate errors and warning messages generated, and suggestions as to how to reconcile the data presented, ultimately improving report accuracy and turnaround time.
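A toy version of such consistency checking is sketched below; the field names and the two rules are invented for illustration, and a real report generator would carry a much larger and more nuanced rule set.

```python
def consistency_warnings(report):
    """Toy report-generator rules: inspect the combined (automated plus
    human-edited) data and emit a warning whenever items conflict.
    Field names and limits are illustrative assumptions."""
    warnings = []
    if report.get("summed_stress_score", 0) == 0 and report.get("conclusion") == "ischemia":
        warnings.append("Conclusion states ischemia, but the summed stress score is normal.")
    if report.get("ejection_fraction", 100) < 35 and report.get("wall_motion") == "normal":
        warnings.append("Severely reduced ejection fraction reported with normal wall motion.")
    return warnings
```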

While it is desirable that an expert system or neural network should be able to operate in a totally automated fashion, some may require a minor level of operator interaction to complete their task. The higher the degree of interaction required, the lower the reproducibility of the final results. Continuing research in the field of AI, coupled with the increase in computer processing speed, makes it likely that new and improved software applications will be developed in the future—this is expected to further enhance the accuracy, reproducibility, and overall efficiency of nuclear cardiology, a modality that has already seen remarkable growth over the past three decades.

References

  1. Germano G, Kavanagh P B, Su H T et al., Automatic reorientation of three-dimensional, transaxial myocardial perfusion SPECT images, J. Nucl. Med. (1995);36: pp. 1,107-1,114.
  2. Germano G, Erel J, Lewin H, Kavanagh P B, Berman D S, Automatic quantitation of regional myocardial wall motion and thickening from gated technetium-99m sestamibi myocardial perfusion single-photon emission computed tomography, J. Am. Coll. Cardiol. (1997);30: pp. 1,360-1,367.
  3. Matsumoto N, Berman D S, Kavanagh P B et al., Quantitative assessment of motion artifacts and validation of a new motion-correction program for myocardial perfusion SPECT, J. Nucl. Med. (2001);42: pp. 687-694.
  4. Nichols K, Dorbala S, DePuey E G, Yao S S, Sharma A, Rozanski A, Influence of arrhythmias on gated SPECT myocardial perfusion and function quantification, J. Nucl. Med. (1999);40: pp. 924-934.
  5. Germano G, Kiat H, Kavanagh P B et al., Automatic quantification of ejection fraction from gated myocardial perfusion SPECT, J. Nucl. Med. (1995);36: pp. 2,138-2,147.
  6. Goris M L, Thompson C, Malone L J, Franken P R, Modelling the integration of myocardial regional perfusion and function, Nucl. Med. Commun. (1994);15: pp. 9-20.
  7. Faber T L, Stokely E M, Peshock R M, Corbett J R, A model-based four-dimensional left ventricular surface detector, IEEE Trans. Med. Imaging (1991);10: pp. 321-329.
  8. Faber T L, Stokely E M, Templeton G H, Akers M S, Parkey R W, Corbett J R, Quantification of three-dimensional left ventricular segmental wall motion and volumes from gated tomographic radionuclide ventriculograms, J. Nucl. Med. (1989);30: pp. 638-649.
  9. Liu Y, Sinusas A, Khaimov D, Gebuza B, Wackers F, New hybrid count- and geometry-based method for quantification of left ventricular volumes and ejection fraction from ECG-gated SPECT: methodology and validation, J. Nucl. Cardiol. (2005);12: pp. 55-65.
  10. Faber T L, Cooke C D, Folks R D et al., Left ventricular function and perfusion from gated SPECT perfusion images: an integrated method, J. Nucl. Med. (1999);40: pp. 650-659.
  11. Nishina H, Slomka P, Abidov A et al., Combined supine and prone quantitative myocardial perfusion SPECT: method development and clinical validation in patients with no known coronary artery disease, J. Nucl. Med. (2006), in press.
  12. Slomka P J, Nishina H, Berman D S et al., Automatic quantification of myocardial perfusion stress-rest change: a new measure of ischemia, J. Nucl. Med. (2004);45: pp. 183-191.
  13. Wallis J W, Use of artificial intelligence in cardiac imaging - invited commentary, J. Nucl. Med. (2001);42: pp. 1,192-1,194.
  14. Lindahl D, Palmer J, Ohlsson M, Peterson C, Lundin A, Edenbrandt L, Automated interpretation of myocardial SPECT perfusion images using artificial neural networks, J. Nucl. Med. (1997);38: pp. 1,870-1,875.
  15. Garcia E V, Cooke C D, Folks R D et al., Diagnostic performance of an expert system for the interpretation of myocardial perfusion SPECT studies, J. Nucl. Med. (2001);42: pp. 1,185-1,191.