Article

Information Technology Data Standards in Cardiology: What, Why, and How Come

Abstract

Computers are the necessary substrate for everything that occurs in cardiology and all of medicine, yet computer technology has been implemented in a piecemeal manner. Multiple single solutions have been introduced to solve individual problems, with no coherent planning for integration and communication across the many computer platforms involved. Data and data elements are the building blocks of what we call information. To make full use of the enormous information-handling capacity that computers offer, data elements must be precisely defined and stored in a uniform manner. This requires data standards. In cardiology, national professional societies led by the American College of Cardiology are developing data standards, along with the necessary technical specifications, that will help achieve the desired goal of a fully interoperable health information network.

Disclosure:The author has no conflicts of interest to declare.

Received:

Accepted:

Correspondence Details:H Vernon Anderson, Cardiology Division, UTHSCH, 6431 Fannin St, Suite 1.246, Houston, TX 77030, USA. E: h.v.anderson@uth.tmc.edu

Copyright Statement:

The copyright in this work belongs to Radcliffe Medical Media. Only articles clearly marked with the CC BY-NC logo are published with the Creative Commons by Attribution Licence. The CC BY-NC option was not available for Radcliffe journals before 1 January 2019. Articles marked ‘Open Access’ but not marked ‘CC BY-NC’ are made freely accessible at the time of publication but are subject to standard copyright law regarding reproduction and distribution. Permission is required for reuse of this content.

By now everyone is aware of the involvement of computers in cardiology. This involvement ranges from imaging (image capture, display, and management), to tabulating simple clinical data items, to enormous electronic health record systems. Computers are the necessary substrate for everything that occurs in cardiology, not to mention the rest of the healthcare system. However, for a great variety of reasons, information technology (IT) in cardiology, and in medical care in general, has been implemented in a very piecemeal manner. This has occurred because small, isolated problems are solved one at a time, with a single, non-integrated solution developed for each individual problem. Each solution is typically developed by a single commercial vendor, mostly using proprietary equipment and software. As time passes, what results is an enormous, confused mess of isolated, stand-alone systems, each unable to communicate with the others (or, at best, able to communicate only with a few). This is the current state of affairs.

We all conceptualize cardiology, and healthcare in general, as dependent on data. Data are conceived of as specific data elements, which are the building blocks of information. This is somewhat analogous to the way that complex organic molecules are composed of individual elements called atoms. The difficulty for us is that, unlike atoms, data and data elements have lacked precise definitions and exact structures. To make matters worse, the way one computer system stores and handles data elements may be, and most often is, completely different from the way another computer system does. Couple that with the proliferation of isolated computer systems, and the result is chaos.

So the ‘what’ of data standards in cardiology involves the identification and definition of the data elements that make up the much larger structure of what we call information. Data standards require both the choice of elements and the definitions of the terms within them. For some items this will be quite simple; for certain numeric data, for example, it means making certain that all weights are recorded in either pounds or kilograms, and all heights in either inches or centimeters, ensuring that the number scales are not intermingled. Beyond that, clinicians must be certain that when a term is used, it means the same thing everywhere: we need precise, written definitions of terms like ‘myocardial infarction,’ ‘hypertension,’ ‘diabetes,’ and ‘chronic lung disease.’ The complexity increases when we realize that we also need categories or classes included in some data elements. A well-known example of this is the New York Heart Association (NYHA) Functional Class. This simple scheme has four classes, I–IV, and each class has a clinical definition. For the element to be maximally useful and computable (interoperable), we must adhere to the precise definitions of these four classes, and everyone must use them as written. Otherwise, when captured into a computer dataset, that data element will be unreliable. Another element might classify mitral regurgitation as none, mild, moderate, or severe; the same rules apply there too. So IT data standards begin with a choice of elements to be used, along with clear element definitions that are used by all.
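As a purely illustrative sketch (in Python, with hypothetical names not drawn from any published standard), standardized categorical elements can be pinned down so that only the agreed values can ever be stored:

```python
from dataclasses import dataclass
from enum import Enum


class NYHAClass(Enum):
    """New York Heart Association functional class, restricted to the four defined classes."""
    I = "I"
    II = "II"
    III = "III"
    IV = "IV"


class MitralRegurgitation(Enum):
    """Mitral regurgitation severity, restricted to the agreed categories."""
    NONE = "none"
    MILD = "mild"
    MODERATE = "moderate"
    SEVERE = "severe"


@dataclass
class CardiacAssessment:
    """One structured record; by convention, weight is always stored in kilograms."""
    patient_id: str
    weight_kg: float
    nyha_class: NYHAClass
    mitral_regurgitation: MitralRegurgitation


# Example: a value outside the agreed categories simply cannot be represented.
record = CardiacAssessment("12345", 82.5, NYHAClass.III, MitralRegurgitation.MODERATE)
```

Because the permitted values are fixed in the element definition, a free-text entry such as ‘moderate-to-severe’ cannot be recorded, and every receiving system interprets the stored value in exactly the same way.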

When confronted with the request to list specific elements and use standard definitions of terms, many cardiologists ask: why? The common (mis)conception is that computers are smart enough to decode the words we use and sort them into the meaning we want, or meant, or intended. But this is neither true nor possible. Most material stored as information in electronic health records is not stored as usable, computable, and exchangeable (that is, interoperable) data items. Most clinical material is stored in prose text files, for example as portable document format (PDF) files or equivalent file structures. This includes notes, test reports, and procedure reports. The data, or the true information, in such files are not contained in single words or numbers, but exist in higher-order human-language structures such as sentences, paragraphs, or even several paragraphs. Despite their complexity and sophistication, computers have not been able, and likely never will be able, to comprehend human prose language and extract the kind of computable data elements that we, as clinicians, all want to be able to use. This is especially so if every prose text file contains multitudes of data elements with non-standardized definitions.
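The contrast can be made concrete with a small, hypothetical example (again in Python, with made-up field names): the same clinical facts stored as prose, which a computer can only hold and display, versus as discrete data elements, which it can filter, count, and combine:

```python
# The same information, first as prose: the computer can store and display it,
# but cannot reliably answer questions about it.
note = "67-year-old man with NYHA class III heart failure and moderate mitral regurgitation."

# ...and then as discrete, standardized data elements: directly computable.
record = {
    "age_years": 67,
    "sex": "male",
    "nyha_class": "III",
    "mitral_regurgitation": "moderate",
}

# A question asked of structured records is a one-line operation; the same question
# asked of millions of prose notes would require reading every one of them.
patients = [record]  # in practice, records pooled from many systems
advanced_hf = [p for p in patients if p["nyha_class"] in ("III", "IV")]
print(len(advanced_hf))
```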

So the ‘why’ of IT data standards in cardiology involves the redirection away from prose text, with its higher-order human-language structures, toward a different model based on the computational power and speed of electronic systems. The trade-off we must confront is accepting the discipline of working with standardized data elements, structured data files, and fixed formats in exchange for the combining power, calculating power, communication power, and interoperability of computer networks. The prose text, human-language model of data and information simply will not work on the large-scale, integrated computer networks that have been developed. The good news is that computers themselves can help us with the transition away from prose text to the digital model.

When railroads first began to be built, in the period 1820–70, every railroad used a different gauge (i.e. width) of track. There were no track standards, and railroad builders designed their tracks to solve individual, local problems of terrain and freight loads. As the rail networks grew, they began to bump into one another. Because the track gauges did not match, the engine and cars of one railroad could not be transferred onto the tracks of another. At the junctions, all the passengers and freight had to be unloaded by hand from one train and moved onto the next. This, of course, was terribly inefficient. In the US, it was only with the construction of the transcontinental railroads during the 1860s–80s that a standard track gauge came into being, and from that a national transportation network developed. We face the same problem with data standards for healthcare IT today.

The ‘how come’ of IT data standards in cardiology is the same as the railroad track gauge issue. We have to adapt our work to take advantage of the astronomical speed and processing advantages of computer networks. These computer systems and networks cannot become merely giant storage depots of millions or billions or even trillions of prose text documents, each containing information in the form of higher-order human-language structures that cannot be decoded or deconstructed into atomic data elements for transmission, reception, combining, calculating, and display. No person, and no machine, can sort through and understand millions of pages of prose text documents. To derive the maximal benefit from computer networks, and indeed to avoid drowning in a rapidly rising ocean of prose text documents that cannot be compiled into a coherent whole, the medical community must define the data elements, write the definitions, and store the data, which is the true information, as atomic elements. These atomic data elements can then be transmitted, received, combined, analyzed, calculated, and displayed. Just imagine for a moment trying to do modern polymer chemistry without the precisely structured periodic table of chemical elements. It would not be possible.

The key to the future, then, is to understand that we in the clinical community cannot continue to do what we have been doing. Instead of the existing Tower of Babel, with all its poorly defined (but beautiful!) prose text language, we will have to create the precise atomic data elements needed to describe the clinical attributes we want described. Once we have these atomic data elements, and can use them to model the more complex clinical conditions we confront, we will be able to take advantage of the enormous capabilities and opportunities that computer systems can offer. Yes, the task is enormous, but the goal is achievable, and worthwhile. Just as we all learned the alphabet, to read and write, and to add, subtract, multiply, and divide, we will all learn how to define precise clinical terms and create unique atomic data elements that can be built into larger and more complex clinical data structures. These data elements will become the fundamental building blocks that can be stored in a uniform manner in computers, and then transmitted, combined, and analyzed. The alternative is an unpleasant and ultimately fruitless future of billions and billions of prose text files that will strain the human mind to store, index, read, and understand.

In addition to this paradigm shift away from prose text, we will have to redesign workflow processes to build structured clinical data collection into routine activities. This has been referred to as the digitization of healthcare.1 Several new and evolving technologies will make it possible to collect patient data, including physician and other provider inputs, at virtually every step in the care process. Small wearable monitors with wireless (Wi-Fi) connectivity, barcode labels and scanners, radiofrequency identification (RFID) chips, wireless tablet computers, even personal cellphones, and many other devices can be, and will be, integrated into patient care. Exactly when, where, and what data should be collected will have to be determined, and that will be part of the challenging future that awaits us. The possibilities are nearly infinite, so careful design will be required. Cardiac procedures such as catheterizations and interventions are likely to be tackled first because of their more limited scope of activities and controlled environment. Cardiac electrophysiology procedures should be close behind. As we learn more about how to reconfigure the procedure-based world, with team-based structured data capture at multiple points along the care pathway, it will then be possible to build on that knowledge and move to other areas within the hospital environment, such as routine bedside care and consultations. Eventually this can and will be carried into the outpatient office environment.
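As a speculative sketch of what team-based structured capture during a procedure might look like (again in Python, with invented element and device names), inputs from different devices and team members could accumulate directly into a single computable record rather than into a dictated prose report:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class ProcedureObservation:
    """One structured data point captured during a catheterization."""
    element: str        # standardized element name, e.g. "contrast_volume_ml"
    value: float
    source: str         # capturing device or role, e.g. "hemodynamic_monitor"
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ProcedureRecord:
    """A team-based structured report assembled from inputs at each step of care."""
    patient_id: str
    procedure: str
    observations: List[ProcedureObservation] = field(default_factory=list)

    def capture(self, element: str, value: float, source: str) -> None:
        self.observations.append(ProcedureObservation(element, value, source))


# Inputs arrive from the monitor, the imaging system, and a barcode scanner,
# and land in one record that can later be transmitted, combined, and analyzed.
case = ProcedureRecord(patient_id="12345", procedure="percutaneous_coronary_intervention")
case.capture("contrast_volume_ml", 110.0, "hemodynamic_monitor")
case.capture("fluoroscopy_time_min", 12.4, "imaging_system")
case.capture("stent_diameter_mm", 3.0, "barcode_scanner")
```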

There are many groups currently at work creating the IT structures we will need in the future. At the federal level, the Office of the National Coordinator for Health Information Technology has outlined a roadmap for a fully interoperable healthcare network.2 Within cardiology, some of the most productive work is being done by professional societies such as the American College of Cardiology (ACC) and its National Cardiovascular Data Registry (NCDR) program. The large NCDR registries have been based on standardized data elements and data dictionaries from the very beginning.3,4 Additionally, collaborative efforts with the American Heart Association to create comprehensive data standards for specific clinical cardiovascular areas have been underway for many years.5 The Society of Thoracic Surgeons (STS) has developed a standardized dataset for adult cardiac surgery patients.6 Minimal yet comprehensive datasets needed to describe overall cardiovascular conditions for entire patients, rather than just one limited and specific area, have been developed and are undergoing continuing review and refinement.7,8 Precisely defined clinical cardiovascular endpoints for clinical trials have been developed, and may become the basis for reporting to the Food and Drug Administration.9 In addition, as mentioned above, work is ongoing to develop standardized, team-based reporting for procedures performed in cardiac catheterization laboratories.10

The vendor community that provides the necessary hardware and software for IT has a fundamentally important role to play in this endeavor. A partnership organization of 135 members, composed of IT companies, government and nonprofit organizations, professional societies, healthcare provider firms, and standards development organizations, has been formed. Known as Integrating the Healthcare Enterprise (IHE), it facilitates the collaborations needed to improve the way computer systems in healthcare share information.11 IHE promotes the coordinated use of established standards to enable care providers to use information more effectively.

All of the efforts described here will continue to grow and advance in the years to come. While IT data standards in cardiology may appear at first glance to be an obscure and inscrutable subject, they are central to the development of healthcare computer networks. It will be a long road to travel, but in the end, with help, guidance, and leadership from clinicians, the goal of a fully interoperable electronic health information system will be achieved.

References

  1. Steinhubl SR, Topol EJ. Moving from digitalization to digitization in cardiovascular care. J Am Coll Cardiol 2015;66:1489–96.
  2. The Office of the National Coordinator for Health Information Technology. Connecting Health and Care for the Nation. A Shared Nationwide Interoperability Roadmap. Available at: https://www.healthit.gov/sites/default/files/hie-interoperability/nationwide-interoperability-roadmap-final-version-1.0.pdf (accessed February 5, 2016)
  3. Anderson HV, Shaw RE, Brindis RG, et al. A contemporary overview of percutaneous coronary interventions: the American College of Cardiology – National Cardiovascular Data Registry (ACC-NCDR). J Am Coll Cardiol 2002;39:1096–103.
  4. Shaw RE, Anderson HV, Brindis RG, et al. Development of a risk adjustment mortality model using the American College of Cardiology National Cardiovascular Data Registry (ACC-NCDR) experience: 1998–2000. J Am Coll Cardiol 2002;39:1104–12.
  5. Hendel RC, Bozkurt B, Fonarow GC, et al. ACC/AHA 2013 methodology for developing clinical data standards: a report of the American College of Cardiology/American Heart Association task force on clinical data standards. J Am Coll Cardiol 2014;63:2323–34.
  6. The Society of Thoracic Surgeons. Adult Cardiac Surgery Database. Available at: http://www.sts.org/national-database/database-managers/adult-cardiac-surgery-database (accessed January 24, 2016)
  7. Weintraub WS, Karlsberg RP, Tcheng JE, et al. ACCF/AHA 2011 key data elements and definitions of a base cardiovascular vocabulary for electronic health records. J Am Coll Cardiol 2011;58:202–22.
  8. Anderson HV, Weintraub WS, Radford MJ, et al. Standardized cardiovascular data for clinical research, registries, and patient care. A report from the Data Standards Workgroup of the National Cardiovascular Research Infrastructure Project. J Am Coll Cardiol 2013;61:1835–46.
  9. Hicks KA, Tcheng JE, Bozkurt B, et al. 2014 ACC/AHA key data elements and definitions for cardiovascular endpoint events in clinical trials. J Am Coll Cardiol 2015;66:403–69.
  10. Sanborn TA, Tcheng JE, Anderson HV, et al. ACC/AHA/SCAI 2014 Health Policy Statement on Structured Reporting for the Cardiac Catheterization Laboratory. J Am Coll Cardiol 2014;63:2591–623.
  11. IHE International website. Available at: www.ihe.net (accessed February 5, 2016)