Guidelines and Recommendations for Digital Echocardiography:
A Report from the Digital Echocardiography Committee of the American Society of Echocardiography
Appendix A: Historical development
Echocardiography evolved as an analog technique, with acoustic signals being amplified and displayed on an oscilloscope or recorded onto strip-chart paper. One of the first applications of digital (computerized) technology came in the late 1970s, with the development of digital scan conversion. For the first time, the polar data set originating in a two-dimensional echocardiograph could be displayed in a raster format, allowing it to be recorded on videotape. It also meant that at certain points within the processing stream of the machine, the image existed in a purely digital form and potentially could be stored in that format.
The initial impetus for digital storage and review of echocardiograms came in the late 1970s from Harvey Feigenbaum, who sought a way to make computer-assisted measurements by overlaying quantitative electronic calipers on a frozen echo image. Working with John Freeland and Roger Camp, Feigenbaum utilized a Sony videodisk to present a crisp frozen image to the reviewer, without the jitter of videotape machines of that era. When Sony discontinued the videodisk, the group turned to the embryonic technology of computer frame grabbers, capable of capturing the image from the video port of the echo machine (or secondarily from videotape) and storing it in digital format. Although developed simply to facilitate accurate measurements, the advantages of digital echo review itself soon became manifest, particularly for the nascent technique of stress echocardiography. Because of limitations in computer storage at the time, these images were stored at relatively low resolution, included only systole, and were rendered in only 64 shades of gray. The specific storage format was relatively unimportant, since the same machine performed digitization and display, and there was no attempt at interoperability between systems. However, when the advantages of digital review became widely apparent, there was a strong push from both users and vendors to develop an industrywide formatting standard that would allow all echo machines and digital review stations to work with each other.
In response to this demand, the American Society of Echocardiography formed the Digital Formatting Committee in 1992, which worked closely with the National Electrical Manufacturers Association to develop the Digital Imaging and Communications in Medicine (DICOM) formatting standard for all of medical imaging, including echocardiography. The salient details of the standard will be described below, but its mere presence was enough to induce all major manufacturers to develop echo machines capable of both computer disk and network storage of images formatted in the DICOM standard. All that remained was for the natural evolution in computer cost-effectiveness to reach a point at which the enormous demands of digital storage and review in echocardiography could be met economically.
Appendix B: Image acquisition
The most efficient way to obtain true digital echocardiographic data is with a contemporary cardiac ultrasound machine that enables direct output of digital images and loops using a standard network protocol and the DICOM format. Fortunately, all of the major manufacturers have instruments on the market today that provide such digital output, though their implementation details may differ. With direct digital output, maximal fidelity is maintained, and calibration elements are stored directly with the DICOM data, facilitating quantitation on the review workstation. The machines can be configured to store loops containing single or multiple cardiac cycles, as well as loops of fixed duration (typically 1 to 3 seconds). While a default value (perhaps one cardiac cycle) can be preset, the ability to adjust the duration of a loop easily is important for obtaining data in studies with arrhythmias or complex anatomical abnormalities. The quality of the electrocardiographic (EKG) signal on the echo machine is critical to proper acquisition of complete cardiac cycles of echo data. A common pitfall is a loop that is too short because the spikes of a noisy EKG signal, dysrhythmia, or pacemaker are interpreted as successive R-waves. It is suggested that echo vendors implement algorithms that recognize cardiac cycles of less than, say, 400 milliseconds as most likely truncated by EKG noise and automatically default to a longer capture so that data are not lost at the time of acquisition.
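As a rough illustration of the safeguard suggested above, the following sketch flags implausibly short R-R intervals and falls back to a fixed-duration capture. The function name, threshold, and fallback duration are all hypothetical choices for illustration, not any vendor's actual implementation:

```python
# Sketch of a capture-duration safeguard for noisy EKG triggering.
# Names and values are illustrative; real machines implement this internally.

MIN_PLAUSIBLE_RR_MS = 400      # R-R interval below this is treated as EKG noise
FALLBACK_CAPTURE_MS = 2000     # fixed-duration capture used as a fallback

def choose_capture_duration(rr_intervals_ms):
    """Given recent R-R intervals (ms), return the loop duration to capture.

    If any interval is implausibly short, the EKG trigger is suspect
    (noise spikes, pacemaker artifact), so default to a longer fixed
    capture rather than risk truncating the cardiac cycle.
    """
    if not rr_intervals_ms or min(rr_intervals_ms) < MIN_PLAUSIBLE_RR_MS:
        return FALLBACK_CAPTURE_MS
    # Otherwise capture one full cardiac cycle: the longest recent interval,
    # with a small margin so end-diastole is not clipped.
    return int(max(rr_intervals_ms) * 1.1)

print(choose_capture_duration([850, 870, 860]))  # normal rhythm
print(choose_capture_duration([850, 120, 860]))  # noise spike triggers fallback
```

The key design point is that the machine errs toward capturing too much rather than too little, since excess frames can be trimmed later but a truncated cycle cannot be recovered.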
Although true digital output is preferable, older existing systems may be adapted for digital use with external digitizing modules that connect to the video port of the echo machine. Protocols similar to those of the internal systems of digital echo machines can be established to export single frames, full cardiac cycles, or a fixed time interval of data. One disadvantage of this approach is that these devices do not preserve image quality as well as direct digital systems, although digitization of the direct RGB signal shows little degradation in comparison with videotape. Furthermore, calibration and other patient information are not stored with the images. Nevertheless, for legacy systems this is a quite acceptable way of integrating them into a digital laboratory. Whether it makes financial sense to outfit a group of aging machines with digitizing computers, rather than wait until they are replaced by more contemporary machines during the regular upgrade cycle of the laboratory, is a decision each laboratory will have to make. Many laboratories may choose a staged entry into all-digital storage, leaving the older analog systems for tape review until the end of the digital transition. Video capture has also been proposed for streaming-video approaches to digital echocardiography (also called “full disclosure” storage models). Images are usually stored with MPEG compression, which allows longer clips to be captured in a manner that resembles a digital VCR. This may have advantages in pediatric and transesophageal studies, where long sweeps are desirable. The streaming nature also allows real-time monitoring and guidance of acquisition. However, the lack of calibration and the lack of support within DICOM are disadvantages of this approach.
Appendix C: Image transmission: network considerations
Network transfer is the most efficient method to deliver echocardiographic studies to a DICOM server. If the echo machine is connected to the hospital network at the time of the examination, echo loops can be sent either at the conclusion of the study or, more efficiently, incrementally as each view is obtained. With the latter option, there is no delay between the end of the study and the availability of the images for review by the cardiologist, although such incremental transfer is not yet available from all manufacturers.
One major advantage of echocardiography is the portability of the devices. If network access is not available for bedside studies throughout the hospital, data can be stored on the internal hard disk and transferred later to the server. It is also possible to use optical disks to transfer images from the echo machine to the review workstation. Transfer of DICOM data to media is much slower, but it does offer flexibility in cases where direct network connections are not possible or where studies are recorded off-site in remote laboratories or clinics.
Echocardiographic studies are generally stored on a hard drive within the echocardiograph and retained until the drive is full, at which point the oldest study is automatically deleted to make space for the current examination. This procedure allows multiple studies to be held on the device for subsequent transfer, and it provides a mechanism for short-term redundancy of the data. However, the laboratory must adopt a disciplined approach to network transfers of portable studies, to assure that data stored only locally on the system are not overwritten by subsequent studies. Manufacturers must give users appropriate warning of such overwrites before they occur.
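The oldest-first deletion policy described above, together with the warning the committee asks manufacturers to provide, can be sketched as follows. The class name, capacity handling, and warning text are hypothetical, not any vendor's behavior:

```python
# Sketch of oldest-first study deletion on an echo machine's local disk,
# with a warning before an untransferred study is overwritten.
# All names and figures are illustrative.

from collections import deque

class LocalStudyStore:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self.studies = deque()  # oldest study at the left

    def add_study(self, name, size_mb, transferred=False):
        # Evict oldest studies until the new one fits.
        while self.used_mb + size_mb > self.capacity_mb and self.studies:
            oldest = self.studies.popleft()
            if not oldest["transferred"]:
                # The warning the committee recommends manufacturers provide.
                print(f"WARNING: deleting untransferred study {oldest['name']}")
            self.used_mb -= oldest["size_mb"]
        self.studies.append({"name": name, "size_mb": size_mb,
                             "transferred": transferred})
        self.used_mb += size_mb

store = LocalStudyStore(capacity_mb=200)
store.add_study("Patient A", 100, transferred=True)
store.add_study("Patient B", 100, transferred=False)
store.add_study("Patient C", 100)  # evicts Patient A first
```

The essential discipline is in the `transferred` flag: only studies confirmed to have reached the server should be silently overwritten.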
A complete adult echo study may consist of 50 to 100 megabytes of compressed imaging data (1 to 2 gigabytes uncompressed). This volume of data must be moved across the network when the exam is first conducted and then again every time the exam is reviewed. Thus, a single examination may generate several hundred megabytes of network traffic in a given day, amounting to tens of gigabytes of daily network traffic for a busy laboratory. Fast, efficient network transfer is therefore critical. On older hospital networks, the standard speed is 10 megabits per second (Mbps), far too slow for a busy digital echo lab. Much more usable are 100 Mbps networks, and heavily trafficked lines in the network would benefit from gigabit (10⁹ bps) technology. For example, the connection between the DICOM server and the network switch is used for every network transfer that occurs in the digital echo lab, i.e., each time a study is sent from any echo machine to the server or each time a study is requested by one of the workstations. This network path may become overloaded as volume grows, and the functionality of the system will benefit from a gigabit connection across this critical link.
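The practical effect of these speeds can be seen with a back-of-the-envelope calculation. The 75-megabyte study size below is an illustrative midpoint of the 50- to 100-megabyte range quoted above; real transfers also incur protocol overhead and contention, so actual times will be somewhat longer:

```python
# Idealized transfer times for one compressed echo study at the
# network speeds discussed in the text (75 MB is an illustrative midpoint).

def transfer_seconds(study_mb, mbps):
    # 8 bits per byte; ignores protocol overhead and network contention.
    return study_mb * 8 / mbps

STUDY_MB = 75
for label, mbps in [("10 Mbps", 10), ("100 Mbps", 100), ("1 Gbps", 1000)]:
    print(f"{label:>8}: {transfer_seconds(STUDY_MB, mbps):6.1f} s per study")
```

At 10 Mbps a single study occupies the link for roughly a minute, which explains why this speed is unworkable for a busy laboratory, while gigabit links move the same study in under a second.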
Even more important than the raw speed of the network is having the proper architecture. Networks may use either routers or switches to move packets of data. The advantage of a switch is that it establishes an isolated connection between the two computers transferring echo data at a given time, thus limiting the impact on the rest of the network traffic. In a less robust router-based design, high-speed data transfer within the network may degrade performance for the rest of the network, which would be completely unacceptable given the volume of transfers required in digital echocardiography. The advent of intelligent routers can also reduce backbone traffic. Most echo vendors are in the process of migrating from 10 Mbps network cards to 100 Mbps cards, although, as mentioned earlier, incremental transfer of clips as they are obtained will largely overcome the disadvantage of the slower cards.
The ability to connect devices with various networking parameters (speed: 10BASE-T vs. 100BASE-TX; duplex: half vs. full) requires the switch to sense the proper configuration of a device automatically and establish a reliable connection. Auto-negotiation between echo machines and the network switch is sometimes imperfect, requiring network drops to be configured with fixed parameter settings and thereby restricting network connections for some machines to specific locations. Manufacturers should work toward improving the flexibility of these auto-negotiations.
Another possible difficulty in some environments is the inability of some echocardiographs to obtain a network address dynamically. Dynamic Host Configuration Protocol (DHCP) services are often used with PC hardware to allow connections in various locations while maintaining control over the uniqueness of network addresses. Unfortunately, current DICOM configurations on several manufacturers’ machines require fixed network addresses for communication (in part as a means of enforcing security). As echocardiographic laboratories expand across variable hospital infrastructures, the need for echocardiographic devices to have network connections in multiple locations grows, particularly during portable studies. In a given institution, network addressing is often segmented by building and may even vary by floor. While it is possible to create virtual local area networks (VLANs) that span floors within a building, movement of echocardiographs from building to building will pose difficulties. With an eye toward the inherent portability of echocardiography, manufacturers should provide DHCP support and make the plugging and unplugging of network cables as convenient as possible.
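To make the problem concrete, the following sketch shows the kind of static DICOM node configuration that ties a machine to one network segment. The field names, addresses, and AE title are hypothetical (actual configuration screens vary by vendor); only port 104 reflects the conventional DICOM port:

```python
# Sketch of a static DICOM node configuration of the kind that ties an
# echo machine to one subnet. Field names, addresses, and the AE title
# are hypothetical; port 104 is the conventional DICOM port.

static_node_config = {
    "ae_title": "ECHO_LAB_1",   # DICOM Application Entity title
    "local_ip": "10.20.30.41",  # fixed address, valid only on one subnet
    "server_ip": "10.20.30.5",  # DICOM server address
    "server_port": 104,
}

def needs_reconfiguration(config, new_subnet_prefix):
    """True if the machine's fixed address is invalid on the new subnet,
    i.e. every move across subnets forces manual reconfiguration."""
    return not config["local_ip"].startswith(new_subnet_prefix)

print(needs_reconfiguration(static_node_config, "10.20.30."))  # same floor
print(needs_reconfiguration(static_node_config, "10.20.44."))  # new building
```

With DHCP support, the local address would be assigned automatically at each new drop, and only the server address would need to remain fixed.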
Appendix D: HIPAA
Congress has mandated strict security measures through the Health Insurance Portability and Accountability Act (HIPAA), and after several delays, enforcement began on April 14, 2003. The key provisions of HIPAA require privacy of medical data (no unauthorized sale or use), authentication (password protection for access), and security in all electronic data transactions, using 128- to 256-bit keys for encryption in data transmission and storage.
There are several options for data encryption. Generally the data are modified using a long binary number (termed a key), in a process that must be reversed to view the data. The most efficient encryption is that in which both sides hold the same key (termed a private key). The obvious weakness of such an approach arises when the key is transmitted to the recipient. An alternative approach uses a public key, based on a scheme like A × B = C. Here C is published and can be used by anyone to encode a message, but the message can be decoded only by someone who has both A and B, which are kept private by the recipient. Though the scheme is simple, if C is a 128-bit number then there are roughly 3.4 × 10³⁸ possible combinations of A and B, which would take thousands of years to test, even with a supercomputer. Public key encryption is slower than private key encryption; one practical approach is to use a public key to send the private key, and then encrypt the actual data with the private key.
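The key-space figure quoted above follows directly from the key length: a 128-bit key admits 2 raised to the 128th power possible values. The trial rate below (a trillion keys per second) is an illustrative assumption, not a measured benchmark:

```python
# Key-space arithmetic behind the 3.4 x 10**38 figure: a 128-bit key
# admits 2**128 possible values.

key_bits = 128
combinations = 2 ** key_bits
print(f"{combinations:.2e}")  # ~3.40e+38

# Even testing a trillion (10**12) keys per second, an assumed rate for
# illustration, exhausting the space would take astronomically long:
seconds = combinations / 10**12
years = seconds / (365 * 24 * 3600)
print(f"{years:.1e} years")
```

The brute-force cost grows exponentially with key length, which is why moving from 128- to 256-bit keys squares the size of the search space rather than merely doubling it.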
References
Ophir J, Maklad NF. Digital scan converters in diagnostic ultrasound imaging. Proc IEEE. 1979;67.
Leavitt SC, Hunt BF, Larsen HG. A scan conversion algorithm for displaying ultrasound images. Hewlett-Packard Journal 1983; 34 (10):30-34.
Feigenbaum H. Digital recording, display, and storage of echocardiograms. J Am Soc Echocardiogr. 1988;1:378-83.
Thomas JD, Khandheria BK. Digital formatting standards in medical imaging: a primer for echocardiographers. J Am Soc Echocardiogr. 1994;7:100-4.