History of invasive and interventional cardiology

The history of invasive and interventional cardiology is complex, with multiple groups working independently on similar technologies. Invasive and interventional cardiology is currently closely associated with cardiologists (physicians who treat diseases of the heart), though much of its early development, research, and procedural work was performed by diagnostic and interventional radiologists.

The birth of invasive cardiology

The history of invasive cardiology begins with the development of cardiac catheterization in 1711, when Stephen Hales placed catheters into the right and left ventricles of a living horse. Variations on the technique were tried over the subsequent century, and Claude Bernard carried out formal studies of cardiac physiology in the 1840s.

Catheterization of humans

The technique of angiography itself was first developed in 1927 by the Portuguese physician Egas Moniz at the University of Lisbon for cerebral angiography, the viewing of brain vasculature by X-ray radiation with the aid of a contrast medium introduced by catheter. Human cardiac catheterization was first performed in 1929, when Werner Forssmann created an incision in one of his own left antecubital veins and inserted a catheter into his venous system. He then guided the catheter by fluoroscopy into his right atrium, walked up a flight of stairs to the radiology department, and documented the procedure with a chest roentgenogram. Over the next year, catheters were placed in a similar manner into the right ventricle, and measurements of pressure and of cardiac output (using the Fick principle) were performed.
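The Fick principle mentioned above relates cardiac output to oxygen uptake. As general background (a standard formulation, not a detail recorded from these early experiments), it can be written as

\[ Q = \frac{\dot{V}\mathrm{O}_2}{C_a - C_v} \]

where \(Q\) is cardiac output, \(\dot{V}\mathrm{O}_2\) is oxygen consumption per minute, and \(C_a\) and \(C_v\) are the oxygen contents of arterial and mixed venous blood. With illustrative resting values of 250 mL O2/min for oxygen consumption and an arteriovenous oxygen content difference of 50 mL O2 per liter of blood, the principle gives a cardiac output of 250 / 50 = 5 liters per minute.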

In the early 1940s, André Cournand, in collaboration with Dickinson Richards, performed more systematic measurements of the hemodynamics of the heart. For their work on cardiac catheterization and hemodynamic measurement, Cournand, Forssmann, and Richards shared the 1956 Nobel Prize in Physiology or Medicine.

Development of the diagnostic coronary angiogram

In 1958, the interventional radiologist Charles Dotter began working on methods to visualize the coronary anatomy via sequential radiographic films. In an animal model, he invented a method known as occlusive aortography, which involved transiently occluding the aorta, injecting a small amount of radiographic contrast agent into the aortic root, and taking serial X-rays to visualize the coronary arteries.

Later that same year, while performing aortic root aortography, Mason Sones, a pediatric cardiologist at the Cleveland Clinic, noted that the catheter had accidentally entered the patient's right coronary artery. Before the catheter could be removed, 30 cc of contrast agent had been injected. Although the patient went into ventricular fibrillation, Sones promptly terminated the dangerous arrhythmia with a precordial thump, restoring sinus rhythm.

Until the 1950s, placing a catheter into either the arterial or venous system involved a "cut down" procedure, in which the soft tissues were dissected away until the artery or vein was directly visualized and then entered with a catheter; when performed via the brachial artery for coronary angiography, this approach became known as the Sones technique. The percutaneous approach that is widely used today was developed by the radiologist Sven-Ivar Seldinger in 1953. Percutaneous access of the artery or vein is still commonly known as the Seldinger technique. The use of the Seldinger technique for visualizing the coronary arteries was described by Ricketts and Abrams in 1962 and by Judkins in 1967.

By the late 1960s, Melvin Judkins had begun work on creating catheters that were specially shaped to reach the coronary arteries to perform selective coronary angiography. His initial work involved shaping stiff wires and comparing those shapes to radiographs of the ascending aorta to determine if the shape appeared promising. Then he would place the stiff wire inside a flexible catheter and use a heat-fixation method to permanently shape the catheter. In the first use of these catheters in humans, each catheter was specifically shaped to match the size and shape of the aorta of the subject. His work was documented in 1967, and by 1968 the Judkins catheters were manufactured in a limited number of fixed tip shapes. Catheters in these shapes carry his name and are still used to this day for selective coronary angiography.

Dawn of the interventional era

The use of tapered Teflon dilating catheters for the treatment of atherosclerotic vascular disease was first described in 1964 by two interventional radiologists, Charles Dotter and Melvin Judkins, who used them to treat atherosclerotic disease in the superficial femoral artery of the left leg. Building on their work and his own research involving balloon-tipped catheters, Andreas Gruentzig performed the first successful percutaneous transluminal coronary angioplasty (known as PTCA, or percutaneous coronary intervention, PCI) on a human on September 16, 1977, at University Hospital, Zurich. The results of the procedure were presented at the American Heart Association meeting two months later to a stunned audience of cardiologists. In the subsequent three years, Gruentzig performed coronary angioplasties in 169 patients in Zurich while teaching the practice of coronary angioplasty to a field of budding interventional cardiologists. Ten years later, nearly 90 percent of these individuals were still alive. By the mid-1980s, over 300,000 PTCAs were being performed yearly, equaling the number of bypass surgeries performed for coronary artery disease.

Soon after Andreas Gruentzig began performing percutaneous interventions on individuals with stable coronary artery disease, multiple groups described the use of catheter-delivered streptokinase for the treatment of acute myocardial infarction (heart attack).

Development of the intracoronary stent

From the time of the initial percutaneous balloon angioplasty, it was theorized that devices could be placed inside the arteries as scaffolds to keep them open after a successful balloon angioplasty. This did not become a reality in the coronary arteries until the first intracoronary stents were successfully deployed in 1986. The first stents used were self-expanding Wallstents. The use of intracoronary stents was quickly identified as a method to treat some complications of PTCA, and their use can decrease the incidence of emergency bypass surgery for acute complications after balloon angioplasty.

It was quickly realized that restenosis rates were significantly lower in individuals who received an intracoronary stent than in those who underwent balloon angioplasty alone. A damper on the immediate adoption of intracoronary stents was subacute thrombosis, which occurred in about 3.7 percent of stented patients, a higher rate than that seen after balloon angioplasty. Post-procedure bleeding was also an issue, owing to the intense combination of anticoagulation and antiplatelet agents used to prevent stent thrombosis.

Stent technology improved rapidly, and in 1989 the Palmaz-Schatz balloon-expandable intracoronary stent was developed. Initial results with the Palmaz-Schatz stent were excellent when compared to balloon angioplasty, with a significantly lower incidence of abrupt closure and peri-procedure heart attack. Late restenosis rates with Palmaz-Schatz stents were also significantly improved compared with balloon angioplasty. However, mortality rates were unchanged. Although the rates of subacute thrombosis and bleeding complications associated with stent placement were high, by 1999 nearly 85 percent of all PCI procedures included intracoronary stenting.

In recognition of the focused training required by cardiologists to perform percutaneous coronary interventions and the rapid progression in the field of percutaneous coronary interventions, specialized fellowship training in the field of Interventional Cardiology was instituted in 1999.

Changes in post-procedure medications

Through the 1990s and beyond, incremental improvements were made in balloon and stent technology, and newer devices were introduced, some of which remain in use today while many others have fallen into disuse. As important as balloon and stent technology had been, it was becoming obvious that the anticoagulation and antiplatelet regimen that individuals received post-intervention was at least as important. Trials in the late 1990s revealed that anticoagulation with warfarin was not required after balloon angioplasty or stent implantation, while intense antiplatelet regimens and changes in procedural technique (most importantly, ensuring that the stent was well apposed to the walls of the coronary artery) improved both short-term and long-term outcomes. Many different antiplatelet regimens were evaluated in the 1990s and around the turn of the 21st century, and the optimal regimen for an individual patient remains a matter of debate.

The drug-eluting stent era

With the high use of intracoronary stents during PCI procedures, the focus of treatment changed from procedural success to prevention of recurrence of disease in the treated area (in-stent restenosis). By the late 1990s, it was generally acknowledged among cardiologists that the incidence of in-stent restenosis was between 15 and 30 percent, and possibly higher in certain subgroups of individuals. Stent manufacturers experimented with (and continue to experiment with) a number of chemical agents to prevent the neointimal hyperplasia that is the cause of in-stent restenosis.

One of the first products of the new focus on preventing late events (such as in-stent restenosis and late thrombosis) was the heparin-coated Palmaz-Schatz stent. These coated stents were found to have a lower incidence of subacute thrombosis than bare metal stents.

At approximately the same time, Cordis (a division of Johnson & Johnson) was developing the Cypher stent, a stent that would release sirolimus (an immunosuppressive, antiproliferative agent) over time. The first study of the stent in humans revealed a striking absence of restenosis (zero percent) at six months. This led to the stent's approval for use in Europe in April 2002. Further trials with the Cypher stent revealed that restenosis did occur in some individuals with high-risk features (such as long areas of stenosis or a history of diabetes mellitus), but that the restenosis rate was significantly lower than with bare metal stents (3.2 percent compared to 35.4 percent). About a year after the European approval, the United States FDA approved the Cypher stent as the first drug-eluting stent for use in the general population in the United States.

Concurrent with the development of the Cypher stent, Boston Scientific began developing the Taxus stent. The Taxus stent was based on the Express2 metal stent, which had been in general use for a number of years, with the addition of a copolymer coating that eluted paclitaxel, an agent that inhibits cell replication. As with the Cypher stent before it, the first trials of the Taxus stent revealed no evidence of in-stent restenosis at six months after the procedure, while later studies showed some restenosis, at a rate much lower than that of its bare metal counterpart. After further study, the FDA approved the use of the Taxus stent in the United States in March 2004.

By the end of 2004, drug-eluting stents were used in nearly 80 percent of all percutaneous coronary interventions.

Modern controversies in interventional cardiology

Roles of bypass surgery and intracoronary stents for coronary artery disease

A long-running source of controversy in the field of interventional cardiology has been the overlapping roles of PCI, coronary artery bypass surgery, and intensive pharmacologic therapy for individuals with coronary artery disease. This area has been studied in a number of trials since the early 1990s. With rapid changes in technique occurring in both bypass surgery and PCI, multiple studies have been conducted with the hope of identifying which individuals do better with PCI and which do better with coronary artery bypass graft (CABG) surgery.

Several of these studies have evaluated the effectiveness of newer techniques, such as fractional flow reserve (FFR) measurement, and suggest that PCI can frequently improve outcomes for patients with clinically significant coronary atherosclerosis. In the FAME study, the rate of the primary endpoint, a composite of death, nonfatal myocardial infarction, and repeat revascularization, was 5.1 percentage points lower after one year in patients whose care was guided by FFR. Benefits from PCI have also been shown for those having a heart attack (ST elevation myocardial infarction, or STEMI), those having a heart attack without ST elevation on the EKG (non-ST elevation myocardial infarction, or NSTEMI), and those with angina that is not controlled with medication. In general, each case is individualized to the patient and the relative comfort level of their interventional cardiologist and cardiothoracic surgeon.
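Fractional flow reserve is a pressure-derived index of lesion severity. As general background (its standard definition, not a detail specific to the FAME protocol), it is

\[ \mathrm{FFR} = \frac{P_d}{P_a} \]

where \(P_d\) is the mean coronary pressure distal to the stenosis and \(P_a\) is the mean aortic pressure, both measured during maximal hyperemia. A value of 1.0 indicates no pressure loss across the lesion, while values at or below roughly 0.80 (the cutoff used in FAME) are generally treated as hemodynamically significant; for example, a distal pressure of 70 mmHg against an aortic pressure of 100 mmHg yields an FFR of 0.70.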

The role of PCI in individuals without symptoms of ischemic heart disease

At the 2007 meeting of the American College of Cardiology (ACC), data from the COURAGE trial were presented suggesting that the combination of PCI and intensive (optimal) medical therapy did not reduce the incidence of death, heart attack, or stroke compared to intensive medical therapy alone.

Research conducted since 2007 has documented ways in which PCI can be an effective treatment in specific instances. Data analysis of COURAGE trial results showed an incremental benefit for patients during the first 12-24 months following treatment using PCI, specifically regarding frequency of angina, physical limitations, and quality-of-life factors. Results from a 2019 trial did not show evidence that an initial invasive strategy reduced the risk of ischemic cardiovascular events or death from any cause. However, they did show that for patients with angina, an initial invasive strategy led to more effective relief of angina symptoms, and that PCI reduced the risk of spontaneous myocardial infarction when compared with initial conservative management.

The safety of drug-eluting stents

When results from the first trials of drug-eluting stents were published, there was a general feeling in the interventional cardiology community that these devices would be part of the perfect revascularization regimen for coronary artery disease. With the very low restenosis rates of the RAVEL and SIRIUS trials, interventions were performed on more complex blockages in the coronary arteries under the assumption that real-life results would mimic the results from the trials. Based on early trials, the advised antiplatelet regimen for drug-eluting stents was a combination of aspirin and clopidogrel for 3 months when Cypher stents were used, and 9 months when Taxus stents were used, followed by aspirin indefinitely.

Soon afterward, case reports of late stent thrombosis began to appear. At the 2006 ACC annual meeting, preliminary results from the BASKET-LATE trial were presented, showing a slight increase in late thrombosis with drug-eluting stents compared to bare metal stents. However, this increase was not statistically significant, and further data would have to be collected. Additional data published over the following year were conflicting, making it unclear whether stent thrombosis was truly higher than with bare metal stents. During this time of uncertainty, many cardiologists started extending the dual antiplatelet regimen of aspirin and clopidogrel in individuals with drug-eluting stents, as some data suggested it might prevent late thrombosis.

The FDA Center for Devices and Radiological Health convened its Circulatory System Devices Panel in December 2006 to review data presented by Cordis and Boston Scientific and to determine whether drug-eluting stents should be considered less safe than bare metal stents. It became evident at the meeting that, across the published data, there were varied definitions of late thrombosis and key differences in the types of lesions studied, hampering analysis. It was also noted that with the advent of drug-eluting stents, interventional cardiologists had begun performing procedures on more complex lesions, using the stents "off label" in coronary artery lesions that would previously have gone untreated or been referred for bypass surgery. The FDA advisory board reiterated the ACC guideline that clopidogrel should be continued for 12 months after drug-eluting stent placement in individuals at low risk for bleeding.

Since 2006, technological advancements in drug-eluting stents have led to safety improvements. More recent analyses of clinical data indicate that drug-eluting stents can be used safely and reduce stent thrombosis more effectively than bare metal stents. According to a 2018 data analysis, using drug-eluting stents in coronary intervention lowered the risk of myocardial infarction, ischemia-driven target lesion revascularization, target vessel revascularization, and stent thrombosis in the first month following stenting. Newer generations of drug-eluting stents have been found to reduce the risk of restenosis, myocardial infarction, and death when compared with bare metal stents. Advancements in stent design, including reduced strut thickness, have shown further improvements for patients compared to previous generations.

A 2021 study noting a lack of age-specific recommendations for elderly patients with ischemic heart disease found a statistically significant decrease in major cardiovascular events when elderly patients were treated using drug-eluting stents. The findings led the study’s authors to recommend drug-eluting stents over bare metal stents when treating the elderly.
