ISSN NUMBER: 1938-7172
Issue 1.6

Michael A. Fiedler, PhD, CRNA

Contributing Editors:
Mary A. Golinski, PhD, CRNA
Alfred E. Lupien, PhD, CRNA

Guest Editor:
Terri M. Cahoon, MSN, CRNA

Assistant Editor:
Jessica Floyd, BS

A Publication of Lifelong Learning, LLC © Copyright 2007

New health information becomes available constantly. While we strive to provide accurate information, factual and typographical errors may occur. The authors, editors, publisher, and Lifelong Learning, LLC are not responsible for any errors or omissions in the information presented. We endeavor to provide accurate information helpful in your clinical practice. Remember, though, that there is a lot of information out there and we are only presenting some of it here. Also, the comments of contributors represent their personal views, colored by their knowledge, understanding, experience, and judgment, which may differ from yours. Their comments are written without knowing the details of the clinical situation in which you may apply the information. In the end, your clinical decisions should be based upon your best judgment for each specific patient situation. We do not accept responsibility for clinical decisions or outcomes.

Table of Contents

Combes X, Andriamifidy L, Dufresne E, Suen P, Sauvat S, Scherrer E, Feiss P, Marty J, Duvaldestin P



Comparison of two induction regimens using or not using muscle relaxant: impact on postoperative upper airway discomfort

Br J Anaesth 2007;99:276-281





Purpose            The purpose of this study was to compare hoarseness and sore throat after induction of general anesthesia and endotracheal intubation with or without a nondepolarizing muscle relaxant. A secondary purpose was to assess the ease of intubation and hemodynamic changes associated with intubation.

Background            Paralysis is indicated during induction of general anesthesia to assist endotracheal intubation. Relaxation of the vocal cords moves them out of the way, reducing the likelihood of injury when the endotracheal tube (ETT) is passed between them. Paralysis also prevents laryngeal trauma due to bucking and ETT movement. It is possible to intubate patients in whom general anesthesia has been induced with propofol without muscle relaxant and without bucking. Studies of ETT placement without paralysis have produced conflicting results. Some report a higher incidence of postoperative vocal cord dysfunction in patients intubated without paralysis. Others have shown no difference. Factors other than intubation influence the incidence of post-extubation complaints, including, for example: ETT cuff pressure, placement of a gastric tube, ETT diameter, gender, head and neck movement, and the surgical procedure.

Methodology            This prospective, randomized, double-blind study included 300 ASA class I and II adults undergoing elective peripheral surgical procedures. All surgeries were performed in the supine position, with the head and neck in a neutral position, and were expected to require less than 2 hours to perform. Exclusion criteria included the possibility of a difficult intubation, surgery in the throat, and a history of sore throat or hoarseness prior to the procedure.

In the control group, general anesthesia was induced with 2.5 mg/kg propofol, 40 µg/kg alfentanil, and a volume of saline equal to the volume of rocuronium given to the rocuronium group. In the rocuronium group, anesthesia was induced with 2.5 mg/kg propofol, 15 µg/kg alfentanil, and 0.6 mg/kg rocuronium. An experienced laryngoscopist commenced intubation 90 seconds after administration of the rocuronium or saline placebo. Male patients received an 8.0 mm ETT. Female patients received a 7.5 mm ETT. The ETT cuff was inflated to between 20 and 30 cm H2O as measured by a manometer. All patients received a Guedel airway.

Intubating conditions were assessed at the time of intubation. Blood pressure and heart rate were recorded every 3 minutes for 15 minutes then every 15 minutes for the duration of the case. The incidence and severity of hoarseness and sore throat were assessed 2 and 24 hours post-extubation.

Result            One patient was lost to follow-up at the 24 hour assessment. Patients in the control group had a higher incidence of hoarseness and sore throat than patients in the rocuronium group (P<0.05). At 2 hours post-extubation, 57% of control patients reported hoarseness and/or sore throat compared to 43% of rocuronium patients. At 24 hours post-extubation, 38% of control patients reported hoarseness and/or sore throat compared to 26% of rocuronium patients. There was no difference in the severity of the symptoms between groups.

In the control group 18 patients (12%) were difficult to intubate compared to only 1 patient in the rocuronium group (0.7%) (P<0.05). Intubating conditions were “poor” in 49% of control patients and 13% of rocuronium patients (P<0.05). Laryngeal pressure was applied during laryngoscopy in 73% of control patients and 65% of rocuronium patients. Despite the rate of difficulty with intubation, 78% of control patients and 91% of rocuronium patients were intubated in a single attempt. Patients who were difficult to intubate were significantly more likely to report post-extubation hoarseness or sore throat; 79% of difficult intubations vs. 48% of other intubations at two hours (P<0.05).

Systolic blood pressure and heart rate decreased more following induction in the control group than in the rocuronium group; as a result, control patients received more ephedrine or atropine.

Conclusion            Compared to intubation without a muscle relaxant, using a nondepolarizing muscle relaxant for intubation resulted in less post-extubation hoarseness and sore throat, produced better intubating conditions, and reduced the rate of adverse hemodynamic events.



Whether or not you are willing to believe this study will probably depend upon whether you view it from the perspective of clinical anesthesia in general or through the eyes of a specific anesthetist looking at a specific patient. The investigators focus on the administration of a muscle relaxant for intubation, but the real issue may be overall intubating conditions. To see what I’m talking about, let’s look at the conclusions more closely.

Using a nondepolarizing muscle relaxant produces better intubating conditions and results in less post-extubation hoarseness and sore throat. The investigators compared muscle relaxant to one other way of optimizing intubating conditions and appeared to generalize that administering a muscle relaxant directly caused a reduction in hoarseness and sore throat.

It is easy to accept that a smooth, gentle, quick, atraumatic intubation is least likely to cause hoarseness or sore throat. And, because a muscle relaxant aids in this process, in general terms one can accept the statement that using a muscle relaxant facilitates optimal intubating conditions and results in fewer post-extubation complaints. But aren’t we really talking about optimal intubating conditions, rather than the muscle relaxant per se?

A muscle relaxant probably helps prevent hoarseness and sore throat by improving intubating conditions, not by any characteristic peculiar to the muscle relaxant itself. It is a tool. There are other ways to optimize intubating conditions besides administering a muscle relaxant. If someone were in the middle of a mask case with a patient deep on an inhalation agent and decided to place an ETT, I don’t think any of us would believe there was much to gain by administering a muscle relaxant for intubation. Also, I have, on occasion, intubated on propofol alone at significantly larger doses than the 2.5 mg/kg used in this study. Doing so wouldn’t be appropriate or effective for all patients, but in the hands of some anesthetists it provides great intubating conditions in selected patients. (Of course, not having examined my results with intubating on propofol alone systematically, I may believe them to be better than they really are.)

It would have been nice to see a study that examined the incidence of hoarseness and sore throat in patients who had a smooth, gentle, quick, atraumatic intubation compared to those who did not. Muscle relaxation is, in general, a great aid to intubation. But while fewer patients who received a muscle relaxant reported hoarseness or sore throat 24 hours post-extubation, 26% of the rocuronium group still did. One thing this study shows is that a muscle relaxant doesn’t eliminate the problem. The question is: what are the actual causes of post-extubation hoarseness and sore throat?

Please don’t misunderstand me. As a general rule muscle relaxation is useful during intubation and probably does play a role in reducing the rate of complications associated with intubation. I’m simply cautioning against overgeneralizing the role of muscle relaxants in preventing complications and pointing out that there are probably other legitimate ways of preventing complications in at least some situations. Muscle relaxants are not without risk of morbidity and mortality. If we can learn how to optimize intubating conditions and prevent post-extubation complications without a muscle relaxant, our patients may be better off.

Using a nondepolarizing muscle relaxant reduces the rate of adverse hemodynamic events. This conclusion is clearly not supported by the results of the study. Patients in both groups received 2.5 mg/kg of propofol. In addition, patients in the control group received 40 µg/kg of alfentanil while patients in the rocuronium group received only 15 µg/kg of alfentanil. It should come as no surprise that control patients who received 2.7 times more alfentanil had a lower HR and systolic BP and were given more ephedrine or atropine as a result. Further supporting this point, the HR and BP figures in the original article show “railroad tracks” for HR and systolic BP in patients who received the larger dose of alfentanil while those who received the smaller dose had variability in both vital signs. This is suggestive of a more pronounced opioid effect in the high dose alfentanil patients. It is difficult to imagine that 0.6 mg/kg of rocuronium produced some hemodynamic effect that outweighed the difference in the alfentanil dose.

Does this study have anything to teach us? Yes. As a general rule, with what we know now, the chance of a patient experiencing post-extubation sore throat or hoarseness is somewhat lower when intubation is performed in the presence of profound skeletal muscle relaxation.

Michael Fiedler, PhD, CRNA





© Copyright 2007 Anesthesia Abstracts · Volume 1 Number 6, September 30, 2007

Equipment & Technology

Ihmsen H, Naguib K, Schneider G, Schwilden H, Schüttler J, Kochs E

Teletherapeutic drug administration by long distance closed-loop control of propofol

Br J Anaesth 2007;98:189-95




Purpose            The purpose of this pilot study was to explore the performance characteristics of a propofol infusion system controlled via computer from a distance of approximately 120 miles.

Background            Advancements in telecommunication technology have made medical care possible even when the providers and patients are separated by long distance, such as during oceanic or space travel. Although robotically-assisted ‘telesurgery’ has been demonstrated, there have been no published reports of ‘teleanesthesia.’

Methodology            A high speed fiberoptic virtual private network (VPN) with the capability for data transmission rates up to 1000 Mbits/s connected investigators at two sites in Germany with physician anaesthetists present at both sites. At the patient’s location, a 1-channel monitor recorded EEG data and transmitted the information via computer to the distant location where a second computer analyzed incoming signals, computed the median edge frequency (MEF) from 8 second epochs of EEG data, derived propofol infusion rates using an adaptive feedback control algorithm, and then transmitted control information to a propofol infusion pump at the patient location. Updated instructions were sent to the infusion pump from the control site computer every 8 seconds. In the event of interrupted communication between sites, propofol infusion could be continued by the local computer based on the last valid information transmitted, or the patient-site anaesthetist could assume manual control of the propofol administration. Information about the patient’s status and surgical progress was communicated between anaesthetists using a text message box.
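The closed-loop scheme described above can be sketched in a few lines of Python. This is only an illustrative sketch: a plain proportional controller stands in for the investigators’ model-based adaptive algorithm, and the gain, target MEF, and rate limits are invented assumptions, not values from the study.

```python
def next_infusion_rate(current_rate, mef, target=1.75, gain=2.0,
                       min_rate=0.0, max_rate=12.0):
    """Update the propofol rate (mg/kg/h) from the measured median EEG
    frequency (Hz). An MEF above target suggests the patient is too
    light, so the rate is increased in proportion to the error."""
    error = mef - target
    new_rate = current_rate + gain * error
    return max(min_rate, min(max_rate, new_rate))

def control_step(current_rate, mef_epoch, last_valid_rate):
    """One 8 second control cycle. A lost EEG epoch (None) falls back
    to the last valid rate, as the patient-site computer did during
    communication interruptions."""
    if mef_epoch is None:
        return last_valid_rate
    return next_infusion_rate(current_rate, mef_epoch)
```

For example, with an invented current rate of 6.0 mg/kg/h, an MEF of 2.75 Hz raises the rate while a lost epoch leaves the last valid rate in place.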

Eleven adult patients, ASA I or II, scheduled for elective general surgical procedures were recruited to participate in the investigation. General anesthesia was initiated in a traditional fashion in the institution’s induction room, using boluses of propofol, sufentanil, and atracurium. After tracheal intubation, anesthesia was continued with propofol, sevoflurane, or desflurane while the patient was transferred to the operating room and EEG electrodes were applied. Once computer communication between the patient and distant control sites was established, propofol infusion was initiated as a target controlled infusion and administration of the volatile anesthetics was discontinued. Following surgical incision, the propofol infusion was converted from target control to closed-loop control with propofol infusion rates adjusted automatically by the computer at the distant site to maintain a MEF between 1.5 and 2 Hz. Intraoperative sufentanil and atracurium were administered as necessary at the discretion of the anesthetist. Approximately 20 minutes prior to the end of surgery, propofol infusion control reverted from closed-loop back to target control. The propofol infusion was discontinued when the surgical procedure was completed. Variables measured included standard clinical data (vital signs, hemoglobin saturation, and end-tidal CO2), performance error estimates (differences between the actual and desired MEF values), and amount of data lost (difference between data stored on the patient location computer versus control site data).

Result            Control of propofol administration via Internet was achieved in all 11 cases. Infusion times ranged from 49 to 338 minutes, with an average time of 133 min. Infusion control was incomplete during 3 cases (27%). Infusions were discontinued after 1 hour in 2 cases due to a software malfunction, and in 1 case because of an interruption in the Internet connection. Distant site closed-loop control of propofol administration was achieved during 65% of all anesthesia time. The remaining 35% of the time included induction and emergence phases, time periods when the EEG signal was interrupted by electrocautery interference, or the infusion was target controlled with the infusion parameters determined at the distant site and transmitted to the infusion pump via Internet.

Vital signs, hemoglobin saturation, and end-tidal CO2 remained stable. The mean (standard deviation) MEF was 1.8 (.2) Hz during closed-loop control and average BIS value was 42 (2). Total propofol delivered to patients during closed-loop control averaged 5.9 (1.4) mg/kg/hr. Emergence time to extubation ranged from 5 to 10 min with a mean of 7.5 min. Intra-patient EEG fluctuations (wobble) averaged 18.8% for MEF and 6.2% for BIS. Of the 10,905 epochs of EEG data transmitted, only 5 (0.05%) epochs were lost.

Conclusion            Long distance administration of anesthetic drugs is possible. Although it may not become routine, the teletherapeutic system explored in this investigation illustrates the potential for remote control of infusions in unique situations.



At an AANA Annual Meeting in the mid-1990s, Executive Director John Garde encouraged all CRNAs to read Daniel Burrus’s book, Technotrends.1  Burrus uses the metaphor of a card game to develop a series of axioms about the future. Among these axioms are “What can be dreamed can be done” and “Once a card is in the deck, it will be played.”  It is through the Technotrends lens that I have watched the progression of technological advances in health care with great curiosity.

A key surgical milestone was Operation Lindbergh, the first successful demonstration of transoceanic telesurgery, as surgeons operating from New York performed a laparoscopic cholecystectomy on a patient hospitalized in France.2  This surgical feat has not yet been matched by anesthesia providers. Cone and colleagues reported the use of various communication methods to “telemonitor” patients and “telementor” Ecuadorian anesthesiologists during surgical procedures, but the actual administration of anesthesia was not controlled from the United States.3,4

Ihmsen et al have moved the dream of teleanesthesia one step closer to reality. The report of this pilot investigation was candid: distant control of anesthesia was limited only to the maintenance phase of the anesthetics and needed to be discontinued in 3 of 11 procedures. However, when the process worked, it worked well. Patients were remarkably stable, although physiological stability is not too surprising because bedside computer-controlled administration of propofol has been demonstrated previously. Limitations of the investigators’ approach were noted, although with more subtlety. For example, the use of a virtual private network to transmit data at speeds up to 1000 Mbits/s far exceeds currently available capabilities, so matching the communications speed reported by the investigators would require a dedicated private network or access to the Abilene Network (through Internet2). The authors speculate that their system might be useful in isolated locations; however, it seems unlikely that most remote locations have the necessary state-of-the-art fiberoptic Internet capability. Wireless communication, as a possible alternative, introduces a new set of problems, such as data security. An interesting consequence of the investigators’ approach to teleanesthesia is that manpower requirements doubled because of the need for anesthesia providers at both the control and patient locations. If teleanesthesia becomes commonplace, then perhaps it will become the frontier for discussions about implementation of the anesthesia care team, direction of anesthesia care, and reimbursement for anesthesia services.

Daniel Burrus advises us that today’s dreams are tomorrow’s realities. Unquestionably computer and communications technologies will continue to influence the science of anesthesia, but how these advancements will affect day-to-day clinical practice is less clear. The challenge becomes how we can best use technological achievements to improve the delivery of, and access to, high quality anesthesia care. Burrus’s other axioms encourage readers to overcome the complacency of past successes and “re-become” experts. I wonder what tools of the trade we CRNAs will need to become future experts.


Alfred E. Lupien, Ph.D., CRNA


1. Burrus D. Technotrends: How to Use Technology to Go Beyond Your Competition. New York, NY: Collins; 1994.

2. Marescaux J, Leroy J, Gagner M, et al. Transatlantic robot-assisted telesurgery. Nature 2001;413:379-380.

3. Cone SW, Gehr L, Hummel R, Rafiq A, Doarn CR, Merrell RC. Case report of remote anesthetic monitoring using telemedicine. Anesth Analg 2004;98:386-8.

4. Cone SW, Gehr L, Hummel R, Merrell RC. Remote anesthetic monitoring using satellite telecommunications and the Internet. Anesth Analg 2006;102:1463-7.


AORN Journal Editor-in-Chief Dr. Nancy Girard recently published an editorial on how perioperative nursing will be influenced by technology: Girard NJ. Science fiction comes to the OR. AORN J 2007;86:351-3.  The editorial is also available on-line at:


An interesting project to follow is the HAPsMRT (High Altitude Platforms for Mobile Robotic Telesurgery) in which an unmanned airborne vehicle (UAV) is used as a communications link between an operating surgeon and remotely-located patient. For more information, readers can search on-line using “HAPsMRT” as a keyword or refer to a press release at the following URL:



Johansson A, Chew M



Reliability of continuous pulse contour cardiac output measurement during hemodynamic instability

J Clin Monit Comput 2007;21:237-242





Purpose            The purpose of this study was to evaluate the reliability of pulse contour cardiac output (PCCO) (PiCCO catheter, Pulsion Medical Systems, Germany) measurements in an animal model of hemodynamic instability.

Background            Cardiac output is a useful monitor in anesthesia and critical care but is difficult to measure safely and accurately. It is especially useful when intravascular volume is changing and when vasopressors are being administered. Clinically, cardiac output is most commonly measured by thermodilution with a pulmonary artery (PA) catheter. The use of PA catheters is limited, however, by associated risks and the general lack of evidence that the information from PA catheters contributes to improved patient outcomes.

Pulse Contour Cardiac Output (PCCO) monitoring generates a cardiac output value by analysis of an arterial pressure waveform. PCCO must be periodically calibrated against a transpulmonary thermodilution cardiac output (TpCO) measurement. TpCO requires a central venous line and a central arterial line (often femoral or axillary).

Studies evaluating the accuracy of PCCO measurements have yielded contradictory results. Few studies have examined the accuracy of PCCO in hemodynamically unstable patients.

Methodology            This prospective, randomized study was conducted in 15 pigs weighing between 20 and 25 kg. The pigs were anesthetized, intubated, and mechanically ventilated. IV fluid was infused at 10 ml/kg/h. Nine pigs received an infusion of bacterial endotoxin to simulate sepsis. Six pigs served as controls. All pigs received a central venous and central arterial line for TpCO monitoring. The PCCO monitor was connected to the central arterial line.

At time zero (T0), after the pigs were prepared, the PCCO was calibrated to the TpCO. Each hour thereafter for six hours (T1 through T6) PCCO measurements were taken before and after the PCCO was calibrated to the TpCO. In the experimental pigs, the endotoxin infusion was begun at T1. Heart rate (HR), mean arterial pressure (MAP), and central venous pressure (CVP) measurements were also recorded at all data collection points.

Result            Pigs given endotoxin became hypotensive and their TpCO changed by 46% over time. Their PCCO values before calibration were statistically and/or clinically significantly different from the TpCO values at multiple time points during the study. PCCO values immediately after calibration were reflective of TpCO values.

Conclusion            In hemodynamically unstable animals, PCCO values were unacceptably different from TpCO values, with the two measurements differing by 56%. The physiologic changes that accompany hemodynamic instability resulted in grossly inaccurate PCCO values.



We regularly treat patients undergoing significant changes in stroke volume, cardiac output, and systemic vascular resistance without knowing which values have changed or how much they have changed. We often make good guesses about what has changed based upon our knowledge of physiology, pharmacology, indirect indicators, and experience. I don’t think any of us would be willing to give up blood pressure monitoring during an anesthetic. I suspect there will come a time when we will feel the same way about cardiac output monitoring. Of course, we don’t use cardiac output monitoring very much right now because it is highly invasive (and thus adds risk), expensive, and fairly labor intensive to set up and use. If and when cardiac output monitoring becomes as low risk, inexpensive, and accurate as oscillometric blood pressure monitoring I think we’ll feel differently about it.

A number of alternatives to Swan-Ganz thermodilution cardiac output monitoring have been under development for decades with little success. This one analyzes the area under the curve of an arterial line waveform to generate a cardiac output. Since it must be calibrated to a transpulmonary cardiac output value (a technique closely related to the pulmonary artery catheter cardiac output measurements we are familiar with), PCCO isn’t a replacement for the more invasive techniques. It simply adds information by providing hands-off, continuous, real-time cardiac output monitoring.
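The area-under-the-curve idea can be reduced to a deliberately simplified Python sketch. To be clear, this is not the proprietary PiCCO algorithm; the function and every number below are invented for illustration, and real devices also model aortic compliance and waveform shape.

```python
def pulse_contour_co(pressures, dt, heart_rate, diastolic, k):
    """Grossly simplified pulse-contour estimate. Integrates the
    arterial pressure above the diastolic baseline over one beat
    (trapezoidal rule) and scales by k, a patient-specific calibration
    factor derived from a thermodilution measurement."""
    area = 0.0  # mmHg*s above the diastolic baseline
    for i in range(1, len(pressures)):
        a = max(pressures[i - 1] - diastolic, 0.0)
        b = max(pressures[i] - diastolic, 0.0)
        area += 0.5 * (a + b) * dt
    stroke_volume = k * area                     # mL per beat, via calibration
    return stroke_volume * heart_rate / 1000.0   # L/min
```

The dependence on k is the crux: the continuous estimate is only as good as its last thermodilution calibration, which is why drifting vascular tone during instability degrades the numbers.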

My first thought when reading this article was that the investigators had chosen the wrong “gold standard” against which to compare the PCCO. While thermodilution cardiac output monitoring has long been the best we had clinically, it isn’t all that accurate; about ±19%. (Ever wonder why you shot multiple COs and averaged them?) Given how far off the PCCO values in this study were from the thermodilution values, however, the inaccuracy of the thermodilution measurements is unlikely to be an issue.

This study is important because we are most anxious to know what the cardiac output is when a patient is unstable. What the study shows is that when hemodynamics are unstable, pulse contour cardiac output monitoring is currently a non-starter.


Michael Fiedler, PhD, CRNA






Sakka SG, Kozieras J, Thuemer O, van Hout N



Measurement of cardiac output: a comparison between transpulmonary thermodilution and uncalibrated pulse contour analysis

Br J Anaesth 2007;99:337-342





Purpose            The purpose of this study was to evaluate the accuracy of Vigileo (Edwards Lifesciences, Irvine, CA) pulse contour cardiac output (PCCO) measurements in patients being treated for sepsis.

Background            Cardiac output is one of the most important hemodynamic variables in critically ill patients. Vigileo Pulse Contour Cardiac Output (PCCO) monitoring generates a cardiac output value by analysis of an arterial pressure waveform. It requires a peripheral arterial line and demographic information specific to the patient it is being used to monitor (e.g. height, weight). The Vigileo system does not use in vivo calibration with a reference source, such as thermodilution cardiac output. Studies evaluating the accuracy of PCCO measurements have yielded contradictory results.

Methodology            This prospective study included 24 sedated and mechanically ventilated septic patients. All patients were being monitored with a transpulmonary cardiac output (TpCO) monitor and all were receiving norepinephrine as part of their treatment. Vigileo Pulse Contour Cardiac Output monitoring was added for the purpose of the study and its cardiac output values were compared to those of the TpCO monitor. Each patient served as their own control.

Baseline cardiac output (CO) measurements were obtained with TpCO and PCCO. The mean arterial pressure (MAP) was then increased with norepinephrine and a second set of CO measurements was recorded. A third set of control CO measurements was recorded after the norepinephrine infusion was reduced to its previous rate. Each measurement was taken five minutes after a stable MAP was achieved. Fluid status, airway pressure, and FIO2 were unchanged during the study period.

Result            The mean APACHE II score of the subjects was 26. Their ages ranged from 26 to 77 years. There were 16 males and 8 females. Vigileo PCCO correlated poorly with TpCO over all measurements (r2=0.26, P<0.0001). In general, Vigileo PCCO measurements were lower than TpCO, but in some cases they were much higher.

Conclusion            The Vigileo PCCO system does not correlate well with transpulmonary thermodilution cardiac output.



The previous abstract and comment in this issue examined the same minimally invasive cardiac output technology with one distinction: the device was made by a different manufacturer. (See Reliability of continuous pulse contour cardiac output measurement during hemodynamic instability elsewhere in this issue.) Beyond the manufacturer, there are a couple of important differences in the studies themselves. First, this study looked at human patients, rather than an animal model. Second, the patients included in this study were examined during a period of hemodynamic stability. Past studies of PCCO technology have shown better accuracy during periods of hemodynamic stability than during unstable hemodynamics. (This fact alone is troubling. Why else would we want to monitor vital signs except to know when they had changed?) The last difference is the most informative. The previous study examined the difference between the PCCO values and thermodilution cardiac output values and found that the differences were quite large much of the time. This study looked at the correlation between PCCO values and thermodilution values. Even if PCCO was not accurate, if it went up when thermodilution went up and down when thermodilution went down it could still correlate well. In that case, the PCCO numbers might not mean much by themselves, but we would at least know when the cardiac output was going up or down. We’d know something qualitative about cardiac output. The first study showed that PCCO wasn’t accurate. This one showed that it didn’t correlate well with thermodilution either.

Two words of caution are warranted. It will take more than two studies to determine whether or not PCCO is accurate and/or reliable. Some other studies have shown better results. I don’t see a consensus yet. Oftentimes, when a new product is under development, different manufacturers produce more or less accurate devices of the same type. To know whether the technology works, all the devices will have to be tested. And, I feel obligated to reiterate, both of these studies compared PCCO to thermodilution cardiac output. Thermodilution cardiac output has a significant level of inaccuracy. It may be accurate enough to compare the current crop of fairly inaccurate PCCO devices against. But if PCCO devices get more accurate, they will need to be compared to a better “gold standard,” such as Fick or dye dilution cardiac output.


Michael Fiedler, PhD, CRNA




Correlation examines whether or not two things change together. If there is no relationship between HR and BP, the correlation coefficient is 0. If HR and BP consistently move up together, the correlation coefficient is a perfect 1; if one consistently moves down as the other moves up, the coefficient is -1. The coefficient measures the consistency of the relationship, not its size, so a correlation of -0.5 indicates a moderately consistent inverse relationship rather than one variable moving half as much as the other.
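A quick Python sketch (with invented numbers) makes the point concrete: the correlation coefficient reflects how consistently two series move together, not how large the movements are.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hr = [60, 70, 80, 90]
r_same     = pearson_r(hr, [100, 110, 120, 130])  # BP moves with HR: r ≈ 1
r_opposite = pearson_r(hr, [130, 120, 110, 100])  # BP moves opposite HR: r ≈ -1
r_half     = pearson_r(hr, [100, 105, 110, 115])  # half the magnitude, same consistency: still r ≈ 1
```

Note that the third series changes only half as much per step as the first, yet its correlation with HR is still a perfect 1, because the relationship is perfectly consistent.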



Makary M, Al-Attar A, Holzmueller C, Sexton B, Sin D, Gilson M, Sulkowski M, Pronovost P



Needle stick injuries among surgeons in training

N Engl J Med 2007;356:2693-9





Purpose            Because comprehensive, clearly detailed information about needle stick injuries among surgeons in training is not available, it is difficult to develop workable strategies to prevent such injuries. The purpose of this study was to investigate the prevalence of needle stick injuries and the circumstances surrounding them, as well as the behavior associated with reporting, or failing to report, these injuries among surgeons in training.

Background            It has been reported that healthcare workers sustain approximately 600,000 to 800,000 needle stick and other percutaneous injuries every year. The potential for catastrophic health consequences is huge, and the psychological stress of sustaining a needle stick injury is considerable. All healthcare workers who perform invasive procedures with needles and other sharp instruments are at high risk for injury, and it has been demonstrated that those who work in the operating room face the greatest risk. Surgeons, especially those in training, are considered at highest risk of exposure to blood borne pathogens because they use sharp instruments while operating and are still acquiring technical skills. There is currently a very high prevalence of human immunodeficiency virus (HIV) and hepatitis B and C viruses (HBV, HCV) among hospitalized surgical patients. A recent report from an urban academic hospital found that 20 to 38% of all procedures involved exposure to HIV, HBV, or HCV. Those are very alarming statistics.

As the science of prevention and treatment of these diseases continues to advance, timely reporting of exposures by those of us who sustain an injury is paramount to ensure appropriate counseling, facilitate prophylaxis or early treatment, and establish legal prerequisites for workers compensation. Failure to report exposures precludes interventions that could benefit the injured party, placing healthcare workers at unnecessary risk. Information regarding how often needle stick and other sharp instrument injuries occur, the circumstances surrounding them, and the barriers to reporting them is very limited.

Methodology             The study used a survey design. The respondents were surgeons in training at residency programs certified by the Accreditation Council for Graduate Medical Education in the United States. Study participants were recruited to answer survey questions after they completed the January 2003 American Board of Surgery In-Service Training Examination. Participation was voluntary, and return of the survey was considered implied consent. Approval for the study was obtained from the institutional review board of the Johns Hopkins University. The survey instrument was developed in 2002 by a panel of surgical residents and faculty with input from specialists in infectious disease and occupational safety. The instrument was pilot tested and assessed for validity and feasibility. The survey requested information about the respondents' year of clinical training, typical demographic information, the number of past needle stick injuries during training, needle stick injuries involving high risk patients, and an additional, expanded set of questions about the most recent event. Additional data gathered included:

  1. Which blood borne pathogen exposure was most feared?
  2. What activity resulted in the needle stick (e.g., suturing, passing a blade)?
  3. Did they follow the institution's reporting protocol?
  4. If they did not report the injury, why not?
  5. Who else knew about the injury (e.g., their attending, spouse, OR nurses)?


Results            There was a 95% response rate; of the 741 surgical residents invited to participate, 702 returned completed survey forms. Eighty three percent (83%) of the respondents had sustained a needle stick injury during their training, and the total number of needle stick injuries increased with the year of training. By the 5th year of training, 99% of the surgeons in training had sustained a needle stick injury, and for 53% of those, the injury involved a high risk patient. Details of the most recent needle stick injury included the following: 67% of those who had an exposure reported that the injury was self inflicted, 72% reported that the injury occurred in the operating room, and 52% reported that the injury occurred while suturing. Fifty seven percent (57%) cited a feeling of being rushed as the cause of the injury, and 20% felt that the injury could not have been prevented. Ninety percent (90%) of the respondents identified a single “cause” for the injury. An astounding 51% did not report their injury; even among injuries involving high risk patients, 16% went unreported. The main reasons cited for not reporting were that reporting takes too much time and that there is no utility in reporting. Factors significantly associated with not reporting included male gender, lack of involvement with the patient, occurrence in the operating room, no other person knowing of the injury, and the total number of needle stick injuries during training.

Conclusion            Needle stick injuries pose a huge occupational risk for surgical trainees. The number of respondents who sustained an injury in this study was remarkable, as was the number who sustained an injury and did not report it for treatment and prevention. The risks of under-reporting and of delaying or forgoing treatment are significant. The infections that can be contracted affect virtually all aspects of one’s life: personal, social, and professional. Reporting the injury enables counseling regarding the risks involved and the prevention of secondary transmission. It allows for medical evaluation and, if warranted, antiretroviral therapy. It is interesting to note that antiretroviral therapy administered within 24 to 36 hours after exposure has been associated with an 81% reduction in HIV infection. Although no post exposure prophylaxis is available for the hepatitis C virus, testing for HCV RNA can identify HCV infection at an early stage, when treatment is highly effective in preventing progression to chronic infection. Also critically important, reporting these injuries can establish the causal relationship between the exposure and subsequent complications.



I hope that as you, professional anesthesia providers, read through this article, you noticed how important this study is to our own field. While we are obviously not ‘surgeons in training’, our exposure to occupational risks such as needle stick and percutaneous injuries is equally threatening and can result in consequences no less serious than those experienced by surgeons in training. My educated guess is that our exposure rate is higher than it should be, or than we want to believe it is; that our reporting of occurrences is as low as that of those studied in this article; and that our reasons for NOT reporting occurrences or NOT seeking treatment are comparable to those cited in this study. The analogy I hoped I would not have to make is, I have come to realize, true. Whether as students in anesthesia training or as practicing anesthesia providers, we are subjected every day, in almost every task we perform, to the risk of exposure to a life threatening disease. We suture central lines in place, we use scalpels during central line insertion, we suture arterial lines in place, we use large needles for epidural insertion, and we use large and small needles for intravenous insertion, spinal anesthesia, and other regional anesthesia techniques. It is critical and non-negotiable that we understand the importance of preventing exposure as well as the importance of reporting incidents and seeking appropriate counseling, guidance, and treatment. Reporting such incidents can not only save our lives, it can assist us in developing serious prevention strategies.


Mary A. Golinski, PhD, CRNA






Wischmeyer PE, Johnson BR, Wilson JE, Dingmann C, Bachman HM, Roller E, Vu Tran Z, Henthorn TK



A survey of propofol abuse in academic anesthesia programs

Anesth Analg 2007;105:1066-1071

Wischmeyer PE, Johnson BR, Wilson JE, Dingmann C

Bachman HM, Roller E, Vu Tran Z, Henthorn TK




Purpose            The purpose of this survey was to determine the prevalence and outcome of propofol abuse in academic anesthesia departments with residency training programs in the USA.

Background            Propofol is used extensively for induction of general anesthesia. It is not commonly considered a drug of abuse but propofol increases dopamine in brain areas thought to be associated with reward. This is the mechanism by which alcohol and other drugs are believed to reinforce substance abuse. Propofol has been reported to produce euphoria and there are case reports of abuse in anesthesia personnel.

A wide ranging survey of multiple drugs of abuse conducted in academic anesthesia programs during the 1990s reported two cases of propofol abuse in 11,666 individuals. The calculated 10-year incidence of propofol abuse was 0.02%.

Methodology            A primary survey was emailed to the chairperson of 126 anesthesia departments with residency programs in the USA. A second round of primary surveys was sent to those who did not respond to the first distribution. This was followed up with personal emails and phone calls. A secondary survey was sent to departments that reported any cases of propofol abuse. The secondary survey sought specific information about propofol abuse incidents. The denominator in incidence calculations was 23,385. This number was derived from the number of attending physicians and residents in residency programs. The number of certified registered nurse anesthetists (CRNAs) in departments with residency programs was estimated by assuming 20 CRNAs per program.

Result            Of the 126 primary surveys sent out in the first round there were 67 responses (53%). The second round yielded an additional 26 responses (21%). The response rate after two rounds was 74%. Phone calls and personal emails continued until the total response rate was 100%.

Twenty-five of 126 departments (20%) reported that one or more individuals had abused propofol in the previous 10 years. Two departments reported more than one incident. Seven deaths were reported: six residents and one anesthesia technician. Other drugs in addition to propofol may well have contributed to these deaths. The incidence of death among those reported to have abused propofol was 28%; among residents reported to have abused propofol it was 38%. The overall reported incidence of propofol abuse was 0.10%. (This incidence uses a denominator that was partially estimated.) This incidence is five times higher than that of the previous study, but still a tenth or less of the incidence of other drugs of abuse.
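The incidence figures can be cross-checked with simple arithmetic (a sketch using the numbers reported in the abstract; the count of roughly 25 abusers is inferred from the 7 deaths and the 28% death rate, and the denominator is the survey's partially estimated 23,385):

```python
# Earlier survey (1990s): 2 cases among 11,666 individuals
prior_incidence = 2 / 11_666
print(f"{prior_incidence:.2%}")        # 0.02%

# Current survey: denominator of 23,385 providers (partially estimated)
providers = 23_385
deaths = 7
death_rate = 0.28                      # reported incidence of death among abusers

abusers = round(deaths / death_rate)   # 7 / 0.28 = 25 individuals (inferred)
incidence = abusers / providers
print(abusers)                         # 25
print(f"{incidence:.1%}")              # 0.1%, about five times the earlier figure
```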

Propofol was controlled by the pharmacy at 29% of the institutions represented in the survey and uncontrolled in 71%. Three of the 25 (12%) programs that reported propofol abuse had pharmacy control of propofol. Lack of pharmacy control of propofol was associated with propofol abuse (P=0.048).

Conclusion            Propofol abuse has become a growing problem in academic anesthesia programs. Pharmacy control of propofol should be considered.



The conclusions of this study concern me. While they may appear to be supported by the results of the survey, any support for them is weak at best.

The number of individuals reported to have abused propofol is based upon statements from chairs of academic departments only; it is not a sample of anesthesia providers in general. (It does not, for example, include any private anesthesia practices.) It is also an attestation by the chair, and we have no information about how chairs decided upon the number of cases of abuse they reported. Their reports may or may not be valid numbers. Next, the number of propofol abuse cases was divided by the total number of anesthesia personnel, a number that was partially estimated. As a result, the calculated incidence is, at best, an estimate, and it may not be generalizable to all anesthesia providers. This estimated incidence is then compared with an incidence from an earlier survey that was conducted on a different population and looked at multiple drugs of abuse. I am skeptical that these two studies are comparable enough to yield a valid comparison of the incidence of propofol abuse over time, as the authors intend. Using this information to reach the conclusion that propofol abuse is increasing in frequency is too much of a stretch for me.

I can only imagine that the recommendation to control propofol in the same way we control opioids was made by people who don’t have to work in the OR putting patients to sleep every day. It is difficult to imagine how adding paperwork, and adding to the workload of pharmacists and anesthesia providers, would reduce the incidence of propofol abuse to a meaningful extent. A number of other drugs that are controlled by pharmacy and used in much smaller quantities have much higher incidences of abuse. The so-called association between departments that did not control their propofol and the rate of propofol abuse is equally unimpressive. It comes as no surprise that propofol abuse is associated with departments that don’t control it when almost no department controls access to propofol.

It is tragic that a number of individuals died abusing anesthetic drugs. Efforts to better understand the incidence, what leads individuals to abuse anesthetic drugs, who is likely to abuse them, and how to intervene in a meaningful way are worthwhile. This survey may provide us with an estimate of the incidence of propofol abuse in academic anesthesia departments. I caution against using it for any other purpose.


Michael Fiedler, PhD, CRNA






Chen M, Habib AS, Panni MK, Schultz JR



Detecting an infiltrated intravenous catheter using indigo carmine: a novel method

Anesth Analg 2007;105:1130-1131

Chen M, Habib AS, Panni MK, Schultz JR




Purpose            The purpose of this case report was to describe a new method for assessing a peripheral intravenous (IV) catheter for intravascular vs. subcutaneous placement.

Background            An intended intravenous catheter that is actually subcutaneous may be uncomfortable for the patient. It may also prevent the timely administration of critical medications or allow the extravasation of a drug that causes tissue damage. Traditional methods of diagnosing a misplaced IV are usually, but not always, sufficient.

Methodology            A peripheral IV was started in a woman with a body mass index (BMI) of 44 who was edematous due to her medical illness. After the catheter was placed there was some question about whether or not it was in the vein. Traditional methods of making that determination were equivocal. Administering a small dose of epinephrine and observing for an increase in heart rate was considered but ruled out as having more risk than benefit. Finally, 1 mL of indigo carmine dye was injected into the IV. The dye was seen to flow through superficial veins of the skin in a branching pattern. Within 2 seconds it disappeared, leaving no trace of blue dye. Half an hour later the IV site was swollen and the IV had stopped dripping. A second 1 mL injection of indigo carmine resulted in a localized blue discoloration at the catheter tip which did not quickly fade away.

Result            In both instances, injection of indigo carmine dye into the IV was a useful tool to help determine whether the peripheral catheter was properly placed intravascularly.

Conclusion            Injection of indigo carmine dye into an IV may help the observer determine whether the catheter is intravascular or subcutaneous.



This is a nice, concise report that suggests one more tool to keep in my bag. I doubt I’ll need it often, but when I do I’ll thank these authors for teaching it to me.


Michael Fiedler, PhD, CRNA







Chondrogiannis KD, Siontis GCM, Koulouras VP, Lekka ME, Nakos G


Acute lung injury probably associated with infusion of propofol emulsion

Anaesthesia 2007;62:835-837

Chondrogiannis KD, Siontis GCM, Koulouras VP, Lekka ME, Nakos G



Purpose            The purpose of this report was to describe a case of acute lung injury and hypoxia due, most likely, to a propofol infusion.

Background            Propofol is used for induction and maintenance of general anesthesia as well as for sedation in the operating room and the intensive care unit. Propofol is prepared as a lipid emulsion. Lipid emulsions may affect lung mechanics, pulmonary vascular resistance, and, ultimately, respiratory gas exchange. Parenteral nutrition with fat emulsion has been associated with adverse effects on lung function in patients with increased alveolar-capillary membrane permeability. When this occurs, the degradation in lung function depends upon both the concentration of the fat emulsion and the infusion rate. A previous case report describes hypoxia in a chronic obstructive pulmonary disease patient who received a propofol infusion.

Methodology            A 49 year old woman arrived in the emergency room with a Glasgow Coma Scale score of 6, neck stiffness, and focal neurologic deficits. She was intubated and ventilated. Initial PaO2 was 148 mm Hg on 35% oxygen. Her chest radiograph was normal. She was sedated with propofol at 67 µg/kg/min. She underwent clipping of a ruptured middle cerebral artery aneurysm. Intracranial pressure was normal postoperatively. The propofol infusion was continued for sedation.

On day three her PaO2 dropped to 65 mm Hg on 60% oxygen. Chest radiography showed diffuse bilateral infiltrates but no pleural effusion. Analysis of bronchial lavage fluid showed negative cultures, elevated proteins, elevated phospholipids, and elevated cholesterol. Chromatography revealed a lipid profile similar to that of 2% propofol solution. No other risk factors for acute lung injury were identified.

Result            The propofol infusion was discontinued. Within three days oxygenation and radiographic findings improved significantly. By seven days arterial blood gases were normal.

Conclusion            Increased alveolar-capillary membrane permeability due to brain injury was thought to have allowed intravascular propofol to extrude into the lungs causing hypoxia.



This is likely a very rare complication. It might be tempting to dismiss it as something that will only happen in the ICU when propofol is administered for long periods, and then only in susceptible patients. If that is true, as propofol experts it is good for us to be aware of the complication when we have occasion to advise those who are providing long term ICU sedation. I suspect, however, that we could see this complication in the operating room during much shorter infusions of propofol. It appears that patients with increased alveolar-capillary permeability are at risk for this complication. In those patients, the higher the propofol infusion rate, the more likely propofol is to extrude into the lungs. In this case, the propofol infusion rate was only about 67 µg/kg/min. We may use an infusion rate three times as high during a general anesthetic. I’m guessing (the published report is silent on this point) that the patient in the case report did not receive a propofol infusion during her craniotomy. This would partially explain why her hypoxia didn’t develop in the OR.

The bottom line here is somewhat speculative but, I believe, worth considering. In patients with increased alveolar-capillary permeability, propofol delivered by infusion may extrude into the lungs resulting in hypoxia. Increased lung permeability and faster infusion rates both increase the risk. Being aware of this possibility will help us plan more appropriate anesthetics and allow us to react more quickly if the complication should develop.


Michael Fiedler, PhD, CRNA


La Colla L, Albertin A, La Colla G, Mangano A


Faster wash-out and recovery for desflurane vs sevoflurane in morbidly obese patients when no premedication is used

Br J Anaesth 2007;99:353-358

La Colla L, Albertin A, La Colla G, Mangano A



Purpose            The purpose of this study was to compare the kinetics and clinical end points of desflurane and sevoflurane in morbidly obese patients during wash-in and emergence.

Background            Morbidly obese individuals have a relatively lower proportion of lean body mass, and blood flow to fat tissue is lower in obese than in non-obese individuals. Obesity alters cardiopulmonary dynamics, which play a role in the uptake and elimination of inhalation agents. The kinetics of many drugs, including potent inhalation agents, differ between morbidly obese and non-obese individuals. In particular, as total body weight and the duration of anesthetic administration increase, both the blood : gas and oil : gas partition coefficients have a larger influence on the rate of emergence from anesthesia. Desflurane’s oil : gas partition coefficient is about 64% lower than that of sevoflurane. Consequently, desflurane has a smaller apparent volume of distribution than sevoflurane.

Methodology            This prospective, randomized, double-blind study included 28 ASA class II and III patients scheduled for elective open bilio-intestinal bypass surgery. Exclusion criteria included ASA physical status class > III, age < 20 years or > 65 years, a history of alcohol or other drug abuse, and myocardial dysfunction.

To avoid bias in emergence times, no patients received any premedication. All patients were intubated with a fiberoptic bronchoscope during a pharmacokinetic model-driven target controlled remifentanil infusion. General anesthesia was then induced with propofol 2 mg/kg while the remifentanil infusion continued. Patients were mechanically ventilated with 50% oxygen in air via a non-rebreathing circuit. Cis-atracurium was used for neuromuscular blockade. The anesthetic vaporizer was then turned directly to either 2% sevoflurane or 6% desflurane according to randomization. These concentrations were maintained for a 30 minute data collection period before surgery was begun. During surgery, the remifentanil infusion was titrated to keep heart rate and blood pressure within 10% of baseline values. Sevoflurane 1% to 2% or desflurane 3% to 4% was titrated to achieve a Bispectral Index (BIS) value of 40 to 50.

During wound closure all patients received morphine 0.05 mg/kg and ketorolac 30 mg (Editor’s note: the route of administration was not specified). During the last 20 minutes of surgery, inhalation agent concentration was decreased to achieve a BIS value of 60. After the last suture the end tidal agent concentration was recorded and the vaporizers were turned off. End tidal agent concentration was then recorded every 30 seconds for 5 minutes. Mechanical ventilation was continued until the first spontaneous breath.

Result            There were no statistically significant differences in patient demographics between groups. The average age was 27.1 (SD 12.9) years. The average body mass index was 50.6 (SD 5.4): 48 in the sevoflurane group and 53 in the desflurane group (P=0.180). The duration of anesthesia was 180 minutes in the sevoflurane group and 193 minutes in the desflurane group (P=0.232).

The wash out of desflurane was significantly faster than that of sevoflurane at all time points (P<0.01). At five minutes the end tidal concentration of desflurane was about 10% of what it was when the vaporizer was turned off; the sevoflurane concentration was about 20% of what it was when the vaporizer was turned off. Desflurane patients squeezed the investigator’s hand in 8 minutes vs. 15.8 minutes for sevoflurane patients. Desflurane patients were extubated in 9.4 minutes vs. 16.4 minutes for sevoflurane patients. Desflurane patients were discharged from recovery in 16.3 minutes vs. 27.0 minutes for sevoflurane patients. These and other clinical end points were all significant at P<0.001.
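Inhalation agent washout is multi-compartmental, but as a rough single-exponential sketch (an oversimplification used only for illustration), the reported five-minute fractions imply effective washout time constants of roughly 2.2 minutes for desflurane and 3.1 minutes for sevoflurane:

```python
import math

# Fraction of the vaporizer-off end tidal concentration remaining at 5 minutes,
# as reported in the study
t = 5.0                       # minutes
frac_des, frac_sevo = 0.10, 0.20

# Effective time constant tau from F(t) = exp(-t / tau)
tau_des = -t / math.log(frac_des)
tau_sevo = -t / math.log(frac_sevo)

print(round(tau_des, 1))      # 2.2 minutes
print(round(tau_sevo, 1))     # 3.1 minutes

# Under the same crude model, fraction remaining at 2 minutes:
print(round(math.exp(-2 / tau_des), 2))   # 0.4
print(round(math.exp(-2 / tau_sevo), 2))  # 0.53
```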

Conclusion            Morbidly obese patients recover more quickly from desflurane than from sevoflurane anesthesia following an approximately three hour anesthetic.



The blood : gas solubilities of desflurane and sevoflurane are relatively similar (desflurane 0.45, sevoflurane 0.65) and much lower than the agents we used before they became available (isoflurane 1.4). Although fat absorbs inhalation agents slowly, each agent’s solubility in fat largely determines how quickly it can be eliminated from fat. Desflurane’s fat : blood partition coefficient is 29. Sevoflurane’s fat : blood coefficient is 52, similar to that of isoflurane (50) and halothane (57).

This study tries to answer the question, “does the lower fat solubility allow morbidly obese patients to wake up more quickly after a desflurane anesthetic than a sevoflurane anesthetic?” It does a pretty good job of answering the question (“yes”). Much as in lean patients, once the vaporizer is turned off and the fresh gas flow is increased to prevent rebreathing of inhalation agent, the fraction of sevoflurane exhaled at any given time, compared to the concentration just before the vaporizer was turned off, is about twice that of desflurane. Whether or not the faster wake up is clinically significant will depend upon each anesthetist’s practice environment and anesthetic technique. If you can take advantage of being able to extubate morbidly obese patients 7 minutes faster and get them out of recovery 11 minutes faster following a three hour case, the difference will be clinically significant. Also, keep in mind that relatively low concentrations of inhalation agent were used during the maintenance phase of the case.

I do want to suggest one way in which these differences in recovery time might be an advantage for all our obese patients. A previous study showed that two minutes after responding to command, all desflurane patients had a complete return of airway reflexes and could swallow water without coughing or drooling.1 At the same time, 55% of sevoflurane patients did not have a complete return of airway reflexes, and 18% still did not at six minutes. If this delay in recovery of airway reflexes was related to the remaining concentration of agent, the faster elimination of desflurane may offer a wider margin of safety against airway complications during early recovery.

There were several aspects of this study design that may have resulted in biased results. The maintenance concentrations of desflurane and sevoflurane allowed (3% - 4% and 1% - 2% respectively) represent a lower MAC fraction of desflurane than sevoflurane. If a lower MAC equivalent of desflurane was administered the results would have been biased in favor of desflurane. The actual concentration of inhalation agent delivered during the maintenance phase of the cases was not reported so there is no way to know if the study results were biased. The target controlled remifentanil infusion was also titrated during the case. The actual rates of remifentanil infusion during maintenance were not reported. If one group received more remifentanil than the other it is likely that less inhalation agent would have been administered in that group. The study results would therefore have been biased in favor of the agent used in that group.


1. McKay RE, Large MJ, Balea MC, McKay WR. Airway reflexes return more rapidly after desflurane anesthesia than after sevoflurane anesthesia. Anesth Analg 2005;100:697-700.


Michael Fiedler, PhD, CRNA






Nuttall GA, Eckerman KM, Jacob KA, Pawlaski EM, Wigersma SK, Shirk Marienau ME, Oliver WC, Narr BJ, Ackerman MJ



Does low-dose droperidol administration increase the risk of drug-induced QT prolongation and torsade de pointes in the general surgical population?

Anesthesiology 2007;107:531-536

Nuttall GA, Eckerman KM, Jacob KA, Pawlaski EM, Wigersma SK, Shirk Marienau ME, Oliver WC, Narr BJ, Ackerman MJ




Purpose            The purpose of this study was to determine the incidence of torsade de pointes (TdP) associated with droperidol use in anesthesia.

Background            Approximately 1 in 3,000 individuals have congenital long QT syndrome. A considerable number of drugs and medical conditions are associated with QT prolongation and TdP, notably including a number of other antiemetic drugs. (Unlike droperidol, none of these other antiemetics carry a black box warning.) Droperidol has been used as an antiemetic in millions of patients over more than 30 years. In 2001 the Food and Drug Administration (FDA) issued a black box warning regarding droperidol, prolonged QT interval, and potentially fatal TdP. The FDA’s action was based upon 10 reports associated with droperidol doses of 1.25 mg or less over the entire time droperidol has been on the market in the USA.

Methodology            This retrospective study included over 100,000 patients who underwent surgery and anesthesia during each of two three-year periods: one before the FDA black box warning was added to droperidol and one after. The incidence of droperidol use during each three-year period was determined from a random sample of 150 anesthetic records. The charts of all patients with a prolonged QTc interval or ventricular tachycardia, and of those who died within two days of surgery, were reviewed to identify patients who experienced TdP.

Result            In the pre black box warning period 139,932 patients underwent surgery and anesthesia. The incidence of droperidol administration was 12% (95% CI 7.3% to 18.3%). Thus, approximately 16,791 (95% CI 10,173 to 25,607) patients received droperidol. During this time, 2,321 patients (1.66%) had a prolonged QT, TdP, or death within two days after surgery.

The post black box warning period included 151,256 patients. The incidence of droperidol administration was 0%. During this time, 2,207 patients (1.46%) had a prolonged QT, TdP, or death within two days after surgery.
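The patient counts and percentages for the two periods follow from simple arithmetic (a sketch using the abstract's numbers; the droperidol count is approximate because the 12% incidence came from a 150-patient random sample):

```python
patients_pre = 139_932        # patients in the pre-warning period
droperidol_rate = 0.12        # 12% incidence, from the 150-patient sample

received_droperidol = int(patients_pre * droperidol_rate)
print(received_droperidol)    # 16791 patients (approximate point estimate)

events = 2_321                # prolonged QT, TdP, or death within two days
print(f"{events / patients_pre:.2%}")    # 1.66%

# Post-warning period for comparison
patients_post = 151_256
print(f"{2_207 / patients_post:.2%}")    # 1.46%
```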

In the three year period before the black box warning there were no documented cases of TdP. Of the 456 patients in this period who died within two days after surgery, there was one patient in whom TdP could not be ruled out as the cause of death. This patient had received droperidol 1.25 mg IV in the OR before noon and ondansetron 4 mg at 5:00 pm in recovery. She was seen to be well at 9:30 pm and found dead at 10:00 pm.

In the three year period after the black box warning there were two documented cases of TdP. Neither of these patients received droperidol.

Conclusion            There was no difference in the incidence of TdP during the time when low dose droperidol was used as an antiemetic and the time when droperidol was not used. The FDA black box warning and FDA guidelines for ECG monitoring before and after low dose droperidol administration are not supported by these findings.



There is no reason to stop using low dose droperidol as an antiemetic. Of course, like all drugs, one should consider the risks and benefits before administering droperidol. There is plenty of support in the scientific literature for droperidol’s continued use in antiemetic doses. This study provides more such support and, although retrospective, includes a larger number of patients than any other study I’ve read on the topic. However, given how uncommon torsade de pointes is following droperidol administration at any clinically useful dose, even the large number of patients included in this study might not be enough to reveal the true incidence of complications following droperidol. Or … perhaps that is the point!


Michael Fiedler, PhD, CRNA




For more information on droperidol and the black box warning see the following articles:

Anesth Analg 2003;96:1377

Anesth Analg. 2004;98:1330

Anesth Analg 2003;97:1542



Gonzales E, Moore F, Holcomb J, Miller C, Kozar R, Todd S, Cocanour C, Balldin B, McKinley B.



Fresh frozen plasma should be given earlier to patients requiring massive transfusion

J Trauma 2007;62:112-9.

Gonzales E, Moore F, Holcomb J, Miller C, Kozar R, Todd S, Cocanour C, Balldin B, McKinley B.




Purpose            The deadly triad of acidosis, hypothermia, and coagulopathy in patients who have suffered massive trauma with exsanguinating hemorrhage was identified more than 20 years ago. Its identification led to fundamental changes in the initial emergency management of the severely injured patient. Despite advances in the science of trauma care, uncontrolled bleeding remains a leading cause of early death in trauma victims. The latest evidence demonstrates that a coagulopathic state begins much earlier than previously suspected and is evident upon the patient’s arrival in the emergency room, most often before any resuscitative efforts have begun. Traditional massive transfusion practices appear to grossly underestimate the patient’s needs. The purpose of this study was to determine whether a specific hospital’s massive transfusion protocol adequately corrected the coagulopathies of the severely injured, and to assess whether uncorrected coagulopathy is predictive of increased mortality.

Background            Regional trauma systems triage critically injured patients to Level 1 trauma centers, where prevention of hypothermia, damage control surgery, adherence to a massive transfusion protocol, and early intensive care unit triage for optimization of resuscitation efforts take precedence. Even with these advanced processes, hemorrhage remains a leading cause of early death in civilian trauma and military combat casualty care. The researchers in this study had developed a formal massive transfusion protocol in the late 1990s. One component of their protocol was to transfuse fresh frozen plasma (FFP) only after a trauma victim had received 6 units of packed red blood cells (PRBCs). Delaying FFP until after the first six units of PRBCs had been administered reflected the previous belief that posttraumatic coagulopathy develops over time as a result of acidosis, hypothermia, and resuscitation-related hemodilution and consumption of clotting factors. The most recent evidence is leading experts to challenge this traditional thinking. Evidence now points to the realization that patients are coagulopathic upon arrival in the emergency room, well before aggressive resuscitation interventions take place. Newer information has identified that early prolongation of the prothrombin time is the sentinel event, that early administration of FFP is paramount in preventing coagulopathy, and that the optimal replacement ratio of FFP:PRBC is 2:3. The current recommendation is that patients at high risk of requiring massive transfusion be given 2 units of FFP with the first unit of PRBCs.
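The FFP:PRBC ratios discussed above lend themselves to a simple calculation. As a minimal illustrative sketch (the function name and the round-up convention are mine, not from the article), the FFP units implied by a given ratio can be computed as:

```python
def ffp_units_needed(prbc_units: int, ffp_ratio: tuple) -> int:
    """Units of FFP to pair with a given number of PRBC units.

    ffp_ratio is (FFP, PRBC): for example, (2, 3) for the 2:3
    replacement ratio cited in the article, or (1, 1) for the
    authors' revised 1:1 protocol.
    """
    ffp, prbc = ffp_ratio
    # Round up, so FFP is never under-dosed relative to the target ratio
    return -(-prbc_units * ffp // prbc)

# Under the 2:3 ratio, the 6 units of PRBCs that the older protocol
# required before any FFP would already have called for 4 units of FFP.
print(ffp_units_needed(6, (2, 3)))  # 4
print(ffp_units_needed(6, (1, 1)))  # 6
```

This is only arithmetic on the published ratios, of course; actual transfusion decisions depend on the clinical protocol and the patient.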

Methodology            This research was carried out as a prospective, non-experimental project. High risk patients admitted to the Shock Trauma ICU at Memorial Hermann Hospital in southeast Texas who met specific criteria underwent a 24 hour standardized shock resuscitation process directed by computerized decision support. Detailed data describing the patients’ clinical course and resuscitation process were obtained prospectively. Inclusion criteria were 1) major torso trauma, defined as injury of two or more abdominal organs, two or more long bone fractures, complex pelvic fracture, flail chest, or major vascular injury; 2) metabolic stress, defined as a base deficit ≥ 6 mEq/L within 12 hours of hospital admission; and 3) an anticipated transfusion requirement of ≥ 6 units PRBCs within 12 hours of hospital admission, or age ≥ 65 years with any two of the three previous criteria. Patients with brain injury were excluded due to the risk of worsening cerebral edema. Comprehensive hemodynamic data were gathered, including pulmonary artery pressure values at protocol and patient appropriate times. Data were also gathered on vasopressor and inotropic medication support, frequent hemoglobin and other pertinent laboratory values, arterial blood gas analysis, hypothermia and coagulopathy, and other events in the clinical course (for example, surgery or other procedures). At the beginning of the shock resuscitation protocol, baseline body core temperature, arterial blood gas data, and a coagulation profile comprising prothrombin time (PT), international normalized ratio (INR), platelet count, partial thromboplastin time, and fibrinogen concentration were obtained. The same data were recorded on a scheduled basis for the duration of the 24 hour process. Additional data characterizing the pre-ICU course were recorded retrospectively. All data were recorded in a Trauma Research Database.
During the 51 months ending January 2003, 200 patients were managed with the shock resuscitation protocol, of whom 97 received a massive blood transfusion (>10 units of PRBCs). Data describing patient demographics, the pre-ICU course, the ICU resuscitation, and outcomes were then extracted from the Trauma Research Database for this study, with a focus on hypothermia, acidosis, and coagulopathy.

Results            Of the 97 patients who were resuscitated using this trauma center’s protocol and received massive transfusion (the study cohort), 68 lived and 29 died. The mean age was 39 years; 61 were men. All patients (n = 79) required emergency surgery and/or interventional radiology procedures and were admitted to the ICU approximately 6.8 hours after emergency department admission. The mean injury severity score was 29. Blunt trauma was the predominant mechanism of injury. The mean INR upon arrival in the emergency room was 1.8. The patients were not hypothermic upon admission to the ICU, and their base deficit, while still present, was approaching normal values within 8 hours. This speaks to appropriate and aggressive warming as well as management of metabolic derangements. The protocol, however, did not advocate sodium bicarbonate therapy in the emergency department, the operating room, or the ICU unless the arterial pH was < 7.20. Statistical analysis found that the severity of coagulopathy (indicated by the INR) measured at arrival in the ICU was associated with survival. With severe coagulopathy (INR ≥ 2.0), the probability of death was ≥ 50%.

Conclusion            The data indicated that acidosis and hypothermia were reasonably well managed at this Level 1 trauma center, and that its rigorous resuscitation protocols for preventing acidosis and hypothermia appeared effective. Coagulopathy, however, remained a significant problem. These critically injured patients presented to the emergency department with an INR of ~1.8, supporting the evidence that derangements of the clotting cascade, and coagulopathy itself, were present upon arrival in the emergency room. Even though these patients were treated aggressively according to protocols consistent with the accepted trauma standard of care, their coagulopathies were never definitively corrected. A moderate elevation in INR (1.4) persisted in the ICU for the last sixteen hours of the first 24 hour period. Within the ICU, transfusion of FFP was the primary intervention for correcting coagulopathy, and a 1:1 ratio of FFP:PRBC was used. Note that this ratio exceeds published recommendations. The authors believe that the failure to correct coagulopathy during ICU resuscitation was largely attributable to inadequate pre-ICU interventions. The results of this study led the authors to develop and implement a standard protocol for prevention and correction of coagulopathy that starts in the emergency department. Their major policy change: emphasis on early FFP administration in a ratio of 1 unit of FFP to 1 unit of PRBCs, beginning with the first unit of PRBCs transfused.



The evidence was more than troubling to these researchers. Uncorrected coagulopathy at ICU admission was associated with ongoing transfusion requirements, and the severity of the coagulopathy was directly related to an increased risk of death. Finding evidence that the coagulopathy was present upon emergency room admission was paramount. The researchers found themselves scientifically assessing and questioning their entire protocol. They reviewed the literature, the most current studies, and others’ protocols; their prospective data gathering and trauma database provided them with a starting point from which to change a piece of the process. This is a great example of evidence based care!  Of course, it will be crucial to study whether changing the FFP:PRBC ratio at the initiation of the resuscitation protocol leads to improved patient outcomes.

As anesthetists, we frequently receive trauma victims directly from the emergency department. A major goal is to prevent the deadly triad: acidosis, coagulopathy, and hypothermia. Preventing coagulopathy is very difficult. For example, it is well known that 25% of those diagnosed with critical pelvic fractures, an unfortunate yet common traumatic injury, experience major hemorrhage. We also know that persistent hypotension following trauma is usually the result of bleeding and massive hemorrhage. We infuse crystalloid and begin administering blood products. This, in and of itself, has been known to worsen things: crystalloid can lead to dilutional factor deficiencies, as can red blood cells. We are used to diagnosing coagulopathy during an emergent anesthetic, and typically this diagnosis is made after abnormal bleeding is noticed in the surgical field, usually long before we get stat laboratory results. Optimization of coagulation status during surgery and anesthesia correlates with favorable postoperative outcomes. The evidence has now identified that trauma victims are presenting to emergency departments with critical laboratory values, such as INRs of >1.8; this should lead to a major change in trauma care critical pathways. Immediate optimization of coagulation status and prevention of worsening hemorrhage is recommended. It appears that it is no longer acceptable to begin massive transfusion with 4 or 5 units of PRBCs and large amounts of crystalloid and only then, after the PRBC transfusion, call for FFP. The analogy is similar to the patient who has begun the downward hypothermic spiral: we try to catch up to a normothermic state while some of our interventions are worsening the hypothermia. It becomes extremely difficult to return to a normal physiologic state after the downward spiral has begun. The key appears to be identifying the situation early and warding it off with treatment BEFORE it is too late to catch up.

Knowing that patients’ coagulopathic states have already begun when they enter the system has major implications for trauma anesthesia practice. Administering FFP from the very start of the massive transfusion protocol truly may improve all resuscitation and interventional opportunities.


Mary A. Golinski, PhD, CRNA

© Copyright 2007 Anesthesia Abstracts · Volume 1 Number 6, September 30, 2007