ISSN NUMBER: 1938-7172
Issue 1.8

Michael A. Fiedler, PhD, CRNA

Contributing Editors:
Mary A. Golinski, PhD, CRNA
Alfred E. Lupien, PhD, CRNA

Guest Editor:
Steven R. Wooden, MS, CRNA

Assistant Editor:
Jessica Floyd, BS

A Publication of Lifelong Learning, LLC © Copyright 2007

New health information becomes available constantly. While we strive to provide accurate information, factual and typographical errors may occur. The authors, editors, publisher, and Lifelong Learning, LLC are not responsible for any errors or omissions in the information presented. We endeavor to provide accurate information helpful in your clinical practice. Remember, though, that there is a lot of information out there and we are only presenting some of it here. Also, the comments of contributors represent their personal views, colored by their knowledge, understanding, experience, and judgment, which may differ from yours. Their comments are written without knowing details of the clinical situation in which you may apply the information. In the end, your clinical decisions should be based upon your best judgment for each specific patient situation. We do not accept responsibility for clinical decisions or outcomes.

Krajewski W, Kucharska M, Pilacik B, Fobker M, Stetkiewicz J, Nofer JR, Wrońska-Nofer T


Impaired vitamin B12 metabolic status in healthcare workers occupationally exposed to nitrous oxide

Br J Anaesth 2007;99:812-818

Krajewski W, Kucharska M, Pilacik B, Fobker M, Stetkiewicz J, Nofer JR, Wrońska-Nofer T



Purpose            The purpose of this study was to compare biochemical markers of vitamin B12 metabolism between nurses occupationally exposed to various levels of nitrous oxide and unexposed controls.

Background            Nitrous oxide (N2O) is commonly used during general anesthesia. While considered safe, habitual N2O exposure, and exposure in some vulnerable patients, can result in hematopoietic and central nervous system complications including agranulocytosis and polyneuropathy. N2O oxidizes vitamin B12 (cobalamin) from an active to an inactive state. Inactivation of B12 inhibits methionine synthase, an enzyme needed for DNA production. Because homocysteine is consumed during methionine production, inhibition of methionine synthase decreases homocysteine consumption and increases homocysteine levels. Elevated homocysteine levels are associated with increased risk of thrombosis and coronary artery disease, and they occur in individuals in whom methionine synthase has been inhibited by N2O exposure. When anesthesia waste gas scavenging systems are not properly used, healthcare workers can be exposed to excessive levels of N2O.

Theoretically, at least, those chronically exposed to high N2O levels might be at greater risk for both vitamin B12 deficiency and elevated homocysteine levels.

Methodology            This prospective, cross-sectional study included female nurses from 10 different hospitals in the area of Lodz, Poland. The Control group included nurses who did not work in the operating room (OR) and were not occupationally exposed to N2O. The Exposed group included OR nurses in the same hospitals who were routinely in the OR for five or more hours a day. Exclusion criteria included overt hematologic disease, serious symptoms of neurologic deterioration, and heart failure. All participants underwent a physical examination. None had clinical signs of vitamin B12 deficiency or signs of acute N2O intoxication. All denied B12 or folic acid therapy within three years.

The normal vitamin B12 serum level was 156 – 672 pmol/L. For the study, B12 values were arbitrarily divided into the following categories:

            Low                          150 – 250 pmol/L

            Borderline Low            250 – 300 pmol/L

            Medium                     300 – 350 pmol/L

            High                      > 350 pmol/L

The upper bound of normal total homocysteine concentration was defined as 12.8 µmol/L.

N2O levels were sampled in 26 ORs in 10 hospitals. Fifteen of the 26 ORs had anesthesia gas scavenging devices in place. Three different levels of ventilation were represented in the 26 ORs, ranging from a low of 6-10 air changes per hour to a high of >15 air changes per hour.

Result            Data were collected on 185 nurses (90 Control, 95 Exposed) between 25 and 56 years old. Each had worked for between 5 and 26 years, and the length of employment was comparable between groups. There was no difference between groups in red blood cell count, hemoglobin, or hematocrit. Average serum B12 concentration was 27% lower (P=0.001) and homocysteine concentration 26% higher (P=0.006) in the Exposed group. Nevertheless, average B12 and homocysteine levels were still within the normal range in the Exposed group.

Measured ambient air N2O levels in operating rooms ranged from 35.8 to 1502 mg/m3. (Editor’s Note: the US National Institute for Occupational Safety and Health (NIOSH) Recommended Exposure Limit is 45 mg/m3.) In only three ORs were N2O levels below the NIOSH recommendation, despite the presence of scavenging systems in 15 of the 26 ORs. The highest concentrations of N2O in ambient air were measured in ORs with < 10 air changes per hour. B12 and homocysteine levels in those exposed to 360 mg/m3 or less of N2O were not significantly different from the Control group. In those exposed to more than 360 mg/m3 of N2O, B12 levels were significantly decreased compared to the Control group (P=0.006) but still within the normal range. Levels of homocysteine were significantly increased (P=0.047) and slightly above the defined maximum normal value, at 12.9 µmol/L (maximum normal defined as 12.8 µmol/L). There was a small but statistically significant negative correlation between the level of N2O exposure and B12 level (r = –0.22; P=0.038), and a moderate, statistically significant positive correlation between the level of N2O exposure and homocysteine level (r = 0.51; P=0.001).

Conclusion            Repeated exposure to very high levels of N2O was associated with inhibition of vitamin B12 metabolism and elevated homocysteine levels. Maintaining ambient N2O levels below 360 mg/m3 prevented those undesirable biochemical alterations in healthy women.



Operating rooms in the USA are required to scavenge waste anesthetic gases, including nitrous oxide. In ORs where I’ve worked, periodic monitoring that I’ve been aware of has always measured ambient nitrous oxide concentrations at levels below the stringent NIOSH standards.

Studies performed before waste gas scavenging showed that female dental assistants exposed to nitrous oxide on a regular basis had lower fertility rates than the general population. Studies performed after waste gas scavenging was instituted failed to show any significant changes in fertility rates or birth defects. As a result of scavenging systems, the risks of occupational exposure to nitrous oxide in the USA have been a non-issue for decades.

This study is important in a couple of ways. First, it looked at the biochemical effects of nitrous oxide exposure, specifically the relationship between nitrous oxide exposure and homocysteine levels. While there is a clear association between elevated homocysteine levels and cardiovascular pathology, homocysteine levels are a proxy for the cause of the increased cardiovascular risk; as far as I know, homocysteine itself does not cause cardiovascular disease. Nevertheless, an increase in homocysteine levels as the magnitude of nitrous oxide exposure increases (r = 0.51 in this study) is an important finding. It is a reflection of the reduced availability of functional vitamin B12, and the negative effects of low vitamin B12 levels are well known. Second, this study showed that only exposure to very high levels of nitrous oxide resulted in a decreased B12 level or an abnormal increase in homocysteine. The average concentration of nitrous oxide measured in ORs was about 10 times the NIOSH standard. The average nitrous oxide concentration in “high exposure” ORs was over 16 times the NIOSH standard. This high level of exposure produced a statistically significant (but still within normal limits) decrease in B12 level and increase in homocysteine level. The fact that chronic exposure to such high levels of nitrous oxide was needed to produce meaningful changes in B12 and homocysteine levels is reassuring.

It is important to note that this study examined the effects of nitrous oxide exposure on healthcare workers, not on patients receiving nitrous oxide during a general anesthetic. We can’t reach any conclusions about the effects of nitrous oxide on patients based upon this study. For example, even if nitrous oxide administration resulted in an acute increase in a patient’s homocysteine levels in the immediate postoperative period, this would not cause an acute increase in that patient’s risk of cardiovascular pathology. (There may, however, be some other reasons why it would be advisable not to use nitrous oxide in a patient with cardiovascular disease.)

How does this information affect our clinical practice? Well, if we are using a properly functioning scavenging system, it shouldn’t. It should, however, remind us of the importance of making sure our scavenging system is working properly. For example, some scavenging systems use a reservoir bag to collect waste anesthetic gas and an adjustable suction to empty the reservoir bag. If the suction is set too low and the reservoir bag becomes full, a scavenger pop-off valve will dump waste anesthetic gas into the OR. Once waste gas is arriving faster than the suction is removing it, the scavenger isn’t doing its job and the OR is being contaminated. Checking the scavenging system is a part of the complete morning anesthesia machine check that is often overlooked. I find that large changes in the total fresh gas flow require that I also adjust the scavenger suction. It is my hope that the information in this study will remind us to check the scavenging system during our morning machine check and to make sure the scavenger system is functioning properly at all times during an anesthetic.


Michael Fiedler, PhD, CRNA



© Copyright 2007 Anesthesia Abstracts · Volume 1 Number 8, December 31, 2007

Borgman MA, Spinella PC, Perkins JG et al.


The ratio of blood products transfused affects mortality in patients receiving massive transfusions at a combat support hospital

J Trauma 2007;63:805-813

Borgman MA, Spinella PC, Perkins JG et al.



Purpose            To determine whether the ratio of plasma to packed red blood cells (RBC) transfused in a combat support hospital improves survival by reducing the number of deaths from hemorrhage.

Background            The current practice of component therapy in lieu of whole blood replacement is based primarily on experiences with elective surgery. The goals of component therapy are to optimize blood resource utilization and minimize the transmission of infectious diseases; however, this rationale may not be appropriate in severely hemorrhaging patients with concomitant hypothermia, metabolic acidosis, and coagulopathy. There is emerging anecdotal evidence that using equal parts of RBC, fresh frozen plasma, and platelets may be more efficacious.

Methodology            The U.S. Military’s Joint Theatre Trauma Registry was used to identify trauma patients admitted to a combat support hospital in Iraq between November 2003 and September 2005. The database included U.S. military patients, military coalition soldiers, and civilian patients. Records were analyzed for all patients who received 10 or more units of RBC, or whole blood equivalents, within the first 24 hours following admission. Additional data collected included injury severity scores, time and cause of death, laboratory values, vital signs, and volume of crystalloid and blood products infused (fresh whole blood, RBC, fresh frozen plasma, apheresis platelets, cryoprecipitate, and recombinant Factor VIIa). Statistical bootstrapping techniques were used to divide patients into groups based on ratios of blood products received. In circumstances where whole blood was administered, a “unit” was considered equivalent to 1 unit each of RBC, plasma, and platelets. Primary outcome variables were hospital discharge and overall mortality.

Result            Of 5,293 patients admitted to the combat support hospital, 246 (4.6%) received massive transfusions with an overall mortality rate of 28%. Two hundred thirty-two of the 246 (94%) patients had sustained penetrating injuries. The overall median ratio of plasma to RBC in surviving patients was 1:1.6 compared to 1:2.3 in non-survivors (p < 0.001). Overall mortality rate was 65% in patients receiving a low ratio of plasma to blood (1 unit of plasma per 8 units of RBC). The mortality rate decreased to 34% among patients receiving plasma to RBC at a ratio of 1:2.5 and was further reduced to 19% among patients receiving 1 unit of plasma for every 1.4 units of RBC infused. The differences in mortality rates among groups were statistically significant (p < 0.001) and remained significant even when other potential factors, such as types of trauma, were included in the analysis. Comparing fluid and blood products administered to each group, patients in all three groups received similar quantities of RBCs (16 units); however, the high plasma to RBC ratio group received less crystalloid per hour, more plasma and cryoprecipitate than either the low or medium ratio groups, and was more likely to receive recombinant Factor VIIa (rFVIIa). Thirty-eight percent of patients in the high plasma to RBC ratio group received rFVIIa compared with 26% of medium ratio patients and 16% of low ratio patients.

The predominant cause of death was hemorrhage in all three groups; however, the percentage decreased steadily from 92% in the low plasma to RBC group to 78% in the medium ratio group and 37% in the high plasma to RBC group. Central nervous system injury (23%) and sepsis (19%) were the other leading causes of death in the high plasma to RBC group. Additionally, the median time to death increased from 2 hours in the low ratio group to 38 hours in the high ratio group.

Conclusion            Increasing the ratio of plasma to RBC infused was associated with more than a 50% reduction in patient mortality. Study results are consistent with the philosophy of “hemostatic resuscitation” which emphasizes the prevention of trauma-induced coagulopathy through the administration of platelets and cryoprecipitate prior to the onset of hypothermia and acidosis. One potential approach to the early administration of plasma in civilian settings is the use of thawed plasma which has been approved by the American Association of Blood Banks and can be kept ready for use for up to 5 days when refrigerated at 4°C.


Conventional transfusion practices are derived from the American Society of Anesthesiologists’ Practice Guidelines for Perioperative Blood Transfusion and Adjunctive Therapies. In most circumstances, the decision to administer either plasma or platelets is based on either laboratory evidence of coagulopathy or observable microvascular bleeding.1 Military trauma experts agree that the recommended approach is reasonable when trauma has induced a state of hypercoagulability.2 The context of this research report is quite different. The patients in this investigation had lost 10 or more units of blood within a 24-hour period and were at risk of developing what Borgman and colleagues call the “lethal triad” or “bloody vicious cycle” of hypothermia, acidosis, and coagulopathy. While traditional resuscitation strategies have focused on hypothermia prevention, reversal of metabolic acidosis, and surgical intervention to control blood loss, the current investigation tests the hypothesis that mortality can be reduced by anticipatory, direct treatment of coagulopathy. The success of using greater quantities of plasma as part of the initial resuscitation was dramatic; however, as with every retrospective study, the lack of random assignment to treatment groups stands out as a threat to the study’s internal validity. It cannot be determined whether unmeasured, or perhaps even subconscious, factors influenced fluid management strategies.

The report by Borgman and colleagues is also interesting for another reason: their classification of causes of death beyond the first few hours in a combat support hospital. The researchers did not address the controversy regarding use of rFVIIa as part of their resuscitation strategy, but a simple Internet search combining the terms “Factor VII” and “Iraq” produces a mix of praise for the military’s successes in saving severely wounded soldiers and critical essays suggesting that injured soldiers are being used as unconsented participants in dangerous clinical investigations. The most common criticism is that the non-FDA-approved use of rFVIIa for treatment of acute hemorrhage in patients without hemophilia is both expensive ($6,000 per dose) and can cause lethal pulmonary and cerebral emboli.3-8

Whether military or civilian, the stark reality of major traumatic injuries is that survival beyond initial resuscitation depends on avoidance of more insidious insults such as head injury, multi-organ failure, embolus, or sepsis. Unintentionally, this report provides some context for interpretation of media reports about rFVIIa. Using data presented by the authors, it is possible to compute a series of conditional probabilities for survival. For example, 19 of 31 (61%) patients treated with the low ratio of plasma to RBC died due to hemorrhage. In the high plasma to RBC ratio group, 11 of 162 (7%) patients died from hemorrhage. Within the high ratio group, an additional 19 (12%) patients died from causes other than hemorrhage. Presumably, it is within this subset of non-hemorrhagic deaths that complications from rFVIIa therapy would be recorded. Contrasting the mortality rate from hemorrhage after resuscitation with a low ratio of plasma to RBC against the mortality rate from the longer-term sequelae following aggressive transfusion practices, the risk of dying was roughly five times higher after the more conventional treatment (61% versus 12%). Undeniably, this rudimentary analysis has many limitations. Most significantly, the authors did not provide details about which patients in each group received rFVIIa, other than to note that there was no statistically significant difference in the percentages of survivors and nonsurvivors receiving rFVIIa. The sole purpose of this probability exercise is to provide a cursory examination of claims published by the media rather than health care investigators. Unquestionably, it is important to thoroughly evaluate the risks and benefits of rFVIIa use in acute trauma resuscitation; however, the data provided in this report do not support media claims of impropriety.
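The conditional probabilities quoted in this probability exercise can be checked with simple arithmetic. The sketch below uses only the counts stated in the commentary (19 of 31 hemorrhage deaths in the low-ratio group; 11 hemorrhage deaths and 19 non-hemorrhage deaths among 162 high-ratio patients); it is a back-of-envelope check, not a reanalysis of the study data.

```python
# Back-of-envelope check of the conditional probabilities quoted above.
# Counts come from the commentary; this is arithmetic, not new analysis.

low_ratio_n = 31            # patients resuscitated with the low plasma:RBC ratio
low_hemorrhage_deaths = 19  # deaths from hemorrhage in that group

high_ratio_n = 162          # patients resuscitated with the high plasma:RBC ratio
high_hemorrhage_deaths = 11 # deaths from hemorrhage
high_other_deaths = 19      # deaths from causes other than hemorrhage

p_low_hem = low_hemorrhage_deaths / low_ratio_n      # ~0.61
p_high_hem = high_hemorrhage_deaths / high_ratio_n   # ~0.07
p_high_other = high_other_deaths / high_ratio_n      # ~0.12

# The "roughly five times higher" contrast: hemorrhage mortality under
# conventional (low-ratio) treatment versus non-hemorrhagic mortality
# after aggressive (high-ratio) transfusion.
contrast = p_low_hem / p_high_other                  # ~5.2

print(f"Low-ratio hemorrhage mortality:      {p_low_hem:.0%}")
print(f"High-ratio hemorrhage mortality:     {p_high_hem:.0%}")
print(f"High-ratio non-hemorrhage mortality: {p_high_other:.0%}")
print(f"Risk contrast: {contrast:.1f}x")
```

Note that this is a ratio of two risks (proportions), not formally an odds ratio; the conclusion is unchanged either way.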

In conclusion, the experiences of military health care providers have provided new evidence to assist in our examination of recommendations for resuscitation of severely hemorrhaging patients. Although some aspects of an aggressive approach to resuscitation may be controversial, it seems reasonable to agree with the investigators that a more liberal use of plasma and clotting factors improves survival in a select group of severely injured trauma patients.


Alfred E. Lupien, Ph.D., CRNA


1. American Society of Anesthesiologists. Practice guidelines for perioperative blood transfusion and adjuvant therapies. Anesthesiology. 2006;105:198-208.

2. Holcomb JB, Jenkins D, Rhee P, et al. Damage control resuscitation: directly addressing the early coagulopathy of trauma. J Trauma. 2007;62:307-10.

3. Berenson A. Army’s aggressive surgeon is too aggressive for some. New York Times. 2007;November 6. Available from Accessed November 7, 2007.

4. Little R. Part 1 of 3: dangerous remedy. Baltimore Sun. 2006;November 19. Available from,0,5568480.story. Accessed November 7, 2007.

5. Little R. Part 2 of 3: ‘don’t let me die’. Baltimore Sun. 2006;November 20. Available from /bal-te.factorvii20nov,0,1837664.story. Accessed November 7, 2007.

6. Little R. Part 3 of 3: dubious breakthrough. Baltimore Sun. 2006;November 21. Available from /bal-te.factorvii21nov,0,2296418.story. Accessed November 7, 2007.

7. Little R. Senators urge probe of drug. Baltimore Sun. 2006;November 30. Available from,0,2427490.story Accessed November 7, 2007.

8. Little R. Trauma care may be killing soldiers. (November 21, 2006). Available from Accessed November 7, 2007.



Geriatric Anesthesia

Ishiyama T, Iijima T, Sugawara T, Shibuya K, Sato H, Terada Y, Ichikawa M, Sessler DI, Matsukawa T


The use of patient-controlled epidural fentanyl in elderly patients

Anaesthesia 2007;62:1246-1250

Ishiyama T, Iijima T, Sugawara T, Shibuya K, Sato H, Terada Y, Ichikawa M, Sessler DI, Matsukawa T



Purpose            The purpose of this study was to determine the safety and efficacy of Patient Controlled Epidural Analgesia (PCEA) in the elderly.

Background            Postoperative analgesia can be more challenging to achieve in elderly patients than in young patients. Opioid requirements are thought to decrease with increasing age due to changes in opioid pharmacokinetics; elderly patients require less morphine, fentanyl, or alfentanil for postoperative analgesia. The elderly may also have different expectations for pain and express pain differently than younger patients. Conversely, some types of pain may be perceived more intensely by elderly than by younger patients. Healthcare providers sometimes undertreat pain in the elderly out of concern for opioid side effects.

PCEA is viewed as safe and effective for postoperative analgesia. Some have questioned whether, when offered PCEA, the elderly might undermedicate themselves in an attempt to avoid overmedication or out of fear of the device or technology. Previous research has shown that elderly patients did not use enough IV patient controlled analgesia to relieve their pain.

Methodology            This prospective study included Young (age 20 to 64 years) and Elderly (age 65 years or greater) patients scheduled for elective, open abdominal surgery. Exclusion criteria included daily opioid use, those unable to use the PCEA device, and known allergy to study drugs. Patients were taught how to use the PCEA device preoperatively.

All patients received 0.05 mg/kg midazolam and 0.01 mg/kg atropine IM preoperatively. General anesthesia was induced with 2 mg/kg propofol and 0.1 mg/kg vecuronium and maintained with 67% nitrous oxide and 1-2% sevoflurane. Before induction of general anesthesia an epidural catheter was placed at a level corresponding to the sensory level of the intended surgical incision. During surgery the epidural catheter was dosed with a continuous infusion of 5 mL/h of 1.5% mepivacaine. For postoperative analgesia, the epidural catheter was dosed with an 8 mL bolus of 0.05% ropivacaine with 4 µg/mL fentanyl. Thereafter, the ropivacaine / fentanyl solution ran at 4 mL/h. Patients could give themselves a 2 mL bolus as needed. The lockout interval was 10 minutes.

Result            Patients in the Young group averaged 53 years old. Patients in the Elderly group averaged 72 years old. (A mean difference of 19 years.) The elderly group was shorter and weighed less, but the groups were otherwise demographically comparable.

Fentanyl use (in µg/kg) in the initial 24 hours postoperatively was almost identical between groups. The Young group pushed the PCEA bolus button an average of 44 (sd 38) times during the first 24 hours compared to 32 (sd 35) times in the Elderly group (P = not significant). Median pain scores across the 24 hours were low in both groups. Pain scores at the 24 hour measurement were statistically, though probably not clinically, significantly lower in the Elderly group. Pain scores during coughing were significantly lower in the Elderly group at two of the three measurement times (P<0.05).

The incidence of itching was almost identical in the two groups. Nausea and drowsiness were very slightly, but not statistically significantly, greater in the Elderly group. No patient in either group experienced respiratory depression (respiratory rate less than 8 breaths/min) or hypotension (systolic pressure decreased by 30% or more).

Conclusion            Elderly patients self-administered amounts of epidural fentanyl and ropivacaine similar to those used by Young patients. This resulted in comparable pain scores at rest but less pain in the Elderly group during coughing. Elderly patients appeared to be just as willing to self-administer PCEA as were Young patients.



Just as infants and pregnant women are physiologically different than young adults, there are physiologic differences between elderly patients and young adults. We are only just beginning to take those differences into consideration when providing anesthesia care. I have a lot to learn in this area. That said, this study was not designed to investigate what it said it was investigating. Determining the safety and efficacy of Patient Controlled Epidural Analgesia (PCEA) in the elderly would take a series of much more comprehensive studies than this one. What this study did do was compare the efficacy of one particular PCEA recipe in older and somewhat younger surgical patients. Even at that, I really wish the “younger” group had been separated by more than an average of 19 years from the “Elderly” group, which averaged 72 years old. I was glad, though, that the investigators were at least thinking about the willingness of older patients to self medicate, rather than confining themselves to simply comparing the amount of opioid actually used by each group.

What we can learn from this study is that one particular recipe for PCEA seemed to work just as well in older patients as in somewhat younger patients. Both groups were apparently equally willing to self medicate. As it turned out, both groups were apparently also hesitant to push the PCEA button “too early.” Both groups allowed their pain scores to rise higher than expected before pushing the PCEA demand button. The fact that side effects were about the same in both groups is reassuring, but given the low incidence of morbidity and mortality expected with this particular PCEA regimen and the relatively small number of subjects in this study we should be careful not to read too much into that. Also, I would have liked to have seen a more sensitive indicator of respiratory depression than patients’ respiratory rates. While respiratory depression is uncommon in younger patients receiving epidural fentanyl at similar doses, pulse oximetry has been shown to be a more sensitive monitor of respiratory depression than respiratory rate in other studies.


Michael Fiedler, PhD, CRNA




Obstetric Anesthesia

Whitty R, Goldszmidt E, Parkes RK, Carvalho JCA


Determination of the ED95 for intrathecal plain bupivacaine combined with fentanyl in active labor

Int J Obstet Anesth 2007;16:341-345

Whitty R, Goldszmidt E, Parkes RK, Carvalho JCA



Purpose            The purpose of this study was to determine the ED95 (effective dose in 95% of the population) for bupivacaine injected into the subarachnoid space for labor analgesia.

Background            Combined spinal and epidural (CSE) analgesia effectively relieves labor pain and has some advantages over epidural analgesia alone, notably, spinal analgesia has a much faster onset. Commonly, bupivacaine is used for the spinal component of CSE analgesia in combination with an opioid, usually fentanyl. Current clinically used doses were derived from studies in which the minimum local anesthetic dose or the ED50 were determined. In those studies, the subarachnoid dose of bupivacaine ranged from 0.71 mg with 15 µg fentanyl to 2.37 mg with no fentanyl added. A statistically projected estimate of the ED95 for bupivacaine was 3.3 mg in another study. In clinical obstetric anesthesia practice, 2.5 mg bupivacaine is commonly used. The investigators hypothesized that 2.5 mg bupivacaine was greater than the ED95 and that a more appropriate dose might continue to produce the desired analgesia with fewer undesired effects.

Methodology            This single blind, dose finding study included 40 ASA I or II term parturients in active labor. To be eligible for the study, parturients carried a single fetus, were at a cervical dilation of ≥5 cm, reported pain ≥6 out of 10 (verbal pain scale), and were at least 18 years of age. The fetal heart rate (FHR) was monitored throughout the study. Subjects with an abnormal baseline fetal heart rate pattern were excluded. All subjects received 250 mL of lactated ringers IV before their CSE labor analgesia. The CSE was performed at either the L2-3 or L3-4 interspace with subjects in the sitting position. A needle through needle technique was used with a 17g Tuohy and a 26g Whitacre. The subarachnoid injectate included 0.25% plain bupivacaine and 15 µg of fentanyl. (Only the dose of bupivacaine was varied.) Normal saline was added to make the total volume of injectate 2 mL in all subjects. Subarachnoid bupivacaine and fentanyl were injected over 10 seconds. The first patient received 1.75 mg subarachnoid bupivacaine. Thereafter, the bupivacaine dose was changed in increments of 0.25 mg in search of the ED95 dose (in combination with 15 µg fentanyl). Next, an epidural catheter was inserted 3 to 5 cm into the epidural space. Thirty minutes after the subarachnoid injection an epidural infusion of 0.0625% bupivacaine with 2 µg/mL fentanyl was started with no bolus. Effective analgesia was defined as a verbal pain score of 1 out of 10 or less.

Post hoc, data from this study were pooled with data from two previous studies for further analysis.

Result            Only two different doses of bupivacaine were used during the study. The 1.75 mg bupivacaine dose was effectively an ED100 and a 1.5 mg bupivacaine dose was an ED85. The statistically estimated ED95 was 1.66 mg (95% CI 1.5 mg to 482.5 mg). The inclusion of only two bupivacaine doses limited the statistical analysis somewhat. No subjects experienced motor block, respiratory depression, nausea, or vomiting. Fifty-five percent reported itching. Fetal bradycardia occurred in 3 subjects; none resulted in a cesarean section.
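With only two doses observed (1.5 mg effective in roughly 85% of subjects, 1.75 mg in 100%), a simple linear interpolation between the two response rates lands close to the reported 1.66 mg point estimate. The sketch below is only an illustration of where a number like that can come from; it is not the authors' actual estimation method, which also produced the wide confidence interval reported.

```python
# Illustrative only: linear interpolation between the two observed
# dose/response points from the study. The authors' actual ED95 estimation
# was a formal statistical procedure; this just shows the point estimate
# is roughly recoverable by hand.

dose_lo, response_lo = 1.50, 0.85   # 1.5 mg effective in ~85% of subjects (ED85)
dose_hi, response_hi = 1.75, 1.00   # 1.75 mg effective in 100% of subjects

target = 0.95  # ED95: the dose effective in 95% of subjects

ed95 = dose_lo + (dose_hi - dose_lo) * (target - response_lo) / (response_hi - response_lo)
print(f"Interpolated ED95 = {ed95:.2f} mg")  # ~1.67 mg vs. the reported 1.66 mg
```

The closeness of this crude estimate to the published value mainly reflects how little dose-response information two points contain, which is also why the confidence interval around 1.66 mg was so wide.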

Pooling the data from this study with data from two previous studies, the estimated ED95 of subarachnoid bupivacaine for labor analgesia was 1.95 mg (95% CI 1.45 mg to 2.95 mg).

Conclusion            The ED95 for subarachnoid bupivacaine with 15 µg fentanyl is 1.66 mg for labor analgesia. This dose produces a verbal pain score of 0 or 1 out of 10 within 10 minutes of administration in primiparous women with ≥5 cm cervical dilation.



My impression is that 2.5 mg bupivacaine is most commonly used for the subarachnoid injection when CSE analgesia is used in OB. I’m not sure where this dose came from so I’m glad to see a study that gives us some idea of where the “sweet spot” might be between too much and too little bupivacaine.

To decide what an ED95 is, one must first define “effective.” In this case, the investigators defined effective labor analgesia as a verbal pain scale of no more than 1 out of 10. That strikes me as a pretty tough standard. I don’t know that I’ve ever had a woman in labor rate her pain as a 1. When I get laboring women to the point that they rate their pain as about a 3 they are usually pretty happy. Lower than that and they usually have excessive motor block. That said, if these clinicians can get to a pain score of 1 they may have something to teach the rest of us. And, they apparently did so without undue motor block, so all the better.

It is encouraging that they found the optimal subarachnoid dose of bupivacaine to be between 1.5 and 1.75 mg, rather than the commonly used 2.5 mg. While 2.5 mg usually doesn’t result in too much motor block, I would expect the chance of motor block to be even smaller with the lower dose. The fact that their confidence interval was so wide indicates that their study was either underpowered or methodologically flawed. Either way, the clinical results are convincing enough that we can probably overlook that flaw.

The little “mini-meta-analysis” they did after the fact (post hoc) with data from two other studies is an interesting touch. On the one hand, it is factual information that further supports their findings. On the other hand, from a research perspective it is subpar. It is disappointing to see slack research procedures that could be avoided with just a little more expertise, planning, or effort. In this case, had the methodology been slightly improved and/or the number of subjects increased just a little bit, I expect the investigators would have been comparing their study to the other two in the discussion section rather than including data from the other two studies in their results section. This criticism notwithstanding, the information presented in this study is sufficiently convincing and helpful that I’m willing to consider it in my clinical practice.


Michael Fiedler, PhD, CRNA



© Copyright 2007 Anesthesia Abstracts · Volume 1 Number 8, December 31, 2007


Thomas S, Veevi S


Dexamethasone reduces the severity of postoperative sore throat

Can J Anesth 2007;54:897-901

Thomas S, Veevi S



Purpose            The purpose of this study was to determine whether or not preoperative IV dexamethasone would reduce the incidence or severity of postoperative sore throat in patients who had received general endotracheal anesthesia.

Background            Sore throat is a common and usually self-limiting complaint following general anesthesia. The incidence has been reported to be as high as 90%. Sore throat may be caused by trauma to pharyngeal and/or laryngeal mucosa due to laryngoscopy, placement of a gastric tube, or oral suctioning. Endotracheal tube (ETT) cuff pressure and movement of the ETT that causes erosion or edema may also result in a sore throat. Techniques to reduce the risk of postoperative sore throat include minimizing ETT cuff pressure, using smaller ETTs, topical lidocaine, and topical or systemic steroids. Studies in which hydrocortisone ointment was applied to the ETT before insertion have not shown any significant reduction in postoperative sore throat. Preoperative dexamethasone has been reported to reduce pain following oral surgery. While long term steroid therapy may increase the risk of infection, poor wound healing, and avascular necrosis of joints, a single dose of dexamethasone is considered safe in the perioperative period.

Methodology            This prospective, randomized, double-blind, placebo-controlled study included 120 patients requiring general endotracheal anesthesia for abdominal or lower extremity surgery. The surgeries were expected to last between one and three hours. Participants were ASA class I or II aged 20 years to 60 years and weighing between 40 kg and 75 kg. Exclusion criteria included recent respiratory infection, risk factors for aspiration of gastric contents, obesity, diabetes mellitus, pregnancy, recent treatment with analgesics or corticosteroids, and any contraindication to corticosteroids.

Subjects were divided into two groups. The Control group received 2 mL normal saline IV. The Dexamethasone group received 8 mg dexamethasone IV. All patients were premedicated with alprazolam 0.5 mg PO. A lumbar epidural catheter was placed before induction of general anesthesia. General anesthesia was induced with propofol 2 mg/kg, morphine 0.1 mg/kg, and vecuronium 0.1 mg/kg. A low-pressure-cuff ETT was placed by direct laryngoscopy. If more than two attempts were needed to place the ETT, the patient was removed from the study. All laryngoscopies were performed by the same anesthesiologist. Males received an 8 mm or 8.5 mm ETT. Females received a 7 mm or 7.5 mm ETT. ETT cuff pressure was measured with a pressure gauge every 30 minutes and adjusted as necessary. No oral or nasal airways were placed in any patient. General anesthesia was maintained with nitrous oxide and isoflurane. At the end of the case, all subjects received ondansetron 6 mg and residual neuromuscular block was antagonized with neostigmine and atropine. Oral suctioning was performed under direct vision. If blood was visible in the suction aspirate or on the ETT after extubation, the patient was removed from the study. The epidural catheter, which was dosed with bupivacaine during the surgery, was used for postoperative analgesia. No other analgesic or sedative drugs were administered during the first 24 hours postoperatively.

Result            Five patients from each group were excluded for requiring either more than two attempts to intubate or supplemental postoperative analgesia. Thus, data from 55 patients in each group were analyzed. The incidence of postoperative sore throat was lower in the Dexamethasone group. Twenty percent of Dexamethasone patients complained of a sore throat postoperatively compared to 56% of Control patients (P<0.01). The severity of sore throat pain was lower in the Dexamethasone group at all time points (1 hour, 3 hours, 6 hours, 12 hours, and 24 hours postoperatively). Over time, Visual Analogue pain scores ranged from 2.0 to 3.4 in the control group compared to 0.8 to 1.3 in the Dexamethasone group.

Conclusion            Dexamethasone 8 mg IV reduced the incidence and severity of sore throat following general anesthesia, laryngoscopy, and endotracheal intubation.



While simple, this is one of the better studies I’ve read recently from a methodological perspective. The investigators did a really good job of eliminating from their study factors that could impact the incidence and severity of postoperative sore throat. Doing so took a lot of work and they deserve credit. For clinicians reading this study, their hard work means you and I can have a higher level of confidence that the reduction in the incidence and severity of sore throat they showed was really caused by the dexamethasone and not by some other factor.

Unfortunately, like many other investigators, they made a common mistake in the analysis of the Visual Analogue Scale data they used to assess sore throat pain: VAS data were analyzed as if they were parametric data, when they are actually ordinal (nonparametric) data. In this case, however, that mistake probably did not change the results. They went on to compound this error by calculating the percent difference in VAS scores between the Control and Dexamethasone groups. If you read the original article I’d recommend ignoring these percent differences. They are unlikely to be a valid comparison. (It is rather like taking the order in which runners finished a race, 1st, 2nd, 3rd, and so on, computing percent differences from those ranks, and concluding that the third-place finisher took 300% as long as the winner.)
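The race analogy can be made concrete with a few lines of Python. The finishing times below are invented, but they show why percent differences computed from ranks say nothing about percent differences in the underlying quantity; the same caution applies to any ordinal scale, such as VAS scores.

```python
# Hypothetical finishing times (minutes) for three runners.
times = [10.0, 10.2, 10.5]
ranks = [1, 2, 3]

# A percent "difference" computed from ranks makes third place look like
# 300% of first place...
rank_ratio = ranks[2] / ranks[0]   # 3.0, i.e., "300%"

# ...but the actual times differ by only 5%.
time_ratio = times[2] / times[0]   # 1.05

print(rank_ratio, time_ratio)
```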

As postoperative pain relief gets better, and as patients demand a higher level of service, a postoperative sore throat becomes a bigger issue. This study clearly shows that 8 mg dexamethasone reduces the incidence and severity of postoperative sore throat. Good evidence exists that a single dose of dexamethasone given in the perioperative period is safe in a wide range of patients. (Not all patients.) So, should we be giving dexamethasone to everyone to prevent sore throat? I’m not suggesting that we do. Dexamethasone has a place in anesthesia, in selected patients, to help prevent postoperative nausea and vomiting (PONV) and postoperative pain. When I decide whether or not to give a dose of dexamethasone I’ll now consider the benefit it may offer in preventing sore throat as well as PONV and pain. Difficult airway patients who have undergone multiple and/or traumatic intubations may also benefit from a dose of dexamethasone.


Michael Fiedler, PhD, CRNA



Additional information on dexamethasone is available in Anesthesia Abstracts Volume 1 Number 5 (August 31, 2007). The original article is titled, Blood glucose concentration profile after 10 mg dexamethasone in non-diabetic and type 2 diabetic patients undergoing abdominal surgery (Br J Anaesth 2006;97:164-170).


Weinberg G, Ripper R, Murphy P, Edelman L, Hoffman W, Strichartz G, Feinstein D


Lipid infusion accelerates removal of bupivacaine and recovery from bupivacaine toxicity in the isolated rat heart

Reg Anesth Pain Med 2006;31:296-303

Weinberg G, Ripper R, Murphy  P, Edelman L, Hoffman W, Strichartz G, Feinstein D



Purpose            The objective of the study was to test whether treatment with intravenous lipid infusion decreased bupivacaine content in cardiac tissue.  The research tested the hypothesis that a ‘lipid sink’ effect actually played a major role in reversing bupivacaine toxicity.

Background            It is well established that local anesthetics such as bupivacaine, especially when inadvertently injected into the circulation, can produce cardiac toxicity that manifests as asystole. This toxicity is a source of perioperative morbidity and occurs even when local anesthetics are administered in the most careful and technically correct manner. If not promptly and effectively treated, the cardiac toxic effects of local anesthetics can be fatal. Previous research demonstrated that lipid infusions were efficacious in treating bupivacaine toxicity. The same lipid infusion treatment produced hemodynamic recovery in laboratory animal studies (dogs) after the animals received a lethal dose of bupivacaine. However, the mechanism of lipid reversal is not clearly defined. One theory holds that the lipid infusion provides a lipid plasma phase, also known as a “lipid sink,” into which the bupivacaine partitions. This theorized mechanism is simple and easily understood, and experiments do demonstrate that the oil emulsion extracts bupivacaine from plasma. The theory seems questionable, though, given how rapidly lipid reversal occurs in animal studies; the process would require bulk extraction of a hydrophobic drug from tissues in which it has achieved very high concentrations, and it does not appear possible to extract a hydrophobic drug that quickly. A second theory proposes a metabolic benefit of lipid therapy, based on the observation that bupivacaine inhibits the transport of fatty acids into cardiac mitochondria, where they normally provide the bulk of myocardial energy needs. High lipid concentrations in plasma might overwhelm this inhibition.

Methodology            This was an animal study, approved by the Institutional Animal Care Committee of the VA Chicago Healthcare System. Rats weighing between 450 and 550 g were used; the experiments were performed in the laboratory of the Jesse Brown VA Medical Center in Chicago, Illinois.

After the hearts of the rats were excised (the rats were anesthetized with pentobarbital and anticoagulated with heparin), they were perfused with a Krebs-Ringer bicarbonate buffer, and perfusion was maintained at a constant flow rate via roller pump. The perfusate was equilibrated with a membrane oxygenator and warmed by countercurrent flow from a water bath. Beating spontaneously, the hearts were suspended inside a heated glass cylinder. The left ventricle was manipulated to achieve a constant left ventricular end diastolic pressure, which was transduced and analyzed continuously. Once heart rates and left ventricular function had been stable for a minimum of 20 minutes, bupivacaine was infused for 30 seconds to achieve a final toxic concentration. When the bupivacaine infusion was stopped, a 20% lipid emulsion was injected to achieve a final triglyceride concentration of 1% (approximating a normal physiologic triglyceride concentration in the rat). Three different protocols were used to test the efficacy of lipid infusions in reducing cardiac toxicity; the timing of the lipid infusion varied according to the following 3 protocols:

Protocol A.  This protocol tested whether or not the lipid infusion reversed bupivacaine toxicity in the isolated heart. Once the bupivacaine infusion was stopped, the hearts were perfused with either the Krebs-Ringer bicarbonate (KRB) buffer (control) or a 1% lipid infusion (experimental). The endpoints analyzed were the time from the end of the bupivacaine infusion to the return of the first heartbeat and the time to recovery of contractile function to 90% of the baseline rate-pressure product.

Protocol B.  This protocol tested the cardiac tissue content of bupivacaine (after the toxic dose was infused) with early lipid infusion. Radiolabeled bupivacaine was infused to achieve toxic concentrations within the myocardium. Bites of the left ventricle were taken with forceps (analogous to biopsies) just prior to the infusion (baseline), at the end of the bupivacaine infusion, and every 30 seconds to the 2 minute mark, for a total of six bites. Control hearts received KRB alone; in the treatment group, the 1% lipid emulsion was infused for 2 minutes beginning when the bupivacaine infusion ceased.

Protocol C.  This protocol tested bupivacaine washout with delayed lipid infusion. Radiolabeled bupivacaine was infused for 30 seconds to achieve a final toxic concentration in the hearts. In the lipid infusion (experimental) group, lipid emulsion was injected beginning 75 seconds after the bupivacaine infusion was stopped. Samples of heart tissue were tested for washout. The 75 second time point tested whether lipid treatment could produce an efflux of bupivacaine from the heart after an interval in which cardiac bupivacaine content had already declined significantly.

Result            All hearts became asystolic during the bupivacaine infusions, simulating cardiac toxicity, and all hearts recovered to normal function over a period of minutes. In protocol A, the time to first heartbeat was approximately 30% shorter in the lipid treated hearts than in the control hearts, a statistically significant difference. Return of the rate-pressure product to 90% of baseline also took significantly less time in the lipid treated hearts than in the control group. In protocol B, biopsies taken at 2 minutes showed considerably lower radioactivity in all hearts (both groups), with no significant difference between the two groups. The time course of tissue bupivacaine content was plotted for both the control and the lipid treated hearts; time constants fitted to these curves were significantly shorter in the lipid treated hearts, indicating that local anesthetic content declined faster with lipid treatment. Protocol C tested whether a delayed lipid infusion could increase washout of residual bupivacaine in the isolated heart. At the 1.5 minute point, bupivacaine efflux, or washout, in the lipid treatment group exceeded the value predicted by the regression curve to a statistically greater degree than in the controls.

Conclusion            This study demonstrated that lipid treatment expedited recovery from bupivacaine-induced asystole in the rat heart, reduced bupivacaine myocardial tissue content, and increased cardiac bupivacaine washout, or efflux. These three findings support the existence of a mechanism by which lipid infusion reduces cardiac toxicity from bupivacaine. Limitations of the study, however, leave room to question whether the lipid sink is the only mechanism at work. For example, lipid infusion might accelerate bupivacaine washout by increasing metabolic demand and thereby improving coronary blood flow. This study does not address whether such indirect metabolic effects are key. Could lipids work by removing bupivacaine (an indirect effect), or by improving mitochondrial metabolism (a direct effect) and thereby the bioenergetic state of the heart muscle itself? Additionally, this study fails to accurately mimic the body’s pharmacokinetics during clinical bupivacaine toxicity. Once bupivacaine had reached a toxic level in the rat, the isolated heart was perfused with a buffer solution with a normal oxygen content and pH, and with no bupivacaine. In humans, residual plasma bupivacaine levels, coupled with tissue hypoxia and acidosis, most likely contribute to the clinical presentation of resuscitation-resistant cardiac toxicity.



We have all witnessed, been a part of, or read about toxicity resulting from local anesthetics, despite the most skilled hands. It is a dreadfully frightening experience, first for the patient and second for the provider. Vigilance, monitoring, and preparedness should not be any different, or taken lightly, simply because a regional anesthetic is being administered rather than a general anesthetic. Lipid infusion offers a significant and successful rescue treatment for this life-threatening complication of local anesthetic toxicity. Providers who are responsible for patients receiving local anesthetics must have a rescue plan and be prepared. Lipid infusion, as part of an algorithm for treating local anesthetic toxicity, appears to demonstrate great potential. Hopefully, further research will establish its mechanism of action and solidify its role.


Mary A. Golinski, PhD, CRNA




Turner S, Mathews L, Pahdharipande P, Thompson R


Dolasetron-induced torsades de pointes

J Clin Anesth 2007;19:622-625

Turner S, Mathews L, Pahdharipande P, Thompson R




Purpose            The purpose of this case report was to describe an occurrence of torsades de pointes following administration of dolasetron (Anzemet).

Background            The QT interval is the time between the beginning of the Q wave of the QRS complex and the end of the T wave. The T wave corresponds to repolarization of cardiac muscle. When repolarization is prolonged, the next QRS complex can occur before repolarization has finished, the so-called “R on T” phenomenon, which can result in ventricular fibrillation. An abnormally long corrected QT interval (QTc) is a major risk factor for development of torsades de pointes. A QTc longer than 0.5 seconds and/or an increase in QTc of >0.06 seconds is associated with arrhythmias. QT prolongation can be caused by inhibition of cardiac potassium ion channels, resulting in prolonged repolarization. Many commonly used drugs can prolong cardiac repolarization to some degree. Risk factors for QT prolongation include certain cardiac diseases (such as congenital long QT syndrome), sympathomimetics, potent inhalation agents, diuretics, hypomagnesemia, hypokalemia, hypothermia, slow heart rates, and female gender. Droperidol and serotonin antagonists inhibit the potassium channel. Serotonin antagonists also inhibit the sodium channel. This may widen the QRS complex, further increasing the risk of an “R on T” arrhythmia. Dolasetron produces dose dependent QT prolongation which is most intense within the first 4 hours after administration. Lesser QT prolongation can persist for up to 24 hours after dolasetron administration. While dolasetron has been shown to cause “clinically insignificant” QT prolongation by itself, there are case reports of arrhythmias and hemodynamic instability associated with dolasetron. Arrhythmias have been associated with other serotonin antagonist antiemetics as well.

Methodology            A 52 year old female underwent elective excision of a meningioma. She took an angiotensin II receptor blocker (losartan) and a combination thiazide and potassium sparing diuretic (triamterene) for high blood pressure. She also took methimazole for Graves’ disease. Otherwise, her exercise tolerance was good, she had no other signs of cardiac disease, and she had no complications of her hyperthyroidism. Her preoperative T4 level was 2.6 ng/dL (normal 0.8-2.7). CBC and electrolytes were normal. Her preoperative 12-lead ECG showed normal sinus rhythm. Her corrected QT interval (QTc) was 0.40 seconds (normal ≤0.44 seconds). Prior to induction of general anesthesia she received prophylactic β-blockade with metoprolol to prevent side effects from her hyperthyroidism. Anesthesia was induced with propofol, fentanyl, and vecuronium. Anesthesia was maintained with isoflurane and a sufentanil infusion. A radial arterial line was placed for BP monitoring. Mannitol was administered intraoperatively to reduce cerebral edema.

During the case, the patient experienced hypotension unresponsive to ephedrine and phenylephrine. BP was maintained with a vasopressin infusion. (Editor’s note: the dose of vasopressin was described only as “low.”) ECG monitoring showed a normal sinus rhythm continuously during hypotension. Intraoperative potassium ranged from 3.3 to 3.8 mEq/L. Near the end of the surgical procedure the patient was given 12.5 mg dolasetron. Emergence and extubation were uneventful.

Result            Meningioma patients were normally admitted to the ICU postoperatively. During transfer to the ICU, this patient was hemodynamically stable, awake, and talking. About one hour after dolasetron administration, and five minutes after arrival in the ICU, she arrested. The ECG monitor showed ventricular fibrillation (V-fib). She was defibrillated and converted to a supraventricular tachycardia with hemodynamic instability. She regained consciousness for about two minutes before reverting again to V-fib. She was defibrillated again, this time converting to atrial fibrillation with a ventricular response of 130 to 150 beats per minute; again, she was hemodynamically unstable despite the administration of phenylephrine and vasopressin. Amiodarone 300 mg, magnesium, and calcium were administered without notable improvement in hemodynamics. At this point, the patient was given 16 mg etomidate and 100 mg succinylcholine and reintubated. Synchronized cardioversion was then performed with a return to normal sinus rhythm.

Review of ECG monitoring strips after the event revealed an episode of torsades de pointes immediately before the first episode of V-fib. While the quality of the monitoring strips did not allow an exact measurement of the QT interval, the QTc ranged from approximately 0.46 to 0.55 seconds. A 12-lead ECG two hours after arrest showed a QTc of 0.493 seconds, an increase of 0.093 seconds over her baseline 12-lead ECG. Six months later the patient had recovered and had had no further cardiac complications.

Conclusion            When multiple risk factors for QT interval prolongation are present, dolasetron should be administered cautiously. A baseline 12-lead ECG may be indicated in such circumstances.



Case reports are generally agreed to be a rather low level of evidence upon which to base our practice, falling just above expert opinion. (Case reports are level IV, with I being highest and V being lowest.) Nevertheless, this case report has something worthwhile to teach us. The biggest problem it suffers from is that the title, and some of the text, gives the impression that dolasetron caused torsades de pointes. An isolated cause and effect is unlikely. In my mind, that is not what this case report has to teach us.

The corrected QT interval (QTc) can be a bit confusing. Basically, the “normal” QT interval changes as the heart rate changes. Rather than listing a normal QT interval for a number of different heart rates, a single “corrected” QT interval is listed as normal. The correction is made by dividing the actual, measured QT interval by the square root of the R to R interval (Bazett’s formula). Modern 12-lead ECG machines calculate the QTc automatically and display it at the top of the printout. The QTc is important because a long QT interval is associated with a risk of cardiac arrhythmias. A large number of factors are associated with lengthening of the QT interval. Most of them do not normally lengthen the QT interval to a clinically significant degree by themselves.
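Bazett’s correction is easy to compute by hand or in code. Here is a minimal sketch in Python; the function name and the sample intervals are mine, chosen only for illustration.

```python
from math import sqrt

def bazett_qtc(qt_sec: float, rr_sec: float) -> float:
    """Corrected QT interval by Bazett's formula: QTc = QT / sqrt(RR).

    qt_sec: measured QT interval in seconds
    rr_sec: R-to-R interval in seconds (60 / heart rate in bpm)
    """
    return qt_sec / sqrt(rr_sec)

# At a heart rate of 60 bpm the R-to-R interval is 1.0 s, so QTc equals QT.
print(round(bazett_qtc(0.40, 1.0), 3))  # → 0.4
# The same measured 0.40 s QT at 100 bpm (RR = 0.6 s) corrects to a value
# above the 0.44 s upper limit of normal.
print(round(bazett_qtc(0.40, 0.6), 3))  # → 0.516
```

The second example shows why the raw QT interval alone is not enough: the identical measured interval can be normal at one heart rate and prolonged at another.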

While this patient had a normal QTc preoperatively, she had a number of risk factors for QT lengthening. Some of those risk factors were introduced after her normal baseline 12-lead ECG. One of those factors was the administration of Anzemet. Despite the title of the article, we don’t know what the QTc was immediately before the Anzemet was given, so we don’t know that Anzemet was “the straw that broke the camel’s back.” We only know that anesthesia added a risk factor by giving Anzemet, and that the outcome was torsades de pointes and V-fib after Anzemet was given.

So here is the dilemma. Twelve-lead ECGs are non-invasive, fast, easy, and cheap. We can get them in patients with risk factors for prolonged QTc and know the QTc interval. Then what? Do we withhold serotonin antagonist antiemetics if we don’t have a 12-lead? Do we withhold serotonin antagonist antiemetics when the QTc is “too long” or has increased from the preoperative value? How long a QTc is too long for dolasetron administration? Droperidol has also been suggested to increase the risk of torsades de pointes (though at higher doses than anesthesia uses for PONV, but that is another discussion). Phenergan has side effects as well. Do we completely withhold antiemetics if a patient has risk factors for QT prolongation?

Right now, I don’t believe we know the critical QTc value, or the number of other risk factors present, that should stop us from giving a serotonin antagonist. I think this case report has something more simple and obvious to teach us. We should think about risk factors for torsades de pointes before giving serotonin antagonist antiemetics, just as we think about possible complications of other drugs before administering them. When risk factors are present, we should consider measuring the QTc with a 12-lead ECG even if the patient doesn’t “need” a 12-lead due to age, cardiac history, or diuretic use. If the QTc is prolonged, we should weigh the risks and benefits of administering a drug that further prolongs the QT interval. If the benefit outweighs the risk, we should do what we can to minimize other risk factors, for example, by ensuring normal magnesium and potassium levels and maintaining normothermia. We can also consider prophylactic antiarrhythmic measures much like those used as treatment in the case report. While this process sounds laborious when written down, it is, in fact, the thought process each of us uses every day when giving any other drug. This case report simply points out that, like every other drug, dolasetron has risks and we need to weigh the risks and benefits before giving it.


Michael Fiedler, PhD, CRNA



The motivation to summarize and comment on this article came from a discussion of the article on the CRNA News discussion list on December 22, 2007.

For a more complete listing of nurse anesthesia discussion lists, visit the “CRNA Discussion Lists” page on the American Association of Nurse Anesthetists web site under resources : links : CRNA Discussion Lists.


Regional Anesthesia

Liu SS, Strodtbeck WM, Richman JM, Wu CL


A comparison of regional versus general anesthesia for ambulatory anesthesia: a meta-analysis of randomized controlled trials

Anesth Analg 2005;101:1634-42

Liu SS, Strodtbeck WM, Richman JM, Wu CL




Purpose            The purpose of this article was to combine the results of several studies comparing regional anesthesia to general anesthesia for ambulatory surgery in an attempt to better evaluate the benefits of one type of anesthesia over the other.

Background            Discharging surgical patients the day of their surgery (ambulatory surgery) is increasingly common for a number of reasons, including economics. Anesthesia is one of the limiting factors in speed of discharge. General anesthesia is blamed for increasing discharge time because of side effects including pain and nausea. Regional anesthesia is thought to avoid the postoperative side effects of general anesthesia, thus speeding patient discharge. However, regional anesthesia has its own drawbacks, including administration and set up time as well as a unique set of side effects which may also delay discharge. Improved drugs for both general and regional anesthesia have helped to reduce side effects over the years. General anesthesia has benefited greatly from the use of multimodal analgesia and a new generation of antiemetics. This study analyzed a variety of studies comparing regional and general anesthesia under ambulatory circumstances in order to determine which technique provided the best anesthesia for rapid patient discharge after surgery.

Methodology            The Medline database and Cochrane Database were searched for studies between 1966 and April 2005 that targeted adult randomized controlled trials of “single shot” regional anesthesia versus general anesthesia techniques in ambulatory surgery. Studies that included “hospitalized” patients or patients less than 19 years of age were excluded. Regional anesthetics which included continuous techniques, intraarticular blocks, or local infiltration were excluded. Regional anesthesia was further subdivided into central neuraxial blocks (CNB) consisting of spinal or epidural anesthesia, and peripheral nerve blocks (PNB). Any general anesthetic techniques described as “monitored sedation” were excluded. The specific data extracted from each study included time for induction, nausea and vomiting, visual analogue scale pain scores, use of analgesics in the post anesthesia care unit (PACU), ability to bypass the PACU, time in the PACU, total ambulatory surgery unit (ASU) time, and patient satisfaction.

Result            Twenty-two studies that met the inclusion criteria were identified. Both CNB and PNB anesthesia were associated with lower pain scores and less use of PACU pain medication than was general anesthesia. CNB commonly included both local anesthetic and opioid. Both CNB and PNB were associated with an increase in induction time averaging eight minutes. CNB and general anesthesia resulted in comparable time spent in the PACU, incidences of nausea, and patient satisfaction. CNB increased the total time in the ASU by an average of 35 minutes. PNB was associated with more frequent bypass of the PACU, increased patient satisfaction, a decreased incidence of nausea, and an average 24 minute reduction in PACU time compared to general anesthesia. However, PNB did not decrease or increase the total ASU time. Patients who received general anesthesia, CNB, and PNB all had similar total ASU times.

Conclusion            There were advantages to regional anesthesia over general anesthesia even though neither technique clearly resulted in a faster discharge time. The use of PNB improved postoperative analgesia and prevented nausea but, for unknown reasons, these benefits did not translate into reduced total ASU time. Suggested reasons why the discharge time was not significantly different included the use of common discharge criteria, increased induction time for regional anesthesia, a negative bias among medical personnel and patients regarding regional anesthesia discharge criteria, and a lack of familiarity with “fast tracking” ambulatory patients. The authors suggested refinements in ambulatory anesthesia that might improve the use of both regional and general anesthesia in ambulatory surgery settings. Those refinements include the use of targeted antiemetics, multimodal analgesia, and short acting regional anesthetics. There were a number of limitations in the data analysis: the data were weighted by trial size rather than quality, the data were not homogeneous, and the studies included a variety of surgical procedures, patient types, practices, and institutions. The power analysis was not completely applicable because of these limitations.



A meta-analysis is the use of statistics to evaluate a combination of existing studies on a single topic. Often, this technique is used to combine studies that might be too small for statistical significance on their own, but when pooled with similar studies, and under ideal circumstances, the evidence becomes clearer. The key term here is “under ideal circumstances.” Seldom do the circumstances of combined studies prove to be ideal. Each study has its own unique weaknesses, and those weaknesses may compound each other in a meta-analysis. In other words, a good meta-analysis of bad studies still results in a bad study. I am not necessarily saying this particular analysis is an analysis of bad studies, but there are obvious limitations, some mentioned by the authors and some not, which affected the quality of the results.
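To make the pooling idea concrete, here is a minimal sketch of how a pooled effect might be computed when trials are weighted by sample size (as this analysis reportedly was) rather than by inverse variance or study quality. The trial numbers are hypothetical, invented purely for illustration; they are not data from the studies reviewed here.

```python
# Minimal sketch: pooling a treatment effect across trials,
# weighting each trial by its sample size. This illustrates the
# general idea only; real meta-analyses typically use inverse-variance
# weights and heterogeneity statistics.

def pool_by_sample_size(trials):
    """Each trial is (effect_estimate, n). Returns the n-weighted mean effect."""
    total_n = sum(n for _, n in trials)
    return sum(effect * n for effect, n in trials) / total_n

# Hypothetical trials: (mean change in PACU time in minutes, sample size)
trials = [(-30.0, 40), (-20.0, 120), (-25.0, 60)]
pooled = pool_by_sample_size(trials)
print(round(pooled, 1))  # large trials dominate the pooled estimate
```

Note how the largest (hypothetical) trial pulls the pooled estimate toward its own result regardless of that trial's quality, which is exactly the weakness the commentary raises: size-weighted pooling cannot compensate for weak individual studies.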

The most significant limitation is probably the time period over which the data were collected. The initial search period was from 1966 to 2005. The search yielded studies published from 1997 to 2005, but the analysis did not reveal during which years the individual studies enrolled patients. There have been so many significant changes in anesthesia drugs, techniques, and patient care over that period of time that it is hard to believe the data are truly comparable. Even if each individual study contained comparable data, the changes in general anesthesia versus regional anesthesia over the years have not paralleled each other. For that reason, the meta-analysis can only evaluate how regional anesthesia compares to general anesthesia sporadically over a long time period, for a variety of surgical techniques, a variety of patient populations, and a multitude of other known and unknown variables. It cannot necessarily tell us which technique is better and should be used most often in today’s ambulatory surgery environment.

It is apparent to me that the authors had a bias toward regional anesthesia, especially peripheral nerve blocks, and they were surprised that their results did not yield evidence of shorter ASU patient stays. They excused these results based upon such things as provider bias, lack of “lenient” discharge criteria, or the use of common discharge criteria for general and regional anesthesia. They offered no rationale demonstrating that these were the actual reasons, only speculation.

The authors made some assumptions that contributed to the weakness of this analysis, but at the same time they did a reasonable job of describing most of its limitations. Because of these limitations, I am not sure that this analysis does a very good job of truly identifying the benefits of one anesthesia technique over the other. It does identify some of the benefits each technique has individually, but does not show how those benefits translate into a result that would encourage a provider to select a specific technique.

When you look at the multitude of variables described in this analysis which affect each of our clinical practices, including “practitioner bias”, discharge criteria, patient expectation, and economic pressures, it is evident that individual providers have to make technique selections based on their own situational needs.

The authors of this paper are correct that anesthesia has had “continuous refinements”. Because of that, the best technique will continue to change from time to time, and from situation to situation. The best advice that can be given to any anesthesia provider is to remain proficient in both regional and general anesthesia techniques, and apply whichever technique is determined to be the best for the situation and patient presented.



Steven Wooden, MS, CRNA



© Copyright 2007 Anesthesia Abstracts · Volume 1 Number 8, December 31, 2007

Dhir S, Ganapathy S, Lindsay P, Athwal GS


Case Report: ropivacaine neurotoxicity at clinical doses in interscalene brachial plexus block


Can J Anesth 2007;54:912-916

Dhir S, Ganapathy S, Lindsay P, Athwal GS


Purpose            The purpose of this case report was to describe an occurrence of local anesthetic toxicity from systemic absorption despite the use of a generally accepted “safe” dose of ropivacaine.

Background            Ropivacaine is less toxic than several other long-acting local anesthetics. Reports of ropivacaine toxicity generally involve intravascular injection. Ropivacaine plasma concentrations in excess of 5 µg/mL have been reported without any clinical signs of toxicity. “Early” neurotoxicity has been reported at plasma levels of 2.2 µg/mL in volunteers. Ropivacaine is bound to alpha-1-acid glycoprotein (AAG); thus, AAG levels may affect the free fraction of ropivacaine available to exert toxic effects.

Methodology            A 76-year-old woman with multiple myeloma experienced a pathologic humeral fracture. She had a metastatic lesion on her clavicle and had previously undergone radiation therapy. She weighed approximately 38 kg. At age 21 she experienced an isolated seizure; otherwise, she had no cardiac or neurologic pathology. Her medications included thalidomide, pamidronate, dexamethasone, aspirin, ibuprofen, paroxetine, and docusate. She was scheduled for an open reduction and internal fixation of the humerus. A continuous interscalene brachial plexus block was planned for her anesthetic and for postoperative pain management.

After sedation with fentanyl and midazolam, 3 mL of 1% lidocaine was used for local infiltration and a superficial cervical plexus block (less than 0.8 mg/kg lidocaine). Next, an interscalene brachial plexus catheter was placed. A total of 25 mL of 0.3% ropivacaine with 2.5 µg/mL epinephrine was injected through the catheter incrementally with periodic aspiration (total ropivacaine dose 75 mg, 1.97 mg/kg). The patient was talking during the block procedure. There were no early signs of local anesthetic toxicity.

Result            Fifteen minutes after ropivacaine injection the patient suddenly became unresponsive and had a grand mal seizure. Anesthesia was induced and the patient paralyzed, intubated, and ventilated with 100% oxygen. Aspiration of the brachial plexus catheter again at this time produced no evidence of intravascular placement. A venous blood sample was drawn to measure a ropivacaine level. After verification that the seizure had been terminated, general anesthesia was continued and the surgery performed. Postoperatively, the patient had a complete motor and sensory block which resolved over the next 24 hours. Intravascular catheter placement was ruled out. The total plasma ropivacaine concentration, drawn 20 minutes after the block was dosed, was 3.68 µg/mL. After follow up care, the patient was discharged home and had no sequelae.

Conclusion            Even ropivacaine may cause local anesthetic toxicity despite proper dosing, proper needle placement, incremental injection, and frequent aspiration during injection. Proper monitoring and preparations to manage local anesthetic toxicity remain vital even with “low toxicity” local anesthetics.




Ropivacaine is markedly less toxic than bupivacaine and tetracaine. Ropivacaine is probably at least 50% less toxic than bupivacaine when comparing toxic plasma concentrations. There are case reports of individuals inadvertently receiving 20 mL of 0.75% ropivacaine (150 mg) IV and only experiencing ringing in the ears. But … as this case report points out, that doesn’t mean ropivacaine has no toxicity.

Normally, 35 to 40 mL of local anesthetic is used for an interscalene block, and normally the patient is at least an average-sized adult, ±70 kg. This patient weighed less than 40 kg and was probably debilitated / malnourished. While she received only 25 mL of 0.3% ropivacaine (75 mg or 1.97 mg/kg), this was almost as much ropivacaine on a mg/kg basis as an average-sized adult would receive from an average volume of local anesthetic. Furthermore, she had received a small dose of another amide local anesthetic, lidocaine, apparently without epinephrine, just before the block was performed. So, all in all, the margin of safety was probably similar to or slightly less than that of an interscalene block in an average-sized adult. If her alpha-1-acid glycoprotein levels were low as a result of malnutrition, she would likely be at greater risk for local anesthetic toxicity. (Protein levels were not measured and we don’t know this.) To be clear, I’m not suggesting the authors should have seen this coming. To the contrary, it appears that they took the patient’s weight into consideration when planning the block. They also used proper technique to detect and avoid even a partial intravascular injection of local anesthetic.
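The dose arithmetic above is easy to verify. A percent concentration converts to mg/mL by multiplying by 10 (0.3% = 3 mg/mL; 1% = 10 mg/mL). The following sketch simply checks the figures reported in the case (the helper function name is my own, not from the report):

```python
# Verifying the local anesthetic doses reported in this case.
# Percent concentration -> mg/mL: multiply the percent by 10.

def dose_mg(volume_ml, percent):
    """Total local anesthetic dose in mg for a given volume and % concentration."""
    return volume_ml * percent * 10

weight_kg = 38

ropi_mg = dose_mg(25, 0.3)             # 25 mL of 0.3% ropivacaine = 75 mg
ropi_mg_per_kg = ropi_mg / weight_kg   # ~1.97 mg/kg, as reported

lido_mg = dose_mg(3, 1.0)              # 3 mL of 1% lidocaine = 30 mg
lido_mg_per_kg = lido_mg / weight_kg   # ~0.79 mg/kg, i.e. "less than 0.8 mg/kg"

print(ropi_mg, round(ropi_mg_per_kg, 2), lido_mg, round(lido_mg_per_kg, 2))
```

The point of the check is the one made in the commentary: although the absolute volume was reduced, the mg/kg exposure in this 38 kg patient approached that of a full-volume block in a 70 kg adult.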

I see two take home messages here. First, don’t ever think that local anesthetic toxicity can’t occur; think about the margin of safety and always be prepared to manage local anesthetic toxicity when performing regional anesthesia. Second, while we can plan for what we know, what we don’t know can still decrease the margin of safety and result in complications. In this case, even though the plan included reducing the dose of local anesthetic to compensate for an underweight patient, some other factor enhanced the systemic absorption and/or free plasma fraction of ropivacaine enough to result in CNS toxicity. Elderly and debilitated patients may benefit from an even greater margin of safety to counterbalance what we don’t know.


Michael Fiedler, PhD, CRNA



For information about using intralipid to treat local anesthetic toxicity see Levobupivacaine-induced seizures and cardiovascular collapse treated with intralipid® (Anaesthesia 2007;62:516-518) in Anesthesia Abstracts · Volume 1 Number 2, May 31, 2007.
