ISSN: 1938-7172
Issue 7.8

Michael A. Fiedler, PhD, CRNA

Contributing Editors:
Penelope S Benedik, PhD, CRNA, RRT
Mary A Golinski, PhD, CRNA
Alfred E Lupien, PhD, CRNA, FAAN
Dennis Spence, PhD, CRNA
Cassy Taylor, DNP, DMP, CRNA
Steven R Wooden, DNP, CRNA

Assistant Editor
Jessica Floyd, BS

A Publication of Lifelong Learning, LLC © Copyright 2013

New health information becomes available constantly. While we strive to provide accurate information, factual and typographical errors may occur. The authors, editors, publisher, and Lifelong Learning, LLC are not responsible for any errors or omissions in the information presented. We endeavor to provide accurate information helpful in your clinical practice. Remember, though, that there is a great deal of information available and we are presenting only some of it here. Also, the comments of contributors represent their personal views, colored by their knowledge, understanding, experience, and judgment, which may differ from yours. Their comments are written without knowing the details of the clinical situation in which you may apply the information. In the end, your clinical decisions should be based upon your best judgment for each specific patient situation. We do not accept responsibility for clinical decisions or outcomes.

Table of Contents

Comparative Effectiveness of the C-MAC Video Laryngoscope versus Direct Laryngoscopy in the Setting of the Predicted Difficult Airway

Anesthesiology 2012;116:629-636

Aziz M, Dillman D, Fu R, Brambrink A


Purpose The purpose of this study was to compare the success rate of initial intubation attempts with the C-MAC video laryngoscope vs. a traditional laryngoscope in the population with a predicted difficult airway.


Background It remains uncertain whether using a video laryngoscope ensures a high success rate at initial intubation attempts compared to a standard laryngoscope, specifically in the predicted difficult airway population. Use of the video laryngoscope and its associated technology has grown significantly over the past several years, and it is now informally advocated by providers for managing a difficult airway. Several studies have provided evidence that video laryngoscopes improve the laryngeal view and make intubation easier, especially for the learner. Additionally, the video laryngoscope has been used successfully as a rescue device when initial attempts at laryngoscopy and intubation have failed. What has not been validated is whether the video laryngoscope can increase intubation success when used by the experienced provider, particularly in the patient with a predicted difficult airway. Are experienced laryngoscopists more successful at initial intubation attempts using a video laryngoscope compared to a traditional laryngoscope? This is a very relevant question for anesthesia providers. Each additional intubation attempt increases a patient’s risk of significant morbidity and even mortality.


Methodology This was a single-blind, two-arm, randomized controlled trial comparing the C-MAC video laryngoscope with direct laryngoscopy in individuals with a predicted difficult airway. Practitioners were instructed in the use of the C-MAC video laryngoscope and, for three months prior to subject enrollment, used the C-MAC in everyday clinical practice. Patients were recruited and included in the study if one or more of the following predictors of difficult intubation were identified:

  • reduced cervical motion from pathologic condition
  • cervical spine precautions
  • Mallampati classification score III or IV
  • less than 3 cm mouth opening
  • history of difficult direct laryngoscopy

The randomization scheme involved a 1:1 allocation using computerized software. Initial attempts at intubation were made either with conventional laryngoscope blades or with the C-MAC system. Patients were blinded to their intubation technique until the postoperative assessment was complete. A standardized induction sequence was used, and positioning for airway establishment was guided by physical findings. Muscle relaxant use was at the discretion of the provider. Obese patients were ‘ramped,’ and those with cervical spine precautions were managed with manual in-line stabilization. Laryngoscopy was performed only by predefined experienced providers; novices were excluded. The primary outcome measure was intubation success at first attempt, verified by ETCO2. Secondary outcome measures were:

  • best Cormack-Lehane laryngeal view
  • laryngoscopy time
  • use of external laryngeal manipulation
  • use of a bougie
  • oxygen desaturation
  • airway related complications

Any failed attempt was managed at the discretion of the provider. In the recovery room, each patient was examined for signs of airway trauma.


Result A total of 296 airway management procedures were performed by 91 anesthesia providers. Demographic data varied between groups only with regard to thyromental distance < 6 cm (more common in the C-MAC group) and the type of provider (fewer residents in the C-MAC group). The proportion of successful first attempts was 93% in the C-MAC group compared with 84% in the direct laryngoscopy group (P = 0.026). This remained significant even after adjusting for the two demographic differences between groups. The secondary outcome data follow:

1) Cormack-Lehane laryngeal view was graded I or II in 93% of C-MAC laryngoscopies versus 81% of the direct laryngoscopies (P < 0.01)

2) In successful intubations, laryngoscopy time averaged 46 seconds in the C-MAC group and 33 seconds in the direct laryngoscopy group (P < 0.001)

3) The use of a bougie and/or external laryngeal manipulation was required less often in successful C-MAC laryngoscopies compared to successful direct laryngoscopies (P = 0.02)

4) Oxygen desaturation was not statistically significantly different between groups

5) There were very few complications, and the incidences of lip/gum/oral trauma, dental trauma, and sore throat were not significantly different between groups


Conclusion In a common clinical care environment with a large, diverse patient population and several types of anesthesia providers, initial intubation success in the predicted difficult airway patient was more common with the C-MAC video laryngoscope than with traditional direct laryngoscopy. While the average time to intubate was longer using the C-MAC system, it appeared to be a useful technique for the initial approach to managing a predicted difficult airway.


Comment We are very fortunate to have the benefit of the most current technology when we manage an airway, in both emergent and nonemergent scenarios. Many of the new airway management products have been specifically designed for use in the patient with a potentially difficult airway. For all of us who establish airways as a component of the clinical care we provide, the ability to ventilate and subsequently intubate is one of the most intensely complex procedures we perform, yet it is performed regularly. Failure to establish an airway remains the leading cause of anesthesia-related morbidity and mortality, even in this era of sophisticated technology. I am often impressed with the advances in the technology; they have certainly made my practice safer. However, I would be remiss if I failed to discuss the continued importance of the ASA difficult airway algorithm. The algorithm has been established and refined over the years to incorporate not only advances in technology but also what we have learned about human anatomy and physiology. The guidelines have been established from sound, evidence-based procedures. Consider the following general scenario:


An ASA III, middle-aged male presents for elective surgery. Pre-anesthesia assessment identifies a Mallampati III airway and a normal thyromental distance. A thick neck is noted, and only moderate neck flexion and extension can be elicited. He has no diagnosis of obstructive sleep apnea, yet the patient has been given opioids for pain and is snoring loudly in the preoperative holding area. Oxygen saturation on room air varies between 93% and 95%. After the anesthesia assessment is complete, it is decided to use a video laryngoscope for the initial attempt at endotracheal intubation.


The patient is induced in the operating room. An induction agent is administered and succinylcholine immediately follows; there is no attempt to ventilate between the induction agent and the muscle relaxant. Following loss of consciousness, attempts to ventilate are unsuccessful even with placement of an oral airway. Seconds pass and oxygen saturation drops precipitously. The video laryngoscope is inserted and the vocal cords are viewed, yet attempts to pass the endotracheal tube are unsuccessful. Oxygen saturation continues to decrease; the video laryngoscope is removed, the oral airway is reinserted, ventilation is attempted, and immediate help is called for. With numerous experienced providers now attending to the patient and multiple providers’ hands assisting ventilation, oxygen saturation rises to 95% and the cords are again visualized with the video laryngoscope by another provider. Intubation is again unsuccessful. Attempts with a standard straight blade are unsuccessful as well. Because the vocal cords had been visualized using the video laryngoscope, the anesthesia team felt a sense of security that intubation would be successful, and more muscle relaxant was given. After more unsuccessful attempts at intubation, the patient was awakened without harm and the procedure postponed. For his subsequent anesthetic, an awake fiberoptic intubation was planned.


In this example, the use of sophisticated technology to visualize the vocal cords during attempts at intubation provided a false sense of security which led the anesthesia team to deviate from the difficult airway algorithm. While one might argue that ventilation should have been attempted before the depolarizing muscle relaxant was administered, if the intubation had been conducted as a true rapid sequence the same situation would have occurred. What is most concerning is that visualization of the vocal cords with the video laryngoscope created a false sense of security that an endotracheal tube could be passed. Instead of aborting the procedure because of an inability to easily ventilate, and then an inability to intubate, more muscle relaxant was administered.


The lessons learned from this scenario are monumental. We should use the most advanced technology available, base our plan upon a thorough preoperative assessment, and continue to follow the difficult airway algorithm without being diverted by our assumptions about what new airway technology will do for us. Visualizing the cords does not guarantee successful placement of the endotracheal tube. Let us not be fooled, nor forget to use our technology together with evidence-based practice guidelines. Combining the two should offer the greatest safety in patient care.

Mary A Golinski, PhD, CRNA

© Copyright 2013 Anesthesia Abstracts · Volume 7 Number 8, August 31, 2013

Equipment & Technology
Relationship between bispectral index values and volatile anesthetic concentrations during the maintenance phase of anesthesia in the B-Unaware trial

Anesthesiology 2011;115:1209-1218

Whitlock E, Villafranca A, Lin N, Palanca BJ, Jacobsohn E, Finkel KJ, Zhang L, Burnside BA, Kaiser HA, Evers AS, Avidan MS


Purpose The purpose of this study was to determine the correlation between Bispectral Index Values (BIS values) and End Tidal Anesthetic Concentration (ETAC). The study also examined the effects of four patient characteristics on this correlation:

  • ASA status (3 or less vs. 4)
  • men vs. women
  • age 60 or younger vs. older than 60
  • those alive 1 year postoperatively vs. those who had died


Background Inadequate anesthetic depth can result in intraoperative awareness. Greater attention was directed at awareness as manufacturers began releasing monitors purported to measure depth of anesthesia. Most such depth of anesthesia monitors use processed electroencephalograph (EEG) data. Many, but not all, anesthetics cause known changes in the EEG. Investigators have suggested the following criteria as being required, though not completely sufficient, for any depth of anesthesia monitor to be useful in guiding titration of general anesthesia during anesthetic maintenance:

  • a high correlation between the depth of anesthesia displayed on the monitor and anesthetic concentration in the brain
  • a predictable depth of anesthesia value at which emergence from anesthesia reliably occurs in the majority of patients
  • the same association between depth of anesthesia and the depth of anesthesia monitor value when anesthesia is deepened and when it is lightened (no hysteresis)



Methodology This was a randomized, prospective study of 1,941 adult patients undergoing surgery with isoflurane, desflurane, or sevoflurane anesthesia. The BIS group used a protocol for titrating maintenance of anesthesia based upon BIS values (BIS Quatro, software version XP); in this group, anesthesia providers could see the BIS values. The BIS manufacturer had no role in the study and did not support it financially or in kind. The End Tidal Anesthetic Concentration (ETAC) group used a protocol for titrating maintenance of anesthesia based upon end tidal agent concentration and clinical signs. In the ETAC group, anesthesia providers could not see the BIS display or any part of the monitor, but BIS data were recorded by investigators. End tidal anesthetic concentrations were converted into MAC equivalents adjusted for age. Age-adjusted MAC equivalents were used in order to standardize the description of depth of anesthesia across patients of different ages and different inhalation agents.


Result Data from over 800 patients were excluded from the study because of incomplete data collection; thus, the analysis included data from 1,100 patients. A total of 930 hours of stable maintenance anesthesia data points were included.


The mode (most common) BIS value was in the low 40s irrespective of age-adjusted MAC values across a range of 0.42 to 1.51 MAC. This clustering of BIS values in the low 40s irrespective of the actual MAC delivered to the patient was seen not only in the BIS group, where BIS values could be seen and targeted, but also in the End Tidal Anesthetic Concentration group, where BIS values were not visible to the anesthesia provider. For every 0.1 increase in age-adjusted MAC, the BIS value decreased by approximately 1.5 units. Thus, the BIS would go down by only 9 units when the agent concentration was doubled from 0.6 age-adjusted MAC to 1.2 age-adjusted MAC. At equivalent MAC values, patients 60 or younger tended to have lower BIS values than patients older than 60 years. Lower BIS values at the same MAC were also seen in:

  • women vs. men
  • ASA IV patients vs. ASA I thru III patients
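The slope arithmetic reported above can be sanity-checked with a quick sketch. This is illustrative only: the −1.5-units-per-0.1-MAC slope is the study's linear estimate, not a physiologic constant, and the function name is our own.

```python
# Predicted BIS change from the study's linear estimate:
# about -1.5 BIS units per 0.1 increase in age-adjusted MAC,
# i.e. -15 BIS units per 1.0 age-adjusted MAC.
def predicted_bis_change(mac_start, mac_end):
    slope_per_mac = -15.0  # -1.5 units per 0.1 MAC
    return slope_per_mac * (mac_end - mac_start)

# Doubling the agent from 0.6 to 1.2 age-adjusted MAC:
print(predicted_bis_change(0.6, 1.2))  # -9.0
```

A 9-unit drop on a 0-to-100 scale, in response to a doubling of anesthetic concentration, is the study's point.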

In a subset of patients, a linear regression was performed to determine the correlation between BIS values and age-adjusted MAC. The correlation coefficient was -0.16, a weak correlation. In about 25% of patients the slope of the BIS vs. age-adjusted MAC relationship was nearly zero, indicating almost no variation in BIS values as age-adjusted MAC (depth of anesthesia) was increased or decreased. Conversely, a few patients did show a good correlation between BIS values and age-adjusted MAC values.


Conclusion The three criteria identified by the investigators as being necessary for a depth of anesthesia monitor to be clinically useful were not found in this study [see background section for list]. The correlation between BIS values and stable age-adjusted MAC in most patients was weak, and in about 25% of patients it was nonexistent. Furthermore, the BIS reading was influenced by patient age, gender, and ASA physical status, not depth of anesthesia alone. The BIS was most often insensitive to clinically significant changes in End Tidal Anesthetic Concentration.



Comment This study is one in a long list of studies that reveal the futility of using current depth of anesthesia monitors. There is ample scientific evidence that the BIS does not reliably monitor depth of anesthesia and no credible evidence I’m aware of that suggests it does. An often cited study that looks on the surface as though BIS use might help anesthetists avoid recall in those at greatest risk for postoperative recall has such wide variability in its results that it is simply not credible.


This well-executed study provides solid information in three general categories.

First, the BIS doesn’t correlate well with actual anesthetic depth. When the inhalation agent is turned up, and the patient’s anesthetic deepened, the BIS should go down. If there were a perfect relationship between an increase in the End Tidal Anesthetic Concentration (depth of anesthesia) and the BIS value, the correlation would be -1.0. Don’t let the minus sign fool you; this would be a perfect correlation. The minus sign simply means that as one goes up (agent concentration or MAC) the other goes down (BIS). But the correlation wasn’t anywhere near -1.0. It was -0.16. The investigators generously called this correlation “weak.” I’d call it almost nonexistent.
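To make the minus sign concrete, a small sketch computing a Pearson correlation on made-up (MAC, BIS) pairs may help; the numbers are purely illustrative and are not study data.

```python
from statistics import mean

def pearson_r(xs, ys):
    # Pearson correlation: +1 = perfect positive, -1 = perfect negative, 0 = none.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A perfectly inverse MAC-vs-BIS relationship gives r = -1.0:
mac = [0.6, 0.8, 1.0, 1.2]
bis = [50, 47, 44, 41]  # BIS falls linearly as MAC rises
print(round(pearson_r(mac, bis), 2))  # -1.0
```

An ideal monitor would track anesthetic depth this tightly; the study's observed -0.16 is nowhere close.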

Second, in about 25% of patients, the correlation wasn’t even this good. It was even more nonexistent than a correlation of -0.16. Over a wide range of MAC values the BIS simply didn’t change.

Third, the BIS value was partly determined by patient age, gender, and ASA physical status. While it would be inconvenient, there could be a physiologic basis for a difference due to gender. To work well, manufacturers would have to define any gender influence and put a switch on the monitor for us to specify whether the patient being monitored was male or female. But age had already been compensated for in the age-adjusted MAC values used. And I can’t think of a conceptual basis by which ASA physical status should influence BIS values. These are not the only factors unrelated to depth of anesthesia that have been shown in the scientific literature to influence BIS values. Other documented factors include:

  • muscle relaxants
  • patient position
  • drug used to induce / maintain general anesthesia


In fairness, it should be noted that in a very few patients there was a good correlation between anesthetic depth and BIS values. From the manufacturer’s perspective this is encouraging. It may mean that they are close to right in some way so it works as intended once in a while. But it may also mean that random chance sometimes results in readings that only look good. From a clinician’s perspective, however, this just doesn’t fly. Who among us would accept a BP monitor, pulse oximeter, or ETCO2 monitor that got it right once in a while and the rest of the time gave us results that were totally unrelated to the real BP, pulse ox, or ETCO2? No one I know.


I occasionally speak to anesthetists who regularly use a consciousness monitor and are convinced that these monitors are accurate and clinically helpful. I respect my colleagues’ viewpoint. There may be a narrow range of patient circumstances and anesthetic techniques under which consciousness monitors work much better than their overall level of accuracy would suggest. But my colleagues may also have been fooled by marketing, expectation bias, or coincidence. This is why basing our practice on evidence, such as this research, is so important. To be useful, a monitor has to work for everyone, all the time. I can be fooled. The systematic, controlled, unbiased approach used in well done studies such as this one is much harder to fool than my observations are. We would be wise to pay attention to it.

Michael A. Fiedler, PhD, CRNA

© Copyright 2013 Anesthesia Abstracts · Volume 7 Number 8, August 31, 2013

Smoking and perioperative outcomes

Anesthesiology 2011;114:837–46

Turan A, Mascha EJ, Roberman D, Turner PL, You J, Kurz A, Sessler DI, Saager L


Purpose The purpose of this study was to describe the effects of smoking on 30 day postoperative outcomes. Secondarily, the study looked for a dose response relationship between cigarette smoking and postoperative complications.


Background The number of deaths attributed to smoking is staggering: 100 million people in the 20th century alone. It is estimated that 20% of adult Americans smoke cigarettes. Smoking's impact is far-reaching, going beyond disease to include a significant economic burden both for the smoker and for society in general. In addition to causing several types of lung disease, smoking has been shown to be associated with poor wound healing, wound dehiscence, wound infections, sepsis, cardiovascular disease, and stroke. We have long known that smoking worsens postoperative outcomes, but quantitatively how much smoking increases the risk of poor postoperative outcomes has not been known.


Methodology This retrospective cohort study examined data from the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database. The database contained information collected between 2005 and 2008. Although the study itself was retrospective, the ACS-NSQIP database contained patient data collected prospectively according to strict guidelines, and data collection was therefore quite complete. From this database, records were divided into current smokers and never smokers. Current smokers included those who reported having smoked cigarettes in the year previous to being admitted for surgery. Never smokers included those who reported 0 pack years of cigarette smoking during their lifetime. Those who had not smoked in the year before surgery but had smoked at some point in their life were not included in either group.


Statistical methods were used to ensure that the smoker and never smoker groups had equal distributions of potentially confounding variables that might impact postoperative outcome. These variables included things such as:

  • gender
  • ethnicity
  • age
  • alcohol use
  • diabetes
  • renal failure
  • ASA physical status classification
  • type of anesthesia


The dose-effect relationship between number of pack years smoked and postoperative outcomes was examined using a composite rate of all major postoperative morbidity. Exclusion criteria included patients who had preoperative pneumonia, mechanical ventilation, sepsis, coma, or wound infection.


Result The ACS-NSQIP database included 635,265 surgical cases. After removing records of those who met exclusion criteria or had missing data, 391,006 patients remained. Of those, 26.5% were smokers and 73.5% were never smokers. Smokers were more likely to be male, drink alcohol, and have a higher ASA physical status classification.


In an unadjusted analysis, the odds of major postoperative morbidity were significantly higher in smokers versus never smokers (odds ratio 1.72; P < 0.0001). The increased risk of specific postoperative morbidities in smokers compared to never smokers is summarized in Table 1.



            Table 1. Increased Risk of Postoperative Complications in Smokers

  • unplanned intubation
  • ventilation greater than 48 hours
  • cardiac arrest
  • myocardial infarction
  • superficial wound infections
  • deep wound infections
  • septic shock
  • wound dehiscence

When considering the dose response relationship between the number of pack years of cigarettes smoked and postoperative morbidity, those who smoked less than 10 pack years had a risk of morbidity similar to never smokers. However, patients who had smoked more than 10 pack years had significantly greater risk of postoperative morbidity than did never smokers (P < 0.001). Past 10 pack years, as the number of pack years smoked increased to 20, 30, and 40, the further increase in risk of postoperative complications was slight.
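For readers who want to see where an odds ratio like the unadjusted 1.72 reported above comes from, a minimal sketch follows. The cell counts are invented for illustration; the study's actual 2 × 2 counts are not reproduced here.

```python
def odds_ratio(a, b, c, d):
    # 2x2 table: a = exposed with outcome,   b = exposed without,
    #            c = unexposed with outcome, d = unexposed without.
    return (a / b) / (c / d)

# Hypothetical counts (not the study's data): among 1,000 smokers,
# 120 had major morbidity; among 1,000 never smokers, 75 did.
print(round(odds_ratio(120, 880, 75, 925), 2))  # 1.68
```

An odds ratio near 1.7 means the odds of major morbidity in smokers are roughly 1.7 times the odds in never smokers, which is the magnitude the study reported before adjustment.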


Conclusion This analysis of a large validated database indicated that smoking was associated with increased postoperative risk of cardiovascular, pulmonary, wound infection, and septic complications. However, the number of pack years smoked did not correlate directly with increased morbidity. Smoking less than 10 pack years was associated with risk of postoperative complications similar to never smokers. Smoking more than 10 pack years incurred significantly greater risk of postoperative complications compared to never smokers but additional pack years increased risk only modestly.



Comment The percentage of surgical patients who are smokers is higher than the percentage of smokers in the general population; smokers still make up a significant portion of our patients. In the past we've known that smoking was associated with a host of cardiovascular and pulmonary complications, but we haven't known how much smoking was associated with how large an increase in that risk. This study provides us with specific information about how much smoking is associated with how large an increase in the risk of specific postoperative complications. Having such information is incredibly important when planning an anesthetic. With it, we know that we probably don’t need to take any additional precautions for the two-cigarette-a-day smoker. And we know that the 20 pack year smoker is at significantly greater risk of a number of postoperative complications. We can tailor our anesthetic plan to help prevent some of those complications and to prevent harm from others.


One example of how we can use this information when planning an anesthetic is found in the data about wound and systemic infections. In 2008, postoperative wound infections were the second most common complication in healthcare, affecting 252,695 patients a year in the USA at an average cost of $3,364 per patient.(1) We know that regional anesthesia and keeping patients warm both increase wound blood flow and oxygenation, resulting in a decreased likelihood of wound infections. In a 10-plus pack year smoker (30% - 42% increased risk for wound infection), regional anesthesia and every precaution to keep the patient normothermic perioperatively should be strongly considered. All the more so if the patient has additional risk factors for wound infections, like diabetes. Doing so is not only good for the patient, it helps prevent wound infections from sucking money out of the healthcare system. A lower cost of care due to preventing wound infections ultimately means health care is more affordable. This is but one example of how we can identify patients at higher risk and make plans that will reduce the incidence or severity of their postoperative complications.


I was surprised by one finding of this study. I have always assumed that more pack years of smoking resulted in more pathology and sicker patients. Apparently past about 10 pack years that is only a little bit true. This is the beauty of good research; we have a chance to unlearn what we’ve gotten wrong.

Michael A. Fiedler, PhD, CRNA

1. Van Den Bos J, Rustagi K, Gray T, Halford M, Ziemkiewicz E, Shreve J. The $17.1 billion problem: the annual cost of measurable medical errors. Health Aff 2011;30:596-603.

© Copyright 2013 Anesthesia Abstracts · Volume 7 Number 8, August 31, 2013

Preoperative dexamethasone enhances quality of recovery after laparoscopic cholecystectomy

Anesthesiology 2011;114:882-890

Murphy GS, Szokol JW, Greenberg SB, Avram MJ, Vender JS, Nisman M, Vaughn J


Purpose The purpose of this study was to compare recovery characteristics in laparoscopic cholecystectomy patients with, and without, the preoperative administration of 8 mg dexamethasone.


Background Laparoscopic cholecystectomy is a common surgery. Patients are often discharged home shortly after the procedure is completed, making the quality of recovery a crucial factor in a patient’s overall comfort and satisfaction. Dexamethasone has been shown in some studies to reduce PONV after laparoscopic procedures, reduce postoperative pain, increase feelings of well-being, decrease fatigue, and reduce the sore throat associated with endotracheal intubation. Since dexamethasone has an onset of 1 to 2 hours, when it is administered may be crucial to producing some or all of these outcomes. Shorter intervals between dexamethasone administration and the start of surgery may account for the failure of some studies to demonstrate these beneficial effects.


Methodology This was a randomized, double-blind, placebo-controlled study in patients who had laparoscopic cholecystectomy with general anesthesia. Approximately one hour before incision, Decadron patients received 8 mg dexamethasone IV over at least 60 seconds; control patients received an equal volume of saline. All patients had a general anesthetic with midazolam, propofol, muscle relaxant (succinylcholine and/or rocuronium), fentanyl, and sevoflurane. All patients received 4 mg ondansetron 30 minutes prior to the end of the case. Trocar insertion sites were infiltrated with bupivacaine in both groups.


The primary tool used to assess quality of recovery was the 40-item “Quality of Recovery” instrument (QoR-40). Each item in the QoR-40 was a question answered on a five-point Likert scale [1 = poor … 5 = excellent]. The QoR-40 has been validated in a number of types of surgical patients. QoR-40 results 24 hours postoperatively were the primary outcome measure. A second survey was administered to capture symptoms related to steroid side effects. Pain was assessed with a 100 mm visual analog scale (VAS). QoR-40 data were “averaged” [emphasis by editor] and analyzed with a t test. Categorical data were analyzed with the Fisher exact variant of the chi-squared test.


Result After five patients were converted to open cholecystectomies and excluded from the study, 91 patients remained in the analysis (46 Decadron group, 45 Control group). There were no significant differences in demographics, medical history, anesthetic technique, or duration of anesthesia between groups except for a higher incidence of preexisting hypertension in the Decadron group. There was no difference between groups in the incidence of adverse events that might be attributed to Decadron.


The incidence of postop nausea was lower in the Decadron group [despite the fact that all patients received ondansetron] (12.5% vs. 37%, P = 0.003). Decadron patients were 67% less likely to be treated for PONV in the Ambulatory Surgery Unit (P = 0.001). Likewise, Decadron patients were treated for pain less often than Control patients while in the PACU (71% vs. 97%, P < 0.001), and their pain was relieved with lower total doses of hydromorphone (P < 0.001). But once in the Ambulatory Surgery Unit, there was no difference in pain, or the need to treat pain, between the Decadron and Control groups. Twenty-four hours postoperatively, the Decadron group had higher QoR-40 scores, indicating a higher “quality of recovery” compared to Control group patients (QoR-40 scores 178 vs. 161, P < 0.001). Lastly, time to discharge was also shorter in Decadron patients: 1 hour 37 min vs. 4 hours 25 min (P = 0.009).


Conclusion Dexamethasone 8 mg IV approximately one hour before incision for laparoscopic cholecystectomy resulted in less nausea postoperatively and a reduction in pain while in the PACU. Quality of Recovery scores were somewhat improved in the Decadron group and discharge times were significantly shorter.



Comment I think we are beginning to understand that dexamethasone can be a real friend to anesthesia in many patients. And, I’ll admit, going into this study I expected it was going to show that Decadron did all sorts of good things for lap chole patients. I already believed it clinically; I’ve experienced it both as a patient and as an anesthetist. And all that may be true, but this study didn’t provide the evidence to convince me my observations were correct. Unfortunately, it suffers from several common analytical mistakes and, in my view, the investigators were a little too enthusiastic in their interpretation of the results.


So, was there anything I could believe in this study? Yes. Here is what I took away from it after winnowing out the chaff. 

  • The Decadron group had significantly less nausea; 67% less than Control patients in the Ambulatory Surgery Unit. This difference was highly statistically significant and was analyzed properly. While this effect is not surprising, the magnitude of the reduction in postop nausea makes me ask if we shouldn’t be giving Decadron to all lap chole patients unless there is a reason not to.
  • The Decadron patients were clearly ready for discharge much sooner than Control patients. This difference was highly statistically significant and was analyzed properly. This ultimate outcome criterion, how quickly patients could be discharged, may be the best evidence of the Quality of Recovery.


That said, the study was less convincing in the areas of:

  • Pain
  • Quality of Recovery

The study assessed pain with a Visual Analogue Scale, as is commonly done. They produced “statistically significant results.” But patients don’t seem to perceive pain linearly; not like a ruler where the difference between 1 inch and 2 inches is the same as the difference between 5 and 6 inches. Also, the difference in pain reported between groups really wasn’t that great from a clinical perspective; less than 10%. And the confidence interval included zero, so I’m not even sure of that 10% difference. Is that clinically significant? I don’t think so. Nevertheless, Decadron patients were treated less often for pain and they needed less opioid to relieve their pain. These pain treatment data were analyzed correctly and were highly clinically and statistically significant. I accept this as evidence that Decadron probably did result in better pain management, at least while patients were in the PACU.


Next, on to Quality of Recovery and the QoR-40 tool. This is a valid tool to assess the recovery experience of a patient. But averaging these data is like averaging the order in which runners finished a race. It really doesn’t mean anything. Is the difference between a QoR-40 of 178 and 161 clinically significant? (Maximum value = 200. Higher numbers are better quality of recovery.) Statistically significant, probably. Clinically, I’m doubtful. Decadron may actually make people feel better. I’m only saying this study didn’t show it.


Lastly, and perhaps most importantly, this study raised a question in my mind that has profound ethical considerations. I think the biggest finding from this study was that Decadron patients were discharged about an hour sooner than Control patients. If that’s true over a wide range of patients it’s huge. Every patient out the door an hour sooner? Think of the money that would save! So, then I’m thinking, “could there be institutional pressure to give everyone Decadron to get them out the door sooner and save money?” Now I’m not saying there would be, nor am I trying to disparage administrators in general. But I do think this raises a question we need to think about. How do we make sure there is a decision making process in place that is based solely upon patient considerations and excludes institutional considerations? How do we make sure we aren’t “pressured” into giving Decadron when we wouldn’t otherwise? Something to think about.

Michael A. Fiedler, PhD, CRNA

© Copyright 2013 Anesthesia Abstracts · Volume 7 Number 8, August 31, 2013

The limits of succinylcholine for critically ill patients

Anesth Analg. 2012;115:873-9

Blanié A, Ract C, Leblanc PE, Cheisson G, Huet O, Laplace C, Lopes T, Pottecher J, Duranteau J, Vigué B


Purpose The purpose of this study was to identify factors associated with increases in arterial potassium concentrations after succinylcholine administration in ICU patients. A secondary purpose was to analyze the incidence of hyperkalemia ≥ 6.5 mmol/L under these same conditions.


Background Succinylcholine is often used to facilitate endotracheal intubation in high acuity patients. It causes paralysis by occupying nicotinic receptors and causing skeletal muscle cell depolarization. Part of the mechanism of skeletal muscle depolarization involves the movement of potassium from the intracellular to the extracellular space. As a result, when succinylcholine is used to cause skeletal muscle paralysis, plasma potassium levels rise temporarily; generally no more than 0.5 to 1 mmol/L. Under some circumstances, however, the increase in plasma potassium may be much greater, resulting in acute, life-threatening hyperkalemia. Either way, peak plasma concentrations of potassium occur about 4 minutes after succinylcholine administration.


One of the most important causes of hyperkalemia after succinylcholine administration is a prior upregulation of nicotinic receptors. Risk factors for this upregulation are known and include:

  • anatomic denervation of skeletal muscle (e.g. paraplegia or quadriplegia > 48 hours old)
  • prolonged administration of a neuromuscular blocking drug
  • major burns
  • prolonged immobilization.

Despite what is known about hyperkalemia following succinylcholine administration, the contraindications to succinylcholine use in the ICU are not agreed upon. Previously, one small study found a correlation between the length of time a patient had been in the ICU and the magnitude of the increase in serum potassium after succinylcholine administration.


Methodology This was a prospective, observational study of established patterns of care over 18 months in a 22-bed trauma center surgical ICU. The study included all emergency intubations with succinylcholine in ICU patients. The choice of muscle relaxant and whether or not an induction drug was used was made by the attending physician. Patients with a severed spinal cord >48 hours old, burns, or a baseline potassium >5.5 mmol/L were not included in the study. Also excluded were those patients who did not have a recent pre-intubation potassium or had their post-intubation potassium drawn more than 5 minutes after succinylcholine administration. The routine post-intubation arterial blood gas was collected between 3 and 5 minutes after succinylcholine administration to ensure measurement of the peak potassium concentration. Statistical analysis was well thought out and rigorous.


Result Data from 131 patients and 153 intubations were analyzed. Exclusion criteria were applied to 65 other intubations. Median patient age was 62 years and 72% were male. The median dose of succinylcholine was 1 mg/kg. Etomidate was used for induction in 88% of intubations. All patients received an induction drug. The median pre-succinylcholine potassium was 4.0 mmol/L. The median increase in potassium was 0.4 mmol/L.


Factors associated with an increase in potassium after succinylcholine administration to ICU patients included:

  • length of ICU stay (P<0.001)
  • acute cerebral pathology (P=0.005)
  • previous use of succinylcholine (P<0.001)
  • motor deficit > 48 hours (P=0.001)

When a multivariate analysis was performed, the only risk factor that remained significant was the length of ICU stay prior to intubation (ρ=0.56, P<0.001). The change in serum potassium after succinylcholine administration increased linearly with the length of the patients’ stay in the ICU.


In 11 cases (7% of intubations) the potassium after succinylcholine administration was ≥6.5 mmol/L. In 2 of these, ventricular tachycardia occurred and was successfully treated with calcium administration. Risk factors for a potassium ≥6.5 mmol/L compared to those whose potassium remained below this value included:

  • length of ICU stay (P<0.001)
  • acute cerebral pathology (P=0.008)
  • previous succinylcholine use (P=0.026)

When a multivariate analysis was performed, both length of ICU stay and cerebral pathology remained significant. The critical threshold for length of ICU stay predictive of a post-succinylcholine potassium ≥6.5 was 16 days. This threshold was both sensitive and specific. There were 126 intubations before this 16 day threshold and only one of them had a post-succinylcholine potassium ≥6.5. There were 27 intubations after the 16 day threshold and 10 had a post-succinylcholine potassium ≥6.5. The median increase in this group was 1.9 mmol/L. One patient had an increase of almost 5 mmol/L.
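The counts above are enough to reconstruct the 2×2 table behind the sensitivity and specificity claim. A minimal sketch of that arithmetic (the variable names are mine; the counts are from the study):

```python
# 2x2 table for the 16-day threshold, reconstructed from the reported counts.
# Predictor: ICU stay >= 16 days.  Outcome: post-succinylcholine K+ >= 6.5 mmol/L.
tp = 10          # >= 16 days in the ICU and K+ >= 6.5
fp = 27 - 10     # >= 16 days, but K+ stayed below 6.5
fn = 1           # < 16 days, yet K+ >= 6.5
tn = 126 - 1     # < 16 days and K+ stayed below 6.5

sensitivity = tp / (tp + fn)   # fraction of hyperkalemic cases the threshold flags
specificity = tn / (tn + fp)   # fraction of non-cases the threshold correctly clears

print(f"sensitivity = {sensitivity:.2f}")   # 0.91
print(f"specificity = {specificity:.2f}")   # 0.88
```

As a consistency check, the total hyperkalemic intubations in this table (10 + 1 = 11) match the 11 cases (7% of 153 intubations) reported earlier.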


Conclusion Length of ICU stay ≥16 days correlated with a greater than normal increase in serum potassium following succinylcholine administration. Risk of a serum potassium ≥6.5 mmol/L after succinylcholine included both an ICU stay ≥16 days and an acute cerebral pathology.



This was an especially well done study from an analytical perspective and it accomplished a lot given that the only way to carry it out was as an observational study. Now, I question whether all anesthesia providers in the USA would always use succinylcholine for intubations in the ICU, but that was the standard of care at the hospital in France when these data were collected and this study still has something to teach us. This is the first evidence I’ve seen that simply being immobile for long enough increases the risk for hyperkalemia following succinylcholine. We’ve long known that spinal cord injury patients that couldn’t move were at risk. But this is different. These were critically ill patients who were immobile because they were severely injured or unconscious; no spinal cord injury required.


One thing this study probably doesn’t tell us is the critical number of days of immobility (≥16 in this study) that places patients at risk for hyperkalemia after succinylcholine in your ICU. The reason I say this is twofold. First, the degree of immobility is likely an important factor. Perhaps all their patients were sedated to the point that they didn’t move at all while all your ICU patients are unsedated and move from time to time. Second, their patient population was mostly older men (median age 62 years, 72% male). To the extent that your patient population looks different than theirs, the critical length of immobility associated with the risk of hyperkalemia after succinylcholine may be shorter or longer. Please note, however, that “the longer the immobility, the higher the risk” should still hold true, no matter how different the patient population.


Lastly, I want to make some observations about figure 1 in this study which plotted the change in potassium after succinylcholine administration by the length of the patient’s stay in the ICU. There was a clear linear increase in potassium after succ as the length of ICU stay increased. The slope approximated 0.24 by my visual calculation. Not huge, but clinically significant over time. Changes in potassium in patients who’d been in the ICU less than 10 days had little variability; they tended to be closely clustered around the slope of the regression line. However, being in the ICU less than 16 days was no guarantee that patients wouldn’t get hyperkalemic after succ. Several patients in the 7 to 9 day range had increases in potassium of 1.5 to 2 mEq/L. So if they’d started with a K+ of 5 they would have ended up at 6.5 to 7 after succ. The variability in potassium increase got progressively larger the longer the length of the ICU stay. For example, one patient had almost no increase in potassium after succ at 28 days while another patient had a 5 mEq/L increase at 20 days.


This study gives us some solid evidence to better inform our choice of muscle relaxant for intubations in the ICU. If you use succ for intubation in your ICU, it may warrant measuring the pre- and post-succinylcholine potassium for a while in patients who have been there for a week or longer to see how large the increase in potassium is in your patients.

Michael A. Fiedler, PhD, CRNA

For potassium, 1 mmol = 1 mEq. This is not always the case; it depends upon the valence of the ion in question.


Systemic lidocaine to improve postoperative quality of recovery after ambulatory laparoscopic surgery

Anesth Analg. 2012;115:262-267

De Oliveira Jr. GS, Fitzgerald P, Streicher LF, Marcus RJ, McCarthy RJ.


Purpose This study sought to answer the question, “Does systemically administered lidocaine reduce postoperative pain and improve the quality of recovery in outpatients?”


Background Postoperative pain is associated with slower postoperative recovery. Relieving pain in outpatients can be challenging even when employing multimodal analgesia. Avoiding opioids in outpatients is helpful to reduce opioid-related side effects. Systemically administered lidocaine has been shown to reduce postoperative pain. At least one study has shown that systemically administered lidocaine reduced pain in outpatients. While large doses of intraoperative opioids may result in hyperalgesia postoperatively, lidocaine has been shown to reduce remifentanil-induced hyperalgesia. The safety of relatively low dose systemic lidocaine has previously been demonstrated by research.


Methodology This prospective, randomized, double-blind study included healthy women undergoing outpatient gynecologic laparoscopy. Patients chronically taking opioids or who were on corticosteroids were excluded. If the laparoscopic procedure was converted to an open procedure subjects were withdrawn from the study.


Patients were randomized to either a lidocaine or placebo (saline) group. The lidocaine group received 1.5 mg/kg lidocaine IV followed by 2 mg/kg/h until the end of surgery. Placebo patients received the same volume of saline. All patients received 0.04 mg/kg midazolam IV preoperatively. General anesthesia was induced with 1 to 2 mg/kg propofol and maintained with sevoflurane and a remifentanil infusion. Rocuronium was used for muscle relaxation. Remifentanil was discontinued when the trocars were removed. Ketorolac 30 mg and ondansetron 4 mg were administered before emergence. In the PACU, hydromorphone 0.4 mg IV was given every five minutes until patients reported their pain as less than 4 on a 0 to 10 numeric rating scale. Being ready for discharge to home was assessed with the modified post-anesthesia discharge scoring system which evaluates vital signs, ambulation, pain, PONV, and surgical bleeding. Each category is assigned a score from 0 to 2, and higher scores are better; ≥ 9 is generally considered ready for discharge. For pain relief at home, patients were instructed to take ibuprofen 400 mg PO every 6 hours. If they needed additional pain relief, they were to take hydrocodone 10 mg/acetaminophen 325 mg next.
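For concreteness, the study’s lidocaine regimen can be worked through for a hypothetical case. The 70 kg weight and 90 minute case duration below are my illustrative assumptions, not values from the paper:

```python
# Lidocaine dosing per the study protocol: 1.5 mg/kg IV bolus,
# then 2 mg/kg/h infusion until the end of surgery.
# The 70 kg weight and 1.5 h case length are hypothetical examples.
weight_kg = 70
case_hours = 1.5

bolus_mg = 1.5 * weight_kg                             # 105 mg
infusion_mg_per_h = 2 * weight_kg                      # 140 mg/h
total_mg = bolus_mg + infusion_mg_per_h * case_hours   # 315 mg over the case

print(f"bolus: {bolus_mg:.0f} mg, infusion: {infusion_mg_per_h:.0f} mg/h, "
      f"total: {total_mg:.0f} mg")
```

For comparison, these rates are in the same range as the lidocaine infusions historically used for ventricular arrhythmias, which is part of why the commentary below considers the regimen safe in practice.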


Twenty-four hours later an investigator called each patient to find out how much pain medicine they had taken and to administer a Quality of Recovery questionnaire (QoR-40). The QoR-40 includes questions about physical comfort, pain, independence, psychological support, and emotional state. Scores on the QoR-40 range from 40 to 200. Higher scores represent a higher quality of recovery. The investigators accepted a 6.25% difference in QoR-40 (absolute score difference of 10) as a “clinically relevant improvement in quality of recovery.” Statistical analysis was appropriate.


Result A total of 63 patients completed the study and their data were analyzed, 31 in the lidocaine group and 32 in the placebo group. Demographics were no different between groups. The median difference in QoR-40 score was 16 with the lidocaine group having higher quality of recovery scores (P=0.02).


The lidocaine group also had less pain and used less opioid pain medicine. In the PACU, the lidocaine group used a median of 6.2 mg morphine equivalents compared to 8.6 mg morphine equivalents in the placebo group (P=0.04). From discharge home until 24 hours postop, the lidocaine group used a median of 20 mg morphine equivalents compared to 30 mg morphine equivalents in the placebo group (P=0.01). Notably, 5 patients in the lidocaine group used no opioid pain medication at home compared to only 1 patient in the placebo group. Also interesting, though perhaps not unexpected, those patients who used less opioid analgesia also reported a higher quality of recovery.


The average time to hospital discharge was 91 minutes in the lidocaine group compared to 118 minutes in the placebo group (P=0.03). Lidocaine patients were discharged 27 minutes earlier, on average.


Conclusion Patients who received a lidocaine bolus and infusion intraoperatively experienced a better quality of recovery, had less pain, and used less opioid pain medication.



We have known for decades that systemic lidocaine could produce a number of desirable effects in surgical patients. Studies now clearly show how to use lidocaine during general anesthesia in a clinically significant and feasible way. The doses used in this study are only slightly higher than those we used to use routinely as an antiarrhythmic in CCU patients. We have lots of evidence and clinical experience to show that such doses are generally safe in awake patients. And, of course, the CNS depression during general anesthesia makes the possibility of lidocaine CNS side effects, such as seizures, practically zero. I say all this because the idea of running an infusion of lidocaine throughout a general anesthetic sounds a bit, well, … crazy or reckless … but upon careful consideration it is really very unlikely to cause harm at these doses. It is also pretty easy and cheap to do, so if there is a reasonable possibility that it will improve the patient’s postoperative outcome we should consider it.


While the QoR-40 tool has been validated and is accepted, it is a qualitative tool. Qualitative methods can be valuable and I’m not suggesting otherwise. But we must view qualitative results differently than we view quantitative results like blood pressure and the number of milligrams of a drug used. Tools like the QoR-40 produce results that are “softer” than mm Hg or mg morphine. In my opinion, qualitative results should require larger differences to impress us because there is more “play” in their results. In this study the lidocaine group had a higher “quality of recovery,” a median QoR-40 score that was 16 points higher than the control group. While this was statistically significant, in my opinion it was, at best, only minimally clinically significant. (The investigators disagreed with me. They saw it as quite significant.)


So am I saying that this study was no good because they used the QoR-40? Not at all. In fact, other measures showed that the intraoperative infusion of lidocaine significantly improved patients’ postoperative experience. During the first 24 hours postoperatively, lidocaine patients used one-third less opioid analgesia, 20 mg vs. 30 mg. That is clinically significant. Lidocaine patients were consistently discharged home about 30 minutes earlier than placebo patients despite the fact that those applying the discharge criteria had no idea which patient was in which group. That is clinically significant. My bottom line is that nothing says “I feel better” more strongly than having less pain, using less pain medicine, and meeting discharge criteria way faster. So, consider adding a lidocaine bolus and infusion to the anesthetic of your outpatients and see for yourself if it makes a difference.

Michael A. Fiedler, PhD, CRNA


Policy, Process, & Economics
Building shared situational awareness in surgery through distributed dialog

J Multidiscip Healthc 2013;6:109-118

Gillespie BM, Gwinner K, Fairweather N, Chaboyer W


Purpose The purpose of this study was to describe the strategies used to communicate decisions during surgery and the ways in which this dialog creates or compromises the situational awareness of all involved.


Background Providers in the operating room (OR) develop situational awareness by knowing “where have we come from, where are we now, and where are we going?” Teams who function in dynamic environments such as the OR require a shared situational awareness, which helps all members focus on the “big picture.” Shared situational awareness requires communication through dialog and contributes to effective teamwork and clinical decision-making.


Methodology During this study, 143 Australian metropolitan surgeons, physician anesthetists, nurses, and ancillary staff were observed or interviewed regarding their dialog and decision making in the OR. Data were examined for patterns and themes using established qualitative research methods. 


Result Providers used distributed (explicit) dialog in order to build shared situational awareness and coordinate clinical decision-making. Three features of decision-making dialog emerged as recurring themes. These were synchronizing/strategizing, sharing local knowledge, and prioritizing contingencies.


Synchronizing/strategizing allowed providers to time tasks appropriately based on cues provided by other team members. An example of synchronizing/strategizing occurred when a surgeon announced publicly that he was going to talk himself through the next step of the procedure. Synchronizing/strategizing can also occur in response to more subtle cues, such as when scrub nurses anticipated the need for an instrument before it was requested because they overheard dialog not necessarily addressed to them.


Sharing local knowledge included providers’ understanding of the patient condition, the procedure, each other’s capabilities, and equipment. The degree to which knowledge is shared among the providers influenced their dialog with each other. An example of shared knowledge occurred during a cardiac procedure, during which the anesthetist described the care as regimented, consisting of one way of doing things. Since all participants knew the cardiac regimen, only subtle differences in decision-making needed to be discussed, in contrast to the more involved dialog required during general cases. Provider experience plays an obvious role in shared local knowledge. Providers with greater levels of experience can help maintain a smooth work flow for the entire team.


Prioritizing contingencies takes place during times of urgency or emergency when the team must respond to unpredicted situations. Dialog among providers is needed to communicate changes in the plan of care. An example of prioritizing contingencies occurred when the anesthetist communicated information of the patient’s deteriorating condition. The surgeon confirmed understanding of this dialog and decided to abandon his original surgical plan and instead focus on patient stabilization and incision closure.


Conclusion Decision making in the OR can benefit from communication strategies to improve situational awareness. Dialog among providers that is distributed, that is to say explicitly and openly spoken, helps establish a shared perspective of the big picture that promotes cohesive patient care.



It is tempting to view OR events in separate silos based on provider roles. Anesthesia decisions seem separate from surgical decisions which are separate from decisions made by surgical nurses. Providers who operate in silos may block out important information that should influence their decision making. Maintaining a sense of situational awareness promotes cohesive patient centered care delivered by coordinated teams of providers. Building a cohesive shared awareness requires effective communication through explicit dialog among providers.


Communication may seem like something that is just “common sense,” making it an unusual subject for a research study. The ability to communicate well may seem to just come naturally. But the reality is that some of us do struggle to be understood and any of us can be less understood than we realize. Communication is a skill that can benefit from focused practice similar to efforts to improve IV starts or peripheral block placement. Studies such as this provide us with increased understanding about the individual components of OR dialog. We can use this increased understanding to improve our own communication skills as well as mentor the skills of nurse anesthesia students and new graduates.


This study gives us information about how we talk in the OR and what we talk about. While Australian ORs do not have providers equivalent to CRNAs, the fundamental elements of OR communication are similar. Our scientific understanding of OR teamwork is still in the early stages, and efforts have focused mostly on crisis management and error prevention. A nice feature of this study is that data were collected during routine procedures, adding to our understanding of ordinary dialog and decision-making.


The clinical examples included in this article illustrate different ways we talk in the OR. Providers sometimes talk out loud, in a sort of free association way, to share their thinking with others. Sometimes providers overhear dialog not necessarily addressed to them, but which nonetheless helps them in their own decision-making. Other dialog is specifically aimed to deliver a particular message and is completed by the other provider explicitly acknowledging receipt and understanding. Each one of these types of dialog is a specific strategy we can make use of in our communication and decision-making.


This article identified three types of things we talk about in the OR. We use dialog as a guide to synchronizing/strategizing our decisions. We use dialog to share knowledge among providers. Our dialog also helps us prioritize contingencies, especially when events occur differently than we planned. CRNAs are skilled at multitasking and are used to doing all of these things simultaneously. As CRNAs, our dialog to build a shared situational awareness will not only increase the quality of our own decision-making, it will also help other providers improve theirs.

Cassy Taylor, DNP, DMP, CRNA
