ISSN: 1938-7172
Issue 12.16

Michael A. Fiedler, PhD, CRNA

Contributing Editors:
Mary A Golinski, PhD, CRNA
Dennis Spence, PhD, CRNA

Assistant Editor:
Heather Whitten, MEd.

A Publication of Lifelong Learning, LLC © Copyright 2019

New health information becomes available constantly. While we strive to provide accurate information, factual and typographical errors may occur. The authors, editors, publisher, and Lifelong Learning, LLC are not responsible for any errors or omissions in the information presented. We endeavor to provide accurate information helpful in your clinical practice. Remember, though, that there is a great deal of information available and we are presenting only some of it here. Also, the comments of contributors represent their personal views, colored by their knowledge, understanding, experience, and judgment, which may differ from yours. Their comments are written without knowing the details of the clinical situation in which you may apply the information. In the end, your clinical decisions should be based upon your best judgment for each specific patient situation. We do not accept responsibility for clinical decisions or outcomes.

Table of Contents

Bag-mask ventilation during tracheal intubation of critically ill adults

N Engl J Med 2019; Epub ahead of print

DOI: 10.1056/NEJMoa1812405

Casey JD, Janz DR, Russell DW, Vonderhaar DJ, Joffe AM, Dischert KM, Brown RM, Zouk AN, Gulati S, Heideman BE, Lester MG, Toporek AH, Bentov I, Self WH, Rice TW, Semler MW; PreVent Investigators and the Pragmatic Critical Care Research Group



Purpose   The purpose of this study was to determine if bag-mask ventilation (BMV) between induction and laryngoscopy would prevent hypoxemia, compared to no ventilation, without increasing aspiration risk in critically ill adults.


Background   Hypoxemia during intubation of critically ill adult ICU patients increases the risk of cardiac arrest and death. Current guidelines recommend a rapid sequence intubation (RSI) be used when intubating critically ill adults in the ICU unless ventilation is needed to treat hypoxemia. However, there is controversy in the literature as to whether or not a strict RSI should be used in this population. Some guidelines recommend providing bag-mask ventilation between induction and laryngoscopy. Therefore, the investigators conducted the Preventing Hypoxemia with Manual Ventilation during Endotracheal Intubation trial (PreVent) to test the hypothesis that bag-mask ventilation between induction and laryngoscopy would significantly increase oxygen saturation between induction and two minutes post intubation.


Methodology   This was a multi-center, unblinded, randomized controlled trial conducted in the intensive care units of seven academic medical centers. Critically ill adult patients requiring urgent tracheal intubation were included. Patients were excluded if they required intubation prior to randomization or if the clinician believed the patient's condition dictated the choice of whether or not to ventilate. Patients were randomized to a Ventilation or No Ventilation group. In the Ventilation group, clinicians provided bag-mask ventilation between induction and laryngoscopy with at least 15 L/min oxygen flow to the self-inflating bag. In the No Ventilation group, ventilation was not permitted except after a failed intubation, for treatment of hypoxemia, or if the clinician felt ventilation was necessary. All patients were preoxygenated prior to intubation. Induction agents, use of cricoid pressure, and intubation equipment were at the discretion of the clinician performing the intubation.


The primary outcome was the lowest oxygen saturation observed during the interval between induction and two minutes after intubation. The secondary outcome was the incidence of severe hypoxemia, defined as an oxygen saturation <80%. Other data collected included the incidence of oxygen saturation <90% and <70%; the median decrease in oxygen saturation; and operator-reported aspiration and new opacity on chest x-ray. Power and statistical analyses were appropriate.


Result   There were N = 199 patients in the Ventilation group and N = 202 in the No Ventilation group. No significant differences were found in patient demographics, active medical conditions, reason for intubation, difficult airway characteristics, aspiration risk factors, use of bilevel positive airway pressure, highest fraction of inspired oxygen in the prior six hours, or median pre-intubation oxygen saturation. Median age was 60 years, 56% were male, and BMI was approximately 27. The most common reason for intubation was hypoxic respiratory failure (58%). Patients in the Ventilation group had a higher rate of gastrointestinal bleeding (P = 0.02) and a lower rate of pneumonia (P = 0.04).


Mean oxygen saturation at the time of intubation was 99% in both groups. The percentage with an oxygen saturation <92% at intubation was 14% in the Ventilation group vs. 9% in the No Ventilation group. In the No Ventilation group, 22% required bag-mask ventilation for hypoxemia prior to intubation. Median time from induction to intubation was similar in both groups. A video laryngoscope was used in 36% of the Ventilation group and 32% of the No Ventilation group (P = NS). First-attempt intubation success was 84% in the Ventilation group and 80% in the No Ventilation group (P = NS).


The median lowest oxygen saturation was significantly higher in the Ventilation group (96%) than in the No Ventilation group (93%) (P = 0.01; Table 1). Severe hypoxemia was significantly less common in the Ventilation group (11%) than in the No Ventilation group (23%) (relative risk 0.48, P < 0.05). Patients with a lower oxygen saturation benefited the most from ventilation prior to intubation (P = 0.01). Clinician-reported aspiration was similar between groups, as was the incidence of new opacity on chest x-ray (Table 1).
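As a quick arithmetic check, the reported relative risk can be recovered from the published group percentages alone (a sketch using summary figures, not patient-level data):

```python
# Relative risk of severe hypoxemia, computed from the published
# group proportions (Ventilation 11% vs. No Ventilation 23%).
ventilation_rate = 0.11
no_ventilation_rate = 0.23

relative_risk = ventilation_rate / no_ventilation_rate
print(f"relative risk = {relative_risk:.2f}")  # agrees with the reported 0.48
```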



Conclusion   Critically ill adults who received bag-mask ventilation prior to intubation had higher oxygen saturations and lower rates of severe hypoxemia compared to those receiving no ventilation.




These results probably seem obvious to most of us; if you ventilate the patient after induction, they will have higher oxygen saturations. In the critical care world, many patients needing urgent intubation would be considered full stomachs, so they are at much greater risk for aspiration. However, as the authors pointed out, there is controversy in the critical care literature as to whether or not an RSI should be performed. This study provides some reassuring evidence to suggest that ventilation after induction in critically ill patients decreases hypoxemia and may not increase the risk of aspiration.


I say, 'may not increase the risk of aspiration' because the study was not powered to examine the rate of aspiration. That would have required a sample size close to 4,000. However, there was a trend toward less aspiration and better hemodynamics when ventilation was provided prior to laryngoscopy.
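To see why an aspiration endpoint would demand a trial of roughly that size, here is a sketch of the standard two-proportion sample-size formula; the aspiration rates below are illustrative assumptions, not figures from the study:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation sample size per group to detect a difference
    between two proportions (two-sided alpha = 0.05, power = 80%)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical aspiration rates: 2.5% without ventilation vs. 1.25% with.
n = n_per_group(0.025, 0.0125)
print(n, 2 * n)  # about 1,850 per group, roughly 3,700 total
```

Halving the assumed event rates roughly doubles the required sample, which is why rare-event outcomes such as aspiration quickly push trials into the thousands of patients.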


So, what should we take away from this study? First, I think we should make sure we have all our airway adjuncts and backup equipment readily available. Make sure the patient is positioned in an optimal sniffing position. My preference would be to use a video laryngoscope on my first attempt and have a bougie readily available if I do direct laryngoscopy. Make sure you have a free-flowing intravenous line, and suction and vasopressors at the ready. If a nurse is administering your drugs, speak clearly and make sure they repeat back the drug and dose prior to administration. And finally, try to have an extra pair of experienced hands around to assist in case you get into trouble.


Dennis Spence, PhD, CRNA

This article is available free full text at the following URL:


The views expressed in this article are those of the author and do not reflect official policy or position of the Department of the Navy, the Department of Defense, the Uniformed Services University of the Health Sciences, or the United States Government.


© Copyright 2019 Anesthesia Abstracts · Volume 12 Number 16, February 25, 2019

Regional Anesthesia
Pectoral nerve blocks to improve analgesia after breast cancer surgery: a prospective, randomized and controlled trial

J Clin Anesth 2018;45:12-17

DOI: 10.1016/j.jclinane.2017.11.027

Neethu M, Pandey RK, Sharma A, Darlong V, Punj J, Sinha R, Singh PM, Hamshi N, Garg R, Chandralekha C, Srivastava A



Purpose   The purpose of this study was to examine the efficacy of ultrasound-guided pectoral nerve blocks I and II in patients undergoing breast cancer surgery.


Background   Effective control of postoperative pain after breast cancer surgery is essential for rapid recovery. Pectoral nerve blocks I and II (PECS I and PECS II) have been purported to provide effective analgesia for breast surgery. A PECS I block anesthetizes the medial and lateral pectoral nerves, and a PECS II block the lateral branches of the intercostal nerves. Both blocks are easily performed under ultrasound guidance and provide a sensory block to a majority of the chest wall and breast. However, further research is needed to confirm the efficacy of PECS I and PECS II blocks for breast cancer surgery. The investigators hypothesized that PECS I and II blocks would decrease postoperative fentanyl requirements and pain scores at rest and with movement, prolong the time to first analgesic request, and improve shoulder range of motion during the first 24 hours after breast cancer surgery.


Methodology   This was a prospective randomized controlled trial conducted at a single center on 60 ASA I and II women undergoing modified radical mastectomy with or without sentinel node biopsy or axillary node dissection. Patients were randomized to either a PECS Block Group or a Control Group that did not receive the PECS blocks. The PECS Block Group received a PECS I block with 10 mL of 0.25% ropivacaine and a PECS II block with 20 mL of 0.25% ropivacaine, both under ultrasound guidance. A standard general anesthetic and postoperative analgesic plan was administered to all patients.


The primary outcome was the cumulative dose of fentanyl administered during the first 24 hours, both intraoperative and postoperative. Secondary outcomes included:

  • time to first analgesic request
  • pain scores at movement & rest
  • limitation of shoulder movement
  • incidence of PONV
  • patient satisfaction

Sample size and statistical analysis were appropriate, with significance set at P < 0.05.


Result   No significant differences in demographics were reported. Total fentanyl requirement in the first 24 hours averaged 171 µg less in the PECS Block Group than in the Control Group (P < 0.001). The mean time to first analgesic request was 34 minutes longer in the PECS Block Group (P < 0.001). Pain scores at rest and with movement were lower in the PECS Block Group in the immediate postoperative period and at 1, 2, and 4 hours compared to the Control Group. No significant differences in pain scores were reported at 6, 12, and 24 hours. Shoulder movement was significantly less limited in the PECS Block Group at 4 and 5 hours after surgery compared to the Control Group (P < 0.001), but similar at 6 and 24 hours. No significant difference was found in PONV (73% vs. 67%). Satisfaction with postoperative analgesia was higher in the PECS Block Group: 43% reported being very satisfied compared to only 3% in the Control Group (P < 0.001).


Conclusion   Ultrasound-guided PECS I and II blocks provide effective analgesia in patients undergoing breast cancer surgery.




The PECS I and II blocks are interfascial blocks purported to provide analgesia to the breast and anterior chest wall by blockade of the pectoral and intercostal nerves. They are reported to be safer alternatives to thoracic epidural, paravertebral, intercostal, and intrapleural blocks. In this study the PECS blocks with 0.25% ropivacaine reduced analgesic requirements and pain scores for the first 4 hours after surgery. Overall, I felt like this was a well-designed study. It could have been strengthened had a sham block been performed; however, this is sometimes difficult to get approved by an institutional review board.

In my cursory search of the literature, it appears there are not many randomized trials on the efficacy of PECS I and II blocks for breast surgery; many appear to have been conducted outside the United States. One interesting study I found was from India, in which a PECS II block with ropivacaine delivered by the surgeon under direct vision significantly reduced analgesic requirements and pain scores compared to a saline injection in patients undergoing modified radical mastectomy.1 At my facility, our surgeons have asked us to start placing preoperative PECS blocks for radical mastectomy patients with a mixture of liposomal bupivacaine and 0.25% bupivacaine, combined with a multimodal analgesic pathway. Anecdotally, they have reported good pain relief and decreased opioid requirements. However, this requires a surgeon who injects the local anesthetic into the proper fascial planes. In experienced hands, either direct-vision or ultrasound-guided PECS blocks may provide effective postoperative analgesia.


Dennis Spence, PhD, CRNA

1. Thomas M, Philip FA, Mathew AP, Jagathnath K. Intraoperative pectoral nerve block (Pec) for breast cancer surgery: A randomized controlled trial. J Anaesthesiol Clin Pharmacol 2018;34:318–323.

Excellent reviews of PECS I and II ultrasound-guided blocks can be found at the following websites:


The views expressed in this article are those of the author and do not reflect official policy or position of the Department of the Navy, the Department of Defense, the Uniformed Services University of the Health Sciences, or the United States Government.



Improving mortality in trauma laparotomy through the evolution of damage control resuscitation: Analysis of 1,030 consecutive trauma laparotomies

J Trauma Acute Care Surg 2017;82:328-333

DOI: 10.1097/TA.0000000000001273

Joseph B, Azim A, Zangbar B, Bauman Z, O'Keeffe T, et al



Purpose   The purpose of this study was to determine whether Damage Control Resuscitation implemented perioperatively for trauma laparotomy resulted in improved outcomes. The authors hypothesized that Damage Control Resuscitation measures, such as minimizing crystalloid administration and increasing early blood product administration, would prevent the lethal triad of trauma: acidosis, coagulopathy, and hypothermia.


Background   The goal of an abbreviated laparotomy, also called a Damage Control Laparotomy, is to quickly treat severe injuries in a patient needing emergency surgery but in whom a prolonged procedure would worsen their condition, stabilizing the patient until they are in better condition for a more extensive procedure. This approach was adopted years ago, when crystalloid was the main fluid used for trauma resuscitation. Current evidence shows that crystalloid resuscitation is linked to generalized tissue and organ system inflammation and promotes dilutional coagulopathy. As a result, extensive crystalloid resuscitation contributes to acidosis, coagulopathy, and hypothermia in the trauma patient.


It has been theorized that coagulopathies begin to develop very early after a traumatic injury. With traumatic hemorrhage, volume replacement with blood products can prevent many common complications. Excessive or even moderate crystalloid use during an initial trauma surgery and subsequent surgeries is also associated with postoperative complications.


Methodology   This was a retrospective medical record analysis. All trauma patients who underwent laparotomy at a Level 1 trauma center were included. The only exclusion criterion was death within 30 minutes of the start of the operation. The institution implemented a resuscitation protocol based on the principles of Damage Control Resuscitation (DCR), which minimizes crystalloids and increases blood products in a ratio-based transfusion process. A massive transfusion protocol was begun at the same time. Damage Control Resuscitation efforts began in the emergency department and continued through surgery/anesthesia to the intensive care unit. Three groups of patients were established: one before DCR was instituted, one early in the use of DCR (a transitional and learning period), and one after DCR was well established:

  • pre DCR group
  • transitional DCR group
  • established DCR group

The primary outcome measure was mortality. Secondary outcome measures included complications, hospital and ICU length of stay, ventilator days, and hospital cost.


The following data points were collected and compared between groups:

  • Patient demographics
  • Glasgow coma scores
  • Volume of crystalloid & blood product transfused during the first 24 hours
  • Laboratory values
  • Surgical intervention
  • Abdominal injury grades
  • Complications
  • Length of stay
  • In hospital mortality

Comparisons were made between the three groups to identify significant differences in terms of the primary and secondary outcome measures.


Result   The total sample size was 1,030 patients; N = 265 in the pre DCR group, N = 261 in the transitional DCR group, and N = 504 in the established DCR group. There were no differences in demographics, admission vital signs, admission laboratory values, or injury severity between the three groups.


Statistically significant differences were discovered in the following areas:

Initial 24-hour crystalloid (P = 0.001)

  • pre DCR 9.1 L
  • transitional DCR 5.8 L
  • established DCR 5.1 L

Initial 24-hour blood products (P = 0.03)

  • pre DCR 0.85 L
  • transitional DCR 1.07 L
  • established DCR 1.9 L
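One way to see the practice shift is the ratio of crystalloid to blood product volume in the first 24 hours, computed from the group means above (a simple derived illustration, not a statistic the authors reported):

```python
# Crystalloid-to-blood-product volume ratio in the first 24 hours,
# computed from the reported group means (liters).
groups = {
    "pre DCR": (9.1, 0.85),
    "transitional DCR": (5.8, 1.07),
    "established DCR": (5.1, 1.9),
}
for name, (crystalloid_l, blood_l) in groups.items():
    ratio = crystalloid_l / blood_l
    print(f"{name}: {ratio:.1f} L crystalloid per L blood product")
```

The ratio falls roughly four-fold, from about 10.7:1 before DCR to about 2.7:1 once DCR was established.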

There was a significant reduction in overall mortality rates, complications, hospital and ICU length of stay, and hospital costs between the pre Damage Control Resuscitation group and the established Damage Control Resuscitation group. The volume of infused crystalloid was an independent predictor of mortality in the pre DCR and transitional DCR groups (P = 0.01 for both). The volume of crystalloid was also an independent predictor of complications and hospital length of stay in these groups, but not in the established DCR group. Not surprisingly, coagulopathy and acidosis were independently associated with mortality in all three groups, as were age, Glasgow Coma Scale score, Injury Severity Score, and total volume of blood product given.



Conclusion   Minimizing crystalloids (normal saline or lactated Ringer's) while replacing blood loss and coagulation factors with blood products during trauma resuscitation and laparotomy was associated with significantly lower mortality rates, fewer overall complications, fewer infections, shorter ICU and hospital lengths of stay, and lower hospital costs.




Two years ago, I became certified in advanced trauma life support even though I have practiced most of my professional career in Level 1 inner-city trauma hospitals. I wanted to be sure I was staying current and practicing according to the most recent evidence, evidence that is continually evolving. I chose to write this abstract because I believe we still have some work to do in terms of trauma resuscitation.


My educated guess is we are still relying too much on crystalloids in acute hemorrhagic trauma situations. We may be knowledgeable about the lethal triad of trauma, but the reality is, we often receive patients only after crystalloid resuscitation has begun either in the field or the ER. And some of what we are taught almost seems to contradict the teachings of Damage Control Resuscitation. Over the past decade we have moved practice towards allowing a greater drop in hemoglobin because we are keenly aware of transfusion-related adverse events and the poor outcomes associated with liberal transfusion practices. But it is different for trauma resuscitation. We know more about the pathophysiology of trauma and the lethal triad. The authors also explain, and I concur, that trauma life support guidelines include giving 1-2 L of crystalloid for intravascular expansion, stabilizing vascular volume, and replacing with fluid what is lost into the interstitial/intracellular spaces.


This study supports allowing permissive hypotension, replacing blood with blood, and transfusing RBCs, FFP, and platelets in balanced ratios. It demonstrates that too much crystalloid is associated with higher mortality, more complications, longer hospital and ICU lengths of stay, and higher cost of care. We may not be able to control what is done in the field or in the ER, but we can collaborate with other teams to promote Damage Control Resuscitation starting in the field, through the ER, and into the OR. Point-of-care testing is commonplace in trauma anesthesia. We are staying abreast of acid-base balance, hemoglobin and hematocrit values, and coagulation states. Thromboelastography (TEG) is routinely used to assess coagulation status. If excessive crystalloid is given routinely during anesthesia for trauma surgery, it should cease, and we should be involved in assessing the outcomes of our care. We should share what we find when we follow up on our trauma patients. Have the trauma team participate in grand rounds. Only then can we modify practice across disciplines based on objective data.


Mary A Golinski, PhD, CRNA

Additional reading on Damage Control Resuscitation:


Voiglio EJ, Dubuisson V, Massalou D, Baudoin Y, Caillot JL, Létoublon C, Arvieux C. Abbreviated laparotomy or damage control laparotomy: Why, when and how to do it? J Visc Surg. 2016;153:13-24.


Lamb CM, MacGoey P, Navarro AP, Brooks AJ. Damage control surgery in the era of damage control resuscitation. Br J Anaesth. 2014;113:242-9.

