Adverse Sedation Events and Impact on Provider Specialty in Pediatrics

Recently an article was published in Pediatrics titled "Impact of Provider Specialty on Pediatric Procedural Sedation Complication Rates," by Kevin G. Couloures, Michael Beach, Joseph P. Cravero, Kimberly K. Monroe, and James H. Hertzog (2011, vol. 127, pages e1154-e1160). The conclusion presented in the abstract of the article is:

“In our sedation services consortium, pediatric procedural sedation performed outside the operating room is unlikely to yield serious adverse outcomes. Within this framework, no differences were evident in either the adjusted or unadjusted rates of major complications among different pediatric specialists.”

The authors of this article mentioned another study published in Pediatrics, which identified, over a 27-year period, 60 cases in which death or severe neurological injury occurred in children from 0.08 to 20 years of age. That study, however, presented a different conclusion in its abstract ("Adverse Sedation Events in Pediatrics: A Critical Incident Analysis of Contributing Factors," by Charles J. Cote, Daniel A. Notterman, Helen W. Karl, Joseph A. Weinberg, and Carolyn McCloskey; 2000, vol. 105, pages 805-814):

“There were differences in outcomes for venue: adverse outcomes (permanent neurologic injury or death) occurred more frequently in a nonhospital-based facility, whereas successful outcomes (prolonged hospitalization or no harm) occurred more frequently in a hospital-based setting.”

Readers not very familiar with medical journal articles should understand that the abstract is most often all that is read, and that the lay public frequently does not have easy access to the full article.

If one takes a close look at the study design of both studies, one begins to see many differences which call into question why the 2011 study was even published.

In the 2011 study, pediatric specialists at 38 different sites submitted 133,941 sedation records for patients up to their 19th birthday (of which 131,751 were used in the analysis) during the study period of July 1, 2004 through December 31, 2008. No deaths were recorded in any of the records.

If one takes a look at the dental deaths presented on my website, it appears that at least 10 children died in the U.S. during this time frame while visiting the dentist. This should call into question whether the results of the 2011 Pediatrics study are clinically meaningful and whether the study suffers from poor study design.

The 2011 study does address some of its limitations, including: 1) an inability to ensure that the definitions of adverse events had the same meaning for all providers; 2) the possibility that the level of sedation a child received is related to the major complications that occurred; 3) potential selection or exclusion bias, for example if not all potential cases from a site were made known to the study reviewers; and 4) bias in site selection, such as participating sites being highly motivated to have systems and training in place.

Again, this 2011 study reports that no deaths occurred among its pediatric procedural sedation cases, even though my own analysis, looking strictly at dental settings, found at least 10 children who died in the U.S. during this time period.

Looking at the 2000 study, we see clear differences in study design. The 2000 study used three different methods to obtain its study population: 1) adverse drug reports submitted to the Food and Drug Administration (FDA) from 1969 through March 20, 1996 for patients up to their 20th birthday; 2) pediatric adverse drug events reported to the U.S. Pharmacopeia; and 3) a survey mailed to 310 pediatric anesthesiologists, 470 pediatric intensivists, and 575 pediatric emergency medicine specialists. From these the reviewers arrived at 118 pediatric adverse sedation events and included 95 of them in their analysis, comprising the 60 deaths indicated earlier.

Of specific interest in this 2000 study is that 13 deaths occurred among 43 adverse events (30.2%) at hospital-based facilities, while 23 deaths occurred among 28 adverse events (82.1%) at nonhospital-based facilities. This is a statistically significant result, as noted in the 2000 study, with p < .001. To confirm it, one could, for example, run a two-proportion test in a statistical program such as MINITAB or Stata. From MINITAB 16 I get a 95% confidence interval (CI) for the difference of (-0.716505, -0.321701), which, along with the p value, indicates clear statistical significance.
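The two-proportion test behind this interval can also be reproduced without MINITAB. Below is a short Python sketch (standard library only; the helper function is my own) that computes the unpooled two-proportion z interval and p value, which is what MINITAB's default 2 Proportions procedure reports:

```python
from math import sqrt, erf

def two_proportion_test(x1, n1, x2, n2, z_crit=1.959964):
    """Two-proportion z test using the unpooled (separate) standard error,
    which is what MINITAB's default 2 Proportions procedure reports."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = diff / se
    # two-sided p value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return diff - z_crit * se, diff + z_crit * se, p_value

# 2000 study: deaths among adverse events by venue
# hospital-based: 13 of 43; nonhospital-based: 23 of 28
lo, hi, p = two_proportion_test(13, 43, 23, 28)
print(f"95% CI for difference: ({lo:.6f}, {hi:.6f}); p = {p:.2e}")
```

The interval agrees with the MINITAB 16 output to six decimal places, and the p value comes out far below .001.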

On the other hand, in the 2011 study no statistically significant results can be found among the 95% confidence intervals for the number of complications per 10,000 sedations or the odds ratios (ORs) among anesthesiologists, emergency physicians, intensivists, pediatricians, and others.

The 2011 study provides a table similar to one in the 2000 study, showing a breakdown of complications by type. No percentages are given, and the authors state:

“No statistical difference among providers was present.”

Upon close inspection this is actually not true. The authors are clearly working at the 95% confidence level, so the threshold for a statistically significant p value is 0.05. We see that 4 emergency anesthesia consultations occurred among 50 major complications for intensivists, while 0 emergency anesthesia consultations occurred among 14 major complications for anesthesiologists. Again using the two-proportion test in MINITAB 16, I get a 95% CI for the difference of (0.00480274, 0.155197) with a p value of 0.037, which is less than 0.05, indicating statistical significance at the 95% level. Although this result may not be clinically meaningful, the authors should at least present the statistics and not lie.
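This calculation can likewise be checked in a few lines of Python (standard library only; the helper function is my own). It uses the unpooled standard error, matching MINITAB's default, and reproduces the quoted interval and p value. One caveat worth noting: with a zero count in the anesthesiologist group, the normal approximation underlying this test is fragile, which is another reason to interpret the difference cautiously rather than to suppress it:

```python
from math import sqrt, erf

def two_proportion_test(x1, n1, x2, n2, z_crit=1.959964):
    """Two-proportion z test with unpooled standard error (MINITAB's default)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = diff / se
    # two-sided p value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return diff - z_crit * se, diff + z_crit * se, p_value

# 2011 study: emergency anesthesia consultations among major complications
# intensivists: 4 of 50; anesthesiologists: 0 of 14
lo, hi, p = two_proportion_test(4, 50, 0, 14)
print(f"95% CI for difference: ({lo:.6f}, {hi:.6f}); p = {p:.3f}")
```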

Looking at the study design and statistical analysis of the 2011 study compared to the 2000 study, one can clearly see that the 2000 study should hold much more weight as a valid and well-thought-out study. Since the 2000 study was published roughly 11 years earlier, I seriously question why the 2011 authors did not design a better study when a precedent had clearly been set.

This calls into question the sponsor of the 2011 study and what they are really trying to show.
