The Problem With Chemical Indicators

Chemical indicators are used in many blood banks as the sole means of recording blood product temperatures when blood is returned to storage. Several problems affect chemical indicators and make it difficult for technicians to determine whether a blood product should be discarded.


False Positives

Chemical indicators often give vague or incorrect indications that a blood product has exceeded its temperature threshold. This is usually caused by handling, for example when a nurse or technician accidentally touches the chemical indicator and causes it to trigger.

Difficult Workflow

Chemical indicators often have very strict protocols for preconditioning and placement. Many brands require that the chemical indicators be refrigerated for 24 hours before application and that a cold pack be used to keep the indicator cold until the moment it is applied.

Multiple Temperature Thresholds

Besides being unable to indicate the magnitude of a temperature violation, chemical indicators also cannot show whether a blood product has accidentally been frozen, as illustrated in the sketch below.
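
To make the contrast concrete, here is a minimal sketch of the kind of multi-threshold check that a continuous core-temperature monitor could perform. The acceptable range, the freezing limit, and the evaluation logic are illustrative assumptions, not a specification of any particular device or blood bank policy.

```python
# Illustrative sketch: evaluating a logged core-temperature trace against
# both an upper excursion limit and a freezing limit. The thresholds below
# are assumed example values, not a statement of any specific standard.

UPPER_LIMIT_C = 10.0   # assumed upper acceptable limit for a returned unit
FREEZE_LIMIT_C = 0.0   # assumed lower limit indicating possible freezing

def evaluate_trace(core_temps_c):
    """Classify a list of core-temperature readings (deg C).

    Returns a dict describing whether either threshold was crossed and by
    how much; a single-threshold chemical indicator cannot provide this.
    """
    max_temp = max(core_temps_c)
    min_temp = min(core_temps_c)
    return {
        "exceeded_upper_limit": max_temp > UPPER_LIMIT_C,
        "upper_excursion_c": max(0.0, max_temp - UPPER_LIMIT_C),
        "possible_freeze": min_temp <= FREEZE_LIMIT_C,
        "lowest_temp_c": min_temp,
    }

if __name__ == "__main__":
    # Example: a unit that warmed to 11.5 C but was never frozen.
    print(evaluate_trace([4.0, 6.2, 9.8, 11.5, 8.0]))
```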

Surface Temperature

Studies have shown that a blood product’s surface temperature can be several degrees higher than its true core temperature¹. Although this may seem like a small difference, the temperature delta can result in a large number of “false-positive” wastage events. By monitoring core temperature, a blood product can be exposed to room temperature almost 4x longer than if surface temperature is used to trigger wastage. Given that each unit of wasted blood may cost over $300², switching to core temperature monitoring could represent a significant cost savings opportunity.
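
As a rough, back-of-the-envelope illustration of the cost argument above, the following sketch estimates the annual savings if core-temperature monitoring eliminates false-positive wastage events. The per-unit cost comes from the cited figure²; the number of returned units and the false-positive rate are hypothetical example inputs chosen only to show the arithmetic.

```python
# Back-of-the-envelope estimate of savings from core-temperature monitoring.
# COST_PER_UNIT_USD reflects the article's cited figure; the other inputs
# are hypothetical examples used only to illustrate the calculation.

COST_PER_UNIT_USD = 300          # cited cost of a wasted unit of blood
RETURNED_UNITS_PER_YEAR = 2_000  # assumed: units returned to the blood bank
FALSE_POSITIVE_RATE = 0.05       # assumed: share wasted on surface-temperature
                                 # readings that core temperature would pass

false_positive_units = RETURNED_UNITS_PER_YEAR * FALSE_POSITIVE_RATE
potential_savings = false_positive_units * COST_PER_UNIT_USD

print(f"Units wasted on false positives: {false_positive_units:.0f}")
print(f"Potential annual savings: ${potential_savings:,.0f}")
```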