14 Heiderich TM, Leslie AT, Guinsburg R. Neonatal procedural pain can be assessed by computer software that has good sensitivity and specificity to detect facial movements. Acta Paediatr. 2015;104:e63–9.
|
Developed in the Delphi environment, based on image recognition of pain-related facial actions. /Scale: NFCS |
Own Base - UNIFESP University Hospital (30 newborns between 35 and 41 weeks of gestational age). |
Bulging brow; narrowing of the lid slit; deepening of the nasolabial furrow; open lips; mouth stretching. |
The software exhibited 85% sensitivity and 100% specificity in detecting neutral facial expressions in the resting state, and 100% sensitivity and specificity in detecting procedural pain in neonates. |
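As an illustration only (not code or data from the study), sensitivity and specificity figures like those reported above are derived from confusion-matrix counts; the counts below are hypothetical:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity

# Hypothetical counts chosen to match the reported 85% / 100% figures.
sens, spec = sensitivity_specificity(tp=17, fn=3, tn=20, fp=0)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```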
23 Carlini LP, Ferreira LA, Coutrin GAS, Varoto VV, Heiderich TM, Balda RCX, et al. A convolutional neural network-based mobile application to bedside neonatal pain assessment. In: 2021 34th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), IEEE; 2021:394–401. [Cited 2023 Mar 8]. Available from: https://ieeexplore.ieee.org/document/9643144/.
|
A computational model with face detection, data augmentation, and a classification model using transfer learning: a pre-trained CNN architecture extended with fully connected layers trained specifically on neonatal face images. The application was developed for the Android operating system using the Android Studio IDE.
/Scale: NFCS
|
UNIFESP University Hospital (30 newborns between 35 and 41 weeks of gestational age) and Infant COPE (26 Caucasian neonates). |
Not applicable. |
This model achieved 93.07% accuracy, 0.9431 F1 Score, and 0.9254 AUC. |
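For illustration only (not the study's evaluation code), an F1 score like the 0.9431 reported above combines precision and recall computed from confusion-matrix counts; the counts here are hypothetical:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)   # fraction of predicted-pain frames that were pain
    recall = tp / (tp + fn)      # fraction of true pain frames that were detected
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: precision = recall = 0.9, so F1 = 0.9.
print(f1_score(tp=90, fp=10, fn=10))
```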
30 Grifantini K. Detecting faces, saving lives. IEEE Pulse. 2020;11:2–7. [Cited 2022 Aug 14]. Available from: https://ieeexplore.ieee.org/document/9089065/.
|
Report
/Scale: Not applicable.
|
Not applicable. |
Not applicable. |
Not applicable. |
20 Zamzmi G, Paul R, Salekin MS, Goldgof D, Kasturi R, Ho T, et al. Convolutional neural networks for neonatal pain assessment. IEEE Trans Biom Behav Identity Sci. 2019;1:192–200.
|
Compares a novel Neonatal Convolutional Neural Network (N-CNN) with other architectures (ResNet50 and VGG-16) for pain assessment.
/Scale: NIPS
|
Infant COPE (26 Caucasian neonates) and NPAD (31 neonates between 32 and 40 weeks of gestational age). |
Not applicable. |
Assessing neonatal pain using LBP features achieved 86.8% average accuracy; using HOG features with support vector machines achieved 81.29% average accuracy. The proposed N-CNN, which extracts features directly from the images, achieved state-of-the-art results and outperformed ResNet, VGG-16, and the handcrafted descriptors. |
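As a minimal sketch of the LBP descriptor named in the results above (not the study's implementation), the classic 8-neighbour Local Binary Pattern encodes each pixel by thresholding its neighbours against the centre value:

```python
def lbp_code(patch):
    """8-neighbour Local Binary Pattern code for the centre pixel of a 3x3
    patch (a list of three 3-element rows of grey levels)."""
    center = patch[1][1]
    # Clockwise neighbour order starting at the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:       # neighbour at least as bright as centre
            code |= 1 << bit      # contributes one bit to the 8-bit code
    return code

# Bright top row over a dark bottom row: only the first three bits set.
print(lbp_code([[5, 5, 5], [1, 3, 1], [0, 0, 0]]))  # 7
```

Histograms of these codes over face regions form the handcrafted feature vector that a classifier then scores.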
26 Zamzmi G, Pai CY, Goldgof D, Kasturi R, Ashmeade T, Sun Y. A comprehensive and context-sensitive neonatal pain assessment using computer vision. IEEE Trans Affect Comput. 2022;13:28–45.
|
Existing static methods have been divided into two categories: handcrafted-representation-based methods and deep-representation-based methods.
/Scale: NIPS
|
Infant COPE (26 Caucasian neonates). |
Not applicable. |
The system achieved 95.56% accuracy using decision fusion of different pain responses that were recorded in a challenging clinical environment. |
31 Egede J, Valstar M, Torres MT, Sharkey D. Automatic neonatal pain estimation: an acute pain in neonates database. In: 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), IEEE; 2019:1–7. [Cited 2023 Mar 8]. Available from: https://ieeexplore.ieee.org/document/8925480/.
|
Uses handcrafted algorithms and deep-learned features.
/Scale: NIPS and NFCS
|
Own Base - APN-db (213 newborns between 26 and 41 weeks of gestational age). |
Brow bulge, eye squeeze, nasolabial furrow, open lips, stretch mouth (vertical), stretch mouth (horizontal), lip purse, taut tongue, chin quiver. |
The system performs well with an RMSE of 1.94 compared to human error of 1.65 on the same dataset, demonstrating its potential application to newborn health care. |
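For illustration only (not the study's evaluation code), the RMSE used to compare the system's pain-score predictions against human ratings is computed as follows; the score lists below are hypothetical:

```python
import math

def rmse(predicted, reference):
    """Root-mean-square error between predicted and reference pain scores."""
    squared_errors = [(p - r) ** 2 for p, r in zip(predicted, reference)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical predicted vs. reference scale scores for three infants.
print(round(rmse([1, 2, 3], [1, 2, 5]), 3))
```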
32 Martinez-Balleste A, Casanovas-Marsal JO, Solanas A, Casino F, Garcia-Martinez M. An autonomous system to assess, display and communicate the pain level in newborns. In: 2014 IEEE International Symposium on Medical Measurements and Applications (MeMeA), IEEE; 2014:1–5. [Cited 2023 Mar 8]. Available from: http://ieeexplore.ieee.org/document/6860144/.
|
The behavioral parameters related to movement and expression are measured using computer vision techniques.
/Scale: NIPS, BPSN, DAN, NFCS, PIPP and CRIES
|
Not reported. |
Head movement, expression of pain, frowning, lip movement, eyes open/closed, cheek frowning. |
Not reported. |
33 Roué JM, Morag I, Haddad WM, Gholami B, Anand KJS. Using sensor-fusion and machine-learning algorithms to assess acute pain in non-verbal infants: a study protocol. BMJ Open. 2021;11:e039292.
|
Uses facial electromyography to record facial muscle activity related to infant pain.
/Scale: N-PASS, PIPP-R, NFCS, FLACC and VAS
|
Own Base (painful procedures will be recorded in a minimum of 60 newborns and infants averaging 6 months of age). |
Forehead, cheek, eyebrow puffing, eye pinch, and nasolabial sulcus. |
Tests will be performed in further studies. |
34 Cheng X, Zhu H, Mei L, Luo F, Chen X, Zhao Y, et al. Artificial intelligence based pain assessment technology in clinical application of real-world neonatal blood sampling. Diagnostics. 2022;12:1831.
|
It was implemented with the client-server model and designed to run on the mobile nursing personal digital assistant device.
/Scale: NIPS
|
Own Base (232 newborns with a mean gestational age of 33.93 ± 4.77 weeks). |
Frown, eye squeezing, nasolabial fold deepening, mouth stretching, and tongue tightening. |
The accuracies of the NIPS pain score and pain grade given by the automated NPA system were 88.79% and 95.25%, with kappa values of 0.92 and 0.90 (p < 0.001), respectively. |
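For illustration only (not the study's analysis code), Cohen's kappa, the agreement statistic reported above, corrects observed rater agreement for agreement expected by chance; the label sequences below are hypothetical:

```python
def cohens_kappa(a, b, labels):
    """Cohen's kappa agreement between two raters' label sequences."""
    n = len(a)
    # Observed agreement: fraction of items both raters labelled identically.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# Hypothetical system vs. nurse pain/no-pain ratings for four observations.
print(cohens_kappa(["p", "p", "n", "n"], ["p", "p", "n", "p"], ["p", "n"]))
```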
35 Domingues PH, da Silva RM, Orra IJ, Cruz ME, Heiderich TM, Thomaz CE. Neonatal face mosaic: an areas-of-interest segmentation method based on 2D face images. Anais do XVII Workshop de Visão Computacional (WVC 2021). Sociedade Brasileira de Computação - SBC; 2021. p. 201–5. [Cited 2023 Mar 8]. Available from: https://sol.sbc.org.br/index.php/wvc/article/view/18914.
|
Identifying, transforming, and extracting the regions of interest from the face, assembling an average face of the newborns, and using similarity metrics to check for artifacts.
/Scale: NFCS
|
UNIFESP University Hospital (30 newborns between 35 and 41 weeks of gestational age). |
Eyebrows, eyes, nose, the region between the eyes, mouth, nasolabial folds, cheeks, and forehead. |
Not reported. However, all images could be mapped and segmented by region. |
36 Han J, Hazelhoff L, de With PHN. Neonatal monitoring based on facial expression analysis. Neonatal Monitoring Technologies. IGI Global; 2012. p. 303–23. [Cited 2023 Mar 8]. Available from: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-4666-0975-4.ch014.
|
The system consists of several algorithmic components, ranging from face detection, determination of the region of interest, and facial feature extraction to behavior stage classification.
/Scale: Unmentioned
|
Own Base (newborns with different conditions). |
Eyes, eyebrows, and mouth. |
The algorithm can operate with approximately 88% accuracy. |
37 Mansor MN, Junoh AK, Ahmed A, Kamarudin H, Idris A. Infant pain detection with homomorphic filter and fuzzy k-NN classifier. Appl Mech Mater. 2014;643:183–9. [Cited 2023 Mar 8]. Available from: https://www.scientific.net/AMM.643.183.
|
Local Binary Pattern features are computed and fed to a fuzzy k-NN classifier to classify newborn pain.
/Scale: Unmentioned
|
Infant COPE (26 Caucasian neonates). |
Not applicable. |
Using the HOMO method, the sensitivity is 96.667%. Specificity ranged from 96.5% to 97.2% and accuracy ranged from 93.3% to 97.2% depending on the illumination. The fastest processing time, 0.065 s, was obtained by conventional validation under 100 illumination levels. |
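As a minimal sketch of the fuzzy k-NN technique named above (a Keller-style inverse-distance-weighted vote, not the study's implementation), each class receives a membership degree rather than a hard label; the feature vectors below are hypothetical:

```python
def fuzzy_knn(query, samples, k=3, m=2):
    """Fuzzy k-NN class memberships for a query feature vector.

    `samples` is a list of (feature_vector, label) pairs; returns a dict
    mapping each label among the k nearest neighbours to its membership.
    """
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    neighbors = sorted(samples, key=lambda s: dist(query, s[0]))[:k]
    weights = {}
    for vec, label in neighbors:
        # Inverse-distance weight with fuzzifier m; epsilon avoids /0.
        w = 1.0 / (dist(query, vec) ** (2 / (m - 1)) + 1e-9)
        weights[label] = weights.get(label, 0.0) + w
    total = sum(weights.values())
    return {label: w / total for label, w in weights.items()}

# Hypothetical 2-D feature vectors: two no-pain examples near the query.
train = [((0, 0), "no_pain"), ((0, 1), "no_pain"), ((5, 5), "pain")]
print(fuzzy_knn((0, 0.5), train))
```

The query would then be assigned the label with the highest membership.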
38 Parodi E, Melis D, Boulard L, Gavelli M, Baccaglini E. Automated newborn pain assessment framework using computer vision techniques. In: Proceedings of the International Conference on Bioinformatics Research and Applications 2017 - ICBRA 2017. New York, NY, USA: ACM Press; 2017:31–6. [Cited 2023 Mar 8]. Available from: http://dl.acm.org/citation.cfm?doid=3175587.3175590.
|
Facial features were extracted through different image processing methods: placement and tracking of landmarks, edge detection, and binary thresholding. /Scale: NFCS, PIPP and DAN |
Own Base - Ordine Mauriziano Hospital (15 healthy full-term neonates between 48 and 72 hours of life). |
Eye squeeze (between mid-eyebrow and mid-lower eyelid), cheek raise (between eye medial corner and nose corner), brow bulging (between eyebrows medial border). |
The overall result is not reported, but some operators' evaluations were particularly inconsistent for parameters such as face furrowing; for these parameters, score consistency was very low (about 40%). |
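For illustration only (not the study's code), landmark-based features like the eye squeeze described above reduce to Euclidean distances between landmark pairs, compared against a neutral baseline; the coordinates and function names below are hypothetical:

```python
def landmark_distance(p, q):
    """Euclidean distance between two (x, y) facial landmarks, e.g.
    mid-eyebrow and mid-lower eyelid for the eye-squeeze measurement."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def squeeze_ratio(baseline_dist, current_dist):
    """Relative shrinkage of a landmark distance versus the neutral
    baseline; larger values suggest a stronger facial action."""
    return 1.0 - current_dist / baseline_dist

# Hypothetical: the eyebrow-eyelid gap shrinks from 10 px to 7 px.
print(squeeze_ratio(10.0, 7.0))
```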
39 Wang Y, Huang L, Yee AL. Full-convolution Siamese network algorithm under deep learning used in tracking of facial video image in newborns. J Supercomput. 2022;78:14343–61.
|
Commonly used face detection methods are introduced first; the convolutional neural network from deep learning is then analyzed, improved, and applied to newborn facial recognition.
/Scale: Used in Hubei hospital
|
Own Base - Hubei hospital (40 newborns aged no more than 7 days). |
Not applicable. |
The accuracy of the improved algorithm is 0.889, higher by 0.036 in contrast to other models; the area under the curve (AUC) of success rate reaches 0.748, higher by 0.075 compared with other algorithms. |
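For illustration only (not the study's evaluation code), tracking success rates like the one reported above are typically computed from the intersection-over-union (IoU) of tracked and ground-truth boxes across frames; the boxes and threshold below are hypothetical:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def success_rate(tracked, truth, threshold=0.5):
    """Fraction of frames whose tracked box overlaps ground truth
    with IoU at or above the threshold."""
    return sum(iou(t, g) >= threshold for t, g in zip(tracked, truth)) / len(truth)

# Hypothetical two-frame sequence: one good overlap, one poor overlap.
print(success_rate([(0, 0, 2, 2), (0, 0, 2, 2)],
                   [(0, 0, 2, 2), (1, 1, 3, 3)]))
```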
40 Dosso YS, Kyrollos D, Greenwood KJ, Harrold J, Green JR. NICU-face: robust neonatal face detection in complex NICU scenes. IEEE Access. 2022;10:62893–909. [Cited 2023 Mar 8]. Available from: https://ieeexplore.ieee.org/document/9791241/.
|
Compares five pre-trained face detection models and proposes two new NICU-face models.
/Scale: Unmentioned
|
CHEO (33 newborns), COPE (27 newborns), and NBHR (257 patients). |
Not applicable. |
The proposed NICU-face models outperform previous state-of-the-art models for neonatal face detection and are robust to many identified complex NICU scenes. |