ABSTRACT
Objective: Validate the content of an instrument that assesses the quality of software applied to the risk classification of patients.
Method: Methodological study, conducted in three stages: adaptation of the instrument, content validation through Delphi technique and pre-test. The results were analyzed through Content Validity Index, Overall Content Validity Index and Inter-rater Reliability.
Results: The final version of the instrument comprises 8 characteristics and 28 sub-characteristics, with 37 general questions for computer experts and nurses and 7 questions specific to computer experts; 1 question was included and 3 questions were excluded from the original instrument. We obtained an Overall Content Validity Index of 92% and Inter-rater Reliability of 100% in the second Delphi round.
Final considerations: The instrument has content validity, allowing assessment of the technical quality and functional performance of software applied to the risk classification of patients.
Descriptors: Classification; Emergency Medical Services; Validation Studies; Validation of Computer Programs; Healthcare Quality
RESUMEN
Objetivo: Validar el contenido de instrumento que evalúa la cualidad de un programa (software) aplicado a la clasificación de riesgo de pacientes.
Método: Estudio metodológico, realizado en tres etapas: la adaptación del instrumento, la validación de contenido por medio de la técnica Delphi y la prueba previa. Los resultados fueron analizados por medio del Índice de Validez de Contenido, Índice de Validez de Contenido Global e Índice de Concordancia Interevaluadores.
Resultados: La versión final del instrumento contempla ocho características, 28 subcaracterísticas, siendo 37 cuestiones generales a los expertos en informática y enfermero y siete cuestiones específicas a los expertos en informática, con inclusión de una cuestión y exclusión de tres cuestiones del instrumento original. Se obtuvo porcentual de Validez de Contenido Global del 92% e Índice de Concordancia Interevaluadores del 100% en la segunda ronda Delphi.
Consideraciones finales: El instrumento posee validez de contenido permitiendo evaluar la cualidad técnica y el desempeño funcional de programa (software) aplicado a la clasificación de riesgo de pacientes.
Descriptores: Clasificación; Servicios Médicos de Urgencias; Estudios de Validación; Validación de Programas de Ordenador; Cualidad de la Asistencia a la Salud
RESUMO
Objetivo: Validar o conteúdo de instrumento que avalia a qualidade de um software aplicado à classificação de risco de pacientes.
Método: Estudo metodológico, realizado em três fases: adaptação do instrumento, validação de conteúdo por meio da técnica Delphi e pré-teste. Os resultados foram analisados por meio do Índice de Validade de Conteúdo, Índice de Validade de Conteúdo global e Índice de Concordância Interavaliadores.
Resultados: A versão final do instrumento contempla oito características, 28 subcaracterísticas, sendo 37 questões gerais aos especialistas em informática e enfermeiro e sete questões específicas aos especialistas em informática, com inclusão de uma questão e exclusão de três questões do instrumento original. Obteve-se percentual de Validade de Conteúdo Global de 92% e Índice de Concordância Interavaliadores de 100% na segunda rodada Delphi.
Considerações finais: O instrumento possui validade de conteúdo permitindo avaliar a qualidade técnica e desempenho funcional de software aplicado à classificação de risco de pacientes.
Descritores: Classificação; Serviços Médicos de Emergências; Estudos de Validação; Validação de Programas de Computador; Qualidade da Assistência à Saúde
INTRODUCTION
The quality of health information is a global concern. Countries such as the United States of America, Canada and England promote initiatives for safety in the design, acquisition and deployment of health information technology(1). Advances in access to information, greater accuracy of documentation, implementation of evidence-based practice, cost reduction with return on investment, and improvements in quality of care and employee satisfaction are evidenced by international experience with computerization in health(2).
In Brazil, the use of computer systems for management has increased exponentially in different areas. In the nursing work process, informatics advances continuously through the development and evaluation of tools, processes and structures that assist nurses in care management(3-4).
In hospital emergency services, computerization has been used for the Assessment with Risk Classification of patients. The international literature describes two Assessment and Risk Classification protocols in electronic format: the Manchester Triage System and the Andorran Triage Model(5). In Brazil, Manchester is the best-known protocol, and it inspired the computerization of the National Protocol of Risk Assessment and Classification, implemented by the Ministry of Health through ordinance 2048/2002(6). However, no scientific evidence was found on the evaluation of the technical quality and functional performance of the software used for this computerization, for either the national or the international protocol.
The assessment of software is critical to identify the weaknesses and limitations of the product, analyze its performance and diagnose the need for adjustments(7). This assessment is consolidated by specific regulations from the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Brazilian Association of Technical Standards (ABNT), which proposed two Brazilian Standards (NBR), ISO/IEC 14598 and ISO/IEC 9126, regarding the quality of software products(8-9). In 2011, updates of these regulations gave rise to the standards ISO/IEC 25010 - Systems and software engineering - System and software quality models; and ISO/IEC 25040 - Systems and software engineering - Evaluation process(10-11). Such standards have been used to develop and assess software in different contexts of Brazilian nursing(3,7,12-14).
In this sense, we identified in our work practice the need to assess the technical quality and functional performance of a software product used to perform the Risk Assessment and Classification of patients in a Hospital Emergency Service (HES). To that end, our study was based on a recent survey that applied this methodology to an Electronic Documentation System of the Nursing Process (PROCenf-USP), through an instrument based on the most recent standards specific to this type of assessment(3).
OBJECTIVE
To validate the content of an instrument that assesses the quality of software applied to the risk classification of patients.
METHOD
Ethical aspects
This study was approved on April 27, 2016, by the Research Ethics Committee of the Londrina State University (UEL).
Study design, location, and period
This is a methodological, applied study with a quantitative approach, carried out from April to October 2016 and consolidated in three stages: instrument adaptation, content validation and pre-test. The study was held in a medium-sized public secondary hospital located in the State of Paraná, Brazil.
Study protocol
For the validation of this instrument we opted for the Delphi technique, a systematized judgment method intended to reach consensus of opinion on a particular subject by drawing on the knowledge of a committee of experts, through validation rounds that maintain anonymity(15). Content validation is performed by questioning various experts, who analyze the representativeness of the items that make up the object of analysis(16).
The instrument chosen, in its original version, assesses the software quality of the Electronic Documentation System of the Nursing Process PROCenf-USP®(3) and incorporates the latest update from the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) specific to software assessment.
The original instrument contains 8 characteristics and 31 sub-characteristics directed to computer experts and nurses, namely: functional adequacy (functional integrity, functional correction and functional fitness); performance efficiency (time, resources and capacity); compatibility (coexistence and interoperability); usability (recognition of suitability, learnability, error protection, operability, user interface aesthetics and accessibility); reliability (maturity, fault tolerance, recoverability and availability); security (confidentiality, integrity, non-repudiation, accountability and authentication); maintainability (analyzability, changeability, modularity, reusability and testability); and portability (adaptability, installability and replaceability). These last two characteristics were assessed only by computer experts(10-11).
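For readers who wish to work with the quality model programmatically, the characteristic/sub-characteristic structure described above can be represented as a simple mapping. This is a hypothetical sketch: the variable names are our own, and the terminology follows this article's wording rather than the official ISO/IEC 25010 English text.

```python
# The quality model underlying the original instrument:
# 8 characteristics mapped to their 31 sub-characteristics
# (terminology as used in this article).
QUALITY_MODEL = {
    "functional adequacy": ["functional integrity", "functional correction",
                            "functional fitness"],
    "performance efficiency": ["time", "resources", "capacity"],
    "compatibility": ["coexistence", "interoperability"],
    "usability": ["recognition of suitability", "learnability", "error protection",
                  "operability", "user interface aesthetics", "accessibility"],
    "reliability": ["maturity", "fault tolerance", "recoverability", "availability"],
    "security": ["confidentiality", "integrity", "non-repudiation",
                 "accountability", "authentication"],
    "maintainability": ["analyzability", "changeability", "modularity",
                        "reusability", "testability"],
    "portability": ["adaptability", "installability", "replaceability"],
}

# Per the article, these two characteristics were assessed only by computer experts.
COMPUTER_EXPERT_ONLY = {"maintainability", "portability"}

n_characteristics = len(QUALITY_MODEL)                              # 8
n_subcharacteristics = sum(len(v) for v in QUALITY_MODEL.values())  # 31
```

A structure like this makes it easy to verify the counts reported in the article (8 characteristics, 31 original sub-characteristics) and to track which items each evaluator group assesses.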
The author of this instrument granted permission for its adaptation and validation to assess the software of Risk Classification of patients.
The first stage of the study was the adaptation of the original instrument to the context of Assessment with Risk Classification of patients. In the second stage, we proceeded to the content validation of the adapted instrument through the Delphi technique. Two Delphi rounds were needed. In the first, the committee was composed of 6 experts: 3 computer experts with experience in software development and computerization in the health area, and 3 nurses (2 with doctoral and 1 with master's degrees), all with experience in the validation and adaptation of instruments. The same experts were invited for the second round, but only 5 responded to the request.
After prior contact with the experts by e-mail, the following documents were sent: an Informed Consent Form in electronic format, a summarized protocol presenting the research project, the adapted instrument, and instructions for completing and assessing the questions to be reviewed. A deadline of 15 days was stipulated for the return of the analyzed material.
Experts assessed each item of the instrument as +1 (suitable), -1 (suitable with amendments) or 0 (unsuitable). At the end of the assessment, it was possible to calculate the Content Validity Index (CVI) and the Overall Content Validity Index, which measure the proportion or percentage of evaluators in agreement on certain aspects of the instrument and its items; items were considered valid when they obtained a minimum agreement of 80% among the experts(17-18).
The Reliability Index, or Inter-rater Reliability, which assesses the extent to which the evaluators are consistent when assessing the items outside the context studied, was calculated for each dimension of the instrument by dividing the number of items that scored above 80% of agreement between evaluators by the total number of items in that dimension(17-18).
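As a concrete illustration of these two formulas, the sketch below computes an item's CVI as the proportion of experts agreeing it is suitable, and a dimension's IRR as the fraction of its items reaching the 80% threshold. The data and function names are hypothetical, and we assume here that only +1 ratings count as agreement; the article does not state how -1 ("suitable with amendments") ratings were counted.

```python
from typing import Dict, List

SUITABLE = 1     # expert ratings used in the study:
AMEND = -1       # +1 suitable, -1 suitable with amendments, 0 unsuitable
UNSUITABLE = 0

def item_cvi(ratings: List[int]) -> float:
    """Proportion of experts in agreement on one item.

    Assumption (not stated in the article): only +1 ratings
    count as agreement."""
    return sum(r == SUITABLE for r in ratings) / len(ratings)

def dimension_irr(item_ratings: Dict[str, List[int]],
                  threshold: float = 0.8) -> float:
    """Inter-rater Reliability of one dimension: number of items
    whose CVI reaches the threshold, divided by the total number
    of items in the dimension."""
    cvis = [item_cvi(r) for r in item_ratings.values()]
    return sum(c >= threshold for c in cvis) / len(cvis)

def overall_cvi(item_ratings: Dict[str, List[int]]) -> float:
    """Overall Content Validity Index: mean of the item CVIs."""
    cvis = [item_cvi(r) for r in item_ratings.values()]
    return sum(cvis) / len(cvis)

# Hypothetical first-round ratings from 6 experts for one dimension:
portability = {
    "adaptability/installability": [1, 1, 1, 1, 1, -1],  # CVI = 5/6
    "replaceability":              [1, 1, 1, 1, 1, 1],   # CVI = 1.0
}
print(round(dimension_irr(portability), 2))  # both items reach 80%, so IRR = 1.0
```

Under this reading, an item rated +1 by 5 of 6 experts has a CVI of about 83% and passes the 80% cutoff, which is consistent with the study's decision rule.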
After content validation, the third stage of the study consisted of pre-testing the adapted instrument. Individuals were selected for this stage through purposive sampling at the institution studied: 1 computer expert, 3 nurses with experience in the computerized Assessment with Risk Classification of patients, and 3 nurses without experience in this activity. In this stage, participants were asked about problems in understanding each item and about terms difficult to understand; they assessed the questions through an Instrument Practicality questionnaire(19).
Analysis of results and statistics
Data were tabulated in Microsoft Excel 2010 and analyzed through reflective reading and descriptive statistics.
RESULTS
In the process of adapting the instrument, the researchers chose to combine some questions that complemented each other, to improve understanding of the content. The sub-characteristics “Adaptability” and “Installability” were grouped into a single question. “User Interface Aesthetics,” “Time,” and “Integrity” had their questions grouped, each within its own sub-characteristic.
The content validation procedure consolidated the final version of the instrument with 8 characteristics, 28 sub-characteristics and 44 questions. The characteristics Functional Adequacy, Reliability, Usability, Performance Efficiency, Compatibility, and Safety appear in 37 questions aimed at computer experts and nurses. The characteristics Maintainability and Portability are covered in 7 questions specific to computer experts. One question was included and 3 were excluded from the original instrument. From the first Delphi round, we identified the need to reformulate some questions of the instrument, justifying the second round.
Table 1 presents the Content Validity Index (CVI) of each characteristic and sub-characteristic of the instrument, the Inter-rater Reliability (IRR) and the Overall Content Validity Index, verified in the first and second rounds.
Percentage of agreement between the experts in relation to the Content Validity Index (CVI) of each characteristic and sub-characteristic of the instrument, Inter-rater Reliability (IRR) and Overall Content Validity Index in the first and second Delphi rounds, Londrina, Paraná, Brazil, 2016
According to the experts' assessment of the agreement and representativeness of the items in the instrument, the overall CVI in the first Delphi round was 70%, reaching 93% in the second round.
The characteristics Usability and Portability obtained the best percentages, with CVI above 80% in the first Delphi round. Safety, Reliability, Compatibility and Maintainability presented the lowest percentages of CVI (53%, 62%, 63% and 63%, respectively).
Regarding reliability, only the Portability characteristic reached a satisfactory index in the first Delphi round. All the others reached an IRR of 100% in the second round.
Of the 30 sub-characteristics contained in the instrument in the first round, 15 (50%) reached a CVI above 80%, 5 of which obtained 100% agreement. Availability, Accountability, and Authentication obtained the lowest CVI percentages (0%, 0% and 33%, respectively). In the second round, the Accountability and Authentication sub-characteristics were excluded from the instrument on the recommendation of the experts.
Chart 1 presents the suggestions of the committee of experts on the items considered “suitable with amendment” and “unsuitable,” obtained in the first and second Delphi rounds.
Presentation of the suggestions of the committee of experts on the items considered "suitable with amendment" and "unsuitable," Londrina, Paraná, Brazil, 2016
After content validation, the instrument was submitted to a pre-test with 7 participants: 1 computer expert, 3 nurses with experience in the computerized Assessment with Risk Classification of patients, and 3 nurses without experience in this activity. The results of this stage showed that the items of the instrument were easy to understand and that the proposal of software assessment was pertinent; there were no suggestions for adjustments to the items. Two participants suggested replacing the words Disagree and Agree with Yes and No, but the researchers considered this change inappropriate. Each participant took approximately 18 minutes to complete the instrument.
DISCUSSION
The option of adapting and validating an instrument to assess the computerization of the Assessment with Risk Classification in Hospital Emergency Services (HES), using a tool that indicates the level of technical quality and functional performance, was based on the fact that, until then, no instrument had been able to quantify the opinions and impressions of health professionals regarding the technical quality and functional performance of this computerized tool.
The process of adapting the instrument was grounded in the scientific literature on the assessment of the technical quality and functional performance of software, the Assessment with Risk Classification of patients, computerization in health, and the methodological framework for the validation of instruments(3,5-7,10-20).
In this stage, we decided to group the questions of the sub-characteristics Adaptability and Installability, presenting them in a single question. The sub-characteristics “Time” and “Integrity” had their questions grouped, each within its own sub-characteristic. These changes generated neither questions from the experts nor concerns regarding clarity and representativeness, reaching a CVI above 80% from the first Delphi round.
However, the sub-characteristic “Interface Aesthetics,” which also had its questions grouped, presented a CVI of 67% in the first round. The experts assessed it as “suitable with amendments” and suggested replacing the term “graphic design” with “software design” and specifying which characteristics must be assessed. These suggestions were accepted, and the item reached a CVI of 80% in the second round.
Through the analysis of the results and reflective reading of the suggestions and comments of the committee of experts, it was possible to identify problems of clarity and representativeness in the instrument. Considering the method used in this research, which required an IRR of 100% and a minimum CVI of 80%, the results obtained in the first Delphi round showed the need for adjustments to the instrument.
In the first Delphi round, 50% of the sub-characteristics composing the instrument did not achieve the minimum CVI recommended in the literature and adopted in this research(17-18). Adjustments were made to the questions of the items indicated by the experts, through rewording such as inversions of word order or replacement of terms by synonyms, which allowed a better understanding of the items(21). After this stage, we proceeded to the second Delphi round to ensure that the adjustments made would achieve the required clarity and agreement.
In the first Delphi round, an expert suggested adding to the “Recognition of Suitability” sub-characteristic, of the Usability characteristic, a question on the provision of training for the use of the software. The suggestion was accepted, since it was considered relevant within the context of the use and assessment of technical quality and functional performance; the other questions of the sub-characteristic received suggestions of rewording and standardization of writing, which were accepted and submitted to a second round of assessment by the committee, obtaining a CVI above 80%.
In this context, we consider that the incorporation of new technologies into the work process requires skills that health professionals must acquire for these tools to be used successfully and efficiently. In this sense, training courses are important strategies, a praxis for knowledge production and for the appropriate use of technologies, with a direct impact on the quality of care provided(22-24). Inadequate or absent training may influence the ability and willingness of professionals to engage in activities of development, applicability and transformation of practices.
Still on the content analysis of the instrument, the items Functional Fitness, Maintainability, Recoverability, Learnability, Capacity and Testability achieved percentages ≥ 80% in the first Delphi round, but each was assessed as “unsuitable” by one expert and “suitable” by the others; although there was no suggestion of reformulation and the CVI percentage was favorable, they were forwarded to the second round and reached 100%.
However, the “Fault Tolerance,” “Availability,” and “Coexistence” sub-characteristics did not reach 80% or more in the first Delphi round, each being assessed as “unsuitable” by an expert and “suitable” by the others. Changes were made to the items indicated, considering their relevance. They were forwarded to the second Delphi round, reaching favorable CVI percentages, mostly 100% of agreement, and their content was considered valid, trustworthy and representative, as advocated by the references adopted in this study(15-19).
On the other hand, the “Accountability” and “Authentication” sub-characteristics were removed from the instrument after the first Delphi round, since both were assessed as “unsuitable” by six experts; according to the committee, they presented problems of redundancy that could influence the results, as their content was already contemplated in other sub-characteristics of the instrument.
In the end, the indexes from the content validation process indicated high reliability and trustworthiness of the instrument and made it possible to assess the technical quality and functional performance of software applied to the computerized risk classification of patients in HES, a tool that helps professionals in health care and that can be applied to several processes of computerization in health.
Study limitations
We considered as a limitation of this study the small number of computer science experts meeting the established inclusion criteria, i.e., experience in computerization in health, given the shortage of these professionals on the staff of most health services. Health Informatics in Brazil is an area of exponential growth that has attracted professionals from diverse academic segments in search of specializations that confer competence to act. However, specialization courses in this area are scarce and differ in their syllabi, corroborating the limitations of our study(25).
Contributions to the Nursing field
The validated instrument contributes to the health sector by providing a tool that assesses the efficiency of software and helps health institutions qualify it and promote improvements in its development and construction, considering that this instrument can be continuously optimized and adapted to other computerized work processes. We emphasize the development of technical and scientific skills among healthcare professionals, specifically in the nursing field, with incentives for an entrepreneurial attitude, which is still found only empirically in many hospitals. These professionals are assisted by the exponential growth of technologies designed to meet their daily needs in different contexts of practice, optimizing work time and improving the care provided.
FINAL CONSIDERATIONS
The content validation procedure consolidated the final version of the instrument with 8 characteristics, 28 sub-characteristics and 44 questions. The Functional Adequacy, Reliability, Usability, Performance Efficiency, Compatibility, and Safety characteristics appear in 37 questions aimed at computer experts and nurses. The Maintainability and Portability characteristics are covered in 7 questions specific to computer experts.
The overall CVI in the first Delphi round was 70%, reaching 93% in the second round. Of the 30 sub-characteristics contained in the instrument in the first round, 15 (50%) reached a CVI > 80%, 5 of which obtained 100% agreement. Availability, Accountability and Authentication obtained the lowest CVI percentages (0%, 0% and 33%, respectively). In the second round, the “Accountability” and “Authentication” sub-characteristics were excluded from the instrument on the recommendation of the experts.
Regarding reliability, only the Portability characteristic reached a satisfactory index in the first Delphi round. All the others reached an IRR of 100% in the second round.
With two Delphi rounds, we obtained satisfactory percentages of Overall CVI and IRR, with the inclusion of 1 question and the exclusion of the 3 questions of the “Accountability” and “Authentication” sub-characteristics. In addition, in the pre-test, the instrument was considered suitable regarding the clarity and objectivity of its items.
Given the high indexes achieved in the content validation of the instrument, it is important that HES using the computerized Assessment with Risk Classification adopt this instrument as a tool for reliably measuring the technical quality of such software and, from its results, obtain diagnoses that enhance the tool or even support the construction of new software. We also consider that this instrument can be continuously optimized and adapted to other computerized work processes in health.
To that end, we suggest new research on the theme, with the expectation that the instrument will be released and used by health institutions to qualify, enhance and promote improvements in the development and construction of software that facilitates the work process in the health area, the development of technical and scientific skills, and the incentive of this entrepreneurial practice.
REFERENCES
1 Kushniruk AW, Bates DW, Bainbridge M, Househ MS, Borycki EM. National efforts to improve health information system safety in Canada, the United States of America and England. Int J Med Inform [Internet]. 2013 [cited 2015 Aug 10];82(5):149-60. Available from: https://www.ncbi.nlm.nih.gov/pubmed/23313431
2 Cherry BJ, Ford EW, Peterson LT. Experiences with electronic health records: early adopters in long-term care facilities. Health Care Manag Rev [Internet]. 2011 [cited 2016 Sep 10];36(3):265-74. Available from: https://www.ncbi.nlm.nih.gov/pubmed/21646885
3 Oliveira NB, Peres HHC. Evaluation of the functional performance and technical quality of an Electronic Documentation System of the Nursing Process. Rev Latino-Am Enfermagem [Internet]. 2015 [cited 2015 Sep 10];23(2):242-9. Available from: http://www.scielo.br/pdf/rlae/v23n2/0104-1169-rlae-3562-2548.pdf
4 Gonçalves MFS, David G. Planejamento e realização de estudo de (re)utilização da informação clínica em contexto hospitalar com base na metodologia quadripolar. Prisma [Internet]. 2014 [cited 2015 Sep 15];26:67-95. Available from: http://revistas.ua.pt/index.php/prismacom/article/view/3102/pdf_40
5 Jiménez GJ. Clasificación de pacientes en los servicios de urgencias y emergencias: hacia un modelo de triaje estructurado de urgencias y emergencias. Emerg [Internet]. 2003 [cited 2015 Sep 13];15:165-74. Available from: http://www.triajeset.com/acerca/archivos/revision_triaje_estructurado.pdf5
6 Conselho Regional de Enfermagem de Santa Catarina. Parecer nº 009/CT/2015/PT. Acolhimento com Classificação de Risco [Internet]. 2015 [cited 2015 Sep 13]. Available from: http://www.corensc.gov.br/wp-content/uploads/2015/07/Parecer-009-2015-Acolhimento-com-Classifica%C3%A7ao-de-Risco-CT-Alta-e-M%C3%A9dia-Complexidade.pdf
7 Sperandio DJ. A tecnologia computacional móvel na sistematização da assistência de enfermagem: avaliação de um software-protótipo [Tese] [Internet]. Ribeirão Preto: Escola de Enfermagem de Ribeirão Preto da Universidade de São Paulo; 2008 [cited 2015 Sep 13]. Available from: http://www.teses.usp.br/teses/disponiveis/22/22132/tde-11092008-165036/publico/DirceleneJussaraSperandio.pdf
8 Associação Brasileira de Normas Técnicas. NBR ISO/IEC 14598-1:2001: Tecnologia de informação: avaliação de produto de software. Parte 1: visão geral. Rio de Janeiro; 2001. 165p.
9 Associação Brasileira de Normas Técnicas. NBR ISO/IEC 9126-1:2003: Engenharia de software: qualidade de produto. Parte 1: modelo de qualidade. Rio de Janeiro; 2003. 21p.
10 ISO/IEC 25010: Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models. Switzerland; 2011. 34p.
11 ISO/IEC 25040: Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - Evaluation process. Switzerland; 2011. 34p.
12 Pereira IM, Gaidzinski RR, Fugulin FMT, Peres HHC, Lima AFC, Castilho V, et al. Computerized nursing staffing: a software evaluation. Rev Esc Enferm USP [Internet]. 2011 [cited 2015 Sep 10];45(Esp):1600-5. Available from: http://www.scielo.br/pdf/reeusp/v45nspe/en_v45nspea10.pdf
13 Rangel AL, Évora YDM, Oliveira MMB. O processo de avaliação do software de geração automática de escala de trabalho da enfermagem e da escala por ele gerada. J Health Inform [Internet]. 2012 [cited 2015 Sep 13];200(4). Available from: http://www.jhi-sbis.saude.ws/ojs-jhi/index.php/jhi-sbis/article/view/208/148
14 Jensen R, Lopes MHBM, Silveira PSP, Ortega NRS. The development and evaluation of software to verify diagnostic accuracy. Rev Esc Enferm USP [Internet]. 2012 [cited 2016 Jul 10];46(1):178-85. Available from: http://www.scielo.br/pdf/reeusp/v46n1/en_v46n1a25.pdf
15 Scarparo AF, Laus AM, Azevedo ALCS, Freitas MRI, Gabriel CS, Chaves LDP. Reflexões sobre o uso da técnica Delphi em pesquisa na enfermagem. Rev Rene [Internet]. 2012 [cited 2016 Jul 10];13(1):242-51. Available from: http://www.revistarene.ufc.br/revista/index.php/revista/article/download/36/31
16 Alexandre NMC, Coluci MZO. Validade de conteúdo nos processos de construção e adaptação de instrumentos de medidas. Ciênc Saúde Colet [Internet]. 2011 [cited 2015 Sep 13];16(7):3061-8. Available from: http://www.scielo.br/pdf/csc/v16n7/06.pdf
17 Rubio DM, Berg-Weger M, Tebb SS, Lee ES, Rauch S. Objectifying content validity: conducting a content validity study in social work research. Soc Work Res [Internet]. 2003 [cited 2015 Dec 10];27(2):94-111. Available from: http://swr.oxfordjournals.org/content/27/2/94.abstract
18 Alexandre NMC, Coluci MZO. Validade de conteúdo nos processos de construção e adaptação de instrumentos de medidas. Ciênc Saúde Colet [Internet]. 2011 [cited 2015 Sep 13];16(7):3061-8. Available from: http://www.scielo.br/pdf/csc/v16n7/06.pdf
19 Coluci MZO, Alexandre NMC. Development of a questionnaire to evaluate the usability of assessment instruments. Rev Enferm UERJ [Internet]. 2009 [cited 2015 Dec 10];17(3):378-82. Available from: http://www.facenf.uerj.br/v17n3/v17n3a14.pdf
20 Bellucci Jr JA, Vituri DW, Versa GLGS, Furuya OS, Vidor RC, Matsuda LM. Acolhimento com classificação de risco em serviço hospitalar de emergência: avaliação do processo de atendimento. Rev Enferm UERJ [Internet]. 2015 [cited 2016 Apr 4];23(1):82-7. Available from: http://www.facenf.uerj.br/v23n1/v23n1a14.pdf
21 Souza AC, Milani D, Alexandre NMC. Adaptação cultural de um instrumento para avaliar a satisfação no trabalho. Rev Bras Saúde Ocup [Internet]. 2015 [cited 2016 Apr 4];40(132):219-27. Available from: http://www.scielo.br/pdf/rbso/v40n132/0303-7657-rbso-40-132-219.pdf
22 Miranda MCG, Almeida BA, Aragão E, Guimarães JM. Política nacional de ciência, tecnologia e inovação em saúde e a necessidade de educação permanente. Rev Baiana Saúde Pública [Internet]. 2012 [cited 2016 Apr 4];36(1):82-9. Available from: http://inseer.ibict.br/rbsp/index.php/rbsp/article/viewFile/238/210
23 Perez Jr EF, Oliveira EB, Souza NVDO, Lisboa MTL, Silvino ZR. Segurança no desempenho e minimização de riscos em terapia intensiva: tecnologias duras. Rev Enferm UERJ [Internet]. 2014 [cited 2016 Apr 4];22(3):327-33. Available from: http://www.facenf.uerj.br/v22n3/v22n3a06.pdf
24 Oliveira AM, Danski MTR, Pedrolo E. Technological innovation for peripheral venipuncture: ultrasound training. Rev Bras Enferm [Internet]. 2016 [cited 2016 Jan 10];69(6):990-6. Available from: http://www.scielo.br/pdf/reben/v69n6/en_0034-7167-reben-69-06-1052.pdf
25 Rondon EC, De Novais MAP, Nappo SA. A importância da informática em saúde na educação superior nos cursos da área da saúde. Rev Eletrôn Gestão Saúde [Internet]. 2013 [cited 2016 Apr 4];Edição Especial (Março):1653-66. Available from: http://www.gestaoesaude.unb.br/index.php/gestaoesaude/article/view/361/pdf_1
Publication Dates
Publication in this collection: May-Jun 2018
History
Received: 28 Mar 2017
Accepted: 21 May 2017