Abstract
The Brazilian program of higher education evaluation, broadly known as the National Exam of Students' Performance (ENADE), represents a governmental effort to gather information on undergraduate educational quality. As a product of that evaluation, a report is made available to each program evaluated. Our research addresses the impact of ENADE evaluation report utilization on the performance of undergraduate accounting programs in their subsequent evaluation. Drawing theoretical support from the literature on evaluation use, a web-based survey was developed and administered across the country to the coordinators of accounting programs. Based on a response rate of 62% of the study's target population and using multiple regressions, we found a positive correlation between usage of the ENADE evaluation report and the performance of undergraduate accounting programs in their subsequent evaluation. In light of the reviewed literature and these research results, it is possible to infer that the use of evaluation reports derived from the higher education evaluation system promoted by the Brazilian government can influence the decisions of educational institutions and promote improvement.
Key words:
evaluation use; performance; higher education; accounting
Contextualization
The quality of educational programs has been an object of debate and research around the world. Initiatives such as the Program for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS) show that international organizations such as the Organization for Economic Co-operation and Development (OECD) and the International Association for the Evaluation of Educational Achievement (IEA) are trying to verify whether schools are adequately preparing their students by comparing their performances, aiming to highlight the strengths and weaknesses among the educational systems of different countries.
Higher education has also been the object of quality evaluations around the world (Ursin, Huusko, Aittola, Kiviniemi, & Muhonen, 2008; Van Kemenade, Pupius, & Hardjono, 2008). Governmental and non-governmental organizations have developed ways to certify institutional quality through evaluation or accreditation processes. Examples of these organizations include the European Association for Quality Assurance in Higher Education (ENQA), the Quality Assurance Agency for Higher Education (QAA), the Association to Advance Collegiate Schools of Business (AACSB) and the National Institute of Educational Studies and Research - Anísio Teixeira (INEP).
Many higher education institutions are applying for ISO 9000 certification as a way to assure their quality (Lundquist, 1997; Ursin et al., 2008; Van Kemenade et al., 2008), but the most popular way to obtain evidence of quality in higher education programs is through external evaluation (Van Kemenade et al., 2008).
External program evaluations are implemented with the goal of producing information that helps to better comprehend how activities, processes and outcomes contribute to the attainment of an organization's primary objectives. Therefore, if properly used, evaluations can serve as an information system that helps educational institutions achieve their goals and correct possible deviations in their operations. Additionally, according to the utilization-focused evaluation literature, educational programs can benefit from evaluation report utilization because "the ultimate purpose of evaluation is to improve programs and increase the quality of decisions made" (Patton, 2008, p. 356).
The definition of evaluation use has been widely discussed in utilization-focused evaluation theory. Among the many concepts of evaluation use, that of Cousins and Leithwood (1986) fits the purpose of the present study: "the mere psychological processing of evaluation results constitutes use, without necessarily informing decisions, dictating actions, or changing thinking" (Cousins & Leithwood, 1986, p. 332).
In an attempt to better distinguish the evaluation uses presented in the literature, Leviton and Hughes (1981) summarized the categories for the most frequent uses described at that time and classified them into the now broadly known types of use: conceptual, instrumental, and persuasive. This nomenclature is generally accepted when describing the uses of evaluation findings (Alkin & Taut, 2003; Preskill & Caracelli, 1997).
The conceptual type of use, also known as enlightenment (Braskamp, 1982; Owen & Lambert, 1995), refers to improving the understanding of program aspects, such as its participants, its context, or its outcomes, through the evaluation. Conceptual use is also related to developing new views of the program and identifying problems (Alkin, 2010; Braskamp, 1982; Henry & Mark, 2003). Instrumental use, "perhaps the earliest type of use examined in the literature" (Johnson, 1998, p. 93), is related to decision making or problem solving using the information provided through the evaluation. This type of use refers to direct actions aimed at modifying the program in some way, symbolizing an objective use of evaluative information (Henry & Mark, 2003; Shadish, Cook, & Leviton, 1991; Shulha & Cousins, 1997). Lastly, persuasive use is related to convincing others to agree with or support a specific choice or political position, or to persuading stakeholders of a program's value using evaluation findings, often in a selective way (Fleischer & Christie, 2009; Leviton & Hughes, 1981; Patton, 2008).
In Brazil, the practice of educational evaluation has been consolidated through governmental initiatives that aim to measure the quality of the Brazilian educational system with a focus on accountability, but it had an unstable beginning. Although introduced in the first decades of the 20th century, educational evaluation became more systematized only in the 1960s, when it began to be part of Brazil's developmental policies. At the end of the 1970s and the beginning of the 1980s, however, educational evaluation was discredited and questioned as a field of study, recovering its significance in the late 1980s and early 1990s through initiatives directed toward elementary school evaluation (Gatti, 2002).
Among the problems identified in the Brazilian educational evaluation literature, the two primary difficulties in the educational evaluation process were the lack of people with program evaluation expertise to manage and structure the system and the discontinuity of public policies over the years, which caused changes to the work teams and to the objects of study (Gatti, 2009).
Educational evaluation in Brazil is funded by the Brazilian government, which also maintains employees who manage each program jointly with consultants, mainly professors, who make up specific committees. The work teams define the evaluation concept and the standards used to measure the quality of institutions, which are usually based on the outcomes of standardized tests applied to students, and these teams are responsible for undertaking the evaluation.
The current Brazilian program of higher education evaluation was implemented in 2004 by the Ministry of Education through the National System of Higher Education Evaluation (SINAES) and has been used to evaluate each undergraduate program offered in both public and private institutions every three years. The evaluation is managed by INEP and is generically titled the National Exam of Students' Performance (ENADE). After each ENADE administration, every higher education program evaluated in Brazil receives a grade from 1 (lowest) to 5 (highest) that represents its educational quality. The Brazilian government then summarizes and posts the results of each program on the INEP website, but the utilization of these reports and the impact of the evaluation information among colleges and universities in Brazil have not yet been thoroughly studied (Burlamaqui, 2008).
The ENADE grade comprises four instruments: (a) a standardized test that aims to measure the performance of undergraduate students with respect to curriculum contents, skills and competencies; (b) a questionnaire on the students' perceptions of the test; (c) the student questionnaire; and (d) the program administrator questionnaire. The standardized test is divided into two sections: the general knowledge test, which is the same for all programs evaluated in a given year, and the specific knowledge test, which is based on the contents of the Ministry of Education's curriculum guidelines for each program. The ENADE is applied to freshman and senior undergraduate students annually, but the program evaluation is rotated so that each field of knowledge is evaluated every three years (Zoghbi, Oliva, & Moriconi, 2010).
The ENADE evaluation report comprises detailed information about the grade achieved by the program, the performance of students on the large-scale test, the students' perceptions of the large-scale test, and the students' socioeconomic status. Comparative data on the national averages for student performance and perceptions are also presented in the report. Thus, program stakeholders can utilize that information in their daily work to persuade people, to support their decisions, and/or to better know their students' characteristics and academic strengths and weaknesses.
Based on this context and assuming that, through the utilization of evaluation reports, Brazilian higher education institutions can better comprehend themselves, improve their processes and make decisions that will increase the quality of their programs, this study aims to examine the impact of evaluation report use on undergraduate accounting programs' performance in their subsequent evaluation.
By focusing on the Brazilian setting, we aim to contribute to the progress of discussions on higher education evaluation use and to empirically test the assumptions provided by the evaluation use literature, using Brazilian undergraduate accounting programs as a pilot case.
Method
The study population and sample
The study population consisted of the Brazilian undergraduate accounting programs that participated in and obtained a grade in both the 2006 and 2009 editions of the National Exam of Students' Performance. The grades from the first edition (2006) were not analyzed themselves; they mattered only because a complete evaluation report was available solely for programs that received a grade. The grades from the second edition (2009) were used as the dependent variable in the regression models. It is important to highlight that a different methodology was used to compute the grades in each edition, which is why the two grades were not compared.
As in other fields of knowledge, accounting education has been pushed to improve teaching and learning quality due to the new economic dynamics encountered by companies (Suddaby, Cooper, & Greenwood, 2007). Moreover, accounting programs have been trying to prevent professional misbehavior and failures related to a lack of knowledge, which is commonly verified in cases of accounting fraud, by including courses such as ethics in their curricula and requiring approval via accountant examinations before students begin their professional careers (Delaney & Coe, 2008). Additionally, the harmonization of international financial reporting standards has recently required major curriculum changes and has challenged accounting education in many countries (Alon, 2012; Glover & Werner, 2015; Jackling, Howieson, & Natoli, 2012). In this context, concerns about quality are constantly present in the daily routine of accounting program administrators, making them especially interested in the evaluation results.
A total of 772 undergraduate accounting programs were evaluated in the 2006 ENADE, but only 570 obtained a grade and consequently had an evaluation report available. Of these 570 programs, only 518 also obtained grades in 2009 and continue their operations. Therefore, the study's target population comprised 518 undergraduate accounting programs.
The study subjects were the current undergraduate accounting program administrators from the 518 institutions researched. The program administrators are responsible for the academic management of educational programs and can be considered to be one of the parties most interested in the evaluation results.
The study data collection instrument
The data collection instrument was intended to identify whether the accounting program administrators made any use of the 2006 ENADE evaluation report. Here, use was defined as the act of simply reading the cited evaluation report, in accordance with the concept of use proposed by Cousins and Leithwood (1986). To verify the evidence of use, an objective yes-or-no question was asked. Respondents who answered yes were redirected to a scale about the most frequent types of use of the ENADE evaluation report. The statements representing the types of use were defined in accordance with Leviton and Hughes's (1981) study, which summarized the three types of use later consolidated by the evaluation utilization literature: (a) conceptual, (b) instrumental, and (c) persuasive. Thirteen statements were developed to identify how accounting program administrators use the ENADE evaluation report; a sketch of how such a scale might be scored appears below.
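As an illustration only, the following Python sketch shows how the thirteen agreement statements might be scored into the three use subscales and an overall intensity-of-use measure. The item names, the five-point agreement coding, and the split of items across constructs are assumptions made for the example, not the instrument's actual layout.

```python
import pandas as pd

# Hypothetical mapping of the thirteen statements to the three types of use;
# the actual item-to-construct assignment is not reproduced here.
SUBSCALES = {
    "conceptual":   ["q01", "q02", "q03", "q04", "q05"],
    "instrumental": ["q06", "q07", "q08", "q09"],
    "persuasive":   ["q10", "q11", "q12", "q13"],
}

def score_subscales(responses: pd.DataFrame) -> pd.DataFrame:
    """Average the (assumed 1-5) agreement ratings of each subscale's items."""
    scores = pd.DataFrame(index=responses.index)
    for construct, items in SUBSCALES.items():
        scores[construct] = responses[items].mean(axis=1)
    # Overall intensity of use: the three subscale scores combined into one
    # measure (the study used the sum of factor scores; raw means shown here).
    scores["use_int"] = scores[list(SUBSCALES)].sum(axis=1)
    return scores
```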
The last part of the data collection instrument was designed to obtain demographic information from the respondents such as their gender, their highest degree obtained, and how long they had been in the program administration position.
Other descriptive information was obtained from the database provided by the INEP about the ENADE, such as the Brazilian region where each institution is located (north; northeast; central-west; south; and southeast), the institutional academic organization (university; university center; college; and federal institute of education, science and technology), and the institutional main funding source (public; and private).
The three questions on the demographic questionnaire and the institutional data provided by INEP were used as explanatory variables in the ordinary least squares regressions that examined the impact of evaluation utilization on the performance of accounting programs.
The study variables and measurements
This research applied multiple regressions with an ordinary least squares (OLS) estimator to achieve its objective of testing the correlation between the use of the 2006 ENADE evaluation report and the programs' outcomes in the 2009 ENADE.
Two data sources were utilized to gather all of the variables tested in this research: (a) the data collection instrument, and (b) the 2009 ENADE evaluation database provided by INEP. From the data collection instrument, the variables related to accounting program administrators' perceptions of the ENADE evaluation, their personal characteristics, and their evaluation use were obtained. Table 1 presents the data collection instrument variables plus their descriptions and measurements.
INEP provided the second data source utilized in this research. The INEP database contained the data related to the 2009 ENADE evaluation. Table 2 presents the variables tested in this study plus their descriptions and measurements.
Results and Discussion
The final sample comprised 322 institutions: 20 from the north, 38 from the central-west, 56 from the northeast, 125 from the southeast and 83 from the south of Brazil. After examining the data and the regression outcomes, it was possible to identify outliers among the respondents. Four surveys were identified as outliers and were excluded from all analyses. These surveys presented a standardized residual greater than three standard deviations from the mean standardized residual score and caused a heteroscedasticity problem. After the exclusion of the four outliers, two from central-west and two from southeastern institutions, no heteroscedasticity was verified in the multiple regressions.
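A minimal sketch of this outlier screen, assuming the merged survey and INEP data sit in a pandas DataFrame df with the variable names used in the models below: fit the regression, compute the standardized (internally studentized) residuals, and drop observations beyond three standard deviations.

```python
import statsmodels.formula.api as smf

# Fit one of the study's specifications (illustrative subset of predictors).
model = smf.ols("cpc_cont ~ use + hig_deg + adm_dep", data=df).fit()

# Internally studentized (standardized) residuals for each observation.
std_resid = model.get_influence().resid_studentized_internal

# Flag and drop observations more than three standard deviations out.
outlier_mask = abs(std_resid) > 3
df_clean = df.loc[~outlier_mask]  # re-estimate the models on df_clean
```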
Only the respondents who affirmed that they had read the 2006 ENADE evaluation report responded to the scale about the types of use plus misuse (n = 196). That scale was intended to capture the level of use by type, with the goal of creating variables to test the relationship between evaluation use and the programs' performance. However, the reliability and validity of the instrument needed to be examined before proceeding to the analysis (Devellis, 2011).
The internal consistency reliability and construct validity were assessed through a confirmatory factor analysis (CFA) conducted in SmartPLS 2.0 using partial least squares path modeling (PLS-PM) as an estimator. Cronbach's alpha indicated reliabilities greater than 0.7, suggesting that the responses were consistent across the latent variables within the scale for each construct: conceptual (α = .836), instrumental (α = .845), and persuasive (α = .727).
The construct validity was assessed through convergent and discriminant validity. The average variances extracted (AVE) were greater than 0.5, indicating convergent validity. Discriminant validity was assessed through a comparative analysis between the latent variable bivariate correlations and the composite reliabilities. The correlations ranged from 0.505 to 0.786, and the reliabilities ranged from 0.831 to 0.896, suggesting that the indicators were able to differentiate the constructs measured by each latent variable. Table 3 presents the scale items and their cross-loadings.
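The reliability and convergent-validity figures above can, in principle, be recomputed from the raw item responses and the standardized loadings. The sketch below shows the textbook formulas for Cronbach's alpha and the average variance extracted, assuming complete numeric response matrices; it is not the SmartPLS implementation used in the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix for one construct's indicators."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE from standardized outer loadings; values above 0.5 suggest
    convergent validity."""
    return float(np.mean(loadings ** 2))
```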
All of the evaluation use variables were tested in two stages, first using simple regression and second using multiple regression to verify the outcomes' robustness. The outcome variable in all of these regressions was the grades achieved by the programs in the 2009 ENADE evaluation. Table 4 shows the descriptive statistics of the outcome and control variables and Table 5 provides Pearson (above the diagonal) and Spearman (below the diagonal) correlations among the variables.
The explanatory variables were mostly defined from previous research on factors associated with undergraduate program performance in Brazil (all predictors are presented in Tables 1 and 2). The approach was to add variables related to evaluation use to test whether they contribute to program performance.
Four regressions were used to test the correlation between the use of the ENADE evaluation report and undergraduate accounting program performance:
cpc_cont = β0 + β1 use + ε (model 1)
cpc_cont = β0 + β1 use + β2 hig_deg + β3 north + β4 northeast + β5 central-west + β6 south + β7 univ_center + β8 college + β9 fiest + β10 adm_dep + ε (model 2)
cpc_cont = β0 + β1 use_int + ε (model 3)
cpc_cont = β0 + β1 use_int + β2 hig_deg + β3 north + β4 northeast + β5 central-west + β6 south + β7 univ_center + β8 college + β9 fiest + β10 adm_dep + ε (model 4)
The first test determined if there is a positive correlation between the use of the ENADE evaluation report and undergraduate accounting program performance. This test was performed through a simple and a multiple regression (model 1 and model 2, respectively).
The second test determined if there is a positive correlation between the intensity of use of the ENADE evaluation report and undergraduate accounting program performance. In this case, the variable use_int was tested through a simple and a multiple regression (model 3 and model 4, respectively). Table 6 shows the regression outcomes for models 1 through 4.
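A sketch of how models 1 through 4 might be estimated with statsmodels, assuming the analysis data sit in a DataFrame df and that hyphenated names such as central-west are renamed (e.g., central_west) so the formula parser accepts them:

```python
import statsmodels.formula.api as smf

# Model 1: grade regressed on the binary use dummy alone.
m1 = smf.ols("cpc_cont ~ use", data=df).fit()

controls = ("hig_deg + north + northeast + central_west + south"
            " + univ_center + college + fiest + adm_dep")

# Model 2: the same relationship with the control variables added.
m2 = smf.ols(f"cpc_cont ~ use + {controls}", data=df).fit()

# Models 3 and 4: the intensity-of-use score replaces the binary dummy.
m3 = smf.ols("cpc_cont ~ use_int", data=df).fit()
m4 = smf.ols(f"cpc_cont ~ use_int + {controls}", data=df).fit()

print(m2.summary())  # coefficients, t statistics, and R2 as in Table 6
```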
The first regression aimed to verify whether the binary variable use alone was sufficient to predict program performance. The positive and statistically significant coefficient of the variable tested indicates that the act of reading the 2006 ENADE evaluation report is positively correlated with the 2009 evaluation outcomes in the group researched. The low R2 is understandable because it was not assumed that program grades would be explained by evaluation report use alone. Additionally, previous research developed in Brazil has identified other important variables that are related to ENADE outcomes. Some of those variables were added to the model in the next regression to test whether the variable use would remain statistically significant.
The second regression showed that even in the presence of other control variables, the variable use remains statistically significant and positively correlated with program performance. Thus, this result corroborates the first, suggesting that the reading of the 2006 ENADE evaluation report was related to 2009 evaluation outcomes in the undergraduate accounting programs researched.
Another association tested in the second regression was the highest degree earned by the accounting programs' administrators and the 2009 evaluation outcomes. These results also indicate a statistically significant and positive correlation between administrators' academic degrees and the 2009 ENADE grades; in other words, the higher the academic title of a program administrator, the stronger the 2009 ENADE outcomes in the undergraduate accounting programs studied.
The other variables included in the second regression have already been tested by previous research on evaluations in Brazilian higher education. The negative coefficients indicate that accounting programs from the northern, central-west and northeastern regions presented lower grades than institutions from the southeast of Brazil in the group researched. Diaz (2007) found similar results, especially regarding the low performance of institutions from the northern region, although she studied the ENC evaluation system by examining different programs and using students' grades as an outcome variable.
Among the institutions researched, the university centers and colleges presented negative coefficients and, consequently, a lower performance in the 2009 ENADE when compared with universities. This result corroborates the findings of Moreira (2010), although she worked with different programs and used students' grades as an outcome variable. Lastly, the negative coefficient of the private institutions researched reveals that they had a lower performance in the 2009 ENADE than the public institutions. It is important to highlight that the regression assumptions were tested for both regressions, and a non-normal distribution of error terms was identified in the first regression.
Turning to the second test, the use_int variable measures the intensity of use, that is, the degree of utilization based on the diversity and volume of the types of use as indicated by the accounting programs' administrators through their agreement with the scale statements. The coefficient for this variable indicates that the intensity of 2006 ENADE evaluation report use is positively correlated with the 2009 evaluation outcomes in the group researched. Thus, the more the three types of use occurred jointly, the higher the program's grade. Again, the low R2 is understandable because it was not assumed that the programs' grades would be explained only by the intensity of evaluation report use. As in the first test, additional variables were added to the model to test whether the variable use_int would remain statistically significant.
As shown in Table 6, the use_int coefficient remains statistically significant and positively correlated with program performance in the 2009 ENADE evaluation even when the control variables are included in the model, presenting a slightly greater contribution (t = 2.5651) to that model than the use variable (t = 2.1895). Therefore, the intensity of use explained part of the programs' performance variance in the group researched. Compared with the prior multiple regression, the other variables retain the same sign and almost the same weight in relation to the outcome variable. Hence, substituting the variable use_int for the variable use in the model did not cause significant changes in the control variables' results or, consequently, in their regression analyses.
It is important to note that the simple regression (model 3) presented heteroscedasticity and a non-normal distribution of error terms but that in the multiple regression (model 4), after the inclusion of the control variables, these problems were solved. As in model 2, no multicollinearity among the variables was verified.
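These assumption checks correspond to standard diagnostics; the sketch below reuses the fitted m4 object from the earlier example. The specific tests shown (Breusch-Pagan for heteroscedasticity, Jarque-Bera for normality) are common choices and an assumption on our part, as the article does not name which tests were applied.

```python
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import jarque_bera
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Heteroscedasticity: Breusch-Pagan, H0 = homoscedastic errors.
bp_lm, bp_pvalue, _, _ = het_breuschpagan(m4.resid, m4.model.exog)

# Normality of the error terms: Jarque-Bera on the residuals.
jb_stat, jb_pvalue, _, _ = jarque_bera(m4.resid)

# Multicollinearity: variance inflation factor for each regressor.
vifs = {name: variance_inflation_factor(m4.model.exog, i)
        for i, name in enumerate(m4.model.exog_names)}
```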
The last analysis related to the impacts of evaluation use on program performance examined whether the types of use variables were correlated with program grades. The third test determined if there is a positive correlation between at least one type of use of the ENADE evaluation report and undergraduate accounting program performance. Table 7 presents the simple and multiple regression outcomes for the conceptual, instrumental and persuasive types of use variables, tested from the following models:
cpc_cont = β0 + β1 Concep + ε (model 5)
cpc_cont = β0 + β1 Inst + ε (model 6)
cpc_cont = β0 + β1 Pers + ε (model 7)
cpc_cont = β0 + β1 Concep + β2 hig_deg + β3 north + β4 northeast + β5 central-west + β6 south + β7 univ_center + β8 college + β9 fiest + β10 adm_dep + ε (model 8)
cpc_cont = β0 + β1 Inst + β2 hig_deg + β3 north + β4 northeast + β5 central-west + β6 south + β7 univ_center + β8 college + β9 fiest + β10 adm_dep + ε (model 9)
cpc_cont = β0 + β1 Pers + β2 hig_deg + β3 north + β4 northeast + β5 central-west + β6 south + β7 univ_center + β8 college + β9 fiest + β10 adm_dep + ε (model 10)
The conceptual type of use (model 5) presented a positive and statistically significant (p = 0.0022) coefficient that was correlated with program performance. Thus, the fact that program administrators had read the 2006 ENADE evaluation report to gather information about student perceptions and outcomes appears to be positively associated with the results obtained by the undergraduate accounting programs in the 2009 evaluation, considering the group researched.
Compared with the conceptual type of use, the second type, instrumental (model 6), presented a positive but less statistically significant (p = 0.0693) coefficient correlated with the undergraduate accounting programs' performance in the 2009 ENADE. This result indicates that using the 2006 ENADE evaluation report to make specific decisions produced a weaker association with the 2009 evaluation outcomes than using the report to learn about and better understand the evaluation outcomes.
As shown in Table 7, the persuasive coefficient (model 7) was statistically significant (p = 0.0482), indicating that, among the programs researched, using the 2006 ENADE results politically, such as to convince others or to reinforce a point of view in a negotiation or discussion, was positively correlated with accounting programs' 2009 evaluation outcomes.
However, all simple regressions related to the types of use presented a non-normal distribution of error terms, and the instrumental and persuasive regressions also presented a heteroscedasticity problem. Hence, multiple regressions were performed to test the robustness of the coefficients found in the simple regressions and to correct the problems related to the regression assumptions. Due to the multicollinearity that exists among the three types of use variables, they were not tested together.
According to the multiple regression results, in Table 7, the conceptual use variable (model 8) retains its statistical significance and its positive correlation with programs' 2009 ENADE evaluation outcomes even when the control variables are added to the model. Compared to the previous multiple regressions, the conceptual use proved to be the most relevant variable (p = 0.0035) among the evaluation use measures in the prediction of accounting programs' performance in the 2009 ENADE evaluation in the group researched. The control variables also retain the same association with the dependent variable verified in the previous multiple regressions.
The instrumental use variable (model 9) presented a greater statistical significance (p = 0.0279) for predicting accounting program performance in the 2009 ENADE in the presence of the control variables than the significance resulting from the simple regression. Additionally, the same positive correlation was verified, suggesting that the greater the instrumental use of the 2006 ENADE evaluation report, the greater the 2009 ENADE program performance, considering this study sample. Again, the control variables presented similar results to the previous regressions.
Table 7 indicates that no important variation occurred with the persuasive use variable or the control variables in the last multiple regression (model 10). The third type of use remains statistically significant (p = 0.0167) and positively correlated with the 2009 ENADE programs' performance. Hence, the regression outcomes suggest that the persuasive use of the 2006 ENADE evaluation report, verified among the institutions researched, is also related to their grades in the subsequent evaluation.
Analyzing the regression outcomes jointly revealed that the use of the 2006 ENADE evaluation report by the undergraduate accounting program administrators researched was related to improved program performance in the 2009 ENADE evaluation, independently of how this use was measured (binary, sum of factor scores, or individual factor scores). This suggests that use of the ENADE evaluation report should be incentivized to increase the chances of achieving an evaluation performance improvement through the enhancement of program quality (Patton, 2008).
Based on the regression results, it is also possible to affirm that the conceptual type of use was the most strongly correlated with accounting programs' performance in the 2009 ENADE evaluation in the group researched. This result is in accordance with previous studies indicating that the conceptual type of use was the most frequent and significant among evaluation users (McCormick, 1997; Shea, 1991).
The control variables presented stable behavior throughout the multiple regressions. The exploratory test of the program administrator's highest degree (hig_deg) remained statistically significant and positively correlated with 2009 ENADE program performance, suggesting that undergraduate accounting program administrators with a doctorate or a master's degree are associated with programs that achieved better performance. Thus, if better grades in the ENADE evaluation are desirable, then program administrators with the highest degrees should be preferred.
Concurrently, the other control variables already tested by previous research on Brazilian higher education evaluation demonstrated the usual results: the institutions from the northern, central-west and northeastern regions presented a lower performance than those from the southeastern region; university centers and colleges showed a lower performance than universities; and private institutions received lower grades than public institutions (Diaz, 2007; Moreira, 2010; Santos, 2012). Possible explanations for these results include the association between educational development and regional socioeconomic development, inasmuch as the north, northeast and central-west present the lowest socioeconomic indicators in Brazil; the more complex organizational and academic structure of universities, which may lead to better program performance when compared to colleges and university centers; and the possibility that public institutions attract more of the educationally best-prepared students than private institutions.
The non-normal distribution of error terms and the heteroscedasticity problems verified in the simple regressions were solved through the multiple regressions. The multiple regressions also presented no multicollinearity problems. Lastly, omitted-variable bias was tested using Stata's ovtest (the Ramsey RESET test), whose null hypothesis is that the model has no omitted-variable bias. The results suggest no evidence of omitted variables inasmuch as the p-values were above the usual 0.05 threshold for all multiple regressions.
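Stata's ovtest corresponds to the Ramsey RESET test, for which statsmodels offers an analogue; a sketch applied to the fitted models from the earlier examples, with degree=4 assumed to mirror Stata's default powers of the fitted values:

```python
from statsmodels.stats.outliers_influence import reset_ramsey

# Ramsey RESET: H0 = no omitted-variable bias from neglected functional form.
# A p-value above 0.05 fails to reject H0, as reported for all multiple
# regressions in this study.
for name, fitted in [("model 2", m2), ("model 4", m4)]:
    print(name, reset_ramsey(fitted, degree=4))
```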
Conclusions
Some characteristics of accounting programs in Brazil make accounting education peculiar, especially as concerns the students (Mamede, Marques, Rogers, & Miranda, 2015). For instance, the students typically come from families with lower socioeconomic status; most of them are part-time students, and there is high demand for evening programs. In addition, the accounting restructuring that resulted from the adoption of the international financial reporting standards has required curriculum and knowledge updates, impacting accounting education in Brazil (Carvalho & Salotti, 2012). In this context, program evaluation could be a powerful tool for comprehending and managing educational institutions, providing information that helps them to better understand themselves and their outcomes.
Additionally, recent results from the accountants' professional exam in Brazil have caused some concern regarding Brazilian accounting education (Miranda, 2011). The high failure rate among newly graduated students may indicate a knowledge shortfall, which would induce accounting programs to seek quality improvement.
The key conclusion based on the evidence yielded by this research is that ENADE evaluation report use is positively correlated with undergraduate accounting program performance in the subsequent evaluation, independently of how the ENADE evaluation report use was measured (by the reading of the report, by the types of use described, or by the intensity of use represented by the sum of the types of use). Therefore, actions to increase the potential use of that report among program administrators should be incentivized.
Considering that the grades achieved by the programs in the evaluation process reflect their quality, the regression results suggest that the information presented in the ENADE evaluation report can help undergraduate programs to better understand themselves and to improve their decision making process. Hence, the potential benefits from the evaluation report utilization indicate that efforts should be made to convince the nonusers to read the report.
In addition, the feedback provided by this study allows the Ministry of Education in Brazil to better understand the impact and usefulness of the reports developed through the National Exam of Students' Performance and to make decisions aimed at increasing users' potential interest in the evaluation outcomes. It is important to highlight that concerns regarding the utilization of higher education evaluation results and products are present in the Brazilian educational evaluation literature (Souza & Oliveira, 2003; Verhine, Dantas, & Soares, 2006; Vianna, 2009).
More specifically, the Brazilian program of higher education evaluation can contribute to changes in laws, regulations, and educational management and, in particular, the ENADE evaluation report can influence decisions about didactic-pedagogical organization, curriculum adequacy, and institutional infrastructure, aiming to contribute to the betterment of higher education quality.
Inasmuch as a positive association between ENADE evaluation report use and educational institution performance has been verified, and considering that, according to the evaluation utilization literature, use can have a broad organizational effect, this study produced evidence of the relevance of evaluation utilization to program management. The question is then raised as to whether that use is also associated with other aspects of the educational institutions that were not examined in this research.
Therefore, because the ENADE reports are already produced by the INEP after the evaluation process, promoting the use of the evaluation findings is only a matter of stimulus and knowledge about the potential usefulness of this managerial instrument. Through its results, this study reinforces the idea that undergraduate accounting institutions can improve their internal understanding by using the ENADE evaluation report, which would also contribute to improving the programs.
The main limitations of this study are (a) the utilization of retrospective actions as a way to recognize use and the occurrences of types of use, and (b) the utilization of a large-scale test as part of the measurement of the quality of the programs.
The data collected through the scale application were based on past events derived from reading the ENADE evaluation report. Hence, memory was the basis of the answers and experiences reported. In this case, the limitation associated with the use of memory in the process of gathering information is the fact that memories may not be reliable.
Inasmuch as students may not take seriously the large-scale test used by the Brazilian Ministry of Education to evaluate the quality of programs (Leitão, Moriconi, Abrão, & Silva, 2010), the test outcomes may not represent the students' knowledge. Consequently, the programs' grades may be affected because the large-scale test outcome is a relevant variable in the definition of the programs' performance, which was correlated with the utilization of the ENADE evaluation report in this study. Any imprecision in these data would therefore influence the results and analyses of this research.
Lastly, the results presented in this research cannot be generalized because they did not come from a probabilistic sample. Therefore, the conclusions derived from this research are applicable only to the group of program administrators and accounting programs studied.
Some recommendations for future research can be derived from this study experience and results: (a) an investigation of evaluation use by different stakeholders, (b) a measurement of the impact of evaluation use at the student level, and (c) research on evaluation use at programs from other fields of knowledge.
This study considered the undergraduate accounting program administrators to be the main stakeholders and the only research subjects. Thus, all analyses were based on those stakeholders' viewpoints and answers. Other potential users, such as professors and college or university deans, could be the subjects of future research on ENADE evaluation report utilization.
Another research alternative would be to change the outcome variable and the statistical approach used in the analysis about the impact of evaluation utilization. Instead of using the programs' performance (grades), the students' grades could be used as the outcome variable, and a hierarchical linear model (HLM) could be performed. Hence, aside from verifying the impact of evaluation utilization only on the program level, it would be possible to also verify it on the student level, increasing the understanding of the relationship between evaluation utilization and the program and student performances.
Finally, other fields of knowledge could also be the object of studies on ENADE evaluation report utilization. Comparative studies among programs in different fields or analyses of other single-field programs could be performed to examine the impact of evaluation utilization on program performance.
References
- Alkin, M. C. (2010). Evaluation essentials: from a to z. New York: The Guilford Press.
- Alkin, M. C., & Taut, S. M. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29(1), 1-12. http://dx.doi.org/10.1016/s0191-491x(03)90001-0
- Alon, A. (2012). The IFRS question: to adopt or not? Advances in Accounting Education: Teaching and Curriculum Innovations, 13, 405-423. http://dx.doi.org/10.1108/S1085-4622(2012)0000013021
- Braskamp, L. A. (1982). A definition of use. Studies in Educational Evaluation, 8(2), 169-174. http://dx.doi.org/10.1016/0191-491x(82)90009-8
- Burlamaqui, M. G. B. (2008). Avaliação e qualidade na educação superior: tendências na literatura e algumas implicações para o sistema de avaliação brasileiro. Estudos em Avaliação Educacional, 19(39), 133-154. http://dx.doi.org/10.18222/eae193920082473
- Carvalho, L. N., & Salotti, B. M. (2012). Adoption of IFRS in Brazil and the consequences to accounting education. Issues in Accounting Education, 28(2), 235-242. http://dx.doi.org/10.2308/iace-50373
- Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56(3), 331-364. http://dx.doi.org/10.3102/00346543056003331
- Delaney, J., & Coe, M. J. (2008). Does ethics instruction make a difference? Advances in Accounting Education, 9, 233-250. http://dx.doi.org/10.1016/S1085-4622(08)09011-1
- Devellis, R. F. (2011). Scale development: theory and applications (3rd ed.). Thousand Oaks, CA: Sage.
- Diaz, M. D. M. (2007). Efetividade no ensino superior brasileiro: aplicação de modelos multinível à análise dos resultados do exame nacional de cursos. Revista EconomiA, 8(1), 93-120.
- Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2), 158-175. http://dx.doi.org/10.1177/1098214008331009
- Gatti, B. A. (2002). Avaliação educacional no Brasil: pontuando uma história de ações. EccoS Revista Científica, 4(1), 17-41. http://dx.doi.org/10.5585/eccos.v4i1.291
- Gatti, B. A. (2009). Avaliação de sistemas educacionais no Brasil. Revista de Ciências da Educação, (9), 7-18.
- Glover, H., & Werner, E. M. (2015). Teaching IFRS: options for instructors. Advances in Accounting Education: Teaching and Curriculum Innovations, 16, 113-131. http://dx.doi.org/10.1108/S1085-462220150000016006
- Henry, G. T., & Mark, M. M. (2003). Beyond use: understanding evaluation's influence on attitudes and actions. American Journal of Evaluation, 24(3), 293-314. http://dx.doi.org/10.1177/109821400302400302
- Jackling, B., Howieson, B., & Natoli, R. (2012). Some implications of IFRS adoption for accounting education. Australian Accounting Review, 22(4), 331-340. http://dx.doi.org/10.1111/j.1835-2561.2012.00197.x
- Johnson, R. B. (1998). Toward a theoretical model of evaluation utilization. Evaluation and Program Planning, 21(1), 93-110. http://dx.doi.org/10.1016/s0149-7189(97)00048-7
- Leitão, T. M. S. de P., Moriconi, G. M., Abrão, M., & Silva, D. S. da (2010). Análise acerca do boicote dos estudantes aos exames de avaliação do ensino superior. Estudos em Avaliação Educacional, 21(45), 87-106. http://dx.doi.org/10.18222/eae214520102028
- Leviton, L. C., & Hughes, E. F. X. (1981). Research on the utilization of evaluations: a review and synthesis. Evaluation Review, 5(4), 525-548. http://dx.doi.org/10.1177/0193841x8100500405
- Lundquist, R. (1997). Quality systems and ISO 9000 in higher education. Assessment & Evaluation in Higher Education, 22(2), 159-172. http://dx.doi.org/10.1080/0260293970220205
- Mamede, S. P. N., Marques, A. V. C., Rogers, P., & Miranda, G. J. (2015). Psychological determinants of academic achievement in accounting: evidence from Brazil [Special Edition - BBR Conference]. Brazilian Business Review, 50-71. http://dx.doi.org/10.15728/bbrconf.2015.3
- McCormick, E. R. (1997). Factors influencing the use of evaluation results (Doctoral dissertation). University of Minnesota, Minneapolis, MN, United States.
- Miranda, G. J. (2011). Relações entre as qualificações do professor e o desempenho discente nos cursos de graduação em contabilidade no Brasil (Tese de doutorado). Universidade de São Paulo, São Paulo, Brasil.
- Moreira, A. M. A. (2010). Fatores institucionais e desempenho acadêmico no Enade: um estudo sobre os cursos de biologia, engenharia civil, história e pedagogia (Tese de doutorado). Universidade de Brasília, Brasília, Brasil.
- Owen, J. M., & Lambert, F. C. (1995). Roles for evaluation in learning organizations. Evaluation, 1(2), 237-250. http://dx.doi.org/10.1177/135638909500100207
- Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
- Preskill, H., & Caracelli, V. (1997). Current and developing conceptions of use: evaluation use TIG survey results. Evaluation Practice, 18(3), 209-225. http://dx.doi.org/10.1016/s0886-1633(97)90028-3
- Santos, N. A. (2012). Determinantes do desempenho acadêmico dos alunos dos cursos de ciências contábeis (Tese de doutorado). Universidade de São Paulo, São Paulo, Brasil.
- Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: theories of practice. Thousand Oaks, CA: Sage.
- Shea, M. P. (1991). Program evaluation utilization in Canada and its relationship to evaluation process, evaluator and decision context variables (Doctoral dissertation). University of Windsor, Windsor, Ontario, Canada.
- Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: theory, research, and practice since 1986. American Journal of Evaluation, 18(1), 195-208. http://dx.doi.org/10.1177/109821409701800121
- Souza, S. Z. L. de, & Oliveira, R. P. de (2003). Políticas de avaliação da educação e quase mercado no Brasil. Educação & Sociedade, 24(84), 873-895. http://dx.doi.org/10.1590/S0101-73302003000300007
- Suddaby, R., Cooper, D. J., & Greenwood, R. (2007). Transnational regulation of professional services: governance dynamics of field level organizational change. Accounting, Organizations and Society, 32(4/5), 333-362. http://dx.doi.org/10.1016/j.aos.2006.08.002
- Ursin, J., Huusko, M., Aittola, H., Kiviniemi, U., & Muhonen, R. (2008). Evaluation and quality assurance in Finnish and Italian universities in the Bologna process. Quality in Higher Education, 14(2), 109-120. http://dx.doi.org/10.1080/13538320802278222
- Van Kemenade, E., Pupius, M., & Hardjono, T. W. (2008). More value to defining quality. Quality in Higher Education, 14(2), 175-185. http://dx.doi.org/10.1080/13538320802278461
- Verhine, R. E., Dantas, L. M. V., & Soares, J. F. (2006). Do provão ao ENADE: uma análise comparativa dos exames nacionais utilizados no ensino superior brasileiro. Ensaio: Avaliação e Políticas Públicas em Educação, 14(52), 291-310. http://dx.doi.org/10.1590/S0104-40362006000300002
- Vianna, H. M. (2009). Fundamentos de um programa de avaliação educacional. Meta: Avaliação, 1(1), 11-27. http://dx.doi.org/10.18222/eae246020143314
- Zoghbi, A. C. P., Oliva, T. B., & Moriconi, M. G. (2010). Aumentando a eficácia e a eficiência da avaliação do ensino superior: a relação entre o Enem e o Enade. Estudos em Avaliação Educacional, 21(45), 45-66. http://dx.doi.org/10.18222/eae214520102024
Publication Dates
- Publication in this collection: Nov-Dec 2016

History
- Received: 22 Nov 2015
- Reviewed: 08 May 2016
- Accepted: 14 May 2016