Document Type : Research Article

Author

Department of English, Tarbiat Modares University, Tehran, Iran

Abstract

1. Introduction
In the current era of testing-based accountability policies and practices, high-stakes tests such as university entrance exams are widely perceived to carry immense importance for the people and institutions involved, because they induce a largely uniform curriculum through a renewed focus on what is measured. In fact, high-stakes tests have the potential to shape curricular teaching and learning. This consequential alignment of teaching and learning with testing, i.e., 'test washback', has recently encouraged policymakers to press for reform in situations where high-stakes tests can be deliberately employed to promote standards of teaching, accountability, and powerful learning. Such macro-policies have formed the key concerns of many reformers around the globe, including in Iran. However, very few studies have scrutinized how these policies are conceptualized at their planning phase, or the dilemmas and challenges anticipated for their implementation. The present study aimed to unveil the macro-policies, plans, values, and conceptualizations underlying the perspectives of a community of policymakers and planners working toward the gradual substitution of the University Entrance Examinations (UEEs) with the High School National Achievement Examinations (HNAEs) and students' academic background, in a test-change context in Iran.
2. Methodology
The present study, based on in-depth interviews with 14 high-ranking policymakers and proponents of the UEEs reform, detailed their conceptualization of this change in terms of its underlying policies, prospects, and perspectives. The participants had varying levels of experience in education (management, evaluation, and teaching) and ranged in age from 40 to 55. An interview guide was developed to suit the qualitative nature of the required data. Strauss and Corbin's (1990) 'Paradigm Model' of qualitative data analysis was used as a tool for identifying thematic categories and subcategories. This model is a data-driven conceptual model that works on the basis of a series of causal/consequential relationships among the categories or themes.
3. Discussion
The initial database yielded a template that revealed participants' understanding of the given situation, the logic underpinning their planning, their examination of the problems, and their prospects for the programme's future. Three major themes finally emerged: (1) the induction of the intended consequences; (2) the value of multiple-approach assessment of learners' knowledge and abilities; and (3) the significance of the prerequisites and challenges for implementation: current and future trends.

(1) According to all respondents, the main rationale behind the UEEs reform was to counterbalance the negative impacts of the objective UEEs and to trigger intended positive effects on curriculum, instruction, and learning via the HNAEs. They felt that such potential can be actualized by opting for 'systemic validity', defined by Frederiksen and Collins (1989) as a process that sparks off constructive positive influences on teaching and learning. Most interviewees believed that both the quality and quantity of teaching and learning can be improved by changing the UEEs-based programme to one that values school instruction and aligns assessment modes and means with constructivist concepts in education. The conceptual picture of 'Consequences' is not limited to systemic validity; it also embraces test-related factors such as fairness and psychometric characteristics. The governmental policies on which the HNAEs were and are built rest on premises of fairness and social equity. In relation to measuring real abilities through fair measures, the respondents questioned psychometric traditions for decision-making about candidates' abilities. They all converged in the belief that the UEEs, with their sizable proportion of memorization-based items, are not fair measures for selecting students.
(2) All emerging themes and subthemes revealed a progression away from a 'measurement culture', which limits students' performance to tightly specified skills captured at specific times, towards an 'edumetrics culture' (Segers & Dochy, 2001). The themes 'integrating qualitative measurement modes such as interviews or oral communications for specific majors (e.g., English Language majors or the arts)', 'keeping an ongoing record of students' performance from the beginning of high school through graduation', 'exploiting regular formative assessments rather than conventional summative ones per se', 'integrating IT in assessing students' learning', and 'designing and administering standard tests of educational progress (like the SAT, for instance) several times rather than once a year' support this assertion. (3) The informants' descriptions raised shared concerns about the provision of logistics, ranging from the allocation of financial, material, and human resources to timely collaboration and communication between the two ministries of Education and of Science, Research and Technology (MSRT) and the National Organization of Educational Testing (NOET). Beyond these requirements, the data raised further concerns that would pose serious challenges in the future. The informants also expressed doubts about the 'discrimination power' of the HNAEs (scored 0-20) compared with the standardized UEEs, and argued that a comparable level of 'test anxiety and stress' would likely be spread over the four years of high school.
4. Conclusion

In Iran, the policymakers' choice between the two competing admission practices testifies that their tendency to shift to the directing function of the HNAEs resulted from the dynamics of their power. It is within the realm of such power that the unintended washback associated with the selecting function of high-stakes tests seems to be controlled. Motivated by current debates on evaluating changed programmes, this study contributes to the literature by exploring the planning/policy phase, rather than evaluating only the final products, as is common in traditional evaluations. Policy/planning-phase analyses can establish a baseline for the subsequent evaluation of any programme, revealing a constellation of factors that might undermine the intended policies, visions, or missions of that programme. In this study, only partial congruence was found between the policy and the desired outcomes of the HNAEs programme, which may reduce the likelihood that the ideals intended by the underpinning policies are realized. Such concerns are not unique to Iran but arise in other systems as well.

Keywords

References (in Persian)
1. Gholipoor, R., & Aghabozorgi, M. (2005). A report on the article for the student selection program of Iran's universities. Research Center of the Parliament Archive.
2. Hajforoush, A. (2002, May). Negative consequences of entrance exams on instructional objectives and a proposal for removing them. Proceedings of the Esfahan University Conference on Evaluating the Issues of the Entrance Exams, Esfahan University, 77-125.
3. The Council of Iran's Expediency (2005). Iran's 20-year vision. Retrieved from http://www.majma.ir/Contents.aspx?p=67ee04aa-7171-4f72-bdf7-e6f68c3547e5
4. Supreme Council of Cultural Revolution of Islamic Republic of Iran (2009). Iran's comprehensive science roadmap. Retrieved from http://www.iranculture.org/fa/Default.aspx?current=viewDoc&currentID=736
5. Kia, A., & Bozorgi, K. (2006). Comments on the proposal for student admission to higher education universities. Research Center of the Parliament Report 8247.
6. Kiamanesh, A. R. (2000). Educational evaluation. Tehran: Payam-e Noor Publication.
7. Ministry of Education of Islamic Republic of Iran (2009). Roadmap of the official and general education. Retrieved from http://www.sce.ir/
8. Shojaee, M., & Gholipoor, R. (2005). Recommended draft for surveying the university student admission system and designing a model of student acceptance. Research Center of the Parliament Report 7624.
9. The Parliament of Islamic Republic of Iran (2007). The Act of student admission to universities. Parliament Archive 132730.
10. Ministry of Education of Islamic Republic of Iran (2008). The document of innovations in the Ministry of Education of Iran. Ministry Archive.
11. The Document of National Curriculum (2010). Ministry of Education of the Islamic Republic of Iran. Ministry Archive.
 
References (in English)
1. Broadfoot, P. (1996). Education, assessment, and society. Philadelphia: Open University Press.
2. Frederiksen, J., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18, 27-32.
3. Glenwright, P. (2002). Language proficiency assessment for teachers: The effects of benchmarking on writing assessment in Hong Kong schools. Assessing Writing, 8(2), 84-109.
4. Hargreaves, A., Earl, L., & Schmidt, M. (2002). Perspectives on alternative assessment reform. American Educational Research Journal, 39(1), 69-95.
5. Kennedy, C. (1988). Evaluation of the management of change in ELT projects. Applied Linguistics, 9(4), 329-342.
6. Kiany, GH. R., & Shayestefar, P. (2011). High school students' perceptions of EFL teacher control orientations and their English academic achievement. British Journal of Educational Psychology, 81(3), 491-508.
7. Markee, N. (1990, March). The diffusion of communicative innovations and classroom culture: An ethnographic study. Paper presented at the 24th Annual TESOL Convention, San Francisco, CA.
8. McNamara, T., & Roever, C. (2006). Language testing: The social dimension. Oxford: Blackwell Publishing.
9. Nagy, P. (2000). The three roles of assessment: Gatekeeping, accountability, and instructional diagnosis. Canadian Journal of Education, 25(4), 262-279.
10. Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.
11. Popham, W. J. (1987). The merits of measurement-driven instruction. Phi Delta Kappan, 68, 679-682.
12. Rogers, E. M. (1983). The diffusion of innovations (3rd ed.). London: Macmillan.
13. Segers, M., & Dochy, F. (2001). New assessment forms in problem-based learning: The value added of students' perspective. Studies in Higher Education, 26(3), 327-343.
14. Shohamy, E. (1998). Critical language testing and beyond. Studies in Educational Evaluation, 24, 331-345.
15. Stake, R. E. (1975). Evaluating the arts in education: A responsive approach. Columbus, OH: Merrill.
16. Stiggins, J. R. (1990). Toward a relevant classroom assessment research agenda. Alberta Journal of Educational Research, 36(1), 92–97.
17. Stoller, F. (1994). The diffusion of innovations in intensive ESL programs. Applied Linguistics, 15(3), 300-327.
18. Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.
19. Wall, D. (1997). Impact and washback in language testing. In C. Clapham & D. Corson (Eds.), Testing and assessment: The Kluwer encyclopedia of language in education (Vol. 7, pp. 291-302). Netherlands: Kluwer Academic.
20. Wall, D., & Alderson, J. C. (1993). Examining washback: The Sri Lankan impact study. Language Testing, 10, 41-69.
21. Wall, D., & Horak, T. (2008). The impact of changes in the TOEFL examination on teaching and learning in Central and Eastern Europe: Phase 2, coping with change. Princeton, NJ: Educational Testing Service.