7 February 2012
The past 20 years have seen a remarkable rise in interest in making education measurable. Professor Gert Biesta, speaker at the APS conference of 7 March, asks: do we value what we measure, or do we measure what we value? This forces us back to the essential question: what is education for? Below is an article by Biesta based on his book, together with a video interview with him.
Introduction: Valuing what we measure or measuring what we value?
The past 20 years have seen a remarkable rise in interest in the measurement of education or, in the lingo of the educational measurement culture, the measurement of educational ‘outcomes.’ Perhaps the most prominent manifestation of this phenomenon can be found in international comparative studies such as the Trends in International Mathematics and Science Study (TIMSS), the Progress in International Reading Literacy Study (PIRLS) and OECD’s Programme for International Student Assessment (PISA). These studies, which result in league tables that are assumed to indicate who is better and who is best, are intended to provide information about how national education systems perform compared to those of other countries and are thus generally competitive in their outlook. Findings are utilised by national governments to inform educational policy, often under the banner of ‘raising standards.’
League tables are also produced at national level with the aim of providing information about the relative performance of individual schools or school districts. Such league tables have a complicated rationale, combining accountability and choice elements with a social justice argument which says that everyone should have access to education of the same quality. At the same time, the data used for producing such league tables are used to identify so-called ‘failing schools’ and, in some cases, ‘failing teachers’ within schools. The irony of these arguments is that accountability is often limited to choice from a set menu and thus lacks a real democratic dimension (see Biesta 2004a), that the elasticity of school choice is generally very limited, and also that equality of opportunity hardly ever translates into equality of outcomes because of the role of structural factors that are beyond the control of schools and teachers, thus also undermining part of the ‘blame and shame’ culture of school failure (see Tomlinson 1997; Nicolaidou & Ainscow 2005; Hess 2006; Granger 2008).
Interest in the measurement of educational outcomes has not been restricted to the construction of league tables. The measurement of outcomes and their correlation with educational ‘input’ is also central to research which aims to provide an evidence-base for educational practice (see Biesta 2007a). Proponents of the idea that education should be transformed into an evidence-based profession argue that it is only through the conduct of large-scale experimental studies – the randomised controlled field trial being the ‘gold standard’ – and careful measurement of the correlation between input and output, that education will be able to witness “the kind of progressive, systematic improvement over time that has characterized successful parts of our economy and society throughout the twentieth century, in fields such as medicine, agriculture, transportation and technology” (Slavin 2002, p.16). In the USA the reauthorization in 2001 of the Elementary and Secondary Education Act (‘No Child Left Behind’) has resulted in a situation where federal research funding is only available for research which utilises this particular methodology in order to generate scientific knowledge about ‘what works.’
An important precursor of many of these developments can be found in research on school effectiveness, which played an influential role in discussions about educational change and improvement from the early 1980s onwards (see Townsend 2001; Luyten et al. 2005). While the research initially focused on overall school and administrative variables, later work increasingly paid attention to the dynamics of teaching and learning in order to identify the variables that matter in making schooling more effective. With it came a shift towards a narrower view of relevant outcomes and outputs (see, e.g., Rutter & Maughan 2002; Gray 2004). In recent years the movement as a whole seems to have become more interested in the wider question of school improvement rather than just issues concerning effectiveness (see, e.g., Townsend 2007). Notwithstanding this, the school effectiveness and improvement movement has played an important role in promoting the idea that educational outcomes can and should be measured.
The rise of the measurement culture in education has had a profound impact on educational practice, from the highest levels of educational policy at national and supra-national level down to the practices of local schools and teachers. To some extent this impact has been beneficial, as it has allowed discussions to be based on factual data rather than just assumptions or opinions about what might be the case. The problem, however, is that the abundance of information about educational outcomes has given the impression that decisions about the direction of educational policy and the shape and form of educational practice can be based solely upon factual information. Although this is increasingly what happens in discussions about education in the wake of international comparisons, league tables, accountability, evidence-based education and effective schooling, there are two (obvious) problems with this way of thinking.
Gert Biesta, The Stirling Institute of Education, University of Stirling, UK