7 February 2012
The past 20 years have seen a remarkable rise in interest in making education measurable. Professor Gert Biesta, speaker at the APS conference of 7 March, poses the question: do we value what we measure, or do we measure what we value? This brings us back to the essential question: what is education for? Below is an article by Biesta, based on his book, along with a video interview with him.
Introduction: Valuing what we measure or measuring what we value?
The past 20 years have seen a remarkable rise in interest in the measurement of education or, in the lingo of the educational measurement culture, the measurement of educational ‘outcomes.’ Perhaps the most prominent manifestation of this phenomenon can be found in international comparative studies such as the Trends in International Mathematics and Science Study (TIMSS), the Progress in International Reading Literacy Study (PIRLS) and OECD’s Programme for International Student Assessment (PISA). These studies, which result in league tables that are assumed to indicate who is better and who is best, are intended to provide information about how national education systems perform compared to those of other countries and are thus generally competitive in their outlook. Findings are utilised by national governments to inform educational policy, often under the banner of ‘raising standards.’
League tables are also produced at national level with the aim of providing information about the relative performance of individual schools or school districts. Such league tables have a complicated rationale, combining accountability and choice elements with a social justice argument which says that everyone should have access to education of the same quality. At the same time, the data used for producing such league tables are used to identify so-called ‘failing schools’ and, in some cases, ‘failing teachers’ within schools. The irony of these arguments is that accountability is often limited to choice from a set menu and thus lacks a real democratic dimension (see Biesta 2004a), that the elasticity of school choice is generally very limited, and also that equality of opportunity hardly ever translates into equality of outcomes because of the role of structural factors that are beyond the control of schools and teachers, thus also undermining part of the ‘blame and shame’ culture of school failure (see Tomlinson 1997; Nicolaidou & Ainscow 2005; Hess 2006; Granger 2008).
Interest in the measurement of educational outcomes has not been restricted to the construction of league tables. The measurement of outcomes and their correlation with educational ‘input’ is also central to research which aims to provide an evidence-base for educational practice (see Biesta 2007a). Proponents of the idea that education should be transformed into an evidence-based profession argue that it is only through the conduct of large-scale experimental studies – the randomised controlled field trial being the ‘gold standard’ – and careful measurement of the correlation between input and output, that education will be able to witness “the kind of progressive, systematic improvement over time that has characterized successful parts of our economy and society throughout the twentieth century, in fields such as medicine, agriculture, transportation and technology” (Slavin 2002, p.16). In the USA the reauthorization in 2001 of the Elementary and Secondary Education Act (‘No Child Left Behind’) has resulted in a situation where federal research funding is only available for research which utilises this particular methodology in order to generate scientific knowledge about ‘what works.’
An important precursor of many of these developments can be found in research on school effectiveness, which played an influential role in discussions about educational change and improvement from the early 1980s onwards (see Townsend 2001; Luyten et al. 2005). While the research initially focused on overall school and administrative variables, later work increasingly paid attention to the dynamics of teaching and learning in order to identify the variables that matter in making schooling more effective. With it came a shift towards a narrower view of relevant outcomes and outputs (see, e.g., Rutter & Maughan 2002; Gray 2004). In recent years the movement as a whole seems to have become more interested in the wider question of school improvement rather than just issues concerning effectiveness (see, e.g., Townsend 2007). Notwithstanding this, the school effectiveness and improvement movement has played an important role in establishing the idea that educational outcomes can and should be measured.
The rise of the measurement culture in education has had a profound impact on educational practice, from the highest levels of educational policy at national and supra-national level down to the practices of local schools and teachers. To some extent this impact has been beneficial, as it has allowed discussions to be based on factual data rather than just assumptions or opinions about what might be the case. The problem, however, is that the abundance of information about educational outcomes has given the impression that decisions about the direction of educational policy and the shape and form of educational practice can be based solely upon factual information. Although this is increasingly what is happening in discussions about education in the wake of international comparisons, league tables, accountability, evidence-based education and effective schooling, there are two (obvious) problems with this way of thinking.
Gert Biesta is affiliated with The Stirling Institute of Education, University of Stirling, UK.