
Sunday, March 23, 2014

Value Added (to the coffers of Pearson and the testing companies) but not to students.


Value-added margins of error have been known to range from the mid-60s to 109%. Can't wait until a third of my evaluation as a teacher depends on this:


"Covariate Adjustment Model
The statistical value-added model implemented for the State of Florida is typically referred to as a covariate adjustment model (McCaffrey et al., 2004), as the current-year observed score is conditioned on prior levels of student achievement as well as other possible covariates that may be related to the selection of students into classrooms.
In its most general form, the model can be represented as:
$$y_{ti} = X_i\beta + \sum_{r=1}^{L} y_{t-r,i}\,\gamma_{t-r} + \sum_{q=1}^{Q} Z_{qi}\,\theta_q + e_i$$
where y_ti is the observed score at time t for student i, X_i is the model matrix for the student- and school-level demographic variables, β is a vector of coefficients capturing the effect of any demographics included in the model, y_(t-r,i) is the observed lag score at time t-r (r ∈ {1,2,…,L}), γ is the coefficient vector capturing the effects of lagged scores, and Z_qi is a design matrix with one column for each unit in q (q ∈ {1,2,…,Q}) and one row for each student record in the database. The entries in the matrix indicate the association between the test represented in the row and the unit (e.g., school, teacher) represented in the column. We often concatenate the sub-matrices such that Z = {Z_1,…,Z_Q}. θ_q is the vector of effects for the units within a level; for example, it might be the vector of school or teacher effects, which may be estimated as random or fixed effects. When the vector of effects is treated as random, we assume θ_q ~ N(0, σ_(θ_q)^2) for each level of q.
Corresponding to Z = {Z_1,…,Z_Q}, we define θ' = (θ_1',…,θ_Q'). In the subsequent sections, we use the notation δ' = {β',γ'} and W = {X, y_(t-1), y_(t-2),…,y_(t-L)} to simplify computation and explanation.
Note that all test scores are measured with error, and that the magnitude of the error varies over the range of test scores. Treating the observed scores as if they were the true scores introduces a bias in the regression and this bias cannot be ignored within the context of a high stakes accountability system. Our approach to incorporating measurement error in the model is described in a later section."
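To make the quoted model concrete, here is a minimal sketch of a covariate adjustment fit. Everything in it is a simplification of my own: one lag score (L = 1), teacher effects estimated as fixed effects by ordinary least squares, no demographic covariates, and no correction for measurement error in the lag scores, so this is an illustration of the general idea, not Florida's actual specification.

```python
import numpy as np

# Minimal sketch of a covariate adjustment fit, assuming one lag score
# (L = 1) and teacher fixed effects. Florida's model can treat effects as
# random and corrects for measurement error; this sketch does neither.
rng = np.random.default_rng(1)

n_teachers, class_size = 10, 30
n = n_teachers * class_size
teacher = np.repeat(np.arange(n_teachers), class_size)

true_theta = rng.normal(0.0, 5.0, n_teachers)  # hypothetical teacher effects
y_lag = rng.normal(300.0, 30.0, n)             # prior-year scores
gamma = 0.8                                    # persistence of the lag score
y = gamma * y_lag + true_theta[teacher] + rng.normal(0.0, 10.0, n)

# W = [y_{t-1}, Z]: the lag score plus one indicator column per teacher
Z = np.zeros((n, n_teachers))
Z[np.arange(n), teacher] = 1.0
W = np.column_stack([y_lag, Z])

coef, *_ = np.linalg.lstsq(W, y, rcond=None)
gamma_hat, theta_hat = coef[0], coef[1:]
print(f"gamma_hat = {gamma_hat:.3f}  (true value 0.8)")
```

The teacher "value added" is whatever coefficient lands on each indicator column after the lag score has soaked up prior achievement; everything the model omits ends up in those coefficients too.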
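Why the margins of error run so wide is easy to see with a back-of-the-envelope simulation. The class size, score spread, and effect size below are hypothetical numbers of my own choosing, but the arithmetic is general: when a teacher's effect is estimated from a single class of students, the noise around the class average can easily swamp the effect being measured.

```python
import numpy as np

# Back-of-the-envelope simulation (all numbers hypothetical): estimate one
# teacher's effect from a single class and see how wide the 95% margin of
# error is relative to the effect itself.
rng = np.random.default_rng(0)

class_size = 25        # students contributing to one teacher's estimate
true_effect = 5.0      # teacher's contribution, in scale-score points
student_sd = 20.0      # student-level variation unrelated to the teacher
n_sims = 2000

# each simulated class: observed mean growth = effect + averaged noise
estimates = np.array([
    (true_effect + rng.normal(0.0, student_sd, class_size)).mean()
    for _ in range(n_sims)
])

margin = 1.96 * estimates.std()  # approximate 95% margin of error
print(f"effect: {true_effect:.1f} points, margin of error: +/-{margin:.1f}")
```

With these made-up but plausible numbers, the margin of error comes out larger than the effect itself, i.e., over 100% of the quantity being measured, which is the same order as the ranges quoted above.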
