What does learning look like?

(Quotations from eLASTIC: Pulling and stretching what it means to learn and assess art and educational progress. Studies in Art Education, 55(2), 128-142.)

Tugging at formative assessment models

Certainly, it could be said that such traditional formative assessments as in-process critiques (self-, peer-, and/or teacher-driven) do contribute to learning processes. But this eLASTIC research pushes formative assessment further by asking: what if data from formative assessments could be accumulated and used for large-scale and/or longitudinal evaluation?

Connective, critical and visual art thinking extend assessment

One of the most eloquent passages regarding the value of connective learning and thinking comes, I believe, from Houghton Mifflin’s online Tutorama of ten years ago:

Anyone can be trained to accurately summarize what they’ve read: the creative aspect of thinking emerges when connections are made between the texts you’ve read; between what you’ve read and your own experience; and between what you’ve read and thought in the past and what you’re coming to think now. By learning how to make connections, you will learn how to make ideas mobile and active, and this is the habit of mind that is most highly rewarded both inside and outside the academy (Miller & Spellmeyer, 2002, para. 2).

Images from the VCU Department of Art Education small grant that enabled Dr. Taylor and Richmond-area educators to help develop rubrics related to standards associated with the International Baccalaureate, Advanced Placement, the National Assessment of Educational Progress, the National Visual Arts Standards, the General Certificate of Secondary Education (British), and the Qatar Curriculum Standards.

Pushing art and assessment boundaries with emerging technologies

An alternative and controversial possibility for technology-enhanced art assessment came to my attention in a 2012 New York Times article reporting on Robo-Reader, an innovation involving Automated Essay Scoring (Winerip, 2012). According to Justin Reich (2012), Harvard doctoral researcher and fellow at the Berkman Center for Internet and Society, Automated Essay Scoring involves a computer comparing essays with other essays that have already been scored by human beings. It may do this through word and phrase searches similar to my VACT work, but it goes further through the use of algorithms designed to do what Dr. Mark Shermis[i] refers to as “faithfully replicate” human scores. Obviously, or rather in their current state, computers cannot “read” essays in the same way humans do. Computers are programmed with varied and seemingly endless algorithms.
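
For readers curious about what such scoring looks like in practice, the brief sketch below is my own illustration, not the Robo-Reader system and not part of the eLASTIC research: word and phrase features drawn from essays that human readers have already scored are used to train a simple statistical model, which then assigns scores to new essays intended to approximate what a human rater would give. The library used (scikit-learn), the sample essays, and the scores are all hypothetical placeholders.

# Illustrative sketch only: a simple automated essay scorer trained to
# imitate human scores, assuming the scikit-learn library is available.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical essays already scored by human readers (training data).
human_scored_essays = [
    "The painting connects personal memory with historical events ...",
    "Color is used here but the ideas are not developed ...",
    "The artist links texture, symbolism, and the viewer's experience ...",
]
human_scores = [5.0, 2.0, 4.5]  # scores assigned by human raters

# Word and two-word-phrase features feed a regression model that learns
# to approximate the human scores.
scorer = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    Ridge(alpha=1.0),
)
scorer.fit(human_scored_essays, human_scores)

# A new, unscored essay receives a score meant to mimic a human rating.
new_essay = ["The work connects the artist's experience to broader ideas ..."]
print(scorer.predict(new_essay))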

References are available in the article submitted for publication, “eLASTIC: Pulling and Stretching what it Means to Learn, Know and Assess Art and Educational Progress,” found on the Publications and Presentations page.


[i] Dr. Mark Shermis (2012) is the Dean of the College of Education at the University of Akron. He authored the study that assessed a number of different automated scoring programs against human scorers.

 

 
