Peeragogy Handbook V1.0/Peeragogical Assessment


Authors: Joe Corneli and David Preston

This article is about assessment in peer learning, and it is also an exercise in assessment: we will try to put our strategy for assessment into practice by evaluating the Peeragogy Handbook itself.

Thinking about "contribution"

It is intuitive to say: "learning is adaptation." What else would it be?

Further, since adaptation happens not just on the individual level, but also on the socio-cultural level -- anthropologists use the phrase "adaptive strategy" as a synonym for "culture" -- we can say that contributions to social adaptation are "paragogical."

Adapting strategies for learning assessment to the peer-learning context

In "Effective Grading: A Tool for Learning and Assessment," Barbara E. Walvoord and Virginia Johnson Anderson have outlined an approach to grading. They address three questions:

  1. Who needs to know, and why?
  2. Which data are collected?
  3. How does the assessment body analyze data and present findings?

The authors suggest that institutions, departments, and assessment committees should begin with these simple questions and work from them towards anything more complex. These simple questions provide a way to understand -- and assess -- any strategy for assessment! For example, consider "formative assessment":

"...which involves constantly monitoring student understanding through a combination of formal and informal measures. Teachers ask searching questions, listen over the shoulders of students working together on a problem, help students assess their own work, and carefully uncover students’ thinking [and] react to what they learn by adjusting their teaching, thereby leading students to greater understanding." (Quote from the website for the book "New Frontiers in Formative Assessment".)

In this context, our answers to the questions above would be:

  1. Teachers need to know about the way students are thinking about their work, so they can deliver better teaching.
  2. Teachers gather lots of details on learning activities by "listening over the shoulders" of students.
  3. Teachers apply (hopefully well-informed) analysis techniques that come from their training or experience -- and they do not necessarily present their assessments to students directly, but rather feed them back in the form of improved teaching.

This is very much a "teacher knows best" model! In order to do something like formative assessment among peers, we would have to make quite a few adjustments.

  1. At least some of the project participants would have to know how participants are thinking about their work. We might not be able to "deliver better teaching," but perhaps we could work together to problem-solve when difficulties arise.
  2. It may be most convenient for each participant to take on a share of the work, e.g. by maintaining a "learning journal" (which could be shared with other participants). This imposes a certain overhead, but as we remarked elsewhere, "meta-learning is a font of knowledge"! Beyond self-reflection, details about others' learning can sometimes be abstracted from their contributions to the project -- "learning analytics" is a whole topic unto itself; a small sketch follows this list.
  3. If a participant in a "learning project" is bored, frustrated, feeling closed-minded, or for whatever other reason "not learning", then there is definitely a question. But for whom? For the person who isn't learning? For the collective as a whole? We may not have to ponder this conundrum for long: if we go back to the idea that "learning is adaptation", someone who is not learning in a given context will likely leave, and find another context where they can learn more.
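
To make the "learning analytics" idea mentioned in point 2 a little more concrete, here is a minimal sketch in Python. Everything in it is an assumption made for illustration: it supposes that participants keep a shared learning journal as a CSV file with participant, date, and entry columns (no such file or format exists in the Peeragogy project itself), and it simply tallies journal entries per participant per week -- a very crude proxy for ongoing engagement, not a measure of learning.

  # Hypothetical sketch: tally shared learning-journal entries per participant
  # per ISO week. The CSV layout (participant,date,entry) is assumed only for
  # the sake of illustration.
  import csv
  from collections import defaultdict
  from datetime import date

  def weekly_activity(journal_path):
      counts = defaultdict(int)  # (participant, iso_year, iso_week) -> entries
      with open(journal_path, newline="", encoding="utf-8") as f:
          for row in csv.DictReader(f):
              d = date.fromisoformat(row["date"])  # e.g. "2013-01-15"
              iso_year, iso_week, _ = d.isocalendar()
              counts[(row["participant"], iso_year, iso_week)] += 1
      return counts

  if __name__ == "__main__":
      for (who, year, week), n in sorted(weekly_activity("journal.csv").items()):
          print(who, "made", n, "journal entries in week", week, "of", year)

Even a toy tally like this could prompt the kind of peer conversation described in point 1 ("you've gone quiet this month -- is something blocking you?").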

This is but one example of an assessment strategy: in addition to "formative assessment", "diagnostic" and "summative" strategies are also quite popular in mainstream education. The main purpose of this section has been to show that when the familiar roles from formal education devolve "to the people", the way assessment looks can change a lot. In the following section, we offer and begin to implement an assessment strategy for evaluating the peeragogy project as a whole.

Case study in peeragogical evaluation: the Peeragogy project itself

We can evaluate this project partly in terms of its main "deliverable," the Peeragogy Handbook (which you are now reading). In particular, we can ask: Is this handbook useful for its intended audience? The "intended audience" could potentially include anyone who is participating in a peer learning project, or who is thinking about starting one. We can also evaluate the learning experience that the co-creators of this handbook have had. Has working on this book been a useful experience for those involved? These are two very different questions, with two different targets for analysis -- though the book's co-creators are also part of the "intended audience". Indeed, we might start by asking "has working on this book been useful for us?"

For me (Joe) personally, it has been useful:

to see some more abstract, conceptual, and theoretical ideas (paragogy.net) extended into practical advice (which I'm sure I can personally use), with references to literature I would not have come up with in library or internet searches, and with a bunch of ideas and insights that I wouldn't have come up with on my own. I definitely intend to use this handbook further in my work.

It's true; I do see myself as one of the more involved participants to date, which stands to reason since I'm actually paid to research peer learning, and this project is (in my opinion) one of the most cutting-edge places to talk about that topic! If "you get out of it what you put into it" is true, then, again, as a major contributor, I think I "deserve" a lot. And I'm certainly not the only one: quite a large number of person-hours have been poured into this project by quite a number of volunteers. This should say something!

Nevertheless, one does not need to be a "handbook contributor" at all to get value from the project: if it were otherwise, we might as well just get rid of the book after writing it. Actually, our thought is that this work will indeed have "value" for downstream users, and our choice of legal terms around the book reflects that idea. Anyone downstream is free to use the contents of this book for any purpose whatsoever. For all we know, there will be future users who will add much more to the study and practice of paragogy/peeragogy than any of us have so far. This could happen by putting the ideas to the test and feeding back information on the results to the project (please do! -- the ultimate assessment of the Peeragogy Handbook will be based on what people actually do with it), perhaps by further developing the book, developing additional case studies or recipes, and so forth.

In fact, questions about "usefulness" are what we aim to study in our "alpha testing" phase (which is beginning now!).

Conclusion

We can estimate individual learning by examining the real problems solved by the individual. Sometimes those are solved in collaboration with others. If someone only consumes information, they may well be "learning", but there is no way for us to measure that. On the other hand, if they only solve "textbook problems", again, they may be learning and gaining intuition (which is good), but it is still not 100% clear that they are actually learning anything "useful" until they start solving problems that they really care about! So, to assess learning, we do not just measure "contribution" (in terms of quantity of posts or what have you) but instead we measure "contribution to solving real problems". Sometimes that happens very slowly, with lots of practice along the way. Furthermore, at any given point in time, some of the "problems" are actually quite fun and are "solved" by playing! Indeed (as people like Piaget and Vygotsky recognized), if we want to meet the real experts on learning, we should talk with kids, since they learn tons and tons of things.
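
As a rough illustration of the difference between measuring "contribution" and measuring "contribution to solving real problems", here is a hedged Python sketch. The data structure and the example records are invented for illustration; the point is only that a raw count of posts and a count of contributions tied to problems that actually got solved can tell very different stories.

  # Hypothetical sketch: contrast raw contribution counts with contributions
  # linked to real problems that were actually solved.
  from collections import Counter

  # Each record is invented for illustration:
  # (contributor, problem the contribution addressed or None, was it solved?)
  contributions = [
      ("ana", "translate-handbook", True),
      ("ana", None, False),                # a post not tied to any problem
      ("ben", "restructure-wiki", False),  # problem still open
      ("ana", "translate-handbook", True),
  ]

  raw_count = Counter(who for who, _problem, _solved in contributions)
  solved_count = Counter(
      who for who, problem, solved in contributions if problem and solved
  )

  print("raw contributions:", dict(raw_count))
  print("contributions to solved, real problems:", dict(solved_count))

In this invented example both measures rank ana ahead of ben, but the second tally also shows that ben's effort is going into a problem that has not yet been solved -- which reads better as a prompt to offer help than as a verdict on his learning.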

Recommended reading

Supplement: An overview of assessment topics

  • Diagnostic, formative and summative evaluation
  • Competency-based learning
  • Experiential learning

Unit of analysis

  • individual
  • group/team
  • class
  • course
  • program
  • organization

Purpose

  • diagnostic
  • formative
  • summative

Feedback source

  • peer
  • pedagogical authority
  • content expert
  • group
  • public

Models

  • Peer assessment
  • Self-assessment
  • Norm-referenced testing
  • Criterion-referenced testing
  • Information-referenced testing
  • Writing
  • Transmedia/e-portfolios

Other considerations

  • Suitability to task
  • Suitability to learner's desired/expected outcomes (e.g., "If I want to master a skill, I need more expert/critical/constructive feedback than someone clicking a 'like' button.")
  • Capital: time, money, energy, ROI
  • Future documentary usage
  • Professional guidelines

Further reading