Rubrics and Evaluation 2013

Essential Questions to ponder

Feel free to add your thoughts

 

1) Rubrics can be used for many purposes:

     - teacher evaluates students: formative (where they are in acquiring "X") and summative (how well they performed task "X")

     - students evaluate themselves ("I can...") and set goals

     - students evaluate teacher/lesson/unit (feedback for the teacher)

     - teacher evaluates self (ongoing and post-lesson reflection)

     - administrator/colleague evaluates teacher (observation/walk-through)

 

I assume that we would like to concentrate our discussion on the first two, correct?

(SraSpanglish) Correct for me!

 

(@natadel76) Here's an excellent post by @CoLeeSensei about Student Self-Evaluation in Discussions

 

2) Proficiency-based evaluation of language outcomes:

     - ACTFL Proficiency Guidelines

     - Linguafolio self-assessment & NCSSFL Linguafolio docs

     - Outstanding proficiency-based rubrics and materials developed by JCPS (Performance Grading Guidelines)

     - Kentucky Standard for WL Proficiency with sample learning targets and proficiency benchmarks

 

 

Should we use "I can..." or "Student is able to..." statements on the rubric? Alternatives?

(SraSpanglish) I think "I can" is more powerful. Will you break these down for us? JCPS materials overwhelm me a little.

     - (natadel76) What exactly do you find overwhelming? Descriptors too vague? Levels not easy to distinguish/separate?

(SraSpanglish) The way that the JCPS is divided into units throws me--what if I'm not doing those units? Then there are SO many documents in each! What am I supposed to look at first? Do I need ALL of them?

 

(Bethanie) We have used/I am using JCPS as my model.  I rearrange, add, and delete the units as needed.  I believe that the assessments are there as options--some are nearly identical, just there to provide a variety of choices.  The French and German editions of the assessments are nearly identical to the Spanish ones.  Within each folder are the unit overview, which includes the pertinent vocab and unit descriptors; the unit stamps, which are the "I can" statements; and then the assessments.  One of the things that I really appreciate is how the assessments are clearly linked to the "I can" statements--to the point of stating them as part of the directions.

 

     - here is a link to the @MartinaBex Proficiency Targets rubric you referenced on Twitter (free download)

(SraSpanglish) I LOVE these! The descriptors are AWESOME! But how could I possibly use these grades with Spanish 1 or 2, where the objective is only to achieve Novice Mid in the former and Novice High in the latter??

 

Here's a blog post by @SECottrell about the difference between Novice High and Intermediate Low 

 

Here's the post "Bring the Rubric to Life" from www.creativelanguageclass.com

(Bethanie) Loved this idea!  Used it in class, and added a poster of the rubric to use as a continual reference.  One of the best parts of using one rubric that was tied to proficiencies was being able to use it as a tool of continual improvement, especially for the higher-achieving students.

 

 

3) There are different ways to evaluate language outcomes:

     - by skill: listening, reading, speaking, writing

     - by mode of communication: interpretive, interpersonal, presentational

 

Which one makes more sense?

(SraSpanglish) In NC we have them broken down by a combination: interpretive reading, interpretive listening; person-to-person, presentational speaking, presentational writing. I, for one, rather like this.

     - (natadel76) I have been functioning at the skill level and was planning to move to the mode of communication next year. Would like to hear how you structure your gradebook for that. I could totally see doing a portfolio next year too.

(SraSpanglish) In my district, our gradebook structure is somewhat prescribed: 65% "tests," 20% "quizzes," 15% "daily work." So I just decide which one of those a given task falls under. So I might have an interpersonal "quiz" (some kind of recorded conversation) or "daily work" (simple online discussion forum) or "test" (answering questions about their final product or group collaboration efforts). They're pretty unnatural restrictions, and I'm looking for suggestions on how to cope.

 

(srapontarelli)  We will be working on changing our departmental and school-wide rubrics this summer.  I would like to move away from the 4 skills in isolation and set up our rubrics by mode.  It makes more sense, since the WL standards and ACTFL standards present these three modes of communication.  Also, the 4 essential skills of learning a language don't live in isolation in the real world.

 

(jeanrueckert):  Since WL standards and ACTFL performance guidelines are presented via three modes of communication and also further defined by skill (interpretive listening, interpretive reading, interpersonal listening and speaking, presentational speaking, presentational writing, etc.), I suggest we align our rubrics with these.  It's what happens at the AP level, as well.  I'm happy to start with the least 'known' ones, interpretive reading and interpretive listening, at the Novice (Mid, High) level to see where we go.

 

(SraSpanglish) This fits with NC standards and how I organize e-portfolios, so it is my preference as well.

 

(kkissings) In our district in OR, we evaluate the students via skills.  We started moving to proficiency/standards-based assessments about 5 years ago, and our school has been grading based on these for 3 years.  For the last 2 years, we have not given letter grades.  With my classes, I am doing a pseudo-flipped classroom model via Moodle.  I created modules based on the grammar and vocabulary for the particular unit we are doing.  I assign 1-2 modules per week for them to work on at home, and then each week there is a quiz over that information.  The rubric is here: Grammar-Vocabulary Proficiency Rubric.pdf.  I assess Listening, Reading and Writing every half unit, and Speaking via formative and summative assessments.  Here are the rubrics that we primarily use: Writing - Holistic Rubric.pdf , Reading - Comprehensive - Teacher.pdf , Listening - Comprehensive - Teacher.pdf , Writing_Speaking- Holistic Rubric.pdf.  We report out on these 5 skills with each report sent home.  This has been working very well for us and has been producing some very impressive results.

 

How do we design our assessment so that it accurately reflects what we're trying to evaluate?

 

(natadel76) This is the trickiest part of them all. In my experience, when I started on the proficiency-based road, I had to review my assessments and eliminate many variables to make sure they assessed a particular skill only. What are your challenges in creating proficiency-based assessments?

(SraSpanglish) So we're isolating skills? How? Why? I'm not sure how authentic that would be...Thank goodness for Skype opp, because I feel like I'm definitely missing something.

(jeanrueckert) As well, my Level 1 PLC partner and I pulled only 2-4 criteria per interpersonal or presentational assessment to focus on (almost always around Content/Vocab, Comprehensibility, Text Type and then maybe one other).  That way, students & teachers aren't overwhelmed, yet by the end of a semester/course you could make sure that you were targeting all criteria that are part of a course performance rubric for each mode.  

 

 

4) Often our task/assessment/project includes skills other than language:

     - research and digital literacy 

     - teamwork and collaboration

     - visual presentation

 

Should we evaluate these too? If yes, then shouldn't it be separate from language outcomes evaluations?

 

(jeanrueckert):  Depending upon your school's focus in the area of 21st-century skills development and PBL, yes, these are all a part of learning & feedback to students.  Sometimes they can be done as part of a PBL rubric, or set of rubrics, or as a "minor" in a combined rubric.  However, as Don has mentioned, the primary focus (the "major" criteria) in a WL course should be on proficiency-related criteria.  Students should be assessed (given feedback) on the course content separately and independently from the language and culture standards.

Jean, how do you reflect these in your gradebook? I prefer not to mix language (proficiency) and "other stuff". Do you have a separate category set up? Report by standards? Comments only?

In Level 4, I piloted the following:  I separated out a weighted category (15%) called "language learning", in which I placed scores for vocabulary- or grammar-related quizzes - all pieces of the bigger puzzle that students would have to put together for the Perf-Based assessments, and these quizzes were done in context.  Daily homework was not graded, but you can report any of these things in PowerSchool without actually assigning a grade to them.  I began developing a SB reporting method with the 4-pt scale, but haven't used it in PowerSchool yet - not until our gradebooks are better set up for next year.  Instead, I used the "Assignment" or assessment labels to communicate the mode of communication being assessed.  My SPANISH 4 grades - not Level 1 - for a semester were calculated as follows:

 

20% - Speaking (Perf-Based, interpersonal & presentational at full scale; "Effective Speaking Strategies" scores from in-class, student-led discussions were scaled at .20 and treated as a formative, developmental 'grade')

20% - Writing (Perf-Based, more for in-class essays, at full scale; timed writings with a developmental rubric at .20)

15% - Reading 

15% - Listening

15% - Language Learning

15% - Semester exam (each was project-based & presentational)

 

As I look to Level 1, I would be looking at a greater percentage for the Interpretive Mode & thus want to focus on those rubrics (for both my courses).
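
If it helps to see how weights like these actually combine into a semester grade, here is a minimal sketch of the arithmetic. The category weights are the ones Jean lists above; the category averages are made-up numbers chosen only for illustration, not anyone's real gradebook.

    # Minimal sketch (Python): combining weighted categories into a semester grade.
    # The weights come from the list above; the category averages are hypothetical.

    weights = {
        "Speaking": 0.20,
        "Writing": 0.20,
        "Reading": 0.15,
        "Listening": 0.15,
        "Language Learning": 0.15,
        "Semester exam": 0.15,
    }

    # Hypothetical category averages on a 100% scale.
    averages = {
        "Speaking": 88,
        "Writing": 84,
        "Reading": 90,
        "Listening": 86,
        "Language Learning": 92,
        "Semester exam": 85,
    }

    semester_grade = sum(weights[c] * averages[c] for c in weights)
    print(round(semester_grade, 2))  # about 87.35 with these made-up numbers

As I read Jean's setup, the .20-scaled formative scores would simply sit as low-weight items inside the Speaking and Writing category averages, nudging those categories without dominating them.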

 

 

(SraSpanglish) I think the "minor" category is the most satisfactory compromise for me. Collaboration is a very big deal for me, and I've made it an interpersonal activity, where students discuss how well their group members perform throughout the project (rubric in Spanish here). And actually, I've tried to fit research/digital literacy under interpretation...

Laura, pardon my lack of Spanish skills: in your rubric example, do the students evaluate the group's work as a whole or each member of the group individually? I would prefer to separate the digital literacy stuff from the interpretive language (hence "language" - major, and "stuff" - minor). What do you think?

 

(SraSpanglish) Students fill out these rubrics: one on themselves, then one on each of the other team members--hence 3 per page. I then sit the groups down and ask team members about one person at a time. To illustrate: I ask B & C about A, then confirm with A on the first expectation, and let them hash it out if there is a discrepancy (working on making this more TL-oriented, as those discussions tend to jump to English, at least early on). We can either go on to the next expectation for A or ask A & C about how B did with the first expectation.

I think my situation is unique with the grading because I am assigned 3 categories with weird weighting that I'm hesitant to break down further.

 

5) Communication of performance

     - levels of proficiency (Novice High, etc.)

     - descriptors such as Beginner/Developing/Proficient/Advanced or Below Expectation/Approaching Expectation/Meets Expectation/Exceeds Expectation

     - numerical: 4-point scale, traditional 100% scale

 

What seems to be the most helpful in our quest for a proficiency-based classroom?

 

(jeanrueckert):  In my experience, you can make both SBG and the "traditional" 100% scale work for reporting purposes (if you do the latter in a sliding-scale format), but you naturally need to have the criteria, and what "meets expectations" means, focused on the proficiency targets that have been determined to be the learning targets for the course and program.  I'll do a version of this in the first rubric I work on to show those less experienced with it what this means, but imagine the JCPS rubrics for a particular level presented for a particular mode and skill, and then with labels or points.  I might have to explain the sliding scale for those who need to convert the 4-pt. (or 4-level) scale to a percentage grade for a gradebook that requires that.  Happy to do so via a separate sheet with one of my sliding scales (maybe via Skype chat, if there are those interested).  I use http://roobrix.com/ to work them out.
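
For anyone who hasn't seen a sliding scale before, here is a minimal sketch of the general idea: the 4-point rubric levels map onto percentage bands that are compressed toward the top rather than spread evenly from 0 to 100, so a "1" doesn't become a 25%. The cut-off percentages and the little conversion function below are invented purely for illustration; they are not Jean's actual scale and not what roobrix.com produces.

    # Minimal sketch (Python) of a sliding-scale conversion from a 4-point
    # rubric score to a percentage grade. The band anchors are hypothetical
    # and exist only to show that the mapping is non-linear.

    def sliding_scale(rubric_score: float) -> float:
        """Convert a 4-point rubric score to a percentage (illustrative cut-offs)."""
        bands = [
            (4.0, 100.0),  # 4 - Exemplary
            (3.0, 90.0),   # 3 - Meeting Expectation
            (2.0, 75.0),   # 2 - Approaching Expectation
            (1.0, 60.0),   # 1 - Below Expectation
        ]
        # Interpolate linearly between neighboring band anchors.
        for (hi_score, hi_pct), (lo_score, lo_pct) in zip(bands, bands[1:]):
            if rubric_score >= lo_score:
                frac = (rubric_score - lo_score) / (hi_score - lo_score)
                return lo_pct + frac * (hi_pct - lo_pct)
        return 60.0 * rubric_score  # below a 1: taper toward zero

    print(sliding_scale(3.5))  # halfway between 90 and 100 -> 95.0
    print(sliding_scale(2.0))  # 75.0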

 

One thing we might want to consider is what they do at JCPS:  they have a formative rubric (a lower proficiency target for the first quarter or first semester) and then the full semester/year rubric to show the end-of-course learning targets.

(SraSpanglish) So this part confuses me a little bit. Are we assigning grades based on proficiency levels? Isn't that detrimental to encouraging individual growth? If they don't get to Novice Mid by the end of Level I, are they penalized? Or is this our concession to The Man, though we know growth does not happen uniformly?

 

I would say that you do need to have a course-level standard for what "meets expectations" (this would be the reasonable outcome for all students and would be equivalent to a "B"), so that all involved know what the target is (something clear to aim for).   You are naturally going to have your high-flyers who will "Exceed" and those who might remain at "Approaching/Developing" by the end of a course.  The clearer you are about what you expect, the clearer students are about what that means; and focused in-class practice that is aligned with developing that performance target tends to produce students who achieve those benchmarks.  In my school's case, those latter ones would earn a "C" for the semester and, if in Level 1, would still move on to Level 2.  If lower than that, they may need to repeat; if in Level 4, they most likely wouldn't go on to Levels 5/AP.  Or, at that higher level, they go on knowing very well that more active practice, self-directed learning and/or engagement is required of them to be more successful at attaining those course standards.  That said, teachers know that there are many variables in determining grades and placement for the next year.

 

I did like using the most recent grade in the actual reporting in PowerSchool and 'dropping' the others, which were then treated as formative, once students were showing higher levels of performance.  Next year, my high school is looking at reporting by either mode or most recent grade.

 

(natadel76) Personally, I prefer descriptors. My school is big on learning targets, so it made sense to have Beginner/Developing/Proficient/Advanced. However, I was planning to change that to just the level descriptors (Novice High, etc.) on my rubrics next year because students still were not connecting the dots between "This is where you need to be" and "This is where you are".  Yes, language growth doesn't happen uniformly, but one can set reasonable goals for an average student, OR maybe each student can set goals for each semester, followed by evidence and reflection. I always take growth into consideration when I have to report grades: a student who still struggles a bit but has made great strides always beats the one who's still on the same page and still not putting in any effort to even be there.

Unfortunately, I do need to quantify students' progress for parents and administration. I prefer a 4-point scale because it makes so much more sense. I need to decide for myself whether it's worth weighting categories differently or not. Right now, I'm leaning toward a "more important depending on the year of study" type of approach: interpretive for lower levels, production for upper levels.

 

 

6) Progress/grade recording for record keeping

     - needs to reflect what a student is able to do

     - should be easily understood by all who come in contact with it (students, parents, support teachers, administration)

 

How do we set up our grade book to address the requirements above?

 

In my experience, we must be VERY clear from the very beginning with students and parents that a student's grade depends on what they can do with the language, plus a few other things. No extra credit will make up for a lack of performance on task "X". Some kids (and parents) have a hard time with that, but many do embrace the concept. My school uses InfiniteCampus for progress and grade reporting.

 

Should certain skills be weighted higher?

 

(jeanrueckert):  I'm interested in learning from others here.  I've piloted one format in my Level 4 class this past year to transition to standards-based grading and can share.  My school reports via PowerSchool, and we're working with PS to make the gradebook more SBG-friendly.  I used the four skills, but created rubrics and reported incorporating the modes of communication.

(SraSpanglish) Awesome, Jean! We're switching to PowerSchool next year, too! I have no idea what it looks like yet, so could you share something, like rubrics and, I don't know, screenshots?

 

7) Final exam dilemma

 

One can't demonstrate "sudden" proficiency overnight. Is a final exam necessary?

(SraSpanglish) Me, I used their e-portfolios with artifacts supporting their "I can" statements for their final--good reflection. Also, maybe some OPI's for spontaneous production?

 

(natadel76) OR could the final exam BE one of the main determinants of students' proficiency grade? Who cares that they were not able to do "Y" in October? If in January, at the end of the semester, a student can demonstrate the proficiency requirement the teacher outlined for "Y", why would that October grade matter?

 

In many schools it is required. How can we create an exam that aligns with the proficiency philosophy?

(@mmecaspari) I am still struggling with this for my lower level classes, but the pre-AP class I teach has exams that are somewhat similar to the AP format, albeit simpler and covering only the themes for that semester. The challenge for those exams has been locating the perfect texts/audio and then making good rubrics. I'm interested in what people do for lower levels.

 

(Bethanie) We use the same rubric--modeled after JCPS--for writing and speaking for levels 1-4.  We simply adjust the target proficiency level on each assignment.  One of the positive parts of this is that it shows where we are going and provides feedback for students to know how to improve.  The rubric from last year is here; the second page explains how to determine a grade from the rubric because we still need to enter a numeric score for assessments.

 

(natadel76) Personal dilemma: in lower levels, do I just create reading and listening tasks based on what I know they should know + 1, OR do I try to find #authres?

 

If using student portfolio to document language learning progress, can this portfolio serve as final exam? 

 

Initial/Final question

Can/should we leave assessments open: require students to include 3 modes, but let them decide how?

(SraSpanglish) I've done this too (Spanish 1 template, Spanish 2 template). I like the collection of materials throughout the course to show growth, though.

 

Comments (6)

Jean Rueckert said

at 10:32 am on Jun 13, 2013

If no one is in a hurry, so to speak (summer mode!), I'm willing to put some time into working through a starter Novice rubric for interpretive listening & reading that pulls together the best ideas from the KY Standard for WL Proficiency, the FLENJ interpretive rubrics, and anything that Kim Kissinger eventually shares with me. I'll be on the move in the coming weeks, so I'll see when I can start and get initial feedback. I'll do it in a SBG format so anyone could work with it using their own school's labels. My school has a MS/HS performance standard rubric with the labels "4 - Exemplary", "3 - Meeting Expectation", "2 - Approaching Expectation" and "1 - Below Expectation". For Novice, I'll look at the proficiency standards noted in different curriculum docs for Level 1 and work with that for "Meeting Expectation". Sound good?

Laura Sexton said

at 8:21 pm on Jun 13, 2013

I think this'll work as a good starting point, but I'm going to need to be walked through it!

Jean Rueckert said

at 8:36 am on Jun 16, 2013

Laura, I'm happy to walk you through what I can - once I share out a sample rubric. See my initial comments above in red. I'm sure others will have their experiences to include, as well. Are there initial questions you might have that I could be mindful of when working?

Elizabeth Caspari said

at 1:00 pm on Jun 20, 2013

That sounds great, and I appreciate the "summer mode" note. I need to crank out some "I can statements" for another one of my preps and am off to a workshop next week, but I'll check in along the way.

Natalia DeLaat said

at 5:16 pm on Jun 14, 2013

If you're interested in Skype discussion, would you prefer weekday or weekend?

Elizabeth Caspari said

at 1:03 pm on Jun 20, 2013

Normally I'd prefer weekdays, but please don't schedule around me right now. I'm leaving Sun. for a week-long AP workshop and am not sure what free time I'll have.
