The following is a section from a research report submitted to TechTrends on March 5, 2020, titled

Investigating “The Coolest School in America”:

A Personalized Project-Based Learning Approach to Learner-Centered Education

by

Sinem Aslan & Charles M. Reigeluth

The section describes the findings of the study related to student assessment at the Minnesota New Country School.

Learner-Centered Assessment

[Results for Research Question 3]

            This section explains how the school implemented learner-centered assessment. Drawing on multiple sources of evidence from the data, we describe the formation of proposal teams, formative and summative assessment, required public project presentations/exhibitions, mastery judgments, earned credits, and accomplished standards. We also explain the assessment of math, reading, and writing; earning credits through college courses; and the assessment of students’ hopes.

 

Formation of Proposal Teams

            In his interview, Derek explained that every year, the advisors were matched up with other advisors to form proposal teams (two to three advisors per team). Before this system was created, a student’s project was only evaluated by the student’s general advisor, which caused concerns about subjectivity. This concern led to the creation of the teams. Projects that required less than 20 hours to complete were still only reviewed by the student’s general advisor. Craig described the potential benefits of proposal teams in his interview. 

Now, the neat thing, I think, that we do with the proposal team process where we work with another advisor, is … accountability… [Dakota] … is my proposal team partner this year and it’s different each year. We rotate who our advisory partner is, so we’re not working with the same person every year. So I might look at a project and I might think it’s awesome, we did a great job on it, there is nothing wrong with it, thumbs up, great, where he might look at it and say, well … they really missed this part of the project and they didn’t do a very good job, and it comes from our background as [Dakota] is coming from … an English teacher’s background and I’m looking at it from more of a math background. So, he’s going to look at different things than I am, just based on his background in regard to what he knows and what he understands. So there is some accountability there when you know they’re having more than one person look at the project and analyze it.


Similar to Craig, Derek also pointed out benefits of having multiple evaluators, such as potentially minimizing bias. 

When they [students] go to the evaluation team, the project is evaluated by other people. … I am always going to be biased for my own kids, but they are not. So, sometimes I might feel it’s too critical, but then we can have discussion or an argument or whatever you want to call it, and it’s okay. We can decide what is fair for the student, and I can give input on what the student needs. The other person can give input on what is reasonable and what is fair.

 

Formative Assessment 

            During their interviews, the participants discussed multiple ways to evaluate student performance on projects. Dee identified student observation as a key formative assessment method. Advisors worked closely with and observed the students, discussed their progress with them often, and provided feedback for improvements. Derek identified the assessment of students’ time logs as another key method. Advisors read the time logs to see what the students had accomplished each day, or for a certain period, and provided feedback. 

Summative Assessment

            In their interviews, the participants indicated that, once a student submitted his or her project for evaluation, a summative assessment took place. As reported previously, if a project required less than 20 hours to complete, the student’s general advisor met with him or her and evaluated the project. However, if the project took more than 20 hours to complete, then at least two advisors viewed the student’s presentation, reviewed the materials created (including a reflective writing piece), and asked the student questions to evaluate how much the student had learned. Dakota described this summative assessment process in his interview.

When they sit down with their two advisors … they have to … walk us through the information. We’ll ask them the kind of the defining questions that they proposed to start the project. We’re asking them to tell us that they really have to teach us and defend what they learned during the process. Often times, they’re using their artifacts … to kind of walk us through … [what] they have accomplished. Every student must go through this process to finalize the project and get credit. They have to show us that they have mastered the process, the knowledge that they actually said they were going to pursue.

 

This required students to verbally defend their projects. Advisors’ questions further helped students to reflect on their learning, which helped the advisors make a fair mastery judgment. 

… to expect students to set the bar really high for themselves is not realistic, but I can’t really accept the bar too low. So part of my job is to set an appropriate criteria or mastery level of what I want to see at the end of the projects. (Craig)

            One of the other important outcomes of the summative assessment was the feedback that the students received from the proposal team, as Craig described in his interview.

I think the feedback that they get from us on the proposal team is really … important … because … it’s the type of feedback that they would get from a boss … or a college professor. … And instead of just getting a grade on a paper, they’re getting an honest assessment like, ‘You know that you’re a really good writer and I really like your writing, but you really need to work on finding better resources for your research.’ Like, you know, ‘You use too many websites and I want you to utilize books. I want you to utilize magazines or whatever for these resources.’ So that feedback that they are getting from us – good and bad – is really powerful because it’s the type of feedback that they’re going to get in real life.

 

Therefore, the assessment itself was a learning experience.
 

Required Public Project Presentations/Exhibitions

            The Student Handbook indicated that the school provided guidance for students to improve their public presentation skills and that students were to conduct one public presentation and one public exhibition for each grade. The participants, including the advisors, evaluated the performance. Students who failed to meet certain standards in their presentations/exhibitions were required to re-present.

            Additionally, the quality of the presentations was expected to increase with each grade level. The detailed criteria for these presentations at each grade level are outlined in Table 5, based on the information provided in the Student Handbook. Note that the criteria for each grade level built on the criteria from the previous grade level.

Table 5

Criteria for Students’ Public Presentations 

 

Mastery Judgment, Earned Credits, and Accomplished Standards

            When asked about how the advisors decided on mastery, Dakota stated that “… [students] have to be able to teach back and basically verbalize [what they have accomplished] and create artifacts to show they’re learning. So, they have to defend their little projects.” Similarly, in his interview Aaron described how he made mastery judgments. 

I think the mastery comes from … show[ing] that you’ve learned something. … You have to do it or at least help somebody else do it. … I’d say to show me that you’ve mastered something, I want to see you teach it to somebody else.

In her interview, Dee also pointed out the importance of students’ defending what they did as a good indicator of mastery.

So, I need students to be able to verbally explain to me what’s going on. I can have a student hand me, you know, a beautiful packet of materials that they’ve created, but if they open it up and I start asking them questions on the information and they cannot explain to me what’s in there, then they have not done the mastery learning, they have done copying and pasting [information from other resources]…

 

Similarly, Debbie discussed several considerations for making a mastery judgment. 

Well, I think we look at, overall, their abilities, their individual abilities and we look at their time logs, the final product, what they can explain to us, what they internalized, … how they generalize what they have done, what they have learned, and how they can relate it back to show us that they understand it. [They need to show us that they] learned this and feel like it is something they can use, something they have achieved.

 

 

Mastery judgment required the advisors to think back to the whole process, not only the student presentation. Dee reported that the proposal team sometimes used a rubric or checklist, especially for the senior students. However, in most cases, the advisors used the project proposal form on Project Foundry as a checklist to guide the process of assigning credits and standards.

            In their interviews, the participants explained that the assessment process started when a student submitted his or her project proposal. However, the mastery judgment, certification of earned credits, and accomplished standards were decided by the proposal team at the end of the project. The proposal team met and discussed the credits and standards earned, and students were given the opportunity to comment. If a student defended his or her arguments, the team could consider assigning more credits or standards, providing an opportunity for negotiation. If a student did not show mastery, the team could provide feedback and ask the student to revise or improve parts of the project. The project evaluation resumed once the revisions were ready. Lucy described an example of such a case in her interview.

Sometimes they get told, ‘You couldn’t answer a lot of these questions. You need to go back and spend a few days reviewing some of this information, and you need to develop some more slides. You need to be more aware of what it is that you are supposed to accomplish with this project. Or you didn’t answer these questions. You need to find the answers to these questions. You need to come back and review with us what you have.’

 

 

           If the proposal team decided that mastery had been demonstrated, the next step was the certification of the credits and standards. In her interview, Nancy stated that the state standards were built into Project Foundry; however, they were not as detailed as the actual state standards. Some of the advisors said the many standards and sub-standards were confusing for them and their students. To reduce this confusion, Nancy stated that the school developed a list of categories and standards for each subject area to make them easier to understand and embedded the list in Project Foundry.

            

Assessment of Math, Reading, and Writing

            As mentioned previously, the advisors and administrators indicated that the school did not implement project-based learning (PBL) for math education. Therefore, the math assessment was different from the project assessment. ALEKS Math was the major tool used to evaluate student learning in math. In her interview, Lucy described how the students were assessed for math standards in the school.

Well, with math, our students are in a computerized program that assesses their learning almost daily. … [A]bout every 18 or 20 topics, there is an assessment. … [T]hey can either go forward or back, but, essentially, what we are requiring students to do is attain an 85% mastery before they can move on to the next level in math. So, they not only have to complete 85% of the topics, but they also have to complete an assessment that says that they’ve mastered that 85% of the topics.

            Students were required to document their progress in math; according to the Student Handbook, “Students not making math progress may be advised to complete math work on Fridays or to work with a parent at home.”

            Like math, reading and writing were assessed differently from the projects. According to Lucy,

… as far as their reading is concerned, we pretty much assess their learning by the writing that they do that’s incorporated into their reading plan. They have to write a summary or an analysis based on what they’ve read, so you can easily tell if they are understanding and comprehending the book. Then the other piece of the reading is also the test that they do in the fall and spring to see if their reading is improving. So, we do … NWEA [Northwest Evaluation Association] in the fall and then again in the spring to measure their progress in reading and the language arts of skills. That gives us an idea of whether they are truly on track with growth in their reading. If we feel like their analyses and their writing isn’t complete and they’re not doing a good job with what they are doing … we can sit down and discuss the books with them as well. … [A]t the end of the year, they have to do an interview with the reading team and go through all of the books that they read. … [A]s we read through their summaries and analyses, if we feel like things aren’t clear and they need to do more, then we can question them and get their input. So, they won’t get full credit for the reading until they have completed the interview with the reading team and have gone through all of their writing and assignments on the books that they have read.

 

 

 

 

The school required students to complete certain tasks (e.g., following the reading plans, doing a certain amount of writing) to earn credits for reading and writing.

Earning Credits through College Courses

            Students could also choose to attend a post-secondary institution and earn college credits that transferred to school credits. The Student Handbook indicated that a three-credit college course corresponded to a one-credit project in the MNCS. However, students could earn more credits if they documented additional time and effort in a college course.

 

Assessment of Students’ Hopes 

            In the interviews, when asked about how MNCS measured student learning, Ron described an additional assessment method. 

[P]arents noticed a change in their [students’] attitudes, the students noticed a change in their attitudes, and the advisor noticed a change in their attitudes. So, we are not talking about any kind of real psychometric measure here, we are talking about observations. We have observed very clearly that a number of students did change their attitude toward school and learning, but there was no real measure of that. Eventually, however, [we came up with] … an idea of measuring a variety of things what we called the Hope Survey, which we use to measure the student’s increase in dispositional hope. That changed everything because then we did have an actual measure, which we could tie to actual interventions that were taking place in the school setting. So, that’s one aspect of things that we wanted to deal with and could couple with the other things that we are looking at as far as the outcomes from the students in regard to lifelong learning skills.

 

According to the Hope Survey website, the major purpose of the Hope Survey is as follows:

The Hope Survey is a unique tool, which enables schools to assess their school environment through the eyes of their students by measuring student perceptions of autonomy, belongingness and goal orientations as well as their resulting engagement in learning and disposition toward achievement. The Hope Survey can diagnose whether a school culture has the components that encourage higher levels of engagement in learning. (“What is the Hope Survey?”, 2012)

In her interview, Dee referenced a study involving the Hope Survey that compared the results of students in other schools with those of students graduating from MNCS.

We actually did a study of students once they were out of New Country and their hope continued to grow up. We know that there were 30,000 kids that were tested with the Hope Survey [previously], and their hope continues to rise until ninth grade and then [decrease substantially]. [However], with our kids what we have found is that the hope continues to grow and then as they graduate it even continues to grow more once they get into post-secondary. So, if we can get these kids truly believing in themselves, I’m an absolute believer that these kids can do just about anything that they want to do.

 Therefore, the Hope Survey was an important measure of whether the school was achieving its overarching goals.

 

Implications for Theory and Practice

            Mastery learning was a key tenet of learner-centered assessment in the school. Aligned with the descriptions of Reigeluth and Garfinkle (1994), the purpose of assessment in the school was to “…certify attainments, not to compare students …” (p. 67). One of the major criticisms of mastery learning is that it decontextualizes and fragments learning competencies, but MNCS shows that this does not have to be the case. MNCS used project-based learning (PBL) to ensure that student learning was holistic, contextualized within the real world, and connected to students’ prior knowledge.

            Our findings also revealed how formative and summative assessments were implemented in the school. According to Park and Lee (2004), a formative assessment evaluates a student’s progress toward mastery, while a summative assessment evaluates whether a student has reached mastery. The advisors reported that, to gain the most accurate picture of student effort, application, and participation, their summative assessment covered both the students’ project process and the project artifacts. However, it is essential to note that formative assessment was used to help students improve when the summative assessment revealed a lack of mastery. Formative assessment in the school took the form of observations of students, informal conversations with them, and review of their time logs. To assess project artifacts, advisors viewed student presentations, reviewed the materials created (including the reflective writing piece), and asked clarifying questions to help students reflect on the process they used and the artifacts they created.

            Mastery judgment, earned credits, and accomplished standards were the other topics that emerged from the findings. Since mastery had different levels at different grade levels, making a fair mastery judgment was reported as challenging by the advisors. However, they indicated that, by taking data from both formative and summative assessments into consideration, they could make a fair mastery judgment. Once the mastery judgment was made, the next step was to certify the credits and standards detailed in the project proposal form. The advisors reviewed them at the end of the project to make sure the student’s work was aligned with the learning standards and credits claimed. If there was a discrepancy, the advisors revised the credits and standards accordingly.

            One of the other major findings was that, at MNCS, projects requiring more than 20 hours of work were evaluated by a team of advisors rather than by the student’s general advisor alone. This was crucial in two ways. First, it addressed the potential bias of advisors toward their own students; even when bias was not a concern, relying on a single evaluator for one large project raised concerns about reliability. Second, having multiple evaluators with expertise in different subject areas provided a better learning experience for students, because they received feedback from different perspectives.

            Another important finding of this study concerns the Hope Survey. As one of the major goals of the school was to change the attitudes of students towards schooling and learning, assessment of students’ hope would determine whether students really changed their attitudes after starting the school. This type of assessment was helpful in two ways. First, it helped school administrators to see whether efforts for encouraging a higher level of student engagement in learning were working well in the school. Second, the school was able to track the students’ attitudes towards learning during their post-secondary education as longitudinal data.  

 

 

Reference

Park, O.-C., & Lee, J. (2004). Adaptive instructional systems. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (2nd ed., pp. 651–684). Mahwah, NJ: Erlbaum.

