Meg Sewell, Mary Marczak, & Melanie Horn



In program evaluation as in other areas, a picture can be worth a thousand words. As an evaluation tool for community-based programs, we can think of a portfolio as a kind of scrapbook or photo album that records the progress and activities of the program and its participants, and showcases them to interested parties both within and outside of the program. While portfolio assessment has been predominantly used in educational settings to document the progress and achievements of individual children and adolescents, it has the potential to be a valuable tool for program assessment as well.

Many programs do keep such albums, or scrapbooks, and use them informally as a means of conveying their pride in the program, but most do not consider using them in a systematic way as part of their formal program evaluation. However, the concepts and philosophy behind portfolios can apply to community evaluation, where portfolios can provide windows into community practices, procedures, and outcomes, perhaps better than more traditional measures.

Portfolio assessment has become widely used in educational settings as a way to examine and measure progress, by documenting the process of learning or change as it occurs. Portfolios extend beyond test scores to include substantive descriptions or examples of what the student is doing and experiencing. Fundamental to "authentic assessment" or "performance assessment" in educational theory is the principle that children and adolescents should demonstrate, rather than tell about, what they know and can do (Cole, Ryan, & Kick, 1995). Documenting progress toward higher order goals such as application of skills and synthesis of experience requires obtaining information beyond what can be provided by standardized or norm-based tests. In "authentic assessment", information or data is collected from various sources, through multiple methods, and over multiple points in time (Shaklee, Barbour, Ambrose, & Hansford, 1997). Contents of portfolios (sometimes called "artifacts" or "evidence") can include drawings, photos, video or audio tapes, writing or other work samples, computer disks, and copies of standardized or program-specific tests. Data sources can include parents, staff, and other community members who know the participants or program, as well as the self-reflections of participants themselves. Portfolio assessment provides a practical strategy for systematically collecting and organizing such data.


Portfolio assessment is particularly useful for:

*Evaluating programs that have flexible or individualized goals or outcomes. For example, within a program with the general purpose of enhancing children's social skills, some individual children may need to become less aggressive, while other, shy children may need to become more assertive. Each child's portfolio assessment would be geared to his or her individual needs and goals.

*Allowing individuals and programs in the community (those being evaluated) to be involved in their own change and decisions to change.

*Providing information that gives meaningful insight into behavior and related change. Because portfolio assessment emphasizes the process of change or growth, at multiple points in time, it may be easier to see patterns.

*Providing a tool that can ensure communication and accountability to a range of audiences. Participants, their families, funders, and members of the community at large who may not have much sophistication in interpreting statistical data can often appreciate more visual or experiential "evidence" of success.

*Allowing for the possibility of assessing some of the more complex and important aspects of many constructs (rather than just the ones that are easiest to measure).


Portfolio assessment is less useful for:

*Evaluating programs that have very concrete, uniform goals or purposes. For example, it would be unnecessary to compile a portfolio of individualized "evidence" in a program whose sole purpose is full immunization of all children in a community by the age of five years. The required immunizations are the same, and the evidence is generally clear and straightforward.

*Allowing you to rank participants or programs in a quantitative or standardized way (although evaluators or program staff may be able to make subjective judgements of relative merit).

*Comparing participants or programs to standardized norms. While portfolios can (and often do) include some standardized test scores along with other kinds of "evidence", this is not the main purpose of the portfolio.


Tier 1 - Program Definition

Using portfolios can help you to document the needs and assets of the community of interest. Portfolios can also help you to clarify the identity of your program and allow you to document the "thinking" behind the development of and throughout the program. Ideally, the process of deciding on criteria for the portfolio will flow directly from the program objectives that have been established in designing the program. However, in a new or existing program where the original objectives are not as clearly defined as they need to be, program developers and staff may be able to clarify their own thinking by visualizing what successful outcomes would look like, and what they would accept as "evidence". Thus, thinking about portfolio criteria may contribute to clearer thinking and better definition of program objectives.

Tier 2 - Accountability

Critical to any form of assessment is accountability. In the educational arena, for example, teachers are accountable to themselves, their students, the students' families, the schools, and society. The portfolio is an assessment practice that can inform all of these constituents. The process of selecting "evidence" for inclusion in portfolios involves ongoing dialogue and feedback between participants and service providers.

Tier 3 - Understanding and Refining

Portfolio assessment of the program or participants provides a means of conducting assessments throughout the life of the program, as the program addresses the evolving needs and assets of participants and of the community involved. This helps to maintain focus on the outcomes of the program and the steps necessary to meet them, while ensuring that the implementation is in line with the vision established in Tier 1.

Tier 4 - Progress Toward Outcomes

Items are selected for inclusion in the portfolio because they provide "evidence" of progress toward selected outcomes. Whether the outcomes selected are specific to individual participants or apply to entire communities, the portfolio documents steps toward achievement. Usually it is most helpful for this selection to take place at regular intervals, in the context of conferences or discussions among participants and staff.

Tier 5 - Program Impact

One of the greatest strengths of portfolio assessment in program evaluation may be its power as a tool to communicate program impact to those outside of the program. While this kind of data may not take the place of statistics about numbers served, costs, or test scores, many policy makers, funders, and community members find visual or descriptive evidence of successes of individuals or programs to be very persuasive.



Advantages

*Allows evaluators to see the student, group, or community as an individual, each unique with its own characteristics, needs, and strengths.

*Serves as a cross-section lens, providing a basis for future analysis and planning. By viewing the total pattern of the community or of individual participants, one can identify areas of strengths and weaknesses, and barriers to success.

*Serves as a concrete vehicle for communication, providing ongoing communication or exchanges of information among those involved.

*Promotes a shift in ownership; communities and participants can take an active role in examining where they have been and where they want to go.

*Offers the possibility of addressing shortcomings of traditional assessment by capturing the more complex and important aspects of an area or topic.

*Covers a broad scope of knowledge and information, from many different people who know the program or person in different contexts (e.g., participants, parents, teachers or staff, peers, or community leaders).



Disadvantages

*May be seen as less reliable or fair than more quantitative evaluations such as test scores.

*Can be very time consuming for teachers or program staff to organize and evaluate the contents, especially if portfolios have to be done in addition to traditional testing and grading.

*Having to develop your own individualized criteria can be difficult or unfamiliar at first.

*If goals and criteria are not clear, the portfolio can be just a miscellaneous collection of artifacts that don't show patterns of growth or achievement.

*Like any other form of qualitative data, data from portfolio assessments can be difficult to analyze or aggregate to show change.


Design and Development

Three main factors guide the design and development of a portfolio: 1) purpose, 2) assessment criteria, and 3) evidence (Barton & Collins, 1997).

1) Purpose

The primary concern in getting started is knowing the purpose that the portfolio will serve. This decision defines the operational guidelines for collecting materials. For example, is the goal to use the portfolio as data to inform program development? To report progress? To identify special needs? For program accountability? For all of these?

2) Assessment Criteria

Once the purpose or goal of the portfolio is clear, decisions are made about what will be considered success (criteria or standards), and what strategies are necessary to meet the goals. Items are then selected to include in the portfolio because they provide evidence of meeting criteria, or making progress toward goals.

3) Evidence

In collecting data, many things need to be considered. What sources of evidence should be used? How much evidence do we need to make good decisions and determinations? How often should we collect evidence? How congruent should the sources of evidence be? How can we make sense of the evidence that is collected? How should evidence be used to modify the program and the evaluation? According to Barton and Collins (1997), evidence can include artifacts (items produced in the normal course of classroom or program activities), reproductions (documentation of interviews or projects done outside of the classroom or program), attestations (statements and observations by staff or others about the participant), and productions (items prepared especially for the portfolio, such as participant reflections on their learning or choices). Each item is selected because it adds some new information related to attainment of the goals.
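For programs that track portfolio contents electronically, Barton and Collins' four evidence categories can be made concrete as a simple record structure. The following Python sketch is purely illustrative; the field names, example items, and goal labels are hypothetical, not part of any published model.

```python
from dataclasses import dataclass, field
from datetime import date

# Barton and Collins' (1997) four categories of portfolio evidence.
EVIDENCE_TYPES = {"artifact", "reproduction", "attestation", "production"}

@dataclass
class PortfolioItem:
    """One piece of 'evidence' collected for a portfolio (hypothetical schema)."""
    description: str    # e.g., "baseline writing sample"
    evidence_type: str  # one of EVIDENCE_TYPES
    source: str         # who provided it: participant, staff, parent, etc.
    collected_on: date  # when it was added, to document change over time
    goals: list = field(default_factory=list)  # program goals it documents

    def __post_init__(self):
        # Reject items that do not fit one of the four categories.
        if self.evidence_type not in EVIDENCE_TYPES:
            raise ValueError(f"unknown evidence type: {self.evidence_type}")

# A portfolio is simply a dated collection of such items.
portfolio = [
    PortfolioItem("baseline writing sample", "artifact",
                  "participant", date(2024, 1, 15), ["literacy"]),
    PortfolioItem("staff observation notes", "attestation",
                  "program staff", date(2024, 3, 2), ["literacy", "confidence"]),
]

# Evidence can then be filtered by goal when reviewing progress.
literacy_items = [i for i in portfolio if "literacy" in i.goals]
print(len(literacy_items))  # 2
```

Tagging each item with its evidence type and the goals it documents makes it easier, later, to check whether every goal is supported by multiple sources of evidence.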

Steps of Portfolio Assessment

Although many variations of portfolio assessment are in use, most fall into two basic types: process portfolios and product portfolios (Cole, Ryan, & Kick, 1995). These are not the only kinds of portfolios in use, nor are they pure types clearly distinct from each other. It may be more helpful to think of these as two steps in the portfolio assessment process, as the participant(s) and staff reflectively select items from their process portfolios for inclusion in the product portfolio.

Step 1: The first step is to develop a process portfolio, which documents growth over time toward a goal. Documentation includes statements of the end goals, criteria, and plans for the future. This should include baseline information, or items describing the participant's performance or mastery level at the beginning of the program. Other items are "works in progress", selected at many interim points to demonstrate steps toward mastery. At this stage, the portfolio is a formative evaluation tool, probably most useful for the internal information of the participant(s) and staff as they plan for the future.

Step 2: The next step is to develop a product portfolio (also known as a "best pieces portfolio"), which includes examples of the best efforts of a participant, community, or program. It also includes "final evidence", or items which demonstrate attainment of the end goals. Product or "best pieces" portfolios encourage reflection about change or learning. The program participants, either individually or in groups, are involved in selecting the content, the criteria for selection, the criteria for judging merit, and the "evidence" that the criteria have been met (Winograd & Jones, 1992). For individuals and communities alike, this provides opportunities for a sense of ownership and strength. It helps to showcase or communicate the accomplishments of the person or program. At this stage, the portfolio is an example of summative evaluation, and may be particularly useful as a public relations tool.

Distinguishing Characteristics

Certain characteristics are essential to the development of any type of portfolio used for assessment. According to Barton and Collins (1997), portfolios should be:

1) Multisourced (allowing for the opportunity to evaluate a variety of specific evidence)

Multiple data sources include both people (statements and observations of participants, teachers or program staff, parents, and community members) and artifacts (anything from test scores to photos, drawings, journals, and audio or video tapes of performances).

2) Authentic (context and evidence are directly linked)

The items selected or produced for evidence should be related to program activities, as well as the goals and criteria. If the portfolio is assessing the effect of a program on participants or communities, then the "evidence" should reflect the activities of the program rather than skills that were gained elsewhere. For example, if a child's musical performance skills were gained through private piano lessons, not through 4-H activities, an audio tape would be irrelevant in his 4-H portfolio. If a 4-H activity involved the same child in teaching other children to play, a tape might be relevant.

3) Dynamic (capturing growth and change)

An important feature of portfolio assessment is that data or evidence is added at many points in time, not just as "before and after" measures. Rather than including only the best work, the portfolio should include examples of different stages of mastery. At least some of the items are self-selected. This allows a much richer understanding of the process of change.

4) Explicit (purpose and goals are clearly defined)

The students or program participants should know in advance what is expected of them, so that they can take responsibility for developing their evidence.

5) Integrated (evidence should establish a correspondence between program activities and life experiences)

Participants should be asked to demonstrate how they can apply their skills or knowledge to real-life situations.

6) Based on ownership (the participant helps determine evidence to include and goals to be met)

The portfolio assessment process should require that the participants engage in some reflection and self-evaluation as they select the evidence to include and set or modify their goals. They are not simply being evaluated or graded by others.

7) Multipurposed (allowing assessment of the effectiveness of the program while assessing performance of the participant).

A well-designed portfolio assessment process evaluates the effectiveness of your intervention at the same time that it evaluates the growth of individuals or communities. It also serves as a communication tool when shared with family, other staff, or community members. In school settings, it can be passed on to other teachers or staff as a child moves from one grade level to another.

Analyzing and Reporting Data

As with any qualitative assessment method, analysis of portfolio data can pose challenges. Methods of analysis will vary depending on the purpose of the portfolio, and the types of data collected (Patton, 1990). However, if goals and criteria have been clearly defined, the "evidence" in the portfolio makes it relatively easy to demonstrate that the individual or population has moved from a baseline level of performance to achievement of particular goals.

It should also be possible to report some aggregated or comparative results, even if participants have individualized goals within a program. For example, in a teen peer tutoring program, you might report that "X% of participants met or exceeded two or more of their personal goals within this time frame", even if one teen's primary goal was to gain public speaking skills and another's main goal was to raise his grade point average by mastering study skills. Comparing across programs, you might be able to say that the participants in Town X on average mastered 4 new skills in the course of six months, while those in Town Y only mastered 2, and speculate that lower attendance rates in Town Y could account for the difference.
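The kind of aggregate reporting described above reduces to simple counting once each participant's individualized goals are recorded. This Python sketch uses entirely made-up participant records to show the arithmetic behind a statement like "X% of participants met two or more of their personal goals":

```python
# Hypothetical per-participant records for a teen peer tutoring program.
# Each participant has individualized goals (public speaking, study skills, ...)
# but the *count* of goals met can still be aggregated across the program.
participants = {
    "teen_a": {"goals_set": 3, "goals_met": 2},
    "teen_b": {"goals_set": 4, "goals_met": 3},
    "teen_c": {"goals_set": 2, "goals_met": 1},
    "teen_d": {"goals_set": 3, "goals_met": 2},
}

# Count participants who met or exceeded two or more of their personal goals.
met_two_plus = sum(1 for p in participants.values() if p["goals_met"] >= 2)
pct = 100 * met_two_plus / len(participants)
print(f"{pct:.0f}% of participants met two or more of their personal goals")
# prints "75% of participants met two or more of their personal goals"
```

The same pattern extends to cross-program comparisons (e.g., average number of new skills mastered per town), though any explanation of a difference, such as attendance rates, remains speculative without further data.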

Subjectivity of judgements is often cited as a concern in this type of assessment (Bateson, 1994). However, in educational settings, teachers or staff using portfolio assessment often choose to periodically compare notes by independently rating the same portfolio to see if they are in agreement on scoring (Barton & Collins, 1997). This provides a simple check on reliability, and can be very simply reported. For example, a local programmer could say "To ensure some consistency in assessment standards, every 5th portfolio (or 20%) was assessed by more than one staff member. Agreement between raters, or inter-rater reliability, was 88%".
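The reliability check described above is just percent agreement between two raters on the subset of double-rated portfolios. As a minimal sketch, with invented scores on an arbitrary rating scale:

```python
# Hypothetical data: every 5th portfolio was scored independently by two
# staff members. Each pair is (rater_1_score, rater_2_score).
double_rated = [
    (4, 4), (3, 3), (5, 4), (2, 2), (4, 4),
    (3, 3), (5, 5), (1, 2), (4, 4), (3, 3),
]

# Simple percent agreement: the share of portfolios where both raters
# assigned exactly the same score.
agreements = sum(1 for a, b in double_rated if a == b)
percent_agreement = 100 * agreements / len(double_rated)
print(f"Inter-rater agreement: {percent_agreement:.0f}%")
# prints "Inter-rater agreement: 80%"
```

Exact-match agreement is the simplest check; programs with many rating categories sometimes count scores within one point of each other as agreement, or use chance-corrected statistics, but those refinements go beyond the basic consistency check described here.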

There are many books and articles that address the problems of analyzing and reporting on qualitative data in more depth than can be covered here. The basic issues of reliability, validity and generalizability are relevant even when using qualitative methods, and various strategies have been developed to address them. Those who are considering using portfolio assessment in evaluation are encouraged to refer to some of the sources listed below for more in-depth information.


Barton, J., & Collins, A. (Eds.) (1997). Portfolio assessment: A handbook for educators. Menlo Park, CA: Addison-Wesley Publishing Co.

A book about portfolio assessment written by and for teachers. The main goal is to give practical suggestions for creating portfolios so as to meet the unique needs and purposes of any classroom. The book includes information about designing portfolios, essential steps to make portfolios work, actual cases of portfolios in action, a compendium of portfolio implementation tips that save time and trouble, how to use portfolios to assess both teacher and student performance, and a summary of practical issues of portfolio development and implementation. This book is very clear, easy to follow, and can easily serve as a bridge between the use of portfolios in the classroom and the application of portfolios in community evaluations.

Bateson, D. (1994). Psychometric and philosophic problems in "authentic" assessment: Performance tasks and portfolios. Alberta Journal of Educational Research, 40 (2), 233-245.

Considers issues of reliability and validity in assessment which are as important in "authentic assessment" methods as in more traditional methods. Care needs to be exercised so that these increasingly popular new methods are not perceived as unfair or invalid.

Cole, D. J., Ryan, C. W., & Kick, F. (1995). Portfolios across the curriculum and beyond. Thousand Oaks, CA: Corwin Press.

Authors discuss the development of authentic assessment and how it has led to portfolio usage. Guidelines are given for planning portfolios, how to use them, selection of portfolio contents, reporting strategies, and use of portfolios in the classroom. In addition, a chapter focuses on the development of a professional portfolio.

Courts, P. L., & McInerny, K. H. (1993). Assessment in higher education: Politics, pedagogy, and portfolios. London: Praeger.

The authors describe a project using portfolios to train teachers to assess exceptional potential in underserved populations. The portfolio includes observations of the children's behavior in the school, home, and community. The underlying assumption of the project is that teachers learn to recognize exceptional potential if they are provided with authentic examples of such behavior. Results indicated that participating teachers experienced a sense of empowerment as a consequence of the project and became both involved in and committed to the project.

Glasgow, N. A. (1997). New curriculum for new times: A guide to student-centered, problem-based learning. Thousand Oaks, CA: Corwin Press.

This book is an attempt to identify and define current practices and present alternatives that can better meet the needs of a wider range of students in facilitating literacy and readiness for life outside the classroom. Discussion centers on current curriculum and the need for instruction that meets the changing educational context. Included is information about portfolio assessment, design and implementation, as well as examples of a new curricular style that promotes flexible and individualistic instruction.

Maurer, R. E. (1996). Designing alternative assessments for interdisciplinary curriculum in middle and secondary schools. Boston: Allyn and Bacon.

This book explains how to design an assessment system that can authentically evaluate students' progress in an interdisciplinary curriculum. It offers step-by-step procedures, checklists, tables, charts, graphs, guides, worksheets, and examples of successful assessment methods. Specific to portfolio assessment, this book shows how portfolios can be used to measure learning. Provides some information on types and development of portfolios.

Patton, M. Q. (1990). Qualitative evaluation and research methods, 2nd ed. Newbury Park, CA: Sage.

A good general reference on issues of qualitative methods, and strategies for analysis and interpretation of qualitative data.

Shaklee, B. D., Barbour, N. E., Ambrose, R., & Hansford, S. J. (1997). Designing and using portfolios. Boston: Allyn and Bacon.

Discusses the history of portfolio assessment, decisions that need to be made before beginning the portfolio assessment process (e.g., what it will look like, who should be involved, what should be assessed, how the assessment will be accomplished), designing a portfolio system (e.g., criteria and standards), using portfolio results in planning, and issues related to assessment practices (e.g., accountability).

Shaklee, B. D., & Viechnicki, K. J. (1995). A qualitative approach to portfolios: The Early Assessment for Exceptional Potential Model. Journal for the Education of the Gifted, 18 (2), 156-170.

The creation of a portfolio assessment model based on qualitative research principles is examined by the authors. Portfolio framework assumptions for classrooms are: designing authentic learning opportunities, interaction of assessment, curriculum and instructions, multiple criteria derived from multiple sources, and systematic teacher preparations. Additionally, the authors examine the qualitative research procedures embedded in the development of the Early Assessment for Exceptional Potential model. Provided are preliminary results for credibility, transferability, dependability, and confirmability of the design.

Winograd, P., & Jones, D. L. (1992). The use of portfolios in performance assessment. New Directions for Educational Reform, 1 (2), 37-50.

Authors examine the use of portfolios in performance assessment. Suggestions are offered to educators interested in using portfolios in aiding students to become better readers and writers. Addresses concerns related to portfolios' usefulness. Educators need support in learning how to use portfolios, including their design, management, and interpretation.
