Saturday, August 22, 2015

Architectural post-occupancy evaluation of post-secondary educational spaces, by Mikael Powell




Architectural practitioners researched ways to rate the effectiveness of higher education facilities at the very beginning of the environmental psychology movement.  Post-occupancy evaluations (POEs) are a relatively contemporary method (originating around the 1960s in America) of determining whether the decisions made by design professionals deliver the intended performance, as judged by those who use the building.  These assessments provide several long- and short-term benefits, unlike traditional case studies published in architectural trade magazines, which tend to highlight buildings that photograph well, those designed by architectural celebrities, or those of particular interest to architectural critics.  POE benefits include the identification of spatial problems and successes, the opportunity for user involvement, and the establishment of prototypical spaces.  Preiser, Rabinowitz, and White (1988) describe the intent of a POE as “to compare systematically and rigorously the actual performance of buildings with explicitly stated performance criteria; the difference between the two constitutes the evaluation” (pp. 3–4).  Since the late 1980s in America, the performance method concept has been widely employed as the foundation of the evaluation.  Performance criteria are usually developed by the university administration (in response to its goals for the institution), and performance measures are determined by a post-occupancy evaluator.
            The process is subjective on several levels.  The actual building ratings are dependent upon the performance criteria developed by the administrators.  The performance is derived directly from those values that the university deems important, which are not necessarily the same as the values of the evaluator or the users of the space.  Moreover, the building evaluation result is reliant upon the goals of the evaluator and the performance measures developed to test the criteria.  Lastly, not only may different users give different responses, but also the same users of a space may give varying responses at different times.  Preiser, Rabinowitz, and White state that, “there are no absolutes in environmental evaluation because of cultural bias, subjectivity and varied background of both the evaluators and building users” (1988, p. 33).
            POEs can collect data with quantitative or qualitative methods, but they are mostly considered a quantitative tool.  For example, even aspects of the building examination such as personal assessments of the quality of lighting or the performance of the mechanical systems are defined in terms that are computed and comparative.  I found no research indicating that the qualitative aspects of the building influenced perception of the quantitative performance (e.g., that the overall reputation of the facility affected reports of specific actual conditions), although Preiser, Rabinowitz, and White surmised as much.  Post-occupancy evaluations originated at a time when electronic computation was in its early stages.  Thus, the format of POEs was favorable to collecting large amounts of data and to sorting and computing values for a building.  Data from the first evaluation of schools in the mid-1970s were noted for being very wide-ranging and detailed (Preiser, Rabinowitz, & White, 1988); however, the evaluation structure was rudimentary (Preiser, Rabinowitz, & White, 2005).  Eventually, POEs were grouped into three levels of sophistication – indicative, investigative, and diagnostic – with each successive level costing more money and involving more effort and time.  Within each level, there were three phases: (a) planning the POE, (b) conducting the services, and (c) applying the data to produce the deliverables, which document the appropriate amount of work at each level.  Methods employed included questionnaires, site visits, personal interviews, document review, and analysis.  The authors remarked that although this format was easy to comprehend, it was often not comprehensive enough for the task.
            While the performance method was the original technique, other approaches developed as well.  One was created by Pena and Parshall (1983), who were interested in architectural research both for the evaluation of existing buildings and for the programming (the collection of pertinent information to initiate design work) of new spaces.  They authored two books, the first on post-occupancy evaluations and the second on architectural problem-seeking.  Within their method, the evaluation strategy used the same format as the initiation of an architectural project, and they categorized their efforts into four key elements (which correspond to the phases of the method created by Preiser, Rabinowitz, and White) used throughout the POE.
            It is important to note that while a post-occupancy evaluation is said to get its name from the certificate of occupancy, which is commonly issued in the United States to allow a new facility to operate, other monikers have evolved from the initial POE model.  One is the building performance evaluation (BPE).  The integrative framework of this evaluation method (Preiser, Rabinowitz, & White, 2005) covers concerns such as building-code issues, life safety requirements, space utilization, and human personal, cultural, and social needs.
            There are several concerns often cited about the effectiveness of post-occupancy evaluations.  Firstly, the institution often commissions the POE.  Therefore, the values of that entity may influence the development, conduct, and findings of the evaluation (Preiser, Rabinowitz, & White, 1988) and serve the administration’s perspectives as the primary recipient of POE data (Hewitt et al., 2005).  This may be problematic if the purpose of the evaluation is to provide objective data to evaluate the feasibility of capital improvements meant to benefit all constituents.  It may be advantageous to review the results with reference to the priorities of other stakeholders.  Likewise, the performance measures developed by the evaluator also influence the process (Preiser, Rabinowitz, & White, 1988).  Moreover, Doidge (2001) maintains that a national system of post-occupancy studies should be set up within the architectural design curriculum.  He observes that architecture students are often not introduced to client and user issues sufficiently to be an effective part of a POE team.
Second, as Doidge (2001) goes on to state, “The greatest obstacle to POE studies is that professionals must guard their reputation and avoid litigation,” and he adds that such studies “have been conducted for at least half a century but the results are not encouraging.  Most take the form of ‘internal enquiries’ either to ‘whitewash’ or to ‘apportion blame’ and are rarely published” (p. 2).  Indeed, Lackey (1999a) reports that in most instances there is “no clear economic incentive for conducting the POE in the first place.  Client organizations are not quick to support the POE due to the potential for bad publicity if problems are uncovered so soon after a large expenditure of public funds” (p. 5).  In addition, because the performance criteria and performance measures are not developed by the users, it is useful to consider critically: if an evaluation of a university space is inaccurate, what are the consequences of false positives or false negatives, and who will gain and who will lose?
            Thirdly, a critic might argue that the most important criterion for school design is flexibility.  Ponti (2005, p. 85) states that “the pedagogical and didactic activities are continuously changing” and, therefore, the ability to easily change the environment to adapt to new pedagogies is paramount, whether the changes are daily or annual.  Also, with regard to the lifecycle costs of the facility, long-term adaptability to accommodate multiple uses is prudent.
            Lastly, Tombs (2005) remarked that developing quality indicators within the framework of a POE was not without criticism.  Some individuals in the design professions were skeptical of the categories used to evaluate quality.  They saw the indicators as giving too little emphasis to epistemology and pedagogical practices, which they maintain are “required to be a headline item, because without an appropriate understanding of these matters, a very fine building may not end up delivering the places/spaces within which appropriate teaching can take place i.e. the school might be a very poor performer!” (p. 70).
            While there are many ways to judge a university classroom, the two contemporary methods of case study and POE are both wanting.  POEs often use comparisons to educational goals rather than documenting the behaviors that occur to compensate for the shortcomings of the space.  While case studies could document the remedial actions performed, no format is in use that responds to the needs of the architect, the educator, and the environmental-behaviorist.  Currently, there is no tool to evaluate the toll that corrective measures place upon the education process.  My research concerns evaluation of the built learning environment, specifically the influence of corrective actions, so it is fitting that I review conditions where the design of the facility meets the needs of its inhabitants and where it does not.
