NAVIGATING ETHICAL CHALLENGES: THE IMPORTANCE OF PROFESSIONAL MENTORSHIPS IN DEVELOPMENT EVALUATION

S. Mason
Independent Consultant, Australia

Abstract

Evaluators frequently encounter ethical challenges. The field of development evaluation is particularly vulnerable to these challenges due to widespread inadequacies in evaluation systems, including a lack of resources, data limitations and structural incentives for positive bias. For young evaluators, navigating this minefield of evaluation ethics can be daunting, yet opportunities for young evaluators to participate in ethics training prior to fieldwork are limited. This paper explores the potential for using professional mentorships as an opportunity to build the capacity of young evaluators to deal with the practical ethical issues they encounter in the field.

Introduction

Ethical challenges abound in the field of evaluation (Mabry 1999, p.199). Although empirical research on the nature and extent of the ethical challenges evaluators experience is somewhat limited (Morris 2011), existing studies have established that ethical dilemmas do indeed occur, and with some frequency (Morris and Cohn 1993; Turner 2003). In Morris and Cohn's (1993) frequently cited example, 65% of randomly sampled American Evaluation Association members reported having experienced an ethical problem in their work as an evaluator. Closer to home, 71% of Australasian Evaluation Society members sampled reported having dealt with an ethical problem whilst performing an evaluation (Turner 2003). The practice of development evaluation is especially vulnerable to such dilemmas given widespread funding constraints (Hendricks and Bamberger 2010), data limitations (Thomas 2010) and structural incentives for positive bias (Martens et al 2002; Clements et al 2008).
This paper explores the field of development evaluation, highlighting the characteristics that heighten its susceptibility to ethical complexities. It presents a case study of one young evaluator's experience conducting an external development evaluation and responding to its ethical challenges. The case study analyses the evaluator's response to two key ethical issues that presented during the evaluation. It argues that the evaluator's ability to access the knowledge and support of a more experienced evaluator enabled them to respond appropriately to the second ethical challenge. The paper concludes by arguing that professional mentorships have the potential to help young evaluators successfully navigate the ethical challenges they encounter during their first years in the field.

Development evaluation: an ethical minefield?

The field of development evaluation is prone to ethical dilemmas (Hendricks and Bamberger 2010). This vulnerability arises from three interrelated factors: widespread under-funding, data constraints, and systematic biases that discourage robust evaluation. First, although difficult to quantify, there is an emerging perception that development evaluations are under-funded, both in absolute terms and as a percentage of program costs. As a result, evaluations tend to occur during a two-to-three-week period at the end of a project, with evaluators spending an extremely limited time on site (Hendricks and Bamberger 2010). Although this brings a raft of practical difficulties, two are relevant here because they give rise to ethical considerations. With only a short time in-country, development evaluators have little opportunity to travel outside capital cities or large regional areas: this limits the base from which to draw a research population (Bamberger 2009); it also manifestly complicates efforts to gather data on comparison groups (Hendricks and Bamberger 2010).
Compounding this are data constraints that stem from weaknesses in data collection practices at development organisations. Two commonly cited constraints are an absence of baseline data (Riddell et al 1997; Kruse 2005; Blue et al 2009) and an absence of reliable program records documenting what has happened over the life of the project (Elkins 2010; Darcy and Hofmann 2003). Despite the importance of baseline data as "a frame of reference to help determine and attribute apparent change over time" (Elkins 2010, p.316), the failure to document baseline data is described as "an almost universal complaint" in one meta-evaluation of evaluation practices at development organisations (Riddell et al 1997). Without baseline data, change is primarily identified through an evaluator's informed judgement, a judgement that may be based on an unrepresentative experience in country (Kruse 2005). Furthermore, considering that most development evaluations are at least partially retroactive, evaluation findings depend on "the existence, quality and verifiability of information collected by others" (Elkins 2010, p.315). In the absence of solid program records, even a rigorous external evaluation design can produce only "limited conclusions framed by uncertainty" (Elkins 2010, p.315). While these two factors may appear in many evaluation contexts, they are particularly common in development evaluations due to the unique structure in which development evaluation operates. More specifically, development interventions lack what is commonly referred to as an accountability feedback loop: a system that enables project beneficiaries to communicate directly with the donor government and its constituents (Martens et al 2002).
Unlike beneficiaries in domestic programs, who have access to national media and politicians, beneficiaries of international development projects are forced to rely on intermediaries (either project implementers or evaluators) to provide donor agencies with data on project performance (Martens et al 2002). Yet the competitive nature of development contracting means that organisations seek to report as much positive data as possible to enhance their likelihood of gaining future contracts. This creates an incentive for evaluations that are positively biased, and discourages the use of robust evaluation where it is likely to uncover ambivalent or negative findings (Clements et al 2008; Pritchett 2002). Given that project implementers retain significant control over the funding of field-level development evaluations, they also maintain de facto control over the length and depth of an evaluation. In practice, this has translated into a trend towards under-funded, positively biased development evaluations that are grounded in weak data collection systems. While the limitations identified above are certainly practical in nature, they also produce ethical concerns. Hendricks and Bamberger (2010) identify four primary ethical implications that result from under-funded development evaluations: a failure to understand harm that may result from the project; a failure to learn about inequities arising from the project; an inability to receive valid and useful information about the program and its effects; and an inability to hear the voices of all affected populations. Given the skewed depictions that result, development agencies may continue to fund programs that are less effective than stakeholders believe, or that directly contribute to negative impacts that have not been identified (Bamberger 2009). Beyond this, weak evaluation may reduce incentives to develop and implement alternative initiatives that could be more effective (Hendricks and Bamberger 2010).
Such weaknesses also undermine an evaluator's ability to remain faithful to provisions within evaluation codes of ethics, such as the AES's "Responsibility to Public Interest" and "Competence" or the AEA's "Responsibility for General and Public Welfare" (Thomas 2010).

Limited training opportunities

Despite the potential for encountering ethical challenges in development evaluation, few evaluators have formal training in evaluation ethics (Berends 2007); moreover, the evaluation profession has done little to systematically incorporate ethics into training for beginning evaluators (Newman 1999). In an analysis of 21 evaluation textbooks published between 1972 and 1994, Newman and Brown (1996, p.3) found that 80% did not mention ethics at all. Although this percentage has likely improved since the release of that research, the practical relevance of much training material on ethics remains in question. Consider, for example, the International Program for Development Evaluation Training Handbook, a guide used in one of the most prominent non-academic programs targeted at beginning-level development evaluators. While the handbook offers a list of common ethical problems experienced by evaluators and a summary of the AEA standards, it does not discuss the potential benefits or pitfalls of various responses. Beyond this, some authors question the utility of existing training opportunities. Chelimsky (2008, p.400) argues that "the training we receive assumes unthreatened evaluative independence: it emphasizes issues of technique and methodology without considering the political nature of evaluation". Other training is critiqued for merely promoting conformity with ethical standards, whereas real world ethics "require[s] judgements of when and how to apply principles in response to highly contextualized circumstances" (Mabry 1999, p.200).
Compounding this is the view that ethics standards themselves provide insufficient guidance to practitioners in responding to real world dilemmas; to the contrary, they actually "increase[d] anxiety over unavoidable ethical tensions" (Mabry 1999, p.209). Given these considerations, it is little surprise that several authors have urged more formal and informal training on evaluation ethics (Newman 1999; Mertens 1994; Morris and Cohn 1993).

Teaching evaluation ethics

Building on research by Brown and Dinnel (cited in Newman and Brown 1996), Newman and Brown (1996) propose that a beginning evaluator moves through specific developmental stages with respect to issues of ethics. In the first stage, the evaluator is naive, applying ethical codes with rigidity. Next, the evaluator enters a stage of disequilibrium, before finally moving to a stage of assimilation where they are able to integrate ethical issues into real world practice. According to Newman and Brown, evaluators move from one stage to the next when the student is both adequately challenged and appropriately supported: "Students progress through these stages as they confront challenges ... are encouraged to reflect, and receive supportive mentoring" (Newman and Brown 1996, p.179). From that basis, this paper explores this notion of supportive mentoring by examining the potential for professional mentorships to build the capacity of young evaluators to deal with ethical challenges. In pursuit of this aim, it presents a case study of one young evaluator's experience conducting an external development evaluation and responding to the two key ethical challenges it presented.

Background

In late 2010 a young evaluator won a bid to conduct a summative evaluation of a peace-building project in East Timor. That project delivered conflict resolution training to both community leaders and individuals identified as instigators of violence in target communities.
Although the evaluator had previously assisted in a number of internal evaluations, had conducted qualitative research in a range of developing countries, and had completed two university-level classes in social science research methods, this was their first independently conducted external evaluation. The evaluator spent one month in-country after the project had concluded and received a total budget of $3,000 that covered in-country accommodation, both domestic and international travel, and the evaluator's payment. The evaluation aimed to review the project's effectiveness in achieving its stated outcomes and to provide practicable recommendations to the implementing partners in order to improve future implementation. In its conclusions, the report noted that the project had met and exceeded three of its four indicators; that it was showing progress towards its stated outcomes; and that both participants and community members interviewed as part of the evaluation believed it had brought a range of positive impacts. However, the report also noted that the existing indicators were predisposed to positive bias, as they were framed as self-report statements in which project beneficiaries reported on changes in their own behaviour. In response to this concern, the evaluator included secondary measures that were not based on self-report methods in order to measure the intention behind the indicators more objectively; these by and large supported the original findings. In identifying areas for improvement, the evaluation recommended further examination of the project's training records: while conducting the evaluation, the evaluator identified inconsistencies between the project's attendance records, which indicated a 100% attendance rate at all training courses, and reports from trainers, which suggested that participants regularly missed training sessions.
Additionally, the evaluator recommended increasing investment in the project's ongoing monitoring and evaluation, as the project's existing system and the evaluation's short time frame made it difficult to appreciate how participants applied the knowledge they gained during the training. Finally, interviews with community members indicated that a reasonable percentage of participants continued to use violence after completing the training; the evaluation therefore recommended tracking training drop-outs and individuals who continued to use violence as a way to identify why the project had been unsuccessful in those cases.

The ethical dilemmas

Although a range of ethical issues arose throughout the evaluation, the evaluator reported facing two primary ethical dilemmas. First, a lack of funds and a shortage of time prevented the evaluator from interviewing a representative sample of both target beneficiaries and non-beneficiaries. Attempts to randomly select non-beneficiary community members were complicated by time, funding and data constraints. Limited funding meant the evaluator could not extend the time frame without dipping into personal funds; data constraints made random selection difficult; and the complexities of a post-conflict development environment meant that travel to beneficiaries outside major cities or towns was extremely time consuming. According to the evaluator, some project staff also discouraged efforts to speak with trainees or community members they had not pre-selected. A second, and perhaps larger, ethical concern was that after submitting a final report, the project manager instructed the evaluator to re-write the findings, highlighting positive results and downplaying the negative results, as they feared the report would diminish their likelihood of gaining ongoing funding.

Responses to the ethical dilemmas

The evaluator adopted two contrasting responses to these ethical dilemmas.
In response to the first, the issue of under-funding and the failure to interview a representative sample of beneficiaries and non-beneficiaries, the evaluator simply chose to continue with the evaluation whilst acknowledging the limitations in the final report. Although the evaluator categorised this response as inadequate and reported being uncomfortable with the choice, they acknowledged that they chose this option partly out of a desire to complete a real world evaluation and gain experience in the field. Moreover, the evaluator reported that embarrassment (the view that they should have known better), the perception that the dilemma was too overwhelming to deal with effectively, and uncertainty over how other evaluators would perceive a request for advice also prevented them from pursuing alternate solutions. In contrast to this, the evaluator sought the advice of a senior evaluator when responding to the second dilemma, the issue of being pressured by a stakeholder to frame findings in a positive light. According to the evaluator, the senior evaluator's assurances that such requests were not unusual, that they were clearly unprofessional, and that the evaluator should not accede to the implementer's requests unless they had made an error, enabled the evaluator to respond in a way they were comfortable with. In this instance, the evaluator responded by refusing the majority of the requests for changes to the report, while agreeing to provide a longer and more detailed executive summary to ensure that the findings were thoroughly explained upfront. The evaluator noted: "I knew how I wanted to respond but wasn't sure whether it was the right thing to do, or how they [the program staff] would respond to it. By seeking help I got the confidence I needed to put that into practice ... In the end I attempted to maintain the objectivity of the evaluation by refusing their requests while also ensuring the project staff got a product that answered the questions they asked."

What can we take from this?
Three key lessons can be taken from an analysis of this case study. First, access to a more experienced evaluator for support and guidance enabled the young evaluator to respond in a way they viewed as appropriate. Second, the young evaluator's confidence levels (having the confidence to ask for help) shaped the nature of their response to the ethical dilemmas. Third, uncertainty over how more experienced evaluators would respond to a request for help discouraged the young evaluator from asking for help in the first instance. Professional mentorships, the pairing of a senior evaluator with a junior evaluator in a professional relationship whereby the senior evaluator is tasked with providing ongoing career-related guidance and support for the junior evaluator, represent a training methodology that addresses all three of these issues. They do so by establishing a relationship premised on the understanding that the young evaluator is meant to ask for guidance and that the mentoring evaluator is willing to offer support. Moreover, mentorships provide support to beginner evaluators on issues of ethics within the highly contextualised environments in which they operate, whilst also taking advantage of the senior evaluator's knowledge and experience of similar real world environments. The notion of mentoring is not new. Although the specifics vary widely, professional mentorships are offered extensively in the arts, education, science, research and business, and are often offered by professional associations as formal fellowships or by employers. Existing research suggests that individuals who experience ongoing mentoring are likely to acquire greater competency in their fields, receive more promotions, and earn higher salaries than their non-mentored counterparts (Dreher and Ash 1990; Janasz et al 2003).
This paper suggests such relationships may also be able to cultivate a greater practical understanding of evaluation ethics among young evaluators, thereby contributing to higher quality evaluations and a stronger evaluation profession.

Conclusion

By offering opportunities for professional mentoring to young evaluators, evaluation associations could help young development evaluators navigate the ethical minefield that is real-world practice. Given the widely acknowledged inadequacies in development evaluation systems, the practice of development evaluation is particularly vulnerable to ethical quandaries, and so mentoring would be particularly valuable to beginning development evaluators. Professional mentoring would also allow beginner evaluators to learn from the rich experiences of their more experienced counterparts and ensure that key ethical lessons are passed on to the field's beginning practitioners.

References

Bamberger, M. (2009), "Why do many international development evaluations have a positive bias? Should we worry?", Evaluation Journal of Australasia, 9(2), pp.39-49.

Berends, L. (2007), "Ethical decision-making in evaluation", Evaluation Journal of Australasia, 7(2), pp.40-45.

Blue, R., Clapp-Wincek, C., and Benner, H. (2009), Beyond Success Stories: Monitoring & Evaluation for Foreign Assistance Results, USAID Monitoring and Evaluation Report, Washington D.C.: USAID.

Chelimsky, E. (2008), "A Clash of Cultures: Improving the Fit Between Evaluative Independence and the Political Requirements of a Democratic Society", American Journal of Evaluation, 29(4), pp.400-415.

Clements, P., Chianca, T. and Sasaki, R. (2008), "Reducing World Poverty by Improving Evaluation of Development Aid", American Journal of Evaluation, 29(2), pp.195-214.

Darcy, J. and Hofmann, C. (2003), According to need? Needs assessment and decision making in the humanitarian sector, Humanitarian Policy Group Report 15, London: Overseas Development Institute.
www.odi.org.uk/resources/download/239.pdf. Site accessed March 2011.

Dreher, G. and Ash, R. (1990), "A comparative study of mentoring among men and women in managerial, professional, and technical positions", Journal of Applied Psychology, 75(5), pp.539-546.

Elkins, C. (2010), "Evaluating Development Interventions in Peace-Precarious Situations", Evaluation, 16(3), pp.309-321.

Hendricks, M. and Bamberger, M. (2010), "The Ethical Implications of Underfunding Development Evaluations", American Journal of Evaluation, 31(4), pp.549-556.

Janasz, S., Sullivan, S., Whiting, V. and Biech, E. (2003), "Mentor Networks and Career Success: Lessons for Turbulent Times [and Executive Commentary]", The Academy of Management Executive, 17(4), pp.78-93.

Kruse, S. (2005), "Meta-Evaluations of NGO Experience: Results and Challenges", in Pitman, G., Feinstein, O., and Ingram, G. (eds), Evaluating Development Effectiveness, New Brunswick, NJ: Transaction, pp.109-126.

Mabry, L. (1999), "Circumstantial Ethics", American Journal of Evaluation, 20(2), pp.199-212.

Martens, B., Mummert, U., Murrell, P. and Seabright, P. (2002), The institutional economics of foreign aid, Cambridge: Cambridge University Press.

Mertens, D. (1994), "Training Evaluators: Unique Skills and Knowledge", in J.W. Altschuld and M. Engle (eds), The Preparation of Professional Evaluators: Issues, Perspectives, and Programs, New Directions for Program Evaluation, no. 66, San Francisco: Jossey-Bass.

Morris, M. (2011), "The Good, The Bad and the Evaluator: 25 Years of AJE Ethics", American Journal of Evaluation, 32(1), pp.134-151.

Morris, M. and Cohn, R. (1993), "Program evaluators and ethical challenges: A national survey", Evaluation Review, 17, pp.621-642.

Newman, D. (1999), "Education and Training in Evaluation Ethics", New Directions for Evaluation, 82, pp.67-76.

Newman, D. and Brown, R. (1996), Applied ethics for program evaluation, Thousand Oaks, CA: Sage.

Pritchett, L.
(2002), "It Pays to be Ignorant: A Simple Political Economy of Rigorous Program Evaluation", Policy Reform, 5(4), pp.251-269.

Riddell, R., Kruse, S., Kyollen, T., Ojanpera, S. and Vielajus, J-L. (1997), Searching for Impact and Methods: NGO Evaluation Synthesis Study, a report produced for the OECD/DAC Expert Group on Evaluation, Helsinki: Department for International Development Cooperation, Ministry of Foreign Affairs. http://www.valt.helsinki.fi/ids/ngo/. Site accessed March 2011.

Thomas, V. (2010), "Evaluation Systems, Ethics and Development Evaluation", American Journal of Evaluation, 31(4), pp.540-548.

Turner, D. (2003), "Evaluation ethics and quality: Results of a survey of Australasian Evaluation Society members", Australasian Evaluation Society Ethics Committee. www.aes.asn.au/about/Documents%20.../ethics_survey_summary.pdf. Site accessed March 2011.

Paper presented at the 2011 Australasian Evaluation Society International Conference, Sydney, Australia, 29 August – 2 September 2011.

sarah@redtanahgroup.com