Part 2 of 4 in the CompetencyCore™ Guide to 360 Multi-source Feedback series:
By Ian Wayne, M.Sc. and Suzanne Simpson, PhD, C. Psych.
- Feedback Goals
- Process and Resources
- Delivering the Project
- Selecting a multi-source feedback software solution
In the first post of this blog series, we discussed the importance of following best practices in Multi-source Feedback to ensure a positive and enriching experience for all participants in the process, starting with defining the Feedback Goals for your organization. The next step is identifying a process and the resources needed to achieve these goals.
Essential Criteria to Consider in Designing Your Process
How participants view the process is critical. If participants do not think that the system is fair, the feedback accurate, or the sources credible, then they are more likely to ignore the feedback they receive.
Commitment
Commitment from senior management plays a key role in establishing the credibility of a Multi-source Feedback process. If their direct involvement is not possible at the outset, senior management commitment can be gained by letting them see the system succeed in one part of the organization first.
It is important to seek employee input in the development of the process to clarify employee expectations and perceptions of fairness.
The raters
A number of factors need to be considered when choosing raters:
- Identify the most appropriate people to rate each individual’s performance. The recipient must consider the raters to be credible in order to act on the resulting feedback.
- Identify an appropriate number of raters. If too few raters are used, one person’s feedback can have a disproportionate impact on the overall results, and with a small number of raters it is also difficult to ensure the anonymity of feedback sources. We recommend a minimum of 3 to 5 people per feedback group. If fewer are available, combine groups, for example direct reports and peers, into a single group (see the sketch after this list).
- Address the concern that the person being rated may respond negatively to raters who provide critical feedback. To minimize this risk, feedback should be delivered anonymously for any group where retaliation could be an issue.
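To make the rater-selection rule concrete, here is a minimal sketch of how undersized feedback groups might be pooled before the questionnaire goes out. The group names, the function name, and the threshold of 3 are assumptions made for illustration, not features of any particular 360 platform.

```python
from collections import defaultdict

MIN_RATERS_PER_GROUP = 3  # assumed anonymity threshold (3-5 raters recommended)

def plan_feedback_groups(raters):
    """raters: list of (name, group) pairs, e.g. ("Ana", "peer")."""
    groups = defaultdict(list)
    for name, group in raters:
        groups[group].append(name)

    planned, merged_names, merged_members = {}, [], []
    for group, members in groups.items():
        if len(members) >= MIN_RATERS_PER_GROUP:
            planned[group] = members
        else:
            # Too few raters to report this group on its own without
            # risking anonymity, so pool it with other small groups.
            merged_names.append(group)
            merged_members.extend(members)
    if merged_members:
        planned[" / ".join(merged_names) + " (combined)"] = merged_members
    return planned

raters = [("Ana", "peer"), ("Ben", "peer"),
          ("Cal", "direct report"), ("Dee", "direct report"),
          ("Eli", "internal customer"), ("Fay", "internal customer"),
          ("Gus", "internal customer")]
print(plan_feedback_groups(raters))
# {'internal customer': ['Eli', 'Fay', 'Gus'],
#  'peer / direct report (combined)': ['Ana', 'Ben', 'Cal', 'Dee']}
```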
The questionnaire
Best practice suggests that the method of assessment used in a Multi-source Feedback process should:
- Describe behaviors related to actual job performance. Competencies define the behaviors employees need to display for the organization to be successful; therefore, measuring the competencies at the target proficiency levels required in the job is an essential part of the feedback process.
For example, the competency being assessed might be “Teamwork”, defined as “Working collaboratively with others to achieve organizational goals”, with the specific behavior being assessed being “Seeks input from other team members on matters that affect them”. Those providing feedback rate how effective the employee is based on the employee’s observed behavior on the job.
- Align with other HR processes within the organization. The competencies that are incorporated within the feedback process will depend on the goal of the process. If it is aimed at supporting employee development within their current jobs or roles, then the competency profile for the target employee’s job would be used as the standard for providing feedback.
If, however, the 360 Feedback process is being used to support development for advancement within the organization (e.g., Career Development; Succession Management), then the competency profile for the next level, or another more advanced job, would be the standard used to measure and provide feedback.
It is, therefore, important to define the goal of the 360 Feedback process and then to pick the competency profile most suited to support this goal. These competencies and their associated behavioral indicators will serve as the measurement standards in the assessment process.
- Reflect the organization’s culture and values. Job profiles often incorporate core competencies that describe in behavioral terms the key values of your organization.
- Allow respondents to indicate when they have not had the opportunity to observe a behavior (so as to avoid feedback based on guesses); the sketch below shows one way such responses can be handled.
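As a rough illustration of the points above, the sketch below models a single competency-based questionnaire item using the Teamwork example, a 1-to-5 effectiveness scale, and a “Not Observed” option that is excluded from scoring. The field names and the scale are assumptions made for this example, not a prescribed format.

```python
from dataclasses import dataclass
from typing import Optional

NOT_OBSERVED = None  # raters may skip behaviors they have not had a chance to observe

@dataclass
class QuestionnaireItem:
    competency: str      # e.g. "Teamwork"
    definition: str      # plain-language definition of the competency
    behavior: str        # the specific behavioral indicator being rated
    target_level: int    # proficiency level required in the target job

def score_item(ratings: list) -> Optional[float]:
    """Average the 1-5 ratings for one item, ignoring 'Not Observed' responses."""
    observed = [r for r in ratings if r is not None]
    return sum(observed) / len(observed) if observed else None

item = QuestionnaireItem(
    competency="Teamwork",
    definition="Working collaboratively with others to achieve organizational goals",
    behavior="Seeks input from other team members on matters that affect them",
    target_level=3,
)

# Three raters respond; one has not had the opportunity to observe the behavior.
print(item.behavior, "->", score_item([4, 5, NOT_OBSERVED]))  # prints "... -> 4.5"
```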
The structure of feedback
Consistent with best practice, feedback should be broken out for each question by presenting the average ratings from each feedback group so that differences in perspectives are easy to identify. If there are enough raters involved, this should not compromise anonymity. If there are only a few raters, group averages can be combined to protect anonymity.
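The sketch below shows one way this breakdown could be computed for a single question: ratings are averaged per feedback group, and groups with too few raters are combined before being reported. The scale, the group labels, the grouping shown, and the minimum group size of 3 are the same illustrative assumptions as in the earlier sketches.

```python
from collections import defaultdict
from statistics import mean

MIN_RATERS_PER_GROUP = 3  # assumed anonymity threshold, as in the earlier sketch

def group_averages(responses):
    """responses: (group, rating) pairs for one question;
    rating is None when the rater selected 'Not Observed'."""
    by_group = defaultdict(list)
    for group, rating in responses:
        if rating is not None:          # ignore 'Not Observed' responses
            by_group[group].append(rating)

    report, pooled = {}, []
    for group, ratings in by_group.items():
        if len(ratings) >= MIN_RATERS_PER_GROUP:
            report[group] = round(mean(ratings), 2)
        else:
            pooled.extend(ratings)      # too few raters to report separately
    if pooled:
        report["other raters (combined)"] = round(mean(pooled), 2)
    return report

responses = [("peer", 4), ("peer", 5), ("peer", 4),
             ("direct report", 4), ("direct report", 3),
             ("manager", 5)]
print(group_averages(responses))
# {'peer': 4.33, 'other raters (combined)': 4.0}
```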
The option to add observations or comments should be provided. Comments can shed more light on the ratings, but the person giving the feedback needs to be thoughtful about how they are worded.
It is important, therefore, to give those providing feedback an orientation on how best to do this: how to provide accurate ratings, and how to offer comments and examples that support those ratings in a respectful and honest manner.
Once a decision is made on who has access to the ratings, this needs to be followed consistently. A change in who has access to the information is one of the most commonly cited reasons for a lack of trust in the process. If there are good reasons to change, it is critical to seek the permission of the individuals involved before making that change.
Time & Resources Required
When planning a Multi-source Feedback process, it is important to have an accurate view of the time and resources needed to roll it out effectively. This includes the time needed to set up and manage the program, provide the feedback from the different groups, gather the feedback and compile reports, and finally give that feedback to the individual and support subsequent actions to develop and improve performance.
When Multi-source Feedback is being used to encourage and enhance development, it is important to consider in advance the resources needed to support such development. Gathering feedback information is just the starting point in the development cycle. The next step is to create individual learning plans that target specific developmental needs.
Having defined the process and resources needed for Competency-based Multi-source Feedback in your organization, the next steps are to pilot, implement and finally evaluate your program to ensure that it is meeting your intended goals. The third blog in this series addresses this topic.
Want to learn more? Get the Guide!
This guide reviews best practices for 360 degree feedback, from establishing 360 feedback goals through process design, project delivery, and software platform selection. It also includes a 360 degree feedback checklist for a successful implementation.