Tuesday 21 January 2014

New retail partnership for HRSG - Competency Content Available as Digital Downloads


HRSG’s competency content is poised to reach new markets through an innovative alliance with new ecommerce partner Talentilo.

Using HRSG competencies and job profiles, Talentilo has developed a line of competency-based job descriptions and interview kits that can be purchased individually as digital downloads.

Talentilo products are designed to make HRSG’s competency content available to the many small and mid-sized businesses operating in North America and around the world.

According to recent research, more than 50 percent of the working population in the US is employed by businesses with 500 or fewer employees,1 and nearly 70 percent of working Canadians are employed by businesses with 100 or fewer employees.2

While these smaller organizations make up more than half of all employers, they often lack the resources to incorporate competencies into their talent-management processes.

By simplifying the concepts and providing step-by-step instructions on how to use the products, Talentilo has created accessible, affordable competency-based talent tools that even smaller organizations can feel confident using.

Each interview kit and job description in the product range includes HRSG’s detailed, multi-level competency content.

The current product lineup includes talent tools for jobs in accounting, administration, information technology, procurement and supply chain, project management, and sales and marketing. The company plans to release new products for more industry sectors later this year.

More information about Talentilo can be found on the company website at talentilo.com.



Sources
1. SBA data, 2013
2. Industry Canada data, 2012



Tuesday 19 November 2013

Competency-based 360 Multi-source Feedback: Selecting a Multi-source Feedback Software Solution

Part 4 of 4 in the CompetencyCore™ Guide to 360 Multi-source Feedback series:
  1. Feedback Goals
  2. Process and Resources
  3. Delivering the Project
  4. Selecting a multi-source feedback software solution
Download the complete guide to 360 Feedback
By Ian Wayne, M.Sc. and Suzanne Simpson, PhD, C. Psych.

The first three posts in this series examined best practices in establishing goals for, designing, and implementing a 360 Multi-source Feedback process. However, an effective Multi-source Feedback process is almost impossible to implement without a software system in place to support the delivery and analysis of feedback results.

This post examines what to look for in selecting a software system that will work for your organization.

Why a competency management software system is important

Designing, developing, implementing and maintaining a competency framework is difficult to do in a paper-based format. It can quickly become unwieldy and out of control if not managed through a competency management software system.

Without such a system it is difficult to build and maintain processes like 360 Multi-source Feedback based on the most current competency information.

What to look for in a system

  • A system with existing well-researched competency content
    This includes a library of general competencies as well as technical / professional competencies suited to your organization. These days, it is neither necessary nor advisable to develop your competencies from scratch; developing high-quality competencies can take years.

    Vendors often also have standard job competency profiles available that reflect the duties and tasks typically required in jobs within specific functional areas and industry sectors. These can serve as a starting point, which you can then edit and adjust to fit the unique requirements of your organization.

  • A system that supports standardized implementation
    Organizations are increasingly experiencing distributed workplaces, with employees operating out of multiple locations. As a result, it is becoming more difficult to ensure that the human resource processes are implemented in a uniform and standardized way. If you have a system that supports the standardized adoption of competency content and competency-based HR processes, it becomes easier to ensure that HR professionals, managers and employees are accessing and implementing the correct competency content in an online 360 Multi-source Feedback process.

  • A flexible system configurable to your needs
    In many 360 Multi-source systems, the software delivery and content are inextricably linked. Organizations therefore have to “buy-in” to the content and underlying model being delivered in the software. So, for example, if you wish to implement a 360 process to assess Leadership Competencies, you effectively have to adopt the leadership competency model that is part of the feedback tool. But the model being delivered in the software may not meet your organizational needs, reflect the values and culture of your organization, or incorporate the competencies you are attempting to reinforce and develop within your various employee groups.

    A more appropriate and valid approach is to have a system that links to the competency content and models you have designed and developed for your organization. The system should allow you to pick from a list of job competency profiles or models, and then implement this competency information within the 360 Multi-source Feedback tool that is part of the same system.

    In addition, the tool should allow you to select the individuals and groups who will be part of the feedback process. In some cases, for example, you may wish to collect feedback from work colleagues; in other cases you may wish to collect feedback from work colleagues and clients of the target participants. In each case, the groups providing feedback should be based on the job being performed and the purpose of the assessment.

    The system should also be flexible with regard to the rating scale being implemented, both in terms of the number of levels (rating scales can run anywhere from 3 to 7 levels) as well as the scale type (e.g., effectiveness scale; observed frequency of the behaviour; etc.). Research on the number of levels and type of rating scale is extensive and subject to a great deal of debate as to what is best practice. You should have the flexibility to be able to choose or design a rating scale that works best for the feedback process and type of work being performed.

    Finally, you should have the flexibility to identify what type of competency information is being assessed. Most systems take the assessment process down to the level of the behaviour / performance indicators for each competency (e.g., for Client Focus – proficiency level 3 – “Looks for ways to add value beyond the client’s immediate request”), but in some cases the assessment may be performed at the level of the competency. You should be able to choose what is being assessed according to the goal of the assessment. In CompetencyCore, for example, you have the choice of assessing the competencies at the individual behavioural indicator and / or at the Competency level.

  • Reporting of Feedback Results
    The 360 Multi-source Feedback tool should also provide good graphical information that allows the comparison of results across the different types of people providing the feedback. The breakdown of information should not only be provided at the competency level, but also at the level of the individual behavioral indicators for each competency. This allows the target participant to gain different perspectives on his / her performance. It also provides a more diagnostic perspective on how each competency should be developed. For example, although the average performance on a particular competency might meet performance expectations, individual behaviours may require improvement within the competency.

    Finally, organizations can engage in a 360 Multi-source Feedback process to review performance at an organizational, regional, and / or functional level (e.g., all financial jobs). The reporting tools should therefore allow for the aggregation of data to determine key themes across selected groups. Plans and programs can then be identified to address high-priority development or training needs.

  • Security, Confidentiality and Anonymity
    As noted in a previous post, it is important to protect the anonymity of certain types of raters – in particular, when using direct reports to the target participant in the Feedback process. As well, with a small number of raters, one person’s feedback can have a disproportionate impact on the overall ratings. It is therefore important to be able to define the rules in the software for combining certain types of raters’ scores to ensure feedback confidentiality and anonymity.

    Finally, when raters are asked to provide comments to substantiate their ratings, it is important that they are instructed to do this in a positive and helpful way, and that when doing so, they abide by whatever confidentiality and anonymity rules your organization establishes. Make sure that your software allows you to incorporate these kinds of instructions.

  • Integration and Alignment within the Talent Management Process
    360 Multi-source Feedback is not a stand-alone process. It is done to accomplish a particular goal, for example to address gaps in competency through learning and development. Therefore, the software should allow the user to link to other HR processes in the system. For example, in CompetencyCore, any competency gaps identified through the assessment process can feed directly into a Learning Plan tool that provides targeted learning resources (e.g., on-job activities, books, courses, etc.), organized by competency, to help address those gaps. This is only one example of how the 360 Feedback process can be integrated with, and feed into, other Talent Management processes within the organization.


Sources:
DTI. (2001). 360 Degree Feedback: Best Practice Guidelines. Retrieved from www.dti.gov.uk/mbp/360feedback/360bestprgdlns.pdf
Maylett, T. (2009). 360-Degree Feedback Revisited: The Transition From Development to Appraisal. Compensation & Benefits Review, 41(5), 52–59.
Morgeson, F. P., Mumford, T. V., & Campion, M. A. (2005). Coming Full Circle: Using Research and Practice to Address 27 Questions About 360-Degree Feedback Programs. Consulting Psychology Journal: Practice and Research, 57(3), 196–209.



Want to learn more? Get the Guide!

This guide reviews best practices for 360 degree feedback, from establishing 360 feedback goals through process design, project delivery, and software platform selection. It also includes a 360 degree feedback checklist for a successful implementation.

Tuesday 24 September 2013

Competency-based 360 Multi-Source Feedback: Delivering the Project

Part 3 of 4 in the CompetencyCore™ Guide to 360 Multi-source Feedback series:
  1. Feedback Goals
  2. Process and Resources
  3. Delivering the Project
  4. Selecting a multi-source feedback software solution
Get the complete Guide to 360 Feedback
By Ian Wayne, M.Sc. and Suzanne Simpson, PhD, C. Psych.

In the first two posts in this series, we discussed the importance of following best practices in Multi-source Feedback to ensure a positive and enriching experience for everyone participating in the process, starting with defining the Feedback Goals for your organization and then determining the process and resources needed to achieve them.

Having defined the process and resources needed for your Competency-based Multi-source Feedback in your organization, the next steps are to pilot, implement and finally evaluate your program to ensure that it is meeting your intended goals.

Essential Criteria to Consider in Designing Your Process

How participants view the process is critical. If participants do not think that the system is fair, the feedback accurate, or the sources credible, then they are more likely to ignore the feedback they receive.

Piloting

A pilot can generate a realistic picture of the resources required to manage the process throughout the rest of the organization. Valuable insights can be gained into the time required to provide ratings and feedback, as well as how soon the feedback can be given to participants.

Piloting also helps reduce uncertainties by allowing a test group to experience the process. It provides useful information for further planning and communication and allows for a review of the Multi-source Feedback instrument. An initial review allows consideration of such questions as whether the questionnaire is user-friendly, and whether appropriate development actions have been identified.

Lessons learned through the pilot should be considered. Any alterations and adaptations that will make implementation smoother should be made.

Implementation

The most critical part of the implementation process is ensuring that all participants are clear about what is involved. To ensure this occurs:

  • Establish an individual or team to take responsibility for administering the system—this helps ensure that the procedure runs smoothly and any issues are resolved swiftly.
  • Provide a point of contact for participants with questions and concerns.
  • Establish deadlines for providing ratings and timeframes for providing feedback.
  • Send automated email invitations and reminders to individuals who are late completing their feedback. This reduces the administrator’s workload and maintains momentum.
  • Brief raters on the objectives of the scheme and provide instructions for completing questionnaires.
  • Provide clear and positive communication throughout the process.


Providing Feedback

Effective feedback is the springboard for subsequent development and is integral to the success of the process.

How will the feedback be communicated?

Because an individual is receiving sensitive information about how their colleagues, direct reports and manager view their performance, care is required in delivering it. Best practice is to make someone available to help the person interpret the results.

The people giving feedback will need to have the skills to support this process. The facilitators need a good understanding of the organization’s policies on the process, the instrument and report, an awareness of the range of reactions individuals have to feedback, and interpersonal skills in conducting a feedback session. Facilitators must also be seen as trustworthy and credible.

When being done for development, discussion of the results with the facilitator can help focus the discussion on future development planning rather than on the feedback itself. Skilled facilitators will help the individual to draw out evidence and make connections across different people and situations. It is this process that stimulates self-awareness and makes Multi-source Feedback such a powerful process.

When will the feedback be communicated?

Ideally, individuals should receive feedback as soon as possible after the ratings have been collected. This maintains the momentum of the process and the motivation of the individual. Given the pace of change in many organizations, shorter turnaround times also ensure that the feedback is still relevant to the role.

It is important to ensure that people receive feedback when there is support available to interpret the results. Providing a report without support, particularly prior to a weekend or going on holidays, is far from ideal, and can have negative consequences.

Review

Reviewing and evaluating the success of the process is a widely overlooked step. The key question to consider is whether the program met its original purpose. If the original purpose was to improve performance, have relevant development needs been identified? If it was to support the performance review process, has the process supplied the required information in a fair and credible way?

Qualitative Review

A qualitative review with the key people involved can provide invaluable information on whether the process has achieved its goals. This review should include individuals receiving feedback, doing the rating, facilitating the feedback and the line managers of those involved. The timing of the review will depend on the original purpose, with more time needed when the purpose was development.

The Questionnaire

How effective is the questionnaire?
  • Was it consistent with, and did it link to, other relevant indicators of performance in the organization?
  • Did individuals gain useful development information?
  • Did raters use the rating system effectively?
  • Was it reliable?
  • Did it ‘look’ right?

Use a system that aggregates data from the questionnaires in order to identify patterns of strengths and development needs across the participating group. This information can be used to feed into development planning at a strategic level, to ensure that the organization has people with the relevant skills to meet its objectives.
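This kind of group-level aggregation can be sketched in a few lines: average each competency across all participants and surface the ones falling below the target proficiency. The competency names and the target level below are illustrative assumptions:

```python
from collections import defaultdict
from statistics import mean

def development_priorities(responses, target_level=3.0):
    """responses: iterable of (participant, competency, score).

    Returns (competency, group average) pairs for every competency whose
    average across all participants falls below the target level, sorted
    so the largest gaps come first.
    """
    by_competency = defaultdict(list)
    for _, competency, score in responses:
        by_competency[competency].append(score)

    gaps = [(competency, round(mean(scores), 2))
            for competency, scores in by_competency.items()
            if mean(scores) < target_level]
    return sorted(gaps, key=lambda pair: pair[1])
```

The resulting shortlist can feed directly into strategic development planning for the participating group.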

The next and last post in this series examines what to look for in a software system to support the effective implementation of 360 Multi-source Feedback in your organization.


Sources:
DTI. (2001). 360 Degree Feedback: Best Practice Guidelines. Retrieved from www.dti.gov.uk/mbp/360feedback/360bestprgdlns.pdf
Maylett, T. (2009). 360-Degree Feedback Revisited: The Transition From Development to Appraisal. Compensation & Benefits Review, 41(5), 52–59.
Morgeson, F. P., Mumford, T. V., & Campion, M. A. (2005). Coming Full Circle: Using Research and Practice to Address 27 Questions About 360-Degree Feedback Programs. Consulting Psychology Journal: Practice and Research, 57(3), 196–209.





Wednesday 7 August 2013

Competency-based 360 Multi-source Feedback: Process and Resources

Part 2 of 4 in the CompetencyCore™ Guide to 360 Multi-source Feedback series:
  1. Feedback Goals
  2. Process and Resources
  3. Delivering the Project
  4. Selecting a multi-source feedback software solution
Download the complete guide to 360 Feedback
By Ian Wayne, M.Sc. and Suzanne Simpson, PhD, C. Psych.

In the first post of this blog series, we discussed the importance of following best practices in Multi-source Feedback to ensure a positive and enriching experience for all participants in the process, starting with defining the Feedback Goals for your organization. The next step is identifying a process and the resources needed to achieve these goals.

Essential Criteria to Consider in Designing Your Process

How participants view the process is critical. If participants do not think that the system is fair, the feedback accurate, or the sources credible, then they are more likely to ignore the feedback they receive.

Commitment

Commitment from senior management plays a key role in establishing the credibility of a Multi-source Feedback process. If their direct involvement is not possible at the outset, commitment can be gained by demonstrating the success of the system in one part of the organization.

It is important to seek employee input in the development of the process to clarify employee expectations and perceptions of fairness.

The raters

A number of factors need to be considered when choosing raters:

  • Identify the most appropriate people to rate each individual’s performance. The recipient must consider the raters to be credible in order to act on the resulting feedback.
  • Identify an appropriate number of raters. If too few raters are used, one person’s feedback can have a disproportionate impact on the overall results. With a small number of raters it is also difficult to ensure the anonymity of feedback sources. We recommend a minimum of 3 to 5 people per feedback group. If fewer are available, then combine groups—for example, combining direct reports and peers into a single group.
  • Address the concern that the person being rated may respond negatively to raters who provide unfavourable feedback. To minimize this risk, feedback should be delivered anonymously for any group where retaliation could be an issue.

The questionnaire

Best practice suggests that the method of assessment used in a Multi-source Feedback process should:

  • Describe behaviors related to actual job performance. Competencies define the behaviors employees need to display for the organization to be successful; therefore, measuring the competencies at the target proficiency levels required in the job is an essential part of the feedback process.
    For example, consider the competency “Teamwork”, defined as “Working collaboratively with others to achieve organizational goals”, where the specific behaviour being assessed is “Seeks input from other team members on matters that affect them”. Those providing feedback rate how effectively the employee displays this behaviour on the job.

  • Align with other HR processes within the organization. The competencies that are incorporated within the feedback process will depend on the goal of the process. If it is aimed at supporting employee development within their current jobs or roles, then the competency profile for the target employee’s job would be used as the standard for providing feedback.

If, however, the 360 Feedback process is being used to support development for advancement within the organization (e.g., Career Development; Succession Management), then the competency profile for the next level, or another more advanced job, would be the standard used to measure and provide feedback.

It is, therefore, important to define the goal of the 360 Feedback process and then to pick the competency profile most suited to support this goal. These competencies and their associated behavioral indicators will serve as the measurement standards in the assessment process.
  • Reflect the organization’s culture and values. Job profiles often incorporate core competencies that describe in behavioral terms the key values of your organization.

  • Allow respondents to indicate when they have not had the opportunity to observe a behavior (so as to avoid feedback based on guesses).
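In scoring terms, a “not observed” response is simply excluded before averaging, so unobserved behaviours never enter the score. A minimal sketch follows; representing “not observed” as None is an illustrative assumption:

```python
from statistics import mean

NOT_OBSERVED = None  # respondent had no opportunity to observe the behaviour

def item_average(ratings):
    """Average one questionnaire item's ratings, excluding 'not observed'
    responses so guesses never enter the score. Returns None when nobody
    observed the behaviour."""
    observed = [rating for rating in ratings if rating is not NOT_OBSERVED]
    return round(mean(observed), 2) if observed else None
```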

The structure of feedback 

Consistent with best practice, feedback should be broken out for each question by presenting the average ratings from each feedback group so that differences in perspectives are easy to identify. If there are enough raters involved, this should not compromise anonymity. If there are only a few raters, group averages can be combined to protect anonymity.

The option to add observations or comments should be provided. This can help to throw more light on the ratings, but the person giving the feedback needs to be sensitive in providing this information.

It is important, therefore, to provide an orientation to those giving feedback on the best ways to do this, both in terms of providing accurate ratings and in offering comments and examples that support those ratings in a respectful and honest manner.

Once a decision is made on who has access to the ratings, this needs to be followed consistently. A change in who has access to the information is one of the most commonly cited reasons for a lack of trust in the process. If there are good reasons to change, it is critical to seek the permission of the individuals involved before making that change.

Time & Resources Required

When planning a Multi-source Feedback process, it is important to have an accurate view of the time and resources needed to roll it out effectively. This includes the time needed to set up and manage the program, provide the feedback from the different groups, gather the feedback and compile reports, and finally give that feedback to the individual and support subsequent actions to develop and improve performance.

When Multi-source Feedback is being used to encourage and enhance development, it is important to consider in advance the resources needed to support such development. Gathering feedback information is just the starting point in the development cycle. The next step is to create individual learning plans that target specific developmental needs.

Having defined the process and resources needed for Competency-based Multi-source Feedback in your organization, the next steps are to pilot, implement and finally evaluate your program to ensure that it is meeting your intended goals. The third blog in this series addresses this topic.

Sources:
DTI. (2001). 360 Degree Feedback: Best Practice Guidelines. Retrieved from www.dti.gov.uk/mbp/360feedback/360bestprgdlns.pdf
Maylett, T. (2009). 360-Degree Feedback Revisited: The Transition From Development to Appraisal. Compensation & Benefits Review, 41(5), 52–59.
Morgeson, F. P., Mumford, T. V., & Campion, M. A. (2005). Coming Full Circle: Using Research and Practice to Address 27 Questions About 360-Degree Feedback Programs. Consulting Psychology Journal: Practice and Research, 57(3), 196–209.


