Thursday, 29 December 2011

Administering a Certification Test: Points to Remember

Ensuring the effectiveness of an assessment for certification testing requires periodic evaluation of its relevance and utility. As such, the tools and the underlying job analysis must be regularly updated. The frequency of use of a given tool is of particular concern: whether a tool can be re-used depends on its type, how often it is administered, and the number of candidates to whom it is administered. Together, these factors determine the degree of exposure of the test content. Some tools, like multiple-choice exams, are difficult to leak: after taking a 100-question examination, candidates rarely remember more than a few questions, so it is possible to administer such a test more than once. An interview comprising only ten questions, by contrast, carries a high likelihood that candidates will remember most or all of the questions, so this method of assessment requires frequent renewal. In short, the more candidates are evaluated and the more often a test is administered, the more frequently the content must be renewed.
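As a rough illustration of that reasoning, the sketch below combines recall likelihood, administration frequency, and candidate volume into a single exposure score. The formula, threshold values, and function name are invented for illustration; they are not a standard from the source material.

```python
# Hypothetical sketch: estimating how urgently test content needs renewal.
# The formula and the input values are illustrative assumptions only.

def renewal_urgency(recall_fraction: float, administrations_per_year: int,
                    candidates_per_administration: int) -> float:
    """Return a rough exposure score: higher means renew content sooner."""
    exposed_candidates = administrations_per_year * candidates_per_administration
    return recall_fraction * exposed_candidates

# A 100-question multiple-choice exam: candidates recall only a few items.
mc_exam = renewal_urgency(recall_fraction=0.05,
                          administrations_per_year=4,
                          candidates_per_administration=200)

# A 10-question interview: candidates remember most or all questions.
interview = renewal_urgency(recall_fraction=0.9,
                            administrations_per_year=4,
                            candidates_per_administration=200)

print(f"MC exam exposure score:   {mc_exam:.0f}")    # low -> re-use is tolerable
print(f"Interview exposure score: {interview:.0f}")  # high -> renew frequently
```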

This post is based on content from 'Assessment Tools Certification' by Human Resource Systems Group, Ltd.

Thursday, 15 December 2011

Administering a Certification Test

Security surrounding the examination process is essential to the integrity of the credential. Security includes ensuring that the right candidate is taking the test. It also means controlling the exposure of test questions to safeguard the integrity of the tool. On this point, the certification body should put in place a mechanism for rotating assessment questions in order to maintain their objectivity and confidentiality. The test should be administered in a standardized manner, in a secure, proctored environment, and assessors and invigilators should be trained on the proper handling of assessment materials and on administration policies and protocol.
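One minimal way such a rotation could work, assuming a pre-built item bank tagged by content category, is to draw each administration's questions at random within per-category quotas. The bank, categories, and quotas below are illustrative assumptions, not a prescribed scheme.

```python
# Hypothetical sketch of question rotation from an item bank.
import random

item_bank = {
    "ethics":     [f"ethics-q{i}" for i in range(1, 31)],
    "procedures": [f"proc-q{i}" for i in range(1, 51)],
    "safety":     [f"safety-q{i}" for i in range(1, 21)],
}

quotas = {"ethics": 5, "procedures": 10, "safety": 5}  # items per test form

def assemble_form(seed: int) -> list[str]:
    """Draw a fresh, reproducible test form; different seeds rotate content."""
    rng = random.Random(seed)
    form = []
    for category, quota in quotas.items():
        form.extend(rng.sample(item_bank[category], quota))
    rng.shuffle(form)
    return form

# Each administration gets its own seed, so repeat candidates see new items.
print(assemble_form(seed=2011))
```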

Care must also be taken regarding access to testing materials when they are not being administered. All assessment materials and related confidential information should be kept in a secure environment, with processes and policies in place for their handling. Access to these materials should be limited to those who require it and who are trained in their proper handling.

Computer-based testing is becoming increasingly prevalent, though it has not yet overtaken the traditional paper-and-pencil method (60% computer-based versus 72% paper-and-pencil in 2007, up from 34% computer-based and 81% paper-and-pencil in 2003; Knapp, 2007). Regardless of the approach ultimately selected, test administration should always be standardized, structured and delivered in a proctored environment. Safeguards should be in place to address concerns specific to computer-based assessments, such as chat rooms and forums that share tips on how to pass the test.

A common misconception is that computer-based tests are less costly to administer than paper assessments. In practice, this is not necessarily the case. Computer-based testing entails a number of additional considerations, including infrastructure costs, the larger number of test items required due to the greater rate of item exposure, and security concerns.
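To make that comparison concrete, here is a back-of-the-envelope calculation. Every dollar figure in it is an invented assumption, included only to show why computer-based delivery is not automatically cheaper once item-writing and infrastructure are counted.

```python
# Hypothetical per-candidate cost comparison; all figures are invented.
candidates = 1000

# Paper-and-pencil: printing, shipping, venue, manual scoring (per candidate).
paper_cost = candidates * (8 + 5 + 12 + 6)

# Computer-based: platform setup, per-candidate licensing and proctoring, plus
# a larger item bank to offset the higher item-exposure rate.
item_writing = 300 * 250            # e.g., 300 extra items at $250 each
cbt_cost = 50_000 + candidates * (15 + 10) + item_writing

print(f"Paper:          ${paper_cost / candidates:,.2f} per candidate")
print(f"Computer-based: ${cbt_cost / candidates:,.2f} per candidate")
```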

After administering the test, it is important to conduct statistical analyses to ensure that the test is performing as expected and that the pass mark is appropriately set.
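Two standard post-administration statistics are item difficulty (the proportion of candidates answering an item correctly) and item discrimination (the correlation between performance on an item and the total score). A minimal sketch, assuming responses are already scored 0/1; the response matrix is invented:

```python
# Minimal post-administration item analysis: difficulty and discrimination.
# Rows are candidates, columns are items, entries are 1 (correct) or 0.
from statistics import correlation, mean  # correlation requires Python 3.10+

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

totals = [sum(row) for row in responses]

for item in range(len(responses[0])):
    scores = [row[item] for row in responses]
    difficulty = mean(scores)                 # proportion answering correctly
    discrimination = correlation(scores, totals)  # item vs. total score
    print(f"Item {item + 1}: difficulty={difficulty:.2f}, "
          f"discrimination={discrimination:.2f}")
```

Items that prove far too easy, far too hard, or negatively discriminating are the usual flags that the test, or the pass mark, needs review.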

Another consideration is what information to provide to candidates in terms of their results and feedback. While there are different ways of reporting assessment results, it is important to ensure consistency in the type of information shared with candidates. In some cases, this may be just a pass or fail notice; other administrators report a score, the pass mark and feedback by category.
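Whatever level of detail is chosen, consistency is easiest to enforce when every candidate's report is generated from a single template. Below is a hypothetical sketch covering the richer case (score, pass mark, and feedback by category); the field names and categories are illustrative assumptions.

```python
# Hypothetical candidate score report; not a prescribed reporting format.
from dataclasses import dataclass

@dataclass
class ScoreReport:
    candidate_id: str
    score: float
    pass_mark: float
    category_scores: dict[str, float]

    def render(self) -> str:
        status = "PASS" if self.score >= self.pass_mark else "FAIL"
        lines = [f"Candidate {self.candidate_id}: {status}",
                 f"Score: {self.score:.0f} (pass mark: {self.pass_mark:.0f})"]
        for category, pct in self.category_scores.items():
            lines.append(f"  {category}: {pct:.0f}%")
        return "\n".join(lines)

report = ScoreReport("C-0042", score=78, pass_mark=70,
                     category_scores={"Ethics": 85, "Procedures": 72,
                                      "Safety": 75})
print(report.render())
```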

This post is based on content from 'Assessment Tools Certification' by Human Resource Systems Group, Ltd.

Tuesday, 13 December 2011

Test Development: Assessment Tools and Establishing a Standard

Regardless of the tool selected, the assessment for certification must be based on the blueprint requirements. Moreover, the development process should be implemented by testing and measurement experts and informed by job experts. Job experts should be trained to develop or review test questions and to set the pass mark, and they should be representative of the candidate population (e.g., geography, roles, specialties, work environments, protected groups).

The tool should be reviewed by a group of content experts, often an advisory committee. When possible, it is recommended to pilot test the assessment tool. If the certification program is offered in more than one language, equivalent translations of the assessment tool must be produced. On this point, the Association of Test Publishers provides standards that should be adhered to for the proper translation and adaptation of certification test content.

When developing tests for professional certification programs, the pass mark must be linked to expected on-the-job performance and consistent with the nature and intended use of the assessment. Setting it is therefore a formal, standardized process that usually relies on a criterion-referenced method. The criterion-referenced method fixes the passing score at a specified level of mastery (e.g., 50% or 60% in a school setting) of the subject matter the test is designed to assess. With this method, hypothetically every candidate could pass or fail, but only those who pass have acquired the specified level of the subject matter. The criterion-referenced approach stands in contrast to a normative approach, in which the pass mark is based on the distribution of scores: there, approximately 15% of candidates fail at every test administration, regardless of the difficulty of the exam or the candidates' competence level.
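The difference is easy to see on the same set of scores. In the hypothetical sketch below, a fixed criterion-referenced cut of 70% passes everyone who meets the standard, while a norm-referenced rule failing the bottom 15% fails someone on every administration, however strong the cohort. All scores and the 70% cut are invented for illustration.

```python
# Criterion-referenced vs. norm-referenced pass marks on invented scores.
scores = [55, 62, 68, 71, 74, 77, 80, 83, 88, 95]

# Criterion-referenced: fixed cut score tied to subject-matter mastery.
criterion_pass = [s for s in scores if s >= 70]

# Norm-referenced: the bottom 15% fail regardless of absolute performance.
n_fail = max(1, round(0.15 * len(scores)))
norm_pass = sorted(scores)[n_fail:]

print(f"Criterion-referenced (cut=70): {len(criterion_pass)}/{len(scores)} pass")
print(f"Norm-referenced (bottom 15% fail): {len(norm_pass)}/{len(scores)} pass")

# With a uniformly strong cohort, the contrast is starker:
strong = [82, 85, 88, 90, 91, 93, 94, 95, 97, 99]
print(f"Strong cohort, criterion: {len([s for s in strong if s >= 70])}/10 pass")
print(f"Strong cohort, norm:      {10 - n_fail}/10 pass")  # someone still fails
```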

In Knapp’s Certification Industry Scan (2007):
- 76% use a criterion-referenced method;
- 6% use a normative method; and,
- 18% use a score-selected method based on professional consensus or academic standards.

When establishing the standard, one must consider the target candidates (e.g., entry-level, fully practising, advanced), the consequences of certification (low stakes, such as for a hotel attendant or a website designer, or high stakes, such as for a physician or a pilot), as well as the level of difficulty of the assessment.

The pass mark cannot be an arbitrary number (e.g., 75%). Careful consideration is required: if the pass mark is set so high that only the best candidates pass, it may discourage candidates from pursuing the certification. On the other hand, if the assessment is too lenient, the credential may not be perceived as relevant, and no added value is gained by obtaining it.

This post is based on content from 'Assessment Tools Certification' by Human Resource Systems Group, Ltd.

Wednesday, 7 December 2011

How To Develop a Test: Blueprints

Test development begins with the identification of critical tasks performed by competent people working in a given profession. This is typically done through job analysis. Job analysis involves consultations with job experts as well as the review of documents that describe tasks, knowledge and skills required for the occupation (e.g., National Occupational Standards).

According to Knapp’s Certification Industry Scan (2007):
- 90% of certifying bodies utilize a formal study to identify/validate the content of the assessments; and,
- 72% of certifying bodies perform validation study updates every 5 years or less.

The results of the job analysis dictate the type of assessment tool(s) to be developed. For example, if the analysis reveals that successful job performance is highly dependent on specialized technical knowledge (e.g., a specific IT programming language), then the tool should focus on assessing knowledge rather than skills or abilities. Alternatively, if the job analysis indicates that customer focus is a crucial competency, candidates' skills may be best assessed through a performance evaluation tool, such as observation or an on-the-job simulation. Determining the competencies to be assessed is therefore paramount: often a wide range of competencies and skills is identified, and in some cases they cannot all be assessed by a single tool. It is essential to carefully identify the competencies and skills to be assessed and to select suitable assessment tools.

A test blueprint specifies the characteristics that an assessment tool must meet. It links the specific task areas from job analysis to the tool (i.e., it specifies how much weight should be given to each task area or category from occupational standards). In establishing the blueprint, content validity is essential; that is, the tool must assess what it is designed to assess.

Test blueprints typically include (a hypothetical sketch in code follows the list):
- A list of competencies, skills and/or knowledge areas
- The type of assessment tool (e.g., written test, structured interview)
- Format of questions (e.g., multiple-choice, short-answer)
- Number of questions
- Proportion of questions within each category
- Characteristics of questions (e.g., cognitive level, context, domain)
- Scoring procedures
- Format of assessment tool (e.g., paper-based, computer-based)
- Target population
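As a simplified illustration of the idea, a blueprint can be captured as structured data and used to drive test assembly, so that every version allocates questions the same way. Everything below (categories, weights, counts) is a hypothetical example, not a blueprint from the source.

```python
# Hypothetical test blueprint captured as data; weights drive how many
# questions each task area receives in any version of the test.
blueprint = {
    "tool": "written test",
    "format": "multiple-choice",
    "delivery": "computer-based",
    "total_questions": 100,
    "weights": {            # proportion of questions per task area
        "Client relations": 0.30,
        "Technical procedures": 0.45,
        "Safety and ethics": 0.25,
    },
}

assert abs(sum(blueprint["weights"].values()) - 1.0) < 1e-9, "weights must sum to 1"

# Translate weights into per-category question counts; applying the same
# allocation to each version is what keeps versions equivalent.
for area, weight in blueprint["weights"].items():
    count = round(weight * blueprint["total_questions"])
    print(f"{area}: {count} questions")
```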

A blueprint is, in a sense, like a recipe: it provides instructions on how to assemble a test, which ensures consistency and equivalency across different versions. Accordingly, the tool must be developed and/or revised by job experts.

This post is based on content from 'Assessment Tools Certification' by Human Resource Systems Group, Ltd.