Using Technology to Measure Learning

During the past few decades, employer demand has shifted toward workers with higher education credentials, as evidenced by the higher wages paid to those with a college degree. In addition to this increase in entry qualifications, the knowledge required to perform many jobs continues to grow.

Work that requires interaction with information technology makes up a growing proportion of jobs. By 2006, almost half of all workers will be employed by industries that are major producers or intensive users of information technology products and services.

The need to continually upgrade skills—and the pay differential associated with increased education—is fueling the growth of online learning in the workplace and higher education. As of 2002, U.S. institutions reportedly offered more than 6,000 accredited courses electronically. In 2002, more than 2 million individuals were enrolled in online learning programs, representing a tripling of 1998 enrollment.

The growth of Internet-based distance learning is having a significant impact on traditional education as well. A June 2000 survey by the National Education Association of its members determined that one in 10 higher education faculty members had taught a distance-learning course, and 90 percent said distance learning was either offered or being considered at their institution.

Commercial vendors and academic organizations are delivering sophisticated software and content via the Web, blurring the distinctions between distance learning and local education. According to the NEA study, 40 percent of faculty teaching a Web-based course held a very positive view.

Harvard President Neil L. Rudenstine wrote, “(The) Internet has distinctive powers to complement, reinforce, and enhance some of our most effective traditional approaches to university teaching and learning. We should embrace those capacities, not resist them.”

Researchers are now beginning to tackle the more complicated research task of investigating the impact of technology use in meeting these new expectations for what students should learn. They are examining students’ ability to understand complex phenomena, analyze and synthesize multiple sources of information and build representations of their own knowledge. This model of integrated technology-supported learning emphasizes the ability to access, interpret and synthesize information instead of rote memorization and the acquisition of isolated skills.

There are four major uses of technology to support learning. Technology can be used:

  • As a tutor (examples are drill-and-practice software, tutoring systems, instructional television, computer-assisted instruction and intelligent computer-assisted instruction).
  • As a means to explore (examples are CD-ROM encyclopedias, simulations, hypermedia stacks, network search tools and microcomputer-based laboratories).
  • As a tool to create, compose, store and analyze data (examples are word processing and spreadsheet software, database management programs, graphic software, desktop publishing systems, hypermedia, network search tools and videotape recording and editing equipment).
  • As a means to communicate with others (examples are e-mail, interactive distance learning and the use of collaborative tools).

As electronic learning becomes more widespread, the substance and format of assessment will need to keep pace. The Web-based Education Commission states, “Perhaps the greatest barrier to innovative teaching is assessment that measures yesterday’s learning goals… Too often today’s tests measure yesterday’s skills with yesterday’s testing technologies—paper and pencil.”

As students do more of their learning using technology tools, asking them to express that learning in a medium different from the one they typically work in will become increasingly untenable. This is especially true if working with technology (e.g., searching for information using the Internet or writing on a computer) is part of the skill set being tested. These changes in learning methodology offer exciting possibilities for assessment innovation.

An obvious result of using online teaching methods is the potential for integrating assessment with instruction. In this scenario, students are able to respond to online instructional exercises electronically, so their responses can be recorded. This learning “trail” can then be analyzed to indicate what each student knows and needs to learn next, enabling instruction to be tailored to individual learning needs.
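A learning "trail" of this kind can be analyzed quite simply. The sketch below is illustrative only, assuming a made-up record format and a made-up mastery threshold; it records each response and flags the topics a student has not yet mastered, which is the diagnostic step the paragraph describes.

```python
# Hypothetical sketch of analyzing a learning "trail": each response to an
# online exercise is recorded, then summarized to suggest what to study next.
# The Response fields and the 80% mastery threshold are assumptions for
# illustration, not taken from any real product.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Response:
    student: str
    topic: str
    correct: bool

def next_topics(trail, mastery_threshold=0.8):
    """Return topics whose accuracy falls below the mastery threshold."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempted]
    for r in trail:
        totals[r.topic][1] += 1
        if r.correct:
            totals[r.topic][0] += 1
    return sorted(t for t, (ok, n) in totals.items() if ok / n < mastery_threshold)

trail = [
    Response("ana", "fractions", True),
    Response("ana", "fractions", True),
    Response("ana", "decimals", False),
    Response("ana", "decimals", True),
]
print(next_topics(trail))  # ['decimals']
```

In a real system the trail would come from a course database rather than a list in memory, but the tailoring logic reduces to exactly this kind of per-topic summary.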

In addition to assessment embedded in Internet-delivered courses, one can imagine Internet-delivered assessment embedded in instructor-led classroom activity. Such assessment could take the form of periodic exercises that both teach and test the student. In this scenario, the exercises would be standardized and performance could serve, depending on the level of aggregation, to indicate achievement on a number of levels: individual, classroom, school, district, state or national. These Web-based exercises serve accountability as well as diagnostic purposes and are useful to both individuals and institutions.
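Because the exercises are standardized, the same response data can be rolled up at each level of aggregation. The sketch below illustrates the idea with invented school, classroom and score fields; only the grouping logic is the point.

```python
# Illustrative sketch: the same standardized exercise scores aggregated at
# several levels (classroom, school). The record layout is an assumption
# made up for this example, not a real reporting format.
from statistics import mean
from collections import defaultdict

records = [
    # (school, classroom, student, score)
    ("Lincoln", "3A", "s1", 72), ("Lincoln", "3A", "s2", 88),
    ("Lincoln", "3B", "s3", 95), ("Grant", "4A", "s4", 81),
]

def aggregate(records, key):
    """Average scores grouped by a key function applied to each record."""
    groups = defaultdict(list)
    for rec in records:
        groups[key(rec)].append(rec[3])
    return {k: mean(v) for k, v in groups.items()}

by_class = aggregate(records, key=lambda r: (r[0], r[1]))
by_school = aggregate(records, key=lambda r: r[0])
print(by_school)  # average score per school
```

District, state or national figures would be produced the same way, simply by widening the grouping key.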

Several institutions are already taking advantage of IT’s capabilities by incorporating assessments into courseware. For example, Ohio State integrated a learning-style inventory mechanism directly into its courseware for introductory statistics. The inventory enables individuals to identify their learning style and scores students along several dimensions of learning: active versus reflective, sensing versus intuitive, visual versus verbal and sequential versus global. A study-skills assessment instrument can also be integrated to help learners make appropriate choices from a selection of learning options.

Florida Gulf Coast University (FGCU) offers an introductory general-education course called “Styles and Ways of Learning.” In that course, students complete the Myers-Briggs Type Indicator (MBTI) instrument, which identifies students’ preferences among sets of mental processes or mental habits.

The list of organizations delivering electronic testing is impressive and includes nonprofit testing agencies, for-profit testing companies, school districts, state education departments and government agencies.

Benefits of Technology-Based Assessments

Besides enabling testing to inform instruction, the new technology offers some practical educational benefits. Moving information electronically is generally easier and faster than moving things physically. Once the infrastructure is in place, electronic processing can help large-scale assessment programs with:

  • Reducing the cost of developing tests. Computer technology makes questions easier and cheaper to produce. With inexpensive, high-quality test questions, tutors have better resources for classroom assessment. Students have more opportunities to identify areas in need of further study and to practice critical skills.
  • Flexibility in delivering tests. Computer delivery enables tests to be individually administered. Different tests may be administered simultaneously to different students in the same classroom.
  • New types of test questions that include audio, video and animation. This capability makes it possible to measure important skills that paper-and-pencil tests simply cannot assess (e.g., skill in using computers to search for information). It also eliminates the need for specialized testing equipment, such as audiocassette recorders and VCRs.
  • Using remote graders at lower costs. The ability to securely transmit responses to remote graders makes it practical to include more open-ended questions in assessments because the cost and time required for grading are greatly reduced. Including more open-ended questions encourages students and tutors to focus on problem-solving activities that are more like the ones required for success in work and advanced academic environments.
  • Distribution of results. With electronic distribution, students and decision-makers can get results instantly. For example, live labs and interactive simulations can provide engaging and challenging tasks and supply instant feedback on individual performance. Such a learning environment engages the full range of the human senses through multimedia technology and encourages active learning.
  • Continuous assessment. When traditional, periodic assessment is shifted toward continuous assessment, individuals view it as a learning experience rather than as a make-or-break performance measure. Continuous assessment helps students keep up with study and recognize holes in their understanding.
  • Intuitive instruction. Technology-based assessments have increased in sophistication, making them more feasible and more manageable. Computer-adaptive testing and assessment can follow an individual’s progress and adapt the learning environment to respond to areas of difficulty. Personalized paths of learning are then created that present content tailored to meet individual weaknesses and provide tasks that promote thinking and foster retention. With access to individual progress reports, mentors can plan interventions accordingly.
  • Improved analysis of data. With electronic data capture, the analysis can be deeper. For example, the time taken on each question can be reported. Areas where the student stalled can be explored for further training.
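The last two benefits, adaptive testing and richer data capture, can be combined in a few lines. The sketch below is a deliberately simple step rule, not a real item-response-theory implementation: after a correct answer the next item is harder, after a miss it is easier, and the time spent on each item is logged for later analysis. All names and the item bank are invented for illustration.

```python
# Minimal sketch of computer-adaptive item selection with per-item timing.
# This is a simple up/down step rule, NOT a real IRT adaptive engine; the
# item bank and difficulty scale are assumptions for the example.
import time

ITEM_BANK = {d: f"question at difficulty {d}" for d in range(1, 11)}

def run_adaptive_test(answer_fn, start=5, steps=4):
    """answer_fn(difficulty) -> bool. Returns (difficulty trail, timings)."""
    level, trail, timings = start, [], []
    for _ in range(steps):
        t0 = time.perf_counter()
        correct = answer_fn(level)            # present item, collect answer
        timings.append(time.perf_counter() - t0)  # time spent on this item
        trail.append((level, correct))
        level = min(10, level + 1) if correct else max(1, level - 1)
    return trail, timings

# Simulated student who can answer items up to difficulty 6.
trail, timings = run_adaptive_test(lambda d: d <= 6)
print(trail)  # [(5, True), (6, True), (7, False), (6, True)]
```

The timing list is exactly the kind of data the "improved analysis" bullet describes: unusually long times on particular items can flag areas where the student stalled.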

Challenges

The first challenge is cost—both the upfront investment and the ongoing expenses. In a Web-based system, investments are needed in local telecommunications hardware, computers and test authoring and delivery software. Labor expenses include costs for entering questions into the testing software; ensuring quality in the test’s operation; extracting student records from the test database and translating the information into a form suitable for analysis; and servicing the technology that runs the system.

There are also ongoing connection charges. To mitigate some of these costs, those interested in pursuing electronic assessment may be able to leverage computer systems available for computer instruction. Savings can also result from the elimination of printing and shipping activities when paper testing ceases. Still, substantial funds are needed to launch any technology-based assessment program.

Another challenge in building a technology-based assessment system is ensuring dependability. Computers and the Internet do not always function as desired. Testing sessions may be interrupted, proceed so slowly as to interfere with student performance or encounter difficulties in machine operation or telecommunications that cause data to be lost entirely. Unlike a paper-and-pencil testing system, keeping a computerized system functioning requires significant technical expertise, which many schools lack.

Security is a third concern. Security issues in a computerized system are conceptually similar to those encountered with conventional tests, though the mechanisms to accomplish breaches and protect against them are different. The issues relate to protecting test questions and to ensuring the integrity and confidentiality of student data. Test questions can be stolen from central servers during transmission or from the local machines students use to take the assessment. The same is true for student data. The good news is that these threats can be minimized through thoughtful technical design (e.g., by encrypting questions and student records) and through sensible administrative procedures (e.g., by closely guarding administrator passwords). These protections are far from perfect, but they are generally as good, if not better than, the ones available for paper-and-pencil tests.

Last, there are measurement issues. To compare performance equitably, whether among students or against a curriculum standard, it is necessary that each student be tested under the same conditions. However, the reality of statewide computerized testing is that equipment will vary from one location to the next and sometimes, from one machine to the next within the same school. Similarly, the speed of the Internet connection may differ across schools or within the same school by time of day.

The result of these variations is that one student may take a test on a small-screen monitor running at low resolution, thereby requiring repeated scrolling to read comprehension passages. Because of the Internet connection, that student may have to wait for the screen to refresh before the next passage is displayed. In contrast, another student may be able to see not only the entire passage, but also the questions on the same single screen, with no wait between passages. It is known that such variations can affect performance, but it is not known how to adjust for them in test results.

Another issue relates to computer familiarity. It is true that familiarity is a temporary concern; computers are becoming so widespread that learners will soon be comfortable with them. But until that time, extra care must be taken to ensure that low performance on computerized tests reflects low standing in the skill the test is designed to measure, not a lack of familiarity with computers.

Over the past 20 years, Chani Pangali, Ph.D., has led employee development, growth and retention initiatives for a number of organizations. As senior vice president for KnowledgePool, Pangali contributes to the vision and direction of the organization and drives business growth in key areas. For more information, e-mail Chani at cpangali@clomedia.com.

November 2003