
October 31, 2005

My Presentation

My panel was highly interactive, consisting mostly of questions and answers between the moderator (Jerry Heneghan of Virtual Heroes) and the panelists (Elaine Raybourn of Sandia National Laboratories, Jeff Taekman of Duke, Priscilla Elfrey of NASA, and me). As a result, I didn't give a single contiguous talk on assessment per se. But strung together, my comments would look something like this:

3Dsolve's flagship project, completed this summer, was built for the US Army Signal Center and School at Fort Gordon, GA: 110 hours of simulation-based task training for the 25B10 MOS, Information Systems Operator/Analyst. We've now started work on the follow-on, 120 hours of instruction for the 25B30-level MOS.

A widely quoted statistic holds that we retain approximately 5 percent of what we hear in a lecture, but approximately 75 percent of what we learn through hands-on training. At a 30,000-foot view, our goal is to get as close to that 75 percent figure as possible with virtual hands-on training. In that light, assessment for us can be framed as, "How close to 75 percent are we getting on a per-student basis?"

It's important to keep in mind the distinction between assessment and validation, two separate concepts that are unfortunately sometimes used interchangeably. Assessment is the process of evaluating the performance of an individual student: Did he/she learn the material? Did he/she achieve the learning objectives? Validation is the process of evaluating the courseware as a whole: Is it valid? Was it properly designed given the learning objectives?

Our task training software for the Signal School uses what is known as the FAPV model: Familiarize, Acquire, Practice, and Validate (this is an unfortunate misuse of the term validate, but then we didn't invent the model). In Familiarize mode, students can explore freely to discover the environment for themselves. In Acquire mode, we guide them through a particular lesson, telling them exactly what to do for each step of the process. In Practice mode, students navigate and interact on their own, but can receive hints from the software as needed. In Validate mode, we provide no hints whatsoever, and the student is expected to be able to execute all lesson steps without assistance.
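
For the curious, here's a minimal sketch of how the four modes can be thought of as a policy over guidance, hints, and scoring. This is purely illustrative -- the mode names come from FAPV, but the representation is invented for this post, not our actual implementation:

    from dataclasses import dataclass

    @dataclass
    class ModePolicy:
        guided: bool   # does the software walk the student through each step?
        hints: bool    # are hints available on request?
        scored: bool   # are actions tracked for assessment?

    # Mode names from the FAPV model; the policy values reflect the
    # description above, but the representation itself is assumed.
    FAPV_MODES = {
        "Familiarize": ModePolicy(guided=False, hints=False, scored=False),  # free exploration
        "Acquire":     ModePolicy(guided=True,  hints=False, scored=False),  # full guidance, so hints are moot
        "Practice":    ModePolicy(guided=False, hints=True,  scored=True),   # hints on demand, tracked
        "Validate":    ModePolicy(guided=False, hints=False, scored=True),   # no assistance, tracked
    }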

In both Practice and Validate modes, we track virtually every action taken by the student: Where did he/she go? What did he/she look at? When did he/she take a given action? What did he/she click on and in what sequence? All this data is used for assessment purposes.

We define our lessons in a documented XML-based format that we developed. A typical lesson might consist of 30-60 individual steps. The student is expected to navigate through the environment as required and perform the steps, which may be linear, non-linear, or a mix of the two, depending on the particular lesson. A "happy path" defines the nominal sequence of actions (or sequences, for non-linear content) that constitutes a correct traversal of the lesson content. We use this same XML-based content not only to guide the student through the lesson in Acquire mode, but also to evaluate the student's performance in Practice and Validate modes -- by comparing the student's actions to the nominal happy path.
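
As a rough sketch of the idea for a linear lesson -- the XML schema and function names here are invented for illustration, not our documented format -- happy-path assessment boils down to comparing the logged action sequence against the nominal one:

    import xml.etree.ElementTree as ET

    # Hypothetical lesson definition; element and attribute names are invented.
    LESSON_XML = """
    <lesson id="configure-router">
      <step id="1" action="power-on" target="router"/>
      <step id="2" action="connect"  target="console-cable"/>
      <step id="3" action="enter"    target="config-mode"/>
    </lesson>
    """

    def happy_path(lesson_xml):
        """Extract the nominal (action, target) sequence from the lesson XML."""
        root = ET.fromstring(lesson_xml)
        return [(s.get("action"), s.get("target")) for s in root.findall("step")]

    def assess(logged_actions, path):
        """Fraction of happy-path steps matched, in order (linear lessons only)."""
        matched = sum(1 for expected, actual in zip(path, logged_actions)
                      if expected == actual)
        return matched / len(path)

    # A student who got the first two steps right, then deviated on the third:
    log = [("power-on", "router"), ("connect", "console-cable"), ("enter", "user-mode")]
    print(assess(log, happy_path(LESSON_XML)))  # 0.666...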

We provide assessment results both to students directly within the courseware itself, as soon as they work through a given lesson, and to the instructors by uploading results to the Army's Learning Management System (LMS). What we have found is that the process as a whole, and notably the assessment data, changes the role of the instructor. Instead of leading classes through "death by PowerPoint" (as they put it), instructors become coaches, able to spend their time with the students who need their help the most, when they need it.
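
For illustration, a per-lesson result record for LMS upload might look like the sketch below. All field names and the pass threshold are assumptions, not the actual interface; a real integration would follow whatever interface the LMS exposes (in the Army's case, most likely a standard such as SCORM):

    # Hypothetical shape of a per-lesson result record for LMS upload.
    def lesson_result(student_id, lesson_id, score):
        return {
            "student": student_id,
            "lesson": lesson_id,
            "score": round(score, 2),   # e.g., fraction of happy-path steps completed
            "passed": score >= 0.8,     # illustrative pass threshold
        }

    print(lesson_result("PVT-1234", "configure-router", 2 / 3))
    # {'student': 'PVT-1234', 'lesson': 'configure-router', 'score': 0.67, 'passed': False}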

To use an analogy, I like to think of our approach as precision-guided munitions for learning. By focusing instructors' time where it is needed most, simulation-based learning with integrated, real-time assessment dramatically increases instructor effectiveness.

From an article on precision-guided munitions (not included in my talk):

In the fall of 1944, only seven per cent of all bombs dropped by the Eighth Air Force hit within 1,000 feet of their aim point; even a 'precision' weapon such as a fighter-bomber in a 40 degree dive releasing a bomb at 7,000 feet could have a circular error (CEP) of as much as 1,000 feet. It took 108 B-17 bombers, crewed by 1,080 airmen, dropping 648 bombs to guarantee a 96 per cent chance of getting just two hits inside a 400 x 500 feet German power-generation plant; in contrast, in the Gulf War, a single strike aircraft with one or two crewmen, dropping two laser-guided bombs, could achieve the same results with essentially a 100 per cent expectation of hitting the target, short of a material failure of the bombs themselves.

Chris Chambers Presentation

This presentation is my favorite of the day so far. It was by Christopher Chambers, Deputy Director, Office of Economic & Manpower Analysis, Army Game Project. Chris is based at West Point and is leading an effort to create a standardized platform for all sorts of Army training. The level of thought that he has put into this so far is, at first glance, extremely impressive.

America's Army: Morphing the Game to Training and Mission Rehearsal Applications

Training and first-person gameplay are related

University of Rochester study found that

  • Visual acuity was significantly higher among FPS game players than non-gamers
  • Visual acuity effects could be 'trained' in 10 hours of gaming

Army Research Institute study found that FPS games are...

  • "Best for learning procedures and recalling experiential details"
  • Procedural information is retained at up to 12 percent higher rates than factual information (i.e., written or oral)
  • "Instructional objectives should be integrated into game storylines"

First-person games differ greatly from traditional training simulations

  1. Games are "playable" -- i.e., live entities, in multiplayer teams
  2. Entertainment-focused rather than engineering-focused -- fun matters
  3. Can extend the training day if the game is engaging
  4. Focus on mass market -- household computers, Internet
  5. Common devices and conventions mean low barriers to learning
  6. People-focused, not equipment-focused
  7. No need for large facilities and staffs
  8. Access to huge pools of participants -- test, play, experiment
  9. Incorporate realism as needed, but not a slave to it

Soldier indoctrination and training could benefit from a common experience across a continuous lifecycle

  1. Find (USAAC)
  2. Recall (USAREC)
  3. Basic training (TRADOC)
  4. Unit training (FORSCOM)

Common platform breeds familiarity and increases training effects sooner

Game design matters

Persistent characters and attributes

  • Game should track progress
  • Data collection should improve experience in the next round

Low barriers to entry

  • Train the task, not the game skill
  • Use common, familiar game platforms and commands
  • Leverage generational habits to improve training

Replayability

  • Infinite would be best -> live entities
  • More replayability is better -> AI entities
  • Randomness -- spawns, team structures, terrain, mission time, objectives

Game engine is paramount

Game engine determines usability for training

  • Geo-specific terrain?
  • Destructible environments?
  • Dynamic lighting?
  • Randomness?
  • Polygon counts and frame rates?
  • Interoperability?

Engage the soldier

  • Fun <-> realism
  • Fantasy games <-> public games <-> game-based training products <-> flight simulations

Fun/realism tradeoff affects training

Factual learning needs imply more realism in games

  • Experiential learning needs imply an engaging design
  • Compressing time, varying mission attributes, and focused tasks increase fun

Collecting data for AARs

Game data fields that support training and education feedback are easily captured (see the sketch after the lists below)

Raw data

  • Number of attempts before correct action
  • Number of misses
  • Time to accomplish tasks

Game state data

  • To replay / replicate key actions to facilitate AARs

Synthetic data

  • Downstream effects of performance
  • Dynamic content delivery based on player data and preferences
  • Human virtual dynamics based on analysis of gameplay actions or inactions (fatigue, stress, skill degradation)
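
To make the three layers concrete, here's a minimal sketch of record types for each. All class and field names are assumed for illustration; this is not the Army Game Project's actual schema:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class RawEvent:               # raw data: low-level performance measures
        timestamp: float
        task: str
        attempts_before_correct: int
        misses: int

    @dataclass
    class GameStateSnapshot:      # game state data: snapshot for replaying key actions
        timestamp: float
        entity_positions: Dict[str, tuple]
        objectives_status: Dict[str, str]

    @dataclass
    class SyntheticMeasure:       # synthetic data: derived from gameplay analysis
        fatigue: float
        stress: float
        skill_degradation: float

    @dataclass
    class AARSession:             # everything captured for one session's AAR
        raw_events: List[RawEvent] = field(default_factory=list)
        snapshots: List[GameStateSnapshot] = field(default_factory=list)
        derived: List[SyntheticMeasure] = field(default_factory=list)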

America's Army is a platform for communication, by virtue of its design

America's Army (public application)

  • Engaging virtual schoolhouse
  • Focuses on the soldier as a member of a team rather than teams composed of soldiers
  • Designed to render high-fidelity, first-person environments and interactions within a multiplayer team setting

Not just a game

  • Localization
  • Console games
  • Wireless games
  • License program
  • Training
  • Mission rehearsal
  • RDT&E and prototyping
  • Education
  • Middleware and database

America's Army platform

  • Technologies (AAR, SOAR AI, DIS/HLA, DVHT)
  • America's Army content library (over 15,000 catalogued assets)
  • America's Army custom code base
  • Unreal SDK 2.5/3.0
  • Data transfer layer (client-server architecture, STS (Statistics Tracking System))

Speaking at the Serious Games Summit

I'm at the Serious Games Summit in Arlington, VA today and tomorrow. I'll be speaking on a panel later this morning titled "Assessment for Interactive Training Applications":

The realm of Serious Games is currently challenged by the need to establish metrics which show the efficacy of interactive technology as a viable learning medium.

Most games-based learning solutions capture state data, flag met or unmet performance criteria, and "dump" this data at the end of an experiential learning session. More sophisticated solutions provide robust After Action Reviews (AARs) that include the capture of time-stamped voice and video packets and chart the captured data graphically. Other solutions provide the means for spectators to offer instructor and peer feedback in real time. During this panel the participants will explore common assessment requirements across several disciplines.

This panel will explore how assessment is being used in games-based learning solutions for:

  • New York City Firefighters (HAZMAT-HOTZONE)
  • Engineering and Science undergraduates
  • Medical School Students (surgical team coordination)
  • Cross Cultural Communications (America's Army)

Intended Audience and Prerequisites

Educators, scientists, researchers, instructional designers, and game developers will benefit from this frank and open discussion of assessment methodologies being employed in several state-of-the-art experiential learning games across several disciplines. Prerequisite knowledge is not necessary for understanding the content of this session.

What is the ideal takeaway from this presentation?

Attendees will learn:

  • What types of state data are usually captured during the conduct of games-based learning scenarios
  • What constitutes an After Action Review (AAR) for learning games
  • How students are tracked and performance is baselined in Learning Management Systems
  • What needs have been uncovered by currently fielded applications
  • Where we are headed from here

This is my first year at the summit, and so far, there's a great deal of energy here.

October 16, 2005

Has It Really Been Three Weeks?

Yeah, it has. I guess when I drop out of blogging for a while, I really drop out. And I don't even have a good excuse like Joi's "I've been playing lots of World of Warcraft".