Improved Performance Research Integration Tool

From Wikipedia, the free encyclopedia
IMPRINT
Developer(s): Alion Science and Technology, Army Research Laboratory, U.S. Army CCDC Data and Analysis Center
Stable release: 4.6.60.0
Written in: C# (.NET Framework)
Operating system: Microsoft Windows
Type: Discrete-event simulation
Website: www.microsaintsharp.com/home/tools

The Improved Performance Research Integration Tool (IMPRINT) is a suite of software tools developed by the Human Research and Engineering Directorate (HRED) of the United States Army Research Laboratory. IMPRINT is designed to analyze the interactions between soldiers, systems, and missions, aiding in the evaluation of soldier performance across various scenarios. This evaluation supports the optimization of military systems and training programs.[1]

It is developed using the .NET Framework. IMPRINT allows users to create discrete-event simulations as visual task networks with logic defined using the C# programming language. IMPRINT is primarily used by the United States Department of Defense to simulate the cognitive workload of its personnel when interacting with new and existing technology to determine manpower requirements and evaluate human performance.[2]

IMPRINT allows users to develop and run stochastic models of operator and team performance. It includes three modules: Operations, Maintenance, and Forces.

In the Operations module, users develop networks of discrete events (tasks) that are performed to achieve mission outcomes. These tasks are associated with operator workload, which the user assigns with guidance from IMPRINT. Once a model has been developed, it can be run to predict the probability of mission success (e.g., accomplishment of certain objectives or completion of tasks within a given time frame), time to complete the mission, workload experienced by the operators, and the sequence and timeline of tasks throughout the mission.

Using the Maintenance module, users can predict maintenance manpower requirements, manning requirements, and operational readiness, among other important maintenance drivers. Maintenance models consist of scenarios, segments, systems, subsystems, components, and repair tasks. The underlying built-in stochastic maintenance model simulates the flow of systems into segments of a scenario and the performance of maintenance actions to estimate maintenance man-hours for defined systems.

The Forces module allows users to predict comprehensive, multilevel manpower requirements for large organizations composed of a diverse set of positions and roles. Each force unit consists of a set of activities (planned and unplanned) and jobs. This information, when modeled, helps predict the manpower needed to perform the routine and unplanned work done by a force unit.

IMPRINT helps users to assess the integration of personnel and system performance throughout the system lifecycle—from concept and design to field testing and system upgrades. In addition, IMPRINT can help predict the effects of training or personnel factors (e.g., as defined by Military Occupational Specialty) on human performance and mission success. IMPRINT also has built-in functions to predict the effects of stressors (e.g., heat, cold, vibration, fatigue, use of protective clothing) on operator performance (task completion time, task accuracy).

The IMPRINT Operations module uses a task network, a series of functions that decompose into tasks, to create human performance models.[3] Functions and tasks in IMPRINT models usually represent atomic units of larger human or system behaviors. One of IMPRINT's main features is its ability to model human workload. Users can specify visual, auditory, cognitive, and psychomotor workload levels for individual tasks which can measure overall workload for humans in the system and influence task performance.[4][5]

History

The IMPRINT tool grew out of manpower, personnel, and training (MPT) concerns identified in the mid-1970s by the U.S. Air Force, Navy, and Army. The U.S. Navy first developed the HARDMAN Comparability Methodology (HCM), with HARDMAN being a portmanteau of hardware and manpower. The Army then tailored the manual HCM, which became known as HARDMAN I, for application to a broad range of weapon systems and later developed an automated version, HARDMAN II.[a] HARDMAN II.2 was first released by the Army Research Institute (ARI) in 1985. It required a VAX-11 computer to host its suite of analytical processes. An upgraded version was released in 1990.

In HARDMAN I and II, there was no direct link between MPT and performance. To directly remedy this shortcoming, the U.S. Army began the development of a set of software analysis modules in the mid-1980s.[6] This set of modules was called HARDMAN III, and although the name was the same, it used a fundamentally different approach for addressing MPT concerns than previous methods by providing an explicit link between MPT variables and soldier–system performance.[7]

HARDMAN III was a major development effort of the Army Research Institute's (ARI) System Research Laboratory. The contract that supported the work was let in a three-phase development process.[8] HARDMAN III was government-owned and consisted of a set of automated aids to assist analysts in conducting MANPRINT analyses. As PC DOS-based software, the HARDMAN III aids provided a means for estimating MPT constraints and requirements for new weapon systems very early in the acquisition process. The DOS environment imposed several limitations on the HARDMAN III tool set, the most significant being the 640K RAM limit; the original HARDMAN III tools were designed so that individual pieces of an analysis fit within that memory. These constraints restricted models to 400 operations tasks and 500 maintenance tasks.

The nine modules in HARDMAN III were:

  1. Manpower-based system evaluation aid (MAN-SEVAL), which was used to assess human workload
  2. Personnel-based system evaluation aid (PER-SEVAL), which was used to assess crew performance in terms of time and accuracy
  3. System performance and RAM criteria estimation aid (SPARC), which helped Army combat developers identify comprehensive and unambiguous system performance requirements needed to accomplish various missions
  4. Manpower capabilities analysis aid (MANCAP), which helped users estimate maintenance man-hour requirements at the system unit level
  5. Human operator simulator (HOS), which was used to develop improved estimates for task time and accuracy
  6. Manpower constraints aid (M-CON), which identified the maximum crew size for operators and maintainers and the maximum Direct Productive Annual Maintenance Manhours (DPAMMH)
  7. Personnel constraints aid (P-CON), which estimated the significant personnel characteristics that described and limited the capabilities of the probable soldier population from which the new system's operators and maintainers would come
  8. Training constraints aid (T-CON), which was designed to be used by the government to identify the types of training programs that were likely to be available to support new systems
  9. Force analysis aid (FORCE), which provided an Army-wide assessment of manpower requirements and constraints based on estimated numbers and types of people (such as ASVAB score and MOS)

IMPRINT, originally named the Integrated MANPRINT Tools, was first released in 1995 as a Windows application that merged the functionality of the nine HARDMAN III tools into a single program. In 1997 it was renamed the Improved Performance Research Integration Tool; the name changed, but the IMPRINT acronym remained the same. Between 1995 and 2006 several enhancements were made and new releases (versions 2 through 6) were made available. IMPRINT Pro, introduced in 2007, featured a new interface design, complete integration with the Micro Saint Sharp simulation engine, and enhanced analytical capabilities, and it moved IMPRINT from being an Army tool to a tri-service tool. IMPRINT has continued to evolve, with new enhancements continually added and new releases made freely available to the user community. IMPRINT has over 800 users supporting the Army, Navy, Air Force, Marine Corps, NASA, DHS, DoT, joint organizations, and other organizations across the country.

Discrete event simulation in IMPRINT

Simulations, or Missions as IMPRINT calls them, contain a task network called a Network Diagram. The network diagram contains a series of tasks connected by paths that determine control flow. System objects called entities flow through the network to create a simulation. IMPRINT also includes lower-level features such as global variables and subroutines called macros.[9]

Tasks

The task node is the primary element driving the simulation's outcome. Task nodes simulate system behavior through programmer-specified effects, durations, failure rates, and pathing. Task effects are C# expressions that manipulate variables and data structures when a task is invoked. Task duration can be specified as a fixed value, through a probability distribution, or with a C# expression, and task success can be specified in the same ways. Task success influences the effects of the task node and the pathing of the entity; consequences of failure include task repetition, task change, and mission failure, among other options. Control flow and pathing can also be specified by the programmer. IMPRINT provides a series of other nodes with special functionality:

  • Start Node: Emits the first entity in the model, signifying the start of a simulation execution.[9]
  • End Node: Receives an entity that signifies the end of the simulation.[9]
  • Goal Node: Emits an entity when a specified goal is achieved, activating a secondary task network.[9]
  • Workload Monitor: A visual node, not connected to the task network, that displays the workload value and number of active tasks associated with a specific Warfighter.[9]
  • Function Node: Creates a subnetwork diagram, which allows users to modularize complex networks into specific tasks.[9]
  • Scheduled Function Node: A Function node that allows the user to specify clock times for the start and end of the execution of the subnetwork's tasks.[9]
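
Task-node behavior (sampled durations, success probabilities, state-changing effects) can be sketched as follows. IMPRINT expresses task logic in C#; this Python sketch is illustrative only, and the names (`Task`, `duration_dist`, `success_prob`, `effect`) are hypothetical, not IMPRINT's API.

```python
import random

# Illustrative sketch only: IMPRINT task logic is written in C#, and these
# names (Task, duration_dist, success_prob, effect) are hypothetical.
class Task:
    def __init__(self, name, duration_dist, success_prob=1.0, effect=None):
        self.name = name
        self.duration_dist = duration_dist  # callable returning a sampled duration
        self.success_prob = success_prob    # probability the task succeeds
        self.effect = effect                # optional callback that mutates shared state

    def execute(self, state):
        duration = self.duration_dist()             # sample the task's duration
        success = random.random() < self.success_prob
        if self.effect:
            self.effect(state)                      # apply the task's effect
        return duration, success

# A task with a normally distributed duration that fails 10% of the time
boot = Task("boot radio", lambda: random.gauss(5.0, 1.0), success_prob=0.9)
duration, success = boot.execute({})
```

In the real tool the duration distribution, success criterion, and effect are entered per task node through the IMPRINT interface rather than coded by hand.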

Entities

Entities are dynamic objects that arrive into the system and move through the task network. Entities flow from one task to the next based on each task's path logic. When an entity enters a task, the task's effects are triggered; when the task concludes, the entity moves on to the next task. One entity is generated by default at the beginning of the simulation, and more can be generated at any point based on programmer-specified logic. When all entities have reached the end node or been destroyed, the simulation concludes.[9]
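
A minimal sketch of this entity lifecycle, assuming a hypothetical linear chain of tasks (real IMPRINT networks branch and use C# logic):

```python
# Hypothetical entity-flow sketch, not IMPRINT's API: an entity enters each
# task in turn (triggering its effects), and the run ends when the entity
# reaches the end of the chain.
def run_entity(tasks):
    clock = 0.0
    for task in tasks:             # entering a task triggers its effects
        clock += task["duration"]  # task concludes; entity moves on
    return clock                   # simulated time at the end node

mission = [{"name": "detect", "duration": 2.0},
           {"name": "identify", "duration": 3.5},
           {"name": "engage", "duration": 1.5}]
print(run_entity(mission))  # 7.0
```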

Events

Events are occurrences that happen at an instant of simulated time within IMPRINT and change the global state of the system. An event can be the arrival or departure of an entity, the completion of a task, or some other occurrence. Events are stored in a master event log that records every scheduled event and the simulated time at which it occurs. Due to the stochastic nature of discrete-event simulation, an event will often trigger the generation of a random variate to determine the next time that same event will occur; thus, as events occur in the simulation, the event log is updated.[9]
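
The master event log behaves like the future-event list of any discrete-event engine. A minimal sketch (names are hypothetical, not IMPRINT's internals):

```python
import heapq
import random

# Future-event-list sketch: events are (time, name) pairs kept in a heap;
# processing an event may schedule its next occurrence at a randomly
# sampled later time, so the log grows as the simulation runs.
random.seed(42)
events = [(0.0, "entity arrival")]
log = []

while events:
    clock, name = heapq.heappop(events)
    if clock >= 20.0:  # stop condition for the sketch
        break
    log.append((clock, name))
    if name == "entity arrival":
        # random variate determining the next occurrence of the same event
        heapq.heappush(events, (clock + random.expovariate(0.5), "entity arrival"))
```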

Control flow

Once a task concludes, the invoking entity moves to another node which is directly connected to the current node in the task network. Nodes can connect to any number of other tasks, so IMPRINT provides a number of pathing options to determine the task to which the entity moves.[9]

  • Probabilistic pathing allows the programmer to specify the percentage chance that an entity moves to each adjacent node by inputting exact probabilities, summing to one hundred percent, for each node.[9]
  • Tactical pathing allows the programmer to use C# predicates to determine the pathing of an entity to each adjacent node. If more than one expression evaluates to true, the entity will follow the first path with a true expression.[9]
  • Multiple pathing behaves exactly like tactical pathing but will path entities to any adjacent node with an expression evaluating to true.[9]
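
The three pathing rules can be sketched as follows (helper names are hypothetical; IMPRINT expresses the real conditions as C# expressions):

```python
import random

# Probabilistic: pick one successor using programmer-supplied probabilities.
def probabilistic(paths, probs):
    return random.choices(paths, weights=probs, k=1)[0]

# Tactical: the entity follows the first successor whose predicate is true.
def tactical(paths, predicates, state):
    for path, pred in zip(paths, predicates):
        if pred(state):
            return path
    return None

# Multiple: the entity is sent down every successor whose predicate is true.
def multiple(paths, predicates, state):
    return [p for p, pred in zip(paths, predicates) if pred(state)]

# Tactical routing: both predicates hold, but only the first path is taken.
chosen = tactical(["reroute", "continue"],
                  [lambda s: s["fuel"] > 10, lambda s: True],
                  {"fuel": 50})
print(chosen)  # reroute
```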

Variables and macros

IMPRINT has a number of global variables used by the system throughout a simulation. IMPRINT provides the public global variable Clock which tracks the simulation's current time. IMPRINT also has private variables such as operator workload values. IMPRINT allows the modeler to create custom global variables that can be accessed and modified in any task node. Variables can be of any type native to C#, but the software provides a list of suggested variable types including C# primitive data types and basic data structures. IMPRINT also provides the programmer with the functionality to create globally accessible subroutines called macros. Macros work as C# functions and can specify parameters, manipulate data, and return data.[9]
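
A sketch of the globals-and-macros pattern described above. The names here are invented for illustration; in IMPRINT these would be C# variables and functions shared across task nodes.

```python
# Hypothetical globals: Clock mirrors IMPRINT's built-in simulation time
# variable; ammo_remaining stands in for a user-defined global readable
# and writable from any task node.
Clock = 0.0
ammo_remaining = 30

def fire_burst(rounds):
    """A 'macro': a globally accessible subroutine with parameters and a return value."""
    global ammo_remaining
    ammo_remaining -= rounds
    return ammo_remaining

print(fire_burst(5))  # 25
```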

Human performance modeling

IMPRINT's workload management features allow users to model realistic operator behavior under different work overload conditions.[5] IMPRINT allows users to specify Warfighters, which represent the human operators in the modeled system. Each task in IMPRINT is associated with at least one Warfighter, and Warfighters can be assigned to any number of tasks, including tasks that execute concurrently.[5] IMPRINT tasks can be assigned VACP workload values.[4] The VACP method allows modelers to identify the visual, auditory, cognitive, and psychomotor workload of each IMPRINT task. In each task, each resource can be given a workload value between 0 and 7, with 0 being the lowest possible workload and 7 the highest possible workload for that resource. The VACP scale for each resource provides verbal anchors for certain scale values; for instance, a visual workload of 0.0 corresponds to "no visual activity", while a visual workload of 7.0 corresponds to continuous visual scanning, searching, and monitoring.[10] When a Warfighter executes a task, their workload is increased by the VACP value assigned to that task. An IMPRINT plugin module was proposed in 2013 to improve the cognitive workload estimation within IMPRINT and make the overall calculation less linear.[11] IMPRINT's custom reporting feature allows modelers to view the workload of the Warfighters in their models over time, and workload monitor nodes allow modelers to view the workload of a specific Warfighter as the simulation executes.[9]
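
The VACP bookkeeping amounts to summing per-channel demands over an operator's concurrently active tasks. A sketch with invented task values (the channel names come from the VACP method; everything else is illustrative):

```python
# VACP sketch: each task carries visual/auditory/cognitive/psychomotor
# demand on the 0-7 scale; an operator's momentary workload per channel
# is the sum over concurrently executing tasks. Task values are invented.
VACP_CHANNELS = ("visual", "auditory", "cognitive", "psychomotor")

def channel_workload(active_tasks):
    return {ch: sum(task[ch] for task in active_tasks) for ch in VACP_CHANNELS}

drive = {"visual": 5.0, "auditory": 1.0, "cognitive": 4.6, "psychomotor": 2.6}
radio = {"visual": 0.0, "auditory": 4.3, "cognitive": 3.0, "psychomotor": 1.0}

load = channel_workload([drive, radio])
print(load["visual"])  # 5.0
```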

Research

IMPRINT has been used by scientists at the Army Research Lab to study Unmanned Aerial Systems,[12] workload of warfighter crews,[13][14] and human-robot interaction.[15] The United States Air Force and Air Force Institute of Technology have used IMPRINT to study automated systems,[16][17] human systems integration,[18] and adaptive automation[19] among other things. The Air Force Institute of Technology in particular is using IMPRINT to research the prediction of operator performance, mental workload, situational awareness, trust, and fatigue in complex systems.[20]

Notes

  1. ^ HARDMAN II was formerly called MIST (Man Integrated Systems Technology).

References

  1. ^ Mitchell, Diane K. (2003-09-01). Advanced Improved Performance Research Integration Tool (IMPRINT) Vetronics Technology Test Bed Model Development (Report). Fort Belvoir, VA: Defense Technical Information Center. doi:10.21236/ada417350 (inactive 2024-11-12).{{cite report}}: CS1 maint: DOI inactive as of November 2024 (link)
  2. ^ Rusnock, Christina F; Geiger, Christopher D (2013). Using Discrete-Event Simulation for Cognitive Workload Modeling and System Evaluation. IIE Annual Conference. Proceedings. Norcross. pp. 2485–2494. ProQuest 1471959351.
  3. ^ Laughery, Ron (1999). "Using discrete-event simulation to model human performance in complex systems". Proceedings of the 31st Conference on Winter Simulation (WSC '99): Simulation: A Bridge to the Future. Vol. 1. pp. 815–820. doi:10.1145/324138.324506. ISBN 978-0-7803-5780-8. S2CID 18163468.
  4. ^ a b Mitchell, Diane K. (September 2003). Advanced Improved Performance Research Integration Tool (IMPRINT) Vetronics Technology Test Bed Model Development. Army Research Laboratory. DTIC ADA417350.
  5. ^ a b c IMPRINT PRO user guide Vol 1. http://www.arl.army.mil/www/pages/446/IMPRINTPro_vol1.pdf
  6. ^ Kaplan, J. D. (1991). Synthesizing the effects of manpower, personnel, training and human engineering. In E. Boyle, J. Ianni, J. Easterly, S. Harper, & M. Korna (Eds.), Human-centered technology for maintainability: Workshop proceedings (AL-TP-1991-0010) (pp. 273–283). Wright-Patterson AFB, OH: Armstrong Laboratory.
  7. ^ Allender, L., Lockett, J., Headley, D., Promisel, D., Kelley, T., Salvi, L., Richer, C., Mitchell, D., & Feng, T. (December 1994). HARDMAN III and IMPRINT Verification, Validation, and Accreditation Report. Prepared for the US Army Research Laboratory, Human Research & Engineering Directorate.
  8. ^ Adkins, R., & Dahl (Archer), S. G. (July 1993). Final Report for HARDMAN III, Version 4.0 (Report E-482U). Prepared for the US Army Research Laboratory.
  9. ^ a b c d e f g h i j k l m n o IMPRINT PRO user guide Vol 2. http://www.arl.army.mil/www/pages/446/IMPRINTPro_vol2.pdf
  10. ^ Mitchell, D. K. (2000). Mental Workload and ARL Workload Modeling Tools (ARL-TN-161). Aberdeen Proving Ground.
  11. ^ Cassenti, Daniel N.; Kelley, Troy D.; Carlson, Richard Alan (2013). Differences in performance with changing mental workload as the basis for an IMPRINT plug-in proposal. 22nd Annual Conference on Behavior Representation in Modeling and Simulation, BRiMS 2013 - Co-located with the International Conference on Cognitive Modeling. pp. 24–31. ISBN 978-162748470-1.
  12. ^ Hunn, Bruce P.; Heuckeroth, Otto H. (February 2006). A Shadow Unmanned Aerial Vehicle (UAV) Improved Performance Research Integration Tool (IMPRINT) Model Supporting Future Combat Systems. Army Research Laboratory. DTIC ADA443567.
  13. ^ Salvi, Lucia (2001). Development of Improved Performance Research Integration Tool (IMPRINT) Performance Degradation Factors for the Air Warrior Program. Army Research Laboratory. DTIC ADA387840.
  14. ^ Mitchell, Diane K. (September 2009). Workload Analysis of the Crew of the Abrams V2 SEP: Phase I Baseline IMPRINT Model. Army Research Laboratory. DTIC ADA508882.
  15. ^ Pomranky, R. a. (2006). Human Robotics Interaction Army Technology Objective Raven Small Unmanned Aerial Vehicle Task Analysis and Modeling. ARL-TR-3717.
  16. ^ Colombi, John M.; Miller, Michael E.; Schneider, Michael; McGrogan, Major Jason; Long, Colonel David S.; Plaga, John (December 2012). "Predictive mental workload modeling for semiautonomous system design: Implications for systems of systems". Systems Engineering. 15 (4): 448–460. doi:10.1002/sys.21210. S2CID 14094560.
  17. ^ Storey, Alice A.; Ramírez, José Miguel; Quiroz, Daniel; Burley, David V.; Addison, David J.; Walter, Richard; Anderson, Atholl J.; Hunt, Terry L.; Athens, J. Stephen; Huynen, Leon; Matisoo-Smith, Elizabeth A. (19 June 2007). "Radiocarbon and DNA evidence for a pre-Columbian introduction of Polynesian chickens to Chile". Proceedings of the National Academy of Sciences. 104 (25): 10335–10339. Bibcode:2007PNAS..10410335S. doi:10.1073/pnas.0703993104. PMC 1965514. PMID 17556540.
  18. ^ Miller, Michael; Colombi, John; Tvaryanas, Anthony (2013). "Human systems integration". Handbook of Industrial and Systems Engineering, Second Edition. Industrial Innovation. Vol. 20131247. pp. 197–216. doi:10.1201/b15964-15 (inactive 2024-11-12). ISBN 978-1-4665-1504-8.{{cite book}}: CS1 maint: DOI inactive as of November 2024 (link)
  19. ^ Boeke, Danielle K; Miller, Michael E; Rusnock, Christina F; Borghetti, Brett J (2015). Exploring Individualized Objective Workload Prediction with Feedback for Adaptive Automation. IIE Annual Conference. Proceedings. Norcross. pp. 1437–1446. ProQuest 1791990382.
  20. ^ Rusnock, Christina F.; Boubin, Jayson G.; Giametta, Joseph J.; Goodman, Tyler J.; Hillesheim, Anthony J.; Kim, Sungbin; Meyer, David R.; Watson, Michael E. (2016). "The Role of Simulation in Designing Human-Automation Systems". Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience. Lecture Notes in Computer Science. Vol. 9744. pp. 361–370. doi:10.1007/978-3-319-39952-2_35. ISBN 978-3-319-39951-5.