Close Air Support: JTAC Training

As part of a Tactical Air Control Party (TACP), only the Joint Terminal Attack Controller (JTAC) is authorized to say CLEARED HOT on the radio and direct aircraft to deliver their ordnance on a target.

JTACs are relied on to direct and coordinate close air support missions, advise commanders on matters pertaining to air support, and observe and report the results of strikes. Their ability to communicate effectively with pilots and coordinate accurate air strikes can play a huge role in the success of a mission.

Virtual training systems allow JTACs to practice identifying targets, calibrating their locations, requesting air support, and following the highly specialized procedures for communicating with pilots.
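
For readers unfamiliar with those procedures, the request a JTAC radios to aircraft is commonly structured as a standard 9-line CAS brief. The sketch below models it as a simple data structure; the field names and example values are illustrative assumptions and are not drawn from any MAK product.

    // Illustrative only: a simplified 9-line CAS request as a data structure.
    // Field names and values are hypothetical, not part of any VT MAK API.
    #include <iostream>
    #include <string>

    struct CasNineLine {
        std::string ipOrBp;            // 1. Initial point / battle position
        int         headingDegrees;    // 2. Heading from the IP to the target
        double      distanceNm;        // 3. Distance from the IP, nautical miles
        int         targetElevationFt; // 4. Target elevation, feet MSL
        std::string targetDescription; // 5. Target description
        std::string targetLocation;    // 6. Target location (grid or lat/lon)
        std::string markType;          // 7. Type of mark (laser, smoke, none)
        std::string friendlies;        // 8. Location of friendly forces
        std::string egress;            // 9. Egress instructions
    };

    int main() {
        CasNineLine request{"ALPHA", 270, 6.5, 1250,
                            "BMP in the open", "38S MC 12345 67890",
                            "Laser, code 1688", "North 800 m, marked by VS-17 panel",
                            "Egress east to ALPHA"};
        std::cout << "Line 5 (target): " << request.targetDescription << "\n";
        return 0;
    }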

Scroll down to watch a video on how VT MAK’s simulation technology comes together to make up a JTAC simulator.

The MAK Advantage:

The JTAC simulator in this use case is built on MAK’s core technologies: VR-Forces, VR-Vantage IG, VR-Link, the MAK Data Logger, and the MAK RTI.

The tight coupling of system components provides a rich simulation environment for each participant. The JTAC simulation is rendered in the dome using VR-Vantage; the flight simulation takes advantage of the VR-Forces first-person simulation engine; and the instructor/role player station uses VR-Forces CGF to populate the synthetic environment and control the training scenarios.

All these system components share a common terrain database and are connected using VR-Link and the MAK RTI. This gives the system integrator the ability to deploy reliably and cost-effectively while leaving open the opportunity to expand the system into larger and more complex networks of live, virtual, and constructive simulations.
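
As a rough illustration of the kind of state sharing that VR-Link and the MAK RTI provide over DIS or HLA, the sketch below shows the entity-state-plus-dead-reckoning pattern those protocols are built around: publishers send occasional state updates, and receivers extrapolate between them. The types and function here are illustrative assumptions, not the VR-Link API.

    // Conceptual sketch of the entity-state sharing pattern used by DIS/HLA
    // interoperability layers such as VR-Link. Illustrative only.
    #include <cstdio>

    struct EntityState {
        double time;      // simulation time of the last update, seconds
        double pos[3];    // world position, meters
        double vel[3];    // linear velocity, meters/second
    };

    // Receivers dead-reckon between state updates, so publishers only need to
    // send at a low rate or when an entity deviates from its predicted path.
    void deadReckon(const EntityState& last, double now, double out[3]) {
        double dt = now - last.time;
        for (int i = 0; i < 3; ++i)
            out[i] = last.pos[i] + last.vel[i] * dt;
    }

    int main() {
        EntityState aircraft{0.0, {1000.0, 2000.0, 3000.0}, {150.0, 0.0, 0.0}};
        double predicted[3];
        deadReckon(aircraft, 2.5, predicted);   // position 2.5 s after the update
        std::printf("Predicted X: %.1f m\n", predicted[0]);
        return 0;
    }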

Choosing MAK for your simulation infrastructure gives you state-of-the-art technology and the renowned ‘engineer down the hall’ technical support that has been the foundation of MAK’s culture since its beginnings.

Capabilities the core technologies bring to the simulators:

JTAC Dome — Built with VR-Vantage

  • Game/Simulator Quality Graphics and Rendering Techniques

    VR-Vantage uses the most modern image rendering and shader techniques to take advantage of the increasing power of NVIDIA graphics cards. VT MAK's Image Generator has real-time visual effects to rival any modern IG or game engine.

  • Multi-Channel Rendering

    Support for multi-channel rendering is built in. Depending on system design choices for performance and number of computers deployed, VR-Vantage can render multiple channels from a single graphics processing unit (GPU) or can render channels on separate computers using Remote Display Engines attached to a master IG channel. (A simple channel-layout sketch appears at the end of this list.)

  • 3D Content to Represent Players and Interactions

    VR-Vantage is loaded with content, including 3D models of all vehicle types, human characters, weapon systems, and destroyable buildings. Visual effects are provided for weapon engagements, including particle systems for signal smoke, weapon fire, detonations, fire, and smoke.

  • Terrain Agility

    All of MAK’s simulation and visualization products are designed to be terrain agile, meaning they can support most of the terrain strategies commonly used in the modeling, simulation & training industry. Technical details and a list of supported formats are available on the VT MAK website.

  • Environmental Modeling

    VR-Vantage can render scenes of the terrain and environment with the realism of proper lighting, day or night; the effects of illuminated light sources and shadows; atmospheric and water effects, including multiple cloud layers and dynamic oceans; and trees and grass that move naturally with the wind.

  • Sensor Modeling

    VR-Vantage can render scenes in all wavelengths: night vision, infrared, and visible (as needed on a JTAC’s dome display). Sensor zooming, depth of field effects, and reticle overlays model the use of binoculars and laser range finders; the field-of-view geometry behind this kind of zoom is sketched below.
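
On the sensor-zoom point, the following minimal sketch shows the geometric relationship between optical magnification and the field of view an image generator renders for the zoomed channel. It is plain geometry with an illustrative function name, not VR-Vantage code.

    // Relationship between optical magnification and rendered field of view,
    // as used when modeling binoculars or a zoomed sensor. Illustrative only.
    #include <cmath>
    #include <cstdio>

    const double kPi = 3.14159265358979323846;

    // Given an unmagnified horizontal FOV (degrees) and a magnification factor,
    // return the FOV to render for the zoomed view.
    double zoomedFovDeg(double baseFovDeg, double magnification) {
        double halfAngleRad = baseFovDeg * kPi / 360.0;
        return 2.0 * std::atan(std::tan(halfAngleRad) / magnification) * 180.0 / kPi;
    }

    int main() {
        // A 7x magnification (typical of field binoculars) on a 45-degree channel.
        std::printf("Zoomed FOV: %.2f degrees\n", zoomedFovDeg(45.0, 7.0));
        return 0;
    }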
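
Returning to the Multi-Channel Rendering item above, the short sketch below lays out per-channel heading offsets that tile the horizontal coverage of a dome. The channel count and angles are assumptions chosen for illustration; this is not VR-Vantage configuration syntax.

    // Laying out view headings for the channels of a dome display.
    // Illustrative geometry only; not VR-Vantage configuration syntax.
    #include <cstdio>

    int main() {
        const int    channelCount  = 5;      // e.g., five projectors around the dome
        const double totalFovDeg   = 200.0;  // horizontal coverage of the dome
        const double channelFovDeg = totalFovDeg / channelCount;

        // Each channel is rotated so that the slices tile the dome with no gaps.
        for (int i = 0; i < channelCount; ++i) {
            double heading = -totalFovDeg / 2.0 + channelFovDeg * (i + 0.5);
            std::printf("Channel %d: heading offset %+.1f deg, horizontal FOV %.1f deg\n",
                        i, heading, channelFovDeg);
        }
        return 0;
    }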

Flight Simulator — Built with VR-Forces & VR-Vantage

  • Flight Dynamics

    A high-fidelity, physics-based aerodynamics model provides accurate flight control using game-level or professional-level hands-on throttle and stick (HOTAS) controls. (A heavily simplified sketch of one step of such a model appears at the end of this list.)

  • Air to Ground Engagements

    Sensors: a targeting pod (IR camera with gimbal and overlay) and SAR request/response (requires RadarFX Server). Weapons: missiles, guns, and bombs.

  • Navigation

    Standard six-pack navigation displays and multi-function display (MFD) navigation chart.

  • Image Generator

    The flight simulator/role-player station has all the same VR-Vantage-based IG capabilities as the JTAC’s dome display, with the flexibility to configure as needed: single screen (OTW + controls + HUD), dual screen (OTW + HUD, controls), or multi-screen OTW (using remote display engines).

  • Integration with IOS & JTAC

    The flight simulator is integrated with the VR-Forces-based IOS so the instructor can initialize the combat air patrol (CAP) mission appropriately in preparation for the close air support (CAS) mission called by the JTAC. All flights are captured by the MAK Data Logger for after action review (AAR) analysis and debriefing. Radios are provided that communicate over the DIS or HLA simulation infrastructure and are recorded by the MAK Data Logger for AAR. (The record-and-replay pattern behind AAR is sketched below.)
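
The Flight Dynamics item above refers to a physics-based aerodynamics model. As a rough idea of what a single integration step of such a model involves, the heavily simplified sketch below computes lift and drag from dynamic pressure and steps the state forward with explicit Euler integration. The constants and equations are illustrative assumptions, not the VR-Forces flight model.

    // Greatly simplified longitudinal flight-dynamics step: lift and drag from
    // dynamic pressure, then explicit Euler integration. Illustrative only.
    #include <cstdio>

    struct AircraftState {
        double altitude;   // meters
        double speed;      // meters/second, along the flight path
        double climbRate;  // meters/second
    };

    void step(AircraftState& s, double thrustN, double dt) {
        const double mass  = 9000.0;   // kg (assumed)
        const double wingS = 27.9;     // wing area, m^2 (assumed)
        const double rho   = 1.225;    // air density, kg/m^3
        const double cl    = 0.13;     // lift coefficient, roughly trimmed at 200 m/s
        const double cd    = 0.03;     // drag coefficient (assumed)
        const double g     = 9.81;

        double q    = 0.5 * rho * s.speed * s.speed;   // dynamic pressure
        double lift = q * wingS * cl;
        double drag = q * wingS * cd;

        // Decoupled, simplified equations of motion.
        s.speed     += (thrustN - drag) / mass * dt;
        s.climbRate += (lift - mass * g) / mass * dt;
        s.altitude  += s.climbRate * dt;
    }

    int main() {
        AircraftState jet{3000.0, 200.0, 0.0};
        for (int i = 0; i < 100; ++i)          // ten seconds at 10 Hz
            step(jet, 60000.0, 0.1);
        std::printf("After 10 s: speed %.0f m/s, climb rate %.1f m/s\n",
                    jet.speed, jet.climbRate);
        return 0;
    }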
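
After action review rests on recording timestamped network traffic (entity states, weapon fire, radio transmissions) and replaying it with its original timing. The minimal sketch below shows that record-and-replay pattern using hypothetical types; it is not the MAK Data Logger API.

    // Minimal record-and-replay pattern behind after action review (AAR):
    // timestamp each message at capture time, then replay with the original
    // spacing. Illustrative only; not the MAK Data Logger API.
    #include <chrono>
    #include <cstdio>
    #include <string>
    #include <thread>
    #include <vector>

    struct LoggedMessage {
        double      timeSec;   // capture time relative to the start of recording
        std::string payload;   // e.g., an entity-state or radio message
    };

    void replay(const std::vector<LoggedMessage>& log) {
        using namespace std::chrono;
        auto start = steady_clock::now();
        for (const auto& msg : log) {
            // Wait until the message's original offset has elapsed, then deliver it.
            std::this_thread::sleep_until(start + duration<double>(msg.timeSec));
            std::printf("[t=%.1fs] %s\n", msg.timeSec, msg.payload.c_str());
        }
    }

    int main() {
        std::vector<LoggedMessage> log = {
            {0.0, "JTAC: CLEARED HOT"},
            {1.5, "Aircraft: IN FROM THE EAST"},
            {4.0, "Weapon release"},
        };
        replay(log);
        return 0;
    }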

Instructor Operator Station — Built with VR-Forces CGF

VR-Forces is a powerful, scalable, flexible, and easy-to-use computer generated forces (CGF) simulation system used as the basis of Threat Generators and Instructor Operator Stations (IOS).

  • Scenario Definition

    VR-Forces comes with a rich set of capabilities that enable instructors to create, execute, and distribute simulation scenarios. Using its intuitive interfaces, they can build scenarios that scale from just a few individuals in close quarters to large multi-echelon simulations covering the entire theater of operations. The user interface can be used as-is or customized for a training-specific look and feel.

  • Training Exercise Management

    All of the entities defined by a VR-Forces scenario can be interactively manipulated in real time while the training is ongoing. Instructors can choose from:

    Direct control, where new entities can be created on the fly, existing entities moved into position, and their status, rules of engagement, or tasking changed at will. Some call the instructor using this method a “puckster”.

    Artificial Intelligence (AI) control, where entities are given tasks to execute missions, like close air support (CAS), suppressive fire, or attack with guns. While entities are on their mission, reactive tasks deal with contingencies and the CGF AI plays out the mission. In games, these are sometimes called “non-player characters”. (A toy sketch of this pattern appears after this list.)

    First person control, where the instructor takes interactive control of a vehicle or human character and moves it around and engages with other entities using input devices.

  • 2D & 3D Viewing Control

    When creating training scenarios, the VR-Forces GUI allows instructors to quickly switch between 2D and 3D views.

    The 2D view provides a dynamic map display of the simulated world and is the most productive for laying down entities and tactical graphics that help to control the AI of those entities.

    The 3D views provide intuitive, immersive situational awareness and allow precise placement of simulation objects on the terrain. Users can quickly and easily switch between display modes or open a secondary window and use a different mode in each one.
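
As referenced under Training Exercise Management, AI-controlled entities execute an assigned mission task while reactive tasks handle contingencies. The toy sketch below shows that priority pattern; the task names and the pre-emption rule are illustrative assumptions, not VR-Forces code.

    // Toy sketch of the AI-control pattern described above: an entity runs its
    // assigned mission task, but a reactive task pre-empts it when a
    // contingency (here, being fired upon) occurs. Not VR-Forces code.
    #include <cstdio>

    enum class Task { HoldAtCap, RunInForCas, ReactToThreat };

    struct Entity {
        Task task = Task::HoldAtCap;
        bool underFire = false;
    };

    void tick(Entity& e, bool casRequested) {
        // Reactive task: contingencies take priority over the scripted mission.
        if (e.underFire) {
            e.task = Task::ReactToThreat;
        } else if (casRequested) {
            e.task = Task::RunInForCas;   // mission task assigned by the instructor
        } else {
            e.task = Task::HoldAtCap;
        }
    }

    int main() {
        Entity strikeAircraft;
        tick(strikeAircraft, /*casRequested=*/true);
        std::printf("Task: %d\n", static_cast<int>(strikeAircraft.task));
        strikeAircraft.underFire = true;
        tick(strikeAircraft, true);
        std::printf("Task after contingency: %d\n", static_cast<int>(strikeAircraft.task));
        return 0;
    }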

Want to learn about trade-offs in fidelity? See The Tech-Savvy Guide to Virtual Simulation

Interested in seeing a demonstration?