LIGHTNER is an open-source object recognition microservice that rapidly identifies objects using detection models trained on labeled datasets supplied by the user. Purpose-built for defense ISR (Intelligence, Surveillance, and Reconnaissance) applications, LIGHTNER brings real-time computer vision capabilities to the battlefield.
How LIGHTNER Works
LIGHTNER applies a single neural network to full-motion video frames. The network divides each frame into regions, predicts bounding boxes and class probabilities for each region, and weights each bounding box by its predicted probability. Because the network evaluates the entire frame at once, every prediction is informed by global image context.
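The decoding step described above can be illustrated with a minimal sketch. The grid size, number of boxes per cell, class list, and output tensor layout below are illustrative assumptions for explanation only, not LIGHTNER's actual internals.

```python
import numpy as np

# Illustrative assumptions (not LIGHTNER internals): a 7x7 grid, 2 boxes per
# cell, 3 object classes, and a raw output laid out per cell as
# [x, y, w, h, confidence] for each box followed by class probabilities.
S, B, C = 7, 2, 3
CLASS_NAMES = ["person", "vehicle", "equipment"]

def decode_detections(raw, conf_threshold=0.5):
    """Turn one frame's raw grid predictions into weighted detections.

    raw: array of shape (S, S, B * 5 + C)
    Returns a list of dicts with a box, class label, and weighted score.
    """
    detections = []
    for row in range(S):
        for col in range(S):
            cell = raw[row, col]
            class_probs = cell[B * 5:]
            for b in range(B):
                x, y, w, h, box_conf = cell[b * 5:(b + 1) * 5]
                # Each box score is weighted by the predicted probability.
                scores = box_conf * class_probs
                best = int(np.argmax(scores))
                if scores[best] >= conf_threshold:
                    detections.append({
                        "bbox": [float(x), float(y), float(w), float(h)],
                        "class": CLASS_NAMES[best],
                        "confidence": float(scores[best]),
                    })
    return detections

# Example: decode random predictions standing in for one frame's output.
frame_output = np.random.rand(S, S, B * 5 + C)
print(decode_detections(frame_output))
```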
Analytical data is provided per frame, allowing users to act on intelligence without continuously monitoring the video. Frames can also be routed to entity extraction or facial recognition pipelines for further analysis.
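Because results arrive per frame, that routing can be event-driven. The sketch below forwards a frame's detections (in the shape produced by the decoding sketch above) to a downstream facial recognition service only when a person is detected; the endpoint URL and payload fields are hypothetical placeholders, not part of LIGHTNER's documented interface.

```python
import json
import urllib.request

# Hypothetical downstream endpoint; LIGHTNER does not prescribe this URL.
FACIAL_RECOGNITION_URL = "http://downstream.example/facial-recognition"

def route_frame(frame_id, detections):
    """Forward one frame's detections downstream only when they warrant it."""
    if not any(d["class"] == "person" for d in detections):
        return  # nothing actionable in this frame; no analyst attention needed
    payload = json.dumps({"frame_id": frame_id, "detections": detections}).encode()
    request = urllib.request.Request(
        FACIAL_RECOGNITION_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # fire-and-forget for the sketch
```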
Key Capabilities
Real-Time Object Detection: Identifies and classifies objects in live video feeds with high accuracy and low latency
Multi-Stream Processing: Processes multiple video streams simultaneously as a bolt-on microservice
Edge Deployment: Runs on low-power, unattended IoT devices for on-sensor processing at the tactical edge
ISR Integration: Compatible with existing ISR video systems and sensors used across DoD
Flexible Output: Emits detection results as JSON for data transport, or stores them in a graph database for advanced analytics (see the sketch after this list)
Custom Training: Users can train detection models on custom labeled datasets for mission-specific object recognition
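The graph-database option under Flexible Output might look like the following. LIGHTNER does not mandate a particular graph store, so Neo4j, the Bolt connection details, and the Frame/Detection node model here are illustrative assumptions.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Illustrative connection details; substitute those of your deployment.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_detections(frame_id, detections):
    """Write one frame's detections as (:Frame)-[:CONTAINS]->(:Detection) nodes."""
    with driver.session() as session:
        for d in detections:
            session.run(
                "MERGE (f:Frame {id: $frame_id}) "
                "CREATE (n:Detection {class: $cls, confidence: $conf, bbox: $bbox}) "
                "CREATE (f)-[:CONTAINS]->(n)",
                frame_id=frame_id,
                cls=d["class"],
                conf=d["confidence"],
                bbox=d["bbox"],
            )
```

Storing detections as nodes linked to their source frames lets analysts run graph queries across time and space, which is what enables the pattern-of-life and anomaly-detection use cases described below.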
Defense Use Cases
ISR Video Analysis: Automated screening of surveillance video feeds to detect persons, vehicles, equipment, and other objects of interest
Force Protection: Real-time monitoring of perimeters and critical infrastructure for unauthorized activity
Battle Damage Assessment: Automated analysis of post-strike imagery to assess operational effectiveness
Pattern of Life Analysis: Long-duration monitoring to establish baseline activity patterns and detect anomalies
Architecture & Deployment
LIGHTNER is designed as a lightweight, containerized microservice that can be deployed at any echelon — from tactical edge devices to enterprise data centers. The modular architecture allows LIGHTNER to integrate with Zapata Technology’s broader product ecosystem, including CASCADE for AI-powered decision support and ZIngest for data pipeline management.
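Because LIGHTNER runs as a containerized microservice, integration is typically a matter of posting imagery to its service endpoint and consuming the JSON response. The host, port, and /detect route in this sketch are hypothetical placeholders rather than a documented LIGHTNER API; substitute the values of your own deployment.

```python
import json
import urllib.request

# Hypothetical service location and route; adjust for your deployment.
LIGHTNER_URL = "http://lightner.local:8080/detect"

def detect(image_bytes):
    """POST one image to a LIGHTNER instance and return parsed JSON detections."""
    request = urllib.request.Request(
        LIGHTNER_URL,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Placeholder frame file for illustration.
with open("frame_0001.jpg", "rb") as f:
    print(detect(f.read()))
```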
Technical Specifications
Architecture: Open-source microservice, containerized for flexible deployment
Neural Network: Convolutional neural network with region-based detection
Input: Full-motion video, static imagery, multi-spectral sensor data
Output: JSON-formatted detection results with bounding boxes, classifications, and confidence scores (an illustrative payload appears below)
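The following is a representative detection payload and how a consumer might parse it; the field names are illustrative assumptions, and the actual schema may differ.

```python
import json

# Illustrative payload only; LIGHTNER's actual field names may differ.
sample_result = json.loads("""
{
  "frame_id": 1024,
  "detections": [
    {"class": "vehicle", "confidence": 0.91, "bbox": [412, 238, 96, 54]},
    {"class": "person",  "confidence": 0.84, "bbox": [130, 300, 32, 88]}
  ]
}
""")

for d in sample_result["detections"]:
    print(f'{d["class"]}: {d["confidence"]:.2f} at {d["bbox"]}')
```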