Artificial intelligence has moved from the periphery of Department of War strategy to its center. The establishment of the Chief Digital and Artificial Intelligence Office (CDAO), the acceleration of Joint All-Domain Command and Control (JADC2), and a growing portfolio of AI-enabled programs across every military service signal an institutional commitment to AI that is unprecedented in scope and urgency. Yet for all this momentum, the path from AI ambition to deployed AI capability remains fraught with technical, organizational, and regulatory challenges. This whitepaper examines the current state of AI in the Department of War, identifies the trends shaping its trajectory in 2026, and highlights the opportunities for defense contractors positioned to deliver real-world AI capabilities.
The CDAO and the Centralization of AI Strategy
The creation of the Chief Digital and Artificial Intelligence Office in 2022 represented a decisive shift in how the Department of War approaches AI governance. By consolidating the Joint Artificial Intelligence Center (JAIC), the Defense Digital Service, Advana, and the office of the Chief Data Officer under a single organization, the CDAO was designed to eliminate the fragmentation that had hindered AI adoption across the enterprise.
In 2026, the CDAO’s influence continues to expand. The office has established data and AI governance frameworks that provide common standards for AI development, testing, and deployment across the services and combatant commands. Its focus on data readiness — ensuring that the Department of War’s vast data holdings are discoverable, accessible, and usable for AI applications — has driven significant investment in data cataloging, metadata management, and data sharing infrastructure.
The CDAO has also championed the concept of AI as a shared service, promoting reusable AI components, common development platforms, and centralized model repositories that reduce duplication and accelerate deployment. For defense contractors, this centralization creates both opportunities and requirements: opportunities to build components that serve multiple programs, and requirements to align with CDAO standards, interfaces, and governance processes.
JADC2: AI as the Connective Tissue of Multi-Domain Operations
The Joint All-Domain Command and Control vision — connecting sensors, shooters, and decision-makers across air, land, sea, space, and cyberspace into a unified information network — is arguably the most ambitious technology initiative in Department of War history. AI is fundamental to JADC2, not as an add-on but as the enabling technology that makes multi-domain integration feasible at the speed of modern warfare.
Without AI, the volume of data produced by sensors across all domains would overwhelm human decision-makers. AI enables automated sensor fusion, threat identification, course-of-action development, and decision support at machine speed. Computer vision systems process full-motion video and satellite imagery. Natural language processing systems analyze signals intelligence and open-source reporting. Predictive models anticipate adversary actions and logistics requirements. Together, these AI capabilities transform raw multi-domain data into the situational awareness and decision advantage that JADC2 promises.
Each military service is pursuing its own JADC2 contribution — the Army’s Project Convergence, the Air Force’s Advanced Battle Management System (ABMS), and the Navy’s Project Overmatch — creating a broad landscape of opportunities for AI development, integration, and deployment. Contractors who can deliver AI systems that operate across these service-specific architectures while adhering to joint interoperability standards will be positioned for sustained program success.
AI Adoption Challenges: The Gap Between Ambition and Deployment
Despite significant investment and strategic emphasis, the Department of War’s AI adoption continues to face structural challenges that slow the transition from prototype to production.
Data readiness remains the primary bottleneck. AI systems require large volumes of high-quality, labeled data for training, and the Department of War’s data landscape is fragmented across classification levels, organizational boundaries, and legacy systems. Data is often locked in stovepipes, stored in incompatible formats, or lacking the metadata necessary for AI consumption. The CDAO’s data governance initiatives are addressing these issues, but progress is measured in years, not months.
Integration with legacy systems presents persistent challenges. Many of the operational systems that AI must interface with — command and control platforms, intelligence databases, logistics systems — were designed decades ago without consideration for AI integration. Connecting modern AI capabilities to these systems requires middleware, API development, and careful interface engineering that is often more complex and time-consuming than the AI development itself.
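To make the interface-engineering point concrete, the sketch below shows the kind of adapter layer this work typically involves: translating a legacy fixed-width record into the typed, structured input a modern AI pipeline expects. The field names, widths, and sample record are all invented for illustration; they do not correspond to any real defense system.

```python
# Hypothetical sketch: adapting a legacy fixed-width record format
# into structured input for an AI pipeline. All field names, widths,
# and values below are invented for illustration.
from dataclasses import dataclass

# Invented legacy layout: 10-char track ID, 8-char sensor code,
# 6-char latitude, 7-char longitude.
FIELDS = [("track_id", 10), ("sensor", 8), ("lat", 6), ("lon", 7)]

@dataclass
class TrackReport:
    track_id: str
    sensor: str
    lat: float
    lon: float

def parse_legacy_record(record: str) -> TrackReport:
    """Slice a fixed-width legacy record into typed fields."""
    values, offset = {}, 0
    for name, width in FIELDS:
        values[name] = record[offset:offset + width].strip()
        offset += width
    return TrackReport(
        track_id=values["track_id"],
        sensor=values["sensor"],
        lat=float(values["lat"]),
        lon=float(values["lon"]),
    )

report = parse_legacy_record("TRK0000042RADAR-07 33.47 -81.97")
```

In practice this translation layer also has to handle malformed records, classification markings, and schema versions that drifted over decades, which is why the integration work often dwarfs the model development.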
The “valley of death” between successful AI prototypes and fielded AI capabilities continues to claim promising projects. A model that achieves impressive accuracy in a development environment must still survive the rigors of security accreditation, operational testing, user training, and sustainment planning before it delivers value to the warfighter. Many AI initiatives lack the engineering and programmatic support to navigate this transition, resulting in impressive demonstrations that never reach operational users.
Cultural resistance and organizational inertia also slow adoption. Decision-makers accustomed to traditional analytical processes may be reluctant to trust AI-generated insights. Operators may resist changes to established workflows. Overcoming this resistance requires not just better technology but sustained engagement, training, and demonstration of AI value in operationally relevant contexts.
Responsible AI: Ethics and Governance in Defense Applications
The Department of War has adopted a responsible AI framework built around five principles: responsible, equitable, traceable, reliable, and governable. These principles are not abstract aspirations — they carry concrete implications for how AI systems are designed, developed, tested, and deployed.
Traceability requires that AI decision processes be auditable and explainable. For defense applications, this means that when an AI system recommends a course of action or identifies a target, analysts and commanders must be able to understand the basis for that recommendation. Black-box models that produce accurate results without explanation face significant adoption barriers in operational contexts where accountability is paramount.
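One common engineering pattern behind this requirement is an audit-logging wrapper around inference: every prediction is recorded alongside a hash of its input, the model version, and a timestamp, so a recommendation can later be traced back to what the model saw. The sketch below is a minimal illustration of that pattern; the function names and the toy classifier are assumptions, not any specific program's API.

```python
# Hypothetical sketch of an audit-logging wrapper for traceability.
# Names and the toy predictor are invented for illustration.
import datetime
import hashlib
import json

def audited_predict(model_version: str, predict_fn, features: dict) -> dict:
    """Run a prediction and build an audit record capturing what the
    model saw, what it produced, and when."""
    payload = json.dumps(features, sort_keys=True)
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "features": features,
        "output": predict_fn(features),
        # In a real system this record would go to an append-only audit store.
    }

rec = audited_predict(
    "v1.4.2",
    lambda f: {"label": "vehicle", "score": 0.91},  # stand-in model
    {"pixels_mean": 0.37, "band": "IR"},
)
```

Pairing records like these with feature-attribution output (e.g., saliency or SHAP-style scores) is what turns a black-box recommendation into something an analyst can interrogate.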
Reliability demands rigorous testing across the full range of operational conditions, including adversarial scenarios where opponents may attempt to deceive or manipulate AI systems. Testing for bias, robustness, and edge-case behavior is essential and must be integrated into the development lifecycle, not treated as a final quality check.
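A simple form of the robustness testing described above is a perturbation check: verify that a model's output does not flip under small, bounded noise on its inputs. The harness below is a toy sketch using an invented threshold classifier, meant only to show the shape of such a test, not a production evaluation method.

```python
# Hypothetical robustness check: does the prediction stay stable under
# small bounded input perturbations? The classifier is a toy example.
import random

def is_robust(predict, x, epsilon=0.01, trials=100, seed=0) -> bool:
    """Return True if the prediction matches the clean baseline under
    uniform noise of magnitude at most epsilon on every feature."""
    rng = random.Random(seed)
    baseline = predict(x)
    for _ in range(trials):
        perturbed = [v + rng.uniform(-epsilon, epsilon) for v in x]
        if predict(perturbed) != baseline:
            return False
    return True

# Toy threshold classifier used only to exercise the harness.
classify = lambda x: int(sum(x) > 1.0)

assert is_robust(classify, [0.2, 0.3])        # far from the decision boundary
assert not is_robust(classify, [0.5, 0.499])  # near the boundary, flips under noise
```

Real adversarial testing goes much further (gradient-based attacks, patch attacks, distribution-shift suites), but even this basic pattern, run continuously in the development lifecycle rather than as a final gate, catches brittle models early.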
Governability requires clear human oversight mechanisms. AI systems in defense applications must have well-defined boundaries of autonomous action and clear procedures for human intervention. The level of human oversight should be appropriate to the consequences of the AI’s decisions — higher stakes demand more human involvement.
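The oversight-scaling idea can be expressed as a simple policy gate: recommendations are routed to automatic execution, human review, or mandatory human approval depending on their consequence level and the model's confidence. The sketch below is an illustrative assumption about how such a gate might look, not a statement of any program's actual doctrine; the categories and thresholds are invented.

```python
# Hypothetical governability gate: the degree of human oversight scales
# with the consequence of the AI's recommendation. Categories and the
# 0.9 confidence threshold are invented for illustration.
from enum import Enum

class Consequence(Enum):
    LOW = 1     # e.g., routing a report for later review
    MEDIUM = 2  # e.g., flagging an anomaly to an analyst
    HIGH = 3    # e.g., any recommendation affecting use of force

def disposition(consequence: Consequence, confidence: float) -> str:
    """Decide whether a recommendation may proceed automatically,
    needs human review, or requires explicit human approval."""
    if consequence is Consequence.HIGH:
        return "human_approval_required"  # always a human decision
    if consequence is Consequence.MEDIUM or confidence < 0.9:
        return "human_review"
    return "auto_proceed"
```

The value of encoding the policy this way is that the boundaries of autonomous action become testable artifacts rather than tribal knowledge: the gate itself can be reviewed, accredited, and audited.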
For defense contractors, responsible AI is not optional. It is a requirement that shapes technical architecture (explainability layers, audit logging, bias detection), development processes (diverse training data, adversarial testing), and documentation (risk assessments, ethical reviews). Contractors who build responsible AI practices into their engineering culture will navigate the governance landscape more effectively than those who treat it as a compliance exercise.
ATO for AI: The Accreditation Challenge
The Authority to Operate process — the security accreditation framework that governs all IT systems in the Department of War — was designed for traditional software systems and fits AI imperfectly. AI systems introduce novel security considerations that existing ATO frameworks are still evolving to address.
Model integrity — ensuring that a deployed model has not been tampered with or poisoned through adversarial training data — is a concern with no direct analog in traditional software security. Model drift — the degradation of model performance over time as the operational data distribution diverges from training data — requires continuous monitoring capabilities that traditional ATO processes do not contemplate. The use of third-party models and pre-trained weights introduces supply chain risk that must be assessed and mitigated.
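Both concerns lend themselves to concrete monitoring checks. The sketch below illustrates two minimal versions: verifying a checksum of the model weights against the value recorded at accreditation time (integrity), and comparing the mean of live input data against the training distribution (drift). The weight bytes and data are invented, and the mean-shift statistic is a deliberately crude stand-in for richer tests such as Kolmogorov-Smirnov or population stability index.

```python
# Hypothetical sketches of two AI-specific monitoring checks:
# (1) model integrity via a checksum of the weights,
# (2) drift via a simple mean-shift statistic. Data is invented.
import hashlib
import statistics

def weights_fingerprint(weights_bytes: bytes) -> str:
    """Checksum recorded at accreditation time and re-verified at load."""
    return hashlib.sha256(weights_bytes).hexdigest()

def drift_score(train_sample, live_sample) -> float:
    """Mean shift of live data, in units of training standard deviation.
    A crude stand-in for KS tests or population stability index."""
    mu = statistics.mean(train_sample)
    sigma = statistics.stdev(train_sample)
    return abs(statistics.mean(live_sample) - mu) / sigma

# Integrity: any tampering with the weight bytes changes the fingerprint.
approved = weights_fingerprint(b"model-v3 weights")
assert weights_fingerprint(b"model-v3 weights") == approved
assert weights_fingerprint(b"model-v3 weightz") != approved

# Drift: live data matching training scores zero; a shifted distribution
# scores high and should trigger retraining or review.
train = [0.10, 0.20, 0.15, 0.18, 0.12]
assert drift_score(train, train) == 0.0
assert drift_score(train, [0.50, 0.60, 0.55]) > 3.0
```

The point is not the specific statistics but the operational posture: integrity is checked at every model load, and drift is computed continuously against live traffic, which is exactly the monitoring capability traditional ATO processes do not contemplate.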
The Risk Management Framework (RMF) is being adapted to address these AI-specific concerns, but the process is ongoing. In the meantime, defense contractors must work closely with authorizing officials to develop ATO packages that credibly address AI-specific risks. This requires security engineering expertise that understands both the RMF process and the unique characteristics of AI systems — a combination that remains rare in the defense workforce.
The Talent Gap: Defense AI’s Persistent Challenge
The Department of War competes for AI talent against technology companies offering significantly higher compensation, more flexible work environments, and fewer security constraints. Clearing AI engineers and data scientists adds months of delay and eliminates candidates unwilling to submit to the investigation process. The result is a persistent talent gap that constrains AI adoption across the defense enterprise.
Addressing this gap requires creative approaches from both government and industry. Defense contractors can compete by offering meaningful mission impact — the opportunity to work on problems of genuine national significance — combined with competitive compensation, professional development, and quality of life. Geographic flexibility, including positions in lower-cost-of-living areas like Augusta, Georgia, helps compensation packages stretch further while offering engineers a lifestyle that Silicon Valley cannot match.
Upskilling existing defense IT professionals in AI and machine learning is another critical strategy. Software engineers and data analysts with domain expertise and active clearances can be trained in AI techniques more efficiently than AI researchers can be cleared and oriented to defense missions. Investment in training and development is not just a benefit — it is a workforce strategy.
Zapata Technology: Delivering Deployed AI for Defense
At Zapata Technology, we have moved beyond AI theory and into AI practice for the Department of War. Our products and services reflect the hard-won lessons of building, deploying, and operating AI systems in real defense environments.
CASCADE, our AI/ML framework, provides the development and deployment infrastructure that defense AI programs need — from data preparation and model training through deployment on classified networks and edge devices. CASCADE’s architecture addresses the data readiness, security accreditation, and operational deployment challenges that stall so many defense AI initiatives.
LIGHTNER, our object recognition tool, demonstrates what deployed defense AI looks like in practice: optimized models running on tactical hardware in disconnected environments, delivering real-time intelligence to warfighters without cloud dependencies. LIGHTNER embodies the edge AI capabilities that JADC2 and tactical modernization demand.
Our AI and machine learning services extend beyond our products to encompass the full spectrum of defense AI needs: data engineering, model development, MLOps, security accreditation support, and operational sustainment. We understand that deploying AI in defense is not just a technology challenge — it is an integration challenge that spans technology, security, operations, and people.
Looking Ahead: Opportunities in Defense AI for 2026 and Beyond
The trajectory of AI in the Department of War points toward several opportunities for defense contractors prepared to deliver:
Edge AI and tactical deployment will continue to grow as JADC2 matures and services field AI-enabled systems closer to the point of need. Contractors who can optimize models for constrained hardware, operate in disconnected environments, and meet the ruggedization demands of tactical platforms will find expanding opportunities.
Data engineering and data readiness will remain foundational. As the CDAO drives enterprise data governance, the demand for data engineers who can build pipelines, manage metadata, and ensure data quality in classified environments will grow. AI cannot outrun its data foundation.
AI security and accreditation will emerge as a distinct discipline. As AI systems move from prototypes to production, the demand for engineers who can navigate the ATO process for AI, address model-specific security risks, and implement continuous monitoring will increase significantly.
Human-AI teaming will define the next phase of operational AI. Moving beyond AI as a back-office analytical tool to AI as an integrated partner in operational decision-making will require new interface designs, trust frameworks, and operational concepts that blend human judgment with machine speed.
The Department of War’s AI journey is still in its early chapters, but the direction is clear and the investment is real. For defense contractors with the technical depth, security expertise, and operational understanding to deliver AI that works in the real world — not just in demonstrations — the opportunities ahead are substantial and enduring.
Frequently Asked Questions
What is the CDAO and what does it do?
The Chief Digital and Artificial Intelligence Office (CDAO) is the Department of War’s senior organization responsible for accelerating the adoption of data, analytics, and artificial intelligence across the defense enterprise. Established by consolidating the Joint AI Center (JAIC) and other data offices, the CDAO sets AI policy, manages enterprise data governance, and oversees the development and deployment of AI capabilities for the warfighter. Explore how Zapata Technology supports these initiatives through our AI/ML Services.
How does the Department of War approach responsible AI?
The Department of War has adopted a set of AI ethical principles that require AI systems to be responsible, equitable, traceable, reliable, and governable. The Responsible AI (RAI) framework mandates that all AI capabilities undergo testing and evaluation to ensure they operate within defined parameters, that human oversight is maintained in critical decision loops, and that algorithmic bias is identified and mitigated. Zapata Technology builds RAI compliance into every phase of our AI/ML development process.
What are the biggest barriers to AI adoption in defense?
The primary barriers include data quality and access limitations on classified networks, the shortage of AI/ML-skilled personnel with security clearances, lengthy Authority to Operate (ATO) processes for new AI systems, and organizational resistance to changing established workflows. Additionally, deploying AI in air-gapped classified environments presents unique infrastructure challenges that commercial AI solutions are not designed to address. Zapata Technology specializes in overcoming these exact barriers. See our AI/ML Services for details.
