The QA Policy Document Analyzer on the NATO QA Hub has been upgraded: its underlying AI engine has moved from GPT-4 to the newer GPT-5.1. This is a behind-the-scenes improvement. Your workflow stays the same (upload → analyze → results), but the quality and consistency of the assessments should improve.
What you should notice
With GPT-5.1, you can expect:
More consistent criterion-by-criterion judgments, with fewer “borderline” calls.
Clearer explanations of why a criterion is assessed as met, partially met, or not met.
Better handling of complex QA-policy language, cross-references, and NATO-style structures.
Faster performance in many cases, thanks to GPT-5.1’s adaptive reasoning optimizations.
What does not change
Access remains restricted to users with the ETF Profile Manager role.
The tool still provides an initial, high-level assessment (it does not replace expert review).
Cost-control measures remain in place, including the current usage-limit policy (and the option to request out-of-cycle access, with justification, via the QA Hub admin).
Why this matters
GPT-5.1 is designed as a flagship model for demanding, structured tasks and is now widely available in OpenAI’s model lineup. Upgrading the analyzer helps us deliver more reliable early insights for QA Managers and QA Teams of Experts, while supporting better preparation ahead of key milestones such as the Institutional (Re-)Accreditation process.
As always, feedback is welcome, especially examples where the tool is overly strict, overly permissive, or misses content that is clearly present.