PQC Support vs Performance: Why Vendor Claims Need Scrutiny
Introduction
As post-quantum cryptography moves from standards development into early adoption, an increasing number of vendors are advertising PQC support across their product portfolios. Hardware security modules, certificate management platforms, TLS libraries, VPN appliances and cloud services are all beginning to feature PQC algorithm support in their marketing materials and product roadmaps.
For organisations evaluating their options, this is a welcome development. It signals that the supply side of the PQC transition is maturing, and that the tools needed to begin migration are becoming available. But there is an important distinction that is frequently lost in the vendor messaging — one that has significant implications for planning, procurement and operational readiness.
That distinction is between supporting PQC algorithms and performing well with PQC algorithms under real-world conditions. These are not the same thing, and conflating them can lead to poor investment decisions and operational surprises.
Why PQC Algorithms Are Computationally Different
To understand why performance is a critical consideration, it helps to appreciate how the NIST-approved quantum-resistant algorithms differ from their predecessors. The mathematical foundations of algorithms such as ML-KEM (formerly CRYSTALS-Kyber), ML-DSA (formerly CRYSTALS-Dilithium) and SLH-DSA (formerly SPHINCS+) are fundamentally different from the RSA and elliptic curve algorithms that have underpinned public-key cryptography for decades.
These new algorithms are significantly more demanding computationally. Key sizes are larger, signature sizes are larger, and the processing required for key generation, encapsulation and signing operations is materially greater. This is not a marginal increase — it is a structural change in the resource demands that cryptographic operations place on the systems that execute them.
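To make the scale of the change concrete, the parameter sizes published in FIPS 203 (ML-KEM) and FIPS 204 (ML-DSA) can be compared directly with common classical counterparts. The short Python sketch below uses those published figures; the exact numbers depend on which parameter set an implementation selects.

```python
# Public-key and signature sizes in bytes, taken from the FIPS 203/204
# parameter sets and their common classical counterparts. Illustrative
# only -- consult the standards for your chosen parameter set.
SIZES = {
    # Key establishment: classical X25519 vs ML-KEM-768 (FIPS 203)
    "X25519 public key": 32,
    "ML-KEM-768 encapsulation key": 1184,
    "ML-KEM-768 ciphertext": 1088,
    # Signatures: classical Ed25519 vs ML-DSA-65 (FIPS 204)
    "Ed25519 signature": 64,
    "ML-DSA-65 public key": 1952,
    "ML-DSA-65 signature": 3309,
}

def growth(new: str, old: str) -> float:
    """Size multiplier when moving from a classical to a PQC artefact."""
    return SIZES[new] / SIZES[old]

if __name__ == "__main__":
    print(f"Key-share growth: x{growth('ML-KEM-768 encapsulation key', 'X25519 public key'):.1f}")
    print(f"Signature growth: x{growth('ML-DSA-65 signature', 'Ed25519 signature'):.1f}")
```

A roughly 37-fold increase in key-share size and a 50-fold increase in signature size is why the overhead shows up not only in CPU time but also in bandwidth, storage and protocol message sizes.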
This increased computational overhead will remain a characteristic of PQC algorithms for the foreseeable future. While optimisations will continue to improve performance over time, the fundamental resource requirements are inherent to the mathematical approaches that provide quantum resistance. Organisations should plan on the basis that PQC operations will be more expensive than their classical equivalents, not less.
The Gap Between Marketing and Operational Reality
Many vendors now advertise PQC support in their products. In some cases, this means the product can process PQC algorithms in controlled conditions. In other cases, it may mean that PQC algorithm libraries have been integrated but not yet optimised for production workloads. In yet other cases, the support may be limited to specific algorithms, specific operations or specific configurations.
The critical question for any organisation evaluating these claims is straightforward: how does this product actually perform under load and at scale with PQC algorithms? Stating support is easy. Demonstrating operational performance is not.
To draw an analogy from the whitepaper, it is the difference between an older PC technically running the latest version of an operating system, and that same PC running it well. The product may support the algorithms, but if it cannot maintain acceptable latency, throughput and reliability when processing PQC operations at enterprise scale, the support claim has limited practical value.
This gap between marketing and reality is not unique to PQC. It has been a recurring feature of technology transitions, from cloud computing to AI. But the consequences in a cryptographic context can be particularly severe, because performance degradation in cryptographic services directly affects authentication, encryption, certificate issuance and every process that depends on them.
Where Performance Matters Most
The impact of PQC performance overhead is not uniform across all use cases. Some areas of the technology estate will be affected more than others, and understanding where performance sensitivity is highest is essential to planning a successful transition.
Certificate issuance and renewal is one area where performance is critical. Organisations that manage large volumes of certificates — particularly those with automated lifecycle management — need their PKI infrastructure to process issuance, renewal and revocation operations at speed. If PQC algorithms introduce significant latency into these operations, the impact on certificate management workflows and dependent business processes can be substantial.
TLS handshake performance is another key consideration. Every encrypted connection between a client and a server begins with a TLS handshake, during which cryptographic operations establish the secure session. PQC algorithms increase both the computational cost of this handshake and the size of the messages exchanged, which can affect connection establishment times, particularly under high load. For organisations running customer-facing web services, APIs or real-time communication platforms, this latency increase may be noticeable and operationally significant.
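The message-size effect can be sketched with a back-of-envelope calculation. The figures below compare the key-share payloads of a classical X25519 exchange with the hybrid X25519MLKEM768 group now seen in TLS 1.3 deployments; they cover the key-exchange payloads only, not the full handshake, so treat them as indicative rather than a wire-accurate accounting.

```python
# Back-of-envelope comparison of TLS 1.3 key-share payload sizes:
# classical X25519 versus the hybrid X25519MLKEM768 group.
# Key-exchange payloads only, not the complete handshake.
X25519_PK = 32       # bytes, sent in each direction
MLKEM768_EK = 1184   # ML-KEM-768 encapsulation key (client -> server)
MLKEM768_CT = 1088   # ML-KEM-768 ciphertext (server -> client)

classical = X25519_PK + X25519_PK
hybrid = (X25519_PK + MLKEM768_EK) + (X25519_PK + MLKEM768_CT)

print(f"Classical X25519 key shares: {classical} bytes")
print(f"Hybrid X25519MLKEM768 key shares: {hybrid} bytes ({hybrid / classical:.1f}x)")
```

Several kilobytes of additional handshake data per connection is negligible for a single session, but at millions of connections per day it becomes a measurable bandwidth and latency factor, which is why load testing matters.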
Hardware security modules present a further performance consideration. HSMs are purpose-built devices that perform cryptographic operations in a secure, tamper-resistant environment. They are designed for high-throughput cryptographic processing, but their performance characteristics are tied to their hardware capabilities. An HSM that was specified and procured for classical algorithm workloads may not deliver adequate performance when processing PQC operations. Understanding this before deploying PQC algorithms on existing HSM infrastructure is essential.
Code signing and firmware verification are also affected. The larger signature sizes and increased processing requirements of PQC algorithms mean that signing and verification operations take longer and consume more resources. For organisations that sign large volumes of software packages, firmware updates or documents, this overhead needs to be factored into build pipelines, distribution systems and verification workflows.
Implications for Capacity Planning
The performance characteristics of PQC algorithms have direct implications for infrastructure capacity planning. Wherever cryptographic operations are performed — on servers, network appliances, HSMs, load balancers, endpoints and IoT devices — the increased processing demands of PQC will require more compute resource to maintain the same level of service.
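As a rough illustration of what this means for sizing, suppose benchmarking shows that each cryptographic operation costs some multiple of its classical CPU cost. Holding throughput constant, required capacity then scales approximately linearly. The sketch below is a deliberate simplification that ignores batching, caching and I/O effects, and all figures in it are hypothetical.

```python
def required_capacity(baseline_cores: float, cost_multiplier: float,
                      headroom: float = 0.3) -> float:
    """Estimate CPU cores needed to sustain the same operation rate when
    each operation costs `cost_multiplier` times as much, plus a
    fractional headroom allowance for peaks. A linear first-order model.
    """
    return baseline_cores * cost_multiplier * (1 + headroom)

# Hypothetical example: 8 cores today, benchmarks show 2.5x per-op cost.
print(required_capacity(8, 2.5))  # -> 26.0
```

Even a crude model like this makes the budgeting conversation concrete: the cost multiplier is exactly the number that environment-specific benchmarking should produce.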
This means that PQC adoption cannot be treated purely as a software or configuration change. It has infrastructure implications that must be understood and budgeted for. Performance benchmarking, load testing and capacity analysis need to be carried out in environments that reflect real-world conditions, not just laboratory configurations.
Without this data, organisations risk deploying PQC-capable solutions that introduce latency, bottlenecks or service degradation into production environments. The result is either a degraded user experience, a need for urgent infrastructure uplift, or — in the worst case — a rollback that delays the transition and erodes confidence in the programme.
The positive news is that this work can begin now. Performance benchmarking against PQC algorithms does not require a full transition to be underway. It can be conducted as a targeted exercise to inform planning, procurement and investment decisions well in advance of production deployment.
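A targeted benchmarking exercise needs little more than a repeatable harness around the operations under test. The sketch below shows one minimal shape for such a harness using only the Python standard library; the stand-in workload at the bottom is a placeholder, and in a real exercise you would wrap a key generation, encapsulation or signing call from whichever PQC library you are evaluating.

```python
import statistics
import time

def benchmark(op, iterations: int = 1000, warmup: int = 100) -> dict:
    """Time a zero-argument callable and report latency statistics.

    In a real exercise `op` would wrap a key generation, encapsulation
    or signing call from the PQC library under evaluation.
    """
    for _ in range(warmup):  # let caches and code paths settle first
        op()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        op()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "ops_per_sec": iterations / sum(samples),
        "p50_ms": statistics.median(samples) * 1000,
        "p99_ms": samples[int(iterations * 0.99) - 1] * 1000,
    }

# Stand-in workload; substitute a real PQC operation from your library.
stats = benchmark(lambda: sum(range(1000)), iterations=200)
print({k: round(v, 3) for k, v in stats.items()})
```

Running the same harness against classical and PQC implementations of the same operation, on representative hardware, yields exactly the cost multipliers that capacity planning and vendor evaluation need.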
How to Evaluate Vendor Claims Effectively
Given the gap between marketing and operational reality, organisations need a structured approach to evaluating vendor PQC claims. Several principles can guide this process.
First, ask for performance data, not just feature lists. A vendor that claims PQC support should be able to provide benchmark data showing throughput, latency and resource consumption for PQC operations under realistic load conditions. If this data is not available, the claim is unproven.
Second, test in your own environment. Vendor benchmarks are conducted under conditions that favour the product. The only way to understand how a product will perform in your environment is to test it there, using workloads and configurations that reflect your operational reality.
Third, understand which algorithms are supported and at what maturity level. PQC support may be limited to specific algorithms, specific key sizes or specific operations. It may be labelled as experimental, preview or beta. Organisations need to understand exactly what is supported, what is production-ready and what is still in development.
Fourth, assess the vendor’s roadmap and commitment. PQC is an evolving landscape. The algorithms standardised today may be supplemented or replaced by future standards. A vendor that demonstrates a clear commitment to ongoing PQC development — including participation in standards processes and regular updates — is a more reliable long-term partner than one that has added PQC support as a marketing checkbox.
The Positive Outlook
Despite the caution warranted in evaluating vendor claims, the broader picture is encouraging. Vendors are already releasing PQC-capable technologies, and the pace of development is accelerating. The focus across the industry is shifting from the availability of PQC-supporting technology to its implementation in a structured, data-driven manner that safeguards business continuity.
Performance optimisations will continue to improve as implementations mature, hardware catches up and the industry accumulates operational experience with PQC algorithms in production environments. The key is to approach this transition with eyes open — informed by evidence rather than marketing, and grounded in an understanding of your specific performance requirements.
How Unsung Can Help
As a vendor-neutral consultancy, Unsung pays close attention to the gap between vendor marketing and real-world performance. We help organisations evaluate PQC-capable products objectively, develop meaningful performance benchmarks tailored to their operational environments and make platform decisions grounded in evidence rather than slide decks.
Our independence is central to the value we provide. We are not incentivised by any vendor relationship to recommend one product over another. Our recommendations are always in your interest, informed by your specific requirements and tested against the operational realities of your environment. If your organisation is evaluating PQC-capable products and wants independent assurance that the claims stand up to scrutiny, we would welcome the conversation.
Want to explore this topic further?
This blog is part of a series drawn from our strategic whitepaper, Post-Quantum Cryptography: A Strategic Whitepaper for the C-Suite. It provides vendor-neutral, business-focused guidance on navigating the quantum era — covering the threats already in play, lessons from previous hype cycles, and practical steps your organisation can take today. Download your copy here: https://2f4v3l.share-eu1.hsforms.com/20qJjHSynQkuJKhI_xq9Msg

