Application and Infrastructure Transformation for PQC: What Your Technology Estate Means for Readiness

Introduction

One of the most important — and least discussed — aspects of post-quantum cryptography readiness is that the transformation requirements differ significantly depending on an organisation’s technology estate. The journey for an organisation that relies primarily on software-as-a-service looks fundamentally different from one that manages its own infrastructure and PKI on-premise. A hybrid environment introduces its own set of considerations.

Yet much of the PQC guidance published to date treats all organisations as if they face the same challenge. In reality, your starting point matters enormously. Understanding where your organisation sits on the technology estate spectrum is essential to developing a PQC readiness plan that is realistic, proportionate and properly sequenced.

This blog explores what the PQC transition looks like across different technology models, where the complexity concentrates, and what organisations in each category should be thinking about now.

SaaS-Dependent Organisations: The Good News

For organisations that primarily consume software-as-a-service, the PQC transition carries a significant advantage: much of the heavy lifting is being done for you. SaaS vendors are responsible for the security of their platforms, and the major providers are already deploying quantum-resistant algorithms within their infrastructure. When a SaaS vendor upgrades their TLS implementation or certificate infrastructure to support PQC algorithms, every customer benefits without needing to take action.

Similarly, many of the modern browsers used to access SaaS applications — Chrome, Edge, Firefox and Safari — are already implementing or testing support for PQC-capable protocols. This means that for many SaaS-dependent organisations, the client-side and server-side components of the encrypted connection are both moving towards PQC readiness through their natural update cycles.

There are straightforward ways to assess your current readiness in this area. A growing number of websites now validate browser support for post-quantum algorithms, providing a quick and easy way to evaluate one dimension of your corporate endpoint readiness. Cloudflare, for example, offers a publicly accessible tool that tests whether your browser supports PQC key exchange. For predominantly SaaS-based organisations, running this assessment across your standard corporate endpoints is a practical, low-effort first step that provides immediate visibility.

However, SaaS dependency does not mean the PQC challenge is entirely outsourced. Organisations still need to understand how their data flows between SaaS platforms, what happens to data at integration points, and whether any custom integrations or middleware components introduce cryptographic dependencies that are not covered by the SaaS vendor’s PQC roadmap. Single sign-on implementations, API gateways, data pipelines and reporting tools that sit between SaaS platforms are all areas that may require independent assessment.

On-Premise and IaaS Organisations: A More Involved Transition

For organisations that host their own services or manage their own PKI — whether on-premise or on infrastructure-as-a-service platforms — the transformation requirements are significantly more involved. The responsibility for upgrading cryptographic algorithms sits squarely with the organisation, and the scope of that responsibility touches every layer of the technology stack.

The most immediate consideration is application code. Applications that implement cryptographic operations directly — whether for encryption, signing, authentication or certificate handling — will need to be updated to support quantum-resistant algorithms. This is not a configuration change; it requires development effort, testing and deployment across what may be a substantial application portfolio.

Critically, any recoding effort should not simply swap one hard-coded algorithm for another. The quantum era demands cryptographic agility, which means applications should be refactored so that cryptographic components can be changed through configuration rather than source-code modification. Investing in a one-time algorithm swap without building in this flexibility is a missed opportunity that will create the same problem again when the next algorithm change is required.
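The pattern described above can be sketched in a few lines: algorithm selection moves into a registry keyed by configuration, so call sites never name an algorithm directly. This is a minimal illustration, not a production design — the registry entries here are classical stdlib primitives standing in for whatever schemes (including quantum-resistant ones such as ML-DSA) your chosen cryptography library supports.

```python
# Minimal sketch of cryptographic agility: the signing algorithm is
# selected from configuration, not hard-coded at call sites. The two
# registry entries are stdlib stand-ins for illustration only.
import hashlib
import hmac

# Registry of available primitives, keyed by a configuration name.
ALGORITHMS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).hexdigest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).hexdigest(),
}

def sign(message: bytes, key: bytes, config: dict) -> str:
    """Sign a message using whichever algorithm the config names."""
    algorithm = ALGORITHMS[config["signing_algorithm"]]
    return algorithm(key, message)

# Switching algorithms is now a configuration change, not a code change.
config = {"signing_algorithm": "hmac-sha256"}
tag = sign(b"payload", b"secret-key", config)
```

When the next algorithm transition arrives, only the registry and the configuration need to change; the application code calling `sign` is untouched.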

Beyond application code, the protocol layer presents its own challenges. TLS 1.3 will need to be implemented between service, application and user endpoints to support PQC key exchange and authentication. For basic n-tier applications with straightforward client-server architectures, this may be relatively contained. But for enterprises that integrate applications and services to support data processing, reporting, automation and orchestration, the implementation of TLS 1.3 with PQC algorithms will create a ripple effect through the IT estate.
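For application teams, the first concrete step is enforcing TLS 1.3 as the protocol floor. The sketch below shows this in Python's standard library; note that PQC key exchange itself depends on the underlying TLS stack (hybrid groups such as X25519MLKEM768 appear in recent OpenSSL releases), so pinning the version is a prerequisite rather than the whole job.

```python
# Sketch: enforce TLS 1.3 as the minimum protocol version for outbound
# connections. Whether a PQC hybrid key exchange is then negotiated
# depends on the OpenSSL build Python is linked against.
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and older

# The context is then used as normal, e.g.:
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           ...
```

Applying this floor across every service-to-service hop in an integrated estate is where the ripple effect described above becomes visible: each connection has two ends, and both must be ready.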

Consider a typical enterprise data processing workflow. A user-facing application calls a middleware layer, which queries a database, triggers an automation service, generates a report and pushes the output to a document management system. Each of these connections relies on encrypted communication. Upgrading the cryptographic algorithms used in these connections requires coordination across multiple teams, environments and potentially vendors. The interdependencies mean that changes cannot be made in isolation — they must be planned and sequenced to avoid breaking the chain of trust.

Hybrid Environments: The Compound Challenge

Most large organisations operate hybrid technology estates that combine SaaS, IaaS, on-premise infrastructure and, in many cases, operational technology. For these organisations, the PQC transition is not a single challenge but a compound one, with different transformation requirements applying to different parts of the estate simultaneously.

The integration points between these different environments are where complexity concentrates. A SaaS platform may be PQC-ready, but the API gateway that connects it to an on-premise system may not be. An IaaS-hosted application may support PQC algorithms, but the legacy middleware it depends on may not. The cloud identity provider may support quantum-resistant authentication, but the on-premise directory service it federates with may not.

Managing these boundaries requires a clear map of how systems, data and trust flow across the hybrid estate. Without this visibility, organisations risk creating an uneven patchwork of PQC readiness where some connections are protected and others are not — with the weakest link defining the overall security posture.

For hybrid organisations, the PQC transition plan must explicitly address boundary management: identifying where different parts of the estate connect, understanding the cryptographic protocols in use at each boundary, and developing a sequenced approach to upgrading these connections that maintains operational continuity throughout.
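The boundary map described above can start as something very simple: an inventory of connections between estate components and the key exchange each currently uses, from which the weakest links fall out directly. The sketch below is hypothetical — the system names and the single PQC-ready group are illustrative assumptions, not a recommended taxonomy.

```python
# Hypothetical boundary inventory for a hybrid estate. Each entry records
# a connection between two components and its current key exchange.
# System names and the PQC-ready set are illustrative only.
PQC_READY_KEX = {"X25519MLKEM768"}  # hybrid PQC key exchange group

boundaries = [
    {"src": "saas-crm", "dst": "api-gateway", "kex": "X25519MLKEM768"},
    {"src": "api-gateway", "dst": "on-prem-erp", "kex": "X25519"},
    {"src": "cloud-idp", "dst": "on-prem-directory", "kex": "RSA-2048"},
]

def weakest_links(boundaries):
    """Return connections not yet using a PQC-ready key exchange."""
    return [b for b in boundaries if b["kex"] not in PQC_READY_KEX]

for b in weakest_links(boundaries):
    print(f'{b["src"]} -> {b["dst"]}: upgrade needed (currently {b["kex"]})')
```

Even a spreadsheet-grade inventory like this gives the sequencing conversation something concrete to work from: the upgrade plan becomes an ordered walk through the flagged connections rather than an abstract ambition.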

The Performance Dimension: A Universal Consideration

Regardless of technology estate, the rollout of quantum-resistant algorithms will demand more processing power wherever they are used. This is a universal consideration that affects SaaS-dependent, on-premise and hybrid organisations alike, although the implications differ.

For SaaS-dependent organisations, the performance impact is largely absorbed by the SaaS vendor. However, organisations should monitor for any changes in service performance following vendor-side PQC upgrades, particularly for latency-sensitive applications.

For on-premise and IaaS organisations, the performance implications are direct and must be actively managed. Servers, network appliances, load balancers, HSMs and endpoint devices will all experience increased computational load when processing PQC operations. Performance benchmarking, load testing and capacity analysis need to be conducted to understand the increased resource requirements and inform future technical refresh and capacity planning decisions.
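A benchmarking harness for this purpose can be very lightweight. The sketch below times two operations and reports mean latency per operation; the operations shown are stdlib placeholders, and in practice you would substitute calls into your actual TLS stack or a PQC library to compare classical and post-quantum primitives under realistic payloads.

```python
# Sketch of a benchmarking harness for before/after comparison of
# cryptographic operations. The timed operations are stdlib placeholders;
# substitute real classical and PQC primitives for actual measurement.
import hashlib
import os
import timeit

def benchmark(label: str, operation, iterations: int = 1000) -> float:
    """Time an operation and report mean latency in microseconds."""
    total = timeit.timeit(operation, number=iterations)
    mean_us = (total / iterations) * 1e6
    print(f"{label}: {mean_us:.1f} us/op")
    return mean_us

payload = os.urandom(4096)
baseline = benchmark("placeholder classical op (sha256)",
                     lambda: hashlib.sha256(payload).digest())
candidate = benchmark("placeholder heavier op (sha3-512)",
                      lambda: hashlib.sha3_512(payload).digest())
# The candidate/baseline ratio is the per-operation overhead to feed
# into capacity and refresh planning.
```

The point is not the absolute numbers but the ratio: measured per-operation overhead, multiplied by expected transaction volumes, is what turns a vague "PQC costs more CPU" into a capacity planning input.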

This performance data is not just a technical input — it is a financial planning input. Understanding the infrastructure uplift required to maintain current service levels with PQC algorithms is essential to building an accurate budget for the transition. Organisations that do not factor in performance overhead will find themselves facing unplanned infrastructure costs, repeating the pattern seen in earlier technology transitions.

A Window of Opportunity

The positive news is that vendors across the technology landscape are already releasing PQC-capable products, and the pace of development is accelerating. The focus is shifting from the availability of PQC-supporting technology to the implementation of it in a structured, data-driven manner that safeguards business continuity.

Importantly, this is not a cliff-edge problem. Organisations have a window to assess their exposure, understand their technology estate, build a transition plan and begin executing in a phased, well-governed manner. The key is to use that window productively — not to defer action, but to ensure that when action is taken, it is well-targeted, well-sequenced and grounded in a clear understanding of your specific starting point.

The organisations that invest in understanding their technology estate now — mapping where SaaS, IaaS, on-premise and hybrid boundaries sit, identifying cryptographic dependencies at each layer and building a realistic picture of the transformation effort required — will be in a fundamentally stronger position when the pace of the PQC transition accelerates.

How Unsung Can Help

Unsung helps organisations assess the PQC implications specific to their technology estate. Whether you are predominantly SaaS-based and need to understand where your residual responsibilities lie, managing complex on-premise infrastructure and PKI that requires hands-on transformation, or operating a hybrid environment where boundary management is the critical challenge, we provide independent guidance on where to focus your efforts and how to sequence your transition for maximum impact and minimum disruption.

Our vendor-neutral position means we approach every engagement with your interests at the centre. We do not favour one deployment model over another or one vendor’s roadmap over another. We help you understand your specific starting point, develop a proportionate plan and make progress that is aligned to your risk profile, your budget and your operational reality.

If you are looking for clarity on what PQC means for your specific technology estate, we would welcome the conversation.

Want to explore this topic further?

This blog is part of a series drawn from our strategic whitepaper, Post-Quantum Cryptography: A Strategic Whitepaper for the C-Suite. It provides vendor-neutral, business-focused guidance on navigating the quantum era — covering the threats already in play, lessons from previous hype cycles, and practical steps your organisation can take today. Download your copy here: https://2f4v3l.share-eu1.hsforms.com/20qJjHSynQkuJKhI_xq9Msg

April 27, 2026