AKS/ARO Foundation: 5-Wk Workshop

Applied Information Sciences

Understand the approach and cost of modernizing your workloads to AKS or ARO

Understanding your application inventory is at the core of any migration project. Analyzing the infrastructure and workloads requires an objective, comprehensive review of your app portfolio. Our Azure Kubernetes Service (AKS) and Azure Red Hat OpenShift (ARO) assessment and design offering helps customers with the first two of the five phases of modernizing workloads to AKS or ARO.

The first step is assessment: we evaluate each workload to estimate pre-migration and pre-deployment effort and to size the target clusters. AIS will produce dependency mappings and the remediation steps required to migrate to the cloud. We'll use tooling that outlines the specific actions needed to move each workload and to understand Azure costs.
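To illustrate the kind of cluster-sizing arithmetic this assessment informs, here is a minimal sketch; the node SKU figures, overhead factors, and helper name are hypothetical illustrations, not part of the AIS tooling:

```python
from math import ceil

def estimate_node_count(pod_cpu_requests, pod_mem_requests_gib,
                        node_cpu=4, node_mem_gib=16,
                        system_overhead=0.25, headroom=0.2):
    """Rough node-count estimate from aggregate pod resource requests.

    node_cpu / node_mem_gib model a hypothetical node SKU (a 4 vCPU,
    16 GiB VM here); system_overhead reserves capacity for the kubelet
    and daemonsets, and headroom leaves room for bursts and rolling
    updates. Real sizing would also weigh max pods per node, disks,
    and availability requirements.
    """
    usable_cpu = node_cpu * (1 - system_overhead)
    usable_mem = node_mem_gib * (1 - system_overhead)
    needed_cpu = sum(pod_cpu_requests) * (1 + headroom)
    needed_mem = sum(pod_mem_requests_gib) * (1 + headroom)
    # Whichever resource runs out first dictates the node count.
    return max(ceil(needed_cpu / usable_cpu), ceil(needed_mem / usable_mem))

# Example: 30 pods, each requesting 0.5 vCPU and 1 GiB of memory.
nodes = estimate_node_count([0.5] * 30, [1.0] * 30)
```

Multiplying a node count like this by the hourly price of the chosen VM size is the basic shape of the Azure cost estimate the tooling produces in far greater detail.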

The second step is design, where an AIS Certified Kubernetes Application Developer will work with client stakeholders to establish an enterprise architecture for AKS/ARO that is scalable, secure, resilient, and maintainable. This guidance will be relevant to delivery teams adopting AKS/ARO as a foundational component of their enterprise cloud adoption and includes (but is not limited to) Naming Conventions, Registry, Tenant Segmentation, Compute, Network, Storage, Security, Availability, and Scaling.

At the end of this engagement, you will know the level of effort to build out AKS/ARO clusters, the effort to remediate and migrate workloads, and the associated Azure costs.


  • 1: Kickoff: AIS will conduct a kickoff meeting with client stakeholders within two (2) business days after the start of the SOW's period of performance to define the approach for successfully completing this engagement. During the kickoff, attendees will establish the process for working sessions, the proposed meeting cadence, and the AIS and client stakeholder teams for these meetings and the engagement.

  • 2: Workload Discovery Workshops: During app workload discovery, AIS will conduct workshops with IT and business owners to understand the landscape of your app infrastructure and how it is currently organized and maintained.

  • 3: App Inventories Created: The app inventories created in the workshops will guide your migration strategy through the design phase and help us plan migration batches and schedules. AIS will use tooling that produces precise assessments, remediation steps, and Azure cost estimates for each workload.

  • 4: Design Sessions: AIS will conduct design sessions throughout the SOW's period of performance. These sessions will cover the design areas listed below.

  • 5: Deliver Design Documents: AIS will create design document(s) for each design group incrementally. In certain cases, separate design documents may be required for individual design areas. AIS will deliver the design document(s) two business days prior to the conclusion of each design group, per the established timeline. The client will have two business days to sign off on the design documents so the team can move efficiently to the next design group.

*Areas Covered in the Design Recommendations:*

  • Naming conventions that align with the client's existing conventions for Clusters, Node Pools, Nodes, and Pods.

  • A private container registry with a tightly controlled image distribution pipeline. The design will consider requirements including geo-replication, patching, automation, and security.

  • Cluster topology that meets customer scaling, security, availability, reliability, compliance, and audit requirements.

  • Compute capacity for node/node pool.

  • Network topology that includes IP address space, network security (NSGs, WAF devices), and traffic routing. The design will cover egress traffic, the Kubernetes subnet/IP space, and cluster DNS.

  • Storage that includes storage type, SKU, performance, connectivity, node size, reliability, availability, and disaster recovery.

  • AKS/ARO security, including a private cluster to meet a zero-trust security posture (with an understanding of private endpoint limitations); Kubernetes Services security with WAF and cluster management; and security that meets your least-privilege principles and centrally managed authentication/authorization.

  • Policies for high availability and resilience, cluster availability, and pod availability (including RTO and RPO targets).

  • Scaling, at both the cluster level and the workload (pod) level.

  • Monitoring, including identifying and setting up monitoring tooling and tracking latency, traffic, errors, and saturation.
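As a rough illustration of the latency/traffic/error/saturation summary the monitoring bullet describes, here is a minimal sketch; the record shape, helper name, and sample numbers are hypothetical, and a real design would source these signals from the cluster's monitoring tooling rather than compute them by hand:

```python
from dataclasses import dataclass

@dataclass
class Request:
    latency_ms: float
    status: int

def golden_signals(requests, window_s, cpu_used, cpu_capacity):
    """Summarize a window of request records into four signals."""
    lat = sorted(r.latency_ms for r in requests)
    return {
        "latency_p95_ms": lat[int(0.95 * (len(lat) - 1))],  # crude p95
        "traffic_rps": len(requests) / window_s,
        "error_rate": sum(r.status >= 500 for r in requests) / len(requests),
        "saturation": cpu_used / cpu_capacity,  # e.g. cores used vs. allocatable
    }

# Hypothetical one-minute window: 95 fast successes, 5 slow server errors.
reqs = [Request(20, 200)] * 95 + [Request(400, 500)] * 5
signals = golden_signals(reqs, window_s=60, cpu_used=2.4, cpu_capacity=4.0)
```

Alert thresholds on summaries like these (for example, error rate above a few percent, or saturation approaching 1.0) are the kind of decision the monitoring design sessions would work through with your team.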