Microsoft recently announced that it is discontinuing its experimental support for WASI (WebAssembly System Interface) node pools in Azure Kubernetes Service (AKS) in May. Anyone running server-side WASI code on AKS therefore needs to start planning a migration to an alternative runtime. The discontinuation does not mean moving away from WASI altogether, however. Instead, it highlights the need for viable alternatives that keep WASI workloads running smoothly within Kubernetes environments.
The synergy between WebAssembly and Kubernetes is evident, and open-source projects have emerged to fill the void left by Microsoft's decision. These projects promise minimal disruption, since they add new layers to the AKS platform rather than replacing it. For those utilizing WASI node pools, May 5 is the last day on which new ones can be created; existing workloads will continue to function, making the transition urgent yet manageable. In response, Microsoft officially supports two alternative methods, which we will explore in detail, guiding users through the steps necessary for a smooth transition.
1. Establishing an AKS Cluster with Azure CLI
The first step involves creating an Azure Kubernetes Service (AKS) cluster using the Azure Command Line Interface (CLI). The Azure CLI streamlines the management and configuration of cloud services, providing a straightforward way to set up and maintain AKS clusters. Begin by installing the Azure CLI if it is not already present on your machine. Once installed, authenticate with your Azure account to gain access to the Azure services.
Using the az aks create command, you can quickly deploy an AKS cluster. The command accepts parameters such as the resource group name, cluster name, node count, and Kubernetes version, tailoring the cluster to your requirements. Once it completes, the cluster is live and ready for further configuration, and the Azure CLI can then be used to monitor and scale the deployed resources. With the AKS cluster set up, you are ready to deploy the additional components needed to run WASI workloads.
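As a minimal sketch, the sequence looks like this (the resource group, cluster name, region, and node count are illustrative placeholders, so adjust them to your environment):

```bash
# Authenticate and create a resource group to hold the cluster
az login
az group create --name wasiMigrationRG --location eastus

# Create a small AKS cluster; tune node count and options to your needs
az aks create \
  --resource-group wasiMigrationRG \
  --name wasiCluster \
  --node-count 2 \
  --generate-ssh-keys

# Merge the cluster credentials into your kubeconfig so kubectl works
az aks get-credentials --resource-group wasiMigrationRG --name wasiCluster
```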
2. Deploying cert-manager with Helm
Next, Helm is used to deploy cert-manager, which securely manages certificates within Kubernetes clusters. Cert-manager automates the issuance and renewal of TLS certificates, providing seamless HTTPS support for your services. Helm, a Kubernetes package manager, significantly simplifies the deployment process by packaging complex applications as charts.
Before proceeding, ensure Helm is installed on your system. Add the cert-manager Helm repository with helm repo add jetstack https://charts.jetstack.io, then run helm repo update to fetch the latest charts. Deploy cert-manager to your cluster with the helm install command, specifying the namespace and release name. After deployment, cert-manager will begin handling certificate requests and renewals, vital for secure communications within the cluster.
3. Installing runtime-class-manager and Operator
To manage WASI workloads, the runtime-class-manager and its corresponding controller need to be installed from the KWasm repository. The runtime-class-manager oversees different runtime configurations, essential for running WASI applications smoothly.
Install the runtime-class-manager by cloning the KWasm repository and applying the relevant YAML files with kubectl apply. This step deploys the operator, which communicates with the Kubernetes API to manage runtime classes and node configurations. The runtime-class-manager keeps runtime environments separated, ensuring WASI applications run in isolated, optimized settings.
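A rough sketch of that flow (the repository URL and manifest path are assumptions; consult the KWasm project's current instructions for the canonical locations):

```bash
# Clone the runtime-class-manager sources (URL is illustrative)
git clone https://github.com/kwasm/runtime-class-manager.git
cd runtime-class-manager

# Apply the operator manifests; the directory layout may differ by release
kubectl apply -f deploy/
```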
4. Applying the containerd Shim to Nodes
To signal the usage of the containerd shim-spin, kubectl is employed to annotate nodes within the cluster. This annotation tells the runtime-class-manager to deploy the containerd shim to these nodes. The shim acts as an intermediary, allowing WASI workloads to be treated as standard Kubernetes resources.
Use the kubectl annotate node command to apply the necessary annotations, specifying the nodes designated to host WASI applications. Annotating ensures the nodes are correctly labeled and ready for the containerd shim's deployment; once annotated, a node can host a WASI runtime, providing the environment needed to execute WASI workloads effectively.
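For example, assuming the KWasm annotation convention (the annotation key may differ between releases, and the node name is a placeholder):

```bash
# Mark a single node as a target for the containerd shim-spin
kubectl annotate node aks-nodepool1-12345678-vmss000000 kwasm.sh/kwasm-node=true

# Or mark every node in the cluster at once
kubectl annotate node --all kwasm.sh/kwasm-node=true
```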
5. Adding SpinKube Custom Resources and Runtime Classes
Deploying SpinKube custom resources and runtime classes is the next step in configuring your AKS cluster for WASI workloads. These custom resources enable Kubernetes to recognize and schedule WASI applications, using runtime classes to define their execution environments.
Apply the necessary custom resource definitions (CRDs) and runtime classes with kubectl apply. This step includes fetching the SpinKube YAML files from their repository and applying them to your cluster. These definitions inform Kubernetes about the various WASI-related components, ensuring seamless integration with existing Kubernetes operations. Once deployed, SpinKube will manage WASI applications, treating them as standard Kubernetes resources.
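A sketch using the published release manifests (the version tag is illustrative; pin the release that matches the spin-operator version you plan to install):

```bash
# Install the SpinKube custom resource definitions
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.crds.yaml

# Register the runtime class that routes Spin workloads to the shim
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.2.0/spin-operator.runtime-class.yaml
```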
6. Installing spin-operator with Helm
The spin-operator, an essential component for managing WASI applications on the AKS platform, is also deployed with Helm. It handles the lifecycle of WASI workloads, facilitating tasks such as scaling, updating, and monitoring.
Install the spin-operator by adding its Helm repository and executing the helm install command, specifying the namespace and release name to deploy the operator correctly. The spin-operator works closely with the containerd shim-spin, ensuring WASI applications are managed efficiently within the cluster. This setup allows for automated operations, reducing the manual effort required to manage WASI workloads.
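A minimal sketch, with the chart version and namespace as placeholders to adapt:

```bash
# Install spin-operator from its OCI chart registry into its own namespace
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.2.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator
```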
7. Deploying SpinAppExecutor
The SpinAppExecutor completes the execution path for WASI applications within the AKS environment: it handles the deployment and execution of WASI workloads, working alongside the spin-operator to ensure optimal performance.
Deploy the SpinAppExecutor by applying its YAML definition to the cluster with kubectl apply. This step involves configuring the execution environment, including specifying the runtime class and setting the parameters needed for workload management. The SpinAppExecutor streamlines the execution process, handling tasks such as scaling and monitoring automatically.
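A minimal executor definition might look like the following (the apiVersion and runtime class name follow SpinKube's defaults and may differ in your release):

```bash
# Create a SpinAppExecutor backed by the containerd shim-spin runtime class
kubectl apply -f - <<EOF
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinAppExecutor
metadata:
  name: containerd-shim-spin
spec:
  createDeployment: true
  deploymentConfig:
    runtimeClassName: wasmtime-spin-v2
EOF
```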
8. Transferring WASI Applications to SpinKube
Migrating existing WASI applications to the newly established SpinKube environment involves several steps, including updating configurations and ensuring compatibility with the new runtime. The migration process requires careful planning to minimize disruptions and maintain service continuity.
Start by reviewing your current WASI applications, identifying dependencies and configurations that need adjustment. Update the application manifests to align with SpinKube's runtime classes and execution parameters, then use kubectl apply to deploy the updated manifests to the cluster, enabling SpinKube to manage these workloads effectively. This transition ensures your WASI applications continue running smoothly within the AKS environment.
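One way to bootstrap the new manifests is Spin's kube plugin, which can scaffold a SpinApp definition from an existing OCI image (the image reference here is an illustrative placeholder):

```bash
# Install the kube plugin for the spin CLI, then generate a manifest
spin plugins install kube
spin kube scaffold --from ghcr.io/example/my-wasi-app:v1 > spinapp.yaml

# Review and adjust the generated manifest, then hand it to the cluster
kubectl apply -f spinapp.yaml
```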
9. Setting up a WASI Code Registry
Configuring a WASI code registry is essential for managing and distributing WASI modules efficiently. An OCI-compliant registry within Azure, or a CI/CD-integrated registry such as GitHub Packages, can streamline this process.
Set up the registry by following the provider’s documentation, configuring necessary authentication and access controls. For Azure, use Azure Container Registry to manage WASI modules. For GitHub Packages, ensure CI/CD integration by setting up workflows that compile and store WASI code. This setup involves defining build pipelines and repository structures, ensuring your applications always have access to the latest WASI modules.
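For the Azure path, a sketch of creating a registry and pushing a Spin application to it as an OCI artifact (the registry and application names are placeholders):

```bash
# Create an Azure Container Registry and authenticate against it
az acr create --resource-group wasiMigrationRG --name wasiregistry --sku Basic
az acr login --name wasiregistry

# Push the Spin application to the registry as an OCI artifact
spin registry push wasiregistry.azurecr.io/my-wasi-app:v1
```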
10. Automating Deployment with GitHub Actions
Automating deployment with GitHub Actions significantly enhances the efficiency of managing WASI workloads. GitHub Actions provides a powerful platform for CI/CD, integrating seamlessly with AKS and other cloud services.
Set up workflows in your GitHub repository to compile WASI code and save it to the OCI-compliant registry. Define actions that build, test, and deploy WASI modules automatically. Use event triggers to initiate these workflows, ensuring consistent updates and deployments. This automation reduces manual intervention, ensuring your WASI applications remain up-to-date and functional.
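A sketch of such a workflow, assuming the Fermyon Spin actions and the registry secrets shown here (action versions, registry names, and secret names are placeholders to verify against the actions' documentation):

```yaml
# .github/workflows/deploy.yml
name: build-and-push
on:
  push:
    branches: [main]

jobs:
  spin:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Install the spin CLI on the runner
      - uses: fermyon/actions/spin/setup@v1

      # Authenticate to the OCI registry so spin can push
      - uses: docker/login-action@v3
        with:
          registry: wasiregistry.azurecr.io
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      # Build the WASI module and publish it, tagged with the commit SHA
      - name: Build and push
        run: |
          spin build
          spin registry push wasiregistry.azurecr.io/my-wasi-app:${{ github.sha }}
```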
11. Describing Applications with YAML
Defining applications using YAML files is a standard practice in Kubernetes environments, providing a clear and structured way to describe configurations and dependencies. These YAML descriptions are crucial for managing WASI workloads effectively.
Create YAML files that detail application components, runtime configurations, and execution parameters. Ensure these descriptions align with SpinKube's requirements, specifying the containerd shim as the executor, then use kubectl apply to deploy the files to the cluster, enabling Kubernetes to manage and schedule WASI applications accurately. This method ensures consistency and clarity in application management, facilitating smooth operations within the AKS environment.
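For instance, a minimal SpinApp description tying the application to the containerd shim executor (the names, image, and apiVersion are placeholders to check against your SpinKube release):

```bash
# Describe the application declaratively and apply it to the cluster
kubectl apply -f - <<EOF
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: my-wasi-app
spec:
  image: wasiregistry.azurecr.io/my-wasi-app:v1
  replicas: 2
  executor: containerd-shim-spin
EOF
```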
Future Considerations and Key Takeaways
The retirement of WASI node pools is less an ending than a change of foundation. The steps above, standing up a standard AKS cluster, installing cert-manager, the runtime-class-manager, and the containerd shim, then layering on SpinKube's custom resources, the spin-operator, and the SpinAppExecutor, recreate what the experimental feature provided, this time on open-source components rather than a preview tied to a single platform. Keep the May 5 cutoff in mind: after that date new WASI node pools can no longer be created, though existing workloads will keep running, so the migration is urgent but manageable. With applications described as SpinApp resources, modules stored in an OCI-compliant registry, and deployments automated through GitHub Actions, WASI workloads become ordinary Kubernetes citizens on AKS, leaving teams well placed for wherever server-side WebAssembly goes next.