<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[appfleet cloud Blog]]></title><description><![CDATA[Containers on the Edge by appfleet]]></description><link>https://appfleet.com/blog/</link><image><url>https://appfleet.com/blog/favicon.png</url><title>appfleet cloud Blog</title><link>https://appfleet.com/blog/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Sat, 18 Apr 2026 20:41:26 GMT</lastBuildDate><atom:link href="https://appfleet.com/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[appfleet joins Cloudflare]]></title><description><![CDATA[<p><strong>Summary:</strong></p><p>Due to a great synergy between our products, I am happy to announce that Cloudflare and appfleet are joining forces!</p><p>The appfleet platform is shutting down, with all clusters going offline on October 31st 2021.</p><p></p><p><strong>Long story:</strong></p><p>When we started working on appfleet our goal was to build an</p>]]></description><link>https://appfleet.com/blog/appfleet-joins-cloudflare/</link><guid isPermaLink="false">6109b0e3fe93c868cb6d4145</guid><category><![CDATA[Announcement]]></category><dc:creator><![CDATA[Dmitriy A.]]></dc:creator><pubDate>Tue, 03 Aug 2021 21:22:59 GMT</pubDate><content:encoded><![CDATA[<p><strong>Summary:</strong></p><p>Due to a great synergy between our products, I am happy to announce that Cloudflare and appfleet are joining forces!</p><p>The appfleet platform is shutting down, with all clusters going offline on October 31st 2021.</p><p></p><p><strong>Long story:</strong></p><p>When we started working on appfleet our goal was to build an infinitely scalable edge compute platform while still offering an affordable and simple to use service. 
We embarked on our journey completely bootstrapped, a team of three developers and a lot of freelancers.</p><p>A year later we released a production-ready edge compute system. I am very proud of the work we did. We took a complex technology and made it simple to use for everyone, while staying affordable and accessible even to the smallest users.</p><p>And the market saw it: we had rapid growth in deployed clusters, in new code deployments and in the number of global VMs running those clusters.</p><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2021/08/image1.png" class="kg-image" alt srcset="https://appfleet.com/blog/content/images/size/w600/2021/08/image1.png 600w, https://appfleet.com/blog/content/images/2021/08/image1.png 899w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2021/08/image3.png" class="kg-image" alt srcset="https://appfleet.com/blog/content/images/size/w600/2021/08/image3.png 600w, https://appfleet.com/blog/content/images/2021/08/image3.png 903w" sizes="(min-width: 720px) 720px"></figure><p>We were very happy and at the same time overwhelmed by the influx of new users, feedback and feature requests.</p><p>Venture capital firms (VCs) also noticed us. 
During some weeks I had multiple calls per day, every day, talking with different funds that were exploring the edge compute industry and really liked the idea of appfleet.</p><p>At the same time I was also working on migrating BootstrapCDN from the old infrastructure to the jsDelivr organization and a new CDN system.</p><p>Due to the huge size of BootstrapCDN I knew that only one CDN cared about the open source community enough to sponsor it for free — Cloudflare!</p><p>As we talked about the details of the new sponsorship, we touched upon the edge compute industry and the things we are doing with appfleet.</p><p>Cloudflare already has an incredible scale and made sure their Workers product was up to snuff. It’s an incredible service that allows anyone to run JS code globally in 200+ locations!</p><p>That’s a lot more than the five regions supported at appfleet.</p><p>Eventually we both saw that it made sense to join forces and continue working on this together to build the best edge compute platform out there.</p><p>At the end of the day the goal was to work with cool tech and build the perfect edge compute platform. And together with Cloudflare, this is something we are going to achieve while touching the lives of more developers and users than we ever could with appfleet.</p><p>So today I want to announce the incredible news that appfleet is going to join Cloudflare and continue building incredible tech, this time with access to the huge scale of Cloudflare and all of their available resources.</p><p>At the same time the appfleet platform is going to shut down with all clusters going offline on October 31st 2021. 
We will do our best to ensure zero downtime and will help all of our users to migrate away.</p><p>I am really excited about this and invite all of our users to explore Cloudflare Workers as an alternative to appfleet!<br></p><p></p><p><a href="https://dakulov.com/">Dmitriy Akulov</a>,</p><p>Founder of appfleet</p>]]></content:encoded></item><item><title><![CDATA[Top 10 Container Orchestration Tools]]></title><description><![CDATA[<p></p><h3 id="the-importance-of-container-orchestration">The Importance of Container Orchestration</h3><p>Containers have revolutionized how we distribute applications by allowing replicated test environments, portability, resource efficiency, scalability and unmatched isolation capabilities. While containers help us package applications for easier deployment and updating, we need a set of specialized tools to manage them. </p><p>To help with this,</p>]]></description><link>https://appfleet.com/blog/top-10-container-orchestration-tools/</link><guid isPermaLink="false">60170c00884e6d0853b52538</guid><category><![CDATA[Docker]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Sudip Sengupta]]></dc:creator><pubDate>Mon, 22 Mar 2021 19:17:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2021/03/96-Top-10-Container-Orchestration-Tools.png" medium="image"/><content:encoded><![CDATA[<img src="https://appfleet.com/blog/content/images/2021/03/96-Top-10-Container-Orchestration-Tools.png" alt="Top 10 Container Orchestration Tools"><p></p><h3 id="the-importance-of-container-orchestration">The Importance of Container Orchestration</h3><p>Containers have revolutionized how we distribute applications by allowing replicated test environments, portability, resource efficiency, scalability and unmatched isolation capabilities. While containers help us package applications for easier deployment and updating, we need a set of specialized tools to manage them. 
</p><p>To help with this, orchestration tools provide the framework through which we automate containerized workloads. Such tools help DevOps teams manage the lifecycle of containers and implement their networking, load balancing, provisioning, scaling and more. As a result, orchestration tools help teams unlock the full benefits of containerization by offering application resilience, improved security and simplified operations.</p><p>Tasks performed using container orchestration tools include:</p><ul><li>Allocating resources among containers</li><li>Scaling containers up and down based on workloads</li><li>Routing traffic and balancing loads</li><li>Assigning services and applications to specific containers</li><li>Deployment and provisioning</li></ul><p>In this article, let us look at some of the most popular container orchestration tools that an organization can make use of.</p><h1 id="list-of-top-container-orchestration-tools">List of Top Container Orchestration Tools</h1><h2 id="1-kubernetes">1. <a href="https://kubernetes.io/">Kubernetes</a></h2><p>Kubernetes was developed by Google, open-sourced in 2014 and handed over to the Cloud Native Computing Foundation in 2015. As one of the most popular open-source container orchestration tools, Kubernetes offers a wide array of benefits, including auto-scaling and automated load balancing. 
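</p><p>To make the auto-scaling idea concrete, the decision behind Kubernetes' Horizontal Pod Autoscaler can be sketched in a few lines of Python. This is a simplified model of the documented HPA formula (desired = ceil(currentReplicas * currentMetric / targetMetric)), not actual Kubernetes code:</p>

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Simplified Horizontal Pod Autoscaler rule:
    desired = ceil(currentReplicas * currentMetric / targetMetric),
    floored at one replica so the service never scales to zero."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6
print(desired_replicas(4, 90, 60))
# load drops to 20% -> scale back in to 2
print(desired_replicas(4, 20, 60))
```

<p>The real autoscaler layers tolerances, stabilization windows and pod-readiness checks on top of this, but the proportional rule above is the heart of the mechanism. 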
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/components-of-kubernetes--1-.png" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/components-of-kubernetes--1-.png 600w, https://appfleet.com/blog/content/images/size/w1000/2021/02/components-of-kubernetes--1-.png 1000w, https://appfleet.com/blog/content/images/2021/02/components-of-kubernetes--1-.png 1174w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: https://kubernetes.io/</figcaption></figure><p>The Kubernetes framework consists of four main components:</p><ul><li><strong>Node - </strong>In Kubernetes, a node is a physical or virtual machine responsible for running containerized workloads. These machines serve as hosts for container runtimes, and also facilitate communication between containers and the Kubernetes service.</li><li><strong>Cluster - </strong>This is a set of nodes that share resources and run containerized applications.</li><li><strong>Replication Controllers - </strong>Controllers that ensure the specified number of pod replicas is running at any given time.</li><li><strong>Labels - </strong>These are key/value tags attached to objects, such as pods, that Kubernetes uses to group and select resources.</li></ul><p>Kubernetes continues to be a popular choice among developers: it is an open-source platform with an extensive ecosystem of tools that offers flexibility and ease of use, improving workflows and maximizing productivity. The platform also offers a large library of functionalities developed by communities all over the world, giving it unmatched microservice management capabilities. As a result, plenty of <em>managed out-of-the-box</em> orchestration solutions are built on top of Kubernetes. </p><h2 id="2-red-hat-openshift">2. 
<a href="https://www.openshift.com/">Red Hat OpenShift</a></h2><p>OpenShift was developed by Red Hat to provide a hybrid, enterprise-grade platform that extends Kubernetes functionalities to companies that require managed orchestration. The framework is built on an enterprise-grade Linux operating system that lets you automate the lifecycle of your containerized application. This lets you easily manage all your workloads by using containers to virtualize every host. More so, with its various templates and pre-built images, OpenShift lets you create databases, frameworks and other application services easily. As a result, you get a highly optimized platform that standardizes production workflows, enables continuous integration and helps companies automate the management of releases. As an added advantage, the <a href="https://marketplace.redhat.com/en-us">Red Hat Marketplace</a> lets you purchase certified applications that can help in a range of areas, such as billing, visibility, governance and responsive support.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/ezgif-6-c790efe52fea.png" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/ezgif-6-c790efe52fea.png 600w, https://appfleet.com/blog/content/images/2021/02/ezgif-6-c790efe52fea.png 736w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: https://www.openshift.com</figcaption></figure><p>OpenShift offers both <strong>Platform-as-a-Service (PaaS)</strong> and <strong>Container-as-a-Service (CaaS)</strong> cloud service computing models. This essentially lets you either define your application source code in a Dockerfile or convert your source code to a container using a <strong>Source-to-Image</strong> builder. 
</p><h3 id="key-features-of-redhat-openshift-include-">Key features of Red Hat OpenShift include:</h3><ul><li>Built-in Jenkins pipelines streamline workflows, allowing faster production</li><li>Comes with an <strong>Integrated Container Runtime (CoreOS)</strong>, but also integrates well with <strong>Standard CRI-O</strong> and <strong>Docker Runtimes</strong></li><li>Supports SDN and validates integration with various networking solutions</li><li>Integrates various development and operations tools to offer <strong>Self-Service Container Orchestration</strong></li><li>Its <strong>Embedded Operator Hub</strong> grants administrators easy access to services such as Kubernetes Operators, third-party solutions and direct access to cloud service providers, such as AWS</li><li>OpenShift is an open-source, vendor-agnostic platform without a vendor lock-in commitment</li></ul><h2 id="3-apache-mesos">3. <a href="http://mesos.apache.org/">Apache Mesos</a></h2><p>Mesos is an open-source cluster management tool from the Apache Software Foundation that can efficiently perform container orchestration. The Mesos framework can easily provide resource sharing and allocation across distributed frameworks. It enables resource isolation using modern kernel features, such as <strong>Zones in Solaris</strong> and <strong>CGroups in Linux</strong>. Additionally, frameworks such as the <em><strong>Chronos Scheduler</strong></em> start and stop jobs on Mesos, while the <em><strong>Marathon API</strong></em> is used to scale services and balance loads. To let operators define inter-framework resource-sharing policies, Mesos uses a pluggable allocation module. 
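</p><p>Mesos's two-level scheduling is built around <em>resource offers</em>: the master offers spare CPU and memory from agents to frameworks, and each framework accepts or declines offers according to its own policy. A toy Python sketch of a framework's accept/decline step (illustrative only; the names and the greedy policy are assumptions, not the Mesos API):</p>

```python
from dataclasses import dataclass

@dataclass
class Offer:
    agent: str     # agent (node) making the offer
    cpus: float    # spare CPUs offered
    mem_mb: int    # spare memory offered, in MB

def accept_offers(offers, need_cpus, need_mem_mb):
    """Greedy framework scheduler: accept offers until the resource
    demand is covered; remaining offers would be declined."""
    accepted = []
    for offer in offers:
        if need_cpus <= 0 and need_mem_mb <= 0:
            break  # demand satisfied, decline the rest
        accepted.append(offer.agent)
        need_cpus -= offer.cpus
        need_mem_mb -= offer.mem_mb
    return accepted

offers = [Offer("agent-1", 2.0, 4096), Offer("agent-2", 4.0, 8192), Offer("agent-3", 1.0, 2048)]
print(accept_offers(offers, need_cpus=5.0, need_mem_mb=10240))  # the first two offers cover the demand
```

<p>Real frameworks such as Marathon apply much richer placement constraints, but the cycle itself is this simple: receive offers, accept or decline, launch tasks on the accepted resources. 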
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/architecture3.jpg" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/architecture3.jpg 600w, https://appfleet.com/blog/content/images/2021/02/architecture3.jpg 836w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: http://mesos.apache.org/</figcaption></figure><p>More details on the Mesos architecture can be found <a href="http://mesos.apache.org/documentation/latest/architecture/">here</a>. </p><h3 id="key-features-of-apache-mesos-include-">Key features of Apache Mesos include:</h3><ul><li>Linear scalability, allowing the deployment of 10,000s of nodes</li><li>ZooKeeper integration for fault-tolerant master replication</li><li>APIs for developing new applications in Java, C++, etc.</li><li>Graphical user interface for monitoring the state of your clusters</li><li>LXC isolation between tasks</li></ul><p>The advantages of using Mesos are apparent from the number of software projects Apache reports have been built on it, including long-running services such as <strong>Aurora, Marathon &amp; Singularity</strong>, as well as <strong>big data processing, batch scheduling</strong> and <strong>data storage solutions</strong>.</p><h2 id="4-mirantis-kubernetes-engine">4. <a href="https://www.mirantis.com/software/docker/kubernetes/">Mirantis Kubernetes Engine</a></h2><p>Formerly known as Docker Enterprise, Mirantis Kubernetes Engine is an orchestration tool that lets you manage Kubernetes clusters and Docker Swarms interchangeably to provide ultimate runtime flexibility. The solution offers multiple layers of security that include Role-Based Access Control (RBAC) and built-in encryption, providing advanced authentication and access control. With its node-based isolation, Mirantis enables an efficient multi-tenant architecture by offering a clear separation of resources. 
The Mirantis Engine uses Calico for Kubernetes networking and includes an Istio Ingress, enhancing load balancing and providing streamlined gateway controls. Mirantis allows development teams to ship code faster by providing a simple, consistent experience across all major cloud platforms, and offers a choice of various tools and frameworks you can use to improve application portability.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/docker-enterprise-container-cloud-diagram.png" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/docker-enterprise-container-cloud-diagram.png 600w, https://appfleet.com/blog/content/images/size/w1000/2021/02/docker-enterprise-container-cloud-diagram.png 1000w, https://appfleet.com/blog/content/images/2021/02/docker-enterprise-container-cloud-diagram.png 1109w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: https://www.mirantis.com/</figcaption></figure><p>Mirantis allows organizations to innovate and scale applications using its Drivetrain lifecycle management system. The tool integrates into the Mirantis Cloud platform to deliver regular updating and improvement of open-source software. Incremental updates are enabled via a Git repo, which reduces the cost and time of upgrades. Mirantis Drivetrain lifecycle enables a DevOps approach to development while also enabling a Continuous Delivery pipeline.</p><h3 id="key-features-of-mirantis-kubernetes-engine-include-">Key features of Mirantis Kubernetes Engine include:</h3><ul><li>Easier management of containers using standard image builds</li><li>Mirantis isolation ensures security by having all applications running independently in different containers</li><li>Allows application portability across various platforms</li><li>Mirantis Drivetrain enables continuous integration for faster deployment</li></ul><h2 id="5-helios">5. 
<a href="https://github.com/spotify/helios">Helios</a></h2><p>Developed by Spotify, Helios helps developers orchestrate Docker containers by deploying them across distributed servers. Helios is particularly popular with developers due to its pragmatic nature and its functionalities that enhance CI/CD pipelines. The platform also fits seamlessly into most DevOps workflows, as it doesn't require specific operating systems, cloud services or network topologies to manage containers. As an added advantage, Helios documents cluster history, with a log of events such as restarts, deployments and version changes. This essentially helps developers identify root causes of issues or security vulnerabilities efficiently, using either an HTTP API client or a command-line interface. </p><h3 id="key-features-of-helios-include-">Key features of Helios include:</h3><ul><li>Easily integrates with the DevOps philosophy</li><li>Vendor-agnostic, works well with any platform or network</li><li>Can run both single and multi-node instances</li><li>Does not require Apache Mesos as a prerequisite to run Helios</li><li>Does not depend on prescribed load balancers and routers</li></ul><h2 id="6-amazon-elastic-container-service-amazon-ecs-">6. <a href="https://aws.amazon.com/ecs/">Amazon Elastic Container Service (Amazon ECS)</a></h2><p>With Amazon ECS, organizations can easily deploy and run container clusters on Amazon’s Elastic Compute Cloud (EC2) instances. Amazon ECS offers a secure, reliable and highly scalable orchestration platform, making it appropriate for sensitive and mission-critical applications without wasting compute resources. Amazon ECS easily integrates with <a href="https://aws.amazon.com/fargate/">Amazon Fargate</a>, a serverless computing tool that lets developers specify resource requirements and eliminates the need for server provisioning. This lets organizations focus more on streamlining applications rather than managing infrastructure. 
It is also easy to cost-optimize your application using Fargate Spot tasks and EC2 Spot instances, cutting off up to 90% of your infrastructure provision fees.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/product-page-diagram_Amazon-ECS@2x.0d872eb6fb782ddc733a27d2bb9db795fed71185.png" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/product-page-diagram_Amazon-ECS@2x.0d872eb6fb782ddc733a27d2bb9db795fed71185.png 600w, https://appfleet.com/blog/content/images/size/w1000/2021/02/product-page-diagram_Amazon-ECS@2x.0d872eb6fb782ddc733a27d2bb9db795fed71185.png 1000w, https://appfleet.com/blog/content/images/size/w1600/2021/02/product-page-diagram_Amazon-ECS@2x.0d872eb6fb782ddc733a27d2bb9db795fed71185.png 1600w, https://appfleet.com/blog/content/images/2021/02/product-page-diagram_Amazon-ECS@2x.0d872eb6fb782ddc733a27d2bb9db795fed71185.png 2130w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: https://aws.amazon.com/</figcaption></figure><p>ECS allows you to use <strong>Network Access Control Lists (ACLs)</strong> and <strong>Amazon Virtual Private Clouds (VPCs)</strong> for resource isolation and security. Possibly one of the key features of ECS is that it is available in 69 availability zones and 22 regions globally, guaranteeing peace of mind regarding uptime, reliability, and low latency. 
</p><h3 id="key-features-of-aws-ecs-include-">Key features of AWS ECS include:</h3><ul><li>ECS supports Fargate, a serverless AWS offering that eliminates the need to manage servers</li><li>Includes Capacity Providers, which dynamically determine the compute resources required to run your application</li><li>Helps optimize costs using spot instances for non-persistent workloads</li><li>ECS creates Amazon VPCs for your containers, ensuring no sharing of resources between tenants</li><li>Container Registry makes applications compatible within multiple environments</li></ul><h2 id="7-google-kubernetes-engine-gke-">7. <a href="https://cloud.google.com/kubernetes-engine">Google Kubernetes Engine (GKE)</a></h2><p>GKE is a managed orchestration service that provides an easy-to-use environment to deploy, manage and scale Docker containers on the Google Cloud Platform. While doing so, the service engine lets you create agile and serverless applications without compromising security. With multiple release channels offering different node upgrade cadences, GKE makes it easier to streamline operations based on application needs. Through its enterprise-ready, pre-built deployment templates, GKE enables enhanced developer productivity across multiple layers of a DevOps workflow. 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/cluster-architecture.png" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/cluster-architecture.png 600w, https://appfleet.com/blog/content/images/2021/02/cluster-architecture.png 835w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: https://cloud.google.com/</figcaption></figure><p>For developers, the service engine helps streamline every stage of the SDLC using native CI/CD tooling accelerators, while Site Reliability Engineers (SREs) may utilize GKE for ease of infrastructure management by monitoring resource usage, clusters and networks. </p><h3 id="key-features-of-gke-include-">Key features of GKE include:</h3><ul><li>GKE offers rapid, regular and stable release channels, allowing developers to streamline operations</li><li>The platform sets up the baseline functionality and automates cluster management for ease of use</li><li>Integrates native Kubernetes tooling so organizations can develop applications faster without compromising security</li><li>Google Site Reliability Engineers offer support in the management of infrastructure</li><li>Google consistently improves the GKE platform with new features and enhancements, making it robust and reliable</li><li>Well-documented platform, making all its features easy to learn and use</li></ul><h2 id="8-azure-service-fabric">8. <a href="https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-overview">Azure Service Fabric</a></h2><p>Microsoft's Azure Service Fabric is a <strong>Platform-as-a-Service</strong> solution that lets developers focus on business logic and application development by making container deployments, packaging and management a lot easier. 
Service Fabric lets companies deploy and manage microservices across distributed machines, allowing the management of both <strong>stateful</strong> and <strong>stateless</strong> services. It also integrates seamlessly with CI/CD tools to help manage application life cycles while letting you create and manage clusters across different environments, including Linux and Windows Server, on-premises, in Azure or in other public clouds. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/service-fabric-overview.png" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/service-fabric-overview.png 600w, https://appfleet.com/blog/content/images/size/w1000/2021/02/service-fabric-overview.png 1000w, https://appfleet.com/blog/content/images/size/w1600/2021/02/service-fabric-overview.png 1600w, https://appfleet.com/blog/content/images/size/w2400/2021/02/service-fabric-overview.png 2400w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: https://docs.microsoft.com/</figcaption></figure><p>Service Fabric uses a .NET SDK to integrate with popular Windows development tools, such as PowerShell and Visual Studio. To integrate with Linux development solutions, such as Eclipse, it uses a Java SDK. 
Service Fabric is available across all Azure regions and is included in all Azure compliance certifications.</p><h3 id="key-features-of-azure-service-fabric-include-">Key features of Azure Service Fabric include:</h3><ul><li>Service Fabric allows management of containerized applications across both <strong>stateful</strong> and <strong>stateless</strong> services</li><li>Can be used for <em>lift &amp; shift</em> migration using guest executables for legacy applications</li><li>Enables a Serverless Compute experience, so organizations don’t have to worry about backend provisioning</li><li>The Azure platform is data-aware, improving workload performance while reducing latency</li><li>Makes applications resilient by running different tracks for different servers hosting different microservices</li></ul><p>Azure Service Fabric can be teamed up with CI/CD services such as Visual Studio Team Services (now Azure DevOps) to ensure a successful migration of existing apps to the cloud. This makes it easy to debug applications remotely and to monitor them seamlessly using the Operations Management Suite. </p><h2 id="9-amazon-elastic-kubernetes-service-eks-">9. <a href="https://aws.amazon.com/eks/?whats-new-cards.sort-by=item.additionalFields.postDateTime&amp;whats-new-cards.sort-order=desc&amp;eks-blogs.sort-by=item.additionalFields.createdDate&amp;eks-blogs.sort-order=desc">Amazon Elastic Kubernetes Service (EKS)</a></h2><p>Amazon EKS helps developers create, deploy and scale Kubernetes applications on-premises or in the AWS cloud. EKS automates tasks such as patching, updates and node provisioning, thereby helping organizations to ship reliable, secure and highly scalable clusters. While doing so, EKS takes away the tedium of manually configuring Kubernetes clusters, cutting down on the repetitive work needed to run your applications. 
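</p><p>The automation just described — detecting unhealthy control-plane nodes and restoring a cluster to its declared size — is, at its core, a reconciliation loop comparing desired state with actual state. A deliberately naive Python sketch of the idea (the function and node names here are hypothetical, not an AWS API):</p>

```python
def reconcile(desired_count, nodes):
    """Compute the actions a control loop would take to converge a node
    group: terminate unhealthy nodes, then launch replacements until the
    healthy count matches the desired count (scale-down omitted for brevity)."""
    actions = [f"terminate {name}" for name, healthy in nodes.items() if not healthy]
    healthy_count = sum(1 for healthy in nodes.values() if healthy)
    actions += ["launch replacement"] * max(0, desired_count - healthy_count)
    return actions

# one of three masters has failed its health checks
state = {"master-a": True, "master-b": False, "master-c": True}
print(reconcile(3, state))  # -> ['terminate master-b', 'launch replacement']
```

<p>Managed services like EKS run control loops of this kind on your behalf, which is exactly the repetitive work the paragraph above refers to. 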
</p><p>Since EKS runs upstream Kubernetes, you can use all existing Kubernetes plugins and tools with your application. The service automatically deploys Kubernetes with three master nodes across multiple availability zones for ultimate reliability and resilience. With Role-Based Access Control (RBAC) and Amazon’s Identity and Access Management (IAM) entities, you can easily manage security in your AWS clusters using Kubernetes tools such as <strong>kubectl</strong>. As one of its core features, EKS makes launching and managing Kubernetes clusters easy, requiring only a few simple steps.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/product-page-diagram_Amazon-EKS@2x.ddc48a43756bff3baead68406d3cac88b4151a7e.png" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/product-page-diagram_Amazon-EKS@2x.ddc48a43756bff3baead68406d3cac88b4151a7e.png 600w, https://appfleet.com/blog/content/images/size/w1000/2021/02/product-page-diagram_Amazon-EKS@2x.ddc48a43756bff3baead68406d3cac88b4151a7e.png 1000w, https://appfleet.com/blog/content/images/size/w1600/2021/02/product-page-diagram_Amazon-EKS@2x.ddc48a43756bff3baead68406d3cac88b4151a7e.png 1600w, https://appfleet.com/blog/content/images/2021/02/product-page-diagram_Amazon-EKS@2x.ddc48a43756bff3baead68406d3cac88b4151a7e.png 1678w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: https://aws.amazon.com/eks/</figcaption></figure><h3 id="key-features-of-aws-eks-include-">Key features of AWS EKS include:</h3><ul><li>EKS provides a flexible Kubernetes Control Plane available across all regions. 
This makes Kubernetes applications hosted on EKS highly available and scalable.</li><li>You can directly manage your applications from Kubernetes using AWS Controllers for Kubernetes</li><li>Extending the functionality of your Kubernetes cluster is simple thanks to EKS Add-ons</li><li>Easily scale, create, update and terminate nodes from your EKS cluster using a single command</li><li>Compatibility between EKS and Kubernetes clusters ensures a simple, code-free migration to the AWS cloud</li><li>EKS implements automatic patches and identifies non-functioning masters, ensuring application reliability</li></ul><p>Amazon EKS prevents single points of failure by running the Kubernetes cluster across multiple availability zones. This makes applications reliable and resilient by reducing the mean time to recovery (MTTR). Additionally, as a managed Kubernetes platform, Amazon’s EKS keeps your application optimized and scalable through a rich ecosystem of services that eases container management.</p><h2 id="10-docker-swarm">10. <a href="https://docs.docker.com/engine/swarm/">Docker Swarm</a></h2><p>Swarm is the native container orchestration platform for Docker applications. In Docker, a <em><strong>Swarm</strong></em> is a group of machines (physical or virtual) that work together to run Docker applications. A Swarm manager controls the activities of the swarm, and helps manage the interactions of containers deployed on different host machines (nodes). Docker Swarm fully leverages the benefits of containers, allowing highly portable and agile applications while offering redundancy to guarantee high availability for your applications. Swarm managers also assign workloads to the most appropriate hosts, ensuring proper load balancing of applications. 
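</p><p>The placement behaviour just described — each new task going to the most appropriate, here meaning least-loaded, host — can be sketched in Python. This mirrors the spirit of Swarm's default "spread" strategy only; the real scheduler also weighs resource reservations and placement constraints:</p>

```python
def spread(tasks, nodes):
    """Assign each task to the node currently running the fewest tasks,
    a simplified model of Swarm's default 'spread' placement strategy."""
    load = {node: 0 for node in nodes}
    placement = {}
    for task in tasks:
        target = min(load, key=load.get)  # least-loaded node wins; ties go to the first node
        placement[task] = target
        load[target] += 1
    return placement

print(spread(["web.1", "web.2", "web.3"], ["node-a", "node-b"]))
# -> {'web.1': 'node-a', 'web.2': 'node-b', 'web.3': 'node-a'}
```

<p>Because every placement updates the load table before the next task is scheduled, replicas end up evenly distributed across the nodes.</p><p>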
While doing so, the Swarm manager ensures proper scaling by adding and removing worker tasks to help maintain a cluster’s desired state.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/service-lifecycle.png" class="kg-image" alt="Top 10 Container Orchestration Tools" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/service-lifecycle.png 600w, https://appfleet.com/blog/content/images/2021/02/service-lifecycle.png 931w" sizes="(min-width: 720px) 720px"><figcaption>Image Source: https://docs.docker.com/engine/swarm/</figcaption></figure><h3 id="key-features-of-docker-swarm-include-">Key features of Docker Swarm include:</h3><ul><li>Manager nodes help with load balancing by assigning tasks to the most appropriate hosts</li><li>Docker Swarm uses redundancy to enable high service availability</li><li>Swarm containers are lightweight and portable</li><li>Tightly integrated into the Docker ecosystem, allowing easier management of containers</li><li>Does not require extra plugins for setup</li><li>Ensures high scalability by balancing loads and bringing up worker nodes when workload increases</li><li>Docker Swarm’s distributed environment allows for decentralized access and collaboration</li></ul><p>As Docker remains one of the most used container runtimes, Docker Swarm proves to be an efficient container orchestration tool. Swarm makes it easy to scale and update applications and to balance workloads. This makes it well suited for application deployment and management even when dealing with extensive clusters.</p><h2 id="conclusion">Conclusion</h2><p>As more cloud deployment technologies emerge, container orchestration tools will keep evolving. As with every technology, each option has its benefits and drawbacks. Managed service platforms, such as GKE, EKS and Mirantis, provide extensive functionality while keeping operational overhead low. 
Other offerings, like Kubernetes and Docker Swarm, should be evaluated for the trade-offs in performance, complexity and flexibility before adoption. </p><p>With that in mind, it is equally important to note that orchestration tools are cost and effort intensive. An efficient alternative to this is<a href="https://appfleet.com"> <strong>appfleet's global edge platform</strong></a> for deploying and hosting containerized applications. With appfleet, you do not have to spend time building a team to manage a mammoth clustering framework, learning a new technology, or digging through thousands of lines of documentation.</p>]]></content:encoded></item><item><title><![CDATA[appfleet is now production ready!]]></title><description><![CDATA[<p>First of all, what is appfleet? <a href="https://appfleet.com/">appfleet is an edge compute platform</a> that allows people to deploy their web applications globally. Instead of running your code in a single centralized location you can now run it everywhere, at the same time.</p><p>In simpler terms appfleet is a next-gen CDN, instead</p>]]></description><link>https://appfleet.com/blog/appfleet-is-now-production-ready/</link><guid isPermaLink="false">603a8d68884e6d0853b528fe</guid><category><![CDATA[Announcement]]></category><dc:creator><![CDATA[Dmitriy A.]]></dc:creator><pubDate>Mon, 01 Mar 2021 09:55:23 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2021/02/Youtube2560x1440@1x.png" medium="image"/><content:encoded><![CDATA[<img src="https://appfleet.com/blog/content/images/2021/02/Youtube2560x1440@1x.png" alt="appfleet is now production ready!"><p>First of all, what is appfleet? <a href="https://appfleet.com/">appfleet is an edge compute platform</a> that allows people to deploy their web applications globally.
Instead of running your code in a single centralized location you can now run it everywhere, at the same time.</p><p>In simpler terms appfleet is a next-gen CDN, instead of being limited to only serving static content closer to your users you can now do the same thing for your whole codebase. Run the whole thing where just your cache used to be.</p><p>Check out our video explainer too <a href="https://www.youtube.com/watch?v=7n617ZF-oT4">https://www.youtube.com/watch?v=7n617ZF-oT4</a> </p><p>This results in drastic performance improvement, lower latency, better uptime and an enormous amount of new use-cases and possibilities.</p><p>In part it's because we are not limiting our users to HTTP services. You have complete freedom to run any kind of service over any protocol you want, on any port. </p><p>Do you want to build your own global nameservers? Deploy a container running a DNS server over UDP port 53. It takes only a few clicks. A globally distributed database? Sure thing. Or something simple like image optimization on the edge? No problem. How about a monstrous container running a web service, redis, ssh, DNS, MySQL and an admin service all on different ports, all at the same time? Who are we to stop you, go ahead!</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/clusters-list.png" class="kg-image" alt="appfleet is now production ready!" srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/clusters-list.png 600w, https://appfleet.com/blog/content/images/size/w1000/2021/02/clusters-list.png 1000w, https://appfleet.com/blog/content/images/size/w1600/2021/02/clusters-list.png 1600w, https://appfleet.com/blog/content/images/size/w2400/2021/02/clusters-list.png 2400w" sizes="(min-width: 1200px) 1200px"><figcaption>The appfleet dashboard showing your deployed applications</figcaption></figure><p>We launched our closed beta quite a while ago. 
And since then we have worked with many developers and business owners to improve our platform and support the many exciting use-cases that people kept coming up with.</p><p>And while the system was technically ready to accept real customers months ago, we decided it was best to stay in beta and ensure the stability of the platform.</p><p>Since then we have polished our user-experience, redesigned many parts of our UI multiple times and made sure our backend is ready for whatever people throw at it.</p><p>Do you know the first thing people run when given a container and asked to test it? A fork bomb. Of course, we were ready for that, and while the specific instance became unavailable, it had no impact on our system or other clients. All thanks to the multiple layers of isolation we have built. And nowadays the instance won't even go down, so fork-bomb away.</p><p>Each container runs in a virtualized box of its own, with its own resources, filesystem and security in place. Security and stability were top priorities for us and everything we built had that in mind.</p><p>Whenever we were working on a new feature we kept asking ourselves "What if?". "What if this system goes down?", "What if the user does something unexpected?"</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2021/02/new-cluster-regions.png" class="kg-image" alt="appfleet is now production ready!"
srcset="https://appfleet.com/blog/content/images/size/w600/2021/02/new-cluster-regions.png 600w, https://appfleet.com/blog/content/images/size/w1000/2021/02/new-cluster-regions.png 1000w, https://appfleet.com/blog/content/images/size/w1600/2021/02/new-cluster-regions.png 1600w, https://appfleet.com/blog/content/images/size/w2400/2021/02/new-cluster-regions.png 2400w" sizes="(min-width: 1200px) 1200px"><figcaption>appfleet cluster creation process and region selection</figcaption></figure><p>appfleet is based on multiple services and modules interacting with each other. We made sure that even if something breaks, like our whole API goes down, or our DB, or anything else really, the already running applications would not feel a thing. Everything is built to run in standalone mode and, if the worst happens, to wait until things get better. </p><p>We also drew from our experience building and maintaining <a href="https://www.jsdelivr.com/">jsDelivr</a>, a free CDN for open source projects that currently serves 100 billion requests every month and more than 3 Petabytes of traffic! It is used by millions of websites all over the world and all of them trust us to ensure it never goes down. We integrated multiple levels of failover with multiple checks on every step to ensure the system can automatically handle different kinds of issues and fix itself.</p><p>The appfleet platform was built to be as simple as possible and make edge compute accessible to everyone, from open source projects, to solo developers and even big enterprises that need something that works without relying on an army of DevOps engineers. This is one of the reasons we decided to build on top of containers: it allows our users to easily migrate to or from appfleet and to even run legacy applications. </p><p>Today, March 1st 2021, is the beginning of a long and exciting journey to make the web faster and accessible to all!
</p><p><a href="https://dashboard.appfleet.com/register">Register now and get $10 of free credits</a> to use as you see fit. No need to enter your credit card until you are ready for production workloads.</p><p>And if you are a non-profit or an open source project, let us know to get sponsored with free services. We even <a href="https://github.com/jsdelivr/jsdelivr/issues/18154">offer free design services</a>.</p><p>Join us and make sure you send us your feedback and ideas! Even better, email the founder directly at d@appfleet.com</p>]]></content:encoded></item><item><title><![CDATA[Best Practices and Considerations for Multi-Tenant SaaS Application Using AWS EKS]]></title><description><![CDATA[A multi-tenant SaaS application on EKS provides you with multiple options for compute, network, and storage isolation, while keeping your workloads secure. ]]></description><link>https://appfleet.com/blog/best-practices-and-considerations-for-multi-tenant-saas-application-using-kubernetes-and-aws-ecs/</link><guid isPermaLink="false">5f7f76b26dc1db2ec7d79b13</guid><category><![CDATA[AWS]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Sudip Sengupta]]></dc:creator><pubDate>Wed, 04 Nov 2020 20:41:06 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/11/95-Best-Practices-and-Considerations-for-Multi-Tenant-SaaS-Application-Using-AWS-EKS.png" medium="image"/><content:encoded><![CDATA[<img src="https://appfleet.com/blog/content/images/2020/11/95-Best-Practices-and-Considerations-for-Multi-Tenant-SaaS-Application-Using-AWS-EKS.png" alt="Best Practices and Considerations for Multi-Tenant SaaS Application Using AWS EKS"><p>Today, most organizations, large or small, are <a href="https://appfleet.com/">hosting</a> their SaaS application on the cloud using a multi-tenant architecture.
There are multiple reasons for this, but the simplest and most straightforward are cost and scalability. In a multi-tenant architecture, one instance of a software application is shared by multiple tenants (clients). These tenants share resources such as databases, web servers and computing capacity, which makes a multi-tenant architecture cost-efficient while remaining scalable.</p><p><a href="https://appfleet.com/blog/amazon-elastic-container-service-for-kubernetes-eks/"><strong>Amazon EKS</strong> (Elastic Kubernetes Service)</a> is one of the most popular container orchestration platforms offered by AWS, and it is widely used to host multi-tenant SaaS applications. However, while adopting a multi-tenancy framework, it is also important to note the challenges that might arise due to cluster resource sharing.</p><p>Let us delve into the best practices and considerations for multi-tenant SaaS applications using Amazon EKS in this article.</p><h2 id="practice-1-separate-namespaces-for-each-tenant-compute-isolation-">Practice 1: Separate Namespaces for Each Tenant (Compute Isolation)</h2><p>Having separate namespaces remains an essential consideration while deploying a multi-tenant SaaS application, as it divides a single cluster's resources across multiple clients. Namespaces in a multi-tenant architecture are the primary unit of isolation in <a href="https://appfleet.com/blog/local-kubernetes-testing-with-kind/">Kubernetes</a>. As one of its core features, Amazon EKS helps you create a separate namespace for each tenant running the SaaS application. This isolates each tenant and its environment within the shared Kubernetes cluster, enforcing data privacy without having to create a different cluster per tenant.
This eventually brings a substantial reduction in compute and AWS hosting costs.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://appfleet.com/blog/content/images/2020/10/image-9.png" class="kg-image" alt="Best Practices and Considerations for Multi-Tenant SaaS Application Using AWS EKS" srcset="https://appfleet.com/blog/content/images/size/w600/2020/10/image-9.png 600w, https://appfleet.com/blog/content/images/2020/10/image-9.png 618w"><figcaption>Source - https://aws.amazon.com/</figcaption></figure><h2 id="practice-2-setting-resourcequota-on-resource-consumption">Practice 2: Setting <code>ResourceQuota</code> on Resource Consumption</h2><p>A multi-tenant SaaS application serves several tenants, each of them accessing the same Kubernetes cluster resources concurrently. Often a particular tenant may consume resources disproportionately, exhausting the cluster's resources on its own and leaving none for the other tenants. To avoid such capacity starvation, the <strong>ResourceQuota</strong> object comes to the rescue: it lets you set limits on the resources the containers (hosting the SaaS application) can use.</p><p>Here is an example:</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-setting
  namespace: tenant1
spec:
  hard:
    requests.cpu: &quot;2&quot;
    requests.memory: &quot;1Gi&quot;
    limits.cpu: &quot;4&quot;
    limits.memory: &quot;2Gi&quot;
</code></pre>
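<p>Beyond CPU and memory, a <code>ResourceQuota</code> can also cap the number of API objects a tenant may create. The sketch below is illustrative (the quota name and the specific counts are examples, not from the original configuration) and reuses the same <code>tenant1</code> namespace:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-quota
  namespace: tenant1
spec:
  hard:
    pods: "10"
    services: "5"
    persistentvolumeclaims: "4"
</code></pre>
<p>Applying this alongside the compute quota bounds both resource usage and object sprawl per tenant.</p>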
<!--kg-card-end: markdown--><p>To put the above in context: once this configuration is applied, the combined CPU requests of all pods in the <code>tenant1</code> namespace cannot exceed 2 CPUs, and their combined CPU limits cannot exceed 4 CPUs. Likewise, total memory requests are capped at 1Gi and total memory limits at 2Gi. With this you essentially limit resource usage while ensuring that a particular tenant does not end up consuming all of the cluster's resources.</p><h2 id="practice-3-network-isolation-using-network-policies">Practice 3: Network Isolation using Network Policies</h2><p>By default, Kubernetes allows pods in different namespaces to talk to each other. If your SaaS application runs in a multi-tenant architecture, you will want to prevent that in order to isolate the namespaces from one another. To do so, you can apply tenant-isolation network policies and network segmentation on Amazon EKS. As a best practice, you can install <strong>Calico</strong> on Amazon EKS to enforce network policies on pods as shown below. </p><p>For reference, the following policy (named <code>same-namespace</code>) allows traffic only from within the same namespace while restricting all other traffic. The pods with label <code>app: api</code> in the <code>tenant-a</code> namespace will only receive traffic from <code>tenant-a</code>. Communication between <code>tenant-a</code> and other tenants, in either direction, is denied here to achieve network isolation.</p><pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace
  namespace: tenant-a
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          nsname: tenant-a
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          nsname: tenant-a</code></pre><h2 id="practice-4-storage-isolation-using-persistentvolume-and-persistentvolumeclaim">Practice 4: Storage Isolation using <code>PersistentVolume</code> and <code>PersistentVolumeClaim</code></h2><p>As opposed to a single-tenant framework, a multi-tenant framework requires a different approach to managing application storage. On Amazon EKS, you can assign and manage storage for different tenants seamlessly using a <code>PersistentVolume</code> (PV), while a storage request made by a tenant is referred to as a <code>PersistentVolumeClaim</code> (PVC). Since a PVC is a namespaced resource, you can easily isolate storage between tenants. </p><p>In the example below for <code>tenant1</code>, we have configured a PVC with the <code>ReadWriteOnce</code> access mode and <code>2Gi</code> of storage.</p><pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-storage
  namespace: tenant1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
</code></pre><h2 id="practice-5-amazon-iam-integration-with-amazon-eks-for-setting-rbac">Practice 5: Amazon IAM Integration with Amazon EKS for Setting RBAC</h2><p>Just like any other AWS service, EKS integrates with AWS IAM to administer Role-Based Access Control (RBAC) on a Kubernetes cluster. Through the AWS IAM authenticator, you can authenticate any tenant namespace to a Kubernetes cluster. To use IAM for multi-tenancy, you need to add the tenant's (user's) IAM role to the <code>aws-auth</code> ConfigMap to authenticate the tenant's namespace. Once AWS IAM authentication succeeds, the <strong>Role </strong>(namespaced resource) and/or <strong>ClusterRole </strong>(non-namespaced resource) defined for that tenant's namespace is enforced on the cluster. By provisioning <strong>ClusterRole</strong> and <strong>Role</strong> policies on the cluster, you can adopt a hardened security posture for a multi-tenant SaaS application.</p><h2 id="practice-6-manage-tenant-placement-on-kubernetes-nodes">Practice 6: Manage Tenant Placement on Kubernetes nodes</h2><p>Similar to upstream Kubernetes, Amazon EKS provides <strong>Node Affinity</strong> and <strong>Pod Affinity</strong> for managing tenant placement on Kubernetes nodes, alongside taints and tolerations. Using Node Affinity, you can decide on which node you want to run a particular tenant's pod. Using Pod Affinity, you can decide whether <code>tenant 1</code> and <code>tenant 2</code> should be on the same node or on different nodes. </p><p>With the command below, no pod will be scheduled on <code>node1</code> unless it carries a matching toleration for the key <code>client</code> with value <code>tenant1</code>. With this method, you can dedicate a node to a particular tenant's pods.</p><pre><code>kubectl taint nodes node1 client=tenant1:NoSchedule
</code></pre><p>This is what a matching pod configuration looks like:</p><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    env: prod
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "client"
    operator: "Equal"
    value: "tenant1"
    effect: "NoSchedule"
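# Illustrative addition (not part of the original example): the toleration
# above only allows this pod onto the tainted node; to actively pin it
# there, you could also label the node (kubectl label nodes node1
# client=tenant1) and add under spec:
#   nodeSelector:
#     client: tenant1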
</code></pre><h2 id="conclusion">Conclusion</h2><p>That was all about the best practices and considerations for running a multi-tenant SaaS application on Amazon EKS. A multi-tenant SaaS application on EKS provides you with multiple options for compute, network, and storage isolation, while keeping your workloads secure. Go ahead and try out these practices and let us know your experience. </p><p>Alternatively, if you have any additional best practices to share, do let us know. </p>]]></content:encoded></item><item><title><![CDATA[Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins]]></title><description><![CDATA[In this guide, we will use Ansible as a deployment tool in a Continuous Integration/Continuous Deployment process driven by a Jenkins job.]]></description><link>https://appfleet.com/blog/integrating-ansible-and-docker-in-ci-cd-process-using-jenkins-job/</link><guid isPermaLink="false">5e5368c239f27869d61f3200</guid><category><![CDATA[Ansible]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[CI/CD]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Gaurav Sadawarte]]></dc:creator><pubDate>Mon, 21 Sep 2020 12:01:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/08/94-Integrating-Ansible-and-Docker-for-a-CI-CD-Pipeline-Using-Jenkins.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://appfleet.com/blog/content/images/2020/08/94-Integrating-Ansible-and-Docker-for-a-CI-CD-Pipeline-Using-Jenkins.png" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"><p>In this guide, we will use Ansible as a deployment tool in a Continuous Integration/Continuous Deployment process driven by a Jenkins job.</p>
<p>In the world of CI/CD, Jenkins is a popular tool for provisioning development and production environments as well as for deploying applications through pipeline flows. Still, it can sometimes get overwhelming to maintain an application's state, and script reusability becomes harder as the project grows.</p>
<p>To overcome this limitation, Ansible plays an integral part as a shell-script executor that enables Jenkins to drive the workflow of the process.</p>
<p>Let us begin the guide by installing Ansible on our Control node.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="installandconfigureansible">Install and Configure Ansible</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><strong>Installing Ansible:</strong><br>
Here we are using CentOS 8 as our Ansible Control Node. To install Ansible, we are going to use <code>python2-pip</code>, and to do so, first, we have to install <code>python2</code>. Use the below-mentioned commands to do so:</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code># sudo yum update
# sudo yum install python2
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>After Python is installed on the system, use the <code>pip2</code> command to install Ansible on the Control Node:</p>
<pre><code># sudo pip2 install ansible
# sudo pip2 install docker
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>It might take a minute or two to complete the installation, so sit tight. Once the installation is complete, verify:</p>
<pre><code class="language-bash"># ansible --version
 
ansible 2.9.4
  config file = None
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.16 (default, Nov 17 2019, 00:07:27) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>In the above output, we notice that the <code>config file</code> path is missing; we will create and configure it later. For now, let’s move to the next section.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><strong>Configuring Ansible Control Node User:</strong><br>
The first thing we are going to do is create a user named <code>ansadmin</code>, as it is considered best practice. On CentOS, we create the user with the <code>useradd</code> command, which will add a new user to our system:</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code># useradd ansadmin
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Now, use the <code>passwd</code> command to update the <code>ansadmin</code> user’s password. Make sure that you use a strong password.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-bash"># passwd ansadmin

Changing password for user ansadmin.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Copy the password for user <code>ansadmin</code> and save it somewhere safe.</p>
<p>Once we have created the user, it's time to grant it passwordless sudo access, so that it is not asked for a password when running commands as <code>root</code>. To do so, follow the below-mentioned steps:</p>
<pre><code># nano /etc/sudoers
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Go to the end of the file and paste the below-mentioned line as it is:</p>
<pre><code>...
ansadmin ALL=(ALL)       NOPASSWD: ALL
...
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Before moving forward, we have one last thing to do. By default, SSH password authentication is disabled in our instance. To enable it, follow the below-mentioned steps:</p>
<pre><code># nano /etc/ssh/sshd_config
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Find <code>PasswordAuthentication</code>, uncomment it and replace <code>no</code> with <code>yes</code>, as shown below:</p>
<pre><code>...
PasswordAuthentication yes
...
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>You will see why we are doing this in the next few steps. To reflect changes, reload the <code>ssh</code> service:</p>
<pre><code># service sshd reload
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Now, log in as the <code>ansadmin</code> user on your Control Node and generate an <code>ssh</code> key, which we will use to connect to our remote (managed) host. To generate the private and public key pair, follow the below-mentioned commands:</p>
<pre><code># su - ansadmin
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Use <code>ssh-keygen</code> command to generate key:</p>
<pre><code class="language-bash"># ssh-keygen

Enter file in which to save the key (/home/ansadmin/.ssh/id_rsa): ansible-CN   
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in ansible-CN.
Your public key has been saved in ansible-CN.pub.
The key fingerprint is:
SHA256:6G0xzIrIsmsBwCakACI8CVr8AOuRR8v5F1p2+CsB6EY ansadmin@ansible-host
The key's randomart image is:
+---[RSA 3072]----+
|&amp;+o.             |
|OO* +   .        |
|Bo.E . = .       |
|o = o =++        |
|.. o o.oS.       |
| o.. o.o.o.      |
|. + . o.o.       |
| +     ..        |
|+.               |
+----[SHA256]-----+
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Usually, keys are generated in the <code>.ssh/</code> directory. In our case, you can find keys at <code>/home/ansadmin/.ssh/</code>. Now let us configure our Managed Host for Ansible.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><strong>Configuring Ansible Managed Host User:</strong><br>
First, we will create a user on our managed host, so log in to your host and create a user with the same name and password.</p>
<p>As our managed host is an Ubuntu machine, here we have to use the <code>adduser</code> command. Please make sure that the password for the user <code>ansadmin</code> is the same on the Control Node and the Managed Host.</p>
<pre><code># adduser ansadmin
# su - ansadmin
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>It is also worth cross-checking that password authentication is enabled on the Managed Host, as we need to copy the SSH public key from the Control Node to it.</p>
<p>Switch to the Control Node machine; to copy the public key to our Managed Host machine, we will use the <code>ssh-copy-id</code> command:</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code>$ su - ansadmin
$ ssh-copy-id -i .ssh/ansible-CN.pub ansadmin@managed-host-ip-here 
</code></pre>
<p>For the first time, it will ask for the password. Enter the password for <code>ansadmin</code>, and you are done. Now, if you wish, you can disable Password Authentication on both machines.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><strong>Setting Ansible Inventory:</strong><br>
Ansible allows us to manage multiple nodes or hosts at the same time. The default location for the inventory resides in <code>/etc/ansible/hosts</code>. In this file, we can define groups and sub-groups.</p>
<p>If you remember, earlier the hosts file was not created automatically by our Ansible installation. So let's create one:</p>
<pre><code># cd /etc/ansible
# touch hosts &amp;&amp; nano hosts
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Add the following lines to your hosts file and save it:</p>
<pre><code>[docker_group]
docker_host ansible_host=your-managed-host-ip ansible_user=ansadmin ansible_ssh_private_key_file=/home/ansadmin/.ssh/ansible-CN ansible_python_interpreter=/usr/bin/python3
ansible_CN ansible_connection=local
</code></pre>
<p>Make sure that you replace <code>your-managed-host-ip</code> with your host IP address.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Let's break down the basic INI format:</p>
<ul>
<li><code>docker_group</code> - Heading in brackets is your designated group name.</li>
<li><code>docker_host</code> &amp; <code>ansible_CN</code> - The first hostname, <code>docker_host</code>, points to our Managed Host, while the second, <code>ansible_CN</code>, points to our localhost for use in ad-hoc commands and playbooks.</li>
<li><code>ansible_host</code> - Here, you need to specify the IP address of our Managed Host.</li>
<li><code>ansible_user</code> - The user Ansible connects as on the remote host (<code>ansadmin</code> here).</li>
<li><code>ansible_ssh_private_key_file</code> - Add the location of your private key.</li>
<li><code>ansible_python_interpreter</code> - You can specify which Python version you want to use; by default, it will be Python2.</li>
<li><code>ansible_connection</code> - This variable tells Ansible that we are connecting to the local machine, which also avoids unnecessary SSH errors.</li>
</ul>
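<p>Earlier, <code>ansible --version</code> reported <code>config file = None</code>. Optionally, you can create a minimal <code>/etc/ansible/ansible.cfg</code> to make these defaults explicit; the values below are a sketch, so adjust them to your setup:</p>
<pre><code>[defaults]
inventory = /etc/ansible/hosts
host_key_checking = False
</code></pre>
<p>Disabling <code>host_key_checking</code> avoids interactive host-key prompts on first connection; leave it enabled in security-sensitive environments.</p>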
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>It is time to test our Ansible inventory, which can be done through the following command. Here we are going to use the simple Ansible <em><strong>ping</strong></em> module:</p>
<pre><code># ansible all -m ping
ansible_CN | SUCCESS =&gt; {
    &quot;ansible_facts&quot;: {
        &quot;discovered_interpreter_python&quot;: &quot;/usr/libexec/platform-python&quot;
    }, 
    &quot;changed&quot;: false, 
    &quot;ping&quot;: &quot;pong&quot;
}
docker_host | SUCCESS =&gt; {
    &quot;ansible_facts&quot;: {
        &quot;discovered_interpreter_python&quot;: &quot;/usr/bin/python3&quot;
    }, 
    &quot;changed&quot;: false, 
    &quot;ping&quot;: &quot;pong&quot;
}
</code></pre>
<p>It looks like the Ansible system can now communicate with our Managed Host as well as with the localhost.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="installdocker">Install Docker:</h2>
<p>We need Docker-ready systems to manage our process; for this, we have to install Docker on both machines. So follow the below-mentioned steps:</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><strong>For CentOS (Control Node):</strong><br>
Run the following command on your Control Node:</p>
<pre><code># sudo yum install -y yum-utils device-mapper-persistent-data lvm2
 
# sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
 
# sudo yum install docker-ce docker-ce-cli containerd.io
 
# sudo systemctl enable --now docker
</code></pre>
<p>In case you encounter the below-mentioned error during installation:</p>
<pre><code>Error: 
 Problem: package docker-ce-3:19.03.5-3.el7.x86_64 requires containerd.io &gt;= 1.2.2-3, but none of the providers can be installed
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>In that case, run the following command instead:</p>
<pre><code># sudo yum install docker-ce docker-ce-cli containerd.io --nobest
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p><strong>For Ubuntu OS (Managed Host):</strong><br>
Run the following commands on your Managed Host, which is an Ubuntu-based machine:</p>
<pre><code>$ sudo apt-get remove docker docker-engine docker.io containerd runc
 
$ sudo apt-get update &amp;&amp; sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

$ sudo apt-get update &amp;&amp; sudo apt-get install docker-ce docker-ce-cli containerd.io
</code></pre>
<p>That’s it for this section. Next, we are going to cover how to integrate Ansible with Jenkins.</p>
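<p>To preview where this is heading: once Jenkins uploads an artifact, a small playbook can run it as a container on the managed host. The sketch below is illustrative (the playbook name, image, container name and port are hypothetical, and it assumes the Docker SDK for Python is available on the managed host); it uses Ansible's <code>docker_container</code> module:</p>
<pre><code>---
# deploy.yml - illustrative deployment playbook
- hosts: docker_group
  become: yes
  tasks:
    - name: Run the application container
      docker_container:
        name: demo-app
        image: demo/app:latest
        state: started
        restart_policy: always
        ports:
          - "8080:8080"
</code></pre>
<p>Running <code>ansible-playbook deploy.yml</code> from the Control Node would (re)create the container on every host in <code>docker_group</code>.</p>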
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="integratingansiblewithjenkins">Integrating Ansible with Jenkins:</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>In this section, we will integrate Ansible with Jenkins. Fire up your Jenkins, go to <code>Dashboard &gt; Manage Jenkins &gt; Manage Plugins &gt; Available</code> and then search for <em>Publish Over SSH</em> as shown in the image below:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/02/screenshot-157.245.243.72_8080-2020.01.31-14_31_23.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Now, go to <em>Configure System</em> and find <em>Publish over SSH</em>; under this section, go to <em>SSH Servers</em> and click on the <em>Add</em> button. Here we are going to add our Docker Server as well as Ansible Server, as shown in the image:</p>
<p>SSH server setting for Docker:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/02/screenshot-157.245.243.72_8080-2020.01.31-14_56_39.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>SSH server setting for Ansible:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/02/screenshot-157.245.243.72_8080-2020.01.31-14_58_00.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>In the <em>Hostname</em> field, enter the IP address or domain name of your Docker and Ansible servers. Before saving the configuration, test the connection by clicking the <code>Test Configuration</code> button, as shown in the image below:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/02/screenshot-157.245.243.72_8080-2020.01.31-15_01_47.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><h2 id="createjenkinsjob">Create Jenkins Job</h2>
<p>The next step is to create a Jenkins job. The sole purpose of this job is to build, test, and upload the artifact to our Ansible Server. Here we are going to create the job as a Maven Project, as shown in the image below:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/02/screenshot-157.245.243.72_8080-2020.01.31-21_49_53.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Next, on the job settings page, go to the <em>Source Code Management</em> section and add your Maven project repo URL, as shown in the image below:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/02/screenshot-157.245.243.72_8080-2020.01.31-22_03_50.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Find the <em>Build</em> section, and in <em>Root POM</em> field enter your <code>pom.xml</code> file name. Additionally in the <em>Goals and options</em> field enter <code>clean install package</code>:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/02/screenshot-157.245.243.72_8080-2020.01.31-22_06_08.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>After a successful build, the goal is to send the <em>war</em> file to a dedicated directory on your Ansible server. We will assign ownership of that directory to the <code>ansadmin</code> user so that Jenkins has permission to write to it.</p>
<p>Right now, we don't have such a directory, so let us create one. Follow the below-mentioned steps:</p>
<pre><code class="language-bash"># sudo su
# mkdir /opt/docker
# chown ansadmin:ansadmin /opt/docker -R
# ls -al /opt/docker/

total 0
drwxr-xr-x. 2 ansadmin ansadmin  6 Jan 31 16:57 .
drwxr-xr-x. 4 root     root     38 Jan 31 17:10 ..
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Directory <code>/opt/docker</code> will be used as our workspace, where Jenkins will upload the artifacts to Ansible Server.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Now, go to the <em>Post-build Actions</em> section and from the drop-down menu, select <em>Send build artifacts over SSH</em>, as shown in the image below:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-width-wide"><img src="https://appfleet.com/blog/content/images/2020/03/screenshot-157.245.243.72_8080-2020.02.03-16_18_07.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Make sure that in the <em>Remote Directory</em> field you enter the path in the pattern <code>//opt//docker</code>, as the field doesn’t support special characters. Apart from this, for now, we are going to leave the <em>Exec Command</em> field empty so that we can test whether our existing configuration works.</p>
<p>Now <em>Build</em> the project, and you will see the following output in your Jenkins’s console output:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-width-wide"><img src="https://appfleet.com/blog/content/images/2020/03/screenshot-157.245.243.72_8080-2020.02.03-16_31_42.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Go to your Ansible Server terminal and see if the artifact was sent with the right user privileges:</p>
<pre><code class="language-bash"># ls -al /opt/docker/

total 4
drwxr-xr-x. 2 ansadmin ansadmin   24 Feb  3 10:54 .
drwxr-xr-x. 4 root     root       38 Jan 31 17:10 ..
-rw-rw-r--. 1 ansadmin ansadmin 2531 Feb  3 10:54 webapp.war
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>It looks like our <code>webapp.war</code> file was transferred successfully. In the following step, we will create an Ansible Playbook and Dockerfile.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="creatingdockerfileandansibleplaybook">Creating Dockerfile and Ansible Playbook:</h2>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>To create a Docker image with the <code>webapp.war</code> file, we will first create a <code>Dockerfile</code>. Follow the below-mentioned steps:</p>
<p>First, log in to your Ansible Server, go to the directory <code>/opt/docker</code>, and create a file named <code>Dockerfile</code>:</p>
<pre><code class="language-bash"># cd /opt/docker/
# touch Dockerfile
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Now open the Dockerfile in your preferred editor, and copy the below-mentioned lines and save it:</p>
<pre><code class="language-bash">FROM tomcat:8.5.50-jdk8-openjdk
MAINTAINER Your-Name-Here
COPY ./webapp.war /usr/local/tomcat/webapps
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>These instructions pull a Tomcat image with the tag <code>8.5.50-jdk8-openjdk</code> and copy the <code>webapp.war</code> file to Tomcat's default webapp directory, <code>/usr/local/tomcat/webapps</code>.</p>
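<p>As a side note, the <code>MAINTAINER</code> instruction has been deprecated in favor of <code>LABEL</code> since Docker 1.13, so an equivalent, slightly more modern Dockerfile might look like the sketch below (the maintainer value is a placeholder):</p>
<pre><code class="language-bash">FROM tomcat:8.5.50-jdk8-openjdk
LABEL maintainer="Your-Name-Here"
# Copy the artifact into Tomcat's webapp directory under an explicit name
COPY ./webapp.war /usr/local/tomcat/webapps/webapp.war
</code></pre>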
<p>With the help of this Dockerfile, we will create a Docker container. So let us create the Ansible Playbook, which will enable us to automate the Docker image build process and later run the Docker container out of it.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>We are creating an Ansible Playbook that does two tasks for us:</p>
<ol>
<li>Pull the Tomcat base image and build a Docker image using the <code>webapp.war</code> file.</li>
<li>Run the built image on the desired host.</li>
</ol>
<p>For this, we are going to create a new YAML format file for your Ansible Playbook:</p>
<pre><code class="language-bash"># nano simple-ansible-playbook.yaml
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Now copy the below-mentioned lines into your <code>simple-ansible-playbook.yaml</code> file:</p>
<pre><code class="language-yaml">---
#Simple Ansible Playbook to build and run a Docker container
 
- name: Playbook to build and run Docker
  hosts: all
  become: true
  gather_facts: false
 
  tasks:
    - name: Build a Docker image using webapp.war file
      docker_image:
        name: simple-docker-image
        build:
          path: /opt/docker
          pull: false
        source: build
 
    - name: Run Docker container using simple-docker-image
      docker_container:
        name: simple-docker-container
        image: simple-docker-image:latest
        state: started
        recreate: yes
        detach: true
        ports:
          - &quot;8888:8080&quot;
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>You can get more help here: <a href="https://docs.ansible.com/ansible/latest/modules/docker_image_module.html">docker_image</a> and <a href="https://docs.ansible.com/ansible/latest/modules/docker_container_module.html">docker_container</a>.  Now, as our Playbook is created, we can run a test to see if it works as planned:</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-bash"># cd /opt/docker
# ansible-playbook simple-ansible-playbook.yaml --limit ansible_CN
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Here we have used the <code>--limit</code> flag, which means the playbook will only run on our Ansible Server (Control Node). You should see output similar to the following in your terminal window:</p>
<pre><code class="language-bash">PLAY [Playbook to build and run Docker] 
***************************************************************************
 
TASK [Build a Docker image using webapp.war file] 
***************************************************************************
changed: [ansible_CN]
 
TASK [Run Docker container using simple-docker-image]
***************************************************************************
changed: [ansible_CN]
 
PLAY RECAP 
***************************************************************************
ansible_CN                 : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
</code></pre>
<p>It looks like the Playbook ran successfully and no errors were detected, so now we can move to Jenkins to complete our CI/CD process using Ansible.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="runansibleplaybookusingjenkins">Run Ansible Playbook using Jenkins</h2>
<p>In this step, we will execute our Ansible Playbook (i.e., the <code>simple-ansible-playbook.yaml</code> file). To do so, let us go back to the <em>Project Configuration</em> page in Jenkins and find <strong>Post-build Actions</strong>.</p>
<p>In this section, copy the below-mentioned command in the <em>Exec command</em> field:</p>
<pre><code class="language-bash">sudo ansible-playbook --limit ansible_CN /opt/docker/simple-ansible-playbook.yaml;
</code></pre>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-width-wide"><img src="https://appfleet.com/blog/content/images/2020/03/screenshot-157.245.243.72_8080-2020.02.03-18_43_06.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Now, let us try to build the project and see the Jenkins Job's console output:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-width-wide"><img src="https://appfleet.com/blog/content/images/2020/03/screenshot-157.245.243.72_8080-2020.02.03-18_48_46.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>In the output, you can see that our Ansible playbook ran successfully. Let us verify on the Ansible Server that the image was created and the container is running:</p>
<p>For Docker Image list:</p>
<pre><code class="language-bash"># docker images
 
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
simple-docker-image   latest              d47875d99095        32 seconds ago      507MB
tomcat                latest              5692d26ea179        15 hours ago        507MB
</code></pre>
<p>For Docker Container list:</p>
<pre><code class="language-bash"># docker ps

CONTAINER ID        IMAGE                        COMMAND             CREATED             STATUS              PORTS                    NAMES
5a824d0a43d5        simple-docker-image:latest   &quot;catalina.sh run&quot;   15 seconds ago      Up 14 seconds       0.0.0.0:8888-&gt;8080/tcp   simple-docker-container
</code></pre>
<p>It looks like Jenkins was able to run the Ansible Playbook successfully. Next, we are going to push the Docker image to Docker Hub.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="pushingdockerimagetodockerhubusingansible">Pushing Docker Image to Docker Hub Using Ansible</h2>
<p>We are going to use a public Docker Hub repository for this guide; if you are working on a live project, you should consider using a private Docker Hub registry.</p>
<p>For this step, you have to create a Docker Hub account if you don’t already have one.</p>
<p>Our end goal for this step is to publish the Docker Image to Docker Hub using Ansible Playbook. So go to your Ansible Control Node and follow the below-mentioned steps:</p>
<pre><code class="language-bash"># docker login
 
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: your-docker-hub-user
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
 
Login Succeeded
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Make sure that you enter the right username and password.</p>
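<p>If you would rather not log in manually on the Control Node, the login step can also live in the playbook itself. The sketch below uses Ansible's <code>docker_login</code> module; the username is a placeholder, and the <code>dockerhub_password</code> variable is an assumption that should be supplied via Ansible Vault rather than plain text:</p>
<pre><code class="language-yaml">- name: Log in to Docker Hub before pushing
  docker_login:
    username: your-docker-hub-user
    password: "{{ dockerhub_password }}"
</code></pre>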
<p>Now it’s time to create a new Ansible Playbook which will build and push the Docker image to your Docker Hub account. Note that this image will be publicly available, so be cautious.</p>
<pre><code class="language-bash"># nano build-push.yaml
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Create a new Ansible Playbook, which will build a Docker image and push it to our Docker Hub account:</p>
<pre><code class="language-yaml">---
#Simple Ansible Playbook to build and push Docker image to Registry
 
- name: Playbook to build and run Docker
  hosts: ansible_CN
  become: true
  gather_facts: false
 
  tasks:
    - name: Delete existing Docker images from the Control Node
      shell: docker rmi $(docker images -q) -f 
      ignore_errors: yes
 
    - name: Push Docker image to Registry
      docker_image:
        name: simple-docker-image
        build:
          path: /opt/docker
          pull: true
        state: present
        tag: &quot;latest&quot;
        force_tag: yes
        repository: gauravsadawarte/simple-docker-image:latest
        push: yes
        source: build
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Let us run the playbook now and see what we get:</p>
<pre><code class="language-bash"># ansible-playbook --limit ansible_CN build-push.yaml
 
PLAY [Playbook to build and run Docker] 
*****************************************************************************************
 
TASK [Push Docker image to Registry] 
*****************************************************************************************
changed: [ansible_CN]
 
PLAY RECAP 
*****************************************************************************************
ansible_CN                 : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Go to your Docker Hub account and see if the image was pushed successfully, as shown in the image below:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/03/screenshot-hub.docker.com-2020.02.07-14_10_48.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Next, let us modify our <code>simple-ansible-playbook.yaml</code> playbook, which we created earlier, as from here on, we are going to pull the Docker image from Docker Hub Account and create a container out of it.</p>
<pre><code class="language-yaml">---
#Simple Ansible Playbook to pull a Docker image from the registry and run a Docker container
 
- import_playbook: build-push.yaml
 
- name: Playbook to build and run Docker
  hosts: docker_host
  gather_facts: false
 
  tasks:
    - name: Run Docker container using simple-docker-image
      docker_container:
        name: simple-docker-container
        image: gauravsadawarte/simple-docker-image:latest
        state: started
        recreate: yes
        detach: true
        pull: yes
        ports:
          - &quot;8888:8080&quot;
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Note that we have used the <code>import_playbook</code> statement at the top of the existing playbook. This runs the <code>build-push.yaml</code> playbook before the rest of our main playbook, so we don’t have to run multiple playbooks manually.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Let us break the whole process into steps:</p>
<ol>
<li>With the help of <code>build-push.yaml</code> playbook, we are asking Ansible to build an image with the artifacts sent by Jenkins to our Control Node, and later push the built image (i.e., <code>simple-docker-image</code>) to our Docker Hub’s account or any other private registry like AWS ECR or Google’s Container Registry.</li>
<li>In the <code>simple-ansible-playbook.yaml</code> file, we have imported the <code>build-push.yaml</code> file, which is going to run prior to any statement present within the <code>simple-ansible-playbook.yaml</code> file.</li>
<li>Once <code>build-push.yaml</code> playbook is executed, Ansible will launch a container into our Managed Docker Host by pulling our image from our defined registry.</li>
</ol>
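<p>For reference, the group names used above (<code>ansible_CN</code> for the Control Node and <code>docker_host</code> for the Managed Host) must be defined in your Ansible inventory. A minimal <code>/etc/ansible/hosts</code> sketch might look like this (the IP addresses are placeholders):</p>
<pre><code class="language-bash">[ansible_CN]
192.168.1.10

[docker_host]
192.168.1.20 ansible_user=ansadmin
</code></pre>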
<p>Now, it's time to build our job. In the next step, we will deploy the artifact to our Control Node, where the Ansible Playbook will build an image, push it to Docker Hub, and run the container on the Managed Host. Let us get started!</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="jenkinsjobstodeploydockercontainerusingansible">Jenkins Jobs to Deploy Docker Container Using Ansible</h2>
<p>To begin, go to the <em>JenkinstoDockerUsingAnsible</em> configuration page and change the <em>Exec command</em> in the <em>Post-build Actions</em> section.</p>
<p>Copy the below-mentioned command and add it as shown in the image below:</p>
<pre><code class="language-bash">sudo ansible-playbook /opt/docker/simple-ansible-playbook.yaml;
</code></pre>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/03/screenshot-157.245.243.72_8080-2020.02.12-16_03_17.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Save the configuration and start the build; you will see the following output:</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/03/screenshot-157.245.243.72_8080-2020.02.12-16_22_39.png" class="kg-image" alt="Integrating Ansible and Docker for a CI/CD Pipeline Using Jenkins"></figure><!--kg-card-begin: markdown--><p>Now go to your Control Node and verify if our images were built:</p>
<pre><code class="language-bash"># docker images
REPOSITORY                            TAG                   IMAGE ID            CREATED             SIZE
gauravsadawarte/simple-docker-image   latest                9ccd91b55796        2 minutes ago       529MB
simple-docker-image                   latest                9ccd91b55796        2 minutes ago       529MB
tomcat                                8.5.50-jdk8-openjdk   b56d8850aed5        5 days ago          529MB
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>It looks like the Ansible Playbook was successfully executed on our Control Node. It’s time to verify whether Ansible was able to launch the container on our Managed Host.</p>
<p>Go to your Managed Host and enter the following command:</p>
<pre><code class="language-bash"># docker ps
CONTAINER ID        IMAGE                                        COMMAND                  CREATED             STATUS              PORTS                               NAMES
6f5e18c20a68        gauravsadawarte/simple-docker-image:latest   &quot;catalina.sh run&quot;        4 minutes ago       Up 4 minutes        0.0.0.0:8888-&gt;8080/tcp              simple-docker-container
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Now visit the URL <code>http://your-ip-addr:8888/webapp/</code> in your browser. Note that the Tomcat server may take some time to start before you see the output showing that your project is successfully set up.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>And you are done!</p>
<p>You successfully managed to deploy your application using Jenkins, Ansible, and Docker. Now, whenever someone from your team pushes code to the repository, Jenkins will build the artifact and send it to Ansible; from there, Ansible is responsible for publishing the application to the desired machine.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Building Images Faster and Better With Multi-Stage Builds]]></title><description><![CDATA[<p>There is no doubt about the fact that Docker makes it very easy to deploy multiple applications on a single box. Be it different versions of the same tool, different applications with different version dependencies - Docker has you covered. But then nothing comes free. This flexibility comes with some</p>]]></description><link>https://appfleet.com/blog/speed-up-docker-builds-with-multi-stage/</link><guid isPermaLink="false">5e35512439f27869d61ef8ed</guid><category><![CDATA[Docker]]></category><category><![CDATA[Golang]]></category><category><![CDATA[CI/CD]]></category><dc:creator><![CDATA[Vikas Yadav]]></dc:creator><pubDate>Mon, 07 Sep 2020 16:31:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/08/93-Building-Images-Faster-and-Better-With-Multi-Stage-Builds.png" medium="image"/><content:encoded><![CDATA[<img src="https://appfleet.com/blog/content/images/2020/08/93-Building-Images-Faster-and-Better-With-Multi-Stage-Builds.png" alt="Building Images Faster and Better With Multi-Stage Builds"><p>There is no doubt about the fact that Docker makes it very easy to deploy multiple applications on a single box. Be it different versions of the same tool, different applications with different version dependencies - Docker has you covered. But then nothing comes free. This flexibility comes with some problems - like <strong>high disk usage and large images</strong>. With Docker, you have to be careful about writing your Dockerfile efficiently in order to reduce the image size and also improve the build times. </p><!--kg-card-begin: markdown--><p>Docker provides a set of <a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/">standard practices</a> to follow in order to keep your image size small - also covers multi-stage builds in brief.</p>
<!--kg-card-end: markdown--><p><strong>Multi-stage</strong> builds are specifically useful for use cases where we build an <em>artifact, binary or executable</em>. Usually, there are lots of dependencies required for building the binary - for example - GCC, Maven, build-essentials, etc., but once you have the executable, you don’t need those dependencies to run it. Multi-stage builds use this to trim the image size. They let you build the executable in a separate environment and then build the final image with only the executable and the minimal dependencies required to run it.</p><!--kg-card-begin: markdown--><p>For example, here’s a <a href="https://github.com/go-training/helloworld">simple application</a> written in <em>Go</em>. All it does is print “Hello World!!” as output. Let’s start without using multi-stage builds.</p>
<!--kg-card-end: markdown--><pre><code># Dockerfile
FROM golang
ADD . /app
WORKDIR /app
RUN go build # This will create a binary file named app
ENTRYPOINT /app/app</code></pre><!--kg-card-begin: markdown--><ul>
<li>Build and run the image</li>
</ul>
<!--kg-card-end: markdown--><pre><code>docker build -t goapp .
~/g/helloworld ❯❯❯ docker run -it --rm goapp
Hello World!!</code></pre><!--kg-card-begin: markdown--><ul>
<li>Now let us check the image size</li>
</ul>
<!--kg-card-end: markdown--><pre><code>~/g/helloworld ❯❯❯ docker images | grep goapp
goapp                                          latest              b4221e45dfa0        18 seconds ago      805MB</code></pre><!--kg-card-begin: markdown--><ul>
<li>New Dockerfile</li>
</ul>
<!--kg-card-end: markdown--><pre><code># Build executable stage
FROM golang
ADD . /app
WORKDIR /app
RUN go build
ENTRYPOINT /app/app
# Build final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /app/app .
CMD ["./app"]</code></pre><!--kg-card-begin: markdown--><ul>
<li>Re-build and run the image</li>
</ul>
<!--kg-card-end: markdown--><pre><code>docker build -t goapp .
~/g/helloworld ❯❯❯ docker run -it --rm goapp
Hello World!!</code></pre><!--kg-card-begin: markdown--><ul>
<li>Let us check the image again</li>
</ul>
<!--kg-card-end: markdown--><pre><code>~/g/helloworld ❯❯❯ docker images | grep goapp
goapp                                          latest              100f92d756da        8 seconds ago       8.15MB
~/g/helloworld ❯❯❯
</code></pre><p>We can see a massive reduction in the image size -&gt; From <strong>805 MB</strong> we are down to <strong>8.15MB</strong>. This is mostly because the <em>Golang</em> image has lots of dependencies which our final executable doesn’t even require for running. <br></p><p><strong>What’s happening here?</strong></p><p>We are building the image in two stages. First, we are using a <em>Golang</em> base image, copying our code inside it and building our executable file <em>App</em>. Now in the next stage, we are using a new <em>Alpine</em> base image and copying the binary which we built earlier to our new stage. Important point to note here is that the image built at each stage is entirely independent.</p><!--kg-card-begin: markdown--><ul>
<li>Stage 0</li>
</ul>
<!--kg-card-end: markdown--><pre><code class="language-Dockerfile"># Build executable stage
FROM golang
ADD . /app
WORKDIR /app
RUN go build
ENTRYPOINT /app/app</code></pre><!--kg-card-begin: markdown--><ul>
<li>Stage 1</li>
</ul>
<!--kg-card-end: markdown--><pre><code># Build final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /app/app .
CMD ["./app"]</code></pre><p>Note the line <strong><code>COPY --from=0 /app/app</code></strong> -&gt; this lets you access data from inside the image built in the previous stage. </p><h2 id="how-multi-stage-builds-work">How do multi-stage builds work? </h2><p>If you look at the process carefully, multi-stage builds are not much different from regular Docker builds. The only major difference is that you build multiple independent images (one per stage) and you get the capability to easily copy artifacts/files from one image to another. What multi-stage builds provide now was earlier achieved through scripts: people used to create the build image, copy the artifact from it manually, and then copy it to a new image with no additional dependencies. In the above example, we build one image in <em>Stage 0</em> and then in <em>Stage 1</em> we build another image, to which we copy files from the older image - nothing complicated.<br></p><figure class="kg-card kg-image-card"><img src="https://lh3.googleusercontent.com/uJeE654QDaAJ4yexYLc9oMWymXDL2wa-UY7rt98pFJ6jssW0IjcXtc3Kdup6rXR18PGjryKIHFkmlmWPkN6Ph0Je-TXIdUEkDFgDBVHw0VBd3LabPGQAPX-A3bKGq_7MmEjC27ZZ" class="kg-image" alt="Building Images Faster and Better With Multi-Stage Builds"></figure><p><br></p><p><em>NOTE: We are copying <code>/app</code> from one image to another - not from one container to another.</em> <br></p><p>This can speed up deployments and save cost in multiple ways:</p><ul><li>You build efficient, lightweight images - hence you ship less data during deployment, which saves both cost and time.</li><li>You can stop a multi-stage build at any stage - so you can use multi-stage builds to avoid the builder pattern and have a single Dockerfile for <em>dev</em>, <em>staging</em> and <em>deployment</em>. </li></ul><p>The above was just a small example; multi-stage builds can be used to improve Docker images of applications written in other languages as well. 
Also, multi-stage builds can help to avoid writing multiple Dockerfiles (builder pattern) - instead a single Dockerfile with multiple stages can be adapted to streamline the development process. If you haven't explored it already - go ahead and do it. </p>]]></content:encoded></item><item><title><![CDATA[Demystifying Open-Source Orchestration of Unikernels With Unik]]></title><description><![CDATA[Similar to the way Docker builds and orchestrates containers, UniK automates compilation of popular languages into unikernels. ]]></description><link>https://appfleet.com/blog/getting-to-know-unik-a-open-source-orchestration-system-for-unikernels/</link><guid isPermaLink="false">5e76fcb439f27869d61f5b03</guid><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[James D. Bohrman]]></dc:creator><pubDate>Mon, 24 Aug 2020 20:15:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/07/92-Demystifying-Open-Source-Orchestration-of-Unikernels-With-Unik.png" medium="image"/><content:encoded><![CDATA[<h2 id="abstract">Abstract</h2><img src="https://appfleet.com/blog/content/images/2020/07/92-Demystifying-Open-Source-Orchestration-of-Unikernels-With-Unik.png" alt="Demystifying Open-Source Orchestration of Unikernels With Unik"><p>As the cloud-native ecosystem continues to evolve, many alternative solutions are popping-up, that challenges the status quo of application deployment methodologies. One of these solutions that is quickly gaining traction is <strong>Unikernels, which are executable images that can run natively on a hypervisor without the need for a separate operating system</strong>. <br><br>For cloud-native platforms to integrate unikernels into their ecosystem, they are required to provide unikernels with the same services they provide for containers. In this post, we are going to introduce <strong>UniK, an open-source orchestration system for unikernels. 
</strong><br><br>UniK (pronounced you-neek) is a tool for simplifying the compilation and orchestration of unikernels. Similar to the way Docker builds and orchestrates containers, UniK automates compilation of popular languages (C/C++, Golang, Java, Node.js, Python) into unikernels. UniK deploys unikernels as virtual machines on various cloud platforms. </p><h2 id="unik-design">UniK Design</h2><p>The UniK Daemon consists of three major components:</p><ul><li>The <strong>API server</strong></li><li><strong>Compilers</strong></li><li><strong>Providers</strong></li></ul><p>The <strong>API Server</strong> handles requests from the CLI or any HTTP client, then determines the appropriate <strong>provider</strong> and/or <strong>compiler</strong> to service the request.</p><p>When the <strong>API Server</strong> receives a <em>build</em> request (<code>POST /images/:image_name/create</code>), it calls the specified <strong>compiler</strong> to build the raw image, and then passes the raw image to the specified <strong>provider</strong>, which processes the raw image with the <code>Stage()</code> method, turning it into an infrastructure-specific bootable image (e.g. 
an <em>Amazon AMI</em> on AWS)<br><br>Let's go ahead and try spinning up a unikernel ourselves.</p><h2 id="deploying-a-go-http-daemon-with-unik">Deploying a GO HTTP daemon with UniK</h2><p>In this tutorial, we are going to be:</p><ol><li><a href="https://github.com/solo-io/unik/blob/master/docs/getting_started.md#installing-unik">Installing UniK</a></li><li><a href="https://github.com/solo-io/unik/blob/master/docs/getting_started.md#write-a-go-http-server">Writing a simple HTTP Daemon in Go</a></li><li><a href="https://github.com/solo-io/unik/blob/master/docs/getting_started.md#compile-an-image-and-run-on-virtualbox">Compiling to a unikernel and launching an instance on Virtualbox</a></li></ol><h3 id="install-configure-and-launch-unik">Install, configure, and launch UniK</h3><p></p><p><strong>Prerequisites</strong></p><p>Ensure that each of the following are installed</p><ul><li><a href="http://www.docker.com/" rel="nofollow">Docker</a> installed and running with at least 6GB available space for building images</li><li><a href="https://stedolan.github.io/jq/" rel="nofollow"><code>jq</code></a></li><li><a href="https://www.gnu.org/software/make/" rel="nofollow"><code>make</code></a></li><li><a href="https://www.virtualbox.org/" rel="nofollow">Virtualbox</a></li></ul><p><strong>Install UniK</strong></p><pre><code>$ git clone https://github.com/solo-io/unik.git
$ cd unik
$ make binary</code></pre><p><strong>Note: </strong><code>make</code> will take quite a few minutes the first time it runs. The UniK <code>Makefile</code> is pulling all of the Docker images that bundle UniK's dependencies.</p><p>Then, place the <code>unik</code> executable in your <code>$PATH</code> to make running UniK commands easier:</p><pre><code>$ mv _build/unik /usr/local/bin/
</code></pre><p><strong>Configure a Host-only network on VirtualBox</strong></p><!--kg-card-begin: markdown--><ol>
<li>Open VirtualBox.</li>
<li>Open <strong>Preferences &gt; Network &gt; Host-only Networks</strong>.</li>
<li>Click the green <em>Add</em> button on the right side of the UI.</li>
<li>Record the name of the new Host-only adapter (e.g. &quot;vboxnet0&quot;). You will need this in your UniK configuration.</li>
<li>Ensure that the VirtualBox DHCP Server is enabled for this Host-only Network.</li>
<li>With the Host-only Network selected, click the edit button (screwdriver icon).</li>
<li>In the <strong>Adapter</strong> tab, note the IPv4 address and netmask of the adapter.</li>
<li>In the <strong>DHCP Server</strong> tab, check the <strong>Enable Server</strong> box.</li>
<li>Set <strong>Server Address</strong> to an IP on the same subnet as the Adapter IP. For example, if the adapter IP is <code>192.168.100.1</code>, set the DHCP server IP to <code>192.168.100.X</code>, where X is a number between 2 and 254.</li>
<li>Set <strong>Server Mask</strong> to the netmask you just noted.</li>
<li>Set <strong>Upper/Lower Address Bound</strong> to a range of IPs on the same subnet. We recommend using the range <code>X-254</code>, where X is one higher than the last octet of the DHCP server IP. E.g., if your DHCP server is <code>192.168.100.2</code>, you can set the lower and upper bounds to <code>192.168.100.3</code> and <code>192.168.100.254</code>, respectively.</li>
</ol>
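<p>If you prefer the command line, roughly the same setup can be scripted with <code>VBoxManage</code>. This is a sketch under the assumption that your new adapter is named <code>vboxnet0</code> and that the adapter IP/netmask are the example values from the steps above; substitute the values you actually noted:</p>

```shell
# Sketch: derive the DHCP settings from the adapter IP noted above.
# The adapter name (vboxnet0) and the IPs are example values, not fixed names.
ADAPTER_IP="192.168.100.1"
NETMASK="255.255.255.0"
SUBNET="${ADAPTER_IP%.*}"      # first three octets, e.g. 192.168.100
DHCP_IP="${SUBNET}.2"          # DHCP server, on the same subnet as the adapter
LOWER="${SUBNET}.3"            # lower lease bound: one above the DHCP server IP
UPPER="${SUBNET}.254"          # upper lease bound
echo "DHCP server ${DHCP_IP}, lease range ${LOWER}-${UPPER}"

# Apply with VBoxManage only if VirtualBox is actually installed:
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage hostonlyif create                      # prints the new adapter name
  VBoxManage dhcpserver add --ifname vboxnet0 --ip "$DHCP_IP" \
    --netmask "$NETMASK" --lowerip "$LOWER" --upperip "$UPPER" --enable
fi
```

<p>The GUI steps above do the same thing; either way, record the adapter name for the UniK configuration in the next section.</p>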
<!--kg-card-end: markdown--><p><strong>Configure UniK daemon</strong></p><p>UniK configuration files are stored in <code>$HOME/.unik</code>. Create this directory if it is not present:</p><pre><code>$ mkdir $HOME/.unik
</code></pre><p>Using a text editor, create and save the following to <code>$HOME/.unik/daemon-config.yaml</code>:</p><pre><code>providers:
  virtualbox:
    - name: my-vbox
      adapter_type: host_only
    adapter_name: NEW_HOST_ONLY_ADAPTER</code></pre><p>Replace <code>NEW_HOST_ONLY_ADAPTER</code> with the name of the network adapter you created.</p><p><strong>Launch UniK and automatically deploy the <em>VirtualBox Instance Listener</em></strong></p><ul><li>Open a new terminal window/tab. This terminal will be where we leave the UniK daemon running.</li><li><code>cd</code> to the <code>_build</code> directory created by <code>make</code>.</li><li>Run <code>unik daemon --debug</code> (the <code>--debug</code> flag is optional and shows more verbose output).</li><li>UniK will compile and deploy its own 30 MB unikernel. This unikernel is the <a href="https://github.com/solo-io/unik/blob/master/docs/instance_listener.md">UniK Instance Listener</a>. The Instance Listener uses UDP broadcast to detect the IP addresses of, and bootstrap, instances running on VirtualBox.</li><li>After this is finished, UniK is running and ready to accept commands.</li><li>Open a new terminal window and type <code>unik target --host localhost</code> to set the CLI target to your local machine.</li></ul><h3 id="write-a-go-http-server">Write a Go HTTP server</h3><p>Open a new terminal window, but leave the window with the daemon running. This window will be used for running UniK CLI commands.</p><!--kg-card-begin: markdown--><p><strong>1. Create httpd.go file</strong></p>
<!--kg-card-end: markdown--><p>Create a file <code>httpd.go</code> using a text editor. Copy and paste the following code in that file:</p><pre><code>package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "my first unikernel!")
}</code></pre><!--kg-card-begin: markdown--><p><strong>2. Run the code</strong></p>
<!--kg-card-end: markdown--><p>Try running this code with <code>go run httpd.go</code>. Visit <a href="http://localhost:8080/" rel="nofollow">http://localhost:8080/</a> to see that the server is running.</p><!--kg-card-begin: markdown--><p><strong>3. Create a dummy <code>Godeps</code> file</strong></p>
<!--kg-card-end: markdown--><p>We need to create a dummy <code>Godeps</code> file. This is necessary to tell the Go compiler how Go projects and their dependencies are structured. Fortunately, with this example, our project has no dependencies, and we can just fill out a simple <code>Godeps</code> file without installing <a href="https://github.com/tools/godep"><code>godep</code></a>.</p><p>Note: For Go projects with imported dependencies and nested packages, you will need to install <code>godep</code> and run <code>GO15VENDOREXPERIMENT=1 godep save ./...</code> in your project. See <a href="https://github.com/solo-io/unik/blob/master/docs/compilers/rump.md#golang">Compiling Go Apps with UniK</a> for more information.</p><p>To create the dummy Godeps file, create a folder named <code>Godeps</code> in the same directory as <code>httpd.go</code>. Inside, create a file named <code>Godeps.json</code> and paste the following into it:</p><pre><code>{
	"ImportPath": "my_httpd",
	"GoVersion": "go1.6",
	"GodepVersion": "v63",
	"Packages": [
		"./.."
	],
	"Deps": [
		{
			"ImportPath": "github.com/solo-io/unik/docs/examples",
			"Rev": "f8cc0dd435de36377eac060c93481cc9f3ae9688"
		}
	]
}</code></pre><p>For the purposes of this example, what matters here is <code>my_httpd</code>. It instructs the Go compiler that the project should be installed from <code>$GOPATH/src/my_httpd</code>.</p><!--kg-card-begin: markdown--><p><strong>4. Great! Now we're ready to compile this code to a unikernel.</strong></p>
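<p>If you script your builds, the same dummy Godeps file can also be created in one non-interactive step (equivalent content to the listing above), with a quick check that the import path is in place:</p>

```shell
# Create the dummy Godeps file non-interactively (same content as above).
mkdir -p Godeps
cat > Godeps/Godeps.json <<'EOF'
{
  "ImportPath": "my_httpd",
  "GoVersion": "go1.6",
  "GodepVersion": "v63",
  "Packages": [
    "./.."
  ],
  "Deps": [
    {
      "ImportPath": "github.com/solo-io/unik/docs/examples",
      "Rev": "f8cc0dd435de36377eac060c93481cc9f3ae9688"
    }
  ]
}
EOF

# Sanity check: this is the import path the Go compiler will use.
grep '"ImportPath": "my_httpd"' Godeps/Godeps.json
```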
<!--kg-card-end: markdown--><h3 id="compile-an-image-to-run-on-virtualbox">Compile an image to run on VirtualBox</h3><!--kg-card-begin: markdown--><p><strong>1. Compile sources</strong></p>
<!--kg-card-end: markdown--><p>Run the following command from the directory where your <code>httpd.go</code> is located:</p><pre><code>unik build --name myImage --path ./ --base rump --language go --provider virtualbox
</code></pre><p>This command will instruct UniK to compile the sources found in the working directory (<code>./</code>) using the <code>rump-go-virtualbox</code> compiler.</p><!--kg-card-begin: markdown--><p><strong>2. Watch output</strong></p>
<!--kg-card-end: markdown--><p>You can watch the output of the <code>build</code> command in the terminal window running the daemon.</p><!--kg-card-begin: markdown--><p><strong>3. Locate disk image</strong></p>
<!--kg-card-end: markdown--><p>When <code>build</code> finishes, the resulting disk image will reside at <code>$HOME/.unik/virtualbox/images/myImage/boot.vmdk</code></p><!--kg-card-begin: markdown--><p><strong>4. Run instance of disk image</strong></p>
<!--kg-card-end: markdown--><p>Run an instance of this image with:</p><pre><code>unik run --instanceName myInstance --imageName myImage
</code></pre><!--kg-card-begin: markdown--><p><strong>5. Check IP</strong></p>
<!--kg-card-end: markdown--><p>When the instance finishes launching, let's check its IP and see if it is running our application. Run <code>unik instances</code>. The instance IP Address should be listed.</p><!--kg-card-begin: markdown--><p><strong>6. View the instance!</strong></p>
<!--kg-card-end: markdown--><p>Direct your browser to <code>http://instance-ip:8080</code> and see that your instance is running!</p><h2 id="finishing-up">Finishing up</h2><p>To clean up your image and the instance you created:</p><pre><code>unik rmi --force --image myImage
</code></pre><p>And we're done. We hope you found this post and tutorial useful, and stay tuned for future posts!<br></p>]]></content:encoded></item><item><title><![CDATA[Tutorial: Kubernetes-Native Backup and Recovery With Stash]]></title><description><![CDATA[Stash is a Restic Operator that accelerates the task of backing up and recovering your Kubernetes infrastructure. ]]></description><link>https://appfleet.com/blog/kubernetes-native-backup-and-recovery-with-stash/</link><guid isPermaLink="false">5e771b5839f27869d61f5b47</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Stash]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[James D. Bohrman]]></dc:creator><pubDate>Mon, 03 Aug 2020 20:14:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/07/91-Tutorial-Kubernetes-Native-Backup-and-Recovery-With-Stash.png" medium="image"/><content:encoded><![CDATA[<hr><h2 id="intro">Intro</h2><img src="https://appfleet.com/blog/content/images/2020/07/91-Tutorial-Kubernetes-Native-Backup-and-Recovery-With-Stash.png" alt="Tutorial: Kubernetes-Native Backup and Recovery With Stash"><p>Having a proper backup and recovery plan is vital to any organization's IT operations. However, when you begin to distribute workloads across data centers and regions, that process becomes more and more complex. Container orchestration platforms such as Kubernetes have begun to ease this burden, enabling the management of distributed workloads in ways that were previously very challenging. <br><br>In this post, we are going to introduce you to a Kubernetes-native tool for taking backups of your disks, a crucial part of any recovery plan. <strong>Stash is a <a href="https://restic.net/">Restic</a> Operator that accelerates the task of backing up and recovering your Kubernetes infrastructure</strong>. 
You can read more about the Operator Framework via <a href="https://appfleet.com/blog/first-steps-with-the-kubernetes-operator/">this blog post</a>.</p><h2 id="how-does-stash-work">How does Stash work?</h2><p>Using Stash, you can back up Kubernetes volumes mounted in the following types of workloads:</p><ul><li>Deployment</li><li>DaemonSet</li><li>ReplicaSet</li><li>ReplicationController</li><li>StatefulSet</li></ul><p>At the heart of Stash is a Kubernetes <a href="https://book.kubebuilder.io/basics/what_is_a_controller.html">controller</a> which uses <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/">Custom Resource Definitions (CRDs)</a> to specify targets and behaviors of the backup and restore process in a Kubernetes-native way. A simplified architecture of Stash is shown below:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://appfleet.com/blog/content/images/2020/03/stash_architecture.svg" class="kg-image" alt="Tutorial: Kubernetes-Native Backup and Recovery With Stash"></figure><h2 id="installing-stash">Installing Stash</h2><h3 id="using-helm-3">Using Helm 3</h3><p>Stash can be installed via <a href="https://helm.sh/">Helm</a> using the <a href="https://github.com/stashed/installer/tree/v0.9.0-rc.6/charts/stash">chart</a> from the <a href="https://github.com/appscode/charts">AppsCode Charts Repository</a>. To install the chart with the release name <code>stash-operator</code>:</p><pre><code class="language-console">$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm search repo appscode/stash --version v0.9.0-rc.6
NAME            CHART          VERSION      APP VERSION DESCRIPTION
appscode/stash  v0.9.0-rc.6    v0.9.0-rc.6  Stash by AppsCode - Backup your Kubernetes Volumes

$ helm install stash-operator appscode/stash \
  --version v0.9.0-rc.6 \
  --namespace kube-system</code></pre><h3 id="using-yaml">Using YAML</h3><p>If you prefer not to use Helm, you can generate the YAML from the Stash chart and deploy it using <code>kubectl</code>:</p><pre><code class="language-console">$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm search repo appscode/stash --version v0.9.0-rc.6
NAME            CHART VERSION APP VERSION DESCRIPTION
appscode/stash  v0.9.0-rc.6    v0.9.0-rc.6  Stash by AppsCode - Backup your Kubernetes Volumes

$ helm template stash-operator appscode/stash \
  --version v0.9.0-rc.6 \
  --namespace kube-system \
  --no-hooks | kubectl apply -f -</code></pre><h3 id="installing-on-gke-cluster"><strong>Installing on GKE Cluster</strong></h3><p>If you are installing Stash on a GKE cluster, you will need cluster admin permissions to install the Stash operator. Run the following command to grant your account admin permission on the cluster:</p><pre><code class="language-console">$ kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value core/account)"
</code></pre><p>In addition, if your GKE cluster is a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters">private cluster</a>, you will need to either add an additional firewall rule that allows master nodes to access port <code>8443/tcp</code> on worker nodes, or change the existing rule that allows access to ports <code>443/tcp</code> and <code>10250/tcp</code> to also allow access to port <code>8443/tcp</code>. The procedure to add or modify firewall rules is described in the official GKE documentation for private clusters mentioned above.</p><h3 id="verify-installation">Verify installation</h3><p>To check if the Stash operator pods have started, run the following command:</p><pre><code class="language-console">$ kubectl get pods --all-namespaces -l app=stash --watch

NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   stash-operator-859d6bdb56-m9br5   2/2       Running   2          5s
</code></pre><p>Once the operator pods are running, you can cancel the above command by typing <code>Ctrl+C</code>.</p><p>Now, to confirm CRD groups have been registered by the operator, run the following command:</p><pre><code class="language-console">$ kubectl get crd -l app=stash

NAME                                 AGE
recoveries.stash.appscode.com        5s
repositories.stash.appscode.com      5s
restics.stash.appscode.com           5s
</code></pre><p>With this, you are ready to take your first backup using Stash.</p><h2 id="configuring-auto-backup-for-database">Configuring Auto Backup for Database</h2><p>To keep everything isolated, we are going to use a separate namespace called <code>demo</code> throughout this tutorial.</p><pre><code class="language-console">$ kubectl create ns demo
namespace/demo created</code></pre><h3 id="prepare-backup-blueprint">Prepare Backup Blueprint</h3><p>We are going to use the <a href="https://stash.run/docs/v0.9.0-rc.6/guides/latest/backends/gcs">GCS backend</a> to store the backed-up data. You can use any supported backend you prefer. You just have to configure the Storage Secret and the <code>spec.backend</code> section of <code>BackupBlueprint</code> to match your backend. Visit <a href="https://stash.run/docs/v0.9.0-rc.6/guides/latest/backends/overview">here</a> to learn which backends are supported by Stash and how to configure them.</p><p>For the GCS backend, if the bucket does not exist, Stash needs <code>Storage Object Admin</code> role permissions to create the bucket. For more details, please check the following <a href="https://stash.run/docs/v0.9.0-rc.6/guides/latest/backends/gcs">guide</a>.</p><p><strong>Create Storage Secret:</strong></p><p>First, let’s create a Storage Secret for the GCS backend:</p><pre><code class="language-console">$ echo -n 'changeit' &gt; RESTIC_PASSWORD
$ echo -n '&lt;your-project-id&gt;' &gt; GOOGLE_PROJECT_ID
$ mv downloaded-sa-json.key GOOGLE_SERVICE_ACCOUNT_JSON_KEY
$ kubectl create secret generic -n demo gcs-secret \
    --from-file=./RESTIC_PASSWORD \
    --from-file=./GOOGLE_PROJECT_ID \
    --from-file=./GOOGLE_SERVICE_ACCOUNT_JSON_KEY
secret/gcs-secret created
</code></pre><p><strong>Create BackupBlueprint:</strong></p><p>Next, we have to create a <code>BackupBlueprint</code> CRD with a blueprint for <code>Repository</code> and <code>BackupConfiguration</code> object.</p><p>Below is the YAML of the <code>BackupBlueprint</code> object that we are going to create:</p><pre><code class="language-yaml">apiVersion: stash.appscode.com/v1beta1
kind: BackupBlueprint
metadata:
  name: postgres-backup-blueprint
spec:
  # ============== Blueprint for Repository ==========================
  backend:
    gcs:
      bucket: appscode-qa
      prefix: stash-backup/${TARGET_NAMESPACE}/${TARGET_APP_RESOURCE}/${TARGET_NAME}
    storageSecretName: gcs-secret
  # ============== Blueprint for BackupConfiguration =================
  task:
    name: postgres-backup-${TARGET_APP_VERSION}
  schedule: "*/5 * * * *"
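  # The cron expression above ("*/5 * * * *": minute, hour, day-of-month,
  # month, day-of-week) triggers a backup every 5 minutes.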
  retentionPolicy:
    name: 'keep-last-5'
    keepLast: 5
    prune: true</code></pre><p>Note that we have used a few variables (format: <code>${&lt;variable name&gt;}</code>) in the <code>spec.backend.gcs.prefix</code> field. Stash will substitute these variables with values from the respective target. To learn which variables you can use in the <code>prefix</code> field, please visit <a href="https://stash.run/docs/v0.9.0-rc.6/concepts/crds/backupblueprint#repository-blueprint">here</a>.</p><p>Let’s create the <code>BackupBlueprint</code> that we have shown above.</p><pre><code class="language-console">$ kubectl apply -f https://github.com/stashed/docs/raw/v0.9.0-rc.6/docs/examples/guides/latest/auto-backup/database/backupblueprint.yaml
backupblueprint.stash.appscode.com/postgres-backup-blueprint created
</code></pre><p>With this, automatic backup is configured for the PostgreSQL database. We just have to add an annotation to the <code>AppBinding</code> of the targeted database.</p><p><strong>Required Annotation for Auto-Backup Database:</strong></p><p>You have to add the following annotation to the <code>AppBinding</code> CRD of the targeted database to enable backup for it:</p><pre><code class="language-yaml">stash.appscode.com/backup-blueprint: &lt;BackupBlueprint name&gt;
</code></pre><p>This annotation specifies the name of the <code>BackupBlueprint</code> object where a blueprint for <code>Repository</code> and <code>BackupConfiguration</code> has been defined.</p><h3 id="prepare-databases">Prepare Databases</h3><p>Next, we are going to deploy two sample PostgreSQL databases of two different versions using KubeDB. We are going to backup these two databases using auto-backup.</p><p><strong>Deploy First PostgreSQL Sample:</strong></p><p>Below is the YAML of the first <code>Postgres</code> CRD:</p><pre><code class="language-yaml">apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: sample-postgres-1
  namespace: demo
spec:
  version: "11.2"
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  terminationPolicy: Delete
</code></pre><p>Let’s create the <code>Postgres</code> we have shown above:</p><pre><code class="language-console">$ kubectl apply -f https://github.com/stashed/docs/raw/v0.9.0-rc.6/docs/examples/guides/latest/auto-backup/database/sample-postgres-1.yaml
postgres.kubedb.com/sample-postgres-1 created
</code></pre><p>KubeDB will deploy a PostgreSQL database according to the above specification and it will create the necessary secrets and services to access the database. It will also create an <code>AppBinding</code> CRD that holds the necessary information to connect with the database.</p><p>Verify that an <code>AppBinding</code> has been created for this PostgreSQL sample:</p><pre><code class="language-console">$ kubectl get appbinding -n demo
NAME                AGE
sample-postgres-1   47s
</code></pre><p>If you view the YAML of this <code>AppBinding</code>, you will see it holds service and secret information. Stash uses this information to connect with the database.</p><pre><code class="language-console">$ kubectl get appbinding -n demo sample-postgres-1 -o yaml
</code></pre><pre><code class="language-yaml">apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  name: sample-postgres-1
  namespace: demo
  ...
spec:
  clientConfig:
    service:
      name: sample-postgres-1
      path: /
      port: 5432
      query: sslmode=disable
      scheme: postgresql
  secret:
    name: sample-postgres-1-auth
  secretTransforms:
  - renameKey:
      from: POSTGRES_USER
      to: username
  - renameKey:
      from: POSTGRES_PASSWORD
      to: password
  type: kubedb.com/postgres
  version: "11.2"</code></pre><p><strong>Deploy Second PostgreSQL Sample:</strong></p><p>Below is the YAML of the second <code>Postgres</code> object:</p><pre><code class="language-yaml">apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: sample-postgres-2
  namespace: demo
spec:
  version: "10.6-v2"
  storageType: Durable
  storage:
    storageClassName: "standard"
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  terminationPolicy: Delete
</code></pre><p>Let’s create the <code>Postgres</code> we have shown above.</p><pre><code class="language-console">$ kubectl apply -f https://github.com/stashed/docs/raw/v0.9.0-rc.6/docs/examples/guides/latest/auto-backup/database/sample-postgres-2.yaml
postgres.kubedb.com/sample-postgres-2 created
</code></pre><p>Verify that an <code>AppBinding</code> has been created for this PostgreSQL database:</p><pre><code class="language-console">$ kubectl get appbinding -n demo
NAME                AGE
sample-postgres-1   2m49s
sample-postgres-2   10s
</code></pre><p>Here, we can see <code>AppBinding</code> <code>sample-postgres-2</code> has been created for our second PostgreSQL sample.</p><h2 id="backup"><strong>Backup</strong></h2><p>Next, we are going to add the auto-backup annotation to the <code>AppBinding</code> of our desired database. Stash watches for <code>AppBinding</code> CRDs. Once it finds an <code>AppBinding</code> with the auto-backup annotation, it will create a <code>Repository</code> and a <code>BackupConfiguration</code> CRD according to the respective <code>BackupBlueprint</code>. Then, the rest of the backup process proceeds as a normal database backup, as described <a href="https://stash.run/docs/v0.9.0-rc.6/guides/latest/addons/overview">here</a>.</p><h3 id="backup-first-postgresql-sample"><strong>Backup First PostgreSQL Sample</strong></h3><p>Let’s back up our first PostgreSQL sample using auto-backup.</p><p><strong>Add Annotations:</strong></p><p>First, add the auto-backup annotation to the AppBinding <code>sample-postgres-1</code>:</p><pre><code class="language-console">$ kubectl annotate appbinding sample-postgres-1 -n demo --overwrite \
  stash.appscode.com/backup-blueprint=postgres-backup-blueprint
</code></pre><p>Verify that the annotation has been added successfully:</p><pre><code class="language-console">$ kubectl get appbinding -n demo sample-postgres-1 -o yaml
</code></pre><pre><code class="language-yaml">apiVersion: appcatalog.appscode.com/v1alpha1
kind: AppBinding
metadata:
  annotations:
    stash.appscode.com/backup-blueprint: postgres-backup-blueprint
  name: sample-postgres-1
  namespace: demo
  ...
spec:
  clientConfig:
    service:
      name: sample-postgres-1
      path: /
      port: 5432
      query: sslmode=disable
      scheme: postgresql
  secret:
    name: sample-postgres-1-auth
  secretTransforms:
  - renameKey:
      from: POSTGRES_USER
      to: username
  - renameKey:
      from: POSTGRES_PASSWORD
      to: password
  type: kubedb.com/postgres
  version: "11.2"
</code></pre><p>Following this, Stash will create a <code>Repository</code> and a <code>BackupConfiguration</code> CRD according to the blueprint.</p><p><strong>Verify Repository:</strong></p><p>Verify that the <code>Repository</code> has been created successfully by the following command:</p><pre><code class="language-console">$ kubectl get repository -n demo
NAME                         INTEGRITY   SIZE   SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
postgres-sample-postgres-1                                                                2m23s
</code></pre><p>If we view the YAML of this <code>Repository</code>, we are going to see that the variables <code>${TARGET_NAMESPACE}</code>, <code>${TARGET_APP_RESOURCE}</code> and <code>${TARGET_NAME}</code> have been replaced by <code>demo</code>, <code>postgres</code> and <code>sample-postgres-1</code> respectively.</p><pre><code class="language-console">$ kubectl get repository -n demo postgres-sample-postgres-1 -o yaml
</code></pre><pre><code class="language-yaml">apiVersion: stash.appscode.com/v1beta1
kind: Repository
metadata:
  creationTimestamp: "2019-08-01T13:54:48Z"
  finalizers:
  - stash
  generation: 1
  name: postgres-sample-postgres-1
  namespace: demo
  resourceVersion: "50171"
  selfLink: /apis/stash.appscode.com/v1beta1/namespaces/demo/repositories/postgres-sample-postgres-1
  uid: ed49dde4-b463-11e9-a6a0-080027aded7e
spec:
  backend:
    gcs:
      bucket: appscode-qa
      prefix: stash-backup/demo/postgres/sample-postgres-1
    storageSecretName: gcs-secret
</code></pre><p><strong>Verify BackupConfiguration:</strong></p><p>Verify that the <code>BackupConfiguration</code> CRD has been created by the following command:</p><pre><code class="language-console">$ kubectl get backupconfiguration -n demo
NAME                         TASK                   SCHEDULE      PAUSED   AGE
postgres-sample-postgres-1   postgres-backup-11.2   */5 * * * *            3m39s
</code></pre><p>Notice the <code>TASK</code> field. It denotes that this backup will be performed using <code>postgres-backup-11.2</code> task. We had specified <code>postgres-backup-${TARGET_APP_VERSION}</code> as task name in the <code>BackupBlueprint</code>. Here, the variable <code>${TARGET_APP_VERSION}</code> has been substituted by the database version.</p><p>Let’s check the YAML of this <code>BackupConfiguration</code>.</p><pre><code class="language-console">$ kubectl get backupconfiguration -n demo postgres-sample-postgres-1 -o yaml
</code></pre><pre><code class="language-yaml">apiVersion: stash.appscode.com/v1beta1
kind: BackupConfiguration
metadata:
  creationTimestamp: "2019-08-01T13:54:48Z"
  finalizers:
  - stash.appscode.com
  generation: 1
  name: postgres-sample-postgres-1
  namespace: demo
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    kind: AppBinding
    name: sample-postgres-1
    uid: a799156e-b463-11e9-a6a0-080027aded7e
  resourceVersion: "50170"
  selfLink: /apis/stash.appscode.com/v1beta1/namespaces/demo/backupconfigurations/postgres-sample-postgres-1
  uid: ed4bd257-b463-11e9-a6a0-080027aded7e
spec:
  repository:
    name: postgres-sample-postgres-1
  retentionPolicy:
    keepLast: 5
    name: keep-last-5
    prune: true
  runtimeSettings: {}
  schedule: '*/5 * * * *'
  target:
    ref:
      apiVersion: v1
      kind: AppBinding
      name: sample-postgres-1
  task:
    name: postgres-backup-11.2
  tempDir: {}
</code></pre><p>Notice that the <code>spec.target.ref</code> is pointing to the AppBinding <code>sample-postgres-1</code> that we have just annotated with auto-backup annotation.</p><p><strong>Wait for BackupSession:</strong></p><p>Now, wait for the next backup schedule. Run the following command to watch <code>BackupSession</code> CRD:</p><pre><code class="language-console">$ watch -n 1 kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=postgres-sample-postgres-1

Every 1.0s: kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=postgres-sample-postgres-1  workstation: Thu Aug  1 20:35:43 2019

NAME                                    INVOKER-TYPE          INVOKER-NAME                 PHASE       AGE
postgres-sample-postgres-1-1564670101   BackupConfiguration   postgres-sample-postgres-1   Succeeded   42s
</code></pre><p>Note: The backup CronJob creates each <code>BackupSession</code> CRD with the label <code>stash.appscode.com/backup-configuration=&lt;BackupConfiguration crd name&gt;</code>. We can use this label to watch only the <code>BackupSession</code> of our desired <code>BackupConfiguration</code>.</p><p><strong>Verify Backup:</strong></p><p>When the backup session is completed, Stash will update the respective <code>Repository</code> to reflect the latest state of the backed-up data.</p><p>Run the following command to check if a snapshot has been sent to the backend:</p><pre><code class="language-console">$ kubectl get repository -n demo postgres-sample-postgres-1
NAME                         INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
postgres-sample-postgres-1   true        1.324 KiB   1                73s                      6m7s
</code></pre><p>If we navigate to the <code>stash-backup/demo/postgres/sample-postgres-1</code> directory of our GCS bucket, we will see that the snapshot has been stored there.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://appfleet.com/blog/content/images/2020/03/sample_postgres_1.png" class="kg-image" alt="Tutorial: Kubernetes-Native Backup and Recovery With Stash"></figure><h3 id="backup-second-sample-postgresql"><strong>Backup Second Sample PostgreSQL</strong></h3><p>Now, let’s back up our second PostgreSQL sample using the same <code>BackupBlueprint</code> we used to back up the first PostgreSQL sample.</p><p><strong>Add Annotations:</strong></p><p>Add the auto-backup annotation to the AppBinding <code>sample-postgres-2</code>:</p><pre><code class="language-console">$ kubectl annotate appbinding sample-postgres-2 -n demo --overwrite \
  stash.appscode.com/backup-blueprint=postgres-backup-blueprint
</code></pre><p><strong>Verify Repository:</strong></p><p>Verify that the <code>Repository</code> has been created successfully by the following command:</p><pre><code class="language-console">$ kubectl get repository -n demo
NAME                         INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
postgres-sample-postgres-1   true        1.324 KiB   1                2m3s                     6m57s
postgres-sample-postgres-2                                                                     15s
</code></pre><p>Here, repository <code>postgres-sample-postgres-2</code> has been created for the second PostgreSQL sample.</p><p>If we view the YAML of this <code>Repository</code>, we will see that the variables <code>${TARGET_NAMESPACE}</code>, <code>${TARGET_APP_RESOURCE}</code> and <code>${TARGET_NAME}</code> have been replaced by <code>demo</code>, <code>postgres</code> and <code>sample-postgres-2</code> respectively.</p><pre><code class="language-console">$ kubectl get repository -n demo postgres-sample-postgres-2 -o yaml
</code></pre><pre><code class="language-yaml">apiVersion: stash.appscode.com/v1beta1
kind: Repository
metadata:
  creationTimestamp: "2019-08-01T14:37:22Z"
  finalizers:
  - stash
  generation: 1
  name: postgres-sample-postgres-2
  namespace: demo
  resourceVersion: "56103"
  selfLink: /apis/stash.appscode.com/v1beta1/namespaces/demo/repositories/postgres-sample-postgres-2
  uid: df58523c-b469-11e9-a6a0-080027aded7e
spec:
  backend:
    gcs:
      bucket: appscode-qa
      prefix: stash-backup/demo/postgres/sample-postgres-2
    storageSecretName: gcs-secret
</code></pre><p><strong>Verify BackupConfiguration:</strong></p><p>Verify that the <code>BackupConfiguration</code> CRD has been created by the following command:</p><pre><code class="language-console">$ kubectl get backupconfiguration -n demo
NAME                         TASK                   SCHEDULE      PAUSED   AGE
postgres-sample-postgres-1   postgres-backup-11.2   */5 * * * *            7m52s
postgres-sample-postgres-2   postgres-backup-10.6   */5 * * * *            70s
</code></pre><p>Again, notice the <code>TASK</code> field. This time, <code>${TARGET_APP_VERSION}</code> has been replaced with <code>10.6</code>, which is the database version of our second sample.</p><p><strong>Wait for BackupSession:</strong></p><p>Now, wait for the next backup schedule. Run the following command to watch the <code>BackupSession</code> CRD:</p><pre><code class="language-console">$ watch -n 1 kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=postgres-sample-postgres-2
Every 1.0s: kubectl get backupsession -n demo -l=stash.appscode.com/backup-configuration=postgres-sample-postgres-2  workstation: Thu Aug  1 20:55:40 2019

NAME                                    INVOKER-TYPE          INVOKER-NAME                 PHASE       AGE
postgres-sample-postgres-2-1564671303   BackupConfiguration   postgres-sample-postgres-2   Succeeded   37s
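# The BackupSession phase moves through Pending and Running before reaching
# Succeeded; press Ctrl+C to stop watching once the backup has succeeded.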
</code></pre><p><strong>Verify Backup:</strong></p><p>Run the following command to check if a snapshot has been sent to the backend:</p><pre><code class="language-console">$ kubectl get repository -n demo postgres-sample-postgres-2
NAME                         INTEGRITY   SIZE        SNAPSHOT-COUNT   LAST-SUCCESSFUL-BACKUP   AGE
postgres-sample-postgres-2   true        1.324 KiB   1                52s                      19m
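# INTEGRITY shows the result of the repository consistency check; "true" means
# the backed-up data in the backend has not been corrupted.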
</code></pre><p>If we navigate to <code>stash-backup/demo/postgres/sample-postgres-2</code> directory of our GCS bucket, we are going to see that the snapshot has been stored there.</p><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/03/sample_postgres_2.png" class="kg-image" alt="Tutorial: Kubernetes-Native Backup and Recovery With Stash"></figure><h2 id="cleanup"><strong>Cleanup</strong></h2><p>To cleanup the Kubernetes resources created by this tutorial, run:</p><pre><code class="language-console">kubectl delete -n demo pg/sample-postgres-1
kubectl delete -n demo pg/sample-postgres-2

kubectl delete -n demo repository/postgres-sample-postgres-1
kubectl delete -n demo repository/postgres-sample-postgres-2
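# (optional) Also remove the BackupConfigurations that were generated from the
# blueprint, and the demo namespace itself once it is empty:
#   kubectl delete -n demo backupconfiguration/postgres-sample-postgres-1
#   kubectl delete -n demo backupconfiguration/postgres-sample-postgres-2
#   kubectl delete namespace demo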

kubectl delete -n demo backupblueprint/postgres-backup-blueprint</code></pre><h2 id="final-thoughts">Final thoughts</h2><p>You've now gotten a deep dive into setting up a Kubernetes-native disaster recovery and backup solution with Stash. You can find a lot of really helpful information on their documentation site <a href="https://stash.run/">here</a>. I hope you gained some educational knowledge from this post and will stay tuned for future tutorials!</p>]]></content:encoded></item><item><title><![CDATA[Using Helm with Kubernetes]]></title><description><![CDATA[<p>Kubernetes is a powerful orchestration system, however, it can be really hard to configure its deployment process. Specific apps can help you manage multiple independent resources like pods, services, deployments, and replica sets. Yet, each must be described in the YAML manifest file.<br></p><p>It’s not a problem for a</p>]]></description><link>https://appfleet.com/blog/using-helm-with-kubernetes/</link><guid isPermaLink="false">5e7c925b39f27869d61f61d0</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[ActiveWizards]]></dc:creator><pubDate>Mon, 20 Jul 2020 13:47:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/07/90-Using-Helm-with-Kubernetes.png" medium="image"/><content:encoded><![CDATA[<img src="https://appfleet.com/blog/content/images/2020/07/90-Using-Helm-with-Kubernetes.png" alt="Using Helm with Kubernetes"><p>Kubernetes is a powerful orchestration system, however, it can be really hard to configure its deployment process. Specific apps can help you manage multiple independent resources like pods, services, deployments, and replica sets. 
Yet, each must be described in its own YAML manifest file.<br></p><p>It’s not a problem for a single trivial app, but in production it’s best to simplify this process: search, use, and share already implemented configurations, deploy these configurations, create configuration templates, and deploy them without effort. In other words, we need an extended version of a package manager like <em>APT</em> for Ubuntu or <em>PIP</em> for Python to work with the Kubernetes cluster. Luckily, we have Helm as a package manager.</p><h1 id="what-is-helm">What is Helm?<br></h1><p><a href="https://www.helm.sh/">Helm</a> is an open-source package manager for Kubernetes that allows developers and operators to package, configure, and deploy applications and services onto Kubernetes clusters easily. It was inspired by Homebrew for macOS and is now a part of the Cloud Native Computing Foundation. </p><p>In this article, we will explore Helm 3.x, which is the newest version at the time of writing. </p><p></p><figure class="kg-card kg-image-card"><img src="https://lh4.googleusercontent.com/rPbDDlALZ9hn9tZRz2KgEvEjJim24BBvtcEePMWcKJ27l4JCarEZt1cOIU_O1vy6syCSJ2WEzDkPjmIRn11ctNIXQKy3-mXgezzGmxBhIhECN4lVY1Yw9p7nHK_rf19FAaTODL5K" class="kg-image" alt="Using Helm with Kubernetes"></figure><p><em>Searches on Helm Hub for PostgreSQL from dozens of different repositories<br></em></p><p>Helm can install software and dependencies, upgrade software, configure software deployments, fetch packages from repositories, and manage those repositories.<br></p><p><strong>Some key features of Helm include:</strong></p><ul><li>Role-based access controls (RBAC)</li><li>Golang templates, which allow you to work with configuration as text</li><li>Lua scripts to process configuration as an object</li><li>A deployment version control system <br></li></ul><p>Templates allow you to configure your deployments by changing a few variable values without changing the template directly. 
Helm packages are called <strong>charts</strong>, and they consist of a few YAML configuration files and templates that are rendered into Kubernetes manifest files. <br></p><p><strong>The basic package (chart) structure:</strong></p><ul><li><strong>chart.yaml</strong> - a YAML file containing information about the chart</li><li><strong>LICENSE</strong> (optional) - a plain text file containing the license for the chart</li><li><strong>README.md</strong> (optional) - a human-readable README file</li><li><strong>values.yaml</strong> - the default configuration values for this chart</li><li><strong>values.schema.json</strong> (optional) - a JSON Schema for imposing a structure on the values.yaml file</li><li><strong>charts/</strong> - defines chart dependencies (it is recommended to use the <em>dependencies</em> section in <code>chart.yaml</code> instead)</li><li><strong>crds/</strong> - Custom Resource Definitions</li><li><strong>templates/</strong> - directory of templates that, when combined with values, will generate valid Kubernetes manifest files<br></li></ul><p>Templates give you a wide range of capabilities. You can use variables from context, apply different functions (such as <code>quote</code> or <code>sha256sum</code>), use cycles and conditional cases, and import other files (including other templates or partials).</p><h1 id="what-are-helm-s-abilities">What are Helm’s abilities?</h1><p></p><ol><li>As you operate Helm through a Command Line Interface (CLI), the <code>helm search</code> command allows you to search for a package by keywords from the repositories. </li><li>You can inspect <code>chart.yaml</code>, <code>values.yaml</code>, and <code>README.md</code> for a certain package. 
You can also create your own chart with the <code>helm create &lt;chart-name&gt;</code> command. This command will generate a folder with the specified name in which you can find the structure described above.</li><li>Helm can install packages from both a folder and a <code>.tgz</code> archive. To create a <code>.tgz</code> from your package folder, use the <code>helm package &lt;path to folder&gt;</code> command. This will create a package archive in your working directory, using the name and version from the metadata defined in the <code>chart.yaml</code> file.</li><li>Helm has built-in support for installing packages from an HTTP server. Helm reads a repository index hosted on the server, which describes what chart packages are available and where they are located. This is how the default stable repository works.</li><li>In Helm 2 you could also serve a repository from your machine with <code>helm serve</code>; this command was removed in Helm 3, but any static file server can host a chart repository. This eventually lets you create your own corporate repository or contribute to the official stable one.</li><li>You can also call the <code>helm dependency update &lt;package name&gt;</code> command, which verifies that the required charts, as expressed in <code>chart.yaml</code>, are present in <code>charts/</code> and are at acceptable versions. It will additionally pull down the latest charts that satisfy the dependencies and clean up old ones.</li><li>Apart from <em>Chart</em> and <em>Repository</em>, another significant concept you should know is <em>Release</em>, which is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster, and each time it is installed, a new Release is created. So, you can have multiple PostgreSQL releases in the same cluster, each with its own release name. 
You can think of this like 'multiple Docker containers from one image'.<br></li></ol><h1 id="how-does-it-work">How does it work?</h1><figure class="kg-card kg-image-card"><img src="https://lh6.googleusercontent.com/nufxEk9LdGzkU8267Uq9x7-uInFm2sEKGLHAuo3VWeDCpry22j3rcfHzB_E63PUVVl6aKNxGa2DmWEhiUtEXwmer4VpBsbY3j3RtG547BRVgAIaT5FqEdExnfH5dUuIa0U9S3Kin" class="kg-image" alt="Using Helm with Kubernetes"></figure><p><em>Source: <a href="https://developer.ibm.com/technologies/containers/blogs/kubernetes-helm-3/">developer.ibm.com</a></em><br></p><p>The Helm client is used for installing, updating and creating charts, as well as compiling and sending them to a Kubernetes API in an acceptable form. The previous version had a client-server architecture, using a program run on the Kubernetes cluster called Tiller. This software was responsible for the deployments’ lifecycle, but this approach led to some security issues, which is one of the reasons all functions are now handled by the client.</p><p>Installing Helm 3 is noticeably easier than the previous version since only the client needs to be installed. It is available for Windows, macOS, and Linux. You can install the program from binary releases, Homebrew, or through a configured installation script.</p><h1 id="let-s-try-an-example">Let’s try an example<br></h1><ol><li>Let's start with installing Helm.</li></ol><pre><code class="language-bash">master $ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6794  100  6794    0     0  25961      0 --:--:-- --:--:-- --:--:-- 25931
Error: could not find tiller
Helm v3.1.2 is available. Changing from version .
Downloading https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm</code></pre><p>2. Check if everything is installed properly.</p><pre><code class="language-bash">master $ helm version --short
v3.1.2+gd878d4d</code></pre><p>3. By default, Helm doesn’t have a connection to any of the repositories. Let’s add a connection to the most common one, <em>stable</em>. (You can check all the available repositories with <code>helm repo list</code>.)</p><pre><code class="language-bash">master $ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories</code></pre><p>4. After adding the repository, we should let Helm get updated. The current local state of Helm is kept in your environment in the home location.</p><pre><code class="language-bash">master $ helm repo update

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
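# You can confirm the configured repositories at any time with:
#   helm repo list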
</code></pre><blockquote>The Helm command defaults to discovering the host already set in <code>~/.kube/config</code>. There is a way to change or override the host, but that's beyond the scope of this article.</blockquote><pre><code class="language-bash">master $ helm env

HELM_BIN="helm"
HELM_DEBUG="false"
HELM_KUBECONTEXT=""
HELM_NAMESPACE="default"
HELM_PLUGINS="/root/.local/share/helm/plugins"
HELM_REGISTRY_CONFIG="/root/.config/helm/registry.json"
HELM_REPOSITORY_CACHE="/root/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/root/.config/helm/repositories.yaml"</code></pre><p>5. Let's search for a WordPress in the <a href="https://hub.helm.sh/">Helm Hub</a></p><pre><code class="language-bash">master $ helm search hub wordpress

URL                                                     CHART VERSION   APP VERSION     DESCRIPTION
https://hub.helm.sh/charts/presslabs/wordpress-...      v0.8.4          v0.8.4          Presslabs WordPress Operator Helm Chart
https://hub.helm.sh/charts/presslabs/wordpress-...      v0.8.3          v0.8.3          A Helm chart for deploying a WordPress site on ...
https://hub.helm.sh/charts/bitnami/wordpress            9.0.3           5.3.2           Web publishing platform for building blogs and ...</code></pre><p>And also search in our repositories (we have only <em>stable</em> for now).</p><pre><code class="language-bash">master $ helm search repo wordpress

NAME                    CHART VERSION   APP VERSION     DESCRIPTION
stable/wordpress        9.0.2           5.3.2           DEPRECATED Web publishing platform for building...</code></pre><p>6. As mentioned earlier, you can inspect a Chart. For example, let’s take info from <code>chart.yaml</code> for the <em>Wordpress</em> chart. <br>You can also check <code>helm show readme stable/wordpress</code> and <code>helm show values stable/wordpress</code>.</p><pre><code class="language-bash">master $ helm show chart stable/wordpress

apiVersion: v1
appVersion: 5.3.2
dependencies:
- condition: mariadb.enabled
  name: mariadb
  repository: https://kubernetes-charts.storage.googleapis.com/
  tags:
  - wordpress-database
  version: 7.x.x
deprecated: true
description: DEPRECATED Web publishing platform for building blogs and websites.
home: http://www.wordpress.com/
icon: https://bitnami.com/assets/stacks/wordpress/img/wordpress-stack-220x234.png
keywords:
- wordpress
- cms
- blog
- http
- web
- application
- php
name: wordpress
sources:
- https://github.com/bitnami/bitnami-docker-wordpress
version: 9.0.2</code></pre><p><br>7. Let’s create a namespace for WordPress and install a test chart.</p><pre><code class="language-bash">master $ kubectl create namespace wordpress

namespace/wordpress created</code></pre><pre><code class="language-bash">master $ helm install test-wordpress stable/wordpress --namespace wordpress</code></pre><p>The output of this command appears messy just because it’s so big.</p><p>You can also set variables, such as:</p><pre><code class="language-bash">helm install test-wordpress \
  --set wordpressUsername=admin \
  --set wordpressPassword=password \
  --set mariadb.mariadbRootPassword=secretpassword \
    stable/wordpress</code></pre><p>8. For now, let’s ensure that everything is deployed correctly:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://lh6.googleusercontent.com/agdgBQKPBqZgwQSDTBUv8pluB3sfCbJup5l991KbYWrHzM_f3h3kRC6ESdNUJzInue9Qn3gUKZeNALREBEHfUYANLgATC1-l5k6AQc98XwbtNhqCbKIX04MYyM3b9A3k3sHAbnhI" class="kg-image" alt="Using Helm with Kubernetes"></figure><p>As you can see, everything has been deployed properly.</p><h1 id="conclusion">Conclusion</h1><p>Helm is a popular open-source package manager that offers users a more flexible way to manage Kubernetes cluster. You can either create your own, or use public packages from your own or external repositories. Each package is quite flexible and, in most cases, all you need is define the right constants from which the template will be compiled to suit your needs. To create your own chart, you can use the power of Go templates and/or Lua scripts. Each update will create a history unit to which you can rollback anytime you want. With Helm, you have all the power of Kubernetes. And, in the end, Helm allows you to work with role-based access, so you can manage your cluster in a team.</p><p>This brings us to the end of this brief article explaining the basics and features of Helm. We hope you enjoyed it and were able to make use of it. </p>]]></content:encoded></item><item><title><![CDATA[Understanding Amazon Elastic Container Service for Kubernetes (EKS)]]></title><description><![CDATA[<p>Amazon Elastic Container Service for Kubernetes or EKS provides a <em>Managed Kubernetes Service</em>. Amazon does the undifferentiated heavy lifting, such as provisioning the cluster, performing upgrades and patching. Although it is compatible with existing plugins and tooling, EKS is not a proprietary AWS fork of Kubernetes in any way. 
This</p>]]></description><link>https://appfleet.com/blog/amazon-elastic-container-service-for-kubernetes-eks/</link><guid isPermaLink="false">5e77703739f27869d61f5c02</guid><category><![CDATA[AWS]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Sergej Kalenicenko]]></dc:creator><pubDate>Mon, 13 Jul 2020 13:46:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/07/89-Understanding-Amazon-Elastic-Container-Service-for-Kubernetes--EKS-.png" medium="image"/><content:encoded><![CDATA[<img src="https://appfleet.com/blog/content/images/2020/07/89-Understanding-Amazon-Elastic-Container-Service-for-Kubernetes--EKS-.png" alt="Understanding Amazon Elastic Container Service for Kubernetes (EKS)"><p>Amazon Elastic Container Service for Kubernetes or EKS provides a <em>Managed Kubernetes Service</em>. Amazon does the undifferentiated heavy lifting, such as provisioning the cluster, performing upgrades and patching. Although it is compatible with existing plugins and tooling, EKS is not a proprietary AWS fork of Kubernetes in any way. This means you can easily migrate any standard Kubernetes application to EKS without any changes to your code base. You'll connect to your EKS cluster with <code>kubectl</code> in the same way you would have done in a <em>self-hosted</em> Kubernetes. </p><p>At this stage, EKS is very loosely integrated with other AWS services. This is definitely expected to change over time though, as EKS adoption increases. That said, Kubernetes is already much more popular than either Elastic Beanstalk or ECS. </p><!--kg-card-begin: markdown--><h3 id="managedcontrolplane">Managed Control Plane</h3>
<!--kg-card-end: markdown--><p>EKS provides a Managed Control Plane, which includes Kubernetes master nodes, API server and the <code>etcd</code> persistence layer. As part of the <em>highly-available</em> control plane, you get 3 masters and 3 <code>etcd</code> nodes spread across multiple Availability Zones, where AWS provisions automatic backup snapshotting of <code>etcd</code> nodes alongside automated scaling. </p><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/03/Screenshot-from-2020-03-22-16-39-07.png" class="kg-image" alt="Understanding Amazon Elastic Container Service for Kubernetes (EKS)"></figure><p>With EKS, AWS is responsible for maintaining master nodes for you by provisioning these nodes in multiple high-availability zones to maintain redundancy. So, as your workload increases, AWS will add master nodes for you. If you were running your own Kubernetes cluster, you'd have to scale it up whenever you added a worker node.  </p><!--kg-card-begin: markdown--><h3 id="vpcnetworking">VPC Networking</h3>
<!--kg-card-end: markdown--><p>EKS runs a network topology that integrates tightly with a Virtual Private Cloud (VPC). EKS uses a <strong>Container Network Interface</strong> plugin that integrates the standard Kubernetes overlay network with VPC networking. This plugin allows you to treat your EKS deployment as just another part of your existing AWS infrastructure. Things like <strong>network access control lists</strong>, <strong>routing tables</strong> and <strong>subnets</strong> are all available to the Kubernetes applications running in EKS.</p><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/03/Screenshot-from-2020-03-22-17-13-40.png" class="kg-image" alt="Understanding Amazon Elastic Container Service for Kubernetes (EKS)"></figure><p>Each pod gets an IP address on an Elastic Network Interface, where these addresses belong to the block of the subnet where the worker node is deployed. In the diagram above, you can see the IP addresses assigned to the Virtual Ethernet Adapter on each pod. These pod IP addresses are <em>fully routable</em> within the VPC, and they comply with all the policies and access controls at the network level. So, things like security groups and <code>ACL</code> remain in effect. On each EC2 instance or worker node, Kubernetes runs a daemon set that hosts the CNI plugin. This plugin is a thin layer that communicates with the local network control plane. This control plane maintains a pool of available IP addresses. So, when the <code>kubelet</code> on a node schedules a pod, it asks the CNI plugin to allocate an IP address. At this point, the CNI plugin allocates an IP, grabs a secondary IP address and associates it with the pod. It then hands that configuration back to the <code>kubelet</code>.</p><!--kg-card-begin: markdown--><h3 id="eksoptimizedami">EKS-Optimized AMI</h3>
<!--kg-card-end: markdown--><p>The EKS-optimized AMI is based on Amazon Linux 2. It comes pre-configured to work with EKS out-of-the-box, with all the required services pre-installed, including Docker, the kubelet and the AWS IAM Authenticator. When you provision your EKS worker nodes with the AWS-supplied CloudFormation template, it launches them with an EC2 user data script that bootstraps the nodes with the configuration they need to join your EKS cluster automatically.  </p><!--kg-card-begin: markdown--><h3 id="conclusion">Conclusion</h3>
<!--kg-card-end: markdown--><p>Amazon EKS provides a great opportunity to run a <em>managed</em> Kubernetes cluster on AWS without the operational burden of self-hosting. It is also compatible with open-source Kubernetes, and workloads can be safely migrated to any other Kubernetes instance at any time. It is worth mentioning that for users who rely on solutions for centralized management of Kubernetes clusters, it makes sense to go with EKS instead of an option such as ECS, since EKS exposes the same API as open-source Kubernetes. </p>]]></content:encoded></item><item><title><![CDATA[Autoscaling an Amazon Elastic Kubernetes Service cluster]]></title><description><![CDATA[<p>In this article we are going to consider the two most common methods for Autoscaling in EKS cluster:</p><ul><li><strong>Horizontal Pod Autoscaler (HPA)</strong></li><li><strong>Cluster Autoscaler (CA)</strong></li></ul><p>The <strong>Horizontal Pod Autoscaler or HPA</strong> is a Kubernetes component that automatically scales your service based on metrics such as CPU utilization or others, as</p>]]></description><link>https://appfleet.com/blog/autoscaling-an-eks-cluster/</link><guid isPermaLink="false">5e88957a39f27869d61f6319</guid><category><![CDATA[AWS]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Sergej Kalenicenko]]></dc:creator><pubDate>Mon, 29 Jun 2020 13:17:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/05/87-Autoscaling-an-Amazon-Elastic-Kubernetes-Service-cluster.png" medium="image"/><content:encoded><![CDATA[<img src="https://appfleet.com/blog/content/images/2020/05/87-Autoscaling-an-Amazon-Elastic-Kubernetes-Service-cluster.png" alt="Autoscaling an Amazon Elastic Kubernetes Service cluster"><p>In this article we are going to consider the two most common methods for autoscaling an EKS cluster:</p><ul><li><strong>Horizontal Pod Autoscaler (HPA)</strong></li><li><strong>Cluster Autoscaler (CA)</strong></li></ul><p>The 
<strong>Horizontal Pod Autoscaler or HPA</strong> is a Kubernetes component that automatically scales your service based on metrics such as CPU utilization or others, as reported through the Kubernetes metrics server. The HPA scales the pods in either a deployment or replica set, and is implemented as a Kubernetes API resource and a controller. The Controller Manager queries the resource utilization against the metrics specified in each horizontal pod autoscaler definition. It obtains the metrics from either the resource metrics API for per-pod metrics, or the custom metrics API for any other metrics.</p><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/04/Screenshot-from-2020-04-04-17-40-25.png" class="kg-image" alt="Autoscaling an Amazon Elastic Kubernetes Service cluster"></figure><p>To see this in action, we are going to configure the HPA and then apply some load to our system. </p><p>Let us start by installing Helm as a package manager for Kubernetes. </p><pre><code>curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get &gt; helm.sh
 chmod +x helm.sh
 ./helm.sh</code></pre><p>Now, we are going to set up the server base portion of Helm called <strong>Tiller</strong>. This requires a service account:</p><pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
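# Note: the rbac.authorization.k8s.io/v1beta1 API used above was removed in
# Kubernetes v1.22; on newer clusters use rbac.authorization.k8s.io/v1 instead.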
 </code></pre><p>The above defines a Tiller service account to which we have assigned the cluster admin role. Now let's go ahead and apply the configuration:</p><pre><code>kubectl apply -f tiller.yml</code></pre><p>Run <code>helm init</code> using the Tiller service account we have just created:</p><pre><code>helm init --service-account tiller</code></pre><p>With this we have installed Tiller onto the cluster, which gives access to manage those resources within it. </p><p>With Helm installed, we can now deploy the metric server. Metric servers are cluster wide aggregators of resource usage data where metrics are collected by <code>kubelet</code> on each worker node, and are used to dictate the scaling behavior of deployments. </p><p>So let's go ahead and install that now:</p><pre><code>helm install stable/metrics-server --name metrics-server --version 2.0.4 --namespace metrics</code></pre><p>Once all checks have passed, we are ready to scale the application. </p><p>For the purpose of this article, we will deploy a special build of Apache and PHP designed to generate CPU utilization:</p><figure class="kg-card kg-code-card"><pre><code>kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80</code></pre><figcaption>**requests=cpu=200m - requesting 200 millicores get allocated to pod</figcaption></figure><p>Now, let us autoscale our deployment:</p><pre><code>kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10</code></pre><p>The above specifies that the HPA will increase or decrease the number of replicas to maintain an average CPU utilization across all pods by 50%. Since each pod requests 200 millicores (as specified in the previous command), the average CPU utilization of 100 millicores is maintained. 
</p><p>Let's check the status:</p><pre><code>kubectl get hpa</code></pre><p>Review the <code>Targets</code> column: if it says <code>unknown/50%</code>, it means that the current CPU consumption is 0%, as we are not currently sending any requests to the server. This will take a couple of minutes to show the correct value, so let us grab a cup of coffee and come back when we have some data here. </p><p>Rerun the last command and confirm that the <code>Targets</code> column now shows <code>0%/50%</code>. Now, let's generate some load to trigger scaling by running the following: </p><pre><code>kubectl run -i --tty load-generator --image=busybox /bin/sh</code></pre><p>Inside this container, we can send an infinite number of requests to our service, for example with a loop such as <code>while true; do wget -q -O- http://php-apache; done</code>. If we flip back over to the other terminal, we can watch the autoscaler in action:</p><pre><code>kubectl get hpa -w</code></pre><p>We can watch the HPA scale the pods up from 1 to our configured maximum of 10, until the average CPU utilization falls below our target of 50%. It will take about 10 minutes to run, and you will see that we now have 10 replicas. If we flip back to the other terminal to terminate the load test, and then back to the scaler terminal, we can see the HPA reduce the replica count back to the minimum. </p><!--kg-card-begin: markdown--><h3 id="clusterautoscaler">Cluster Autoscaler</h3>
<!--kg-card-end: markdown--><p>The Cluster Autoscaler is the standard Kubernetes component that scales the number of nodes in a cluster. It automatically increases the size of an autoscaling group so that pending pods can continue to get placed successfully. It also tries to remove unused worker nodes from the autoscaling group (the ones with no pods running).</p><figure class="kg-card kg-image-card"><img src="https://appfleet.com/blog/content/images/2020/04/Screenshot-from-2020-04-04-19-10-15.png" class="kg-image" alt="Autoscaling an Amazon Elastic Kubernetes Service cluster"></figure><p>The following <code>eksctl</code> command will create a node group backed by an Auto Scaling group with a minimum of one and a maximum of ten nodes:</p><pre><code>eksctl create nodegroup --cluster &lt;CLUSTER_NAME&gt; --node-zones &lt;REGION_CODE&gt; --name &lt;NODEGROUP_NAME&gt; --asg-access --nodes-min 1 --nodes 5 --nodes-max 10 --managed</code></pre><p>Now, we need to apply an inline IAM policy to our worker nodes:</p><pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}</code></pre><p>This policy gives the EC2 worker nodes hosting the Cluster Autoscaler the ability to manipulate the Auto Scaling groups. Copy it and add it to your EC2 IAM role. </p><p>Next, download the following file:</p><pre><code>wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml</code></pre><p>And update the following line with your cluster name:</p><pre><code>            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/&lt;YOUR CLUSTER NAME&gt;
</code></pre><p>Finally, we can deploy our Autoscaler:</p><pre><code>kubectl apply -f cluster-autoscaler-autodiscover.yaml</code></pre><p>Of course we should wait for the pods to finish creating. Once done, we can scale our cluster out. We will consider a simple <code>nginx</code> application with the following <code>yaml</code> file:</p><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-scale
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources: 
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi</code></pre><p>Let's go ahead and deploy the application:</p><pre><code>kubectl apply -f nginx.yaml</code></pre><p>And check the deployment: </p><pre><code>kubectl get deployment/nginx-scale</code></pre><p>Now, let's scale the deployment up to 10 replicas:</p><pre><code>kubectl scale --replicas=10 deployment/nginx-scale</code></pre><p>We can watch some of our pods sit in the Pending state, which is the trigger the Cluster Autoscaler uses to scale out our fleet of EC2 instances. </p><pre><code>kubectl get pods -o wide --watch</code></pre><!--kg-card-begin: markdown--><h3 id="conclusion">Conclusion</h3>
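As a closing sanity check, here is the arithmetic behind the scale-out we just watched, using the figures from the nginx manifest above:

```shell
# Each nginx replica requests 500m CPU (half a vCPU). Scaling to 10
# replicas therefore asks the cluster for 5000m = 5 vCPUs in total;
# once the existing nodes cannot satisfy that, pods go Pending and the
# Cluster Autoscaler responds by adding nodes.
replicas=10
cpu_request_millicores=500
echo "$(( replicas * cpu_request_millicores ))m total CPU requested"
```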
<!--kg-card-end: markdown--><p>In this article, we considered both types of EKS cluster autoscaling. We learnt how the Cluster Autoscaler initiates scale-in and scale-out operations each time it detects under-utilized instances or pending pods. Horizontal Pod Autoscaler and Cluster Autoscaler are essential features of Kubernetes when it comes to scaling a microservice application. We hope you found this article useful, but there is more to come. Till then, happy scaling!</p>]]></content:encoded></item><item><title><![CDATA[Cloud-native benchmarking with Kubestone]]></title><description><![CDATA[<hr><h2 id="intro">Intro</h2><p>Organizations are increasingly looking to containers and distributed applications to provide the agility and scalability needed to satisfy their clients. While doing so, modern enterprises also need the ability to benchmark their application and be aware of certain metrics in relation to their infrastructure. <br><br>In this post, I am</p>]]></description><link>https://appfleet.com/blog/cloud-native-benchmarking-with-kubestone/</link><guid isPermaLink="false">5e7718d239f27869d61f5b3d</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[James D. Bohrman]]></dc:creator><pubDate>Mon, 22 Jun 2020 19:23:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/04/23-Cloud-native-benchmarking-with-Kubestone.png" medium="image"/><content:encoded><![CDATA[<hr><h2 id="intro">Intro</h2><img src="https://appfleet.com/blog/content/images/2020/04/23-Cloud-native-benchmarking-with-Kubestone.png" alt="Cloud-native benchmarking with Kubestone"><p>Organizations are increasingly looking to containers and distributed applications to provide the agility and scalability needed to satisfy their clients. While doing so, modern enterprises also need the ability to benchmark their application and be aware of certain metrics in relation to their infrastructure. 
<br><br>In this post, I am introducing you to a cloud-native benchmarking tool known as <strong>Kubestone</strong>. This tool is meant to assist your development teams with getting performance metrics from your Kubernetes clusters. </p><h2 id="how-does-kubestone-work">How does Kubestone work?</h2><p>At its core, Kubestone is implemented as a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/">Kubernetes Operator</a> written in <a href="https://golang.org/">Go</a> with the help of <a href="https://kubebuilder.io/">Kubebuilder</a>. You can find more info on the Operator Framework via <a href="https://appfleet.com/blog/first-steps-with-the-kubernetes-operator/">this blog post</a>. <br>Kubestone leverages open-source benchmarks to measure core Kubernetes and application performance. As benchmarks are executed in Kubernetes, they must be containerized to work on the cluster. A certified set of benchmark containers is provided via <a href="https://hub.docker.com/r/xridge/">xridge's DockerHub space</a>. Here is a list of currently supported benchmarks:</p><!--kg-card-begin: html--><style type="text/css">
  
.tftable tr:hover {background-color:#f2f2f2;}
</style>


<table class="tftable" border="0">
<tr><th>Type</th><th>Benchmark Name</th><th>Status</th></tr>
<tr><td>Core/CPU</td><td><a href="https://kubestone.io/en/latest/benchmarks/sysbench/">sysbench</a></td><td><a href="https://kubestone.io/en/latest/apidocs/#perf.kubestone.xridge.io/v1alpha1.SysbenchSpec">Supported</a></td></tr>
<tr><td>Core/Disk</td><td><a href="https://kubestone.io/en/latest/benchmarks/fio/">fio</a></td><td><a href="https://kubestone.io/en/latest/apidocs/#perf.kubestone.xridge.io/v1alpha1.FioSpec">Supported</a></td></tr>
<tr><td>Core/Disk</td><td><a href="https://kubestone.io/en/latest/benchmarks/ioping/">ioping</a></td><td><a href="https://kubestone.io/en/latest/apidocs/#perf.kubestone.xridge.io/v1alpha1.IopingSpec">Supported</a></td></tr>
<tr><td>Core/Memory</td><td><a href="https://kubestone.io/en/latest/benchmarks/sysbench/">sysbench</a></td><td><a href="https://kubestone.io/en/latest/apidocs/#perf.kubestone.xridge.io/v1alpha1.SysbenchSpec">Supported</a></td></tr>
<tr><td>Core/Network</td><td><a href="https://kubestone.io/en/latest/benchmarks/iperf3/">iperf3</a></td><td><a href="https://kubestone.io/en/latest/apidocs/#perf.kubestone.xridge.io/v1alpha1.Iperf3Spec">Supported</a></td></tr>
<tr><td>Core/Network</td><td><a href="https://kubestone.io/en/latest/benchmarks/qperf/">qperf</a></td><td><a href="https://kubestone.io/en/latest/apidocs/#perf.kubestone.xridge.io/v1alpha1.QperfSpec">Supported</a></td></tr>
<tr><td>HTTP Load Tester</td><td><a href="https://kubestone.io/en/latest/benchmarks/drill/">drill</a></td><td><a href="https://kubestone.io/en/latest/apidocs/#perf.kubestone.xridge.io/v1alpha1.DrillSpec">Supported</a></td></tr>
<tr><td>Application/Etcd</td><td>etcd</td><td><a href="https://github.com/xridge/kubestone/issues/15">Planned</a></td></tr>
<tr><td>Application/K8S</td><td>kubeperf</td><td><a href="https://github.com/xridge/kubestone/issues/14">Planned</a></td></tr>
<tr><td>Application/PostgreSQL</td><td><a href="https://kubestone.io/en/latest/benchmarks/pgbench/">pgbench</a></td><td><a href="https://kubestone.io/en/latest/apidocs/#perf.kubestone.xridge.io/v1alpha1.PgbenchSpec">Supported</a></td></tr>

<tr><td>Application/Spark</td><td>sparkbench</td><td><a href="https://github.com/xridge/kubestone/issues/83">Planned</a></td></tr>
</table>

<!--kg-card-end: html--><p>Let's install Kubestone, run a benchmark ourselves, and see how it works.</p><h2 id="installing-kubestone">Installing Kubestone</h2><h3 id="requirements">Requirements</h3><ul><li><a href="https://kubernetes.io/">Kubernetes</a> v1.13 (or newer)</li><li><a href="https://kustomize.io/">Kustomize v3.1.0</a></li><li>Cluster admin privileges</li></ul><p>Deploy Kubestone to the <code>kubestone-system</code> namespace with the following command:</p><pre><code>$ kustomize build github.com/xridge/kubestone/config/default | kubectl create -f -</code></pre><p>Once deployed, Kubestone will listen for Custom Resources created with the <code>kubestone.xridge.io</code> group.</p><h3 id="benchmarking">Benchmarking</h3><p>Benchmarks can be executed via Kubestone by creating Custom Resources in your cluster.</p><h3 id="namespace">Namespace</h3><p>It is recommended to create a dedicated namespace for benchmarking.</p><pre><code>$ kubectl create namespace kubestone</code></pre><p>After the namespace is created, you can use it to post a benchmark request to the cluster.</p><p>The resulting benchmark executions will reside in this namespace.</p><h3 id="custom-resource-rendering">Custom Resource rendering</h3><p>We will be using <a href="https://kustomize.io/">kustomize</a> to render the Custom Resource from the <a href="https://github.com/xridge/kubestone/tree/master/config/samples/fio/">GitHub repository</a>.</p><p>Kustomize takes a <a href="https://github.com/xridge/kubestone/blob/master/config/samples/fio/base/fio_cr.yaml">base yaml</a> and patches it with an <a href="https://github.com/xridge/kubestone/blob/master/config/samples/fio/overlays/pvc/patch.yaml">overlay file</a> to render the final yaml file, which describes the benchmark.</p><pre><code>$ kustomize build github.com/xridge/kubestone/config/samples/fio/overlays/pvc
</code></pre><p>The Custom Resource (rendered yaml) looks as follows:</p><pre><code>apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-sample
spec:
  cmdLineArgs: --name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M
  image:
    name: xridge/fio:3.13
  volume:
    persistentVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    volumeSource:
      persistentVolumeClaim:
        claimName: GENERATED</code></pre><p>When we create this resource in Kubernetes, the operator interprets it and creates the associated benchmark. The fields of the Custom Resource control how the benchmark will be executed:</p><ul><li><code>metadata.name</code>: Identifies the Custom Resource. Later, this can be used to query or delete the benchmark in the cluster.</li><li><code>cmdLineArgs</code>: Arguments passed to the benchmark. In this case we are providing the arguments to <strong>Fio</strong> (a filesystem benchmark). It instructs the benchmark to execute a random write test with a 4 MB block size and an overall transfer size of 256 MB.</li><li><code>image.name</code>: Describes the Docker Image of the benchmark. In the case of <a href="https://fio.readthedocs.io/">Fio</a>, we are using <a href="https://cloud.docker.com/u/xridge/repository/docker/xridge/fio">xridge's fio Docker Image</a>, which is built from <a href="https://github.com/xridge/fio-docker/">this repository</a>.</li><li><code>volume.persistentVolumeClaimSpec</code>: Given that Fio is a disk benchmark, we can set a <strong>PersistentVolumeClaim</strong> for the benchmark to be executed. The above setup instructs Kubernetes to take 1GB of space from the default StorageClass and use it for the benchmark.</li></ul><h2 id="running-the-benchmark">Running the benchmark</h2><p>Now that we understand the definition of the benchmark, we can try to execute it.</p><p><em>Note: Make sure you installed the kubestone operator and have it running before executing this step.</em></p><pre><code>$ kustomize build github.com/xridge/kubestone/config/samples/fio/overlays/pvc | kubectl create --namespace kubestone -f -
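A quick sanity check on the numbers in the <code>cmdLineArgs</code> above: writing 256 MB in 4 MB blocks means fio performs 64 write operations, which is the count that shows up later in the "issued rwts" line of the results.

```shell
# 256 MB total transfer size / 4 MB block size = 64 write I/Os
size_mb=256
block_mb=4
echo $(( size_mb / block_mb ))
```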
</code></pre><p>Since we pipe the output of the <code>kustomize build</code> command into <code>kubectl create</code>, it will create the object in our Kubernetes cluster.</p><p>The resulting object can be queried using the object's type (<code>fio</code>) and it's name (<code>fio-sample</code>):</p><pre><code>$ kubectl describe --namespace kubestone fio fio-sample
Name:         fio-sample
Namespace:    kubestone
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;
API Version:  perf.kubestone.xridge.io/v1alpha1
Kind:         Fio
Metadata:
  Creation Timestamp:  2019-09-14T11:31:02Z
  Generation:          1
  Resource Version:    31488293
  Self Link:           /apis/perf.kubestone.xridge.io/v1alpha1/namespaces/kubestone/fios/fio-sample
  UID:                 21cdbe92-d6e3-11e9-ba70-4439c4920abc
Spec:
  Cmd Line Args:  --name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M
  Image:
    Name:  xridge/fio:3.13
  Volume:
    Persistent Volume Claim Spec:
      Access Modes:
        ReadWriteOnce
      Resources:
        Requests:
          Storage:  1Gi
    Volume Source:
      Persistent Volume Claim:
        Claim Name:  GENERATED
Status:
  Completed:  true
  Running:    false
Events:
  Type    Reason           Age   From       Message
  ----    ------           ----  ----       -------
  Normal  Created  11s   kubestone  Created /api/v1/namespaces/kubestone/configmaps/fio-sample
  Normal  Created  11s   kubestone  Created /api/v1/namespaces/kubestone/persistentvolumeclaims/fio-sample
  Normal  Created  11s   kubestone  Created /apis/batch/v1/namespaces/kubestone/jobs/fio-sample</code></pre><p>As the <code>Events</code> section shows, Kubestone has created a <code>ConfigMap</code>, a <code>PersistentVolumeClaim</code> and a <code>Job</code> for the provided Custom Resource. The <code>Status</code> field tells us that the benchmark has completed.</p><h3 id="inspecting-the-benchmark">Inspecting the benchmark</h3><p>The created objects related to the benchmark can be listed using the <code>kubectl</code> command:</p><pre><code>$ kubectl get pods,jobs,configmaps,pvc --namespace kubestone
NAME                   READY   STATUS      RESTARTS   AGE
pod/fio-sample-bqqmm   0/1     Completed   0          54s

NAME                   COMPLETIONS   DURATION   AGE
job.batch/fio-sample   1/1           15s        54s

NAME                   DATA   AGE
configmap/fio-sample   0      54s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/fio-sample   Bound    pvc-b3898236-c698-11e9-8071-4439c4920abc   1Gi        RWO            rook-ceph-block   54s</code></pre><p>As shown above, the Fio controller has created a PersistentVolumeClaim and a ConfigMap, which are used by the Fio Job during benchmark execution. The Fio Job has an associated Pod which contains our test execution. The results of the run can be shown with the <code>kubectl logs</code> command:</p><pre><code>$ kubectl logs --namespace kubestone fio-sample-bqqmm
randwrite: (g=0): rw=randwrite, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=1
fio-3.13
Starting 1 process
randwrite: Laying out IO file (1 file / 256MiB)

randwrite: (groupid=0, jobs=1): err= 0: pid=47: Sat Aug 24 17:58:10 2019
  write: IOPS=470, BW=1882MiB/s (1974MB/s)(256MiB/136msec); 0 zone resets
    clat (usec): min=1887, max=2595, avg=2042.76, stdev=136.56
     lat (usec): min=1953, max=2688, avg=2107.35, stdev=142.94
    clat percentiles (usec):
     |  1.00th=[ 1893],  5.00th=[ 1926], 10.00th=[ 1926], 20.00th=[ 1958],
     | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040],
     | 70.00th=[ 2057], 80.00th=[ 2073], 90.00th=[ 2114], 95.00th=[ 2409],
     | 99.00th=[ 2606], 99.50th=[ 2606], 99.90th=[ 2606], 99.95th=[ 2606],
     | 99.99th=[ 2606]
  lat (msec)   : 2=34.38%, 4=65.62%
  cpu          : usr=2.22%, sys=97.78%, ctx=1, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, &gt;=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%
     issued rwts: total=0,64,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=1882MiB/s (1974MB/s), 1882MiB/s-1882MiB/s (1974MB/s-1974MB/s), io=256MiB (268MB), run=136-136msec

Disk stats (read/write):
  rbd7: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%</code></pre><h3 id="listing-benchmarks">Listing benchmarks</h3><p>We have learned that Kubestone uses Custom Resources to define benchmarks. We can list the installed custom resources using the <code>kubectl get crds</code> command:</p><pre><code>$ kubectl get crds | grep kubestone
drills.perf.kubestone.xridge.io         2019-09-08T05:51:26Z
fios.perf.kubestone.xridge.io           2019-09-08T05:51:26Z
iopings.perf.kubestone.xridge.io        2019-09-08T05:51:26Z
iperf3s.perf.kubestone.xridge.io        2019-09-08T05:51:26Z
pgbenches.perf.kubestone.xridge.io      2019-09-08T05:51:26Z
sysbenches.perf.kubestone.xridge.io     2019-09-08T05:51:26Z</code></pre><p>Using the CRD names above, we can list the executed benchmarks in the system.</p><p>Kubernetes provides a convenience feature regarding CRDs: one can use the shortened name of the CRD, which is the singular part of the fully qualified CRD name. In our case, <code>fios.perf.kubestone.xridge.io</code> can be shortened to <code>fio</code>. Hence, we can list the executed <code>fio</code> benchmark using the following command:</p><pre><code>$ kubectl get --namespace kubestone fios.perf.kubestone.xridge.io
NAME         RUNNING   COMPLETED
fio-sample   false     true</code></pre><h3 id="cleaning-up">Cleaning up</h3><p>After a successful benchmark run the resulting objects are stored in the Kubernetes cluster. Given that Kubernetes can hold a limited number of pods in the system, it is advisable to clean up old benchmark runs from time to time. This can be achieved by deleting the Custom Resource that initiated the benchmark:</p><pre><code>$ kubectl delete --namespace kubestone fio fio-sample
</code></pre><p>Since the Custom Resource has ownership of the created resources, the underlying pods, jobs, configmaps, pvcs, etc. are also removed by this operation.</p><h2 id="next-steps">Next steps</h2><p>Now that you are familiar with the key concepts of Kubestone, it is time to explore and benchmark. You can play around with the Fio benchmark via its <code>cmdLineArgs</code>, PersistentVolume, and scheduling-related settings. You can find more information about that on Fio's benchmark page. Hopefully you gained some valuable knowledge from this post!</p>]]></content:encoded></item><item><title><![CDATA[Enabling multicloud K8s communication with Skupper]]></title><description><![CDATA[<hr><h2 id="intro">Intro</h2><p>There are many challenges that engineering teams face when attempting to incorporate a multi-cloud approach into their infrastructure goals. Kubernetes does a good job of addressing some of these issues, but managing the communication of clusters that span multiple cloud providers in multiple regions can become a daunting task</p>]]></description><link>https://appfleet.com/blog/connecting-multiple-k8s-clusters-across-cloud-with-skupper/</link><guid isPermaLink="false">5e760fd739f27869d61f5a0b</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Skupper]]></category><dc:creator><![CDATA[James D. Bohrman]]></dc:creator><pubDate>Mon, 15 Jun 2020 10:06:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/04/22-Enabling-multicloud-K8s-communication-with-Skupper.png" medium="image"/><content:encoded><![CDATA[<hr><h2 id="intro">Intro</h2><img src="https://appfleet.com/blog/content/images/2020/04/22-Enabling-multicloud-K8s-communication-with-Skupper.png" alt="Enabling multicloud K8s communication with Skupper"><p>There are many challenges that engineering teams face when attempting to incorporate a multi-cloud approach into their infrastructure goals. 
Kubernetes does a good job of addressing some of these issues, but managing the communication of clusters that span multiple cloud providers in multiple regions can become a daunting task for teams. Often this requires complex VPNs and special firewall rules just to enable multi-cloud cluster communication. <br><br>In this post, I will be introducing you to Skupper, an open source project for enabling secure communication across Kubernetes clusters. Skupper allows your application to span multiple cloud providers, data centers, and regions. Let's see it in action!</p><h2 id="getting-started">Getting Started</h2><p>This tutorial will demonstrate how to distribute the <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow">Istio Bookinfo Application</a> microservices across multiple public and private clusters. The services require no coding changes to work in the distributed application environment. With Skupper, the application behaves as if all the services are running in the same cluster.</p><p>In this tutorial, you will deploy the <em>productpage</em> and <em>ratings</em> services on a remote, public cluster in namespace <code>aws-eu-west</code> and the <em>details</em> and <em>reviews</em> services in a local, on-premises cluster in namespace <code>laptop</code>.</p><h3 id="overview">Overview</h3><h5 id="figure-1-bookinfo-service-deployment">Figure 1 - Bookinfo service deployment</h5><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://github.com/skupperproject/skupper-example-bookinfo/raw/master/graphics/skupper-example-bookinfo-deployment.gif" class="kg-image" alt="Enabling multicloud K8s communication with Skupper"><figcaption>Service Deployment</figcaption></figure><p>The image above shows how the services will be deployed.</p><ul><li>Each cluster runs two of the application services.</li><li>An ingress route to the <em>productpage</em> service provides internet user access to the application.</li></ul><p>If all services were installed on 
the public cluster, then the application would work as originally designed. However, since two of the services are on the <em>laptop</em> cluster, the application fails. <em>productpage</em> can not send requests to <em>details</em> or to <em>reviews</em>.</p><p>This demo will show how Skupper can solve the connectivity problem presented by this arrangement of service deployments.</p><h5 id="figure-2-bookinfo-service-deployment-with-skupper">Figure 2 - Bookinfo service deployment with Skupper</h5><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://github.com/skupperproject/skupper-example-bookinfo/raw/master/graphics/skupper-example-bookinfo-details.gif" class="kg-image" alt="Enabling multicloud K8s communication with Skupper"><figcaption>Clusters post Skupper set-up</figcaption></figure><p></p><p>Skupper is a distributed system with installations running in one or more clusters or namespaces. Connected Skupper installations share information about what services each installation exposes. Each Skupper installation learns which services are exposed on every other installation. 
Skupper then runs proxy service endpoints in each namespace to properly route requests to or from every exposed service.</p><ul><li>In the public namespace, the <em>details</em> and <em>reviews</em> proxies intercept requests for their services and forward them to the Skupper network.</li><li>In the private namespace, the <em>details</em> and <em>reviews</em> proxies receive requests from the Skupper network and send them to the related service.</li><li>In the private namespace, the <em>ratings</em> proxy intercepts requests for its service and forwards them to the Skupper network.</li><li>In the public namespace, the <em>ratings</em> proxy receives requests from the Skupper network and sends them to the related service.</li></ul><h3 id="prerequisites">Prerequisites</h3><p>To run this tutorial you will need:</p><ul><li>The <code>kubectl</code> command-line tool, version 1.15 or later (<a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow">installation guide</a>)</li><li>The <code>skupper</code> command-line tool, the latest version (<a href="https://skupper.io/start/index.html#step-1-install-the-skupper-command-line-tool-in-your-environment" rel="nofollow">installation guide</a>)</li><li>Two Kubernetes namespaces, from any providers you choose, on any clusters you choose</li><li>The yaml files from <a href="https://github.com/skupperproject/skupper-examples-bookinfo.git">https://github.com/skupperproject/skupper-examples-bookinfo.git</a></li><li>Two logged-in console terminals, one for each cluster or namespace</li></ul><h2 id="step-1-deploy-the-bookinfo-application">Step 1: Deploy the Bookinfo application</h2><p>This step creates a service and a deployment for each of the four Bookinfo microservices.</p><p>Namespace <code>aws-eu-west</code>:</p><pre><code>$ kubectl apply -f public-cloud.yaml
service/productpage created
deployment.extensions/productpage-v1 created
service/ratings created
deployment.extensions/ratings-v1 created</code></pre><p>Namespace <code>laptop</code>:</p><pre><code>$ kubectl apply -f private-cloud.yaml 
service/details created
deployment.extensions/details-v1 created
service/reviews created
deployment.extensions/reviews-v3 created</code></pre><h2 id="step-2-expose-the-public-productpage-service">Step 2: Expose the public productpage service</h2><p>Namespace <code>aws-eu-west</code>:</p><pre><code>kubectl expose deployment/productpage-v1 --port 9080 --type LoadBalancer
</code></pre><p>The Bookinfo application is accessed from the public internet through this ingress port to the <em>productpage</em> service.</p><h2 id="step-3-observe-that-the-application-does-not-work">Step 3: Observe that the application does not work</h2><p>The web address for the Bookinfo application can be discovered from namespace <code>aws-eu-west</code>:</p><pre><code>$ echo $(kubectl get service/productpage -o jsonpath='http://{.status.loadBalancer.ingress[0].hostname}:9080')
</code></pre><p>Open the address in a web browser. <em>Productpage</em> responds but the page will show errors as services in namespace <code>laptop</code> are not reachable.</p><p>We can fix that now.</p><h2 id="step-4-set-up-skupper">Step 4: Set up Skupper</h2><p>This step initializes the Skupper environment on each cluster.</p><p>Namespace <code>laptop</code>:</p><pre><code>skupper init
</code></pre><p>Namespace <code>aws-eu-west</code>:</p><pre><code>skupper init
</code></pre><p>Now the Skupper infrastructure is running. Use <code>skupper status</code> in each console terminal to see that Skupper is available.</p><pre><code>$ skupper status
Namespace '&lt;ns&gt;' is ready.  It is connected to 0 other namespaces.
</code></pre><p>As you move through the steps that follow, you can use <code>skupper status</code> at any time to check your progress.</p><h2 id="step-5-connect-your-skupper-installations">Step 5: Connect your Skupper installations</h2><p>Now you need to connect your namespaces with a Skupper connection. This is a two-step process.</p><ul><li>The <code>skupper connection-token &lt;file&gt;</code> command directs Skupper to generate a secret token file with certificates that grant permission to other Skupper instances to connect to this Skupper's network. Note: Protect this file as you would any file that holds login credentials.</li><li>The <code>skupper connect &lt;file&gt;</code> command directs Skupper to connect to another Skupper's network. This step completes the Skupper connection.</li></ul><p>Note that in this arrangement the Skupper instances join to form peer networks. Typically the Skupper opening the network port will be on the public cluster. A cluster running on <code>laptop</code> may not even have an address that is reachable from the internet. After the connection is made, the Skupper network members are peers and it does not matter which Skupper opened the network port and which connected to it.</p><p>The console terminals in this demo are run by the same user on the same host. This makes the token file in the ${HOME} directory available to both terminals. If your terminals are on different machines then you may need to use <code>scp</code> or a similar tool to transfer the token file to the system hosting the <code>laptop</code> terminal.</p><h3 id="generate-a-skupper-network-connection-token">Generate a Skupper network connection token</h3><p>Namespace <code>aws-eu-west</code>:</p><pre><code>skupper connection-token ${HOME}/PVT-to-PUB-connection-token.yaml
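Since the token grants other Skupper installations permission to connect to your network, it is worth tightening its file permissions right after generating it. This is a generic POSIX precaution, not a Skupper requirement:

```shell
# Restrict the token file so only the owning user can read or write it
chmod 600 "${HOME}/PVT-to-PUB-connection-token.yaml"
```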
</code></pre><h3 id="open-a-skupper-connection">Open a Skupper connection</h3><p>Namespace <code>laptop</code>:</p><pre><code>skupper connect ${HOME}/PVT-to-PUB-connection-token.yaml
</code></pre><h3 id="check-the-connection">Check the connection</h3><p>Namespace <code>aws-eu-west</code>:</p><pre><code>$ skupper status
Skupper enabled for "aws-eu-west". It is connected to 1 other sites.
</code></pre><p>Namespace <code>laptop</code>:</p><pre><code>$ skupper status
Skupper enabled for "laptop". It is connected to 1 other sites.
</code></pre><h2 id="step-6-virtualize-the-services-you-want-shared">Step 6: Virtualize the services you want shared</h2><p>You now have a Skupper network capable of multi-cluster communication but no services are associated with it. This step uses the <code>kubectl annotate</code> command to notify Skupper that a service is to be included in the Skupper network.</p><p>Skupper uses the annotation as the indication that a service must be virtualized. The service that receives the annotation is the physical target for network requests and the proxies that Skupper deploys in other namespaces are the virtual targets for network requests. The Skupper infrastructure then routes requests between the virtual services and the target service.</p><p>Namespace <code>aws-eu-west</code>:</p><pre><code>$ kubectl annotate service ratings skupper.io/proxy=http
service/ratings annotated
</code></pre><p>Namespace <code>laptop</code>:</p><pre><code>$ kubectl annotate service details skupper.io/proxy=http
service/details annotated

$ kubectl annotate service reviews skupper.io/proxy=http
service/reviews annotated
</code></pre><p>Skupper is now making the annotated services available to every namespace in the Skupper network. The Bookinfo application will work as the <em>productpage</em> service on the public cluster has access to the <em>details</em> and <em>reviews</em> services on the private cluster and as the <em>reviews</em> service on the private cluster has access to the <em>ratings</em> service on the public cluster.</p><h2 id="step-7-observe-that-the-application-works">Step 7: Observe that the application works</h2><p>The web address for the Bookinfo app can be discovered from namespace <code>aws-eu-west</code>:</p><pre><code>$ echo $(kubectl get service/productpage -o jsonpath='http://{.status.loadBalancer.ingress[0].hostname}:9080')
</code></pre><p>Open the address in a web browser. The application should now work with no errors.</p><h2 id="clean-up">Clean up</h2><p>Skupper and the Bookinfo services may be removed from the clusters.</p><p>Namespace <code>aws-eu-west</code>:</p><pre><code>skupper delete
kubectl delete -f public-cloud.yaml
</code></pre><p>Namespace <code>laptop</code>:</p><pre><code>skupper delete
kubectl delete -f private-cloud.yaml </code></pre><h2 id="final-thoughts">Final Thoughts</h2><p>Enabling a multi-cloud approach has a lot of benefits and is getting easier, thanks to tools like Skupper. If you have time, try some of Skupper's other examples on its <a href="https://github.com/skupperproject">GitHub repo</a>. I hope you learned something from this post. Stay tuned for more!</p>]]></content:encoded></item><item><title><![CDATA[Optimize Ghost Blog Performance Including Rewriting Image Domains to a CDN]]></title><description><![CDATA[<p>The Ghost blogging platform offers a lean and minimalist experience. And that's why we love it. But unfortunately, sometimes it can be too lean for our requirements. </p><p>Web performance has become more important and relevant than ever, especially since Google started including it as a parameter in its SEO rankings.</p>]]></description><link>https://appfleet.com/blog/optimize-ghost-blog-performance-including-rewriting-image-domains-to-a-cdn/</link><guid isPermaLink="false">5ed4c20a6dc1db2ec7d79494</guid><category><![CDATA[DevOps]]></category><category><![CDATA[Performance]]></category><dc:creator><![CDATA[Dmitriy A.]]></dc:creator><pubDate>Mon, 08 Jun 2020 09:31:25 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/06/88-Optimize-Ghost-Blog-Performance-Including-Rewriting-Image-Domains-to-a-CDN.png" medium="image"/><content:encoded><![CDATA[<img src="https://appfleet.com/blog/content/images/2020/06/88-Optimize-Ghost-Blog-Performance-Including-Rewriting-Image-Domains-to-a-CDN.png" alt="Optimize Ghost Blog Performance Including Rewriting Image Domains to a CDN"><p>The Ghost blogging platform offers a lean and minimalist experience. And that's why we love it. But unfortunately, sometimes it can be too lean for our requirements. </p><p>Web performance has become more important and relevant than ever, especially since Google started including it as a parameter in its SEO rankings. 
We make sure to optimize our websites as much as possible, offering the best possible user experience. This article will walk you through the steps you can take to optimize a Ghost Blog's performance while keeping it lean and resourceful. </p><p>When we started working on the <a href="https://appfleet.com/blog">appfleet blog</a> we began with a few simple things:</p><h3 id="ghost-responsive-images">Ghost responsive images</h3><p>The featured image in a blog post has lots of parameters, which is a good thing. For example, you can set multiple sizes in <code>package.json</code> and have Ghost automatically resize them for a responsive experience for users on mobile devices or smaller screens.</p><pre><code>"config": {
		"posts_per_page": 10,
		"image_sizes": {
			"xxs": {
				"width": 30
			},
			"xs": {
				"width": 100
			},
			"s": {
				"width": 300
			},
			"m": {
				"width": 600
			},
			"l": {
				"width": 900
			},
			"xl": {
				"width": 1200
			}
		}
}</code></pre><p>And then all you have to do is update the theme's code:</p><pre><code>&lt;img class="feature-image"
    srcset="{{img_url feature_image size="s"}} 300w,
            {{img_url feature_image size="m"}} 600w,
            {{img_url feature_image size="l"}} 900w,
            {{img_url feature_image size="xl"}} 1200w"
    sizes="800px"
    src="{{img_url feature_image size="l"}}"
    alt="{{title}}"
/&gt;</code></pre><h3 id="common-html-tags-for-performance">Common HTML tags for performance</h3><p>Next we take a few simple steps to optimize <em>Asset Download Time</em>. That includes adding <code>preconnect</code> and <code>preload</code> tags in <code>default.hbs</code>:</p><pre><code>&lt;link rel="preconnect" href="https://fonts.gstatic.com/" crossorigin="anonymous"&gt;
&lt;link rel="preconnect" href="https://cdn.jsdelivr.net/" crossorigin="anonymous"&gt;
&lt;link rel="preconnect" href="https://widget.appfleet.com/" crossorigin="anonymous"&gt;

&lt;link rel="preload" as="style" href="https://fonts.googleapis.com/css?family=Red+Hat+Display:400,500,700&amp;display=swap" /&gt;
&lt;link rel="preload" as="style" href="https://cdn.jsdelivr.net/npm/@fortawesome/fontawesome-free@5.13.0/css/all.min.css" /&gt;</code></pre><p>As we load many files from <a href="https://www.jsdelivr.com">jsDelivr</a> to improve our performance, we instruct the browser to establish a connection with the domain as soon as possible. Same goes for Google Fonts and the sidebar widget that was custom coded.</p><p>Most often than not, users coming from Google or some other source to a specific blog post will navigate to the homepage to check what else we have written. For the same reason, on blog posts we also added <code>prefetch</code> and <code>prerender</code> tags for the main blog page. </p><p>That way the browser will asynchronously download and cache it, making the next most probable action of the user almost instant:</p><pre><code>&lt;link rel="prefetch" href="https://appfleet.com/blog"&gt;
&lt;link rel="prerender" href="https://appfleet.com/blog"&gt;</code></pre><hr><p>Now these optimizations definitely helped but we still had a big problem. Our posts often have many screenshots and images in them, eventually impacting the page load time. </p><p>To solve this problem we took two steps. Lazy load the images and use a CDN. The issue is that Ghost doesn't allow to modify or filter the contents of the post. All you can do is output the HTML.</p><p>The easiest solution to this is to use a dynamic content CDN like <a href="https://www.cloudflare.com/">Cloudflare</a>. A CDN will proxy the whole site, won't cache the HTML, but cache all static content like images. They also have an option to lazy load all images by injecting their own Javascript.</p><p>But we didn't want to use Cloudflare in this case. And didn't feel like injecting third-party JS to lazy load the images either. So what did we do?</p><h3 id="nginx-to-the-rescue-">Nginx to the rescue!</h3><p>Our blog is hosted on a <a href="https://www.digitalocean.com/">DigitalOcean</a> droplet created using its marketplace apps. It's basically an Ubuntu VM that comes pre-installed with Node.js, NPM, Nginx and Ghost.</p><blockquote>Note that even if you don't use DigitalOcean, you are still recommended to use Nginx in-front of the Node.js app of Ghost.</blockquote><p>This eventually makes the solution pretty simple. We use Nginx to rewrite the HTML, along with enabling a CDN and lazy-loading images at the same time, without any extra JS.</p><p>For CDN, you may also use the free CDN offered by Google to all AMP projects. Not many people are aware that you can use it as a regular CDN without actually implementing AMP. </p><p>All you have to do is use this URL in front of your images:</p><p><code>https://appfleet-com.cdn.ampproject.org/i/s/appfleet.com/</code></p><p>Replace the domains with your own and change your <code>&lt;img&gt;</code> tags, and you are done. 
All images are now served through Google's CDN.</p><p>The best part is that the images are not only served but optimized as well. Additionally, the CDN will even serve a WebP version of the image when possible, further improving the performance of your site.</p><p>As for lazy loading, you may use the native functionality of modern browsers, which looks like this: <code>&lt;img loading="lazy"</code>. By adding <code>loading="lazy"</code> to all images, you instruct the browser to automatically lazy load them as they become visible to the user.</p><p>And now the code itself to achieve this:</p><pre><code>server {
    listen 80;

    server_name NAME;

    location ^~ /blog/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host       "appfleet.com";
        proxy_set_header        X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:2368;
        proxy_redirect off;
		
        #disable compression 
        proxy_set_header Accept-Encoding "";
        #rewrite the html
        sub_filter_once off;
        sub_filter_types text/html;
        sub_filter '&lt;img src="https://appfleet.com' '&lt;img loading="lazy" src="https://appfleet-com.cdn.ampproject.org/i/s/appfleet.com';
    }

}</code></pre><p>First we disable compression between Node.js and Nginx; otherwise Nginx can't modify the HTML, because it arrives in compressed form. </p><p>Next we use the <code>sub_filter</code> directive to rewrite the HTML. Ghost uses absolute URLs for images, so we include the domain in the match as well. In one line we have enabled both the CDN and lazy loading.</p><p>Reload the config and you are good to go. Check our blog to see this in real time. </p>]]></content:encoded></item><item><title><![CDATA[Local Kubernetes testing with KIND]]></title><description><![CDATA[<hr><h2 id="intro">Intro</h2><p>If you've spent days (or even weeks?) trying to spin up a Kubernetes cluster for learning purposes or to test your application, then your worries are over. Spawned from a Kubernetes Special Interest Group, KIND is a tool that provisions a Kubernetes cluster running IN Docker. </p><p>From the docs:</p>]]></description><link>https://appfleet.com/blog/local-kubernetes-testing-with-kind/</link><guid isPermaLink="false">5e672b2039f27869d61f4bfe</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[James D. Bohrman]]></dc:creator><pubDate>Thu, 04 Jun 2020 12:34:00 GMT</pubDate><media:content url="https://appfleet.com/blog/content/images/2020/04/21-Local-Kubernetes-testing-with-KIND.png" medium="image"/><content:encoded><![CDATA[<hr><h2 id="intro">Intro</h2><img src="https://appfleet.com/blog/content/images/2020/04/21-Local-Kubernetes-testing-with-KIND.png" alt="Local Kubernetes testing with KIND"><p>If you've spent days (or even weeks?) trying to spin up a Kubernetes cluster for learning purposes or to test your application, then your worries are over. Spawned from a Kubernetes Special Interest Group, KIND is a tool that provisions a Kubernetes cluster running IN Docker. 
</p><p>From the docs:</p><blockquote><code>kind</code> is a tool for running local Kubernetes clusters using Docker container "nodes".<br><code>kind</code> is primarily designed for testing Kubernetes 1.11+, initially targeting the <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/conformance-tests.md">conformance tests</a>.</blockquote><h2 id="installing-kind">Installing KIND</h2><p>As it is built using Go, you will need to make sure you have a recent version of <code>golang</code> installed on your machine. </p><p>According to the k8s <a href="https://kind.sigs.k8s.io/docs/contributing/getting-started/" rel="noopener">docs</a>, Go <code>1.11.5</code> or later is preferred. To install kind, run these commands (it takes a while):</p><pre><code>go get -u sigs.k8s.io/kind
kind create cluster</code></pre><p>Then confirm <code>kind</code> cluster is available:</p><pre><code>kind get clusters
</code></pre><h2 id="setting-up-kubectl">Setting up kubectl</h2><p>Also, install the latest <code>kubernetes-cli</code> using <a href="https://brew.sh/" rel="noopener">Homebrew</a> or <a href="https://chocolatey.org/" rel="noopener">Chocolatey</a>.<br>The latest Docker includes a Kubernetes feature, but it may come with an older <code>kubectl</code>. Check its version by running this command:</p><pre><code>kubectl version
</code></pre><p>Make sure it shows <code>GitVersion: "v1.14.1"</code> or above.<br>If you find you are running <code>kubectl</code> from Docker, try <code>brew link</code> or reorder your PATH environment variable.</p><p>Once <code>kubectl</code> and kind are ready, open a bash console and run these commands:</p><pre><code>export KUBECONFIG="$(kind get kubeconfig-path)"
kubectl cluster-info</code></pre><p>If <code>kind</code> is properly set up, the cluster information will be shown.<br>Now you are ready to proceed. Yay!</p><h2 id="deploying-first-application">Deploying first application</h2><p>What should we deploy on the cluster? We are going to attempt deploying Cassandra, since the docs have a pretty decent walk-through on it. </p><p>First of all, download <code><a href="https://kubernetes.io/examples/application/cassandra/cassandra-service.yaml">cassandra-service.yaml</a></code> and <code><a href="https://kubernetes.io/examples/application/cassandra/cassandra-statefulset.yaml">cassandra-statefulset.yaml</a></code> for later. Then create a <code>kustomization.yaml</code> that references them (e.g. with a <code>cat</code> heredoc).<br>Once those <code>yaml</code> files are prepared, lay them out as follows:</p><pre><code>cassandra/
  kustomization.yaml
  cassandra-service.yaml
  cassandra-statefulset.yaml</code></pre><p>Then apply them to your cluster:</p><pre><code>cd cassandra
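# A sketch of the kustomization.yaml creation step mentioned above
# (contents assumed from the upstream Cassandra example; adjust to your manifests):

```shell
# Write a kustomization.yaml that lists the two downloaded manifests as resources
cat <<EOF >./kustomization.yaml
resources:
  - cassandra-service.yaml
  - cassandra-statefulset.yaml
EOF
```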
kubectl apply -k ./</code></pre><h3 id="validating-optional-">Validating (optional)</h3><p>Get the Cassandra Service.</p><pre><code class="language-shell">kubectl get svc cassandra</code></pre><p>The response is:</p><pre><code>NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   ClusterIP   None         &lt;none&gt;        9042/TCP   45s
</code></pre><p>Note that if anything else is returned, the Service creation might have failed. Read <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/">Debug Services</a> for common issues.</p><h2 id="finishing-up">Finishing up</h2><p>That's really all you need to know to get started with KIND. I hope this makes your life a little easier and lets you play with Kubernetes a little bit more :)</p>]]></content:encoded></item></channel></rss>