Tuesday, 11 February 2014

Zero-Downtime Deployment for Grails Applications

Often, it's okay to have a (short) downtime when deploying a new version of your application. But my recent customer runs a time-critical, round-the-clock business. Downtime is very critical: there is only one short window of time for deployment per day. In this context, continuous deployment is not an option, which limits the level of support and the possibilities for feedback.

The solution is Blue/Green Deployment: one deploys the new version to an offline instance and, once it is up, moves the incoming traffic from the old version to the new one. I adapted a solution from Jakub Holy.
There are several options for deploying different versions of an application in parallel to Tomcat. I want to discuss them briefly:

Different context roots

Deploying to different context roots within the same Tomcat container, e.g. localhost:8080/version1 and localhost:8080/version2.

Pros

  • No changes to the Tomcat installation or configuration

Cons

  • Requires URL rewriting by the reverse proxy, which is harder to configure (see the sketch below).
  • Very likely, the Tomcat instance will run out of memory (PermGen) sooner or later, due to classloader leaks on redeployment, and there is no possibility to restart the instance without downtime.
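
Just to illustrate the first point: a path-prefix rewrite in haproxy could look roughly like this (a sketch in the 1.4/1.5-era syntax; the backend name and the /version1 prefix are made up for the example):

    # Route /version1/* to its backend and strip the prefix before forwarding.
    # Backend name and path prefix are illustrative assumptions.
    frontend http-in
        bind *:80
        acl is_v1 path_beg /version1
        use_backend tomcat_v1 if is_v1

    backend tomcat_v1
        reqrep ^([^\ :]*)\ /version1/(.*)     \1\ /\2
        server tomcat1 127.0.0.1:8080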

Different Tomcat listeners

One can start multiple listeners within the same Tomcat container, serving the applications on different ports, e.g. localhost:8080 and localhost:8081.

Pros

  • No changes to the Tomcat installation (startup scripts, default environment variables and paths).

Cons

  • Some changes to the Tomcat config file, server.xml, are necessary (see the sketch below).
  • Very likely, the Tomcat instance will run out of memory (PermGen) sooner or later, due to classloader leaks on redeployment, and there is no possibility to restart the instance without downtime.
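
For reference, such a change could look roughly like a second <Service> element in server.xml; the names, the port, and the appBase are assumptions for illustration:

    <!-- Sketch: a second Service serving a separate webapps directory on its -->
    <!-- own port. Service/Engine names, port, and appBase are assumptions.   -->
    <Service name="Catalina2">
      <Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000"/>
      <Engine name="Catalina2" defaultHost="localhost">
        <Host name="localhost" appBase="webapps2"
              unpackWARs="true" autoDeploy="true"/>
      </Engine>
    </Service>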

Different Tomcat instances

Last, but definitely not least, there is the "big" solution: start two completely separate Tomcat instances.

Pros

  • It is possible to restart the offline Tomcat instance without any downtime.
  • This enables repeated deployments without eventually running out of memory.

Cons

  • Requires many changes to the system configuration, because every configuration artifact must exist twice: you need two startup scripts, two Catalina home directories, two server.xml and context.xml files, two logging directories, and so on.

Since this is the only option that allows real zero-downtime operation, I chose it.
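
As a sketch of what this duplication amounts to, assuming a Debian-style tomcat6 layout where both instances share one CATALINA_HOME and each gets its own CATALINA_BASE (the paths match the deployment script further down):

    # Sketch: run two Tomcat instances from one installation.
    # CATALINA_HOME holds the shared binaries; each CATALINA_BASE holds
    # instance-specific conf/, logs/ and webapps/ with distinct ports in server.xml.
    export CATALINA_HOME=/usr/share/tomcat6
    export CATALINA_BASE=/var/lib/tomcat6-blue    # or /var/lib/tomcat6-green
    $CATALINA_HOME/bin/catalina.sh start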

Session Handling

The last problem to tackle is session handling. By default, session information such as logins is limited to one application instance. If every deployment requires the users to log in again, zero downtime will result in zero acceptance, too. The solution to this problem is clustering the two Tomcat instances.
This requires a few changes to the application itself: the application must be marked as 'distributable'. The simplest way to achieve this is to create a deployment descriptor template in src/templates/war/web.xml:
    <web-app ...>
      <display-name>/@grails.project.key@</display-name>
     
      <!-- Add this line -->
      <distributable />
      ...
    </web-app>
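
If that template does not exist in your project yet, Grails can generate it for you (assuming Grails 2.x):

    grails install-templates
    # creates src/templates/war/web.xml, among other templates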
In addition, clustering must be activated in Tomcat's server.xml:
    <Server port="8005" shutdown="SHUTDOWN">
      <!-- ... -->
      <Service name="Catalina">
        <!-- ... -->
        <Engine name="Catalina" defaultHost="localhost">

          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                   channelSendOptions="8">

            <Manager className="org.apache.catalina.ha.session.DeltaManager"
                     expireSessionsOnShutdown="false"
                     notifyListenersOnReplication="true"/>

            <Channel className="org.apache.catalina.tribes.group.GroupChannel">
              <Membership className="org.apache.catalina.tribes.membership.McastService"
                          address="228.0.0.4"
                          port="45564"
                          frequency="500"
                          dropTime="3000"/>
              <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                        address="auto"
                        port="4000"
                        autoBind="100"
                        selectorTimeout="5000"
                        maxThreads="6"/>

              <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
              </Sender>

              <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
              <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
            </Channel>

            <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
            <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

            <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
            <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
          </Cluster>

        </Engine>
      </Service>
    </Server>
This configuration enables session replication: cluster membership is discovered via multicast, and the actual session data is replicated over TCP by the DeltaManager. (Because of autoBind="100", the second instance on the same machine simply binds the next free receiver port, so both instances can run the identical configuration.) There are alternatives where session information is persisted to disk, which would enable failover and recovery from crashes. But for my scenario, just two Tomcat instances on the same machine, direct TCP synchronization seems sufficient.
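
For completeness, the disk-persistence alternative could look roughly like this in an application's context.xml (a sketch, not used in this setup; the store directory is an assumption):

    <!-- Sketch: swap sessions to disk instead of replicating them. -->
    <Context>
      <Manager className="org.apache.catalina.session.PersistentManager"
               saveOnRestart="true">
        <Store className="org.apache.catalina.session.FileStore"
               directory="/var/cache/tomcat6/sessions"/>
      </Manager>
    </Context>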

Moving from Blue to Green

Finally, incoming requests have to be routed to the active Tomcat instance. In my setup, that's the job of haproxy. As described in the documentation, one can configure haproxy to forward incoming requests to either of several backends.
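
The two configuration variants could look roughly like this (a sketch; the file names and Tomcat ports are the ones used by the deployment script below, while the frontend port and the timeouts are assumptions):

    # haproxy.blue.cfg (sketch): all traffic goes to the BLUE Tomcat on port 8080.
    # haproxy.green.cfg is identical except that the server line points to 8081.
    global
        daemon
        maxconn 256

    defaults
        mode http
        timeout connect 5s
        timeout client  50s
        timeout server  50s

    frontend http-in
        bind *:80
        default_backend tomcat-blue

    backend tomcat-blue
        server blue 127.0.0.1:8080 maxconn 128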
To simplify the process of deployment and reconfiguration of haproxy, I developed a little Bash script:
    #!/bin/bash
    if [ $# -ne 1 ]; then
      echo "Usage: $0 <war-file>"
      exit 1
    fi

    set -e
    retry=60
    war_file=$1

    # Determine which environment currently receives traffic and derive
    # the parameters of the offline (target) environment.
    current_link=$(readlink /etc/haproxy/haproxy.cfg)
    if [ "$current_link" = "./haproxy.green.cfg" ]; then
      current_environment="GREEN"
      target_environment="BLUE"
      target_service="tomcat6-blue"
      target_port="8080"
      target_webapps="/var/lib/tomcat6-blue/webapps"
      target_config_file="./haproxy.blue.cfg"
    elif [ "$current_link" = "./haproxy.blue.cfg" ]; then
      current_environment="BLUE"
      target_environment="GREEN"
      target_service="tomcat6-green"
      target_port="8081"
      target_webapps="/var/lib/tomcat6-green/webapps"
      target_config_file="./haproxy.green.cfg"
    else
      echo "Cannot determine current environment from '$current_link'"
      exit 1
    fi
    echo "haproxy is connected to $current_environment backend"

    # Undeploy the old version from the offline instance and restart the
    # service to get a fresh JVM (this avoids PermGen exhaustion over time).
    curl --user deployer:supersecret "http://localhost:$target_port/manager/undeploy?path=/"
    service "$target_service" stop

    # Deploy the new WAR and wait until the application responds.
    cp --verbose "$war_file" "$target_webapps/ROOT.war"
    service "$target_service" start
    until curl --head --fail --max-time 10 "http://localhost:$target_port/"; do
      if [ $retry -le 0 ]; then
        echo "$war_file was not deployed successfully within retry limit"
        exit 1
      fi
      echo "Waiting 5 secs for successful deployment"
      sleep 5
      echo "$((--retry)) attempts remaining"
    done

    # Switch haproxy over to the freshly deployed environment.
    ln --symbolic --force --no-target-directory --verbose "$target_config_file" /etc/haproxy/haproxy.cfg
    service haproxy reload

Putting everything together

Finally, I collected all of the configuration, scripts, and so on into a Chef cookbook, forked from the original Tomcat cookbook. I provide a GitHub repository that helps you set up a virtual machine with Vagrant and the described Tomcat / haproxy configuration:
    git clone https://github.com/andreassimon/zero-downtime.git
    cd zero-downtime
    bundle install
    librarian-chef install
    vagrant up
Copy your WAR file into the project directory, and deploy it to the virtual machine:
    cp /home/foo/your-war-file.war .
    vagrant ssh
    sudo -i
    deploy-war /vagrant/your-war-file.war
Now you can access the application in the virtual machine from your host browser via http://localhost:8080/.
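
If you want to convince yourself that the switch really is free of downtime, run a probe loop like this in a second shell while you re-deploy (a sketch; the port is the forwarded one from above):

    # Continuously probe the application; any gap in availability prints DOWNTIME.
    while true; do
      curl --silent --head --fail --max-time 2 http://localhost:8080/ > /dev/null \
        || echo "DOWNTIME at $(date)"
      sleep 1
    done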
