How Builds, Deployments and Services Work in OpenShift V3


As promised last time, I’m going to cover some new capabilities that have just dropped in OpenShift v3. If you haven’t read part one of this series, you might want to go back and do so now so you’re familiar with the building blocks I’m going to be expanding on. Unlike last time, I’m not going to walk through all the deployment steps; however, there are current step-by-step instructions for the sample application located here, and they demonstrate the features I’m going to talk about.

Source-to-Image Builds

The first new feature I want to talk about is Source-to-Image (STI) builds. In the last article I showed how OpenShift can build your application by performing a docker build on a Dockerfile that you supply. Although that’s an extremely flexible way to define an application build process, we want to provide a developer-centric flow that focuses on turning your source code into a running application as simply as possible. Source-to-Image is a project we started to make it easy to take source code and combine it with an image that contains both a build and runtime environment for that source code (called a “builder image”). Having a strong separation between source code (or even binary artifacts like WARs or EARs in Java) and the runtime environment in the Docker image helps migrate your code between runtime environments like Tomcat and other JEE servers, across major versions of a runtime like Ruby 1.9 and Ruby 2.0, or even across operating system versions like CentOS and Red Hat Enterprise Linux.

The builder image provides the language runtime/framework for your application (e.g. a JEE application server, a Ruby runtime environment) and the build tools needed to assemble applications (e.g. Maven).

Source-to-Image scripts work in conjunction with a builder image to provide the logic to assemble your application into whatever form and directory structure the runtime consumes. The scripts also know how to launch the runtime when the Docker image is started.

Finally, your application source is what you’d expect. You provide your application source code in a structure that the STI scripts can consume; that structure will normally map to the standard layout for the project type. For example, if you’re creating a Java application, your source will be in a standard Java package directory structure and include a pom.xml for Maven builds. When developing STI, we also wanted to support binary deployments, so your scripts can accept a prebuilt artifact like a WAR or RubyGem in place of source and deploy that instead.
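
For illustration, a minimal Ruby source repository that the Ruby builder could consume might look like the following (a typical Rack layout; the exact files are dictated by your framework, not by STI, and the file names here are illustrative):

$ ls ruby-hello-world/
Gemfile        # dependencies that 'bundle install' will resolve
Gemfile.lock
config.ru      # Rack entry point the run script can launch
app.rb         # application code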

Let’s take a closer look at one of the builder images that exist today for Ruby. You can view the repository here: https://github.com/openshift/ruby-20-centos/

As you can see, the builder repository starts with a Dockerfile that defines a basic Docker image with a few dependencies installed, including the Ruby runtime.

It also defines an optional STI_SCRIPTS_URL which will tell the STI tool where to get the STI scripts from during build time. In this example the STI scripts are also located in the builder image repository, but it is also possible to separate the two. In this way, one can take an arbitrary existing Docker image (perhaps created by a third party) and create STI scripts for it. When invoking STI, you would then provide the image name and the scripts URL separately and STI will combine them. Let’s look at those STI scripts.
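
To make that concrete, here is what a direct STI invocation looks like. The positional form (source, builder image, output tag) is taken from the build log shown later in this article; the flag for supplying a separate scripts URL is an assumption about this era’s CLI, so check sti --help for the exact option name:

# Scripts baked into (or referenced by) the builder image:
sti build git://github.com/openshift/ruby-hello-world.git \
    openshift/ruby-20-centos \
    openshift/origin-ruby-sample:latest

# Hypothetical: combine a third-party image with externally hosted scripts
# (flag name is an assumption; verify against your sti version):
sti build git://github.com/openshift/ruby-hello-world.git \
    thirdparty/some-image \
    my/app-image:latest \
    --scripts=https://example.com/sti/ruby-scripts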

Assemble

The assemble script is the build logic for organizing some Ruby source into a runnable form. Notice how it will perform Rails compilation steps if the application calls for it. It also performs a bundle install operation to pull down any necessary dependencies. The actions this script takes can vary widely, but will often resemble the common build or preparation actions for a given language.
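
As a rough sketch, a minimal assemble script for a Ruby builder might look like this (simplified; the real script in ruby-20-centos does more, and the /tmp/src source location is an assumption for illustration):

#!/bin/bash -e
# assemble: turn the injected source into a runnable application.
# STI places the application source at a location defined by the
# builder image; /tmp/src is assumed here for illustration.
cp -R /tmp/src/. ./

# Pull down the dependencies declared in the Gemfile.
bundle install --deployment

# Illustrative check: precompile assets if this looks like a Rails app.
if grep -qs rails Gemfile; then
  bundle exec rake assets:precompile
fi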

Run

The run script will become the startup command for your application image after STI is done constructing it. In this case, the run script launches the application using either the Puma or Rack Ruby application servers.
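
A correspondingly minimal run sketch (again simplified from what the real builder does):

#!/bin/bash -e
# run: becomes the startup command of the built application image.
# Use exec so the server replaces this shell and receives signals directly.
if grep -qs puma Gemfile.lock; then
  exec bundle exec puma
else
  exec bundle exec rackup -o 0.0.0.0
fi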

Save-Artifacts

The save-artifacts script is used to preserve dependencies between successive builds. For example, the Ruby builder will extract all gem dependencies that were downloaded into the previous application image. STI will then inject them into the new application image before downloading dependencies. This allows the build process to skip over unchanged dependencies when rebuilding your application either due to application code changes, or because the underlying base image has been updated due to security fixes.
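
save-artifacts works by streaming the reusable files to stdout as a tar archive, which STI captures from the previous image and unpacks into the next build. A minimal sketch, assuming gems were installed under vendor/bundle by 'bundle install --deployment':

#!/bin/bash -e
# save-artifacts: write a tar stream of dependencies worth carrying
# into the next build; STI restores it before assemble runs.
tar cf - vendor/bundle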

So why would you, as an application developer, want to use this? We had a few goals for STI:

  • Image flexibility: STI allows you to use almost any existing Docker image as the base for your application. STI scripts can be written to layer application code onto almost any existing Docker image, so you can take advantage of the existing ecosystem. (Why only “almost” all images? Currently STI relies on tar/untar to inject application source so the image needs to be able to process tarred content.)
  • Speed: Adding layers as part of a Dockerfile can be slow. With STI the assemble process can perform a large number of complex operations without creating a new layer at each step. In addition, STI scripts can be written to re-use dependencies stored in a previous version of the application image rather than re-downloading them each time the build is run.
  • Patchability: If an underlying image needs to be patched due to a security issue, OpenShift can use STI to rebuild your application on top of the patched builder image.
  • Operational efficiency: By restricting build operations instead of allowing arbitrary actions such as in a Dockerfile, the PaaS operator can avoid accidental or intentional abuses of the build system.
  • Operational security: Allowing users to build arbitrary Dockerfiles exposes the host system to root privilege escalation by a malicious user, because the entire docker build process runs as a user with docker privileges. STI restricts the operations performed as root and can run the scripts as an individual user.
  • User efficiency: STI prevents developers from falling into a trap of performing arbitrary “yum install” type operations during their application build, which would result in slow development iteration.
  • Ecosystem: Encourages a shared ecosystem of images with best practices you can leverage for your applications.

To use STI instead of Docker as your build mechanism in OpenShift, you just need to change a few lines of the configuration JSON we looked at last time:

{
  "id": "ruby-sample-build",
  "kind": "BuildConfig",
  "apiVersion": "v1beta1",
  "parameters": {
    "source": {
      "type": "Git",
      "git": {
        "uri": "git://github.com/openshift/ruby-hello-world.git"
      }
    },
    "strategy": {
      "type": "STI",
      "stiStrategy": {
        "builderImage": "openshift/ruby-20-centos"
      }
    },
    "output": {
      "imageTag": "openshift/origin-ruby-sample:latest",
      "registry": "172.121.17.1:5001"
    }
  },
  "secret": "secret101",
  "labels": {
    "name": "ruby-sample-build"
  }
}

Specifically, the strategy type is “STI” and we point to an STI builder image, in this case openshift/ruby-20-centos. We also have WildFly and Node.js STI builders available today.

With just those changes we’ve swapped out the build mechanism, and you should notice faster builds, particularly on subsequent updates to your application. Note that the ruby-hello-world sample repository still contains a Dockerfile so it can be built with either build type, but the Dockerfile is not needed for the STI build itself.
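
If you’re following the sample’s step-by-step instructions, re-applying the modified template is all it takes. The exact invocation below is an assumption based on the sample-app instructions of this era; consult them for the current syntax:

# Process the template's parameters, then apply the result to the server:
openshift kube process -c application-template-stibuild.json | openshift kube apply -c -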

Build Logs

In addition to the new build type, we’ve added a command to allow you to easily view the build logs regardless of which build type you’ve used. While running a build or after it completes, you can view the build logs with the following command:

openshift kube buildLogs --id=[buildID]

The buildID is the value seen in the first column when running

$ openshift kube list builds

for example:

ID                                   Status     Pod ID
----------                           ---------- ----------
639b5067-69f4-11e4-b598-3c970e3bf0b7 complete   build-docker-20f54507-3dcd-11e4-984b-3c970e3bf0b7

Example build log from an STI type build:

$ openshift kube buildLogs --id=639b5067-69f4-11e4-b598-3c970e3bf0b7
2014-11-11T22:45:15.292127394Z + DOCKER_SOCKET=/var/run/docker.sock
2014-11-11T22:45:15.292171820Z + '[' '!' -e /var/run/docker.sock ']'
2014-11-11T22:45:15.292171820Z + TAG=openshift/origin-ruby-sample:latest
2014-11-11T22:45:15.292171820Z + '[' -n 172.121.17.1:5001 ']'
2014-11-11T22:45:15.292171820Z + TAG=172.121.17.1:5001/openshift/origin-ruby-sample:latest
2014-11-11T22:45:15.292171820Z + REF_OPTION=
2014-11-11T22:45:15.292171820Z + '[' -n '' ']'
2014-11-11T22:45:15.292210875Z + BUILD_TEMP_DIR=/tmp/stibuild514831137
2014-11-11T22:45:15.292239006Z + TMPDIR=/tmp/stibuild514831137
2014-11-11T22:45:15.292248484Z + sti build git://github.com/openshift/ruby-hello-world.git openshift/ruby-20-centos 172.121.17.1:5001/openshift/origin-ruby-sample:latest ''
2014-11-11T22:45:16.241601499Z Downloading git://github.com/openshift/ruby-hello-world.git to directory /tmp/stibuild514831137/sti288167802/src
2014-11-11T22:45:16.730476434Z Cloning into '/tmp/stibuild514831137/sti288167802/src'...
2014-11-11T22:45:17.425560376Z Existing image for tag 172.121.17.1:5001/openshift/origin-ruby-sample:latest detected for incremental build.
[2014-11-11T22:45:31.054616868Z] ---> Installing application source
[2014-11-11T22:45:31.057352734Z] ---> Building your Ruby application from source
[2014-11-11T22:45:31.057384754Z] ---> Running 'bundle install --deployment'
[2014-11-11T22:45:36.419945974Z] Fetching gem metadata from https://rubygems.org/.........
[2014-11-11T22:45:38.489559829Z] Installing rake (10.3.2)
...
2014-11-11T22:46:36.753561336Z Pushing tag for rev [1b443197b5bc] on {http://172.121.17.1:5001/v1/repositories/openshift/origin-ruby-sample/tags/latest}

Here we see the STI operations installing required gems for an application and then ultimately pushing the new image tag to the docker registry.

Deployment Configuration

I touched on Deployments last time. We now have DeploymentConfig objects which allow for repeated deployments of a particular configuration. Deployments specify what is going to be constructed for your application (replication controllers, pods, containers within those pods). A DeploymentConfig allows you to specify those things and the conditions under which the Deployment is triggered. The obvious use case is to trigger a deployment when a new version of your application image becomes available (such as after a build occurs). Other trigger conditions include changing the configuration parameters of your application.

{
  "id": "frontend",
  "kind": "DeploymentConfig",
  "apiVersion": "v1beta1",
  "triggers": [
    {
      "type": "ImageChange",
      "imageChangeParams": {
        "automatic": true,
        "containerNames": [
          "ruby-helloworld"
        ],
        "repositoryName": "172.121.17.1:5001/openshift/origin-ruby-sample",
        "tag": "latest"
      }
    }
  ]
}

Here we see a DeploymentConfig with a trigger defined that will cause the deployment to occur any time a particular image changes. This means every time you trigger a new build of your application by pushing a change to your repository, that new image will be deployed, updating the running instances of your application.

Specifically this definition is going to watch a particular image repository (172.121.17.1:5001/openshift/origin-ruby-sample) and whenever a change occurs it will update running containers named “ruby-helloworld”. Note that this capability depends on hook logic that has been added to the openshift/docker-registry image which is used as the docker registry server for this sample.

The other bit that ties image builds together with deployments is the imageRepository stanza:

{
  "id": "origin-ruby-sample",
  "kind": "ImageRepository",
  "apiVersion": "v1beta1",
  "dockerImageRepository": "172.121.17.1:5001/openshift/origin-ruby-sample",
  "labels": {
    "name": "origin-ruby-sample"
  }
}

This is the imageRepository referenced from the deployment trigger. When new images are pushed to the docker registry, a hook in the registry updates this OpenShift imageRepository configuration to notify it of the newly available image, which in turn triggers the deployment.

The roll-out works by defining new Pods and ReplicationControllers with a deployment-specific label; the old ReplicationControllers and Pods (from the previous deployment) are then torn down.
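
You can watch the handoff using the same CLI pattern as the build listing above (assuming the list verb behaves the same for these resource types):

# Run before and after a deployment; the new deployment's replication
# controller and pods carry a deployment-specific label.
openshift kube list replicationControllers
openshift kube list pods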

Figure 1: Flow from application source change to new build requested

Figure 2: Flow from build completion to new application version deployment

With all of this in place, you can simply push a change to your application repository and once the build completes, your running application will automatically be updated to reflect the changes.

Service Linking

The sample application now makes use of a database pod which is deployed separately:

 

"podTemplate": {
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "containers": [
        {
          "name": "ruby-helloworld-database",
          "image": "mysql",
          "env": [
            {
              "name": "MYSQL_ROOT_PASSWORD",
              "value": "${MYSQL_ROOT_PASSWORD}"
            },
            {
              "name": "MYSQL_DATABASE",
              "value": "${MYSQL_DATABASE}"
            }
          ],
          "ports": [
            {
              "containerPort": 3306
            }
          ]
        }
      ]
    }
  },
  "labels": {
    "name": "database"
  }
}

As you can see in the application template, both the database pod and the application pod share the MYSQL_ROOT_PASSWORD and MYSQL_DATABASE environment variables. A new service is also defined to make the database available on port 5434:

{
  "id": "database",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 5434,
  "containerPort": 3306,
  "selector": {
    "name": "database"
  }
}

The sample application code then references this database service via environment variables provided by Kubernetes, using them to connect to the MySQL database:

def self.connect_to_database
  begin
    ActiveRecord::Base.establish_connection(
      :adapter  => "mysql2",
      :host     => "#{ENV["DATABASE_SERVICE_HOST"]}",
      :port     => "#{ENV["DATABASE_SERVICE_PORT"]}",
      :database => "#{ENV["MYSQL_DATABASE"]}",
      :password => "#{ENV["MYSQL_ROOT_PASSWORD"]}"
    )
    ActiveRecord::Base.connection.active?
  rescue Exception
    return false
  end
end

In this way, the main application container reaches the database service through the Kubernetes service proxy, allowing the database and the frontend application to be deployed independently. When the application is updated and a new deployment occurs, the database remains running and untouched.
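
To make the wiring concrete: Kubernetes derives those DATABASE_* variables from the service id (“database”), and the proxy forwards the service port to the pod’s containerPort. From a shell inside the frontend container you could verify connectivity like this (the host value shown is illustrative):

# Service discovery variables injected by Kubernetes:
$ env | grep ^DATABASE
DATABASE_SERVICE_HOST=10.0.0.42    # illustrative proxy address
DATABASE_SERVICE_PORT=5434         # the "port" from the service definition

# The proxy forwards 5434 to the database pod's containerPort 3306:
$ mysql -h "$DATABASE_SERVICE_HOST" -P "$DATABASE_SERVICE_PORT" \
        -u root -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE" -e 'SELECT 1'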

Conclusion

We hope with the addition of these pieces you can start to see where we are going with a full PaaS experience built on top of Kubernetes and Docker. You can now start from nothing more than a standard application source repository and deploy it onto a running PaaS built on Docker containers, giving your applications total flexibility in terms of runtime frameworks and library dependencies.


https://blog.openshift.com/builds-deployments-services-v3/
