Docker in Action: Persistent storage and shared state with volumes


Introducing volumes

A volume is a mount point on the container’s directory tree where a portion of the host directory tree has been mounted.
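As a minimal illustration, the following commands mount an arbitrary host directory (here /tmp/volume-demo, chosen only for this sketch) into a container at /data and write a file through that mount point:

# Create a host directory to mount
mkdir -p /tmp/volume-demo

# Mount it at /data inside the container and write a file through the mount
docker run --rm -v /tmp/volume-demo:/data alpine \
    sh -c 'echo hello > /data/greeting'

# The file now exists on the host, outside any container
cat /tmp/volume-demo/greeting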

[Figure 4.1]

Volumes provide container-independent data management

Semantically, a volume is a tool for segmenting and sharing data that has a scope or life cycle that’s independent of a single container.

Images are appropriate for packaging and distributing relatively static files like programs; volumes hold dynamic data or specializations.

Using volumes with a NoSQL database

Get started by creating a single container that defines a volume. This is called a volume container.

# Specify the volume mount point inside the container
docker run -d \
    --volume /var/lib/cassandra/data \
    --name cass-shared \
    alpine echo Data Container

You’re going to use the volume it created when you create a new container running Cassandra:

# Inherit volume definitions from cass-shared
docker run -d \
    --volumes-from cass-shared \
    --name cass1 \
    cassandra:2.2

After that, both containers have a volume mounted at /var/lib/cassandra/data that points to the same location on the host’s directory tree. Next, start a container from the cassandra:2.2 image, but run a Cassandra client tool and connect to your running server:

docker run -it --rm \
    --link cass1:cass \
    cassandra:2.2 cqlsh cass

First, look for a keyspace named docker_hello_world:

select *
from system.schema_keyspaces
where keyspace_name = 'docker_hello_world';

Cassandra should return an empty list. This means the database hasn’t been modified by the example. Next, create that keyspace with the following command:

create keyspace docker_hello_world
with replication = {
    'class' : 'SimpleStrategy',
    'replication_factor': 1
};

Now that you’ve modified the database, you should be able to issue the same query again to see the results and verify that your changes were accepted. The following command is the same as the one you ran earlier:

select *
from system.schema_keyspaces
where keyspace_name = 'docker_hello_world';

This time Cassandra should return a single entry with the properties you specified when you created the keyspace.

Quit the CQLSH program to stop the client container:

# Leave and stop the current container
quit

Continue cleaning up the first part of this example by stopping and removing the Cassandra node you created:

docker stop cass1
docker rm -vf cass1

If the modifications you made are persisted, the only place they could remain is the volume container.

You can test this by repeating these steps. Create a new Cassandra node, attach a client, and query for the keyspace. Figure 4.2 illustrates the system and what you will have built.

[Figure 4.2]

The next three commands will test recovery of the data:

docker run -d \
    --volumes-from cass-shared \
    --name cass2 \
    cassandra:2.2

docker run -it --rm \
    --link cass2:cass \
    cassandra:2.2 \
    cqlsh cass

select *
from system.schema_keyspaces
where keyspace_name = 'docker_hello_world';

The last command in this set returns a single entry, and it matches the keyspace you created in the previous container. This confirms the previous claims and demonstrates how volumes might be used to create durable systems.

Make sure to remove that volume container as well:

quit
docker rm -vf cass2 cass-shared

Volume types

There are two types of volume. The first is a bind mount volume, which uses any user-specified directory or file on the host operating system. The second is a managed volume, which uses a location created by the Docker daemon in space controlled by the daemon, called Docker managed space.
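The difference shows up in how the -v flag is written: a bind mount names both a host location and a container mount point, while a managed volume names only the mount point inside the container. A minimal illustration (the paths here are arbitrary):

# Bind mount volume: host path and container mount point are both specified
docker run --rm -v ~/example-data:/data alpine ls /data

# Managed volume: only the container mount point is specified;
# Docker chooses and controls the backing host location
docker run --rm -v /data alpine ls /data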

[Figure 4.3]

Bind mount volumes

Bind mount volumes are useful if you want to share data with other processes running outside a container, such as components of the host system itself.
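The example that follows assumes a directory named ~/example-docs already exists on your host and holds the files you want to serve. If you're following along, a minimal setup might look like this (the page content is just a placeholder):

# Create the directory that will be bind mounted as the document root
mkdir ~/example-docs

# Add a placeholder page so the server has something to serve
echo '<h1>Hello from a bind mount volume</h1>' > ~/example-docs/index.html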

The following command will start an Apache HTTP server where your new directory is bind mounted to the server’s document root:

docker run -d --name bmweb \
    -v ~/example-docs:/usr/local/apache2/htdocs \
    -p 80:80 \
    httpd:latest

Docker provides a mechanism to mount volumes as read-only. You can do this by appending :ro to the volume map specification. In the example, you should change the run command to something like the following:

docker rm -vf bmweb

docker run --name bmweb_ro \
    --volume ~/example-docs:/usr/local/apache2/htdocs/:ro \
    -p 80:80 \
    httpd:latest

The first problem with bind mount volumes is that they tie otherwise portable container descriptions to the file system of a specific host.

The next big problem is that they create an opportunity for conflict with other containers.
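For example, if two stateful containers are both pointed at the same host directory for their data, they will silently write over each other's files. A hypothetical sketch of such a collision (the cass-a and cass-b names and the ~/cass-data path are made up for illustration):

# Two Cassandra nodes pointed at the same host directory will
# compete for the same data files
docker run -d --name cass-a \
    -v ~/cass-data:/var/lib/cassandra/data \
    cassandra:2.2

docker run -d --name cass-b \
    -v ~/cass-data:/var/lib/cassandra/data \
    cassandra:2.2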

Bind mount volumes are appropriate tools for workstations or machines with specialized concerns. It’s better to avoid these kinds of specific bindings in generalized platforms or hardware pools.

Docker-managed volumes

Managed volumes are created when you use the -v option (or --volume) on docker run but only specify the mount point in the container directory tree.

# Specify only the volume mount point inside the container
docker run -d \
    -v /var/lib/cassandra/data \
    --name cass-shared \
    alpine echo Data Container

Docker created each of the volumes in a directory controlled by the Docker daemon on the host:

docker inspect -f "{{json .Volumes}}" cass-shared

The inspect subcommand will output a list of container mount points and the corresponding path on the host directory tree. The output will look like this:

{"/var/lib/cassandra/data":"/mnt/sda1/var/lib/docker/vfs/dir/632fa59c..."}

The Volumes key points to a value that is itself a map.
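Because that value is a map keyed by the container mount point, you can also pull out a single host path with the template index function (a small convenience, assuming the cass-shared container from above):

# Print only the host directory backing the /var/lib/cassandra/data volume
docker inspect -f '{{index .Volumes "/var/lib/cassandra/data"}}' cass-shared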

Sharing volumes

Host-dependent sharing

Two or more containers are said to use host-dependent sharing when each has a bind mount volume for a single known location on the host file system.

# Set up a known location on the host
mkdir ~/web-logs-example

# Bind mount the location into a log-writing container
docker run --name plath -d \
    -v ~/web-logs-example:/data \
    dockerinaction/ch4_writer_a

# Bind mount the same location into a container for reading
docker run --rm \
    -v ~/web-logs-example:/reader-data \
    alpine:latest \
    head /reader-data/logA

# View the logs from the host
cat ~/web-logs-example/logA

# Stop the writer
docker stop plath

The next example starts four containers—two log writers and two readers:

docker run --name woolf -d \
    --volume ~/web-logs-example:/data \
    dockerinaction/ch4_writer_a

docker run --name alcott -d \
    -v ~/web-logs-example:/data \
    dockerinaction/ch4_writer_b

docker run --rm --entrypoint head \
    -v ~/web-logs-example:/towatch:ro \
    alpine:latest \
    /towatch/logA

docker run --rm \
    -v ~/web-logs-example:/toread:ro \
    alpine:latest \
    head /toread/logB

Generalized sharing and the volumes-from flag

docker run --name fowler \
    -v ~/example-books:/library/PoEAA \
    -v /library/DSL \
    alpine:latest \
    echo "Fowler collection created."

docker run --name knuth \
    -v /library/TAoCP.vol1 \
    -v /library/TAoCP.vol2 \
    -v /library/TAoCP.vol3 \
    -v /library/TAoCP.vol4.a \
    alpine:latest \
    echo "Knuth collection created"

# All volumes are listed as though they were copied into the new container
docker run --name reader \
    --volumes-from fowler \
    --volumes-from knuth \
    alpine:latest ls -l /library/

# Check out the volume list for reader
docker inspect --format "{{json .Volumes}}" reader

In this example you created two containers that defined Docker-managed volumes as well as a bind mount volume. To share these with a third container without the --volumes-from flag, you'd need to inspect the previously created containers and then craft bind mount volumes to the Docker-managed host directories.
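A rough sketch of that manual approach, using the containers from the example above (the exact host path will differ on your machine, so substitute the value printed by inspect):

# Find the host directory backing fowler's managed /library/DSL volume
docker inspect -f '{{index .Volumes "/library/DSL"}}' fowler

# Bind mount that host path into a new container by hand,
# replacing <host-path-from-inspect> with the path printed above
docker run --rm \
    -v <host-path-from-inspect>:/library/DSL \
    alpine:latest ls -l /library/DSL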

The managed volume life cycle

Volume ownership

[Figure 4.6]

A container owns all managed volumes mounted to its file system, and multiple containers can own a volume, as in the fowler, knuth, and reader example.

Cleaning up volumes

Docker can delete managed volumes when deleting containers; you opt in by passing the -v flag to docker rm. Any managed volumes that are still referenced by other containers will be skipped, but their internal reference counters will be decremented.
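In practice this means passing -v to docker rm whenever you are finished with a container's managed volumes; otherwise the volume directories are left behind in Docker managed space. A quick illustration with a throwaway container (the orphan-demo name is arbitrary):

# Create a container with a managed volume
docker run --name orphan-demo -v /data alpine echo created

# Removing the container without -v leaves the managed volume
# directory behind in Docker managed space
docker rm orphan-demo

# Passing -v would have removed the volume along with the container:
# docker rm -v orphan-demo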

Advanced container patterns with volumes

Volume container pattern

[Figure 4.8]

Data-packed volume containers

[Figure 4.9]

# Copy image content into a volume
docker run --name dpvc \
    -v /config \
    dockerinaction/ch4_packed /bin/sh -c 'cp /packed/* /config/'

# List the shared material
docker run --rm --volumes-from dpvc \
    alpine:latest ls /config

# View the shared material
docker run --rm --volumes-from dpvc \
    alpine:latest cat /config/packedData

# Remember to use -v when you clean up
docker rm -v dpvc

Polymorphic container pattern

Consider a situation where an operational issue has occurred. To triage it, you might need tools that aren't available in the image because you didn't anticipate needing them when the image was built. But if you mount a volume that makes additional tools available, you can use the docker exec command to run additional processes in a container:

# Create a data-packed volume container with tools
docker run --name tools dockerinaction/ch4_tools

# List the shared tools
docker run --rm \
    --volumes-from tools \
    alpine:latest \
    ls /operations/*

# Start another container with the shared tools
docker run -d --name important_application \
    --volumes-from tools \
    dockerinaction/ch4_ia

# Use a shared tool in the running container
docker exec important_application /operations/tools/someTool

# Shut down the application
docker rm -vf important_application

# Clean up the tools
docker rm -v tools

You can inject files into otherwise static containers to change all types of behavior.

docker run --name devConfig \
    -v /config \
    dockerinaction/ch4_packed_config:latest \
    /bin/sh -c 'cp /development/* /config/'

docker run --name prodConfig \
    -v /config \
    dockerinaction/ch4_packed_config:latest \
    /bin/sh -c 'cp /production/* /config/'

docker run --name devApp \
    --volumes-from devConfig \
    dockerinaction/ch4_polyapp

docker run --name prodApp \
    --volumes-from prodConfig \
    dockerinaction/ch4_polyapp

In this example, you start the same application twice but with a different configuration file injected.
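To see that each copy picked up a different file, you could check the two containers' output and then clean up, removing the volumes along with the containers (a quick follow-up sketch; the exact log output depends on the ch4_polyapp image):

# Compare what each application read from its injected configuration
docker logs devApp
docker logs prodApp

# Clean up both applications and both configuration volume containers
docker rm -vf devApp prodApp devConfig prodConfig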
