External Links
- @[]
- @[] D.Peman@github
- @[]
- @[]

Docker API
- @[]
- @[]
- @[]

DockerD summary
--config-file string default "/etc/docker/daemon.json"
  -D, --debug                                 Enable debug mode
  --experimental                          Enable experimental features
  --icc         Enable inter-container communication (default true)
  --log-driver string   default "json-file"
  -l, --log-level string  default "info"
  --mtu int  Set the containers network MTU
  --network-control-plane-mtu int         Network Control plane MTU (default 1500)
  --rootless  Enable rootless mode; typically used with RootlessKit (experimental)

  --data-root   string default "/var/lib/docker"
  --exec-root   string default "/var/run/docker"
  -s, --storage-driver string                 Storage driver to use
  --storage-opt list                      Storage driver options

  DOCKER_DRIVER     The graph driver to use.
  DOCKER_RAMDISK    If set this will disable 'pivot_root'.
  DOCKER_TMPDIR     Location for temporary Docker files.
  MOBY_DISABLE_PIGZ Do not use unpigz to decompress layers in parallel
                    when pulling images, even if it is installed.
  DOCKER_NOWARN_KERNEL_VERSION Prevent warnings that your Linux kernel is 
                   unsuitable for Docker.
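Most of the flags above can instead be set persistently in the daemon
config file (default /etc/docker/daemon.json). A minimal sketch; the
values shown are illustrative defaults:

```json
{
  "debug": false,
  "log-driver": "json-file",
  "log-level": "info",
  "data-root": "/var/lib/docker",
  "storage-driver": "overlay2",
  "mtu": 1500
}
```

Note that dockerd refuses to start if the same option is set both in the
file and on the command line.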

BºDockerd Summaryº
  └ persistent process managing containers.

  └ can listen for Docker Engine API requests via
    - IPC socket: default /var/run/docker.sock
    - tcp       : WARN: the default setup is unencrypted and unauthenticated
    - fd        : systemd-based systems only:
                  dockerd -H fd://
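The same listeners can be declared in daemon.json via the "hosts" key
(a sketch; dockerd refuses to start if hosts are given both here and
with -H on the command line):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2376"]
}
```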

BºDaemon storage-driverº:
  See also: @[]
  The Docker daemon supports the following storage drivers:
  └ aufs        :Rºoldest (linux kernel patch unlikely to be merged)º
  ·              BºIt allows containers to share executable and shared library memory, º
  ·              Bº→ useful choice when running thousands of repeated containersº
  └ devicemapper:
  · thin provisioning and Copy on Write (CoW) snapshots. 
  · - For each devicemapper graph location - /var/lib/docker/devicemapper -
  ·   a thin pool is created based on two block devices:
  ·   - data    : loopback mount of automatically created sparse file
  ·   - metadata: loopback mount of automatically created sparse file
  └ btrfs       :
  · -Bºvery fastº
  · -Rºdoes not share executable memory between devicesº
  · -$º# dockerd -s btrfs -g /mnt/btrfs_partition º
  └ zfs         :
  · -Rºnot as fast as btrfsº
  · -Bºlonger track record on stabilityº.
  · -BºSingle Copy ARC: blocks shared between clones areº
  ·  Bºcached just onceº
  · -$º# dockerd -s zfsº  ← select a different zfs filesystem
  ·                         with the zfs.fsname storage option
  └ overlay     :
  · -Bºvery fast union filesystemº.
  · -Bºmerged in the main Linux kernel 3.18+º
  · -Bºsupport for page cache sharingº
  ·    (multiple containers accessing the same file
  ·     can share a single page cache entry/ies)
  · -$º# dockerd -s overlay º
  · -RºIt can cause excessive inode consumptionº
  └ overlay2    :
    -Bºsame fast union filesystem of overlayº
    -BºIt takes advantage of additional features in Linux kernel 4.0+
     Bºto avoid excessive inode consumption.º
    -$º#Call dockerd -s overlay2    º
    -Rºshould only be used over ext4 partitions (vs Copy on Write FS like btrfs)º
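Related to devicemapper above: the automatically created loopback files
are convenient but slow. Production setups usually point the thin pool
at a real LVM thin device through storage options (a daemon.json sketch;
the device path is illustrative):

```json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
  ]
}
```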

  └ Vfs: a no-frills, no-magic storage driver, and one of the few
  ·      that can run Docker in Docker.
  └ Aufs: fast, memory-hungry, non-upstreamed driver, only present
  ·       in the Ubuntu kernel. If the system has the aufs utilities
  ·       installed, Docker will use it. It eats a lot of memory when
  ·       there are many start/stop container events, and has issues
  ·       in some edge cases which may be difficult to debug.
  └ "... Diffs are a big performance area because the storage driver needs to 
     calculate differences between the layers, and it is particular to 
     each driver. Btrfs is fast because it does some of the diff 
     operations natively..."
    - The Docker portable image format is composed of tar archives that 
      are largely for transit:
      - Committing container to image with commit.
      - Docker push and save.
      - Docker build to add context to existing image.
    - When creating an image, Docker will diff each layer and create a 
      tar archive of just the differences. When pulling, it will expand the 
      tar in the filesystem. If you pull and push again, the tarball will
      change because it went through a mutation process: permissions, file
      attributes or timestamps may have changed.
    - Signing images is very challenging because, despite images being
      mounted as read-only, the image layer is reassembled every time. It
      can be done externally with docker save to create a tarball and gpg
      to sign the archive.

BºDocker runtime execution optionsº
  └ The daemon relies on an OCI-compliant runtime (invoked via the
    containerd daemon) as its interface to Linux kernel namespaces,
    cgroups, and SELinux.

  └ By default, Bºdockerd automatically starts containerdº.
    - to control/tune containerd startup, manually start 
      containerd and pass the path to the containerd socket
      using the --containerd flag. For example:
    $º# dockerd --containerd /var/run/dev/docker-containerd.sockº

BºInsecure registriesº

  └ Docker considers a private registry either:
    - secure
      - It uses TLS.
      - CA cert exists in /etc/docker/certs.d/myregistry:5000/ca.crt. 
    - insecure
      - TLS not used, and/or
      - CA certificate unknown.
      -$º--insecure-registry myRegistry:5000º flag needed to use it.
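Equivalently, the registry can be whitelisted in /etc/docker/daemon.json
(sketch):

```json
{
  "insecure-registries": ["myRegistry:5000"]
}
```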

BºDaemon user namespace optionsº
  - The Linux kernel user namespace support provides additional security 
    by enabling a process, and therefore a container, to have a unique 
    range of user and group IDs which are outside the traditional user 
    and group range utilized by the host system. Potentially the most 
    important security improvement is that, by default, container 
 ☞Bºprocesses running as the root user will have expected administrativeº
  Bºprivilege (with some restrictions) inside the container but willº
  Bºeffectively be mapped to an unprivileged uid on the host.º
    More info at:
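A sketch of enabling remapping: "default" tells Docker to use the
conventional 'dockremap' user, whose subordinate id ranges come from
/etc/subuid and /etc/subgid (the ranges below are illustrative):

```
# /etc/docker/daemon.json
{ "userns-remap": "default" }

# /etc/subuid (and similarly /etc/subgid):
dockremap:231072:65536
# → container uid 0 is mapped to host uid 231072 (unprivileged)
```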

- Docker supports symlinks for:
  - Docker data directory:  (def. /var/lib/docker)
  - temporary   directory:  (def. /var/lib/docker/tmp)
Resizing containers with the Device Mapper
$ docker help
Usage:	docker COMMAND

A self-sufficient runtime for containers

      --config string      Location of client config files (default "/root/.docker")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/root/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/root/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/root/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:       | Commands:
            Manage ...     |   attach      Attach local STDIN/OUT/ERR streams to a running container
config      Docker configs |   build       Build an image from a Dockerfile
container   containers     |   commit      Create a new image from a container's changes
image       images         |   cp          Copy files/folders between a container and the local filesystem
network     networks       |   create      Create a new container
node        Swarm nodes    |   diff        Inspect changes to files or directories on a container's filesystem
plugin      plugins        |   events      Get real time events from the server
secret      Docker secrets |   exec        Run a command in a running container
service     services       |   export      Export a container's filesystem as a tar archive
swarm       Swarm          |   history     Show the history of an image
system      Docker         |   images      List images
trust       trust on       |   import      Import the contents from a tarball to create a filesystem image
            Docker images  |   info        Display system-wide information
volume      volumes        |   inspect     Return low-level information on Docker objects
                           |   kill        Kill one or more running containers
                           |   load        Load an image from a tar archive or STDIN
                           |   login       Log in to a Docker registry
                           |   logout      Log out from a Docker registry
                           |   logs        Fetch the logs of a container
                           |   pause       Pause all processes within one or more containers
                           |   port        List port mappings or a specific mapping for the container
                           |   ps          List containers
                           |   pull        Pull an image or a repository from a registry
                           |   push        Push an image or a repository to a registry
                           |   rename      Rename a container
                           |   restart     Restart one or more containers
                           |   rm          Remove one or more containers
                           |   rmi         Remove one or more images
                           |   run         Run a command in a new container
                           |   save        Save one or more images to a tar archive (streamed to STDOUT by default)
                           |   search      Search the Docker Hub for images
                           |   start       Start one or more stopped containers
                           |   stats       Display a live stream of container(s) resource usage statistics
                           |   stop        Stop one or more running containers
                           |   tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
                           |   top         Display the running processes of a container
                           |   unpause     Unpause all processes within one or more containers
                           |   update      Update configuration of one or more containers
                           |   version     Show the Docker version information
                           |   wait        Block until one or more containers stop, then print their exit codes
Install & setup
Proxy settings
To configure Docker to work with an HTTP or HTTPS proxy server, follow
instructions for your OS:
Windows - Get Started with Docker for Windows
macOS   - Get Started with Docker for Mac
Linux   - Control & configure Docker with systemd
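On systemd-based Linux the daemon proxy is typically configured with a
drop-in unit (a common sketch; proxy addresses are illustrative):

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"

# reload and restart afterwards:
#   $ sudo systemctl daemon-reload && sudo systemctl restart docker
```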
docker global info
system setup
running/paused/stopped cont.
$ sudo docker info
Containers: 23
 Running: 10
 Paused: 0
 Stopped: 1
Images: 36
Server Version: 17.03.2-ce
ºStorage Driver: devicemapperº
 Pool Name: docker-8:0-128954-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
ºData Space Used: 3.014 GBº
ºData Space Total: 107.4 GBº
ºData Space Available: 16.11 GBº
ºMetadata Space Used: 4.289 MBº
ºMetadata Space Total: 2.147 GBº
ºMetadata Space Available: 2.143 GBº
ºThin Pool Minimum Free Space: 10.74 GBº
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
ºData loop file: /var/lib/docker/devicemapper/devicemapper/dataº
ºMetadata loop file: /var/lib/docker/devicemapper/devicemapper/metadataº
 Library Version: 1.02.137 (2016-11-30)
ºLogging Driver: json-fileº
ºCgroup Driver: cgroupfsº
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
ºSecurity Options:º
º seccompº
º  Profile: defaultº
Kernel Version: 4.17.17-x86_64-linode116
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.838 GiB
Name: 24x7
ºDocker Root Dir: /var/lib/dockerº
ºDebug Mode (client): falseº
ºDebug Mode (server): falseº
Experimental: false
Insecure Registries:
Live Restore Enabled: false
- Unix socket the Docker daemon listens on by default,
  used to communicate with the daemon from within a container.
- Can be mounted on containers to allow them to control Docker:
$ docker runº-v /var/run/docker.sock:/var/run/docker.sockº  ....


# STEP 1. Create new container
$ curl -XPOSTº--unix-socket /var/run/docker.sockº \
  -d '{"Image":"nginx"}' \
  -H 'Content-Type: application/json' \
  http://localhost/containers/create
Returns something similar to:
→ {"Id":"fcb65c6147efb862d5ea3a2ef20e793c52f0fafa3eb04e4292cb4784c5777d65","Warnings":null}
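The Id from the response can be extracted in shell for the follow-up
calls (a sketch; the sample JSON just mirrors the response above, and
sed is used to avoid a jq dependency):

```shell
# sample create-container response (as returned above)
resp='{"Id":"fcb65c6147efb862d5ea3a2ef20e793c52f0fafa3eb04e4292cb4784c5777d65","Warnings":null}'
# capture the value of the "Id" field
id=$(printf '%s' "$resp" | sed -E 's/.*"Id":"([^"]+)".*/\1/')
echo "${id:0:12}"   # short id shown by the docker CLI → fcb65c6147ef
```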

# STEP 2. Use /containers/<id>/start to start the newly created container.
$ curl -XPOSTº--unix-socket /var/run/docker.sockº \
  http://localhost/containers/fcb65c6147efb862d5ea3a2ef20e793c52f0fafa3eb04e4292cb4784c5777d65/start

# STEP 3: Verify it's running:
$ docker container ls
fcb65c6147ef nginx “nginx -g ‘daemon …” 5 minutes ago Up 5 seconds 80/tcp, 443/tcp ecstatic_kirch

ºStreaming events from the Docker daemonº

- The Docker API also exposes the º/eventsº endpoint

$ curlº--unix-socket /var/run/docker.sockº http://localhost/events
  The command hangs, waiting for new events from the daemon.
  Each new event is then streamed from the daemon.
avoid "sudo" docker
$º sudo usermod -a -G docker "myUser"º
Docker components
Docker Networks
Create new network and use it in containers:
  $ docker ºnetwork createº OºredisNetworkº
  $ docker run --rm --name redis-server --network OºredisNetworkº -d redis
  $ docker run --rm --network OºredisNetworkº -it redis redis-cli -h redis-server -p 6379

List networks:
  $ docker network ls

Disconnect and connect a container to the network:
  $ docker network disconnect OºredisNetworkº redis-server
  $ docker network connect --alias db OºredisNetworkº redis-server

  STEP 0: Create new container with volume
    host-mach $ docker run -it Oº--name alphaº º-v "hostPath":/var/logº ubuntu bash
    container $ date > /var/log/now

  STEP 1: Create new container using volume from previous container:
    host-mach $ docker run --volumes-from Oºalphaº ubuntu
    container $ cat /var/log/now


  STEP 0: Create Volume
  host-mach $ docker volume create --name=OºwebsiteVolumeº
  STEP 1: Use volume in new container
  host-mach $ docker run -d -p 8888:80 \
              -v OºwebsiteVolumeº:/usr/share/nginx/html \
              -v logs:/var/log/nginx nginx
  host-mach $ docker run \
              -v OºwebsiteVolumeº:/website \
              -w /website \
              -it alpine vi index.html

Ex.: Update redis version without losing data:
  host-mach $ docker network create dbNetwork
  host-mach $ docker run -d --network dbNetwork \
              --network-alias redis \
              --name redis28 redis:2.8
  host-mach $ docker run -it --network dbNetwork \
              alpine telnet redis 6379
              → SET counter 42
              → INFO server
              → SAVE
              → QUIT
  host-mach $ docker stop redis28
  host-mach $ docker run -d --network dbNetwork \
              --network-alias redis \
              --name redis30 \
              --volumes-from redis28 \
              redis:3.0      # ← newer image (version assumed from "redis30")
  host-mach $ docker run -it --network dbNetwork \
              alpine telnet redis 6379
              → GET counter
              → INFO server
              → QUIT
version: "3"
services:                # ← service names below are illustrative
  web:
    build: .             # ← use Dockerfile to build image
    ports:
      - "8000:8000"
  redis:
    image: redis         # ← use DockerHub image
    volumes:
      - "redis-data:/data"
volumes:
  redis-data:

(server store for images)
- utility to simplify running applications in docker containers. 
  BºIt allows you to:º
  Bº- generate app config. files at container startup timeº
  Bº  from templates and container environment variablesº
  Bº- Tail multiple log files to stdout and/or stderrº
  Bº- Wait for other services to be available using TCP, HTTP(S),º
  Bº  unix before starting the main process.º

typical use case:
 - application that has one or more configuration files and
   you would like to control some of the values using environment variables.
 - dockerize allows you to set an environment variable and update the config file before
   starting the containerized application
 - other use case: forward logs from hardcoded files on the filesystem to stdout/stderr
   (Ex: nginx logs to /var/log/nginx/access.log and /var/log/nginx/error.log by default)
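A hedged sketch of the nginx use case as a Dockerfile ENTRYPOINT (the
flag names are from the dockerize README; the template path and the
wait target are assumptions):

```dockerfile
# render nginx.conf from a template with env vars, stream both nginx log
# files to the container's stdout/stderr, and wait for a backend first
ENTRYPOINT ["dockerize", \
  "-template", "/etc/nginx/nginx.tmpl:/etc/nginx/nginx.conf", \
  "-stdout", "/var/log/nginx/access.log", \
  "-stderr", "/var/log/nginx/error.log", \
  "-wait", "tcp://backend:5432", \
  "nginx", "-g", "daemon off;"]
```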
Managing Containers
Boot-up/run container:
$ docker run \                             $ docker run \
  --rm  \        ←------ Remove ---------→   --rm  \
  --name clock  \        on exit             --name clock  \
 º-dº\             ← Daemon    interactive →º-tiº\
                     mode      mode
  jdeiviz/clock                              jdeiviz/clock

Show container logs:
$ docker logs 'containerID'
$ docker logs --tail 3 'containerID'
$ docker logs --tail 1 --follow 'containerID'

Stop container:
$ docker stop 'containerID'  # sends TERM, waits 10s, then KILL
$ docker kill 'containerID'  # sends KILL immediately

Prune stopped containers:

$ docker container prune

container help:
$ docker container
Extracted from:
- @[]
The ENTRYPOINT of an image is similar to a COMMAND because it specifies what
executable to run when the container starts,
ºbut it is (purposely) more difficult to overrideº.

- The ENTRYPOINT gives a container its default nature or behavior, so that when
you set an ENTRYPOINT you can run the container as if it were that binary,
complete with default options, and you can pass in more options via the COMMAND.
But sometimes an operator may want to run something else inside the container,
so you can override the default ENTRYPOINT at runtime by using the
--entrypoint flag:
*Override Entrypoint @ docker-run passing extra parameters
$ docker run -it --entrypoint /bin/bash ${DOCKER_IMAGE} -c 'ls -l'
                 └───────┬────────────┘                 └───┬────┘
                  overrides the entrypoint             extra params.
                                                      (exec 'ls -l')

Monitoring running containers
Monitoring (Basic)
List containers instances:
   $ docker ps     # only running
   $ docker ps -a  # also finished, but not yet removed (docker rm ...)
   $ docker ps -lq # id only of the most recently created container

"top" containers showing Net IO read/writes, Disk read/writes:
   $ docker stats
   | CONTAINER ID   NAME                    CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O      PIDS
   | c420875107a1   postgres_trinity_cache  0.00%   11.66MiB / 6.796GiB   0.17%   22.5MB / 19.7MB  309MB / 257kB  16
   | fdf2396e5c72   stupefied_haibt         0.10%   21.94MiB / 6.796GiB   0.32%   356MB / 693MB    144MB / 394MB  39

   $ docker top 'containerID'
   | UID       PID     PPID    C  STIME  TTY   TIME     CMD
   | systemd+  26779   121423  0  06:11  ?     00:00:00 postgres: ddbbName cache idle
   | ...
   | systemd+  121423  121407  0  Jul06  pts/0 00:00:44 postgres
   | systemd+  121465  121423  0  Jul06  ?     00:00:01 postgres: checkpointer process
   | systemd+  121466  121423  0  Jul06  ?     00:00:26 postgres: writer process
   | systemd+  121467  121423  0  Jul06  ?     00:00:25 postgres: wal writer process
   | systemd+  121468  121423  0  Jul06  ?     00:00:27 postgres: autovacuum launcher process
   | systemd+  121469  121423  0  Jul06  ?     00:00:57 postgres: stats collector process

Container-focused Linux troubleshooting and monitoring tool.

Once Sysdig is installed as a process (or container) on the server,
it sees every process, every network action, and every file action
on the host. You can use Sysdig "live" or view any amount of historical
data via a system capture file.

Example: take a look at the total CPU usage of each running container:
   $ sudo sysdig -c topcontainers_cpu
   | CPU%
   | ----------------------------------------------------
   | 80.10% postgres
   | 0.14% httpd
   | ...

Example: Capture historical data:
   $ sudo sysdig -w historical.scap

Example: "Zoom into a client":
   $ sudo sysdig -pc -c topprocs_cpu container.name=client
   | CPU% Process
   | ----------------------------------------------
   | 02.69% bash client
   | 31.04% curl client
   | 0.74% sleep client
Show a graph of running containers dependencies and
image dependencies.

Other options:
$ºdockviz images -tº
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─82cdea7ab5b5 Virtual Size: 103.9 MB
  │   └─5dbd9cb5a02f Virtual Size: 103.9 MB
  │     └─74fe38d11401 Virtual Size: 209.6 MB Tags: ubuntu:12.04, ubuntu:precise
  ├─ef519c9ee91a Virtual Size: 100.9 MB
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─e7206bfc66aa Virtual Size: 98.5 MB
      └─cb12405ee8fa Virtual Size: 98.5 MB
        └─316b678ddf48 Virtual Size: 169.4 MB Tags: ubuntu:13.04, ubuntu:raring

$ºdockviz images -t -lº ← show only labelled images
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─74fe38d11401 Virtual Size: 209.6 MB Tags: ubuntu:12.04, ubuntu:precise
  ├─ef519c9ee91a Virtual Size: 100.9 MB
  │ └─a7cf8ae4e998 Virtual Size: 171.3 MB Tags: ubuntu:12.10, ubuntu:quantal
  │   ├─5c0d04fba9df Virtual Size: 513.7 MB Tags: nate/mongodb:latest
  │   └─f832a63e87a4 Virtual Size: 243.6 MB Tags: redis:latest
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─316b678ddf48 Virtual Size: 169.4 MB Tags: ubuntu:13.04, ubuntu:raring

$ºdockviz images -t -iº ← Show incremental size rather than cumulative
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─82cdea7ab5b5 Virtual Size: 255.5 KB
  │   └─5dbd9cb5a02f Virtual Size: 1.9 KB
  │     └─74fe38d11401 Virtual Size: 105.7 MB Tags: ubuntu:12.04, ubuntu:precise
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─e7206bfc66aa Virtual Size: 190.0 KB
      └─cb12405ee8fa Virtual Size: 1.9 KB
        └─316b678ddf48 Virtual Size: 70.8 MB Tags: ubuntu:13.04, ubuntu:raring


Managing Images
(List all image related commands with: $ docker image)

  $ docker images        # ← List local ("downloaded/installed") images

  $ docker search redis  # ← Search remote images @ Docker Hub: 

  $ docker rmi /${IMG_NAME}:${IMG_VER}  # ← remove (local) image
  $ docker image prune                  # ← removeºallºnon used images

-ºPUSH/PULL Images from Private Registry:º

  -ºPRE-SETUP:º(optional and opinionated, but recommended)
    Define ENV. VARS. in a BºENVIRONMENTº file:

    IMG_VER="1.0"                # ← Defaults to 'latest'
    SESSION_TOKEN="dAhYK9Z8..."  # ← Updated each 'N' hours

    $ cat │  $ cat
    #!/bin/bash                             │  #!/bin/bash
    set -e # ← stop on first error          │  set -e # ← stop on first error
    .BºENVIRONMENTº                         │  .BºENVIRONMENTº
    sudo dockerºloginº\                     │  sudo dockerºloginº\
       -u ${LOGIN_USER} \                   │     -u ${LOGIN_USER} \
       -p ${SESSION_TOKEN} \                │ 
       ${REGISTRY}                          │ 
    sudo dockerºpushº \                     │  sudo dockerºpushº \
       ${REGISTRY}/${USER}/\                │  /\
       /${IMG_NAME}:${IMG_VER}              │  /${IMG_NAME}:${IMG_VER}

   $ docker pull \                          │ $ docker pull \
     ${REGISTRY}/${USER}/\                  │   \
     ${IMG_NAME}:${IMG_VER}                 │   ${IMG_NAME}:${IMG_VER}
Build image
72.7 MB layer ←→ FROM              Put most frequently changed layer
40.0 MB layer ←→ COPY target/dependencies /app/dependencies    down the layer "stack", so that
 9.0 MB layer ←→ COPY target/resources    /app/resources       when uploading new images only it
 0.5 MB layer ←→ COPY target/classes      /app/classes       ← will be uploaded. Probably the most
                                                               frequently changed layer is also 
                                                               the smaller layer
                 ENTRYPOINT java -cp \
                   /app/dependencies/*:/app/resources:/app/classes \
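The layering above as a complete Dockerfile (a sketch; the base image
and the main class name are assumptions):

```dockerfile
FROM openjdk:11-jre-slim
# big layer, changes rarely
COPY target/dependencies /app/dependencies
# changes sometimes
COPY target/resources    /app/resources
# small layer, changes on every build → only this layer is re-uploaded
COPY target/classes      /app/classes
# main class name is hypothetical
ENTRYPOINT ["java", "-cp", \
            "/app/dependencies/*:/app/resources:/app/classes", \
            "com.example.Main"]
```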

$ docker build \
   --build-arg http_proxy=http://...:8080 \
   --build-arg https_proxy=https://..:8080 \
   -t figlet .

$ cat ./Dockerfile
FROM ubuntu

RUN apt-get update
RUN apt-get install -y figlet   # ← install figlet

ENTRYPOINT ["figlet", "-f", "script"]

Note: Unless you tell Docker otherwise, it will do as little work as possible when 
building an image. It caches the result of each build step of a Dockerfile that 
it has executed before and uses the result for each new build.
   If a new version of the base image you’re using becomes available that 
   conflicts with your app, however, you won’t notice that when running the tests in 
   a container using an image that is built upon the older, cached version of the base image.
 BºYou can force the build to look for newer versions of the base image with the "--pull" flagº.
   Because new base images are only available once in a while, it’s not really 
   wasteful to use this argument all the time when building images.
   (--no-cache can also be useful)

  Image tags
Adding a tag to an image essentially adds an alias.
A tag consists of a repository name plus a version;
the "latest" version is used by default if none is given.

Tag image:
  $ docker tag jdeiviz/clock /clock:1.0
Show image
change history
   $ docker history /clock:1.0
Commit image
(Discouraged most of the time; modify the Dockerfile instead)
host-mach $ docker run -it ubuntu bash     # Boot up existing image
container # apt-get install ...            # Apply changes to running instance
host-mach $ docker diff $(docker ps -lq)   # Show changes done in running container
host-mach $ docker commit $(docker ps -lq) # Commit/Confirm changes
host-mach $ docker tag figlet              # Tag new image
host-mach $ docker run -it figlet          # Boot new image instance
Future Improvements
"Rethinking container image delivery"
Container images today are mostly delivered via container registries, 
like Docker Hub for public access, or an internal registry deployment 
within an organization. Crosby explained that Docker images are 
identified with a name, which is basically a pointer to content in a 
given container registry. Every container image comes down to a 
digest, which is a content address hash for the JSON files and layers 
contained in the image. Rather than relying on a centralized registry 
to distribute images, what Crosby and Docker are now thinking about 
is an approach whereby container images can also be accessed and 
shared via some form of peer-to-peer (P2P) transfer approach across machines.

Crosby explained that a registry would still be needed to handle the 
naming of images, but the content address blobs could be transferred 
from one machine to another without the need to directly interact 
with the registry. In the P2P model for image delivery, a registry 
could send a container image to one node, and then users could share 
and distribute images using something like BitTorrent sync. Crosby 
said that, while container development has matured a whole lot since 
2013, there is still work to be done. "From where we've been over the 
past few years to where we are now, I think we'll see a lot of the 
same type of things and we'll still focus on stability and 
performance," he said.

A video of this talk is available.
Advanced Image creation
(base Dockerfile
 for devel)

Modify the base image by adding "ONBUILD" to instructions that should execute
only during the build of an image extending the base image:
| Dockerfile.base                | Dockerfile
| FROM node:7.10-alpine          | FROM node-base
|                                |
| RUN mkdir /src                 | EXPOSE 8000
| WORKDIR /src
| COPY package.json /src
| RUN npm install
| COPY . /src
| CMD [ "npm", "start" ]

  $ docker build -t node-base -f Dockerfile.base . # STEP 1: Compile base image
  $ docker build -t node -f Dockerfile .           # STEP 2: Compile image
  $ docker run -p 8000:8000 -d node
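A sketch of Dockerfile.base with the ONBUILD triggers actually added;
the marked instructions then execute during the build of any image that
does "FROM node-base":

```dockerfile
FROM node:7.10-alpine
RUN mkdir /src
WORKDIR /src
# deferred: these run when a downstream image is built
ONBUILD COPY package.json /src
ONBUILD RUN npm install
ONBUILD COPY . /src
CMD [ "npm", "start" ]
```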
- Multi-stage builds allow for final "clean" images that contain just the
  application binaries, without the build or compilation tools needed
  during the build. This allows for much lighter final images.
                   │ "STANDARD" BUILD              │ multi─stage BUILD                              │
│Dockerfile        │ Dockerfile                    │                                  │
│                  │ FROM golang:alpine            │ FROM ºgolang:alpineº AS Oºbuild─envº           │
│                  │ WORKDIR /app                  │ ADD . /src                                     │
│                  │ ADD . /app                    │ RUN cd /src ; go build ─o app                  │
│                  │ RUN cd /app ; go build ─o app │                                                │
│                  │ ENTRYPOINT ./app              │ FROMºalpineº                                   │
│                  │                               │ WORKDIR /app                                   │
│                  │                               │ COPY ──from=Oºbuild─envº /src/app /app/        │
│                  │                               │ ENTRYPOINT ./app                               │
│ Compile image    │ $ docker build . ─t hello─go  │ $ docker build . ─f ─t hello─goms│
│ Exec container   │ $ docker run hello─go         │ $ docker run hello─goms                        │
│ Check image size │ $ docker images               │ $ docker images                                │
- "Distroless" images contain only your application and its runtime dependencies.
(not package managers, shells,...)
Note: in Kubernetes we can also use init containers with non-light images
      containing a full set of tools (sed, grep, ...) for pre-setup, avoiding
      any need to include them in the final image.
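A minimal multi-stage sketch with a distroless runtime stage
(gcr.io/distroless/static is a real image; paths are illustrative, and
CGO is disabled so the resulting binary is static):

```dockerfile
FROM golang:alpine AS build-env
ADD . /src
RUN cd /src && CGO_ENABLED=0 go build -o app

# runtime stage: no shell, no package manager
FROM gcr.io/distroless/static
COPY --from=build-env /src/app /app
ENTRYPOINT ["/app"]
```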

Stable:                      experimental (2019-06)

Ex java Multi-stage Dockerfile:
 ºFROMºopenjdk:11-jdk-slim  ASOºbuild-envº
  ADD . /app/examples
  WORKDIR /app
  RUN javac examples/*.java
  RUN jar cfe main.jar examples.HelloJava examples/*.class

 ºFROMºopenjdk:11-jre-slim   ← second (runtime) stage; base image assumed
  COPY --from=Oºbuild-envº /app /app
  WORKDIR /app
  CMD ["java", "-jar", "main.jar"]
rootless Buildah
- Building containers in unprivileged environments
  - Buildah is a tool and library for building Open Container Initiative (OCI)
    container images, complementary to Podman, a tool that enables users to
    manage pods, containers, and container images (see "How does rootless
    Podman work?"). Both projects are maintained by the containers
    organization.
  - This article covers rootless Buildah, including the differences between
    it and Podman.

Build speed @[]
- This article addresses a second problem: build speed when using dnf/yum
  commands inside containers.
- Note: the upstream name dnf is used below instead of the downstream
  name yum; the comments apply to both dnf and yum.
pre-configured application stacks for rapid development
of quality microservice-based applications.

Stacks include language runtimes, frameworks, and any additional
libraries and tools needed for local development, providing 
consistency and best practices.

It consists of:

  -ºStack imageº: used for local development.
    It defines the environment and specifies the stack behavior
    during the development lifecycle of the application.

-ºProject templatesº
  - starting point ('Hello World')
  - They can be customized/shared.

- Stack layout example, my-stack: 
  ├──               # describes stack and how to use it
  ├── stack.yaml              # different attributes and which template 
  ├── image/                  # to use by default
  |   ├── config/
  |   |   └── app-deploy.yaml # deploy config using Appsody Operator
  |   ├── project/
  |   |   ├── php/java/...stack artifacts
  |   |   └── Dockerfile      # Final   (run) image ("appsody build")
  │   ├── Dockerfile-stack    # Initial (dev) image and ENV.VARs
  |   └── LICENSE             # for local dev.cycle. It is independent
  └── templates/              # of Dockerfile
      ├── my-template-1/
      |       └── "hello world"
      └── my-template-2/
              └── "complex application"

BºGenerated filesº
  -º".appsody-config.yaml"º. Generated by $º$ appsody initº
    It specifies the stack image used and can be overridden
    for testing purposes to point to a locally built stack.
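A minimal sketch of what the generated file might look like (project name and stack image are illustrative values):

```
# .appsody-config.yaml (illustrative)
project-name: my-project
# point this at a locally built stack image to test stack changes:
stack: appsody/nodejs-express:0.2
```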

BºStability levels:º
  -ºExperimentalº ("proof of concept")
    - Support  appsody init|run|build

  -ºIncubatorº: not production-ready.
    - active contributions and reviews by maintainers
    - Support  appsody init|run|build|test|deploy
    - Limitations described in

  -ºStableº: production-ready.
    - Support all Appsody CLI commands
    - Pass appsody stack 'validate' and 'integration' tests
      on all three operating systems that are supported by Appsody
      without errors. 
      - stack must not bind mount individual files as it is
        not supported on Windows.
      - Specify the minimum Appsody, Docker, and Buildah versions
        required in the stack.yaml
      - Support appsody build command with Buildah
      - Prevent creation of local files that cannot be removed 
        (i.e. files owned by root or other users)
      - Specify explicit versions for all required Docker images
      - Do not introduce any version changes to the content
        provided by the parent container images
        (No yum upgrade, apt-get dist-upgrade, npm audit fix).
         - If package contained in the parent image is out of date,
           contact its maintainers or update it individually.
      - Tag stack with major version (at least 1.0.0)
      - Follow Docker best practices, including:
        - Minimise the size of production images 
        - Use the official base images
        - Images must not have any major security vulnerabilities
        - Containers must be run by non-root users
      - Include a detailed README, documenting:
        - short description
        - prerequisites/setup required
        - How to access any endpoints provided
        - How users with existing projects can migrate to
          using the stack
        - How users can include additional dependencies 
          needed by their application

BºOfficial Appsody Repositories:º

- By default, Appsody comes with the incubator and experimental repositories
  (RºWARNº: Not stable by default). Repositories can be added by running:
  $º$ appsody repo add <name> <url>º
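For example (the repository name and index URL below are hypothetical):

```
$ appsody repo list                                   # show configured repos
$ appsody repo add my-repo https://example.com/my-stacks/index.yaml
```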
alpine how-to
Next image (building the solc Solidity compiler) is just º6Mbytesº in size:
    01	FROM alpine
    02	MAINTAINER chriseth 
    04	RUN \
    05	  apk --no-cache --update add build-base cmake boost-dev git && \
    06	  sed -i -E -e 's/include <sys\/poll.h>/include <poll.h>/' /usr/include/boost/asio/detail/socket_types.hpp  && \
    07	  git clone --depth 1 --recursive -b release                           && \
    08	  cd /solidity && cmake -DCMAKE_BUILD_TYPE=Release -DTESTS=0 -DSTATIC_LINKING=1                             && \
    09	  cd /solidity && make solc && install -s  solc/solc /usr/bin                                               && \
    10	  cd / && rm -rf solidity                                                                                   && \
    11	  apk del sed build-base git make cmake gcc g++ musl-dev curl-dev boost-dev                                 && \
    12	  rm -rf /var/cache/apk/*

  - line 07: º--depth 1º: faster cloning (just last commit)
  - line 07: the cloned repo contains next º.dockerignoreº:
    01 # out-of-tree builds usually go here. This helps improving performance of uploading
    02 # the build context to the docker image build server
    05 # in-tree builds
TODO Classify
- /var/lib/docker/devicemapper/devicemapper/data consumes too much space
$ sudo du -sch /var/lib/docker/devicemapper/devicemapper/data
14G     /var/lib/docker/devicemapper/devicemapper/data
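In loop-lvm setups the data file is sparse, so its apparent size and its on-disk usage differ; `du` reports blocks actually allocated. A quick sketch of the distinction, using a throw-away file rather than the real devicemapper one:

```shell
# Create a sparse file: huge apparent size, almost nothing on disk.
truncate -s 100G sparse-demo.img
ls -lh sparse-demo.img    # apparent size: 100G
du -sh sparse-demo.img    # actual allocated blocks: near zero
rm sparse-demo.img
```

To actually reclaim space consumed by Docker, `docker system prune` removes unused containers, images, and networks.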
Live Restore
Keep containers alive during daemon downtime
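Per the Docker docs, live restore is enabled in the daemon configuration; dockerd then leaves containers running across a daemon restart or upgrade:

```
{
  "live-restore": true
}
```

The setting (in /etc/docker/daemon.json) can be applied without restarting containers by sending SIGHUP to dockerd so it re-reads its configuration.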
Weaveworks is the company that delivers the most productive way for 
developers to connect, observe and control Docker containers.

This repository contains Weave Net, the first product developed by 
Weaveworks, with over 8 million downloads to date. Weave Net enables 
you to get started with Docker clusters and portable apps in a 
fraction of the time required by other solutions.

- Weave Net
  - Quickly, easily, and securely network and cluster containers
    across any environment (on premises, in the cloud, or hybrid)
    with no code or configuration changes.
  - Build an ‘invisible infrastructure’.
  - A powerful cloud native networking toolkit: it creates a virtual network
    that connects Docker containers across multiple hosts and enables their
    automatic discovery. Its subsystems and sub-projects provide
    DNS, IPAM, a distributed virtual firewall and more.

- Weave Scope:
  - Understand your application quickly by seeing it in a real time 
    interactive display. Pick open source or cloud hosted options.
  - Zero configuration or integration required — just launch and go.
  - automatically detects processes, containers, hosts.
    No kernel modules, agents, special libraries or coding.
  - Seamless integration with Docker, Kubernetes, DCOS and AWS ECS.

- Cortex: horizontally scalable, highly available, multi-tenant, 
  long term storage for Prometheus.

- Flux:
  - Flux is the operator that Bºmakes GitOps happen in your clusterº.
    It ensures that the cluster config matches the one in git and
    automates your deployments.
  - continuous delivery of container images, using version control
    for each step to ensure deployment is reproducible, 
    auditable and revertible. Deploy code as fast as your team creates 
    it, confident that you can easily revert if required.
    Learn more about GitOps. 
Clair
Open source project for the static analysis of vulnerabilities in
appc and docker containers.

Vulnerability data is continuously imported from a known set of sources and
correlated with the indexed contents of container images in order to produce
lists of vulnerabilities that threaten a container. When vulnerability data
changes upstream, the previous state and new state of the vulnerability along
with the images they affect can be sent via webhook to a configured endpoint.
All major components can be customized programmatically at compile-time
without forking the project.

Skopeo is a tool for moving container images between different types
of container storage. It allows you to copy container images
between container registries and your internal container registry,
or between different types of storage on your local system. You can
copy to a local container/storage repository, even directly into a
Docker daemon.

- skopeo is a command line utility that performs various operations
  on container images and image repositories.
- skopeo does not require the user to be running as root to do most
  of its operations.
- skopeo does not require a daemon to be running to perform its operations.
- skopeo can work with OCI images as well as the original Docker v2 images.
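Typical invocations look like this (image names are examples; `docker://`, `containers-storage:` and `docker-daemon:` are skopeo transports):

```
$ skopeo inspect docker://docker.io/library/alpine:latest   # metadata, no pull
$ skopeo copy docker://docker.io/library/alpine:latest \
         containers-storage:localhost/alpine:latest         # into local storage
$ skopeo copy docker://docker.io/library/alpine:latest \
         docker-daemon:alpine:latest                        # into a Docker daemon
```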
Podman (RedHat alt)
OCI Spec.
- CLI tool for spawning and running containers according to the OCI
  runtime specification.
Security Tuning
Portainer UI
(See also LazyDocker)
- Portainer, an open-source management interface used to manage a 
  Docker host, Swarm and k8s cluster.
- It's used by software engineers and DevOps teams to simplify and
  speed up software deployments.

Available on LINUX, WINDOWS, & OSX
$ docker container run -d \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
LazyDocker
A simple terminal UI for both docker and docker-compose, written in
Go with the gocui library.
Convoy (Volume Driver for backups)
Introducing Convoy, a Docker Storage Driver for Backup and Recovery of Volumes