Git
External Links
- @[https://git-scm.com/book/en/v2]
- @[https://learnxinyminutes.com/docs/git/]
- @[https://learngitbranching.js.org/?demo]

Related:
See UStore: Distributed Storage with rich semantics!!!
@[https://arxiv.org/pdf/1702.02799.pdf]
Who-is-who
  (Necessarily incomplete but still quite pertinent list of core people and companies)
- Linus Torvalds:
  L.T. initiated the project to fix problems with distributed
  development of the Linux Kernel.
- Junio C. Hamano:  lead git maintainer (+8700 commits)
 @[https://git-blame.blogspot.com/]

Full Journey

Setup Server⅋Clients
- Non-normative ssh access to Git server
 ──────────────────────────────────────────┬──────────────────────────────────────────────────────────
 ºSTEP 1:º                                 │ ºSTEP 2:º
 SSH Server                                │ remote client/s
 ──────────────────────────────────────────┼──────────────────────────────────────────────────────────
  #!/bin/bash                              │   GIT_SSH_COMMAND="ssh "   # ← ENV.VAR To tune SSH *1
                                           │   GIT_SSH_COMMAND="$GIT_SSH_COMMAND Oº-oPort=1234º"
  if [[ $EUID != 0 ]] ; then               │   GIT_SSH_COMMAND="$GIT_SSH_COMMANDGº-i ~/.ssh/key07.keyº"
    echo "exec as root/sudo"               │   GIT_SSH_COMMAND="$GIT_SSH_COMMAND ... "
    exit 1                                 │
  fi                                       │   GIT_URL="myRemoteSSHServer"
  TEAM=team01                              │ BºGIT_URL="${GIT_URL}/var/lib/my_git_team"º
  addgroup ${TEAM}                         │ GºGIT_URL="${GIT_URL}/ourFirstProject"º
  for USER in lyarzas earizonb ; do        │                                                         
     grep "^${USER}:" /etc/passwd          │  ºgit cloneº GºmyUser1º@${GIT_URL}
     if [[ $? != 0 ]]; then                │       ^^^^^
       useradd ${USER} \                   │       create working copy of bare/non-bare repository
          --shell=/usr/bin/git-shell \     │
          --groups ${TEAM} \               │ºMake branch appear on shell prompt :º(☜strongly recomended)
          --password ${SECRET}             │(Must be done just once)
     fi                                    │ ModifyºPS1 promptº(Editing $HOME/.bashrc) to look like:
     # Add to group                        │ PS1="\h[\$(git branch 2˃/dev/null | grep ^\* | sed 's/\*/branch:/')]@\$(pwd |rev| awk -F / '{print \$1,\$2}' | rev | sed s_\ _/_) \$ "
     usermod -a -G ${TEAM} ${USER}         │          └─────────────    ºshow git branchº   ───────────────────┘   └────────────── show current and parent dir. only ────────┘
  done                                     │          $(command ...): bash syntax that executes command ...
                                           │                          and replaces standard output dynamically
BºBASE_GIT_DIR=/var/lib/${TEAM}º           │                          in PS1
GºPROJECT_NAME=project01º                  │  host1 $                           ← PROMPT BEFORE:                          
  mkdir -p ${BASE_GIT_DIR}/${PROJECT_NAME} │  host01[branch: master]@dir1/dir2  ← PROMPT AFTER:
  pushd .                                  │
  cd ${BASE_GIT_DIR}/${PROJECT_NAME} ;     │
  gitºinit --bareº                         │
  popd                                     │
  FIND="find ${BASE_GIT_DIR}"              │
  find ${BASE_GIT_DIR}         \           │
   -exec chown -R root:${TEAM} {} \;       ← Fix group
  find ${BASE_GIT_DIR} -type d \           │
   -exec chmod g+rwx {}           \;       ← Fix permissions
  find ${BASE_GIT_DIR} -type f \           │
   -exec chmod g+rw  {}           \;       ← Fix permissions 
 ──────────────────────────────────────────┴──────────────────────────────────────────────────────────
*1:@[https://stackoverflow.com/questions/5767850/git-on-custom-ssh-port/50854760#50854760]
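
 Alt: persist the same SSH options per repository instead of exporting
 GIT_SSH_COMMAND in every shell (sketch; requires Git 2.10+, port/key are placeholders):

 $ git config core.sshCommand "ssh -oPort=1234 -i ~/.ssh/key07.key"  ← stored in .git/config,
                                                                       used by fetch/pull/push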


Common flows
OºFLOW 1:º(Simplest one) no one else pushed changes before our push)
local ─→ git status ─→ git add . ─→ºgit commitº──────────────────────────────────────────────────────────→ºgit push origin featureXº
edit           ^             ^              ^                                                                     ^
               │             │              │                                                                     │
               │         add file/s         │                                                           push to remote  repository
               │         to next commit     │                                                          (usually origin) and branch
               │                            │                                                          (featureX, master,...)      
           display changes               commit
           pending to commit             new version


OºFlow 2:ºsomeone else pushed changes before our push, but there are no conflicts (each user edited different files)

local ─→ git status ─→ git add . ─→ºgit commitº─→ git pull ──────────────────────────────────────────────→ºgit push origin featureXº
edit                                               ^
                                                   │
                                         - git will abort and warn that changes have been pushed
                                           to the remote repository+branch if we try to skip this step.
                                         - Otherwise an automatic merge is done with our local
                                           changes and any other user's remote changes.

OºFlow 3:ºsomeone else pushed changes before our push, but there are conflicts (each user edited one or more common files)

local ─→ git status ─→ git add . ─→ºgit commitº─→ git pull  ─→ "fix conflicts" ─→ git add ─→ git commit ─→ºgit push origin featureXº
edit                                                                     ^                   ^      
                                                                   │                   │      
                                                                   │             Tell git that
                                                                   │             conflicts were
                                                                   │             resolved
                                                                   │                          
                                                           manually edit   
                                                           conflicting changes
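
Ex. (Flow 3 as a terminal session; file and branch names are placeholders):
  $ git add . ; git commit -m "my change"
  $ git pull                   ← stops with "CONFLICT ... Automatic merge failed"
  $ vi src/app.c               ← manually fix the lines between the conflict markers
  $ git add src/app.c          ← tell git the conflict is resolved
  $ git commit                 ← concludes the merge
  $ git push origin featureX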

OºFlow 4:ºAmend local commit
local → git status ─→ git add . ─→ºgit commitº─→ git commit --amend ─→ ... ─→ git commit ───────────────→ºgit push origin featureXº
edit  


OºGit-Flowº Meta-Flow using widely accepted branches rules to treat with
            common issues when releasing software and managing versions
            REF: @[https://nvie.com/posts/a-successful-git-branching-model/]
 ┌────────────────┬───────────────────────────────────
 │ Standardized   │ Intended use
 │ Branch names   │
 ├────────────────┼───────────────────────────────────
 │feature/...     │ merged back into main body of code
 │                │ when the developer/s are confident
 │                │ with the code quality.
 │                │ If asked to switch to another task just
 │                │ commit changes to this feature/... branch
 │                │ to continue later on.
 ├────────────────┼───────────────────────────────────
 │develop         │ Release Staging Area:
 │                │ Merge completed feature/... branches here,
 │                │ NOT yet released.
 ├────────────────┼───────────────────────────────────
 │release         │ stable (release tagged branch)
 ├────────────────┼───────────────────────────────────
 │hotfix branches │ branches from a tagged release.
 │                │ Fix quickly, merge to release
 │                │ and tag in release with new minor version.
 │                │ Ideally never used since our released
 │                │ software has no bugs ;D 
 └────────────────┴───────────────────────────────────
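
 Ex. (typical feature/... cycle matching the table above; branch name is a placeholder):
  $ git checkout develop
  $ git checkout -b feature/login-form    ← start feature from develop
    ... edit / commit ...
  $ git checkout develop
  $ git merge --no-ff feature/login-form  ← merge back once code quality is OK
  $ git branch -d feature/login-form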

branching
Change branch (checkout)
$ git checkout -b newBranch       ← alt 1, -b: creates new local branch
$ git checkout    existingBranch  ← alt 2,   : switch to existing local branch 
$ git branch -av                  ←  List (-a)ll existing branches
$ git branch -d branchToDelete    ← -d: Delete branch

$ git checkout --track "remote/branch"  ← Create  new tracking branch (TODO)
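Note: since Git 2.23, 'git switch' covers the same use cases (sketch; branch names are placeholders):
$ git switch -c newBranch               ← like 'checkout -b'
$ git switch existingBranch             ← like plain 'checkout'
$ git switch --track origin/featureX    ← create local branch tracking the remote one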

View Change History
$ git log -n 10           ← -n 10. See only 10 last commits.
$ git log -p path_to_file ← See log for file with line change details (-p: Patch applied)

Tags
$ git tag                 ← List tags
→ ...
→ v2.1.0-rc.2
→ v2.1.1
→ v2.1.2
→ ...
$ git tag -a v1.4 -m "..." ← Create annotated tag (recommended)
                                    ^^^^^^^^^
                                - stored as full objects in Git database.
                                - They’re checksummed; contain the tagger name,
                                  email, and date; have a tagging message (-m).
                                - can be signed and verified with GPG.

$ git tag v1.4-lw          ← Create lightweight tag
                                    ^^^^^^^^^^^    
                                  - "alias" for a commit checksum stored in a file
                                  - No other info is kept.

$ git tag -a v1.2 9fceb02  ← Tag some commit in history

ºSharing Tagsº
WARN: by default, the git push command doesn’t transfer tags to remote servers.

$ git push origin v1.5    ← Share/push tag to remote repo
$ git push origin --tags  ← Share/push all the tags
$ git tag -d v1.4-lw      ← Delete local tag (remote tags will persist)
$ git push origin --delete v1.4-lw    ← Delete remote tag. Alt 1
$ git push origin :refs/tags/v1.4-lw  ← Delete remote tag. Alt 2
                  ^^^^^^^^^^^^^^^^^^
                  null value before the colon is
                  being pushed to the remote tag name,
                  effectively deleting it.
$ git checkout v1.4-lw          ← Move back to (DETACHED) commit
$ git show-ref --tags    ← Map tag to commit
→ ...
→ 75509731d28ddbbb6f6cbec6e6b50aeaa413df69 refs/tags/v2.1.0-rc.2
→ 8fc0a3af313d9372fc9b8d3e5dc57b804df0588e refs/tags/v2.1.1
→ 3e1f5b0d4d081db7b40f9817c060ee7220a51633 refs/tags/v2.1.2
→ ...

Comparing diffs
TODO:
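Common cases (quick sketch; refs/paths are placeholders):
$ git diff                        ← working tree vs index (unstaged changes)
$ git diff --staged               ← index vs HEAD (what would be committed)
$ git diff HEAD~1 HEAD            ← between two commits
$ git diff branch01..branch02     ← between two branches
$ git diff v1.4 -- path/to/file   ← single file against a tag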


Git Plumbings
  Summary extracted from:
  @[https://alexwlchan.net/a-plumbers-guide-to-git/1-the-git-object-store/]
  @[https://alexwlchan.net/a-plumbers-guide-to-git/2-blobs-and-trees/]
  @[https://alexwlchan.net/a-plumbers-guide-to-git/3-context-from-commits/]
 
 ┌── Graphical Summary ────────────────────────────────────────────────────┐
 │                                                                         │
 │Rº.git/refs/heads/masterº···→ Gºcommit id02º ···→  Bºtreeº               │
 │                                └────┬────┘                              │
 │                                     ·                                   │
 │Oº.git/HEADº                         ·                                   │
 │      |                              ·                                   │
 │      v                              v                                   │
 │Rº.git/refs/heads/devº   ···→ Gºcommit id01º ···→  Bºtreeº ····→  Blob   │
 │                                                    └──┬─┘               │
 │                             ·                         ·               · │
 │                             ·                         ·               · │
 │                             ·                         ·               · │
 │                             ·                         v               · │
 │                             ·                     Bºtreeº ····→  Blob · │
 │                             ·                        └········→  Blob · │
 │                             └───────────────────┬─────────────────────┘ │
 │                              Object store is composed of blobs (files), │
 │                              trees (directories) and commits (ordered   │
 │                              history of "important" trees).             │
 └─────────────────────────────────────────────────────────────────────────┘
                                                                       
BºObject Storeº
  
  $ git init  ← Creates .git/
                        objects/     ← Object store for blobs, trees and commits
                        refs/
                        HEAD         ← pointer to branch
                        description  ← used by GitWeb
                        info/exclude ← like .gitignore but non-checked in
                        config
                        hooks/
  
  $ echo "..." ˃ animals.txt
  $ git hash-object -w animals.txt           ← 1 Hash file content,
  a37f3f668f09c61b7c12e857328f587c311e5d1d     2 Save content to Object store (plumbing)
  └────────────────┬─────────────────────┘
  saved to  .git/objects/a3/7f3f668f09c61b7c12e857328f587c311e5d1d
                         └───────────────────┬───────────────────┘
                         content-addressable filesystem.
                       RºWARN:º original File name/path is lost. We need
                                trees to save them.
  
  $ git cat-file -p a37f3f668f09c61b7c12e857328f587c311e5d1d
                 ^^
                 pretty print

BºBlobs and treesº
Bº───────────────º
  Git index: staging area (temporary snapshot) of the repository.
             - files modified, but not yet saved to the permanent history.
               ('git add' to add to index, then 'git commit' to take a
                permanent snapshot using porcelain)
              
  $ git update-index --add animals.txt  ← Plumbing to save file to .git/index 
                                          content-addressable hash-object
                                          created automatically if not
                                          yet done.
  $ git ls-files                        ← Porcelain git status for verbose view 
  animals.txt
  
  $ git write-tree                      ← Take permanent snapshot of .git/index
  dc6b8ea09fb7573a335c5fb953b49b85bb6ca985 to object of typeºtreeº
  └────────────────┬─────────────────────┘
  saved to .git/objects/dc/6b8ea09fb7573a335c5fb953b49b85bb6ca985
  
  $ git cat-file -p dc6b8ea09fb7573a335c5fb953b49b85bb6ca985
  tree: list of pointers to other objects (one object per row)
    100644 blob b133......  animals.txt   ← Pointer and "filename" to blob
                                            
    040000 tree 8972......  subdirectory  ← Pointer and "dirname"  to (sub)tree
    
    ^^^^^^ ^^^^ ^^^^^^^     ^^^^^^^^^^^^  
    file   type content     Name of file
    permis.     addressable
                ID

  $ git cat-file -t a37f.....             ← output: blob (file path/name ignored)
  $ git cat-file -t dc6b.....             ← output: tree
                 ^^
                show object type
  
   ☞KEY-POINT:☜
  BºIf we start at a tree, we can rebuild everything it points to.º
  BºAll that rest is mapping trees to some "context" in history.º

BºContext from commitsº
  - We can have many trees to start rebuilding from but just a few
    of them have historic importance.
  
  $ echo "initial commit" | git commit-tree $tree_id ← create commit from a tree 
  65b080f1fe43e6e39b72dd79bda4953f7213658b           
  └────────────────┬─────────────────────┘
  saved to .git/objects/65/b080f1fe43e6e39b72dd79bda4953f7213658b
  
  $ git cat-file -t 65b...
  commit
  $ git cat-file -p 65b...
  $ git cat-file -p 65b080f1fe43e6e39b72dd79bda4953f7213658b
  tree 11e2f923d36175b185cfa9dcc34ea068dc2a363c   ← Pointer to tree of interest 
  author    Alex Chan ... 1520806168 +0000        ← Context with author/commiter/
  committer Alex Chan ... 1520806168 +0000          creation time/ ...
  ...
  
  BºBuilding a linear history (ordered list of commits)º:
  $ git update-index --add ... ← add to index
  $ git write-tree             ← Take permanent snapshot
  $ echo "commit message" | \
    git commit-tree $tree_id \
  Bº-p $previous_commitº     ← ☞BºAdd commit order!!!º (next to previous one)
  fd9274...........  ← git cat-file -p will show a line
                       parent $previous_commit_id

BºRefs and branchesº
@[https://alexwlchan.net/a-plumbers-guide-to-git/4-refs-and-branches/]
  $ ls .git/refs
  heads/ tags/
  
  $ git update-ref refs/heads/master \       ← Create named reference to commit
        fd9274dbef2276ba8dc501be85a48fbfe6fc3e31
  $ cat  .git/refs/heads/master
  fd92...  (pointer to commit)
  
  Now 'master' is an alias for the fd92... commit ID.
  
  git cat-file -p master "==" git cat-file -p fd92...
  
  $ git rev-parse master                   ← check value of ref
  fd92...
  
 ☞BºA ref in the heads folder is more commonly called a branch.º

  $ git update-index --add ... 
  $ git write-tree
  $ echo "..." | git commit-tree   ← (Not yet added to master branch)
                                      branches/refs aren’t automatically 
                                      advanced to point at new commits in
                                      plumbing.
  $ git update-ref \                ← update heads/master to point to 
  refs/heads/master $new_commit_id    latest commit.
  b023d92829d5d076dc31de5cca92cf0bd5ae8f8e
  
 
  - BºWorking with multiple branchesº
  $ git update-ref refs/heads/dev 65b0...  ← Create a second head/ref (==branch)
                                             pointing to an existing commit id
  
  $ git branch
    dev
  * master                           ← current branch is determined by the
                                       contents of .git/HEAD. 
                                       $ cat .git/HEAD. 
                                       ref: refs/heads/master
  
  $ git symbolic-ref HEAD \          ← HEAD now point to dev ref/branch
    refs/heads/dev 
  
  'HEAD' reference can be used as a shortcut for "commit hashes". 

gitbase: Query Git with SQL
- Gitbase is a Go-powered open source project that allows SQL queries to be run on Git repositories.

Revert Changes
https://www.systutorials.com/2845/how-to-revert-changes-in-git/
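Quick sketch (commit ids / paths are placeholders):
$ git restore path/to/file        ← discard unstaged changes in a file (Git 2.23+)
$ git reset --hard HEAD~1         ← drop the last LOCAL commit (history rewrite)
$ git revert abc1234              ← new commit undoing abc1234 (safe on shared branches)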
Debug Changes grep/bisect/blame 
REF: @[https://git-scm.com/book/en/v2/Appendix-C:-Git-Commands-Debugging]
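Quick sketch (patterns / revisions are placeholders):
$ git grep -n "TODO" v1.4         ← search a string in a given revision
$ git blame -L 10,20 src/app.c    ← who last touched lines 10..20
$ git bisect start
$ git bisect bad                  ← current commit is broken
$ git bisect good v1.4            ← last known good revision
  ... test, repeat 'git bisect good|bad' until the culprit commit is found ...
$ git bisect reset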
GitOps
Single Src of Truth
@[https://www.weave.works/blog/gitops-operations-by-pull-request]
GitOps is implemented by using the Git distributed version control system 
(DVCS) as aºsingle source of truthºfor declarative infrastructure and 
applications. Every developer within a team can issue pull requests against a 
Git repository, and when merged, a "diff and sync" tool detects a difference 
between the intended and actual state of the system. Tooling can then be 
triggered to update and synchronise the infrastructure to the intended state.
merge vs rebase vs cherry-pick
REF:
@[https://stackoverflow.com/questions/9339429/what-does-cherry-picking-a-commit-with-git-mean]
@[https://git-scm.com/docs/git-cherry-pick]


       A → B → C → D → E → HEAD branch01
               │ 
               └─→ H → I → J      branch02
       ───────────────────────────────────────────────────────────────────────
ºMERGE   º"mix" full list of commits

        A → B → C → D → E → HEAD branch01        $ git checkout branch01
                │             ↑                  $ git merge    branch02
                └─→ H → I → J ┘  branch02
       ───────────────────────────────────────────────────────────────────────
ºREBASE:º"Append" full list of commits to head

    A → B → C → D → E → → H → I → J →         HEAD branch01    
                                             
                                             $ git checkout branch02
                                             $ git rebase   branch01  ← replay H,I,J on top of E
                                             $ git checkout branch01
                                             $ git merge    branch02  ← fast-forward

──────────────────────────────────────────────────────────────────────────────
ºCHERRY-PICK:º"Pick unique-commit" from branch and apply to another branch

    A → B → C → D →ºEº→ → HEAD  branch01   
            │                                $ git checkout branch02
            └─→ H → I → J →ºEº  branch02     $ git cherry-pick -x branch01   ← pick commit E (tip of branch01)
                                                              └┬┘ 
                                               - Useful if "source" branch is
                                                public. Generates 
                                                standardized commit message 
                                                allowing co-workers to still 
                                                keep track of the origin of 
                                                the commit avoiding merge 
                                                conflicts in the future
                                              - Notes attached to the commit do NOT
                                                follow the cherry-pick. Use
                                                $ git notes copy "from" "to"
Notes
@[http://alblue.bandlem.com/2011/11/git-tip-of-week-git-notes.html]
pretty branch print
@[https://stackoverflow.com/questions/1057564/pretty-git-branch-graphs]

$ git log --all --decorate --oneline --graph

$ git log --graph --abbrev-commit --decorate --date=relative --all
Gitea (Gogs)
painless self-hosted Git service
- Fork of gogs, since it was unmaintained.
Quick-clone
$ git clone --depth=1 ${URL_to_Git_repo}
            ^^^^^^^^^
            "fast clone"
            Create shallow clone with
            history truncated to the
            specified number of commits. 
            Implies --single-branch.
            To also clone submodules shallowly,
            use --shallow-submodules.


Quick-tag-clone

$ git clone --depth=1   --branch '1.3.2'   --single-branch ${URL_to_Git_repo}
                        ^^^^^^^^^^^^^^^^   ^^^^^^^^^^^^^^^
                        point to branch    Clone only history
                        hash or tag        leading to the tip
                        (default: HEAD)     of a single branch




Git LFS (Large Files extension)
- Git Large File Storage (LFS) replaces large files such as audio samples,
  videos, datasets, and graphics with text pointers inside Git, while storing 
  the file contents on a remote server like GitHub.com or GitHub Enterprise
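  Quick sketch (requires the git-lfs binary; the pattern is a placeholder):
  $ git lfs install                 ← set up LFS filters (once per user)
  $ git lfs track "*.psd"           ← writes the pattern to .gitattributes
  $ git add .gitattributes design.psd
  $ git commit -m "add design file" ; git push origin master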
4 secrets encryption tools
@[https://www.linuxtoday.com/security/4-secrets-management-tools-for-git-encryption-190219145031.html]
Encrypt Git repos
@[https://www.atareao.es/como/cifrado-de-repositorios-git/]
Garbage Collector
-  Git occasionally does garbage collection as part of its normal operation, 
by invoking git gc --auto. The pre-auto-gc hook is invoked just before the 
garbage collection takes place, and can be used to notify you that this is 
happening, or to abort the collection if now isn’t a good time.
Scalable Git VFS
@[https://github.com/Microsoft/VFSForGit]
@[https://vfsforgit.org/]
- Microsoft project to make managing massive Git
  repositories (hundreds of gigabytes) possible.
Sparse-Checkout
- sparse-checkout (Git v2.25+) allows checking out just a subset
  of a given monorepo, speeding up commands like git pull and
  git status.
@[https://github.blog/2020-01-17-bring-your-monorepo-down-to-size-with-sparse-checkout/]
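  Quick sketch (Git 2.25+; repo URL and directory names are placeholders):
  $ git clone --no-checkout ${URL_to_Git_repo} repo01 ; cd repo01
  $ git sparse-checkout init --cone
  $ git sparse-checkout set services/api docs   ← only these dirs are materialised
  $ git checkout master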

GPG signed commits 
@[https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work]
BºGPG PRESETUPº

  See @[General/cryptography_map.html?id=pgp_summary] for a summary on
  how to generate and manage pgp keys.

BºGIT PRESETUPº
  $ git config --global \        
        user.signingkey 0A46826A  ← STEP 1: Set default key for tags+commits sign.
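
  Optionally, all commits can be signed by default (sketch; Git 2.0+):
  $ git config --global \
        commit.gpgsign true          ← sign every commit without needing -S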

  $ git tagº-s v1.5º-m 'my signed 1.5 tag'  ← BºSigning tagsº
            └──┬──┘                           (follow instructions to sign)
          replaces -a/--annotate

  $ git show v1.5
  tag v1.5
  Tagger: ...
  Date:   ...
  
  my signed 1.5 tag
Oº-----BEGIN PGP SIGNATURE-----                                   º
OºVersion: GnuPG v1                                               º
Oº                                                                º
OºiQEcBAABAgAGBQJTZbQlAAoJEF0+sviABDDrZbQH/09PfE51KPVPlanr6q1v4/Utº
Oº...                                                             º
Oº=EFTF                                                           º
Oº-----END PGP SIGNATURE-----                                     º
  
  commit ...

  $ git tagº-vºv1.4.2.1  ← GºVerify tagº
            └┘             Note: signer’s pub.key must be in local keyring
  object 883653babd8ee7ea23e6a5c392bb739348b1eb61
  type commit
  ...
Gºgpg: Signature made Wed Sep 13 02:08:25 2006 PDT using DSA key ID º
GºF3119B9A                                                          º
Gºgpg: Good signature from "Junio C Hamano ˂junkio@cox.net˃"        º
Gºgpg:                 aka "[jpeg image of size 1513]"              º
GºPrimary key fingerprint: 3565 2A26 2040 E066 C9A7  4A7D C0C6 D9A4 º
GºF311 9B9A                                                         º
  └──────────────────────────┬────────────────────────────────────┘
   Or error similar to next one will be displayed:
     gpg: Can't check signature: public key not found
   error: could not verify the tag 'v1.4.2.1'

  $ git commit -aº-Sº-m 'Signed commit'  ← BºSigning Commits (git 1.7.9+)º

  $ git log --show-signature -1          ← GºVerify Signaturesº
  commit 5c3386cf54bba0a33a32da706aa52bc0155503c2
Gºgpg: Signature made Wed Jun  4 19:49:17 2014 PDT using RSA key IDº
Gº0A46826A                                                         º
Gºgpg: Good signature from "1stName 2ndName (Git signing key)      º
Gº˂user01@gmail.com˃"                                              º
  Author: ...
  ...
$º$ git log --pretty="format:%h %G? %aN  %s"º
                                ^^^
                                check and list found signatures
         Ex. Output:
    5c3386cGºGº1stName 2ndName  Signed commit
    ca82a6dRºNº1stName 2ndName  Change the version number
    085bb3bRºNº1stName 2ndName  Remove unnecessary test code
    a11bef0RºNº1stName 2ndName  Initial commit

  
You can also use the -S option with the git merge command to sign the 
resulting merge commit itself. The following example both verifies 
that every commit in the branch to be merged is signed and 
furthermore signs the resulting merge commit.

                                                                 
$ git merge \             ← Gº# Verify signature at merge timeº
 º--verify-signaturesº\     
  -S \                    ← Sign merge itself.
  signed-branch-to-merge  ← Commit must have been signed. 
                           
$ git pull  \             ← Gº# Verify signature at pull timeº
  --verify-signatures     
Git Hooks
Client Hooks
@[https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks]

BºClient-Side Hooksº
  - not copied when you clone a repository
    - to enforce a policy, do it on the server side
  - committing-workflow hooks:
    -ºpre-commitºhook:
      - First script to be executed.
      - used to inspect the snapshot that's about to be committed.
        - Check you’ve NOT forgotten something
        - make sure tests run
        - Exiting non-zero from this hook aborts the commit
      (can be bypassed with git commit --no-verify flag)
    -ºprepare-commit-msgºhook:
      - Params:
        - commit_message_path (template for final commit message)
        - type of commit
        - commit SHA-1 (if this is an amended commit)
      - run before the commit message editor is fired up 
        but after the default message is created.
      - It lets you edit the default message before the
        commit author sees it.
      - Used for non-normal-commits with auto-generated messages
        - templated commit messages
        - merge commits
        - squashed commits
        - amended commits
    -ºcommit-msgºhook:
        - commit_message_path (written by the developer)
    -ºpost-commitºhook:
      - (you can easily get the last commit by running git log -1 HEAD)
      - Generally, this script is used for notification or something similar.
  
  -ºemail-workflowº hooks:
    - invoked by ºgit amº
                  ^^^^^^
                  Apply a series of patches from a mailbox
                  prepared by git format-patch
  
    -ºapplypatch-msgº: 
      - Params:
        - temp_file_path containing the proposed commit message.
    -ºpre-applypatchº:
      - confusingly, it is run after the patch is 
        applied but before a commit is made.
      - can be used to inspect the snapshot before making the commit,
        run tests, or inspect the working tree with this script.
    -ºpost-applypatchº:
      - runs after the commit is made.
      - Useful to notify a group or the author of the patch
        you pulled in that you’ve done so. 
  
  - Others:
    -ºpre-rebaseºhook:
      - runs before you rebase anything
      - Can be used to disallow rebasing any commits
        that have already been pushed.
    -ºpost-rewriteºhook:
      - Params:
        - command_that_triggered_the_rewrite: 
          - It receives a list of rewrites on stdin.
      - run by commands that replace commits
        such as 'git commit --amend' and 'git rebase'
        (though not by git filter-branch).
      - This hook has many of the same uses as the
        post-checkout and post-merge hooks.
    -ºpost-checkoutºhook:
      - Runs after successful checkout
      - you can use it to set up your working directory
        properly for your project environment.
        This may mean moving in large binary files that 
        you don't want source controlled, auto-generating
        documentation, or something along those lines.
    -ºpost-mergeºhook:
      - runs after a successful merge command.
      - You can use it to restore data in the working tree
        that Git can't track, such as permissions data.
        It can likewise validate the presence of files 
        external to Git control that you may want copied 
        in when the working tree changes.
    -ºpre-pushºhook:
      - runs during git push, after the remote refs
        have been updated but before any objects have
        been transferred.
      - It receives the name and location of the remote
        as parameters, and a list of to-be-updated refs
        through stdin.
      - You can use it to validate a set of ref updates before
        a push occurs (a non-zero exit code will abort the push).
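
  Ex. minimal pre-commit hook sketch (save as .git/hooks/pre-commit, chmod +x;
  the test command is a placeholder):
    #!/bin/sh
    # abort the commit if the test suite fails
    if ! make test ; then
      echo "tests failed: commit aborted (bypass with git commit --no-verify)"
      exit 1     # non-zero exit status aborts the commit
    fi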
Server-Side Hooks
(system administrator only)
- Useful to enforce nearly any kind of policy in repository.

- exit non-zero to rollback/reject push 
  and print error message back to the client.

ºpre-receive hookº:
 - first script to run 
 - INPUT: STDIN reference list 
 - Rollback all references on non-zero exit

 - Ex.
   - Ensure none of the updated references are non-fast-forwards.
   - do access control for all the refs and files being modified
     by the push.

ºupdate hookº:
 - similar to the pre-receive hook, but ºruns once for each branch theº
  ºpush is trying to updateº (usually just one branch is updated)
 - INPUT: this script takes three arguments:
   - reference name (for branch),
   - SHA-1 reference pointed to before the push, 
   - SHA-1 reference user is trying to push.
 - Rollback a given reference on non-zero exit
   (others will be updated).

ºpost-receiveº
 - can be used to update other services or notify users.
 - INPUT: STDIN reference list
 - Useful for:
   - emailing a list.
   - trigger CI/CD.
   - update ticket system 
      (commit messages can be parsed for "open/closed/...")
 - RºWARNº: can't stop the push process.
   client  will block until completion.
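
 Ex. minimal 'update' hook sketch (save as hooks/update in the bare repo, chmod +x;
 the protected branch name is a placeholder):
   #!/bin/sh
   refname="$1" ; oldrev="$2" ; newrev="$3"   # arguments provided by git
   if [ "$refname" = "refs/heads/release" ]; then
     echo "direct pushes to release are forbidden"
     exit 1                                   # non-zero exit rejects this ref only
   fi
   exit 0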
Advanced
revert/rerere
Submodules
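- Quick sketch (URL and path are placeholders):
  $ git submodule add https://example.com/lib.git vendor/lib  ← recorded in .gitmodules
  $ git submodule update --init --recursive                   ← after cloning a superproject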
Subtrees
- TODO: how subtrees differ from submodules
- how to use the subtree to create a new project from split content
Interactive rebase
- how to use the rebase functionality to alter commits in various ways.
- how to squash multiple commits down into one. 
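- Ex. squashing the last 3 commits into one (sketch):
  $ git rebase -i HEAD~3    ← opens an editor with one 'pick' line per commit;
                              change 'pick' to 'squash' (or 's') on commits 2 and 3,
                              save, then edit the combined commit message.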
Supporting files
- Git attributes file and how it can be used to identify binary files,
  specify line endings for file types, implement custom filters, and 
  have Git ignore specific file paths during merging.
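- Ex. minimal .gitattributes sketch (patterns are placeholders):
  | *.sh   text eol=lf      ← normalize line endings for shell scripts
  | *.png  binary           ← never diff/merge as text
  | *.c    diff=cpp         ← use the built-in C/C++ diff hunk headers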
Cregit token level blame
@[https://www.linux.com/blog/2018/11/cregit-token-level-blame-information-linux-kernel]
cregit: Token-Level Blame Information for the Linux Kernel
Blame tracks lines, not tokens; cregit blames at the token level (inside a line).
Implementations
JGit (client)
@[https://wiki.eclipse.org/JGit/User_Guide]
- Eclipse Distribution License - v 1.0
- lightweight, pure Java library implementing the Git version control system
  - repository access routines
  - network protocols
  - core version control algorithms

- suitable for embedding in any Java application
Gerrit (by Google) 
@[https://www.gerritcodereview.com/index.html]
Gerrit is a Git Server that provides:
- Code Review:
  - One dev. writes code, another one is asked to review it.
    (Goal is cooperation, not fault-finding)
  @[https://docs.google.com/presentation/d/1C73UgQdzZDw0gzpaEqIC6SPujZJhqamyqO1XOHjH-uk/]
  - UI for seeing changes.
  - Voting panel.


- Access Control on the Git Repositories.
- Extensibility through Java plugins.
@[https://www.gerritcodereview.com/plugins.html]


Gerrit does NOT provide:
- Code Browsing
- Code Search
- Project Wiki
- Issue Tracking
- Continuous Build
- Code Analyzers
- Style Checkers
Client implementations
JGit
Repository = object store + refs

  final FileRepositoryBuilder builder = new FileRepositoryBuilder();
  final Repository repository = 
      builder
     .setGitDir(new File(configRepoPath01))  // ← Set root file path
     .readEnvironment()                      // ← scan GIT_* env.vars 
     .findGitDir()                           // ← scan up file system tree
     .build();


Bº┌────────────┐º
Bº│Plumbing API│º
Bº└────────────┘º
 AnyObjectId/ObjectId: Represent the SHA-1 (256 in 2.29+) of tag/commit/trees/blobs

 final ObjectId head = repository.resolve("HEAD");

 final Ref HEAD =        // ← Ref: single object Id representing a tag/commit/tree/blob
     repository.getRef("refs/heads/master");
   
 final RevWalk walk01 =                              // ← walks commit graph in order.
     new RevWalk(repository);

 final RevCommit commit =                            // get commit from Hash ID
     walk01.parseCommit(objectIdOfCommit);

 final RevTag tag =                                  // get commit for tag
     walk01.parseTag(objectIdOfTag);

 final RevTree tree =
     walk01.parseTree(objectIdOfTree);

Bº┌─────────────┐º
Bº│Porcelain API│º (org.eclipse.jgit.api package)
Bº└─────────────┘º

  final Git git01 = new Git(db);
  final AddCommand addCom = git01.add();  // command obj. used to add files to index.
  addCom.addFilepattern("someDirectory").call();

  final CommitCommand commitCom = git01.commit();    //  git commit
  commitCom.setMessage("initial commit").call();
           .setAuthor()....
           .setCommitter()....
           .setAll()...

  RevTag tag = git01.tag().setName("tag").call();    //  git tag
                          .setMessage()
                          .setTagger()
                          .setObjectId()
                          .setForceUpdate()
                          .setSigned()               ← Rºnot supported yet, throws exceptionº


  final Iterable<RevCommit> log =         // Walks commit graph (porcelain). Ops. available:
        git01.log().call();                             add(AnyObjectId start), 
                                                        addRange(AnyObjectId since, AnyObjectId until)

  // TODO: MergeCommand (git-merge)


- org.eclipse.jgit.ant ANT TASKS:

 - Ex.  Finding children of a commit
   PlotWalk revWalk = new PlotWalk(repo());
   ObjectId rootId = (branch==null)
                     ? repo().resolve(Constants.HEAD) // org.eclipse.jgit.lib.Constants
                     : branch.getObjectId();
   RevCommit root = revWalk.parseCommit(rootId);
   revWalk.markStart(root);
   PlotCommitList<PlotLane> plotCommitList =
                     new PlotCommitList<PlotLane>();
   plotCommitList.source(revWalk);
   plotCommitList.fillTo(Integer.MAX_VALUE);
   return revWalk;


BºReducing memory usage with RevWalkº
  RevWalk and RevCommit are designed to be light-weight, but can still consume a lot of memory for big repos.
  Some tips:
  -ºRestrict walked revision graph to "only needed"º:
    - Ex: looking for commits
          in 'refs/heads/master'           ← markStart        () to refs/heads/master
  not yet in 'refs/remotes/origin/master'  ← markUninteresting() to refs/remotes/origin/master
                                             └────────┬────────┘
                                            RevWalk traversal will only parse the commits necessary,
                                            avoiding looking further back in history.

      final ObjectId 
              from = repository.resolve("refs/heads/master"         ),
                to = repository.resolve("refs/remotes/origin/master");

      walk01.markStart(walk.parseCommit(from));
      walk01.markUninteresting(walk.parseCommit(to));

  -ºDiscard body of a commitº:
    walk01.setRetainBody(false);   ← Ignore author, committer, message, signature?, ...
    Useful when computing just merge base between branches or 'git rev-list' like commands.

    - consider also extracting any needed data and calling ºdispose()º on the RevCommit instance;
      otherwise JGit keeps a compact internal byte[] UTF-8 representation (vs String UTF-16).

    Ex:
    ...
    Set<String> authorEmails = new HashSet<>();
    for (RevCommit commit : walk01) {
        authorEmails.add(
           commit.getAuthorIdent().getEmailAddress()
        );
        commit.dispose();
    }

BºSubclassing RevWalk/RevCommitº
  - Subclassing allows attaching additional data to a commit.
    - use RevWalk→createCommit() method to create new instance of RevCommit subclass. 
    - Put additional data as fields in RevCommit subclass 
      (and so avoiding the non-type safe HashMap to translate RevCommit|ObjectId ←→  additional data fields)
    - Ex:
      public class ReviewedRevision extends RevCommit {
      
          private final Date reviewDate; // ← New field
          private ReviewedRevision(AnyObjectId id, Date reviewDate) {
              super(id);
              this.reviewDate = reviewDate;
          }
      
          public List<String> getReviewedBy() { return getFooterLines("Reviewed-by"); }
      
          public Date getReviewDate() { return reviewDate; }
      
          public static class Walk extends RevWalk {
      
              public Walk(Repository repo) { super(repo); }
      
              @Override
              protected RevCommit createCommit(AnyObjectId id) {
                  return new ReviewedRevision(id, getReviewDate(id));
              }
      
              private Date getReviewDate(AnyObjectId id) { ... }
          }
      }

BºClean up revision walkº
  - reusing an existing object map is much faster but can consume a lot of memory.
    Speed vs memory consumption must be balanced.
  - To clean up:
    for (RevCommit commit : walk01) { ... }
  Oºwalk01.dispose();º


BºJGit CookBook Recipes:º TODO:(0) @[https://github.com/centic9/jgit-cookbook]
Non-Classified
git-pw
@[http://jk.ozlabs.org/projects/patchwork/]
@[https://www.collabora.com/news-and-blog/blog/2019/04/18/quick-hack-git-pw/]
- git-pw requires patchwork v2.0, since it uses the 
  new REST API and other improvements, such as understanding
  the difference between patches, series and cover letters,
  to know exactly what to try and apply.

- python-based tool that integrates git and patchwork.

  $ pip install --user git-pw

CONFIG:
  $ git config pw.server https://patchwork.kernel.org/api/1.1
  $ git config pw.token YOUR_USER_TOKEN_HERE

ºDaily work exampleº
finding and applying series
- Alternative 1: Manually
  - We could use patchwork web UI search engine for it.
    - Go to "linux-rockchip" project 
    - click on "Show patches with" to access the filter menu.
    - filter by submitter. 

- Alternative 2: git-pw (REST API wrapper)
  - $ git-pw --project linux-rockchip series list "dynamically"
    → ID    Date         Name              Version   Submitter
    → 95139 a day ago    Add support ...   3         Gaël PORTAY
    → 93875 3 days ago   Add support ...   2         Gaël PORTAY
    → 3039  8 months ago Add support ...   1         Enric Balletbo i Serra


  - Get some more info:
    $ git-pw series show 95139
    → Property    Value
    → ID          95139
    → Date        2019-03-21T23:14:35
    → Name        Add support for drm/rockchip to dynamically control the DDR frequency.
    → URL         https://patchwork.kernel.org/project/linux-rockchip/list/?series=95139
    → Submitter   Gaël PORTAY
    → Project     Rockchip SoC list
    → Version     3
    → Received    5 of 5
    → Complete    True
    → Cover       10864561 [v3,0/5] Add support ....
    → Patches     10864575 [v3,1/5] devfreq: rockchip-dfi: Move GRF definitions to a common place.
    →     10864579 [v3,2/5] : devfreq: rk3399_dmc: Add rockchip, pmu phandle.
    →     10864589 [v3,3/5] devfreq: rk3399_dmc: Pass ODT and auto power down parameters to TF-A.
    →     10864591 [v3,4/5] arm64: dts: rk3399: Add dfi and dmc nodes.
    →     10864585 [v3,5/5] arm64: dts: rockchip: Enable dmc and dfi nodes on gru.


  - Applying the entire series (or at least trying to):
    $ git-pw series apply 95139
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    fetch all the patches in the series, and apply them in the right order.
GIT Commit Standard Emojis
@[https://gist.github.com/parmentf/035de27d6ed1dce0b36a]
ºCommit type               Emoji                    Graphº
 Initial commit           :tada:                      🎉 
 Version tag              :bookmark:                  🔖 
 New feature              :sparkles:                  ✨ 
 Bugfix                   :bug:                       🐛 
 Metadata                 :card_index:                📇 
 Documentation            :books:                     📚 
 Documenting src          :bulb:                      💡 
 Performance              :racehorse:                 🐎 
 Cosmetic                 :lipstick:                  💄 
 Tests                    :rotating_light:            🚨 
 Adding a test            :white_check_mark:          ✅ 
 Make a test pass        :heavy_check_mark:           ✔️  
 General update           :zap:                       ⚡️ 
 Improve format           :art:                       🎨 
 /structure                                              
 Refactor code            :hammer:                    🔨 
 Removing stuff           :fire:                      🔥 
 CI                       :green_heart:               💚 
 Security                 :lock:                      🔒 
 Upgrading deps.         :arrow_up:                   ⬆️  
 Downgrad. deps.         :arrow_down:                 ⬇️  
 Lint                     :shirt:                     👕 
 Translation              :alien:                     👽 
 Text                     :pencil:                    📝 
 Critical hotfix          :ambulance:                 🚑 
 Deploying stuff          :rocket:                    🚀 
 Work in progress         :construction:              🚧 
 Adding CI build system   :construction_worker:       👷 
 Analytics|tracking code  :chart_with_upwards_trend:  📈 
 Removing a dependency    :heavy_minus_sign:          ➖ 
 Adding a dependency      :heavy_plus_sign:           ➕ 
 Docker                   :whale:                     🐳 
 Configuration files      :wrench:                    🔧 
 Package.json in JS       :package:                   📦 
 Merging branches         :twisted_rightwards_arrows: 🔀 
 Bad code / need improv.  :hankey:                    💩 
 Reverting changes        :rewind:                    ⏪ 
 Breaking changes         :boom:                      💥 
 Code review changes      :ok_hand:                   👌 
 Accessibility            :wheelchair:                ♿️ 
 Move/rename repository  :truck:                      🚚 
GitHub: Custom Bug/Feature-request templates
RºWARNº: Non standard (Vendor lock-in) Microsoft extension.
º$ cat .github/ISSUE_TEMPLATE/bug_report.mdº
 | ---
 | name: Bug report
 | about: Create a report to help us improve
 | title: ''
 | labels: ''
 | assignees: ''
 | 
 | ---
 | 
 | **Describe the bug**
 | A clear and concise description of what the bug is.
 | 
 | **To Reproduce**
 | Steps to reproduce the behavior:
 | 1. Go to '...'
 | 2. Click on '....'
 | 3. Scroll down to '....'
 | 4. See error
 | 
 | **Expected behavior**
 | A clear and concise description of what you expected to happen.
 | 
 | ...
  
º$ cat .github/ISSUE_TEMPLATE/feature_request.mdº
  | ---
  | name: Feature request
  | about: Suggest an idea for this project
  | title: ''
  | labels: ''
  | assignees: ''
  | 
  | ---
  | 
  | **Is your feature request related to a problem? Please describe.**
  | A clear and concise description of what the problem is.... 
  | 
  | **Describe the solution you'd like**
  | A clear and concise description of what you want to happen.
  | 
  | **Describe alternatives you've considered**
  | A clear and concise description of any alternative solutions or features you've considered.
  | 
  | **Additional context**
  | Add any other context or screenshots about the feature request here.

º$ cat ./.github/pull_request_template.mdº
 ...

º$ ./.github/workflows/* º
RºWARNº: Non standard (Vendor lock-in) Microsoft extension.
 @[https://docs.github.com/en/free-pro-team@latest/actions/learn-github-actions]
Git Secrets
https://github.com/awslabs/git-secrets#synopsis
- Prevents you from committing passwords and other sensitive 
  information to a git repository.
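- Quick sketch (requires the git-secrets helper installed on PATH):
  $ git secrets --install          ← adds pre-commit/commit-msg/prepare-commit-msg hooks
  $ git secrets --register-aws     ← adds common AWS credential patterns
  $ git secrets --scan             ← scans the current tree for configured patterns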
Signing Git commits
@[https://dev.to/sdmg15/gpg-signing-your-git-commits-3epc]
What's new
-º2.28:º
@[https://github.blog/2020-07-27-highlights-from-git-2-28/]

 - Git 2.28 takes advantage of 2.27 commit-graph optimizations to
   deliver a handful of sizeable performance improvements.


-º2.27:º
 - commit-graph file format was extended to store changed-path Bloom
   filters. What does all of that mean? In a sense, 
   this new information helps Git find points in history that touched a 
   given path much more quickly (for example, git log -- ˂path˃, or git 
   blame). 

-º2.25:º
@[https://www.infoq.com/news/2020/01/git-2-25-sparse-checkout/]
  500+ changes since 2.24. 

 º[performance]º 
  Sparse checkouts are one of several approaches Git supports to improve   [scalability]
  performance when working with big (huge or monolithic) repositories.     [monolithic]
  They keep the working directory small by specifying which
  directories to check out. This is useful, for example, with repositories
  containing thousands of directories.

  See also: http://schacon.github.io/git/git-read-tree.html#_sparse_checkout

-º2.23:º
  https://github.blog/2019-08-16-highlights-from-git-2-23
Forgit: Interactive Fuzzy Finder
@[https://www.linuxuprising.com/2019/11/forgit-interactive-git-commands-with.html]

- It takes advantage of the popular "fzf" fuzzy finder to provide
  interactive git commands, with previews.
Isomorphic Git: 100% JS client 
@[https://isomorphic-git.org/] !!!

- Features:
  - clone repos
  - init new repos
  - list branches and tags
  - list commit history
  - checkout branches
  - push branches to remotes
  - create new commits
  - git config
  - read+write raw git objects
  - PGP signing
  - file status
  - merge branches
Git Monorepos
(Big) Monorepos in Git:
https://www.infoq.com/presentations/monorepos/
https://www.atlassian.com/git/tutorials/big-repositories
Git: Symbolic Ref best-patterns
@[https://stackoverflow.com/questions/4986000/whats-the-recommended-usage-of-a-git-symbolic-reference]
GitHub: Search by topic
https://help.github.com/en/github/searching-for-information-on-github/searching-topics

Ex: search by topic "troubleshooting" and language "java":
https://github.com/topics/troubleshooting?l=java 
Gitsec
@[https://github.com/BBVA/gitsec]

gitsec is an automated secret discovery service for git that helps 
you detect sensitive data leaks.

gitsec doesn't directly detect sensitive data but uses already 
available open source tools with this purpose and provides a 
framework to run them as one.
Unbreakable Branches
@[https://github.com/AmadeusITGroup/unbreakable-branches-jenkins]

- plugins for Bitbucket and Jenkins trying to fix the following problem:

  Normal Pull Request workflow:
  Open pull-request (PR) to merge changes in target-branch
    → (build automatically triggered)
      → build OK
        repo.owner merges PR
         → second build triggered on target-branch
           →Rºsecond build randomly fails      º
            Rºleading to broken targeted branchº
              └───────────────┬───────────────┘ 
               Reasons include:
               - Race condition: Parallel PR was merged in-between
               - Environment issue (should never happen)
               - lenient dependency declaration got another version
                 leading to a build break

  - If the Jenkins job is eligible to unbreakable build
    (by having environment variables such as UB_BRANCH_REF)
    at the end of the build a notification to Bitbucket is
    sent according to the build status.
    (or  manually through two verbs: ubValidate|ubFail)

- Difference with stashnotifier-plugin:
  - stashnotifier-plugin reports a status on a commit.
  - unbreakable build uses a different, dedicated API on Bitbucket.

- On the Bitbucket side:
  - GIT HEAD@target-branch moved to  top-of-code to be validated in PR
    (target-branch can then always have a successful build status).

- Security restrictions added to Bitbucket:
  (once you activate the unbreakable build on a branch for your repository)
  -  merge button replaced by merge-request-button to queue the build.
  -  The merge will happen automatically at the end of the build if the build succeeds
  -  direct push on the branch is forbidden
  -BºMerge requests on different PRs will process the builds sequentiallyº

- Prerequisites to run the code locally:
  - Maven (tested against 3.5)
  - Git should be installed

- PRE-SETUP:
  - Install UnbreakableBranch plugin at Bitbucket
  - The bitbucketBranch-source Jenkins plugin should be
    patched so that mandatory environment variables are
    injected. RºNote that this plugin hasn't been released yetº
Filter-repo
@[https://github.com/newren/git-filter-repo/]
- Create new repository from old ones, keeping just the 
  history of a given subset of directories.

(Replace: (buggy)filter-branch @[https://git-scm.com/docs/git-filter-branch])
- Python script for rewriting history:
  - cli for simple use cases.
  - library for writing complex tools.

- Presetup:
  - git 2.22.0+  (2.24.0+ for some features)
  - python 3.5+

  $ git filter-repo \
       --path src/ \                         ← commits not touching src/ removed
       --to-subdirectory-filter my-module \  ← rename  src/** → my-module/src/**
       --tag-rename '':'my-module-'          ← add 'my-module-' prefix to any tags
                                               (avoid any conflicts later merging 
                                                into something else)

BºDesign rationale behind filter-repoº:
  - No existing tools with similar features.
  - [Starting report] Provide analysis before pruning/renaming.
  - [Keep vs. remove] Do not just allow removing selected paths,
                      but also keeping only certain ones.
    (removing all paths except a subset can be painful.
     We need to specify all paths that ever existed in
     any version of the repository)
  - [Renaming] It should be easy to rename paths:
  - [More intelligent safety].
  - [Auto shrink] Automatically remove old cruft and repack the 
    repository for the user after filtering (unless overridden); 
  - [Clean separation] Avoid confusing users (and prevent accidental 
    re-pushing of old stuff) due to mixing old repo and rewritten repo 
    together.
  - [Versatility] Provide the user the ability to extend the tool  
    ... rich data structures (vs hashes, dicts, lists, and arrays
        difficult to manage in shell)
    ... reasonable string manipulation capabilities 
  - [Old commit references] Provide a way for users to use old commit 
    IDs with the new repository.
  - [Commit message consistency] Rewrite commit messages pointing to other 
    commits by ID.
  - [Become-empty pruning] commits that become empty should be pruned.
  - [Speed]

- Work on filter-repo and predecessor has driven
  improvements to fast-export|import (and occasionally other 
  commands) in core git, based on things filter-repo needs to do its 
  work:
 
BºManual Summaryº:
@[https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html]
- Overwrite entire repository history using user-specified filters.
  (WARN: deletes original history)
  - Use cases:
    - stripping large files (or large directories or large extensions)
    - stripping unwanted files by path (sensitive secrets)   [secret]
    - Keep just an interesting subset of paths, remove anything else.
    - restructuring file layout. Ex:
      - move all files subdirectory
      - making subdirectory as new toplevel.
      - Merging two directories with independent filenames.
      - ...
    - renaming tags
    - making mailmap rewriting of user names or emails permanent
    - making grafts or replacement refs permanent
    - rewriting commit messages
Shell Scripting
Reference Script
Source: @[https://github.com/earizon/utility_shell_scripts/blob/master/scriptTemplate.sh]

Bº#!/bin/bashº
  
  OUTPUT="$(basename $0).$(whoami).log"     # ← $("command") takes command STDOUT as effective value 
                                            #   $(whoami) will avoid collisions  among
                                            #   different users even if writing to the 
                                            #   same directory and serves as audit trail.
                                            #   This happens frequently in DevOps when
                                            #   executing in sudo/non-sudo contexts.
 ºexecº3˃⅋1   # Copy current STDOUT to ⅋3
 ºexecº4˃⅋2   # Copy current STDERR to ⅋4
 ºechoº"Cloning STDOUT/STDERR to ${OUTPUT}"
 ºexecº⅋˃ ˃(tee -a "$OUTPUT") # Redirect to STDOUT and file REF:
 ºexecº2˃⅋1  
  echo "message logged to file ⅋ console"
 
  GLOBAL_EXIT_STATUS=0
  WD=$(pwd)             # TIP: write down the current work dir and use it
                        #      to avoid problems when changing dir ("cd")
                        #      randomly throughout the script execution
  
OºFILE_RESOURCE_01="${WD}/data/temp_data.csv"º
QºLOCK="/tmp/$(basename $0).lock"º
  function funCleanUp() {
    set +e
    echo "Cleaning resource and exiting"
    rm -fOº${FILE_RESOURCE_01}º
  }
 ºtrapºfunCleanUp EXIT   # ← Clean any resource on exit

 
  if [ ! ${STOP_ON_ERR_MSG} ] ; then
    #  default and recommended behaviour: Fail fast
    #  REF: @[https://en.wikipedia.org/wiki/Fail-fast]
    STOP_ON_ERR_MSG=true    ······························┐
  fi                                                      |
  ERR_MSG=""                                              |
  function funThrow {                                     |
      if [[ $STOP_ON_ERR_MSG != false ]] ; then      ←-···{
        echo "ERR_MSG DETECTED: Aborting now due to "     |
        echo -e ${ERR_MSG}                                |
        if [[ $1 != "" ]]; then                           |
            GLOBAL_EXIT_STATUS=$1 ;                       |
        elif [[ $GLOBAL_EXIT_STATUS == 0 ]]; then         |
            GLOBAL_EXIT_STATUS=1 ;                        |
        fi                                                |
        exit $GLOBAL_EXIT_STATUS  ←·······················┘
      else
        echo "ERR_MSG DETECTED: "
        echo -e ${ERR_MSG}
        echo "WARN: CONTINUING WITH ERR_MSGS "
  
        GLOBAL_EXIT_STATUS=1 ;
      fi
      ERR_MSG=""
  }

Qºexec 100˃${LOCK}º                       # Simple linux-way to use locks.
Qºflock 100º                              # First script execution will hold the lock
  if [[ $? != 0 ]] ; then                 # Next ones will have to wait. Use -w nSecs
      ERR_MSG="COULD NOT ACQUIRE LOCK"    # to fail after timeout or -n to fail-fast
      funThrow 10 ;                       # lock will automatically be released on
  fi                                      # exit. (no need to unlock manually)
                                          # REF

Bº# SIMPLE WAY TO PARSE ARGUMENTS WITH while-loopº
  while [  $#  -gt 0 ]; do  # $#  number of arguments
    case "$1" in
      -l|--list)
        echo "list arg"
        shift 1  # ºconsume arg         ←   $# = $#-1 
        ;;
      -p|--port)
        export PORT="${2}:"
      Bºshift 2º #← consume arg+value   ←   $# = $#-2 
        ;;
      -h|--host)
        export HOST="${2}:"
      Bºshift 2º #← consume arg+value   ←   $# = $#-2 
        ;;
      *)
        echo "non-recognised option '$1'"
      Bºshift 1º #← consume arg         ←   $# = $#-1 
    esac
  done
  set -e # exit on error (any command returning a non-zero status)
  
  function preChecks() {
    # Check that ENV.VARs and parsed arguments are in place
    if [[ ! ${HOME} ]] ; then ERR_MSG="HOME ENV.VAR NOT DEFINED" ; funThrow 41 ; fi
    if [[ ! ${PORT} ]] ; then ERR_MSG="PORT ENV.VAR NOT DEFINED" ; funThrow 42 ; fi
    if [[ ! ${HOST} ]] ; then ERR_MSG="HOST ENV.VAR NOT DEFINED" ; funThrow 43 ; fi
    set -u # From here on, ANY UNDEFINED VARIABLE IS CONSIDERED AN ERROR.
  }
  
  function funSTEP1 {
    echo "STEP 1: $HOME, PORT:$PORT, HOST: $HOST"
  }
  function funSTEP2 { # throw ERR_MSG
    ERR_MSG="My favourite ERROR@funSTEP2"
    funThrow 2
  }
  
  
  cd $WD ; preChecks
  cd $WD ; funSTEP1
  cd $WD ; funSTEP2
  
  echo "Exiting with status:$GLOBAL_EXIT_STATUS"
  exit $GLOBAL_EXIT_STATUS
Init Vars
Complete shell parameter expansion list available at:
- @[http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html]
var1=$1 # init var1 with the first positional param
var1=$# # init var1 with the number of params
var1=$! # init var1 with the PID of the last background command
var1=${parameter:-word} # == $parameter if set/non-null, else 'word' (expansion)
var1=${parameter:=word} # == $parameter if set/non-null, else 'word' (expansion), and parameter=word
var1=${parameter:?word} # == $parameter if set/non-null, else 'word' (expansion) is written to STDERR and the shell exits
var1=${parameter:+word} # == 'word' (expansion) if parameter is set/non-null, else empty
${parameter:offset}
${parameter:offset:length}
# Substring Expansion: expands to up to 'length' characters of the value
# of parameter, starting at the character specified by 'offset'.
# If parameter is '@', an indexed array subscripted by '@' or '*', or an
# associative array name, the results differ (see the manual above).
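A minimal sketch combining these expansions to give a script safe defaults
(the variable names, path and API_TOKEN requirement are made up for illustration):

  #!/bin/bash
  TARGET_DIR=${1:-/tmp/build}      # ← use $1, or '/tmp/build' if no argument was given (hypothetical path)
  : ${LOG_LEVEL:=info}             # ← ':' is a no-op; the expansion assigns the default to LOG_LEVEL
  API_TOKEN=${API_TOKEN:?"must be exported before running this script"}  # ← abort with message if unset
  echo "dir=${TARGET_DIR} log=${LOG_LEVEL} token_set=${API_TOKEN:+yes}"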

Parse arguments
#Oº$#º number of arguments
while [Oº$#º -gt 0 ]; do
  echo $1
  case "$1" in
    -l|--list)
      echo "list arg"
      shift 1  # ºconsume arg         ← Oº$# = $#-1º
      ;;
    -p|--port)
      export PORT="${2}:"
      echo "port: $PORT"
      shift 2  # ºconsume arg+valueº  ← Oº$# = $#-2º
      ;;
    *)
      echo "non-recognised option"
      shift 1  # ºconsume argº        ← Oº$# = $#-1º
  esac
done
Temporary Files
TMP_FIL=$(mktemp)  
TMP_DIR=$(mktemp --directory)
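A small usage sketch, following the same trap-cleanup pattern as the reference
script above (the downloaded URL and file names are illustrative):

  #!/bin/bash
  TMP_FILE=$(mktemp)                              # e.g. /tmp/tmp.Xa12Bc
  TMP_DIR=$(mktemp --directory)                   # e.g. /tmp/tmp.Qw34Er
  trap 'rm -rf "${TMP_FILE}" "${TMP_DIR}"' EXIT   # remove both on any exit path

  curl -s https://example.com/data.csv -o "${TMP_FILE}"   # hypothetical download
  sort "${TMP_FILE}" > "${TMP_DIR}/sorted.csv"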

Barrier synchronization
UUID:[9737647d-58dc-4999-8db4-4cd3c2682edd] 
Wait for background jobs to complete example:
(
  ( sleep 3 ; echo "job 1 ended" ) ⅋
  ( sleep 1 ; echo "job 2 ended" ) ⅋
  ( sleep 1 ; echo "job 3 ended" ) ⅋
  ( sleep 9 ; echo "job 4 ended" ) ⅋
  wait            # alt.1: Wait for all background jobs to complete
# wait %1 %2 %3   # alt.2: Wait only for jobs 1,2,3. Do not wait for job 4
# wait ${!}       # alt.3: Wait only for the last background job ($!)
  echo "All subjobs ended"
) ⅋
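If, besides the barrier, you also need each job's exit status, you can record
the PIDs and wait on them one by one. A minimal sketch (the sleep/exit
subshells stand in for real jobs):

  PIDS=()
  ( sleep 2 ; exit 0 ) &  PIDS+=($!)     # job A (placeholder command)
  ( sleep 1 ; exit 3 ) &  PIDS+=($!)     # job B (placeholder command, fails)
  FAILED=0
  for PID in "${PIDS[@]}"; do
    wait "${PID}" || { echo "job ${PID} failed (status $?)" ; FAILED=1 ; }
  done
  echo "barrier passed, FAILED=${FAILED}"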
bash REPL loop
REPL stands for Read-eval-print loop: More info at:
@[https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop]
    # Define the list of menu items
   ºselectºOºlanguageººinºC# Java PHP Python Bash Exit
   ºdoº
      #Print the selected value
      if [[ Oº$languageº == "Exit" ]] ; then
        exit 0
      fi
      echo "Selected language is $language"
   ºdoneº
trap: Exit script cleanly
@[https://www.putorius.net/using-trap-to-exit-bash-scripts-cleanly.html]
Bash-it
@[https://www.tecmint.com/bash-it-control-shell-scripts-aliases-in-linux/]
- bundle of community Bash commands and scripts for Bash 3.2+,
  which comes with autocompletion, aliases, custom functions, ....
- It offers a useful framework for developing, maintaining and
  using shell scripts and custom commands for your daily work.
Bash 4+ Maps
 (also known as associative array or hashtable)

  Bash Maps can be used as "low code" key-value databases.
  Very useful for daily config/devops/testing task. 
  Ex:
  #!/bin/bash            # ← /bin/sh will fail. Bash 4+ specific

Bºdeclare -A map01º      # ←ºSTEP 1)ºdeclare Map

  map01["key1"]="value1" # ←ºSTEP 2)ºInit with some elements.
  map01["key2"]="value2" #   Visually map01 will be a table similar to:
  map01["key3"]="value3" #   key  │ value
                         #   ─────┼───────
                         #   key1 │ value1  ← key?, value? can be any string
                         #   key2 │ value2
                         #   key3 │ value3
  
    keyN="key2"          # ←ºSTEP 3)ºExample Ussage
    ${map01[${key_var}]} #   ← fetch value for key "key2"
    ${!map01[@]}         #   ← fetch keys  . key2 key3 key1
    ${map01[@]}          #   ← fetch values. (value2 value3 value1)

    for keyN in "${!map01[@]}";      # ← walk over keys:
    do                               # (output)
      echo "$keyN : ${map01[$keyN]}" # key1 : value1
    done                             # key2 : value2       
                                     # key3 : value3      
test (shell conditionals)
(man test summary from GNU coreutils)

test

  EXPRESSION  # ← EXPRESSION true/false sets the exit status.
[ EXPRESSION ]

-n STRING                  # STRING length ˃0
                           # (or just STRING)
-z STRING                  #  STRING length == 0
STRING1 = STRING2          # String equality
STRING1 != STRING2         # String in-equality


INTEGER1 -eq INTEGER2      # ==
INTEGER1 -ge INTEGER2      # ˃=
INTEGER1 -gt INTEGER2      # ˃
INTEGER1 -le INTEGER2      # ˂=
INTEGER1 -lt INTEGER2      # ˂
INTEGER1 -ne INTEGER2      # !=
^^^^^^^^
BºNOTE:º INTEGER can be -l STRING (length of STRING)

ºFILE TEST/COMPARISONº
RºWARN:º Except -h/-L, all FILE-related tests dereference symbolic links.
-e FILE                    #ºFILE existsº
-f FILE                    # FILE exists and is aºregular fileº
-h FILE                    # FILE exists and is aºsymbolic linkº (same as -L)
-L FILE                    #                                     (same as -h)
-S FILE                    # FILE exists and is aºsocketº
-p FILE                    #ºFILE exists and is a named pipeº
-s FILE                    # FILE exists and has aºsize greater than zeroº


-r FILE                    # FILE exists andºread  permissionºis granted
-w FILE                    # FILE exists andºwrite permissionºis granted
-x FILE                    # FILE exists andºexec  permissionºis granted

FILE1  -ef FILE2           # ← same device and inode numbers
FILE1 -nt FILE2            # FILE1 is newer (modification date) than FILE2
FILE1 -ot FILE2            # FILE1 is older (modification date) than FILE2
-b FILE                    # FILE exists and is block special
-c FILE                    # FILE exists and is character special
-d FILE                    #ºFILE exists and is a directoryº
-k FILE                    # FILE exists and has its sticky bit set


-g FILE                    # FILE exists and is set-group-ID
-G FILE                    # FILE exists and is owned by the effective group ID
-O FILE                    # FILE exists and is owned by the effective user ID
-t FD                      # file descriptor FD is opened on a terminal
-u FILE                    # FILE exists and its set-user-ID bit is set

BOOLEAN COMBINATION
RºWARNº: -a/-o are inherently ambiguous. Prefer the shell operators:
EXPRESSION1 -a EXPRESSION2 # AND # 'test EXPR1 ⅋⅋ test EXPR2' is preferred
EXPRESSION1 -o EXPRESSION2 # OR  # 'test EXPR1 || test EXPR2' is preferred


RºWARN,WARN,WARNº: your shell may have its own built-in version of test and/or '[',
                   which usually supersedes the version described here.
                   Use /usr/bin/test to force non-builtin usage.

Full documentation at: @[https://www.gnu.org/software/coreutils/]
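A short sketch combining a few of these tests in a script (the config path and
the 100-line threshold are made-up examples):

  #!/bin/bash
  CONF="/etc/myapp/app.conf"                     # hypothetical path
  if [ ! -f "${CONF}" ]; then                    # regular file exists?
    echo "missing config ${CONF}" >&2 ; exit 1
  fi
  if [ -s "${CONF}" ] && [ -r "${CONF}" ]; then  # non-empty AND readable
    echo "config OK"
  fi
  N=$(wc -l < "${CONF}")
  if [ "${N}" -gt 100 ]; then                    # integer comparison
    echo "config unusually long (${N} lines)"
  fi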


Curl Summary

- Support for DICT, FILE, FTP, FTPS, GOPHER, HTTP GET/POST, HTTPS, HTTP2, IMAP,
           IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS,
           SMTP, SMTPS, TELNET,  TFTP, unix socket protocols.
- proxy support.
- kerberos support.
- HTTP  cookies, etags
- file transfer resume.
- Metalink
- SMTP / IMAP Multi-part 
- HAProxy PROXY protocol
- ...

BºHTTP Exampleº
  $ curl http://site.{one,two,three}.com  \    
         --silent                         \    ← Disable progress meter
         --anyauth                        \    ← make curl figure out auth. method
                                                 (--basic, --digest, --ntlm, and --negotiate)
                                                 not recommended if uploading from stdin since 
                                                 data can be sent 2+ times
                                                 - Used together with -u, --user.
         --cacert file_used_to_verify_peer \   ← Alt: Use CURL_CA_BUNDLE
                                                 - See also --capath dir, --cert-status,  --cert-type PEM|DER|...
         --cert certificate[:password]     \   ← Use cert to identify curl client
         --ciphers list of TLS_ciphers     \
         --compressed                      \   ← (HTTP) Request compressed response. Save uncompressed response. 
         --config text_file_with_curl_args \   
         --connect-timeout sec_number      \
         --create-dirs                     \   ← When using --output 
         --data-binary data                \   ← HTTP POST alt 1: posts data with no extra processing whatsoever.
                                                 Or @data_file
         --data-urlencode data             \   ← HTTP POST alt 2
         --data           data             \   ← HTTP POST alt 3: post data in the same way that a browser  does for formats
                                                 (content-type application/x-www-form-urlencoded)
         --header ... 
         --limit-rate speed
         --location                        \  ← follow redirects 
         --include                         \  ← Include the HTTP response headers in the output.
                                                             See also -v, --verbose.
        --oauth2-bearer ...                \  ← (IMAP POP3 SMTP) 
        --fail-early                       \  ← Fail as soon as possible
        --continue-at -                    \  ← Continue a partial download
        --output out_file                  \  ← Write output to file (Defaults to stdout)


        curl --list-only https://..../dir1/ ← List contents of remote dir
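A hedged example of a typical JSON POST request (URL, token env.var and
payload are made up):

  $ curl https://api.example.com/v1/items \
         --silent --fail \                          ← exit non-zero on HTTP errors
         --header "Content-Type: application/json" \
         --header "Authorization: Bearer ${TOKEN}" \ ← hypothetical token env.var
         --data '{"name":"item01","qty":3}' \       ← request body
         --output response.json                     ← write response to file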


  
Kapow!: Shell Script to HTTP API
@[https://github.com/BBVA/kapow]
by BBVA-Labs Security team members.
" If you can script it, you can HTTP it !!!!"

 Ex:
 Initial Script:
   $ cat /var/log/apache2/access.log | grep 'File does not exist' 
 
 To expose it as HTTP:

   $ cat search-apache-errors
   #!/usr/bin/env sh
   kapow route add /apache-errors - ˂˂-'EOF'
       cat /var/log/apache2/access.log | grep 'File does not exist' | kapow set /response/body
   EOF

 Run HTTP Service like:

 $ kapow server search-apache-errors  ← Client can access it like 
                                        curl http://apache-host:8080/apache-errors
                                        [Fri Feb 01 ...] [core:info] File does not exist: ../favicon.ico
                                        ...

    We can share information without having to grant SSH access to anybody.


BºRecipe: Run script as a given user:º
  # Note that `kapow` must be available under $PATH relative to /some/path
  kapow route add /chrooted\
      -e 'sudo --preserve-env=KAPOW_HANDLER_ID,KAPOW_DATA_URL \
          chroot --userspec=sandbox /some/path /bin/sh -c' \
          -c 'ls / | kapow set /response/body'


WebHook (TODO)
@[https://github.com/adnanh/webhook]
- lightweight incoming webhook server to run shell commands.
- You can also pass data from the HTTP request (such as headers, payload or
  query variables) to your commands.
- webhook also allows you to specify rules which have to be satisfied in order
  for the hook to be triggered.
- For example, if you're using GitHub or Bitbucket, you can use webhook to set
  up a hook that runs a redeploy script for your project on your staging
  server whenever you push changes to the master branch of your project.
- Guides featuring webhook:
  - Webhook and JIRA by @perfecto25                                          [jira]
  - Trigger Ansible AWX job runs on SCM (e.g. git) commit by @jpmens         [ansible]
  - Deploy using GitHub webhooks by @awea                                    [git][github]
  - Setting up Automatic Deployment and Builds Using Webhooks by Will Browning
  - Auto deploy your Node.js app on push to GitHub in 3 simple steps
    by Karolis Rusenas                                                       [git][github]
  - Automate Static Site Deployments with Salt, Git, and Webhooks by Linode  [git]
  - Using Prometheus to Automatically Scale WebLogic Clusters on
    Kubernetes by Marina Kogan                                               [prometheus][k8s][weblogic]
  - Github Pages and Jekyll - A New Platform for LACNIC Labs by Carlos Martínez Cagnazzo
  - How to Deploy React Apps Using Webhooks and Integrating Slack on
    Ubuntu by Arslan Ud Din Shafiq                                           [slack]
  - Private webhooks by Thomas
  - Adventures in webhooks by Drake
  - GitHub pro tips by Spencer Lyon                                          [github]
  - XiaoMi Vacuum + Amazon Button = Dash Cleaning by c0mmensal
  - Set up Automated Deployments From Github With Webhook by Maxim Orlov
  - VIDEO: Gitlab CI/CD configuration using Docker and adnanh/webhook to
    deploy on VPS - Tutorial #1 by Yes! Let's Learn Software
CI/CD
Jenkins 101
External Links
@[https://jenkins.io/doc/]
@[https://jenkins.io/doc/book/]
@[https://jenkins.io/user-handbook.pdf]
@[https://github.com/sahilsk/awesome-jenkins]

- @[https://jenkins.io/doc/book/using/using-credentials/]         Using credentials
- @[https://jenkins.io/doc/book/pipeline/running-pipelines]       Running Pipelines
- @[https://jenkins.io/doc/book/pipeline/multibranch]             Branches and Pull Requests
- @[https://jenkins.io/doc/book/pipeline/docker]                  Using Docker with Pipeline
- @[https://jenkins.io/doc/book/pipeline/shared-libraries]        Extending with Shared Libraries
- @[https://jenkins.io/doc/book/pipeline/development]             Pipeline Development Tools
- @[https://jenkins.io/doc/book/pipeline/syntax]                  Pipeline Syntax
- @[https://jenkins.io/doc/book/pipeline/pipeline-best-practices] Pipeline Best Practices
- @[https://jenkins.io/doc/book/pipeline/scaling-pipeline]        Scaling Pipelines
- @[https://jenkins.io/doc/book/blueocean]                        Blue Ocean
- @[https://jenkins.io/doc/book/blueocean/getting-started]        Getting started with Blue Ocean
- @[https://jenkins.io/doc/book/blueocean/creating-pipelines]     Creating a Pipeline
- @[https://jenkins.io/doc/book/blueocean/dashboard]              Dashboard
- @[https://jenkins.io/doc/book/blueocean/activity]               Activity View
- @[https://jenkins.io/doc/book/blueocean/pipeline-run-details]   Pipeline Run Details View
- @[https://jenkins.io/doc/book/blueocean/pipeline-editor]        Pipeline Editor
- @[https://jenkins.io/doc/book/managing]                         Managing Jenkins
- @[https://jenkins.io/doc/book/managing/system-configuration]    Configuring the System
- @[https://jenkins.io/doc/book/managing/security]                Managing Security
- @[https://jenkins.io/doc/book/managing/tools]                   Managing Tools
- @[https://jenkins.io/doc/book/managing/plugins]                 Managing Plugins
- @[https://jenkins.io/doc/book/managing/cli]                     Jenkins CLI
- @[https://jenkins.io/doc/book/managing/script-console]          Script Console
- @[https://jenkins.io/doc/book/managing/nodes]                   Managing Nodes
- @[https://jenkins.io/doc/book/managing/script-approval]         In-process Script Approval
- @[https://jenkins.io/doc/book/managing/users]                   Managing Users
- @[https://jenkins.io/doc/book/system-administration]            System Administration
- @[https://jenkins.io/doc/book/system-administration/backing-up] Backing-up/Restoring Jenkins
- @[https://jenkins.io/doc/book/system-administration/monitoring] Monitoring Jenkins
- @[https://jenkins.io/doc/book/system-administration/security]   Securing Jenkins
- @[https://jenkins.io/doc/book/system-administration/with-chef]  Managing Jenkins with Chef
- @[https://jenkins.io/doc/book/system-administration/with-puppet]Managing Jenkins with Puppet

Pipeline injected ENV.VARS
- full list of ENV.VARs:
  ${BASE_JENKINS_URL}/pipeline-syntax/globals#env

$env.BUILD_ID       :
$env.BUILD_NUMBER

$env.BUILD_TAG      : String of jenkins-${JOB_NAME}-${BUILD_NUMBER}.
                                                      ^^^^^^^^^^^^ .
                      Useful to subclassify resource/jar/etc output artifacts

$env.BUILD_URL      : where the results of this build can be found
                      Ex.: http://buildserver/jenkins/job/MyJobName/17/

$env.EXECUTOR_NUMBER: Unique number ID for current executor in same machine

$env.JAVA_HOME      : JAVA_HOME configured for a given job

$env.JENKINS_URL    :
$env.JOB_NAME       : Name of the project of this build
$env.NODE_NAME      : 'master', 'slave01',...
$env.WORKSPACE      : absolute path for workspace
Dockerized Jenkins
    docker run \
      --rm \
      -u root \
      -p 8080:8080 \
      -v jenkins-data:/var/jenkins_home \             ← if the 'jenkins-data' Docker volume
      \                                                  doesn't exist it will be created
      \
      -v /var/run/docker.sock:/var/run/docker.sock \  ← Jenkins need control of Docker to
      \                                                 launch new Docker instances during
      \                                                 the build process
      -v "$HOME":/home \
      --name jenkins01 \                              ← Allows "entering" the container with:
      jenkinsci/blueocean                               $ docker exec -it jenkins01 bash
Export/import jobs

@[https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI]

********************************
* ALT 1: Using jenkins-cli.jar:*
********************************
  ºPRE-REQUISITES:º
   - jenkins-cli.jar version must match Server version
   - jnlp ports need to be open

  JENKINS_CLI="java -jar ${JENKINS_HOME}/war/WEB-INF/jenkins-cli.jar -s ${SERVER_URL}"
  ${JENKINS_CLI}    get-job job01 > job01.xml
  ${JENKINS_CLI} create-job job01 < job01.xml
                                    ^^^^^^^^^
                                - Can be stored in git,...
  RºWARN:º
  - REF: @[https://stackoverflow.com/questions/8424228/export-import-jobs-in-jenkins]
    There are issues with bare naked ampersands in the XML such as
    when you have & in Groovy code.


****************
* ALT 2: CURL  *
****************
SERVER_URL = "http://..."                      # ← Without Authentication.
SERVER_URL = "http://${USER}:${API_TOKEN}@..." # ← With    Authentication.

$ curl -s ${SERVER_URL}/job/JOBNAME/config.xml > job01.xml #ºExportº
                           º^^^^^^^^^^^^^^^^^^º

$ curl -X POST "${SERVER_URL}/createItem?name=JOBNAME" \        #ºImportº
       --header "Content-Type: application/xml" -d @job01.xml

*********************
* ALT 3: Filesystem *
* (backup)          *
*********************
tar cjf _var_lib_jenkins_jobs.tar.bz2 /var/lib/jenkins/jobs
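A possible helper (not from the Jenkins docs) to export every job in one go:
it lists job names through the Jenkins JSON API and reuses the config.xml
endpoint shown above. jq, the server URL and credentials are assumptions:

  #!/bin/bash
  # Assumptions: jq installed, API token available, job names without spaces.
  SERVER_URL="http://${USER}:${API_TOKEN}@jenkins.example.com"   # hypothetical server
  mkdir -p jobs_backup
  for JOB in $(curl -s --globoff "${SERVER_URL}/api/json?tree=jobs[name]" \
               | jq -r '.jobs[].name'); do
      curl -s "${SERVER_URL}/job/${JOB}/config.xml" > "jobs_backup/${JOB}.xml"
  done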
Jenkins Pipelines
Jenkinsfile
REF: @[https://jenkins.io/doc/book/pipeline/jenkinsfile/]
     @[https://jenkins.io/doc/pipeline/steps/] 
     Reference of (hundreds of) Plugins compatible with Pipeline 
Commented Declarative Syntax Example
    pipeline {
       environment {
           T1 = 'development' ←······ Env.var with global visibility
           CC = """${sh(      ←······ Env.var set from shell STDOUT.
              ºreturnStdout:ºtrue, ←· trailing whitespace appended.
               script: 'echo "clang"' .trim() removes it.
               )}"""

        AWS_ACCESS_KEY_ID     =  ←··· Secret management
         ºcredentialsº('aws-key-id')← Protected by Jenkins.
        AWS_SECRET_ACCESS_KEY =
         ºcredentialsº('...')

       }

       parameters {  ←··············· allows modifyingºat runtimeº
         string(name: 'Greeting',     as ${params.Greeting}
                defaultValue: 'Hello',
                description: 'Hi!')
       }

      agent any    ←················· allocate anºexecutor and workspaceº
                                      It ensures that the src. repo. is imported to
                                      the workspace for the following stages
      stages {
        stage('Build') { ←··········· transpile/compile/package/... using
                                      (make, maven, gradle, ...) plugin

            environment {    ←······· Env.var with local stage visibility,
                                      also available to invoked shell scripts.
              msg1 = "Building..."
              EXIT = """←············ Init to returned status code from shell
              ${sh(                   execution.
               ºreturnStatus:ºtrue,
                script: 'exit 1'
              )}"""
            }
            steps {
            echo "º${msg1}º:..."←···· shell like interpolation for double-coutes
                sh 'printenv'   ←···· msg1 and EXIT available here
                sshagent (
                  credentials: ['key1']  ←····┬─ ssh with help of agent
                )                              │  (ssh-agent plugin needed)
                {                              │
                   sh 'ssh user@remoteIP' ←····┘
                }

            }
        }
        stage('Test') {
          steps {
              echo 'Testing..'
          }
        }
        stage('Deploy') {
          when {
            expression {
              currentBuild.result == null
           || currentBuild.result == 'SUCCESS'
            }
          }
          steps {
              sh 'make publish'
          }
        }
      }
      post { ←······················· BºHandling errorsº
          always {
              junit '**/target/*.xml'
          }
         ºfailureº{
              mail to: 'team@example.com',
                   subject: '...'
          }
          unstable { ...  }
          success  { ...  }
          changed  { ...  }
      }
    }

──────────────   ────────────   ────────────────────────────
    INPUT      →  PROCESSING  →  OUTPUT
──────────────   ────────────   ────────────────────────────
Jenkinsfile       Jenkins       -ºarchived built artifactsº
                                -ºtest resultsº
                                -ºfull console outputº


For complex secrets (SSH keys, binary secrets,...)
use the related Snippet Generators:
  GENERATOR             PARAMS
- SSH User Private Key  - Key File Variable
                        - Passphrase Variable
                        - Username Variable
────────────────────────────────────────────────────────────
- Credentials           SSH priv/pub keys stored in Jenkins.
────────────────────────────────────────────────────────────
- (PKCS#12) Certificate - Keystore Variable
                          Jenkins temporarily assigns it to the
                          secure location of the certificate's
                          keystore
                        - Password Variable (Opt)
                        - Alias Variable (Opt)
                        - Credentials: Cert.credentials stored
                          in Jenkins. The value of this field
                          is the credential ID, which Jenkins
                          writes out to the generated snippet.
────────────────────────────────────────────────────────────
- Docker client cert    - Handle Docker Host Cert.Auth.



Multiagent
Useful for multi-target builds/tests/...
pipeline {
 ºagent noneº
  stages {
    stage('clone') {
      // REF: @[https://jenkins.io/doc/pipeline/steps/workflow-scm-step/]
      checkout Gºscmº ←··········· checkout code from scm ("git clone ...")
                           Gºscmº: special var. telling to use the same
                                   repository/revision used to checkout
                                   (git clone) the Jenkinsfile
      checkout poll: false,
               scm: [
                 $class: 'GitSCM',
                 branches: [[name: 'dev']],
                 doGenerateSubmoduleConfigurations: false,
                 extensions: [],
                 submoduleCfg: [],
                 userRemoteConfigs: [
                   [url: 'https://github.com/user01/project01.git',
                    credentialsId: 'UserGit01']
                 ]
               ]
    }
    stage('Build') {
     ºagent anyº
      steps {
        ...
      Oºstashºincludes: '**/target/*.jar', name:º'app'º
    }                                             ^
   }                                              │
   stage('Linux') {               ┌───────────────┘
    ºagent { label 'linux' }º     │
     steps {                      │
      Oºunstashºº'app'º←·········· copy named stash
        sh '...'                  Jenkins master → Current WorkSp.
      }                           · Note:Oºstashº = something put away for future use
      post { ...  }               ·      (In practice: Named cache of generated artifacts
    }                             ·       during same pipeline for reuse
    stage('Test on Windows') {    ·       in further steps). Once the pipeline is
     ºagent { label 'windows' }º  ·       finished, it is removed.
      steps {                     ·
        unstashº'app'º←············
        bat '...'
      }
      post { ...  }
    }
  }
}

Groovy Syntax Tips
git  key1: 'value1', key2: 'value2'   // ← short form
git([key1: 'value1', key2: 'value2']) // ← long form

sh          'echo hello'     // ← short form. Valid syntax for single param
sh([script: 'echo hello'])   // ← long form.


Parallel execution
stage('Test') {

 ºparallelº ←····················· Execute linux in parallel
 ºlinux:º{                         with windows
    node('linux') {
      try {
        unstash 'app' ←············· Copy
        sh 'make check'
      }
      finally {
        junit '**/target/*.xml'
      }
    }
  },
 ºwindows:º{
    node('windows') {
      /* .. snip .. */
    }
  }
}

git checkout summary
    checkout([
      $class    : 'GitSCM',
      poll      : false,
      branches  : [[name: commit]],
      extensions: [
        [$class: 'RelativeTargetDirectory', relativeTargetDir: reponame],
┌──→    [$class: 'CloneOption', reference: "/var/cache/${reponame}"]
│     ],
│     submoduleCfg: [],
│     userRemoteConfigs: [
│       [credentialsId: 'jenkins-git-credentials', url: repo_url]
│     ],
│     doGenerateSubmoduleConfigurations: false,
│   ])
└─CloneOption Class:
  - shallow (boolean) : do NOT download history              (Save time/disk)
  - noTags  (boolean) : do NOT download tags                 (Save time/disk)
                        (use only what specified in refspec)
  - depth (int)       : Set shallow clone depth              (Save time/disk)
  - reference(String) : local folder with existing repository
                        used by Git during clone operations.
  - timeout  (int)    : timeout for clone/fetch ops.
  - honorRefspec(bool): initial clone using given refspec   (Save time/disk)


End-to-End Multibranch Pl.
@[https://jenkins.io/doc/tutorials/build-a-multibranch-pipeline-project/]

PREREQUISITES
-ºGitº
- Docker

┌──────────────┬────────────┬────────────┐
│ INPUT        → JENKINS    → OUTPUT     │
│ ARTIFACTS    →            → ARTIFACTS  │
├──────────────┼────────────┼────────────┤
│ Node.js      │ build→test │ development│
│ React app    │            │ production │
│ npm          │            │            │
└──────────────┴────────────┴────────────┘

STEP 1) Setup local git repository
 - clone:
 $ git clone https://github.com/?????/building-a-multibranch-pipeline-project
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                      Forked from
                                    @[https://github.com/jenkins-docs/building...]
 - Create dev/pro branches:
   $ git branch development
   $ git branch production

STEP 2) Add 'Jenkinsfile' stub (agent, stages sections) to repo
       (initially in master branch)

STEP 3) Create new Pipeline in Jenkins Blue Ocean Interface
        browse to "http://localhost:8080/"
          → click "Create a new Pipeline"
            → Choose "Git" in "In Where do you store your code?"
              → Repository URL: "/home/.../building-a-multibranch-pipeline-project"
                → Save

    Blue Ocean will detect the presence of the "Jenkinsfile" stub
    in each branch and will run each Pipeline against its respective branch.


STEP 4) Start adding functionality to the Jenkinsfile pipeline
        (commit to git once edited)
    pipeline {
        environment {
          docker_caching = 'HOME/.m2:/root/.m2'          ←  Maven/npm cache to speed-up builds
          docker_ports   = '-p 3000:3000 -p 5000:5000'   ←  dev/pro ports exposed during functional testing
          CI = 'true'                                    ←  merged here: only one environment block
        }                                                   is allowed per pipeline level
        agent {
            docker {
                image 'node:6-alpine'            ←   Good Enough to build simple
                                                     Node.js+React apps
                args '-p 3000:3000 -p 5000:5000' ←   dev/pro ports where the app will
                                                     listen for requests. Used during
                                                     functional testing

            }
        }
        stages {
            stage('Build') {
                steps {
                    sh 'npm install'             ←  1st real build command
                }
            }
            stage('Test') {
                steps {
                    sh './jenkins/scripts/test.sh'
                }
            }
        }
    }

STEP 5) Click "run" icon of the master branch of your Pipeline project,
        and check the result.

STEP 6) Add "deliver" and "deploy" stages to the Jenkinsfile Pipeline
        (and commit changes)
       ºJenkins will selectively execute based on the branch that Jenkins is building fromº

      + stage('Deliver for development') {
      +    ºwhen {º
      +    º    branch 'development'º
      +    º}º
      +     steps {
      +         sh './jenkins/scripts/deliver-for-development.sh'
      +         input message: 'Finished using the web site? (Click "Proceed" to continue)'
      +         sh './jenkins/scripts/kill.sh'
      +     }
      + }
      + stage('Deploy for production') {
      +    ºwhen {º
      +    º    branch 'production'º
      +    º}º
      +     steps {
      +         sh './jenkins/scripts/deploy-for-production.sh'
      +         input message: 'Finished using the web site? (Click "Proceed" to continue)'
      +         sh './jenkins/scripts/kill.sh'
      +     }
      + }
Ex Pipeline script 
@[https://jenkins.io/doc/pipeline/steps/pipeline-build-step/]
build job: 'Pipeline01FromJenkinsfileAtGit', propagate: true, wait: false
build job: 'Pipeline02FromJenkinsfileAtGit', propagate: true, wait: false
build job: 'Pipeline03FromJenkinsfileAtGit', propagate: true, wait: false
                                                        ^^^^
                                result of step is that of downstream build
                                (success, unstable, failure, not built, or aborted).

                                false →  step succeeds even if the downstream build failed
                                         use result property of the return value as needed.

Jenkinsless Pipeline
Jenkinsfile-runner:
- Executes a Jenkinsfile pipeline without the need of a running Jenkins server
  (and the memory it wastes).
@[https://jenkins.io/blog/2019/02/28/serverless-jenkins/]
@[https://github.com/jenkinsci/jenkinsfile-runner]
Jenkins Unordered
AWS EC2 plugin
@[https://wiki.jenkins.io/display/JENKINS/Amazon+EC2+Fleet+Plugin]
- launch Amazon EC2 Spot Instances as worker nodes
  automatically scaling the capacity with the load.
Monitor GitHub/BitBucket Organization
Organization Folders enable Jenkins to monitor an entire GitHub
Organization or Bitbucket Team/Project and automatically create new
Multibranch Pipelines for repositories which contain branches and pull
requests containing a Jenkinsfile. Currently, this functionality exists only
for GitHub and Bitbucket, with functionality provided by the
plugin:github-organization-folder[GitHub Organization Folder] and
plugin:cloudbees-bitbucket-branch-source[Bitbucket Branch Source] plugins.
Serverless
@[https://medium.com/@jdrawlings/serverless-jenkins-with-jenkins-x-9134cbfe6870]
TODO:
Zuul
REF: IBM OpenStack Engineer Urges Augmenting Jenkins with Zuul for Hyperscale Projects
[https://thenewstack.io/ibm-openstack-engineer-urges-cncf-consider-augmenting-jenkins-zuul/]

@[https://zuul-ci.org/]
- Use the same Ansible playbooks to
  deploy your system and run your tests.


REF:@[https://www.mediawiki.org/wiki/Continuous_integration/Zuul]
"""...Zuul is a python daemon which acts as a gateway between
Gerrit and Jenkins. It listens to Gerrit stream-events feed and
trigger jobs function registered by Jenkins using the Jenkins Gearman
plugin. The jobs triggering specification is written in YAML and
hosted in the git repository integration/config.git as /zuul/layout.yaml """
Customize History Saving Policy
@[https://stackoverflow.com/questions/60391327/is-it-possible-in-jenkins-to-keep-just-first-and-last-failures-in-a-row-of-con]

Use Case: We are just interested in keeping "build" history entries when the execution
   changes from "successful execution" to "failure". That is, if we have a history like:

   t1  t2  t3  t4  t5  t6  t7  t8  t9  t10 t11 t12 t13 t14 t15
   -----------------------------------------------------------
   OK, OK, OK, OK, KO, KO, KO, KO, OK, OK, OK, OK, KO, KO, OK
   ^               ^               ^               ^       ^
   status          status          status          status  status
   change          change          change          change  change


   We want to keep history just for:
   t1              t5              t9              t13     t15
   -----------------------------------------------------------
   OK,             KO,             OK,             KO,     OK

To implement this history-saving policy a Groovy job-post-build step is needed:

 Ex: discard all successful builds of a job except for the last 3 ones
     (since typically, you're more interested in the failed runs)

     def allSuccessfulBuilds = manager.build.project.getBuilds().findAll {
         it.result?.isBetterOrEqualTo( hudson.model.Result.SUCCESS )
     }
     
     allSuccessfulBuilds.drop(3).each {
       it.delete()
     }
build-status conditional Post-build actions
@[https://stackoverflow.com/questions/45456564/jenkins-declarative-pipeline-conditional-post-action]
Clone directory (vs full repo)
@[https://softwaretestingboard.com/q2a/1791/how-clone-checkout-specific-directory-command-line-jenkins]

Ex:
  $ git checkout branch_or_version -- path/file

  $ git checkout HEAD -- main.c   ←  checkout main.c from HEAD

  $ git checkout e5224c883a...c9 /path/to/directory ← Checkout folder from commit
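If the goal is to clone only a directory in the first place (rather than
checking out paths from an existing clone), Git 2.25+ offers sparse-checkout;
a sketch, with the repository URL and path as placeholders:

  $ git clone --filter=blob:none --no-checkout \
        https://github.com/user01/project01.git     ← placeholder repository
  $ cd project01
  $ git sparse-checkout init --cone
  $ git sparse-checkout set path/to/directory       ← only this dir will be materialized
  $ git checkout master                             ← or whatever branch you need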
Jenkins: managing large git repos
@[https://jenkins.io/files/2016/jenkins-world/large-git-repos.pdf]
CircleCI
CircleCI Ex
REF:
@[https://github.com/interledger4j/ilpv4-connector/blob/master/.circleci/config.yml]

cat .circleci/config.yml
# Java Maven CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-java/ for more details
#
version: 2
jobs:

  # This job builds the entire project and runs all unit tests (specifically the persistence tests) against H2 by
  # setting the `spring.datasource.url` value. All Integration Tests are skipped.
  build:
    working_directory: ~/repo

    docker:
      # Primary container image where all commands run
      - image: circleci/openjdk:8-jdk
        environment:
          # Customize the JVM maximum heap limit
          MAVEN_OPTS: -Xmx4096m

    steps:

      # apply the JCE unlimited strength policy to allow the PSK 256 bit key length
      # solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
      - run:
          name: Getting JCE unlimited strength policy to allow the 256 bit keys
          command: |
            curl -L --cookie 'oraclelicense=accept-securebackup-cookie;'  http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
            unzip -o /tmp/jce_policy.zip -d /tmp
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar

      - checkout # check out source code to working directory

      # Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
      # https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}

      - run:
          name: Full Build (H2)
          command:  mvn dependency:go-offline -DskipITs install

      - save_cache: # saves the project dependencies
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      # save tests
      - run:
          name: Save test results
          command: |
            mkdir -p ~/junit/
            find . -type f -regex ".*/target/surefire-reports/.*xml" -exec cp {} ~/junit/ \;
            mkdir -p ~/checkstyle/
            find . -type f -regex ".*/target/checkstyle-reports/.*xml" -exec cp {} ~/junit/ \;

          when: always

      - store_test_results:
          path: ~/junit

      - store_artifacts:
          path: ~/junit

      # publish the coverage report to codecov.io
      - run: bash <(curl -s https://codecov.io/bash)

  # This job runs specific Ilp-over-HTTP Integration Tests (ITs) found in the `connector-it` module.
  # by executing a special maven command that limits ITs to the test-group `IlpOverHttp`.
  integration_tests_ilp_over_http:
    working_directory: ~/repo

    machine:
      image: ubuntu-1604:201903-01

    environment:
      MAVEN_OPTS: -Xmx4096m
      JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

    steps:

      # apply the JCE unlimited strength policy to allow the PSK 256 bit key length
      # solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
      - run:
          name: Getting JCE unlimited strength policy to allow the 256 bit keys
          command: |
            curl -L --cookie 'oraclelicense=accept-securebackup-cookie;'  http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
            unzip -o /tmp/jce_policy.zip -d /tmp
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
      - checkout # check out source code to working directory

      # Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
      # https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}

      # gets the project dependencies and installs sub-module deps
      - run:
          name: Install Connector Dependencies
          command: mvn dependency:go-offline -DskipTests -DskipITs install

      - save_cache: # saves the project dependencies
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      - run:
          name: Run Integration Tests (ITs)
          command: |
            cd ./connector-it
            docker network prune -f
            mvn verify -Pilpoverhttp

      # publish the coverage report to codecov.io
      - run: bash <(curl -s https://codecov.io/bash)

  # This job runs specific Settlement-related Integration Tests (ITs) found in the `connector-it` module.
  # by executing a special maven command that limits ITs to the test-group `Settlement`.
  integration_tests_settlement:
    working_directory: ~/repo

    machine:
      image: ubuntu-1604:201903-01

    environment:
      MAVEN_OPTS: -Xmx4096m
      JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

    steps:

      # apply the JCE unlimited strength policy to allow the PSK 256 bit key length
      # solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
      - run:
          name: Getting JCE unlimited strength policy to allow the 256 bit keys
          command: |
            curl -L --cookie 'oraclelicense=accept-securebackup-cookie;'  http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
            unzip -o /tmp/jce_policy.zip -d /tmp
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
      - checkout # check out source code to working directory

      # Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
      # https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}

      # gets the project dependencies and installs sub-module deps
      - run:
          name: Install Connector Dependencies
          command: mvn dependency:go-offline -DskipTests -DskipITs install

      - save_cache: # saves the project dependencies
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      - run:
          name: Run Integration Tests (ITs)
          command: |
            cd ./connector-it
            docker network prune -f
            mvn verify -Psettlement

      # publish the coverage report to codecov.io
      - run: bash <(curl -s https://codecov.io/bash)

  # This job runs specific Coordination-related Integration Tests (ITs) found in the `connector-it` module.
  # by executing a special maven command that limits ITs to the test-group `Coordination`.
  integration_tests_coordination:
    working_directory: ~/repo

    machine:
      image: ubuntu-1604:201903-01

    environment:
      MAVEN_OPTS: -Xmx4096m
      JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

    steps:

      # apply the JCE unlimited strength policy to allow the PSK 256 bit key length
      # solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
      - run:
          name: Getting JCE unlimited strength policy to allow the 256 bit keys
          command: |
            curl -L --cookie 'oraclelicense=accept-securebackup-cookie;'  http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
            unzip -o /tmp/jce_policy.zip -d /tmp
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
      - checkout # check out source code to working directory

      # Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
      # https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}

      # gets the project dependencies and installs sub-module deps
      - run:
          name: Install Connector Dependencies
          command: mvn dependency:go-offline -DskipTests -DskipITs install

      - save_cache: # saves the project dependencies
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      - run:
          name: Run Integration Tests (ITs)
          command: |
            cd ./connector-it
            docker network prune -f
            mvn verify -Pcoordination

      # publish the coverage report to codecov.io
      - run: bash <(curl -s https://codecov.io/bash)

  docker_image:
    working_directory: ~/repo

    machine:
      image: ubuntu-1604:201903-01

    environment:
      MAVEN_OPTS: -Xmx4096m
      JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}
      - run:
          name: Deploy docker image
          command: mvn verify -DskipTests -Pdocker,dockerHub -Dcontainer.version=nightly -Djib.httpTimeout=60000 -Djib.to.auth.username=${DOCKERHUB_USERNAME} -Djib.to.auth.password=${DOCKERHUB_API_KEY}

workflows:
  version: 2

  # In CircleCI v2.1, when no workflow is provided in config, an implicit one is used. However, if you declare a
  #  workflow to run a scheduled build, the implicit workflow is no longer run. You must add the job workflow to your
  # config in order for CircleCI to also build on every commit.
  commit:
    jobs:
      - build
      - integration_tests_ilp_over_http:
          requires:
            - build
      - integration_tests_settlement:
          requires:
            - build
      - integration_tests_coordination:
          requires:
            - build

  nightly:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - build
      - integration_tests_ilp_over_http:
          requires:
            - build
      - integration_tests_settlement:
          requires:
            - build
      - integration_tests_coordination:
          requires:
            - build
      - docker_image:
          requires:
            - integration_tests_ilp_over_http
            - integration_tests_settlement
            - integration_tests_coordination
GitHub Actions
Github Actions
https://www.infoq.com/news/2020/02/github-actions-api/

GitHub Actions makes it easy to automate all your software workflows, 
now with world-class CI/CD. Build, test, and deploy your code right 
from GitHub. Make code reviews, branch management, and issue triaging 
work the way you want.

-GitHub Actions API adds REST API endpoints for managing artifacts,
 secrets, runners, and workflows.
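 For example, listing a repository's workflows through that REST API
 (OWNER/REPO and the token env.var are placeholders):

  $ curl --silent \
         --header "Accept: application/vnd.github+json" \
         --header "Authorization: Bearer ${GITHUB_TOKEN}" \   ← hypothetical token env.var
         https://api.github.com/repos/OWNER/REPO/actions/workflows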
QA/Testing
Kayenta Canary Testing
@[https://github.com/spinnaker/kayenta]
- Kayenta platform:  Automated Canary Analysis (ACA)
SonarQube (QA)
Apply quality metrics to source-code
Selenium Browser test automation

See also QAWolf:

Source{d}: Large Scale Code Analysis with IA
@[https://www.linux.com/blog/holberton/2018/10/sourced-engine-simple-elegant-way-analyze-your-code]

- source{d} offers a suite of applications that uses machine learning on code 
  to complete source code analysis and assisted code reviews. Chief among them 
  is the source{d} Engine, now in public beta; it uses a suite of open source 
  tools (such as Gitbase, Babelfish, and Enry) to enable large-scale source 
  code analysis. Some key uses of the source{d} Engine include language 
  identification, parsing code into abstract syntax trees, and performing SQL 
  Queries on your source code such as:
    - What are the top repositories in a codebase based on number of commits?
    - What is the most recent commit message in a given repository?
    - Who are the most prolific contributors in a repository
Charles Proxy
@[https://www.charlesproxy.com/]
Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a 
developer to view all of the HTTP and SSL / HTTPS traffic between their machine 
and the Internet. This includes requests, responses and the HTTP headers (which 
contain the cookies and caching information).
Networking for DevOps
Load Balancer
BºHTTP balanced proxy Quick Setup with HAProxyº
REF: @[https://github.com/AKSarav/haproxy-nodejs-redis/blob/master/haproxy/]
   Only two steps are needed:

   └ haproxy/Dockerfile
     FROM haproxy
     COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

   └ haproxy/haproxy.cfg
    global
      daemon
      maxconn 256

    defaults
      mode http
      timeout connect 5000ms
      timeout client 50000ms
      timeout server 50000ms

    frontend http-in
      bind *:80                             ← Listen on port 80 on all interfaces
      default_backend servers

    backend servers                         ←  Forward to single backend "servers"
      server server1 host01:8081 maxconn 32 ←  composed of (single server) "server1"
                                               at host01:8081
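   It can be built and run with something like the following (image and
   container names are arbitrary; host01:8081 must be a reachable backend):

   $ docker build -t my-haproxy ./haproxy       ← 'my-haproxy' is an arbitrary image tag
   $ docker run -d --name lb01 -p 80:80 my-haproxy
   $ curl -I http://localhost/                  ← should return the backend's response headers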

BºReverse Proxyº
  [TODO]
BºForward Proxyº
  [TODO]


BºDNS Recordsº
 ┌─────────────────────────────────────────────┐
 │ A       root domain name IP address         │
 │         Ex: mydomain.com → 1.2.3.4          │
 │         Not recommended for changing IPs    │
 ├─────────────────────────────────────────────┤
 │ CNAME   maps name2 → name1                  │
 │         Ex: int.mydomain.com → mydomain.com │
 ├─────────────────────────────────────────────┤
 │ Alias   Amazon Route 53 virtual record      │
 │         to map AWS resources like ELBs,     │
 │         CloudFront, S3 buckets, ...         │
 ├─────────────────────────────────────────────┤
 │ MX      mail server name → IP address       │
 │         Ex: smtp.mydomain.com → 1.2.3.4     │
 ├─────────────────────────────────────────────┤
 │ AAAA    A record for IPv6 addresses         │
 └─────────────────────────────────────────────┘
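 Each record type can be inspected from the shell with dig; illustrative
 queries and answers for the example domain above:

  $ dig +short A     mydomain.com        ← 1.2.3.4
  $ dig +short CNAME int.mydomain.com    ← mydomain.com.
  $ dig +short MX    mydomain.com        ← 10 smtp.mydomain.com.
  $ dig +short AAAA  mydomain.com        ← 2001:db8::1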
nginx.conf summary
@[https://raazkumar.com/tutorials/nginx/nginx-conf/]

nginx ==   fast HTTP reverse proxy
         + reliable load balancer
         + high performance caching server
         + full-fledged web platform

Bºnginx.conf building blocksº
  - worker process    : should equal the number of cores of the server (or auto)
  - worker connection : 1024 (per worker; nginx doesn't block)

  - rate limiting     : prevent brute-force attacks.
  - proxy buffers     : (when used as a proxy server) limit how much data to cache
  - compression       : gzip/brotli compression
  - upload file size  : should match the PHP max upload size and nginx client max body size.
  - timeouts          : PHP to nginx communication time.
  - log rotation      : the error log is useful to spot errors and monitor resources.
  - fastcgi cache     : very important to boost performance for static sites.
  - SSL Configuration : default settings are available with nginx itself
                        (also see SSL performance tuning).

  GºExample nginx.conf:º

    user www-data;                                   
   ºload_moduleºmodules/my_favourite_module.so;      
    pid /run/nginx.pid;
                                                     | Alternative global config for 
                                                     | [4 cores, 8 threads, 32GB RAM] 
                                                     | handling 50000 requests/sec
                                                     |
    worker_processes auto;                           | worker_processes 8;
                                                     | worker_priority -15;
    include /etc/nginx/modules-enabled/*.conf;       | 
    worker_rlimit_nofile 100000;                     | worker_rlimit_nofile 400000;                                  
                                                     | timer_resolution 10000ms;
                                                     |
    events {                                         | events {
      worker_connections 1024;                       |     worker_connections 20000;                       
      multi_accept on;                               |     use epoll;
    }                                                |     multi_accept on;
                                                     | }

  Bºhttp {               ←  global configº           
      index index.php index.html index.htm;          
     º# Basic Settingsº                              
                                                     
      sendfile on;                                   
      tcp_nopush on;
      tcp_nodelay on;
      sendfile_max_chunk 512;
      keepalive_timeout 300;
      keepalive_requests 100000;
      types_hash_max_size 2048;
      server_tokens off;
      
      server_names_hash_bucket_size 128;
      # server_name_in_redirect off;
      
      include /etc/nginx/mime.types; ← ········· types {           
      default_type application/octet-stream;       text/html              html htm shtml;
      ##                                           application/javascript js;
      # SSL Settings                               ...
      ##                                         }
      
      #ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
      #ssl_prefer_server_ciphers on;
      #rate limit zone
      
      limit_req_zone $binary_remote_addr zone=one:10m rate=3r/m;
      #buffers
      
      client_body_buffer_size 128k;
      client_max_body_size 10m;
      client_header_buffer_size 32k;
      large_client_header_buffers 16 256k;
      output_buffers 1 32k;
      postpone_output 1460;
      #Proxy buffers
      proxy_buffer_size 256k;
      proxy_buffers 8 128k;
      proxy_busy_buffers_size 256k;
      proxy_max_temp_file_size 2048m;
      proxy_temp_file_write_size 2048m;
      
      ## fast cgi PHP
      fastcgi_buffers 8 16k;
      fastcgi_buffer_size 32k;
      fastcgi_connect_timeout 300;
      fastcgi_send_timeout 300;
      fastcgi_read_timeout 300;
      #static caching css/js/img
      
      open_file_cache max=10000 inactive=5m;
      open_file_cache_valid 2m;
      open_file_cache_min_uses 1;
      open_file_cache_errors on;
      #timeouts
      
      client_header_timeout 3m;
      client_body_timeout 3m;
      send_timeout 3m;
      
      # Logging Settings
      
      log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for" '
                          '"$host" sn="$server_name" '
                          'rt=$request_time '
                          'ua="$upstream_addr" us="$upstream_status" '
                          'ut="$upstream_response_time" ul="$upstream_response_length" '
                          'cs=$upstream_cache_status';
      
      access_log /dev/stdout main_ext;
      error_log /var/log/nginx/error.log warn;
      
       
      ##
      # Gzip Settings #brotil
      ##
      
      gzip on;
      gzip_disable "msie6";
      
      gzip_vary on;
      gzip_proxied any;
      gzip_comp_level 6;
      gzip_buffers 16 8k;
      gzip_http_version 1.1;
      gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript application/x-font-ttf font/opentype image/svg+xml image/x-icon;
      ##
      # Virtual Host Configs
      ##
      
      include /etc/nginx/conf.d/*.conf;
      include /etc/nginx/sites-enabled/*;   
    }
    
  Bºserver {             ← Domain levelº
      listen 0.0.0.0:443 rcvbuf=64000 sndbuf=120000 backlog=20000 ssl http2;
      server_name example.com www.example.com;
      keepalive_timeout         60;
      ssl                       on;
      ssl_protocols             TLSv1.2 TLSv1.1 TLSv1;
      ssl_ciphers               'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4';
      ssl_prefer_server_ciphers on;
      ssl_session_cache         shared:TLSSL:30m;
      ssl_session_timeout       10m;
      ssl_buffer_size           32k;
      ssl_certificate           /etc/letsencrypt/live/example.com/fullchain.pem;
      ssl_certificate_key       /etc/letsencrypt/live/example.com/privkey.pem;
      ssl_dhparam           /etc/ssl/certs/dhparam.pem;
      more_set_headers          "X-Secure-Connection: true";
      add_header                Strict-Transport-Security max-age=315360000;
      root       /var/www;

  Bº  location {         ← Directory levelº
         root /var/www;
         index index.php index.html;
      }

  Bº  location ~ .php$ {º 
        fastcgi_keep_conn on;
        fastcgi_pass   unix:/run/php5.6-fpm.sock;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME /var/www$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 32k;
        fastcgi_buffers 32 32k;
        fastcgi_connect_timeout 5;
      }
     
  Bº  location ~* ^.+.(jpg|jpeg|gif|png|svg|ico|css|less|xml|html?|swf|js|ttf)$ {º
          root /var/www;
          expires 10y;
     }

    }

  - /etc/nginx/conf.d/*: user defined config files

See also:
https://github.com/trimstray/nginx-admins-handbook
https://github.com/tldr-devops/nginx-common-configuration


Ansible (Configuration management)
External Links
- User Guide:
@[https://docs.ansible.com/ansible/latest/user_guide/index.html]
- Ansible in practice[Video] 
@[https://sysadmincasts.com/episodes/46-configuration-management-with-ansible-part-3-4]
- Playbooks best practices:
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html]

Ronald Kurr has a lot of very useful and professional Ansible-powered code to
provision JVM, Python, Desktop, ... machines. For example:
 - Ansible Study Group Labs
@[https://github.com/kurron/ansible-study-group-labs]
 - An OpenVPN server in the cloud
   https://github.com/kurron/aws-open-vpn/blob/master/ansible/playbook.yml
 - Installation of tools that any self-respecting Operations person loves and needs.
   https://github.com/kurron/ansible-role-operations/blob/master/tasks/main.yml
 - Installation of tools that any self-respecting JVM developer loves and needs.
   https://github.com/kurron/ansible-role-jvm-developer/blob/master/tasks/main.yml
 - Installation of tools that any self-respecting AWS command-line user loves and needs.
 @[https://github.com/kurron/ansible-role-aws/blob/master/tasks/main.yml]
 - Connect to a Juniper VPN under Ubuntu.
 @[https://github.com/kurron/ansible-role-jvpn/blob/master/tasks/main.yml]
 - Installation of tools that any self-respecting Atlassian user loves and needs.
 @[https://github.com/kurron/ansible-role-atlassian/blob/master/tasks/main.yml]
 - Installation of tools that any self-respecting cross-platform .NET developer loves and needs.
 @[https://github.com/kurron/ansible-role-dot-net-developer/blob/master/tasks/main.yml]
 - Docker container that launches a pipeline of Docker containers that 
   ultimately deploy Docker containers via Ansible into EC2 instances
 @[https://github.com/kurron/docker-ec2-pipeline]
 - Increase operating system limits for Database workloads.
 @[https://github.com/kurron/ansible-role-os-limits/blob/master/tasks/main.yml]
 - Creation of an Amazon VPC. Public and private subnets are created
   in all availability zones.
 @[https://github.com/kurron/ansible-role-vpc]

- Command line tools
@[https://docs.ansible.com/ansible/latest/user_guide/command_line_tools.html]

- ansible: run a single-task 'playbook' (one module call) against a set of hosts
@[https://docs.ansible.com/ansible/latest/cli/ansible.html]

- ansible-config view, edit, and manage ansible configuration
@[https://docs.ansible.com/ansible/latest/cli/ansible-config.html]

- ansible-console  interactive console for executing ansible tasks
@[https://docs.ansible.com/ansible/latest/cli/ansible-console.html]

- ansible-galaxy: manage Ansible roles in shared repositories (defaults to [https://galaxy.ansible.com])
@[https://docs.ansible.com/ansible/latest/cli/ansible-galaxy.html]

- ansible-inventory: display or dump the configured inventory
@[https://docs.ansible.com/ansible/latest/cli/ansible-inventory.html]


@[https://docs.ansible.com/ansible/latest/cli/ansible-pull.html]
ansible-pull pulls playbooks from a VCS repo and executes them for the local host
@[https://docs.ansible.com/ansible/latest/cli/ansible-vault.html]
ansible-vault  encryption/decryption utility for Ansible data files

Ansible Summary
Bºansible-docº
 @[https://docs.ansible.com/ansible/latest/cli/ansible-doc.html]
 @[https://github.com/tldr-pages/tldr/blob/master/pages/common/ansible*]

- Display information on modules installed in Ansible libraries. 
  Display a terse listing of plugins and their short descriptions.

  $º$ ansible-doc --list \     º ← List available action plugins (modules):
  $º      --type $pluginType   º   (optional) filter by type

  $º$ ansible-doc $plugName \  º ← Show information for plugin
  $º      --type $pluginType   º   (optional) filter by type

  $º$ ansible-doc \            º ← Show the playbook snippet for
  $º      --snippet $plugName \º   action plugin (module)
  $º      --json               º   (optional) dump as JSON

Bºansible-playbookº
@[https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html]
  Execute tasks defined in playbook over SSH.

  $ ansible-playbook $playbook   \ ← Run tasks in playbook:
      -i $inventory_file01       \ ← Optional. def /etc/ansible/hosts → ./hosts
      -i $inventory_file02       \ ← Optional. 
      -e "$var1=val1 $var2=val2" \ ← Optional. Inject extra vars into play execution
      -e "@$variables.json"      \ ← Optional. Inject extra vars from a JSON/YAML file
      --tags $tag1,tag2          \ ← Optional. Run only tasks in playbook matching tags.
      --start-at-task $task_name \ ← Optional. Run tasks in playbook starting at task.
      --ask-vault-pass               ← alt.1. Ask for secrets interactively
                                        (alt.B_1 --vault-password-fileºpassFileº)
                                        (alt.B_2 export ANSIBLE_VAULT_PASSWORD_FILE=...)
                                        See @[#ansible_handling_secrets]
                                        for more info on secret management

Bºansible-galaxyº: Create and manage Ansible roles.
@[https://docs.ansible.com/ansible/latest/cli/ansible-galaxy.html]

  $º$ ansible-galaxy install $username.$role_nameº  ← Install a role
  $º$ ansible-galaxy remove  $username.$role_nameº  ← Remove a role
  $º$ ansible-galaxy list                        º  ← List installed roles
  $º$ ansible-galaxy search $role_name           º  ← Search for a given role:
  $º$ ansible-galaxy init   $role_name           º  ← Create a new role
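
  A project's role dependencies are often pinned in a requirements file and installed
  in one shot; a minimal sketch (file name, role names and versions below are illustrative):

  # requirements.yml
  - src: geerlingguy.nginx                            # role published on galaxy.ansible.com
    version: "2.8.0"
  - src: https://github.com/someuser/some-role.git    # role taken straight from a git repo
    scm: git
    name: some-role

  $º$ ansible-galaxy install -r requirements.yml º  ← install everything listed above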


Bºansibleº: Manage groups of computers (/etc/ansible/hosts) over SSH 
  $º$ ansible $group --list-hosts                º  ← List hosts belonging to a group
  $º$ ansible $group  -m ping                    º  ← Ping host group 
  $º$ ansible $group  -m setup                   º  ← Display facts about host-group 

  $º$ ansible $group  -m command -a 'command' \  º  ← Execute a command on host-group
  $º$    --become \                              º  ← (Optional) add admin privileges
  $º$    -i inventory_file                       º  ← (Optional) Use custom inventory 

    

ºlayout best practicesº                            ║ ºControllerº  1 ←→ N  ┌─→ ºModuleº
(Recommended, non─mandatory)                       ║                       │
best practice file layout approach:                ║ ºMachine   º          │  (community pre─packaged)
────────────────────────────────────────────────   ║  ^                    │ ─ abstracts recurrent system task
production            # inventory file             ║─ host with            │ ─ Provide the real power of Ansible
staging               # inventory file             ║  installed Ansible    │   avoiding custom scripts
                                                   ║  with modules         │ ─ $ ansible─doc "module_name"
group_vars/           # ← assign vars.             ║  prepackaged ←────────┘ ─ Ex:
                      #   to particular groups.    ║  andºconfig.filesº        user:    name=deploy group=web
  all.yml             # ← Ex:                      ║      └─┬────────┘         ^             ^            ^
  │---                                             ║     1) $ANSIBLE_CONFIG  module   ensure creation of'deploy'
  │ntp: ntp.ex1.com                                ║     2) ./ansible.cfg    name     account in 'web' group
  │backup: bk.ex1.com                              ║     3) ~/.ansible.cfg   (executions are idempotent)
                                                   ║     4) /etc/ansible/ansible.cfg
  webservers.yml     # ← Ex:                       ║     Ex:
  │---                                             ║     [defaults]
  │apacheMaxClients: 900                           ║     inventory = hosts
  │apacheMaxRequestsPerChild: 3000                 ║     remote_user = vagrant
                                                   ║     private_key_file = ~/.ssh/private_key
  dbservers.yml      # ← Ex:                       ║     host_key_checking = False
  │---                                             ║─ "host" inventory file
  │maxConnectionPool: 100                          ║         listing target servers,groups
  │...                                             ║
                                                   ║
host_vars/                                         ║ Role   N  ←────→ 1   Playbook        1 ←─────→ N tasks
   hostname1.yml      # ←assign variables          ║    ^                    ^                         ^
   hostname2.yml      #  to particular systems     ║ Mechanism to          ─ main yaml defining        single proc.
                                                   ║ share files/...         task to be executed       to execute
library/              # (opt) custom modules       ║ for reuse *2          ─ Created by DevOps team
module_utils/         # (opt) custom module_utils  ║@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html]
                      #       to support modules   ║
filter_plugins/       # (opt) filter plugins       ║ºRUN SEQUENCEº
                                                   ║ |playbook| 1←→N |Play| 1 → apply to → N |Hosts|
webservers.yml        # ← Ex playbook:             ║                  ↑        
│---                  #   Map                      ║                  1 
│- hosts: webservers  # ← webservers─group         ║                  └─(contains)→ N |Task| 1→1 |Module|
│                     #   to                       ║  ┌────────────────────────────────┘
│  roles:             # ← roles                    ║  └→ each task is run in parallel across hosts in order
│    - common         #                            ║     waiting until all hosts have completed the task before
│    - webtier        #                            ║     moving to the next.(default exec.strategy, can be switched to "free") 
                                                   ║     | - name: ....                                           
dbservers.yml         # ← Ex playbook for db─tier  ║     |   hosts: groupTarget01                            
site.yml              #ºmaster playbookº           ║     | Oºserial:º   # ←  Alt1: serial schedule-tunning.
│---                    (whole infra)              ║     | Oº  - 1      # ←        first in 1 host                 
│# file: site.yml                                  ║     | Oº  - "10%"  # ←        if OK, runs 10% simultaneously  
│- import_playbook: webservers.yml                 ║     | Oº  - 30     # ←        finally 30 hosts in parallel
│- import_playbook: dbservers.yml                  ║     |  tasks: ...
                                                   ║     |#Bºstrategy: freeº ← Alt2:  Don't wait for other hosts
                                                   ║
ºRole layoutº                                      ║º|Playbook Play|º
roles/                                             ║  INPUT
├ webtierRole/     # ← same layout that common     ║ |Playbook| → Oºansible─playbookº  → Gather      ────→ exec tasks
│ ...                                              ║                ^                    host facts          │
├ monitoringRole/  # ← same layout that common     ║                exec tasks on       (network,            v
│ ...                                              ║                the target hostº*1º  storage,...)     async Handlers
├─common/          # ← Common Role.                ║                                     └────┬────┘      use to:
│ ├─tasks/         #                               ║                           Usually gathered facts     service restart, 
│ │ └─ main.yml    #                               ║                           are used for               ...              
│ ├─handlers/      #                               ║                         OºConditionalºInclude. Ex:            
│ │ └─ main.yml    #                               ║                           ...
│ ├─templates/     #                               ║                           -Oºincludeº: Redhat.yml
│ │ └─ ntp.conf.j2 # ← notice .j2 extension        ║                            Oºwhenº: ansible_os_family == 'Redhat'    
│ ├─files/         #                               ║ Reminder: 
│ │ ├─ bar.txt     # ← input to   copy─resource    ║@[https://docs.ansible.com/ansible/2.4/playbooks_reuse_includes.html]
│ │ └─ foo.sh      # ← input to script─resource    ║ "include"        ← evaluated @ playbook parsing
│ ├─vars/          #                               ║ "import"         ← evaluated @ playbook execution
│ │ └─ main.yml    # ← role related vars           ║ "import_playbook"← plays⅋tasks in each playbook
│ ├─defaults/      #                               ║ "include_tasks"                                                      
│ │ └─ main.yml    # ← role related vars           ║ "import_tasks"
│ │                  ← with lower priority         ║
│ ├─meta/          #                               ║ºcommand moduleº
│ │ └─ main.yml    # ← role dependencies           ║─ Ex:
│ ├─library/       # (opt) custom modules          ║. $ ansible server01 -m command -a uptime
│ ├─module_utils/  # (opt) custom module_utils     ║                     ^^^^^^^^^^
│ └─lookup_plugins/# (opt) a given 'lookup_plugins'║                     default module. Can be omitted
│                          is used                 ║  testserver │ success │ rc=0 ˃˃
    ...                                            ║  17:14:07 up  1:16,  1 user, load average: 0.16, ...
═══════════════════════════════════════════════════╩════════════════════════════════════════════════════════════════
º*1:º@[https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html]


playbook-layout ºTASK vs ROLES PLAYBOOK LAYOUTº

 PLAYBOOK YAML LAYOUT WITHºTASKSº
 ────────────────────────────────
 ---
 - hosts: webservers             ← targeted (ssh) servers
   connection: ssh               ← :=ssh, localhost, ...

   vars:                         ← yaml-file-scoped var.list
     - myYmlVar01 : "myVal01"

   environment:                  ← runtime-scoped env.var.list
     - myEnvVar01 : "myEnv01"

   tasks:                        ← ordered task list to be executed
     - name: install apache2     ← task1
       apt: |
         name=apache2
         update_cache=yes
         state=latest
       notify:
         - ºrestart-apache2-idº
     - name: next_task_to_exec
       "module": ...

   handlers:                     ← tasks triggered by events
     - name: restart-apache2     ← ºname as a Unique-IDº
       service: name=apache2 state=restarted

 - hosts: localhost
   connection: local
   gather_facts: False
   vars:
     ...

 PLAYBOOK YAML LAYOUT WITHºROLESº
 ────────────────────────────────
 ºbased on a well known file structureº.
 ---
 - name : my list of Task
   hosts: database
   vars_files:
     - secrets.yml

 Bº# pre_tasks execute before roles º
 Bºpre_tasksº:
     - name: update the apt cache
       apt: update_cache=yes

   roles:
     - role: BºdatabaseRoleº
       # next vars override those in (vars|defaults)/main.yml
       database_name: " {{ myProject_ddbb_name }}"
       database_user: " {{ myProject_ddbb_user }}"
     - { role: consumer, when: tag | default('provider') == 'consumer'}
     - { role: provider, when: tag | default('provider') == 'provider'}

 Bº# post_tasks execute after roles º
 Bºpost_tasksº:
     - name: notify Slack
       local_action: ˃
         slack
         domain=acme.slack.com
         token={{ slack_token }}
         msg="database {{ inventory_hostname }} configured"
 ===========================
 roles search path: ./roles → /etc/ansible/roles
 role file layout:
   roles/B*databaseRole*/tasks/main.yml
   roles/B*databaseRole*/files/
   roles/B*databaseRole*/templates/
   roles/B*databaseRole*/handlers/main.yml
   roles/B*databaseRole*/vars/main.yml      # should NOT be overridden
   roles/B*databaseRole*/defaults/main.yml  # can be overridden
   roles/B*databaseRole*/meta/main.yml      # dependency info about role
 ──────────────────────────────────────────────────────────────────

 - hosts: web_servers
   tasks:
     - shell: /usr/bin/foo
       Oºregisterº:ºfoo_resultº      ← OºSTDOUT exec output to ansible varº
       ignore_errors: True              JSON schema of the output depends on the
                                        module (STDOUT, .rc, ...). Use -v on each
                                        module to investigate.
     - shell: /usr/bin/bar
       when: ºfoo_resultº.rc == 5

 Error Handling[qa]
 - default behavior:
   - take a host out of the play if a task fails and continue with the other hosts.
 - Oºserialº, Oºmax_fail_percentageº can be used to define a playbook-play as failed.
   @[https://docs.ansible.com/ansible/2.5/user_guide/playbooks_delegation.html#maximum-failure-percentage]
 - Using 'block' (task grouping) inside tasks:
   - hosts: app-servers
     Oºmax_fail_percentage:º"10%"       ← abort if surpassed.
     tasks:
     - name: Take VM out of the load balancer
     - name: Create a VM snapshot before the app upgrade
     - block:                           ← scope error/recovery/rollback
       - name: Upgrade the application
       - name: Run smoke tests
       ºrescue:º
       - name: Revert a VM to the snapshot after a failed upgrade
       ºalways:º
       - name: Re-add webserver to the loadbalancer
       - name: Remove a VM snapshot
inventory file
- Defaults to: /etc/ansible/hosts
- if marked as executable (+x) it's executed and the json-output is
  taken as the effective inventory.
  - the script must then support the '--host=' and '--list=' flags

Ex: test("ssh-ping") a host/group in the inventory using the 'ping' module:
  $ ansible -i ./hosts º-m pingº Gºdevelopmentº

group patterns:
  All hosts      Oºallº
  All            Oºº*
  Union          devOº:ºstaging
  Intersection   stagingOº:⅋ºdatabase
  Exclusion      devOº:!ºqueue
  Wildcard       Oºº*.example.com
  Range          webOº[5:10]º
  Regex          O*~web\d+\.example\.(com|org)*

Ex: hosts inventory file
────────────────────────
Gºdevelopmentº
Oºproductionº

[all:vars]
ntp_server=ntp.ubuntu.com

[Oºproductionº:vars]
db_primary_host=rhodeisland.example.com
db_replica_host=virginia.example.com
db_name=widget_production
rabbitmq_host=pennsylvania.example.com

[Gºdevelopmentº:vars]
db_primary_host=quebec.example.com
db_name=widget_staging
rabbitmq_host=quebec.example.com

[Gºvagrantº:vars]
db_primary_host=vagrant3
db_name=widget_vagrant
rabbitmq_host=vagrant3

[Gºvagrantº]
Gºvagrant1 ansible_host=127.0.0.1 ansible_port=2222º
Gºvagrant2 ansible_host=127.0.0.1 ansible_port=2200º

[web_group01]
Oºgeorgia.example.comº
Oºnewhampshire.example.comº
Oºnewjersey.example.comº
Gºvagrant1º

[rabbitmq]
Oºpennsylvania.example.comº
Gºvagrant2º

[django:children]                 ← Group of groups
web_group01
rabbitmq

[web_group02]
web_group01[01:20].example.com    ← ranges
web-[a-t].example.com
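
A couple of hedged examples of using those patterns from the CLI against the
inventory above (group names as in the example):

  $º$ ansible 'development:!vagrant' -i ./hosts -m ping        º ← development hosts EXCEPT the vagrant ones
  $º$ ansible 'web_group01:rabbitmq' -i ./hosts --list-hosts   º ← union of the two groups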
variable "scopes"
Playbook Variable Main Scopes:
-ºGlobal:ºset by config, ENV.VARS and cli
-ºPlay  :ºeach play and contained structures,
          vars|vars_files|vars_prompt entries, role defaults
-ºHost  :ºdirectly associated to a host, like inventory,
          include_vars, facts or registered task outputs

Variable scope overriding rules:
- The more explicit you get in scope, the more precedence
   1    command line values (eg "-u user")    º(SMALLEST PRECEDENCE)º
   2    role defaults
   3 *1 inventory file || script group vars
   4 *2 inventory group_vars/all
   5 *2 playbook  group_vars/all
   6 *2 inventory group_vars/*
   7 *2 playbook  group_vars/*
   8 *1 inventory file or script host vars
   9 *2 inventory host_vars/*
  10 *2 playbook  host_vars/*
  11 *4 host facts || cached set_facts
  12    play vars
  13    play vars_prompt
  14    play vars_files
  15    role vars (defined in role/vars/main.yml)
  16    block vars (only for tasks in block)
  17    task vars (only for the task)
  18    include_vars
  19    set_facts || registered vars
  20    role (and include_role) params
  21    include params
  22    (-e) extra vars                       º(BIGGEST PRECEDENCE)º

*1 Vars defined in inventory file or dynamic inventory
*2 Includes vars added by 'vars plugins' as well as host_vars and
   group_vars which are added by the default vars plugin shipped
   with Ansible.
*4 When created with set_fact's cacheable option, variables will have
   the high precedence in the play, but will be the same as a host
   facts precedence when they come from the cache.
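
A tiny sketch of the precedence rules in action (role, playbook and variable
names are hypothetical): a role default loses against "-e" extra vars.

  # roles/myrole/defaults/main.yml                    ← precedence 2 (near the bottom)
  http_port: 8080

  $º$ ansible-playbook site.yml -e "http_port=9090" º ← precedence 22: the play ends up
                                                        using http_port=9090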
Ad-hoc command
@[https://www.howtoforge.com/ansible-guide-ad-hoc-command/]
- Ad-hoc commands let you perform tasks without writing a playbook
  first, such as rebooting servers, managing services, editing a line in a
  configuration file, copying a file to a single host, or installing a single package.

- An ad-hoc command only takes two main parameters: the host group to
  run the task against and the Ansible module to run.
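
  A few hedged ad-hoc examples matching the cases above (group/host names and
  file paths are illustrative):

  $º$ ansible webservers -m reboot --become                          º ← reboot a whole group
  $º$ ansible db01 -m copy -a "src=./motd dest=/etc/motd" --become   º ← copy a file to a single host
  $º$ ansible webservers -m apt -a "name=vim state=present" --become º ← install a single package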
Must-know Modules
1) Package management
- module for major package managers (DNF, APT, ...)
  - install, upgrade, downgrade, remove, and list packages.
  - dnf_module
  - yum_module (required for Python 2 compatibility)
  - apt_module
  - slackpkg_module

  - Ex:
    |- name: install Apache,MariaDB
    |  dnf:                # ← dnf,yum,
    |    name:
    |      - httpd
    |      - mariadb-server
    |    state: latest     # ← latest|present|absent|...

2) 'service' module
  - start, stop, and reload installed packages;
  - Ex:
    |- name: Start service foo, based on running process /usr/bin/foo
    |  service:
    |    name: foo
    |    pattern: /usr/bin/foo
    |    state: started     # ← started|restarted|...
    |    args: arg0value 

3) 'copy' module
  - copies file: local_machine → remote_machine
  |- name: Copy a new ntp.conf file into place
  |  copy:
  |    src: /mine/ntp.conf
  |    dest: /etc/ntp.conf
  |    owner: root
  |    group: root
  |    mode: '0644'  # or u=rw,g=r,o=r
  |    backup: yes   # back-up original if different to new

4) 'debug' module (print values to STDOUT during execution)
  |- name: Display all variables/facts known for a host
  |  debug:
  |    var: hostvars[inventory_hostname]
  |    verbosity: 2         # ← optional. Display only with
                                $ ansible-playbook demo.yamlº-vvº

5) 'file' module: manage file and its properties.
    - set attributes of files, symlinks, or directories.
    - removes files, symlinks, or directories.
- Ex: 
  |- name: Change file ownership/group/perm
  |  file:
  |    path: /etc/foo # ← create if needed
  |    owner: foo
  |    group: foo
  |    mode: '0644'
  |    state: file ← file*|directory|...

6) 'lineinfile' module
   - ensures that particular line is in file
   - replaces existing line using regex.
   - Ex:
     |- name: Ensure SELinux is set to enforcing mode
     |  lineinfile:
     |    path: /etc/selinux/config
     |    regexp: '^SELINUX='       # ← (optional) pattern of the line to replace
     |    line: SELINUX=enforcing   # ← new value; appended if no line matches


7) 'git' module
   - manages git checkouts of repositories to deploy files or software.
   - Ex: Create git archive from repo
     |- git:
     |    repo: https://github.com/ansible/ansible-examples.git
     |    dest: /src/ansible-examples
     |    archive: /tmp/ansible-examples.zip

8) 'cli_config'
  -  platform-agnostic way of pushing text-based configurations
     to network devices
     - Ex1:
       | - name: commit with comment
       |   cli_config:
       |     config: set system host-name foo
       |     commit_comment: this is a test
     
     - Ex2:
       push a templated config and back up the device's current one.
       |- name: configurable backup path
       |  cli_config:
       |    config: "{{ lookup('template', 'basic/config.j2') }}"
       |    backup: yes
       |    backup_options:
       |      filename: backup.cfg
       |      dir_path: /home/user


9) 'archive' module
   - create compressed archive of 1+ files.
   - Ex:
   |- name: Compress directory /path/to/foo/ into /path/to/foo.tgz
   |  archive:
   |    path:
   |    - /path/to/foo
   |    - /path/wong/foo
   |    dest: /path/to/foo.tar.bz2
   |    format: bz2

10) Command
   - takes the command name followed by a list of space-delimited arguments.
Ex1:
- name: return motd to registered var
  command: cat /etc/motd .. ..  
  become: yes            # ← "sudo"
  become_user: db_owner  # ← effective user
  register: mymotd       # ← STDOUT to Ansible var mymotd
  args:                  # (optional) command-module args 
                         # (vs executed command arguments)
    chdir: somedir/      # ← change to dir 
    creates: /etc/a/b    # ← Execute command only if path doesn't exist

@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html]
host fact → Play Vars
- Using the Oºsetupº module at play/run-time. Ex:

  tasks:
    - ...
    - name: re-read facts after adding custom fact
    Bºsetup:ºfilter=ansible_local     ← re-run Bºsetup moduleº

$ ansible targetHost01 -m Oºsetupº
(Output will be similar to)
Next facts are available with:
- hosts: ...
Bºgather_facts: yesº ← Will execute the module "setup"

{
  Bº"ansible_os_family": "Debian",   º
  Bº"ansible_pkg_mgr": "apt",        º
  Bº"ansible_architecture": "x86_64",º
  Bº"ansible_nodename": "ubuntu2.example.com",º
    "ansible_all_ipv4_addresses": [ "REDACTED IP ADDRESS" ],
    "ansible_all_ipv6_addresses": [ "REDACTED IPV6 ADDRESS" ],
    "ansible_bios_date": "09/20/2012",
    ...
    "ansible_date_time": {
        "date": "2013-10-02",
        ...
    },
  Oº"ansible_default_ipv4": {º
  Oº    ...                  º
  Oº},                       º
    ...
    "ansible_devices": {
        "sda": {
            "partitions": {
                ...
                  Oº"size": "19.00 GB",º
            },
            ...
        },
        ...
    },
    ...
    "ansible_env": {
        "HOME": "/home/mdehaan",
      Oº"PWD": "/root/ansible",º
      Oº"SHELL": "/bin/bash",º
        ...
    },
  Oº"ansible_fqdn": "ubuntu2.example.com",º
  Oº"ansible_hostname": "ubuntu2",º
    ...
    "ansible_processor_cores": 1,
    "ansible_ssh_host_key_dsa_public": ...
    ...
}

/etc/ansible/facts.d
(Local provided facts, 1.3+)
Way to provide "locally supplied user values" as opposed to
               "centrally supplied user values"  or
               "locally dynamically determined values"

Any file inside /etc/ansible/facts.d (@ the remotely managed host)
ending in *.fact (JSON, INI, or an executable producing JSON, ...) can supply local facts

Ex: /etc/ansible/facts.d/preferences.fact contains:
[general]
asdf=1    ← Will be available as {{ ansible_local.preferences.general.asdf }}
bar=2       (keys are always converted to lowercase)


To copy local facts and make them usable in the current play:
- hosts: webservers
  tasks:
    - name: create directory for ansible custom facts
      file: state=directory recurse=yes path=/etc/ansible/facts.d

    - name: install custom ipmi fact
      copy: src=ipmi.fact dest=/etc/ansible/facts.d ← Copy local facts

    - name: re-read facts after adding custom fact
    Bºsetup:ºfilter=ansible_local   ← re-run Bºsetup moduleº to make
                                    ← locals facts available in current play

Lookups: Query ext.data: file, sh, KeyValDB, ...
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_lookups.html]
  ...
  vars:
    motd_value: "{{ Oºlookupº(Bº'file'º, '/etc/motd') }}"
                      ^^^^^^    ^^^^
                      Use       One of:
                      lookup    - file
                      module    - password
                                - pipe      STDOUT of local exec.
                                - env       ENV.VAR.
                                - template  j2 tpl evaluation
                                - csvfile   Entry in .csv file
                                - dnstxt
                                - redis_kv  Redis key lookup
                                - etcd      etcd key lookup
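
A few more lookups following the same pattern (the variable names are illustrative):

  vars:
    user_home:   "{{ lookup('env',  'HOME') }}"                             ← read an environment variable
    git_head:    "{{ lookup('pipe', 'git rev-parse HEAD') }}"               ← STDOUT of a local command
    db_password: "{{ lookup('password', '/tmp/passwordfile length=12') }}"  ← generate (and store) a random password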
"Jinja2" template ex.
Bºnginx.conf.j2º
  server {
          listen 80 default_server;
          listen [::]:80 default_server ipv6only=on;
  
          listen 443 ssl;
  
          root /usr/share/nginx/html;
          index index.html index.htm;
  
          server_name         º{{º server_name º}}º;
          ssl_certificate     º{{º cert_file   º}}º;
          ssl_certificate_key º{{º key_file    º}}º;
  
          location / {
                  try_files $uri $uri/ =404;
          }
  }
Bºtemplates/default.conf.tplº
  templates/000_default.conf.tpl
  |˂VirtualHost *:80˃
  |    ServerAdmin webmaster@localhost
  |    DocumentRoot {{ doc_root }}
  |
  |    ˂Directory {{ doc_root }}˃
  |        AllowOverride All
  |        Require all granted
  |    ˂/Directory˃
  |˂/VirtualHost˃
  
  Task:
  |  - name: Setup default virt.host
  |    template: src=templates/default.conf.tpl dest=/etc/apache2/sites-available/000-default.conf

Bº(j2) filtersº
Oº|º must be interpreted as the "pipe" (input) to filter, not the "or" symbol.
  # default if undefined:
  - ...
    "HOST": "{{ database_host Oº| default('localhost')º }}"
  
  # fail after some debugging
  - ...
    register: result
  Oºignore_errors: Trueº
    ...
    failed_when: resultOº| failedº
  ...
  Oºfailed º True if registered value is a failed    task
  Oºchangedº True if registered value is a changed   task
  Oºsuccessº True if registered value is a succeeded task
  Oºskippedº True if registered value is a skipped   task

Bºpath filtersº
  Oºbasename  º
  Oºdirname   º
  Oºexpanduserº  '~' replaced by home dir.
  Oºrealpath  º  resolves sym.links
  Ex:
    vars:
      homepage: /usr/share/nginx/html/index.html
    tasks:
    - name: copy home page
      copy: ˃
        src={{ homepage Oº| basenameº }}
        dest={{ homepage }}

BºCustom filtersº
  filter_plugins/surround_by_quotes.py
  # From http://stackoverflow.com/a/15515929/742
  def surround_by_quote(a_list):
      return ['"%s"' % an_element for an_element in a_list]
  
  class FilterModule(object):
      def filters(self):
          return {'surround_by_quote': surround_by_quote}
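
  A hedged usage sketch, assuming the filter file above sits in ./filter_plugins/
  next to the playbook (variable and list contents are illustrative):

  # playbook.yml (fragment)
  - hosts: localhost
    vars:
      packages: [ 'vim', 'tmux', 'git' ]
    tasks:
      - debug:
          msg: "{{ packages | surround_by_quote | join(', ') }}"   # → "vim", "tmux", "git"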
notify vs register
@[https://stackoverflow.com/questions/33931610/ansible-handler-notify-vs-register]

  
  some tasks ...                         |     some tasks ...
 ºnotify:ºnginx_restart                  |    ºregister:ºnginx_restart
                                         |     
  # our handler                          |     # do this after nginx_restart changes
  - name: nginx_restart                  |    ºwhen:ºnginx_restart|changed
          ^^^^^^^^^^^^^
        - only fired when 
          tasks report changes
        - only visible in playbook  ← With register task is displayed as skipped
          if actually executed.       if 'when' condition is false.
        - can be called from any
          role.
        - (by default) executed at
          the end of the playbook.
        RºThis can be dangerousº: if the playbook
          fails midway, the handler is NOT
          notified. A second run can skip
          the handler since the task may no
          longer report a change, so it is
        RºNOT idempotentº in practice (unless
          --force-handlers is set)
        - To fire at specific point flush
          all handlers by defining a task like:
          - meta: flush_handlers
        - called only once no matter how many
          times it was notified.
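
A minimal sketch of both styles (service and task names are illustrative):

  # notify + handler style
  tasks:
    - name: copy nginx config
      copy: src=nginx.conf dest=/etc/nginx/nginx.conf
      notify: nginx_restart
    - meta: flush_handlers               # ← optional: fire pending handlers right here
  handlers:
    - name: nginx_restart
      service: name=nginx state=restarted

  # register + when style
  tasks:
    - name: copy nginx config
      copy: src=nginx.conf dest=/etc/nginx/nginx.conf
      register: nginx_conf
    - name: restart nginx
      service: name=nginx state=restarted
      when: nginx_conf is changed        # (older syntax: nginx_conf|changed)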
Handling secrets
Bºansible-vaultº: En/de-crypts values/data structures/files
@[https://github.com/tldr-pages/tldr/blob/master/pages/common/ansible-vault.md]
@[https://docs.ansible.com/ansible/latest/user_guide/vault.html#id17]

  $º$ ansible-vault create $vault_file           º  ← Create new encrypted vault file with
                                                      a prompt for a password.

  $º$ ansible-vault create \                     º  ← Create new encrypted vault file
  $º    --vault-password-file=$pass_file \       º     using a vault key file to encrypt it
  $º    $vault_file                              º 

  $º$ ansible-vault encrypt \                    º  ← Encrypt existing file using optional 
  $º    --vault-password-file=$pass_file \       º    password file
  $º    $vault_file                              º 

  $º$ ansible-vault encrypt_string               º  ← Encrypt string using Ansible's encrypted
                                                      string format, interactively

  $º$ ansible-vault view \                       º  ← View encrypted file, using pass.file 
  $º    --vault-password-file={{password_file}} \º    to decrypt
  $º   $vault_file                               º

  $º$ ansible-vault rekey \                      º  ← Re-key already encrypted vault file 
  $º --vault-password-file=$old_password_file    º    with new password file
  $º --new-vault-password-file=$new_pass_file    º
  $º $vault_file                                 º
 
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_vault.html]
- Ansible vaults use symmetric-cipher encryption 

  INPUT               ENCRYPTING                OUTPUT                  Usage 
                      COMMAND                   (can be added to SCM)   (Play.Execution)
  ──────────────      ─────────────             ────────────────────    ──────────────────
  external pass─┐  ┌→ $ ansible─vault \ (alt1)→ protectedPB.yml   ──┬→ $ ansible-playbook protectedPB.yml \  *1
                │  │    create protectedPB.yml                      │   º--ask-vault-passº                ← alt.A
  secret needed─┤(alt1)                                             │   º--vault-password-fileºpassFileº  ← alt.B_1
  at playbook   └──┤                                                │                          ^^^^^^^^
  execution        │                                                │                   content/exec.STDOUT
                   │                                                │          should be a single-line-string
                   │                                                │   export ANSIBLE_VAULT_PASSWORD_FILE=... ← alt.B_2
                 (alt.2)                                            │
                   │                                                │
                   │                                                │
                   └→ $ ansible-vault \ (alt2)→ yml to be embeded ──┘
                        encrypt_string          into existing playbook
                                                Ex:
                                                → mySecretToEncrypt              
                                                → bla bla blah(Ctrl+D)→ !vault ← C⅋P to a yml file:
                                                →    $ANSIBLE_VAULT;1.1;AES256   - vars:
                                                →    66386439653236336462...       - secret01: !vault |
                                                →    64316265363035303763...                $ANSIBLE_VAULT;1.1;AES256
                                                →           ...                             66386439653236336462...

*1: RºWARN:º Currently requires all files to be encrypted with same password                            
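
  Hedged note: newer Ansible releases (2.4+) can mix several vault passwords by
  labelling them with --vault-id (labels and file names below are illustrative):

  $º$ ansible-playbook site.yml \              º
  $º    --vault-id dev@dev-vault-pass.txt \    º ← 'dev'-labelled secrets use this pass file
  $º    --vault-id prod@prompt                 º ← prompt interactively for 'prod' secrets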
Ex (yum install)
apache@localhost
┌ ---
│ # file: ansible.yml
│ - hosts: localhost
│   connection: local
│   gather_facts: False
│ 
│   vars:
│     var_yum_prerequisites: [ 'httpd24'      , 'vim', 'tmux' ]
│     var_apt_prerequisites: [ 'apache-server', 'vim', 'tmux' ]
│ 
│   vars_files:
│     - /vars/vars_not_in_git.yml   ←  add to .gitignore
│                                      avoid sharing sensitive data
│                                      /vars/vars_not_in_git.yml will look like:
│                                      password: !vault |
│                                                $ANSIBLE_VAULT;1.1;AES256
│                                                ...
│ 
│   tasks:
│    - name: install yum pre-requisites
│      when: ansible_os_family == "RedHat"
│      become: true
│      yum:
│        name: "{{ var_yum_prerequisites }}"
│        state: present
│      notify:
│      - restart-apache2
│ 
│    - name: install apt pre-requisites
│      when: ansible_os_family == "Debian"
│      become: true
│      apt:
│        name: "{{ var_apt_prerequisites }}"
│        state: latest
│      notify:
│      - restart-apache2
│ 
│ 
│   handlers:
│   - name: restart-apache2
└     service: name=httpd state=restarted
Ex: Installing nginx
┌ web-tls.yml
│ - name: wait in control host for ssh server to be running
│   local_action: wait_for port=22 host="{{ inventory_hostname }}"
│     search_regex=OpenSSH
│ 
│ - name: Configure nginx
│  ºhosts:º webservers
│   become: True
│  ºvars:º
│     Oºkey_fileº: /etc/nginx/ssl/nginx.key
│     Gºcert_fileº: /etc/nginx/ssl/nginx.crt
│     Bºconf_fileº: /etc/nginx/sites-available/default
│     server_name: localhost
│  ºtasks:º
│     - name: install nginx
│       ºaptº: ºnameº=nginx ºupdate_cacheº=yes
│ 
│     - name: create directories for ssl certificates
│       ºfileº: ºpathº=/etc/nginx/ssl ºstateº=directory
│ 
│     - name: copy TLS key
│       ºcopyº: ºsrcº=files/nginx.key ºdestº={{ Oºkey_fileº }} owner=root ºmodeº=0600
│       ºnotifyº: restart nginx
│ 
│     - name: copy TLS certificate
│       ºcopyº: ºsrcº=files/nginx.crt ºdestº={{ Gºcert_fileº }}
│       ºnotifyº: restart nginx
│ 
│     - name: copy config file
│       ºcopyº: ºsrcº=files/nginx.confº.j2º ºdestº={{ Bºconf_fileº }}
│ 
│     - name: enable configuration
│       # set attributes of file, symlink or directory
│       ºfileº: ºdestº=/etc/nginx/sites-enabled/default ºsrcº={{ Bºconf_fileº }} state=link
│     - name: copy index.html
│       # template → new file → remote host
│       ºtemplateº: ºsrcº=templates/index.html.j2 ºdestº=/usr/share/nginx/html/index.html
│         mode=0644
│ 
│     - name: show a debug message
│       debug: "msg='Example debug message: conf_file {{ Bºconf_fileº }} included!'"
│ 
│     - name: Example to register new ansible variable
│       command: whoami
│       register: login
│     # (first debug helps to know who to write the second debug)
│     - debug: var=login
│     - debug: msg="Logged in as user {{ login.stdout }}"
│ 
│     - name: Example to ºignore errorsº
│       command: /opt/myprog
│       register: result
│       ignore_errors: ºTrueº
│     - debug: var=result
│ 
│  ºhandlers:º
│     - name: restart nginx
└       ºserviceº: ºnameº=nginx ºstateº=restarted

Insanely complete Ansible playbook
@[https://gist.github.com/marktheunissen/2979474]
---                               ← YAML documents must begin with doc.separator "---"

#### 
#### descriptive comment at the top of my playbooks.
#### 
#
# Overview: Playbook to bootstrap a new host for configuration management.
# Applies to: production
# Description:
#   Ensures that a host is configured for management with Ansible.
###########
#
# Note:
# RºYAML, like Python, cares about whitespaceº:BºIndent consistentlyº  .
# Be aware! Unlike Python, YAML refuses to allow the tab character for
# indentation, so always use spaces.
#
# Two-space indents feel comfortable to me, but do whatever you like.
# vim:ff=unix ts=2 sw=2 ai expandtab
#
# If you're new to YAML, keep in mind that YAML documents, like XML
# documents, represent a tree-like structure of nodes and text. More
# familiar with JSON?  Think of YAML as a strict and more flexible JSON
# with fewer significant characters (e.g., :, "", {}, [])
#
# The curious may read more about YAML at:
# http://www.yaml.org/spec/1.2/spec.html
#


###
# Notice the minus on the line below -- this starts the playbook's record
# in the YAML document. Only one playbook is allowed per YAML file.  Indent
# the body of the playbook.
-

  hosts: all
  ###########
  # Playbook attribute: hosts
  # Required: yes
  # Description:
  #   The name of a host or group of hosts that this playbook should apply to.
  #
  ## Example values:
  #   hosts: all -- applies to all hosts
  #   hosts: hostname -- apply ONLY to the host 'hostname'
  #   hosts: groupname -- apply to all hosts in groupname
  #   hosts: group1,group2 -- apply to hosts in group1 ⅋ group2
  #   hosts: group1,host1 -- mix and match hosts
  #   hosts: *.mars.nasa.gov wildcard matches work as expected
  #
  ## Using a variable value for 'hosts'
  #
  # You can, in fact, set hosts to a variable, for example:
  #
  #   hosts: $groups -- apply to all hosts specified in the variable $groups
  #
  # This is handy for testing playbooks, running the same playbook against a
  # staging environment before running it against production, occasional
  # maintenance tasks, and other cases where you want to run the playbook
  # against just a few systems rather than a whole group.
  #
  # If you set hosts as shown above, then you can specify which hosts to
  # apply the playbook to on each run as so:
  #
  #   ansible-playbook playbook.yml --extra-vars="groups=staging"
  #
  # Use --extra-vars to set $groups to any combination of groups, hostnames,
  # or wildcards just like the examples in the previous section.
  #

  sudo: True
  ###########
  # Playbook attribute: sudo
  # Default: False
  # Required: no
  # Description:
  #   If True, always use sudo to run this playbook, just like passing the
  #   --sudo (or -s) flag to ansible or ansible-playbook.

  user: remoteuser
  ###########
  # Playbook attribute:  user
  # Default: 'root'
  # Required: no
  # Description
  #   Remote user to execute the playbook as

  ###########
  # Playbook attribute: vars
  # Default: none
  # Required: no
  # Description:
  #  Set configuration variables passed to templates ⅋ included playbooks
  #  and handlers.  See below for examples.
  vars:
    color: brown

    web:
      memcache: 192.168.1.2
      httpd: apache
    # Tree-like structures work as expected, but be careful to surround
    #  the variable name with ${} when using.
    #
    # For this example, ${web.memcache} and ${web.apache} are both usable
    #  variables.

    ########
    # The following works in Ansible 0.5 and later, and will set $config_path
    # "/etc/ntpd.conf" as expected.
    #
    # In older versions, $config_path will be set to the string "/etc/$config"
    #
    config: ntpd.conf
    config_path: /etc/$config

    ########
    # Variables can be set conditionally. This is actually a tiny snippet
    # of Python that will get filled in and evaluated during playbook execution.
    # This expression should always evaluate to True or False.
    #
    # In this playbook, this will always evaluate to False, because 'color'
    #  is set to 'brown' above.
    #
    # When ansible interprets the following, it will first expand $color to
    # 'brown' and then evaluate 'brown' == 'blue' as a Python expression.
    is_color_blue: "'$color' == 'blue'"

    #####
    # Builtin Variables
    #
    # Everything that the 'setup' module provides can be used in the
    # vars section.  Ansible native, Facter, and Ohai facts can all be
    # used.
    #
    # Run the setup module to see what else you can use:
    # ansible -m setup -i /path/to/hosts.ini host1
    main_vhost: ${ansible_fqdn}
    public_ip:  ${ansible_eth0.ipv4.address}

    # vars_files is better suited for distro-specific settings, however...
    is_ubuntu: "'${ansible_distribution}' == 'ubuntu'"


  ##########
  # Playbook attribute: vars_files
  # Required: no
  # Description:
  #   Specifies a list of YAML files to load variables from.
  #
  #   Always evaluated after the 'vars' section, no matter which section
  #   occurs first in the playbook.  Examples are below.
  #
  #   Example YAML for a file to be included by vars_files:
  #   ---
  #   monitored_by: phobos.mars.nasa.gov
  #   fish_sticks: "good with custard"
  #   # (END OF DOCUMENT)
  #
  #   A 'vars' YAML file represents a list of variables. Don't use playbook
  #   YAML for a 'vars' file.
  #
  #   Remove the indentation ⅋ comments of course, the '---' should be at
  #   the left margin in the variables file.
  #
  vars_files:
    # Include a file from this absolute path
    - /srv/ansible/vars/vars_file.yml

    # Include a file from a path relative to this playbook
    - vars/vars_file.yml

    # By the way, variables set in 'vars' are available here.
    - vars/$hostname.yml

    # It's also possible to pass an array of files, in which case
    # Ansible will loop over the array and include the first file that
    # exists.  If none exist, ansible-playbook will halt with an error.
    #
    # An excellent way to handle platform-specific differences.
    - [ vars/$platform.yml, vars/default.yml ]

    # Files in vars_files process in order, so later files can
    # provide more specific configuration:
    - [ vars/$host.yml ]

    # Hey, but if you're doing host-specific variable files, you might
    # consider setting the variable for a group in your hosts.ini and
    # adding your host to that group. Just a thought.


  ##########
  # Playbook attribute: vars_prompt
  # Required: no
  # Description:
  #   A list of variables that must be manually input each time this playbook
  #   runs.  Used for sensitive data and also things like release numbers that
  #   vary on each deployment.  Ansible always prompts for this value, even
  #   if it's passed in through the inventory or --extra-vars.
  #
  #   The input won't be echoed back to the terminal.  Ansible will always
  #   prompt for the variables in vars_prompt, even if they're passed in via
  #   --extra-vars or group variables.
  #
  #   TODO: I think that the value is supposed to show as a prompt but this
  #   doesn't work in the latest devel
  #
  vars_prompt:
    passphrase: "Please enter the passphrase for the SSL certificate"

    # Not sensitive, but something that should vary on each playbook run.
    release_version: "Please enter a release tag"

  ##########
  # Playbook attribute: tasks
  # Required: yes
  # Description:
  # A list of tasks to perform in this playbook.
  tasks:
    ##########
    # The simplest task
    # Each task must have a name ⅋ action.
    - name: Check that the server's alive
      action: ping

    ##########
    # Ansible modules do the work!
    - name: Enforce permissions on /tmp/secret
      action: file path=/tmp/secret mode=0600 owner=root group=root
    #
    # Format 'action' like above:
    # modulename  module_parameters
    #
    # Test your parameters using:
    #   ansible -m $module  -a "$module_parameters"
    #
    # Documentation for the stock modules:
    # http://ansible.github.com/modules.html

    ##########
    # Use variables in the task!
    #
    # Variables expand in both name and action
    - name: Paint the server $color
      action: command echo $color


    ##########
    # Trigger handlers when things change!
    #
    # Ansible detects when an action changes something.  For example, the
    # file permissions change, a file's content changed, a package was
    # just installed (or removed), a user was created (or removed).  When
    # a change is detected, Ansible can optionally notify one or more
    # Handlers.  Handlers can take any action that a Task can. Most
    # commonly they are used to restart a service when its configuration
    # changes. See "Handlers" below for more about handlers.
    #
    # Handlers are called by their name, which is very human friendly.

    # This will call the "Restart Apache" handler whenever 'copy' alters
    # the remote httpd.conf.
    - name: Update the Apache config
      action: copy src=httpd.conf dest=/etc/httpd/httpd.conf
      notify: Restart Apache

    # Here's how to specify more than one handler
    - name: Update our app's configuration
      action: copy src=myapp.conf dest=/etc/myapp/production.conf
      notify:
        - Restart Apache
        - Restart Redis

    ##########
    # Include tasks from another file!
    #
    # Ansible can include a list of tasks from another file. The included file
    # must represent a list of tasks, which is different than a playbook.
    #
    # Task list format:
    #   ---
    #   - name: create user
    #     action: user name=$user color=$color
    #
    #   - name: add user to group
    #     action: user name=$user groups=$group append=true
    #   # (END OF DOCUMENT)
    #
    #   A 'tasks' YAML file represents a list of tasks. Don't use playbook
    #   YAML for a 'tasks' file.
    #
    #   Remove the indentation ⅋ comments of course, the '---' should be at
    #   the left margin in the variables file.

    # In this example $user will be 'sklar'
    #  and $color will be 'red' inside new_user.yml
    - include: tasks/new_user.yml user=sklar color=red

    # In this example $user will be 'mosh'
    #  and $color will be 'mauve' inside new_user.yml
    - include: tasks/new_user.yml user=mosh color=mauve

    # Variables expand before the include is evaluated:
    - include: tasks/new_user.yml user=chris color=$color


    ##########
    # Run a task on each thing in a list!
    #
    # Ansible provides a simple loop facility. If 'with_items' is provided for
    # a task, then the task will be run once for each item in the 'with_items'
    # list.  $item changes each time through the loop.
    - name: Create a file named $item in /tmp
      action: command touch /tmp/$item
      with_items:
        - tangerine
        - lemon

    ##########
    # Choose between files or templates!
    #
    # Sometimes you want to choose between local files depending on the
    # value of the variable.  first_available_file checks for each file
    # and, if the file exists calls the action with $item={filename}.
    #
    # Mostly useful for 'template' and 'copy' actions.  Only examines local
    # files.
    #
    - name: Template a file
      action: template src=$item dest=/etc/myapp/foo.conf
      first_available_file:
        # ansible_distribution will be "ubuntu", "debian", "rhel5", etc.
        - templates/myapp/${ansible_distribution}.conf

        # If we couldn't find a distribution-specific file, use default.conf:
        - templates/myapp/default.conf

    ##########
    # Conditionally execute tasks!
    #
    # Sometimes you only want to run an action under certain conditions.
    # Ansible evaluates 'only_if' as a Python expression and will only run the
    # action when the expression evaluates to True.
    #
    # If you're trying to run an task only when a value changes,
    # consider rewriting the task as a handler and using 'notify' (see below).
    #
    - name: "shutdown all ubuntu"
      action: command /sbin/shutdown -t now
      only_if: "$is_ubuntu"

    - name: "shutdown the government"
      action: command /sbin/shutdown -t now
      only_if: "'$ansible_hostname' == 'the_government'"

    ##########
    # Notify handlers when things change!
    #
    # Each task can optionally have one or more handlers that get called
    # when the task changes something -- creates a user, updates a file,
    # etc.
    #
    # Handlers have human-readable names and are defined in the 'handlers'
    #  section of a playbook.  See below for the definitions of 'Restart nginx'
    #  and 'Restart application'
    - name: update nginx config
      action: file src=nginx.conf dest=/etc/nginx/nginx.conf
      notify: Restart nginx

    - name: roll out new code
      action: git repo=git://codeserver/myapp.git dest=/srv/myapp version=HEAD branch=release
      notify:
        - Restart nginx
        - Restart application


    ##########
    # Run things as other users!
    #
    # Each task has an optional 'user' and 'sudo' flag to indicate which
    # user a task should run as and whether or not to use 'sudo' to switch
    # to that user.
    - name: dump all postgres databases
      action: pg_dumpall -w -f /tmp/backup.psql
      user: postgres
      sudo: False

    ##########
    # Run things locally!
    #
    # Each task also has a 'connection' setting to control whether a local
    # or remote connection is used.  The only valid options now are 'local'
    # or 'paramiko'.  'paramiko' is assumed by the command line tools.
    #
    # This can also be set at the top level of the playbook.
    - name: create tempfile
      action: dd if=/dev/urandom of=/tmp/random.txt count=100
      connection: local

  ##########
  # Playbook attribute: handlers
  # Required: no
  # Description:
  #   Handlers are tasks that run when another task has changed something.
  #   See above for examples.  The format is exactly the same as for tasks.
  #   Note that if multiple tasks notify the same handler in a playbook run
  #   that handler will only run once.
  #
  #   Handlers are referred to by name. They will be run in the order declared
  #   in the playbook.  For example: if a task were to notify the
  #   handlers in reverse order like so:
  #
  #   - task: touch a file
  #     action: file name=/tmp/lock.txt
  #     notify:
  #     - Restart application
  #     - Restart nginx
  #
  #   The "Restart nginx" handler will still run before the "Restart
  #   application" handler because it is declared first in this playbook.
  handlers:
    - name: Restart nginx
      action: service name=nginx state=restarted

    # Any module can be used for the handler action
    - name: Restart application
      action: command /srv/myapp/restart.sh

    # It's also possible to include handlers from another file.  Structure is
    # the same as a tasks file, see the tasks section above for an example.
- include: handlers/site.yml
Troubleshooting
  Problem ex:
  'django_manage' module always returns 'changed: False' for
  some "external" database commands.
  (ºnonºidempotent task)
  Solution:
Oº'changed_when'/'failed_when'º provides hints to Ansible at play time:
- name: init-database
  django_manage:
    command: createdb --noinput --nodata
    app_path: "{{ proj_path }}"
    virtualenv: "{{ venv_path }}"
Oºfailed_whenº: False # ←  avoid stopping execution
  register:Gºresultº
Oºchanged_when:º Gºresult.outº is defined and '"Creating tables" in Gºresult.outº'

- debug: var=result

- fail:
Non-Classified
Dynamic Inventory
@[https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html]
(EC2, OpenStack,...)
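
A minimal sketch of an executable ("dynamic") inventory script; any language works
as long as it prints JSON and honours --list/--host (group/host names are made up):

  #!/bin/bash
  # ./dyn_inventory.sh  (chmod +x), used as:  ansible-playbook -i ./dyn_inventory.sh site.yml
  case "$1" in
    --list)   # full inventory as JSON
      echo '{ "webservers": { "hosts": ["web01.example.com", "web02.example.com"],
                              "vars":  { "http_port": 80 } },
              "_meta": { "hostvars": { "web01.example.com": { "rack": "r1" } } } }'
      ;;
    --host)   # per-host vars (already served through "_meta" above)
      echo '{}'
      ;;
  esac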
Fact Caching
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#fact-caching]

- To benefit from cached facts you will set gather_facts to False in most plays.

- Ansible ships with two persistent cache plugins: redis and jsonfile.

- To configure fact caching using redis, enable it in ansible.cfg as follows:
[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
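
- The jsonfile backend needs no extra daemon; a hedged ansible.cfg sketch
  (the cache directory path is arbitrary):
  [defaults]
  gathering = smart
  fact_caching = jsonfile
  fact_caching_connection = /tmp/ansible_fact_cache   # dir holding one json file per host
  fact_caching_timeout = 86400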
AWX GUI
@[https://www.howtoforge.com/ansible-awx-guide-basic-usage-and-configuration/]

- AWX is an open source web application that provides a user interface, REST API, 
  and task engine for Ansible. It is the open-source upstream of Ansible Tower. 
  AWX lets you manage Ansible playbooks and inventories, and schedule jobs 
  to run using the web interface. 

- How to Run and Schedule Ansible Playbook Using AWX GUI
@[https://www.linuxtechi.com/run-schedule-ansible-playbook-awx-gui/]
Puppet (Ansible Alternative)
Puppet 101
REF:
@[https://blogs.sequoiainc.com/puppet-101-part-1/]
@[https://blogs.sequoiainc.com/puppet-101-part-2/]

master/agent architecture:
- PuppetºMasterº: - server holding all the configuration.

- PuppetºAgent º: - Installed on each "target" server,    ºAgent Certificateº: ← - signed Master's CA.
                    runs @ regular intervals:              ─────────────────     - Used for secure network
                  - Query desired state and if needed      -ºnode-nameº            communic between Master←→Agent
                    (configuration drift) update state.      ^
                                                             |
                                                  Ex. web01.myDomain.com (wildcards allowed)
                                                  Assigning/managing node names Rºcan be trickyº
                                                  in the cloud since DNS names change frequently.
┌─────────────────────────────────────────────────────────┬────────────────────────────────────────────────────────────┐
│OºRESOURCEº                                              │ BºCLASSESº                                                 │
│(Concrete resource that must be present in server)       │ - A Class is a group of Resources that                     │
│   ┌─── user/file/package/...that must be present in     │   belong together conceptually,                            │
│   │    server (or custom resource)                      │   fulfilling a given instalation-                          │
│   v                         │  Ex:                      │   -requirement role.                                       │
│OºTYPEº{ TITLE ← must unique │Oºuserº{ 'jbar':           │ - variables can be defined to customize                    │
│      ATTRIBUTE, per Node    │   ensure  =˃ present,     │   target environments.                                     │
│      ATTRIBUTE,             │   home    =˃ '/home/jbar',│   (test,acceptance,pre,pro,..)                             │
│      ATTRIBUTE,             │   shell   =˃ '/bin/bash', │ - inheritance is allowed to save                           │
│      ...                    │  }                        │   duplicated definition                                    │
│   }  ^                          ^          ^            │                                                            │
│      |                          |          |            │                       │ Ex:                                │
│      key =˃ value              key       value          │ class BºCLASS_NAMEº { │ class Bºusersº {                   │
│                                                         │     RESOURCE          │     user { 'tomcat':               │
│$ puppet resource Oºuserº                                │     RESOURCE          │         ensure   =˃ present,       │
│         ^^^^^^^^^^^^^^^                                 │ }                     │         home     =˃ '/home/jbauer',│
│         Returns all users                               │                       │         shell    =˃ '/bin/bash',   │
│         (not just those configured/installed by Puppet) │                       │     }                              │
│         (same behaviour applies to any other resource)  │                       │     user { 'nginx':                │
│                                                         │                       │         ...                        │
│                                                         │                       │     }                              │
│                                                         │                       │     ...                            │
│                                                         │                       │ }                                  │
│                                                         │                       │ include Bºusersº                   │
│                                                         │                       │ ^^^^^^^^^^^^^^^^                   │
│                                                         │                       │ ºDon't forgetº. Otherwise class is │
│                                                         │                          ignored                           │
├─────────────────────────────────────────────────────────┼────────────────────────────────────────────────────────────┤
│QºNODE ("Server")º                                       │                                                            │
│- bundle of: [ class1, class2, .. , resource1,  ...]     │           QºNODEº 1 ←─────────→ NBºClassº                  │
│                                                         │                  1                1                        │
│                        must match Agent-Certificate.name│                   \              /                         │
│     SYNTAX               │Ex:     ┌───────┴────────┐    │                    \            /                          │
│node Q"NAME" {            │node Qº"web01.myDomain.com"º {│                     N          N                           │
│    include BºCLASS01º    │                              │                     OºResourceº                            │
│    include BºCLASS02º    │    include Bºtomcatº         │                                                            │
│    include Bº...º        │    include Bºusersº          │                                                            │
│    include OºRESOURCE01º │                              │ YºMANIFESTº: 0+QºNODEsº, 0+BºClassesº, 0+OºResourcesº      │
│    include OºRESOURCE02º │  Oºfileº{ '/etc/app.conf'    │                                                            │
│    include Oº...º        │        ...                   │ GºMODULEº: 1+Manifests, 0+supporting artifacts             │
│}                         │    }                         │            ^                                               │
│                          │}                             │ ($PUPPET/environments/$ENV/"module"/manifest/init.pp )     │
│                                                         │                                                            │
│The special name Qº"default"º will be applied to any     │  ºSITE MANIFESTº: Separated Manifests forming the catalog  │
│server (used for example to apply common security,       │                   (Read by the Puppet Agent)               │
│packages,...)                                            │                                                            │
└─────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────┐
│YºMANIFESTSº:                                            │GºMODULESº                                                   │
│*.pp file defining OºResourcesº, BºClassesº and QºNodesº │- reusable bundle of [ 1+ Manifests , "support file list" ]  │
│                                                         │- installed on Puppet Master                                 │
│ ┌────────────────────────────────────────────────       │  (can also be installed from a central repository using     │
│ │example.pp Manifest:                                   │   $ puppet module ... )                                     │
│ │ // variable declarations, logic constructs, ...       │- referenced by name in other Modules or in Manifests.       │
│ │                                                       │- Layout of                                                  │
│ │                                                       │  ${PUPPET}/environments/${ENVIRONMENT}/modules/ºmodule01º   │
│ │Oºuser{ 'jbauer':º                                     │                                name must                    │
│ │      ensure      =˃ present,                          │ ºmodule01º  ←───────────────── match                        │
│ │      home        =˃ '/home/jbauer',                   │  ├─ manifests                  vvvvvvvv                     │
│ │      shell       =˃ '/bin/bash',                      │  │  ├ºinit.ppº ←········ classºmodule01º{  │                │
│ │  }                                                    │  │  │                      ...             │                │
│ │                                                       │  │  │                    }                 │                │
│ │Bºclass 'security'º{                                   │  │  │                                      │                │
│ │      ...                                              │  │  ├ class01.pp (opt)←· class class01 {   │                │
│ │  }                                                    │  │  │                       ...                             │
│ │                                                       │  │  │                    }                                  │
│ │  include security                                     │  │  └ ...                      ^                            │
│ │                                                       │  │                       module01@init.pp   can be used as  │
│ │Bºclass 'tomcat'º{                                     │  ├─ files        (opt)   include module01                   │
│ │  }                                                    │  ├─ templates    (opt)   class01@class01.pp can be used as  │
│ │                                                       │  ├─ lib          (opt)   include module01::class01          │
│ │Qºnodeº'web01.example.com' {                           │  ├─ facts.d      (opt)   Retrieve storage,CPU,...before  ←─┐│
│ │      includeBºtomcatº                                 │  │                       exec. the catalog                 ││
│ │      ...                                              │  │@[https://puppet.com/docs/puppet/latest/core_facts.html] ││
│ │                                                       │  ├─ examples     (opt)                                     ││
│ │  }                                                    │  └─ spec         (opt)                                     ││
└─────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────┼┘
                                                                               ┌───────────────────────────────────────┘
┌───────────────────────────────────────────────────────────────────────────┐  Example custom "facter":
│YºSITE (MAIN) MANIFESTº                                                    │  $ cat ./modules/basic/facts/lib/facter/common.rb
│- area of Puppet configurationºseparated from Modulesº.                    │  → Facter.add("hostnamePart01") do
│- By default, all Manifests contained in                                   │  →   setcode do
│    º${PUPPET}/environments/${ENVIRONMENT}/manifestsº                      │  →     h = Facter.value(:hostname)
│  (vs ${PUPPET}/environments/${ENVIRONMENT}/modules/mod1...                │  →     h_a = h.split("-")[0].tr("0-9", "").chomp
│      ${PUPPET}/environments/${ENVIRONMENT}/modules/mod2...)               │  →   end
│- Its content is concatenated and executed as the Site Manifest.           │  → end
│-ºstarting pointºfor calculating the ºPUPPET catalogº ,                    │  → ...
│  i.e., the "sum total of applicable configuration" for a node.            │  →
│- This is the information queried by the Puppet Agent installed on each    │
│  "satellite" server.                                                      │
│  - any standalone Resource or Class declarations is automatically applied │
│  - matching Nodes (Node_name vs Agent Certificate Name) are also applied  │
└───────────────────────────────────────────────────────────────────────────┘
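
BºQuick local test of manifests (a sketch; file names are illustrative):º
  $ puppet apply --noop site.pp   ← dry-run a manifest locally
  $ puppet apply site.pp          ← apply it for real (no Master needed)
  $ puppet agent --test           ← force an immediate agent run against the
                                    Master instead of waiting for the next
                                    scheduled interval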

ºADVANCED TOPICSº (TODO)
 - controlling the Resources order  execution
 - transient cloud servers
 - auto-signing and node name wildcards
 - ...
Puppet Bolt
"Agentless" version of Puppet follwing Ansible approach.
It can be installed on a local workstation and connects
directly to remote targets with SSH or WinRM, so you are 
not required to install any agent software.
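Ex (a sketch; target names and the plan name are placeholders):
  $ bolt command run 'systemctl is-active sshd' \
      --targets web01.myDomain.com,web02.myDomain.com --user root
  $ bolt plan run mymodule::myplan --targets web01.myDomain.com
    ^^^^^^^^^^^^^
    Bolt can also run existing Puppet plans/tasks agentlessly.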
Infra as Code(IaC)
Vagrant (VMs as code)
External Links
- Vagrant Docs
- CLI Reference

- Getting Started
- Providers list
- Boxes Search
- Networking
Boxes
- Pre-built VM images that avoid the slow and tedious process of building one from scratch.
- Can be used as base image to quickly clone a virtual machine.
- Specifying the box to use for your Vagrant environment is always the first
  step after creating a new Vagrantfile.
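
  Ex (a sketch; "ubuntu/xenial64" is just an illustrative box name):
  $ vagrant box add ubuntu/xenial64        ← download the box from Vagrant Cloud
  $ vagrant box list                       ← list locally installed boxes
  Vagrantfile:
    config.vm.box = "ubuntu/xenial64"      ← reference the box by name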

Vagrant Share
@[https://www.vagrantup.com/intro/getting-started/share.html]
@[https://www.vagrantup.com/docs/share]

  $ vagrant share  ← - share a Vagrant environment with anyone in the world. 

- three primary modes or features (not mutually exclusive, can be combined)
  
  - HTTP sharing: creates a shareable URL pointing directly to the Vagrant environment.
  BºURL "consumer" does not need Vagrant installed, so it can be shared º
  Bºwith anyone. Useful for testing webhooks, demos with clients, ...   º
  
  - SSH sharing:  instant SSH access to Vagrant environment by anyone
    running vagrant connect --ssh.                
    (pair programming, debugging ops problems, etc....)
  
  - General sharing allows anyone to access any exposed port of your 
    Vagrant environment by running vagrant connect on the remote side. 
    This is useful if the remote side wants to access your Vagrant 
    environment as if it were a computer on the LAN.
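
  Ex (a sketch; the share name below is invented, the real one is printed by
  "vagrant share"; depending on the Vagrant version, sharing may also require
  a "vagrant login" or a local ngrok install):
  hostA $ vagrant share --ssh                     ← HTTP + SSH sharing, prints URL/name
  hostB $ vagrant connect --ssh funny-wombat-1234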
Command List
vagrant "COMMAND" -h
$ vagrant  # Most frequently used commands                                   | $ vagrant list-commands # (including rarely used command)
Usage: vagrant [options] ˂command˃ [˂args˃]                                   |
Common commands:                                                             |
box           manages boxes: installation, removal, etc.                     | box             manages boxes: installation, removal, etc.
destroy       stops and deletes all traces of the vagrant machine            | cap             checks and executes capability
global-status outputs status Vagrant environments for this user              | destroy         stops and deletes all traces of the vagrant machine
halt          stops the vagrant machine                                      | docker-exec     attach to an already-running docker container
help          shows the help for a subcommand                                | docker-logs     outputs the logs from the Docker container
init          initializes a new Vagrant environment by creating a Vagrantfile| docker-run      run a one-off command in the context of a container
login         log in to HashiCorp's Vagrant Cloud                            | global-status   outputs status Vagrant environments for this user
package       packages a running vagrant environment into a box              | halt            stops the vagrant machine
plugin        manages plugins: install, uninstall, update, etc.              | help            shows the help for a subcommand
port          displays information about guest port mappings                 | init            initializes a new Vagrant environment by creating a Vagrantfile
powershell    connects to machine via powershell remoting                    | list-commands   outputs all available Vagrant subcommands, even non-primary ones
provision     provisions the vagrant machine                                 | login           log in to HashiCorp's Vagrant Cloud
push          deploys code in this environment to a configured destination   | package         packages a running vagrant environment into a box
rdp           connects to machine via RDP                                    | plugin          manages plugins: install, uninstall, update, etc.
reload        restarts vagrant machine, loads new Vagrantfile configuration  | port            displays information about guest port mappings
resume        resume a suspended vagrant machine                             | powershell      connects to machine via powershell remoting
snapshot      manages snapshots: saving, restoring, etc.                     | provider        show provider for this environment
ssh           connects to machine via SSH                                    | provision       provisions the vagrant machine
ssh-config    outputs OpenSSH valid configuration to connect to the machine  | push            deploys code in this environment to a configured destination
status        outputs status of the vagrant machine                          | rdp             connects to machine via RDP
suspend       suspends the machine                                           | reload          restarts vagrant machine, loads new Vagrantfile configuration
up            starts and provisions the vagrant environment                  | resume          resume a suspended vagrant machine
validate      validates the Vagrantfile                                      | rsync           syncs rsync synced folders to remote machine
version       prints current and latest Vagrant version                      | rsync-auto      syncs rsync synced folders automatically when files change
                                                                             | snapshot        manages snapshots: saving, restoring, etc.
                                                                             | ssh             connects to machine via SSH
                                                                             | ssh-config      outputs OpenSSH valid configuration to connect to the machine
                                                                             | status          outputs status of the vagrant machine
                                                                             | suspend         suspends the machine
                                                                             | up              starts and provisions the vagrant environment
                                                                             | validate        validates the Vagrantfile
                                                                             | version         prints current and latest Vagrant version
Quick
Setup
$ mkdir vagrant_getting_started
$ cd vagrant_getting_started
$ vagrant init # creates new Vagrantfile
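Typical lifecycle once the Vagrantfile exists (a minimal sketch):
$ vagrant up       # download the box (if needed), create and provision the VM
$ vagrant ssh      # open a shell inside the running VM
$ vagrant halt     # stop the VM
$ vagrant destroy  # delete the VM completely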
3 Virt.Box
Cluster Ex
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Use the same key for each machine
  config.ssh.insert_key = false

  config.vm.define "vagrant1" do |vagrant1|
    vagrant1.vm.box = "ubuntu/xenial64"
    vagrant1.vm.network "forwarded_port", guest: 80, host: 8080
    vagrant1.vm.network "forwarded_port", guest: 443, host: 8443
  end
  config.vm.define "vagrant2" do |vagrant2|
    vagrant2.vm.box = "ubuntu/xenial64"
    vagrant2.vm.network "forwarded_port", guest: 80, host: 8081
    vagrant2.vm.network "forwarded_port", guest: 443, host: 8444
  end
  config.vm.define "vagrant3" do |vagrant3|
    vagrant3.vm.box = "ubuntu/xenial64"
    vagrant3.vm.network "forwarded_port", guest: 80, host: 8082
    vagrant3.vm.network "forwarded_port", guest: 443, host: 8445
  end
end

# -º- mode: ruby -º-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
        # Use the same key for each machine
        config.ssh.insert_key = false

        config.vm.define "vagrant1" do |vagrant1|
                vagrant1.vm.box = "ubuntu/xenial64"
                vagrant1.vm.provider :virtualbox do |v|
                        v.customize ["modifyvm", :id, "--memory", 1024]
                end
                vagrant1.vm.network "forwarded_port", guest: 80, host: 8080
                vagrant1.vm.network "forwarded_port", guest: 443, host: 8443
                vagrant1.vm.network "private_network", ip: "192.168.0.1"
                # Provision through custom bootstrap.sh script
                config.vm.provision :shell, path: "bootstrap.sh"
        end
        config.vm.define "vagrant2" do |vagrant2|
                vagrant2.vm.box = "ubuntu/xenial64"
                vagrant2.vm.provider :virtualbox do |v|
                        v.customize ["modifyvm", :id, "--memory", 2048]
                end
                vagrant2.vm.network "forwarded_port", guest: 80, host: 8081
                vagrant2.vm.network "forwarded_port", guest: 443, host: 8444
                vagrant2.vm.network "private_network", ip: "192.168.0.2"
        end
        config.vm.define "vagrant3" do |vagrant3|
                vagrant3.vm.box = "ubuntu/xenial64"
                vagrant3.vm.provider :virtualbox do |v|
                        v.customize ["modifyvm", :id, "--memory", 2048]
                end
                vagrant3.vm.network "forwarded_port", guest: 80, host: 8082
                vagrant3.vm.network "forwarded_port", guest: 443, host: 8445
                vagrant3.vm.network "private_network", ip: "192.168.0.3"
        end
end
Terraform(VMs+Firewalls+...)
Terraform 101
BºExternal Linksº:
- @[https://learn.hashicorp.com/terraform/getting-started/install.html]
- @[https://www.terraform.io/intro/use-cases.html]
  - Heroku App Setup
  - Multi-Tier Applications
  - Self-Service Clusters
  - Software Demos
  - Disposable Environments
  - Software Defined Networking
  - Resource Schedulers
  - Multi-Cloud Deployment

BºSTEP 1: Create tf fileº
  $ mkdir project01
  $ cd project01
  $ vim libvirt.tf              ← with content like:
  │ provider "libvirt" {              ← Alt 1: local kvm/libvirt provider 
  │   uri = "qemu:///system"            (Check KVM setup for more info)
  │ }
  │ # provider "libvirt" {            ← Alt 2: Remote provider
  │ #   alias = "server2"
  │ #   uri   = "qemu+ssh://root@192.168.100.10/system"
  │ # }
   
  │ resource "libvirt_volume" "centos7-qcow2" {
  │   name = "centos7.qcow2"
  │   pool = "default"
  │   source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
  │   #source = "./CentOS-7-x86_64-GenericCloud.qcow2"
  │   format = "qcow2"
  │ }
   
  │ # Adding default user STEP 1 {{{
  │ data "template_file" "user_data" {                      ← Our qcow2 does NOT provide 
  │   template = "${file("${path.module}/cloud_init.cfg")}"   default user/pass to log-in
  │ }                                                         See BºSTEP 1.2º
  │ # Use CloudInit to add the instance
  │ resource "libvirt_cloudinit_disk"º"commoninit" {        ← Resource used to "bootstrap"
  │   name = "commoninit.iso"                                 user data to the instance.
  │   user_data = "${data.template_file.user_data.rendered}"
  │ }
  │ # }}}
  │ # Adding default user STEP 2 {{{
  │ # (this attribute goes inside the libvirt_domain resource defined below)
  │      cloudinit = "${libvirt_cloudinit_disk.commoninit.id}"
  │ # }}}
  │
  │ resource "libvirt_domain" "db1" {    ← For KVM: Define KVM domain to create
  │   name   = "db1"
  │   memory = "1024"
  │   vcpu   = 1
  │
  │   network_interface {
  │     network_name = "default"
  │   }
  │
  │   disk {
  │     volume_id = libvirt_volume.centos7-qcow2.id
  │   }
  │
  │   console {
  │     type = "pty"
  │     target_type = "serial"
  │     target_port = "0"
  │   }
  │
  │   graphics {
  │     type = "spice"
  │     listen_type = "address"
  │     autoport = true
  │   }
  │ }
  │
  │ output "ip" {                                            ← Output Server IP
  │   value = "${libvirt_domain.db1.network_interface.0.addresses.0}"
  │ }
  

BºSTEP 1.2: Create cloud_init.cfg fileº
  (Needed when qcow2/... image doesn't provide initial user/pass to log in)
  $ vim cloud_init.cfg
  │ #cloud-config
  │ # vim: syntax=yaml                          
  │ #
  │ # ***********************
  │ #   ---- for more examples look at: ------
  │ #   https://cloudinit.readthedocs.io/en/latest/topics/examples.html
  │ # ******************************
  │ #
  │ # This is the configuration syntax that the write_files module
  │ # will know how to understand. encoding can be given b64|gz|gz+b64.
  │ # The content will be decoded accordingly and then written to the path 
  │ # that is provided.
  │ #
  │ # Note: Content strings here are truncated for example purposes.
  │ ssh_pwauth: True
  │ chpasswd:
  │   list: |
  │      root: StrongPassword
  │   expire: False
  │
  │ users:
  │   - name: jmutai       # ← Change by real one
  │     ssh_authorized_keys:
  │       - ssh-rsa AAAAXX # ← Change by real one
  │     sudo: ['ALL=(ALL) NOPASSWD:ALL']
  │     shell: /bin/bash
  │     groups: wheel
  │
  │ # This will set root password to StrongPassword,
  │ # add user named jmutai with the specified Public SSH keys,
  │ # and add the user to the wheel group, allowed to run sudo
  │ # commands without password.

BºSTEP 2: Initº
  $ terraformºinitº
  (Output like ...)
  → Initializing provider plugins…
  → Terraform has been successfully initialized!
  → ... Try runningº"terraform plan"ºto see any changes
  → required for your infrastructure. All commands should now work.
  →
  → If you everºset or change modulesºor backend configuration,
  → ºrerun this command to reinitializeºyour working directory.
  →  If you forget, other commands will detect it and remind you

BºSTEP 3: Check needed changesº
  (Short of "dry-run")
  $ terraformºplanº
  →  Refreshing Terraform state in-memory prior to plan...
  →  ....
  →  An execution plan has been generated ...
  →  ...
  →    # libvirt_domain.db1 will be created
  →    + resource "libvirt_domain" "db1" {
  →        + ...
  →        +ºid     º    = (known after apply)
  →        +ºmachineº    = (known after apply)
  →        + memory      = 1024
  →        + ...
  →        + console { ...  }
  →        + disk {  ...  }
  →
  →        + graphics {
  →            + autoport       = true
  →            + listen_address = "127.0.0.1"
  →            + listen_type    = "address"
  →           º+ type           = "spice"º
  →          }
  →
  →        + network_interface { ...  }
  →      }
  →
  →    # libvirt_volume.centos7-qcow2 will be created
  →    + resource "libvirt_volume" "centos7-qcow2" {
  →        + ...
  →        + pool   = "default"
  →        + size   = (known after apply)
  →        + source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
  →      }
  RºWARN:º You didn't specify an "-out" parameter to save this plan, so Terraform
           can't guarantee that exactly these actions will be performed if
           "terraform apply" is subsequently run.
BºSTEP 4:  "Execute" planº
  $ terraformºapplyº
  → libvirt_volume.centos7-qcow2: Creating...
  →   format: "" =˃ "qcow2"
  →   ...
  → libvirt_volume.centos7-qcow2:ºCreation completeºafter 8s (ID:º/var/lib/libvirt/images/db.qcow2º)
  → libvirt_domain.db1: Creating...
  →   arch:                             "" =˃ "˂computed˃"
  →   ...
  →  ºrunning:                          "" =˃ "true"º
  →   vcpu:                             "" =˃ "1"
  → libvirt_domain.db1:ºCreation completeºafter 0s (ID: e5ee28b9-e1da-4945-9eb0-0cda95255937)
  → Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

BºSTEP 5.1: Post creation checkº
  $ sudoºvirsh  listº
   Id   Name   State
  ----------------------
   ...
   7   ºdb1    runningº

BºSTEP 5.2: Post creation checkº
  $ sudoºvirsh net-dhcp-leases defaultº
  
   Expiry        MAC      Protocol IP       Hostname Client ID
   Time          address           address           or DUID
  --------------------------------------------------------------
   .. 16:11:18   52:54:.. ipv4     192...      -     -
   .. 15:30:18   52:54:.. ipv4     192...     rhel8  ff:61:..:d1

BºSTEP 5.3: Post creation checkº
  $  ping -c 1 192....

BºCleaning resources (Destroy)º
  $ cd .../project01
  $ terraformºdestroyº

KVM setup
REF: @[https://computingforgeeks.com/how-to-provision-vms-on-kvm-with-terraform/]
RºWARN:ºKVM/libvirt provider is NOT officially supported by Hashicorp
        Maintained by Duncan Mac-Vicar P and others.

 ºstep 1:ºInstall KVM hypervisor
          (consult Linux distro)
 ºstep 2:ºcheck install step 1
  $ sudo systemctl start libvirtd
  $ sudo systemctl enable libvirtd

 ºstep 3:º(Debian/Ubuntu/...?)
  $ sudo modprobe vhost_net  ← Enable vhost-net kernel module
  $ echo vhost_net | sudo tee -a /etc/modules

 ºstep 4:ºInstall Terraform  provider for libvirt ("==" KVM)
  $ mkdir -p ~/.terraform.d/plugins # ←  This dir will store Terraform Plugins.
  $ cp terraform-provider-libvirt  ~/.terraform.d/plugins
       ^^^^^^^^^^^^^^^^^^^^^^^^^^
       Downloaded from:
  @[https://github.com/dmacvicar/terraform-provider-libvirt/releases]
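
  NOTE: on Terraform 0.13+ the provider can instead be declared in the
        configuration and fetched by "terraform init" (a sketch; the
        version constraint is just an example):
  │ terraform {
  │   required_providers {
  │     libvirt = {
  │       source  = "dmacvicar/libvirt"
  │       version = "~> 0.6"
  │     }
  │   }
  │ }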

BºKVM Troubleshootingº
- Error similar to "Can not read /var/lib/libvirt/images/...qcow2" on Ubuntu 18.04
  seems to be related to AppArmor according to:
 @[https://github.com/jedi4ever/veewee/issues/996#issuecomment-497976612]
 @[https://github.com/dmacvicar/terraform-provider-libvirt/issues/97]
 """ ... For testing purpose, I simply editº/etc/libvirt/qemu.confº  setting:
 security_driver = "none"
 """
cloud-init
https://cloudinit.readthedocs.io/en/latest/
"""industry standard multi-distribution method for
  cross-platform cloud instance initialization. It is supported across all major
  public cloud providers, provisioning systems for private cloud infrastructure,
  and bare-metal installations."""

Cloud-init has support across all major Linux distributions and FreeBSD:
Ubuntu ,SLES/openSUSE ,RHEL/CentOS ,Fedora ,Gentoo Linux ,Debian ,ArchLinux ,FreeBSD

Clouds

supported public clouds:
    AWS, Azure, GCP, Oracle Cloud, Softlayer, Rackspace Pub.Cloud,
    IBM Cloud, Digital Ocean, Bigstep, Hetzner, Joyent, CloudSigma,
    Alibaba Cloud, OVH ,OpenNebula ,Exoscale ,Scaleway ,CloudStack,
    AltCloud, SmartOS

supported private clouds:
    Bare metal installs , OpenStack , LXD, KVM, Metal-as-a-Service (MAAS)
Regula
@[https://www.helpnetsecurity.com/2020/01/16/fugue-regula/]
Fugue open sources Regula to evaluate Terraform for security misconfigurations 
and compliance violations
Pulumi
Monitoring
Infra vs App Monitoring
BºInfrastructure Monitoring:º
  - Prometheus + Grafana
    (Alternatives include Monit, Datadog, Nagios, Zabbix, ...)

BºApplication Monitoringº
  - Jaeger, New Relic
    (Alternatives include AppDynamics, Instana, OpenTracing) 
 
Log Management
  - Elastic Stack
    (Alternative include Graylog, Splunk, Papertrail, ...)
Non Classified
Yaml References
@[http://docs.ansible.com/ansible/YAMLSyntax.html]
YAML                               JSON
---                                {
key1: val1                             "key1": "val1",
key2:                                 "key2": [
 - "thing1"                            "thing1",
 - "thing2"                            "thing2"
# I am a comment                     ]
                                   }

Bº Anchors, references and extensionsº
---
key1:º⅋anchorº  ← Defines          {
 K1: "One"        the anchor         "key1": {
 K2: "Two"                             "K1": "One",
                                       "K2": "Two"
key2:º*anchorº  ← References/        },
                 uses the anch.      "key2": {
key3:                                  "K1": "One",
 º˂˂: *anchorº ← Extends anch.         "K2": "Two"
  K2: "I Changed"                    }
  K3: "Three"                        "key3": {
                                       "K1": "One",
                                       "K2": "I Changed",
                                       "K3": "Three"
                                     }
                                   }
                      
                               
RºWARNº: Many NodeJS parsers break  the extend.

BºExtend Inlineº
  - take only SOME sub-keys from key1 to inject into key2

---                                {
key1:                                "key1": {
 ˂˂:º⅋anchorº ← Inject into            "K1": "One",
   K1: "One"    key1 and save          "K2": "Two"
 K2: "Two"      as anchor            }, 
                                     "bar": {
bar:                                   "K1": "One",
º˂˂: *anchorº                          "K3": "Three"
 K3: "Three"                         }
                                   }

BºBash Aliasº
 (To be added to .bashrc | .profile | ...)
  alias yaml2js="python -c 'import sys, yaml, json; \
                 json.dump(yaml.load(sys.stdin), sys.stdout, indent=4)'"

$º$ cat in.yaml | yaml2js º(json output)

RºWARN:º - Unfortunately there is no way to override or
           extend lists to append new elements to existing ones,
           only maps/dictionaries with the º˂˂ operatorº:
           º˂˂º "inserts" the values of the referenced map into
           the current one being defined.
Nexus
Nexus Repository Management
https://blog.sonatype.com/using-nexus-3-as-your-repository-part-1-maven-artifacts
https://blog.sonatype.com/using-nexus-3-as-your-repository-part-2-npm-packages
https://blog.sonatype.com/using-nexus-3-as-your-repository-part-3-docker-images
Containerization

Reproducible Builds
@[https://reproducible-builds.org/]
- Reproducible builds are a set of software development practices 
  that create an independently-verifiable path from source to binary 
  code.
Container Standars
OCI Spec.
@[https://www.opencontainers.org/faq]

OCI mission: promote a set of common, minimal, open standards 
             and specifications around container technology
             focused on creating formal specification for 
             container image formats and runtime

- values: (mostly adopted from the appc founding values)
  - Composable: All tools for downloading, installing, and running containers should be well integrated, but independent and composable.
  - Portable: runtime standard should be usable across different hardware, 
    operating systems, and cloud environments.
  - Secure: Isolation should be pluggable, and the cryptographic primitives
    for strong trust, image auditing and application identity should be solid.
  - Decentralized: Discovery of container images should be simple and
    facilitate a federated namespace and distributed retrieval.
  - Open: format and runtime should be well-specified and developed by
          a community. 
  - Code leads spec, rather than vice-versa.
  - Minimalist: do a few things well, be minimal and stable.
  - Backward compatible.

- Docker donated both a draft specification and a runtime and code
  associated with a reference implementation of that specification:

BºIt includes entire contents of the libcontainer project, includingº
Bº"nsinit" and all modifications needed to make it run independentlyº 
Bºof Docker.  . This codebase, called runc, can be found at         º
Bºhttps://github.com/opencontainers/runc                            º

- the responsibilities of the Technical Oversight Board (TOB)
  can be followed at https://github.com/opencontainers/tob:
  - Serving as a source of appeal if the project technical leadership 
    is not fulfilling its duties or is operating in a manner that is
    clearly biased by the commercial concerns of the technical 
    leadership’s employers.
  - Reviewing the tests established by the technical leadership for 
    adherence to specification
  - Reviewing any policies or procedures established by the technical leadership.

- The OCI seeks rough consensus and running code first.

- What is the OCI's perspective on the difference between a standard and a specification?

- The v1.0.0 specifications were released on 2017-07-19.

- Adopted by:
  - Cloud Foundry community by embedding runc via Garden 
  - Kubernetes is incubating a new Container Runtime Interface (CRI) 
    that adopts OCI components via implementations like CRI-O and rktlet.
  - rkt community is adopting OCI technology already and is planning
    to leverage the reference OCI container runtime runc in 2017.
  - Apache Mesos.
  - AWS announced OCI image format in its Amazon EC2 Container Registry (ECR).

- Will the runtime and image format specs support multiple platforms?

- How does OCI integrate with CNCF?
    A container runtime is just one component of the cloud native 
  technical architecture but the container runtime itself is out of 
  initial scope of CNCF (as a CNCF project), see the charter Schedule A 
  for more information.
runc
@[https://github.com/opencontainers/runc]
- Reference runtime and cli tool donated by Docker
  for spawning and running containers according to the OCI 
  specification:
@[https://www.opencontainers.org/]

- Based on Go.

-BºIt reads a runtime specification and configures the Linux kernel.º
  - Eventually it creates and starts container processes.
  RºGo might not have been the best programming language for this taskº.
  Rºsince it does not have good support for the fork/exec model of computing.º
  Rº- Go's threading model expects programs to fork a second process      º
  Rº  and then to exec immediately.                                       º
  Rº- However, an OCI container runtime is expected to fork off the first º
  Rº  process in the container.  It may then do some additional           º
  Rº  configuration, including potentially executing hook programs, beforeº
  Rº  exec-ing the container process. The runc developers have added a lotº
  Rº  of clever hacks to make this work but are still constrained by Go's º
  Rº  limitations.                                                        º
  Bºcrun, C based, solved those problems.º

- reference implementation of the OCI runtime specification.
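
BºMinimal 'runc' usage sketch (adapted from the runc README; names are arbitrary):º
  $ mkdir -p mycontainer/rootfs
  $ docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -
                  └─────────────────────┘
                  any root filesystem works; here one is borrowed from a Docker image
  $ cd mycontainer
  $ runc spec              ← generates a default OCI config.json
  $ sudo runc run testcid  ← runs the container described by config.json + rootfs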


crun
@[https://github.com/containers/crun/issues]
@[https://www.redhat.com/sysadmin/introduction-crun]
- fast, low-memory-footprint container runtime by Giuseppe Scrivano (Red Hat).
- C based: Unlike Go, C is not multi-threaded by default, and was built and
  designed around the fork/exec model. It can handle the fork/exec OCI runtime
  requirements in a much cleaner fashion than 'runc'. C also interacts very
  well with the Linux kernel.
  It is also lightweight, with much smaller size and memory use than runc(Go):
  compiled with -Os, the 'crun' binary is ~300k (vs ~15M 'runc'),
  Bºor 50 times smallerº, and up to Bºtwice as fastº.
  """ We have experimented running a container with just a Bº250K limit setº."""
- cgroups v2 ("==" Upstream kernel, Fedora 31+) compliant from scratch,
  while runc -Docker/K8s/...- Rºgets "stuck" into cgroups v1.º
  (experimental support in 'runc' for v2 as of v1.0.0-rc91, thanks to
   Kolyshkin and Akihiro Suda).
- feature-compatible with "runc", with extra experimental features.
- Given the same Podman CLI/k8s YAML we get the same containers "almost always"
  since Bºthe OCI runtime's job is to instrument the kernel toº
  Bºcontrol how PID 1 of the container runs.º
  BºIt is up to higher-level tools like conmon or the container engine toº
  Bºmonitor the container.º
- Sometimes users want to limit the number of PIDs in containers to just one.
  With 'runc' the PIDs limit can not be set too low, because the Go runtime
  spawns several threads. 'crun', written in C, does not have that problem. Ex:
  $º$ RUNC="/usr/bin/runc" , CRUN="/usr/bin/crun"                           º
  $º$ podman --runtime $RUNC run --rm --pids-limit 5 fedora echo it works   º
                                      └────────────┘
    → RºError: container create failed (no logs from conmon): EOFº
  $º$ podman --runtime $CRUN run --rm --pids-limit 1 fedora echo it works   º
                                      └────────────┘
    → Bºit worksº
- OCI hooks supported, allowing the execution of specific programs at
  different stages of the container's lifecycle.
- runc/crun comparative:
  $º$ CMD_RUNC="for i in {1..100}; do runc run foo ˂ /dev/null; done"º
  $º$ CMD_CRUN="for i in {1..100}; do crun run foo ˂ /dev/null; done"º
  $º$ time -v sh -c "$CMD_RUNC" º
  → User time (seconds): 2.16
  → System time (seconds): 4.60
  → Elapsed (wall clock) time (h:mm:ss or m:ss): 0:06.89
  → Maximum resident set size (kbytes): 15120
  → ...
  $º$ time -v sh -c "$CMD_CRUN" º
  → ...
  → User time (seconds): 0.53
  → System time (seconds): 1.87
  → Elapsed (wall clock) time (h:mm:ss or m:ss): 0:03.86
  → Maximum resident set size (kbytes): 3752
  → ...
- Experimental features:
  - Redirecting hooks STDOUT/STDERR via annotations
    (controlling stdout and stderr of OCI hooks):
    - Debugging hooks can be quite tricky because, by default, it's not
      possible to get the hook's stdout and stderr.
    - Getting the error or debug messages may require some yoga.
    - Common trick: log to syslog to access hook-logs via journalctl.
      (Not always possible)
    - With 'crun' + 'Podman':
      $º$ podman run --annotation run.oci.hooks.stdout=/tmp/hook.stdoutº
                     └───────────────────────────────────┘
                     executed hooks will write:
                     STDOUT → /tmp/hook.stdout
                     STDERR → /tmp/hook.stderr
                     Bº(proposed for OCI runtime spec)º
  - crun supports running older versions of systemd on cgroup v2 using
    --annotation run.oci.systemd.force_cgroup_v1.
    This forces a cgroup v1 mount inside the container for the name=systemd
    hierarchy, which is enough for systemd to work.
    Useful to run older container images, such as RHEL7, on a
    cgroup v2-enabled system. Ex:
    $º$ podman run --annotation run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup \ º
    $º         centos:7 /usr/lib/systemd/systemd                                 º
- crun as a library:
  "We are considering to integrate it with Bºconmon, the container monitor usedº
   Bºby Podman and CRI-O, rather than executing an OCI runtime."º
- 'crun' Extensibility:
  """... easily to use all the kernel features, including syscalls not
  enabled in Go."""
  - Ex: openat2 syscall protects against link path attacks
    (already supported by crun).
- 'crun' is more portable: Ex: Risc-V.
Container Network Iface (CNI)
@[https://github.com/containernetworking/cni]
- specification and libraries for writing plugins to configure network interfaces
  in Linux containers, along with a number of supported plugins:
  - libcni, a CNI runtime implementation
  - skel, a reference plugin implementation
    github.com/containernetworking/cni
- CNI concerns itself only with network connectivity of containers
  and removing allocated resources when the container is deleted.
- CNI Spec
- Set of reference and example plugins:
  - Interface plugins:  ptp, bridge, macvlan, ...
  - "Chained" plugins:  portmap, bandwidth, tuning,
    github.com/containernetworking/plugins

    NOTE: Plugins are executable programs with STDIN/STDOUT
                                  ┌ Network
                ┌─────→(STDIN)    │
  Runtime → ADD JSON    CNI ···───┤ 
   ^        ^^^         executable│
   │        ADD         plugin    └ Container(or Pod)
   │        DEL         └─┬──┘      Interface
   │        CHECK         v
   │        VERSION    (STDOUT) 
   │                 └────┬──────┘
   │                      │
   └──── JSON result ─────┘

 ºRuntimesº            º3rd party pluginsº
  K8s, Mesos, podman,   Calico ,Weave, Cilium,
  CRI-O, AWS ECS, ...   ECS CNI, Bonding CNI,...

- The idea of CNI is to provide a common interface between
  the runtime and the CNI (executable) plugins through
  standardised JSON messages.

  Example cli Tool  executing CNI config:
@[https://github.com/containernetworking/cni/tree/master/cnitool]
   INPUT_JSON
   {
     "cniVersion":"0.4.0",   ← Standard attribute
     "name":Bº"myptp"º,
     "type":"ptp",
     "ipMasq":true,
     "ipam": {               ← Plugin specific attribute
       "type":"host-local",
       "subnet":"172.16.29.0/24",
       "routes":[{"dst":"0.0.0.0/0"}]
     }
   }
   $ echo $INPUT_JSON | \                  ← Create network config
     sudo tee /etc/cni/net.d/10-myptp.conf   it can be stored on file-system
                                             or runtime artifacts (k8s etcd,...)

   $ sudo ip netns add testing             ← Create network namespace.
                       └-----┘

   $ sudo CNI_PATH=./bin \                 ← Add container to network
     cnitool add Bºmyptpº  \
     /var/run/netns/testing

   $ sudo CNI_PATH=./bin \                ← Check config
     cnitool check myptp \
     /var/run/netns/testing


   $ sudo ip -n testing addr               ← Test
   $ sudo ip netns exec testing \
     ping -c 1 4.2.2.2

   $ sudo CNI_PATH=./bin \                 ← Clean up
     cnitool del myptp \
     /var/run/netns/testing
   $ sudo ip netns del testing

BºMaintainers (2020):º
  - Bruce Ma (Alibaba)
  - Bryan Boreham (Weaveworks)
  - Casey Callendrello (IBM Red Hat)
  - Dan Williams (IBM Red Hat)
  - Gabe Rosenhouse (Pivotal)
  - Matt Dupre (Tigera)
  - Piotr Skamruk (CodiLime)
  - "CONTRIBUTORS"

BºChat channelsº
  - https://slack.cncf.io  - topic #cni
Portainer UI
(See also LazyDocker)
- Portainer, an open-source management interface used to manage a 
  Docker host, Swarm and k8s cluster.
- It's used by software engineers and DevOps teams to simplify and
  speed up software deployments.

Available on LINUX, WINDOWS ⅋ OSX
$ docker container run -d \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
Docker
External Links
- @[https://docs.docker.com/]
- @[https://github.com/jdeiviz/docker-training] D.Peman@github
- @[https://github.com/jpetazzo/container.training] container.training@Github
- @[http://container.training/]

Docker API
- @[https://docs.docker.com/engine/api/])
- @[https://godoc.org/github.com/docker/docker/api]
- @[https://godoc.org/github.com/docker/docker/api/types]

DockerD summary

dockerD can listen for Engine API requests via:
 - IPC socket: default /var/run/docker.sock
 - tcp       : WARN: default setup un-encrypted/un-authenticated 
 - fd        : Systemd based systems only. 
               dockerd -H fd://. 
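
  Ex. enabling both the local socket and a TLS-protected TCP listener
  (a sketch; certificate file names are placeholders):
  $º$ dockerd -H unix:///var/run/docker.sock \                  º
  $º          -H tcp://0.0.0.0:2376 \                           º
  $º          --tlsverify --tlscacert=ca.pem \                  º
  $º          --tlscert=server-cert.pem --tlskey=server-key.pem º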


BºDaemon configuration Optionsº
@[https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file]

  └ In the official Docker install, options must be set in the file
   º/lib/systemd/system/docker.serviceº, adding them to the ExecStart= line.
    After editing the file, systemd must reload the service:
    $º$ sudo systemctl stop  docker.serviceº 
    $º$ sudo systemctl daemon-reload       º
    $º$ sudo systemctl start docker.serviceº 
--config-file string default "/etc/docker/daemon.json"
  -D, --debug           Enable debug mode
  
  --experimental        Enable experimental features
  
  --icc         Enable inter-container communication (default true)
  --log-driver string   default "json-file"
  -l, --log-level string  default "info"
  
  --mtu int  Set the containers network MTU
  --network-control-plane-mtu int         Network Control plane MTU (default 1500)
  
  --rootless  Enable rootless mode; typically used with RootlessKit (experimental)

BºSTORAGE:º
Oº--data-root   def:"/var/lib/docker"º
Oº--exec-root   def:"/var/run/docker"º

  --storage-driver def: overlay2
  --storage-opt  "..."
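
  Ex. /etc/docker/daemon.json gathering some of the options above
  (a sketch; values are illustrative only):
  {
    "debug": false,
    "log-driver": "json-file",
    "log-opts": { "max-size": "50m", "max-file": "5" },
    "data-root": "/var/lib/docker",
    "storage-driver": "overlay2"
  }
  (restart the daemon afterwards: $º$ sudo systemctl restart docker.serviceº)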

 ºENVIRONMENT VARIABLESº
    DOCKER_DRIVER     The graph driver to use.
    DOCKER_RAMDISK    If set this will disable "pivot_root".
  BºDOCKER_TMPDIR     Location for temporary Docker files.º
    MOBY_DISABLE_PIGZ Do not use unpigz to decompress layers in parallel
                      when pulling images, even if it is installed.
    DOCKER_NOWARN_KERNEL_VERSION Prevent warnings that your Linux kernel is 
                     unsuitable for Docker.



BºDaemon storage-driverº:
  See also: @[https://docs.docker.com/storage/storagedriver/]
  Docker daemon support next storage drivers:
  └ aufs        :Rºoldest (linux kernel patch unlikely to be merged)º
  ·              BºIt allows containers to share executable and shared library memory, º
  ·              Bº→ useful choice when running thousands of repeated containersº
  └ devicemapper:
  · thin provisioning and Copy on Write (CoW) snapshots. 
  · - For each devicemapper graph location - /var/lib/docker/devicemapper -
  ·   a thin pool is created based on two block devices:
  ·   - data    : loopback mount of automatically created sparse file
  ·   - metadata: loopback mount of automatically created sparse file
  ·
  └ btrfs       :
  · -Bºvery fastº
  · -Rºdoes not share executable memory between devicesº
  · -$º# dockerd -s btrfs -g /mnt/btrfs_partition º
  ·
  └ zfs         :
  · -Rºnot as fast as btrfsº
  · -Bºlonger track record on stabilityº.
  · -BºSingle Copy ARC shared blocks between clones allowsº
  ·  Bºto cache just onceº
  · -$º# dockerd -s zfsº  ← select a different zfs filesystem by setting
  ·                         set zfs.fsname option
  ·
  └ overlay     :
  · -Bºvery fast union filesystemº.
  · -Bºmerged in the main Linux kernel 3.18+º
  · -Bºsupport for page cache sharingº
  ·    (multiple containers accessing the same file
  ·     can share a single page cache entry/ies)
  · -$º# dockerd -s overlay º
  · -RºIt can cause excessive inode consumptionº
  ·
  └ overlay2    :
    -Bºsame fast union filesystem of overlayº
    -BºIt takes advantage of additional features in Linux kernel 4.0+
     Bºto avoid excessive inode consumption.º
    -$º#Call dockerd -s overlay2    º
    -Rºshould only be used over ext4 partitions (vs Copy on Write FS like btrfs)º

  @[https://www.infoq.com/news/2015/02/under-hood-containers]
  └ Vfs: a no-frills, no-magic storage driver, and one of the few
  ·      that can run Docker in Docker.
  └ Aufs: fast, memory hungry, not upstreamed driver, which is only 
  ·       present in the Ubuntu Kernel. If the system has the aufs utilities 
  ·       installed, Docker would use it. It eats a lot of memory in cases 
  ·       where there are a lot of start/stop container events, and has issues 
  ·       in some edge cases, which may be difficult to debug.
  ·
  └ "... Diffs are a big performance area because the storage driver needs to 
     calculate differences between the layers, and it is particular to 
     each driver. Btrfs is fast because it does some of the diff 
     operations natively..."
    
    - The Docker portable image format is composed of tar archives that 
      are largely for transit:
      - Committing container to image with commit.
      - Docker push and save.
      - Docker build to add context to existing image.
    
    - When creating an image, Docker will diff each layer and create a 
      tar archive of just the differences. When pulling, it will expand the 
      tar in the filesystem. If you pull and push again, the tarball will 
      change, because it went through a mutation process, permissions, file 
      attributes or timestamps may have changed.
    
    - Signing images is very challenging, because, despite images being 
      mounted as read only, the image layer is reassembled every time. Can 
      be done externally with docker save to create a tarball and using gpg 
      to sign the archive.


BºDocker runtime execution optionsº
  └ The daemon relies on a OCI compliant runtime (invoked via the 
    containerd daemon) as its interface to the Linux kernel namespaces, 
    cgroups, and SELinux. More info at:
    - @[/DevOps/linux_administration_summary.html?id=selinux_summary].
    
  └ By default,Bºdockerd automatically starts containerdº.
    - to control/tune containerd startup, manually start 
      containerd and pass the path to the containerd socket
      using the --containerd flag. For example:
    $º# dockerd --containerd /var/run/dev/docker-containerd.sockº


BºInsecure registriesº

  └ Docker considers a private registry either:
    - secure
      - It uses TLS.
      - CA cert exists in /etc/docker/certs.d/myregistry:5000/ca.crt. 
    - insecure
      - not TLS used or/and
      - CA-certificate unknown.
      -º--insecure-registry myRegistry:5000º needs to be added to the docker
        daemon config. The config path can vary depending on the system.
        It can be similar to next one in a SystemD enabled OS:
       º/etc/systemd/system/docker.service.d/docker-options.confº
        [Service]
        Environment="DOCKER_OPTS= --iptables=false \
         \
         \
        --data-root=/var/lib/docker \
        --log-opt max-size=50m --log-opt max-file=5 \
        --insecure-registry steps.everis.com:10114 \
        "


BºDaemon user namespace optionsº
  - The Linux kernel user namespace support provides additional security 
    by enabling a process, and therefore a container, to have a unique 
    range of user and group IDs which are outside the traditional user 
    and group range utilized by the host system. Potentially the most 
    important security improvement is that, by default, container 
 ☞Bºprocesses running as the root user will have expected administrativeº
  Bºprivilege (with some restrictions) inside the container but willº
  Bºeffectively be mapped to an unprivileged uid on the host.º
    More info at:
  @[https://docs.docker.com/engine/security/userns-remap/]
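
  Ex. enabling the remap in /etc/docker/daemon.json (a sketch; "default"
  tells Docker to create and use the "dockremap" user and its subordinate
  uid/gid ranges):
  {
    "userns-remap": "default"
  }
  $º$ sudo systemctl restart docker.serviceº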

- Docker supports softlinks for :
  - Docker data directory:  (def. /var/lib/docker)
  - temporary   directory:  (def. /var/lib/docker/tmp)
Resizing containers with the Device Mapper
@[http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/]
$ docker help
Usage:	docker COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/root/.docker")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/root/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/root/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/root/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:       | Commands:
            Manage ...     |   attach      Attach local STDIN/OUT/ERR streams to a running container
config      Docker configs |   build       Build an image from a Dockerfile
container   containers     |   commit      Create a new image from a container's changes
image       images         |   cp          Copy files/folders between a container and the local filesystem
network     networks       |   create      Create a new container
node        Swarm nodes    |   diff        Inspect changes to files or directories on a container's filesystem
plugin      plugins        |   events      Get real time events from the server
secret      Docker secrets |   exec        Run a command in a running container
service     services       |   export      Export a container's filesystem as a tar archive
swarm       Swarm          |   history     Show the history of an image
system      Docker         |   images      List images
trust       trust on       |   import      Import the contents from a tarball to create a filesystem image
            Docker images  |   info        Display system-wide information
volume      volumes        |   inspect     Return low-level information on Docker objects
                           |   kill        Kill one or more running containers
                           |   load        Load an image from a tar archive or STDIN
                           |   login       Log in to a Docker registry
                           |   logout      Log out from a Docker registry
                           |   logs        Fetch the logs of a container
                           |   pause       Pause all processes within one or more containers
                           |   port        List port mappings or a specific mapping for the container
                           |   ps          List containers
                           |   pull        Pull an image or a repository from a registry
                           |   push        Push an image or a repository to a registry
                           |   rename      Rename a container
                           |   restart     Restart one or more containers
                           |   rm          Remove one or more containers
                           |   rmi         Remove one or more images
                           |   run         Run a command in a new container
                           |   save        Save one or more images to a tar archive (streamed to STDOUT by default)
                           |   search      Search the Docker Hub for images
                           |   start       Start one or more stopped containers
                           |   stats       Display a live stream of container(s) resource usage statistics
                           |               ("top" summary for all existing containers)
                           |   stop        Stop one or more running containers
                           |   tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
                           |   top         Display the running processes of a container
                           |             RºWARN:º"docker stats" is really what most people want
                           |                   when searching for a tool similar to UNIX "top".
                           |   unpause     Unpause all processes within one or more containers
                           |   update      Update configuration of one or more containers
                           |   version     Show the Docker version information
                           |   wait        Block until one or more containers stop, then print their exit codes
Install ⅋ setup
Proxy settings
To configure Docker to work with an HTTP or HTTPS proxy server, follow
instructions for your OS:
Windows - Get Started with Docker for Windows
macOS   - Get Started with Docker for Mac
Linux   - Control⅋config. Docker with Systemd
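
Ex. for a SystemD based Linux host (a sketch; the proxy host/port and the
internal domain are placeholders):
 º/etc/systemd/system/docker.service.d/http-proxy.confº
  [Service]
  Environment="HTTP_PROXY=http://proxy.example.com:3128"
  Environment="HTTPS_PROXY=http://proxy.example.com:3128"
  Environment="NO_PROXY=localhost,127.0.0.1,.example.internal"

  $º$ sudo systemctl daemon-reload          º
  $º$ sudo systemctl restart docker.serviceº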
docker global info
system setup
running/paused/stopped cont.
$ sudo docker info
Containers: 23
 Running: 10
 Paused: 0
 Stopped: 1
Images: 36
Server Version: 17.03.2-ce
ºStorage Driver: devicemapperº
 Pool Name: docker-8:0-128954-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
ºData Space Used: 3.014 GBº
ºData Space Total: 107.4 GBº
ºData Space Available: 16.11 GBº
ºMetadata Space Used: 4.289 MBº
ºMetadata Space Total: 2.147 GBº
ºMetadata Space Available: 2.143 GBº
ºThin Pool Minimum Free Space: 10.74 GBº
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
ºData loop file: /var/lib/docker/devicemapper/devicemapper/dataº
ºMetadata loop file: /var/lib/docker/devicemapper/devicemapper/metadataº
 Library Version: 1.02.137 (2016-11-30)
ºLogging Driver: json-fileº
ºCgroup Driver: cgroupfsº
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
ºSecurity Options:º
º seccompº
º  Profile: defaultº
Kernel Version: 4.17.17-x86_64-linode116
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.838 GiB
Name: 24x7
ID: ZGYA:L4MN:CDCP:DANS:IEHQ:XYLD:C5KG:SUL4:3XLQ:ZO6M:3RSY:V6VB
ºDocker Root Dir: /var/lib/dockerº
ºDebug Mode (client): falseº
ºDebug Mode (server): falseº
*Registry: https://index.docker.io/v1/*
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
/var/run/docker.sock
@[https://medium.com/better-programming/about-var-run-docker-sock-3bfd276e12fd]
- Unix socket the Docker daemon listens on by default,
  used to communicate with the daemon from within a container.
- Can be mounted on containers to allow them to control Docker:
$ docker runº-v /var/run/docker.sock:/var/run/docker.sockº  ....

USAGE EXAMPLE:

# STEP 1. Create new container
$ curl -XPOSTº--unix-socket /var/run/docker.sockº \
  -d '{"Image":"nginx"}' \
  -H 'Content-Type: application/json' \
  http://localhost/containers/create
Returns something similar to:
→ {"Id":"fcb65c6147efb862d5ea3a2ef20e793c52f0fafa3eb04e4292cb4784c5777d65","Warnings":null}

# STEP 2. Use /containers/{id}/start to start the newly created container.
$ curl -XPOSTº--unix-socket /var/run/docker.sockº \
  http://localhost/containers/fcb6...7d65/start

# STEP 3: Verify it's running:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fcb65c6147ef nginx “nginx -g ‘daemon …” 5 minutes ago Up 5 seconds 80/tcp, 443/tcp ecstatic_kirch
...

ºStreaming events from the Docker daemonº

- The Docker API also exposes the */events* endpoint

$ curlº--unix-socket /var/run/docker.sockº http://localhost/events
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  command hangs on, waiting for new events from the daemon.
  Each new event will then be streamed from the daemon.
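
  The same event stream is also available through the CLI (a small sketch; the
  filter values are just examples):
  $ docker events --since 30m --filter 'type=container' --filter 'event=start'
    ^ hangs like the curl call above, printing one line per matching event.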
Rootless Docker
@[https://docs.docker.com/engine/security/rootless/]
Docker components
Docker Networks
Create new network and use it in containers:
  $ docker ºnetwork createº OºredisNetworkº
  $ docker run --rm --name redis-server --network OºredisNetworkº -d redis
  $ docker run --rm --network OºredisNetworkº -it redis redis-cli -h redis-server -p 6379

List networks:
  $ docker network ls

Disconnect and connect a container from/to the network:
  $ docker network disconnect OºredisNetworkº redis-server
  $ docker network connect --alias db OºredisNetworkº redis-server

- TODO:
@[https://github.com/tldr-pages/tldr/blob/master/pages/common/kompose.md]
  kompose: a tool to convert docker-compose applications to Kubernetes
  (usage sketch below). More info: https://github.com/kubernetes/kompose
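
A minimal kompose sketch (assuming the compose file is named docker-compose.yml):
  $ kompose convert -f docker-compose.yml   # ← writes one Kubernetes manifest per service
  $ kubectl apply -f .                      # ← deploy the generated manifests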
Volumes

REUSE VOLUME FROM CONTAINER:
  STEP 0: Create new container with volume
    host-mach $ docker run -it Oº--name alphaº º-v "hostPath":/var/logº ubuntu bash
    container $ date > /var/log/now

  STEP 1: Create new container using volume from previous container:
    host-mach $ docker run --volumes-from Oºalphaº ubuntu
    container $ cat /var/log/now

CREATE VOLUME FOR REUSE IN DIFFERENT CONTAINERS

  STEP 0: Create Volume
  host-mach $ docker volume create --name=OºwebsiteVolumeº
  STEP 1: Use volume in new container
  host-mach $ docker run -d -p 8888:80 \
              -v OºwebsiteVolumeº:/usr/share/nginx/html \
              -v logs:/var/log/nginx nginx
  host-mach $ docker run \
              -v OºwebsiteVolumeº:/website \
              -w /website \
              -it alpine vi index.html

Ex.: Update redis version without losing data:
  host-mach $ docker network create dbNetwork
  host-mach $ docker run -d --network dbNetwork \
              --network-alias redis \
              --name redis28 redis:2.8
  host-mach $ docker run -it --network dbNetwork \
              alpine telnet redis 6379
              → SET counter 42
              → INFO server
              → SAVE
              → QUIT
  host-mach $ docker stop redis28
  host-mach $ docker run -d --network dbNetwork \
              --network-alias redis \
              --name redis30 \
              --volumes-from redis28 \
              redis:3.0
  host-mach $ docker run -it --network dbNetwork \
              alpine telnet redis 6379
              → GET counter
              → INFO server
              → QUIT
docker-compose

- YAML file defining services, networks and volumes. 
  Full ref: @[https://docs.docker.com/compose/compose-file/]

Best practices:
 @[https://docs.docker.com/compose/production/]

BºExample 1º
  C⅋P from https://github.com/bcgov/moh-prime/blob/develop/docker-compose.yml

  version: "3"
  
  services:
  ######################################################### Database #
    postgres:
      restart: always
      container_name: primedb
    Bºimage: postgres:10.6º                 # ← use pre-built image
      environment:
        POSTGRES_PASSWORD: postgres
        ...
      ports:
        - "5432:5432"
      volumes:
        - local_postgres_data:/var/lib/postgresql/data
    Oºnetworks:º                            # ← Networks to connect to
    Oº  - primenetº
  ########################################################## MongoDB #
    mongo:
      restart: always
      container_name: primemongodb
      image: mongo:3
      environment:
        MONGO_INITDB_ROOT_USERNAME: root
        ...
      ports:
        - 8081:8081
      volumes:
        - local_mongodb_data:/var/lib/mongodb/data
    Oºnetworks:º
    Oº  - primenetº
  ############################################################## API #
    dotnet-webapi:
      container_name: primeapi
      restart: always
     ºbuild:º                               # ← use Dockerfile to build image
        context: prime-dotnet-webapi/  RºWARNº: remember to rebuild image and recreate
                                              app’s containers like:
                                            ┌───────────────────────────────────────────────┐
                                            │ $ docker-compose build dotnet-webapi          │
                                            │                                               │
                                            │ $ docker-compose up \ ← stop,destroy,recreate │
                                            │   --no-deps           ← prevents from also    │
                                            │   -d dotnet-webapi      recreating any service│
                                            │                         primeapi depends on.  │
                                            └───────────────────────────────────────────────┘
      command: "..."
      environment:
        ...
    Oºports:          º  ← Exposed ports outside private "primenet" network
    Oº  - "5000:8080" º  ← Map internal port (right) to "external" port
    Oº  - "5001:5001" º
    Oºexpose:º          ←   Expose ports without publishing to host machine 
    Oº   - "5001"º          (only accessible to linked services).
                             Use internal port.
    Oºnetworks:º
    Oº  - primenetº
      depends_on:
        - postgres
  ##################################################### Web Frontend #
    nginx-angular:
      build:
           context: prime-angular-frontend/
      ...
  ################################################ Local SMTP Server #
    mailhog:
      container_name: mailhog
      restart: always
      image: mailhog/mailhog:latest
      ports:
        - 25:1025
        - 1025:1025
        - 8025:8025 # visit localhost:8025 to see the list of captured emails
      ...
  ########################################################### Backup #
    backup:
      ...
      restart: on-failure
      volumes:
      Oº- db_backup_data:/opt/backupº
      ...
  
  volumes:
    local_postgres_data:
    local_mongodb_data:
    db_backup_data:
  
Oºnetworks:º
    primenet:
      driver: bridge
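
  Typical lifecycle commands for a compose file like the one above
  (service name taken from Example 1):
  $ docker-compose up -d                  # ← create + start all services in background
  $ docker-compose ps                     # ← list service containers and their state
  $ docker-compose logs -f dotnet-webapi  # ← follow the logs of one service
  $ docker-compose down                   # ← stop and remove containers + networks
  $ docker-compose down -v                # ← ... also remove the named volumes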

BºExample 2º
  ---
  version: '3.6'
  
  x-besu-bootnode-def:
    ⅋besu-bootnode-def
    restart: "on-failure"
    image: hyperledger/besu:${BESU_VERSION:-latest}
    environment:
      - LOG4J_CONFIGURATION_FILE=/config/log-config.xml
    entrypoint:
      - /bin/bash
      - -c
      - |
        /opt/besu/bin/besu public-key export --to=/tmp/bootnode_pubkey;
        /opt/besu/bin/besu \
        --config-file=/config/config.toml \
        --p2p-host=$$(hostname -i) \
        --genesis-file=/config/genesis.json \
        --node-private-key-file=/opt/besu/keys/key \
        --min-gas-price=0 \
        --rpc-http-api=EEA,WEB3,ETH,NET,PERM,${BESU_CONS_API:-IBFT} \
        --rpc-ws-api=EEA,WEB3,ETH,NET,PERM,${BESU_CONS_API:-IBFT} ;
  
  x-besu-def:
    ⅋besu-def
    restart: "on-failure"
    image: hyperledger/besu:${BESU_VERSION:-latest}
    environment:
      - LOG4J_CONFIGURATION_FILE=/config/log-config.xml
    entrypoint:
      - /bin/bash
      - -c
      - |
        while [ ! -f "/opt/besu/public-keys/bootnode_pubkey" ]; do sleep 5; done ;
        /opt/besu/bin/besu \
        --config-file=/config/config.toml \
        --p2p-host=$$(hostname -i) \
        --genesis-file=/config/genesis.json \
        --node-private-key-file=/opt/besu/keys/key \
        --min-gas-price=0 \
        --rpc-http-api=EEA,WEB3,ETH,NET,PERM,${BESU_CONS_API:-IBFT} \
        --rpc-ws-api=EEA,WEB3,ETH,NET,PERM,${BESU_CONS_API:-IBFT} ;
  
  
  
  
  
  
  
  x-ethsignerProxy-def:
    ⅋ethsignerProxy-def
    image: consensys/quorum-ethsigner:${QUORUM_ETHSIGNER_VERSION:-latest}
    command: [
      "--chain-id=2018",
      "--http-listen-host=0.0.0.0",
      "--downstream-http-port=8545",
      "--downstream-http-host=rpcnode",
      "file-based-signer",
      "-k",
      "/opt/ethsigner/keyfile",
      "-p",
      "/opt/ethsigner/passwordfile"
    ]
    ports:
      - 8545
  
  services:
  
    validator1:
      ˂˂ : *besu-bootnode-def
      volumes:
        - public-keys:/tmp/
        - ./config/besu/config.toml:/config/config.toml
        - ./config/besu/permissions_config.toml:/config/permissions_config.toml
        - ./config/besu/log-config.xml:/config/log-config.xml
        - ./logs/besu:/var/log/
        - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json
        - ./config/besu/networkFiles/validator1/keys:/opt/besu/keys
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.11
  
    validator2:
      ˂˂ : *besu-def
      volumes:
        - public-keys:/opt/besu/public-keys/
        - ./config/besu/config.toml:/config/config.toml
        - ./config/besu/permissions_config.toml:/config/permissions_config.toml
        - ./config/besu/log-config.xml:/config/log-config.xml
        - ./logs/besu:/var/log/
        - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json
        - ./config/besu/networkFiles/validator2/keys:/opt/besu/keys
      depends_on:
        - validator1
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.12
  
    validator3:
      ˂˂ : *besu-def
      volumes:
        - public-keys:/opt/besu/public-keys/
        - ./config/besu/config.toml:/config/config.toml
        - ./config/besu/permissions_config.toml:/config/permissions_config.toml
        - ./config/besu/log-config.xml:/config/log-config.xml
        - ./logs/besu:/var/log/
        - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json
        - ./config/besu/networkFiles/validator3/keys:/opt/besu/keys
      depends_on:
        - validator1
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.13
  
    validator4:
      ˂˂ : *besu-def
      volumes:
        - public-keys:/opt/besu/public-keys/
        - ./config/besu/config.toml:/config/config.toml
        - ./config/besu/permissions_config.toml:/config/permissions_config.toml
        - ./config/besu/log-config.xml:/config/log-config.xml
        - ./logs/besu:/var/log/
        - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json
        - ./config/besu/networkFiles/validator4/keys:/opt/besu/keys
      depends_on:
        - validator1
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.14
  
    rpcnode:
      ˂˂ : *besu-def
      volumes:
        - public-keys:/opt/besu/public-keys/
        - ./config/besu/config.toml:/config/config.toml
        - ./config/besu/permissions_config.toml:/config/permissions_config.toml
        - ./config/besu/log-config.xml:/config/log-config.xml
        - ./logs/besu:/var/log/
        - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json
        - ./config/besu/networkFiles/rpcnode/keys:/opt/besu/keys
      depends_on:
        - validator1
      ports:
        - 8545:8545/tcp
        - 8546:8546/tcp
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.15
  
    ethsignerProxy:
      ˂˂ : *ethsignerProxy-def
      volumes:
        - ./config/ethsigner/password:/opt/ethsigner/passwordfile
        - ./config/ethsigner/key:/opt/ethsigner/keyfile
      depends_on:
        - validator1
        - rpcnode
      ports:
        - 18545:8545/tcp
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.40

    explorer:
      build: block-explorer-light/.
      image: quorum-dev-quickstart/block-explorer-light:develop
      depends_on:
        - rpcnode
      ports:
        - 25000:80/tcp
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.31
  
    prometheus:
      image: "prom/prometheus"
      volumes:
        - ./config/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
        - prometheus:/prometheus
      command:
        - --config.file=/etc/prometheus/prometheus.yml
      ports:
        - 9090:9090/tcp
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.32
  
    grafana:
      image: "grafana/grafana"
      environment:
        - GF_AUTH_ANONYMOUS_ENABLED=true
      volumes:
        - ./config/grafana/provisioning/:/etc/grafana/provisioning/
        - grafana:/var/lib/grafana
      ports:
        - 3000:3000/tcp
      networks:
        quorum-dev-quickstart:
          ipv4_address: 172.16.239.33
  
  volumes:
    public-keys:
    prometheus:
    grafana:
  
Oºnetworks:                           º
Oº  quorum-dev-quickstart:            º
Oº    driver: bridge                  º
Oº    ipam:                           º
Oº      config:                       º
Oº        - subnet: 172.16.239.0/24   º
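
  Note on the "x-..." blocks above: they are Compose extension fields combined
  with YAML anchors/merge keys, so common service config is written once and
  merged into each service. A minimal generic sketch of the pattern
  (names are placeholders):

  x-common: &common          # "&common" defines a reusable anchor
    restart: "on-failure"
    image: some/image:latest

  services:
    svc1:
      <<: *common            # merge the anchored mapping into this service
      ports:
        - 8080:8080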

SystemD Integration
@[https://gist.github.com/Luzifer/7c54c8b0b61da450d10258f0abd3c917]
- /etc/compose/docker-compose.yml
- /etc/systemd/system/docker-compose.service
  (Service unit to start and manage docker compose)
    [Unit]
    Description=Docker Compose container starter
    After=docker.service network-online.target
    Requires=docker.service network-online.target

    [Service]
    WorkingDirectory=/etc/compose
    Type=oneshot
    RemainAfterExit=yes

    ExecStartPre=-/usr/local/bin/docker-compose pull --quiet
    ExecStart=/usr/local/bin/docker-compose up -d
    ExecStop=/usr/local/bin/docker-compose down

    ExecReload=/usr/local/bin/docker-compose pull --quiet
    ExecReload=/usr/local/bin/docker-compose up -d

    [Install]
    WantedBy=multi-user.target

- /etc/systemd/system/docker-compose-reload.service
  (Executing unit to trigger reload on docker-compose.service)
    [Unit]
    Description=Refresh images and update containers

    [Service]
    Type=oneshot
    ExecStart=/bin/systemctl reload-or-restart docker-compose.service

- /etc/systemd/system/docker-compose-reload.timer
  (Timer unit to plan the reloads)
    [Unit]
    Description=Refresh images and update containers
    Requires=docker-compose.service
    After=docker-compose.service

    [Timer]
    OnCalendar=*:0/15

    [Install]
    WantedBy=timers.target
Registry ("Image repository")
@[https://docs.docker.com/registry/#what-it-is]
@[https://docs.docker.com/registry/introduction/]
BºSummaryº
  $º$ docker run -d -p 5000:5000 \ º ← Start registry
  $º  --restart=always             º 
  $º  --name registry registry:2   º 

  $º$ docker pull ubuntu           º ← Pull (example) image
  $º$ docker image tag ubuntu \    º ← Tag the image to "point" 
  $º  localhost:5000/myfirstimage  º   to local registry
  $º$ docker push \                º ← Push to local registry
  $º  localhost:5000/myfirstimage  º   
  $º$ docker pull \                º ← final Check
  $º  localhost:5000/myfirstimage  º   
  
  NOTE: to clean up the test setup:
  $º$ docker container stop  registry º
  $º$ docker container rm -v registry º
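
  For a plain-HTTP registry that other hosts must reach, each client daemon has
  to trust it explicitly; a minimal sketch (host name is a placeholder):
  $º$ cat /etc/docker/daemon.json                        º
  $º  { "insecure-registries": ["registry.local:5000"] } º
  $º$ sudo systemctl restart docker                      º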




Dockerize
@[https://github.com/jwilder/dockerize]
- utility to simplify running applications in docker containers. 
  BºIt allows you to:º
  Bº- generate app config. files at container startup timeº
  Bº  from templates and container environment variablesº
  Bº- Tail multiple log files to stdout and/or stderrº
  Bº- Wait for other services to be available using TCP, HTTP(S),º
  Bº  unix before starting the main process.º

typical use cases:
 - an application has one or more configuration files and you would like to
   control some of their values using environment variables:
   dockerize lets you set an environment variable and render/update the config
   file from a template before starting the containerized application.
 - forward logs from hardcoded files on the filesystem to stdout/stderr
   (Ex: nginx logs to /var/log/nginx/access.log and /var/log/nginx/error.log by default)
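
A minimal sketch of a dockerize invocation used as the container entrypoint
(template, log and script paths are placeholders):
  # Render config from ENV vars, wait for the DB port, tail a log file to
  # stdout, and only then exec the real start script:
  dockerize \
    -template /app/config.tmpl:/app/config.yml \
    -wait tcp://db:5432 -timeout 60s \
    -stdout /var/log/app/app.log \
    /app/start.sh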
Managing Containers
Boot-up/run container:
$ docker run \                             $ docker run \
  --rm  \        ←------ Remove ---------→   --rm  \
  --name clock  \        on exit             --name clock  \
 º-dº\             ← Daemon    interactive →º-tiº\
                     mode      mode
  jdeiviz/clock                              jdeiviz/clock


Show container logs:
$ docker logs clock
$ docker logs --tail 3 clock
$ docker logs --tail 1 --follow clock

Stop container:
$ docker stop clock   # sends SIGTERM, waits up to 10s, then kills ("docker kill" kills immediately)

Prune stopped containers:

$ docker container prune

container help:
$ docker container
ENTRYPOINT vs COMMAND
Extracted from:
https://stackoverflow.com/questions/21553353/what-is-the-difference-between-cmd-and-entrypoint-in-a-dockerfile

- Docker's default entrypoint is /bin/sh -c.
  - ENTRYPOINT allows you to override that default.
    - $ docker run --entrypoint ... allows overriding the effective entrypoint at run time.
    (ºENTRYPOINT is (purposely) more difficult to overrideº)
    - ENTRYPOINT is similar to the "init" process in Linux: it is the
      first command to be executed. The command (CMD) holds the params passed to
      the ENTRYPOINT.

- There is no default command (to be executed by the entrypoint).
  It must be indicated, either in the Dockerfile (CMD) or on the command line:
   $ docker run -i -t ubuntu bash
                             └─┬─┘
                        /bin/sh -c bash will be executed
                        └───┬─────┘
                        (or the non-default entrypoint, if one was set)

BºSince everything after the image name is passed to the entrypoint, a very
  nice behavior appearsº: containers act like binary executables:
  Ex. If using ENTRYPOINT ["/bin/cat"] then
      $ alias CAT="docker run myImage"
      $ CAT  /etc/passwd
        └┬┘
       will effectively execute the next command inside the container:
        ┌──┴────┐
      $ /bin/cat  /etc/passwd

  Ex. If using ENTRYPOINT ["redis", "-H", "something", "-u", "toto"],
      running the image is equivalent to executing redis with those default params:

      $ docker run redisimg get key
      (executes: redis -H something -u toto get key)
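
A minimal Dockerfile sketch of the ENTRYPOINT/CMD interplay (image name is arbitrary):
  FROM alpine
  # fixed "binary" the container always runs:
  ENTRYPOINT ["/bin/cat"]
  # default argument; replaced by anything given after the image name:
  CMD ["/etc/hostname"]

  $ docker build -t cat-img .
  $ docker run cat-img              # executes: /bin/cat /etc/hostname
  $ docker run cat-img /etc/passwd  # executes: /bin/cat /etc/passwd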
Monitoring running containers
Monitoring (Basic)
List containers instances:
   $ docker ps     # only running
   $ docker ps -a  # also finished, but not yet removed (docker rm ...)
   $ docker ps -lq # ID of the latest created container ('-l' latest, '-q' quiet/IDs only)

"top" containers showing Net IO read/writes, Disk read/writes:
   $ docker stats
   | CONTAINER ID   NAME                    CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O      PIDS
   | c420875107a1   postgres_trinity_cache  0.00%   11.66MiB / 6.796GiB   0.17%   22.5MB / 19.7MB  309MB / 257kB  16
   | fdf2396e5c72   stupefied_haibt         0.10%   21.94MiB / 6.796GiB   0.32%   356MB / 693MB    144MB / 394MB  39

   $ docker top 'containerID'
   | UID       PID     PPID    C  STIME  TTY   TIME     CMD
   | systemd+  26779   121423  0  06:11  ?     00:00:00 postgres: ddbbName cache 172.17.0.1(35678) idle
   | ...
   | systemd+  121423  121407  0  Jul06  pts/0 00:00:44 postgres
   | systemd+  121465  121423  0  Jul06  ?     00:00:01 postgres: checkpointer process
   | systemd+  121466  121423  0  Jul06  ?     00:00:26 postgres: writer process
   | systemd+  121467  121423  0  Jul06  ?     00:00:25 postgres: wal writer process
   | systemd+  121468  121423  0  Jul06  ?     00:00:27 postgres: autovacuum launcher process
   | systemd+  121469  121423  0  Jul06  ?     00:00:57 postgres: stats collector process
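
   "docker stats" also accepts --no-stream and a Go-template --format for
   one-shot, script-friendly output (a small sketch; output is illustrative):
   $ docker stats --no-stream \
       --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
   | NAME                     CPU %    MEM USAGE / LIMIT
   | postgres_trinity_cache   0.00%    11.66MiB / 6.796GiB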

SysDig
Container-focused Linux troubleshooting and monitoring tool.

Once Sysdig is installed as a process (or container) on the server,
it sees every process, every network action, and every file action
on the host. You can use Sysdig "live" or view any amount of historical
data via a system capture file.

Example: take a look at the total CPU usage of each running container:
   $ sudo sysdig -c topcontainers_cpu
   | CPU% container.name
   | ----------------------------------------------------
   | 80.10% postgres
   | 0.14% httpd
   | ...
   |

Example: Capture historical data:
   $ sudo sysdig -w historical.scap

Example: "Zoom into a client":
   $ sudo sysdig -pc -c topprocs_cpu container.name=client
   | CPU% Process container.name
   | ----------------------------------------------
   | 02.69% bash client
   | 31.04% curl client
   | 0.74% sleep client
Dockviz
@[https://github.com/justone/dockviz]
Show a graph of running containers dependencies and
image dependencies.

Other options:
$ºdockviz images -tº
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─82cdea7ab5b5 Virtual Size: 103.9 MB
  │   └─5dbd9cb5a02f Virtual Size: 103.9 MB
  │     └─74fe38d11401 Virtual Size: 209.6 MB Tags: ubuntu:12.04, ubuntu:precise
  ├─ef519c9ee91a Virtual Size: 100.9 MB
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─e7206bfc66aa Virtual Size: 98.5 MB
      └─cb12405ee8fa Virtual Size: 98.5 MB
        └─316b678ddf48 Virtual Size: 169.4 MB Tags: ubuntu:13.04, ubuntu:raring

$ºdockviz images -t -l º← show only labelled images
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─74fe38d11401 Virtual Size: 209.6 MB Tags: ubuntu:12.04, ubuntu:precise
  ├─ef519c9ee91a Virtual Size: 100.9 MB
  │ └─a7cf8ae4e998 Virtual Size: 171.3 MB Tags: ubuntu:12.10, ubuntu:quantal
  │   ├─5c0d04fba9df Virtual Size: 513.7 MB Tags: nate/mongodb:latest
  │   └─f832a63e87a4 Virtual Size: 243.6 MB Tags: redis:latest
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─316b678ddf48 Virtual Size: 169.4 MB Tags: ubuntu:13.04, ubuntu:raring


$ºdockviz images -t -iº ← Show incremental size rather than cumulative
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─82cdea7ab5b5 Virtual Size: 255.5 KB
  │   └─5dbd9cb5a02f Virtual Size: 1.9 KB
  │     └─74fe38d11401 Virtual Size: 105.7 MB Tags: ubuntu:12.04, ubuntu:precise
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─e7206bfc66aa Virtual Size: 190.0 KB
      └─cb12405ee8fa Virtual Size: 1.9 KB
        └─316b678ddf48 Virtual Size: 70.8 MB Tags: ubuntu:13.04, ubuntu:raring

Weave

cAdvisor+Prometheus+Grafana
@[https://blog.couchbase.com/monitoring-docker-containers-docker-stats-cadvisor-universal-control-plane/]
@[https://dzone.com/refcardz/intro-to-docker-monitoring?chapter=6]
@[https://github.com/google/cadvisor/blob/master/docs/running.md#standalone]
Managing Images
  Managing images
(List all image related commands with: $ docker image)

  $ docker images        # ← List local ("downloaded/installed") images

  $ docker search redis  # ← Search remote images @ Docker Hub: 

  $ docker rmi ${IMG_NAME}:${IMG_VER}   # ← remove (local) image
  $ docker image prune                  # ← removeºallºnon used images

-ºPUSH/PULL Images from Private Registry:º

  -ºPRE-SETUP:º (optional/opinionated, but recommended)
    Define ENV. VARS. in BºENVIRONMENTº file

    $ catBºENVIRONMENTº
    #  COMMON ENV. PARAMS for PRIVATE/PUBLIC REGISTRY: {{
    USER=user01
    IMG_NAME="postgres_custom"
    IMG_VER="1.0"  # ← Defaults to 'latest'
    # }} 
    # PRIVATE REGISTRY ENV. PARAMS ONLY : {{
    SESSION_TOKEN="dAhYK9Z8..."  # ← Updated Each 'N' hours
    REGISTRY=docker_registry.myCompany.com
    # }}


  -ºUPLOAD IMAGEº 
   ºALT1: UPLOAD TO PRIVATE REGISTRY:º      │ ºALT2: UPLOAD TO DOCKER HUB:º
    $ cat push_image_to_private_registry.sh │  $ cat push_image_to_dockerhub_registry.sh
    #!/bin/bash                             │  #!/bin/bash
    set -e # ← stop on first error          │  set -e # ← stop on first error
    .BºENVIRONMENTº                         │  .BºENVIRONMENTº
                                            │  
    sudo dockerºloginº\                     │  sudo dockerºloginº\
       -u ${LOGIN_USER} \                   │     -u ${LOGIN_USER} \
       -p ${SESSION_TOKEN} \                │ 
       ${REGISTRY}                          │ 
                                            │  
    sudo dockerºpushº \                     │  sudo dockerºpushº \
      ${REGISTRY}/${USER}/${IMG_NAME}:${IMG_VER} │    ${USER}/${IMG_NAME}:${IMG_VER}


  -ºDOWNLOAD IMAGEº 
   ºALT1: DOWNLOAD FROM PRIVATE REGISTRY:º  │ ºALT2: DOWNLOAD FROM DOCKER HUB:º
   $ docker pull \                          │ $ docker pull \
     ${REGISTRY}/${USER}/${IMG_NAME}:${IMG_VER} │   ${USER}/${IMG_NAME}:${IMG_VER}
Build image
72.7 MB layer ←→ FROM registry.redhat.io/ubi7/ubi              Put the most frequently changed layer
40.0 MB layer ←→ COPY target/dependencies /app/dependencies    at the bottom of the layer "stack", so
 9.0 MB layer ←→ COPY target/resources    /app/resources       that when pushing a new image version
 0.5 MB layer ←→ COPY target/classes      /app/classes       ← only that layer needs to be uploaded.
                                                               The most frequently changed layer is
                                                               usually also the smallest one.
                 ENTRYPOINT java -cp \
                   /app/dependencies/*:/app/resources:/app/classes \
                   my.app.Main

$ docker build \
   --build-arg http_proxy=http://...:8080 \
   --build-arg https_proxy=https://..:8080 \
   -t figlet .

$ cat ./Dockerfile
FROM ubuntu

RUN apt-get update
RUN apt-get install -y figlet   # Install figlet

ENTRYPOINT ["figlet", "-f", "script"]

Note: Unless you tell Docker otherwise, it will do as little work as possible when
building an image. It caches the result of each build step of a Dockerfile that
it has executed before and reuses that result for each new build.
RºWARN:º
   If a new version of the base image you’re using becomes available that
   conflicts with your app, you won’t notice it when running tests in
   a container built upon the older, cached version of the base image.
 BºYou can force the build to look for newer versions of the base image with the "--pull" flagº.
   Because new base images are only published once in a while, it’s not really
   wasteful to use this flag all the time when building images.
   (--no-cache can also be useful)
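
For example (same figlet image as above):
$ docker build --pull --no-cache -t figlet .
               └─┬──┘ └───┬────┘
                 │        └ ignore all cached layers
                 └ check the registry for a newer base image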



  Image tags
Adding a tag to an image essentially adds an alias.
A tag consists of:
    'registry_server'/'user_name'/'image_name':'tag'
    ^^^^^^^^^^^^^^^^^
    default one if not
    indicated

Tag image:
  $ docker tag jdeiviz/clock jdeiviz/clock:1.0
Show image
change history
   $ docker history jdeiviz/clock:1.0
Commit image
modifications
(Discouraged most of the time; modify the Dockerfile instead)
host-mach $ docker run -it ubuntu bash     # Boot up existing image
container # apt-get install ...            # Apply changes to running instance
host-mach $ docker diff $(docker ps -lq)   # Show changes done in running container
host-mach $ docker commit $(docker ps -lq) # Commit/Confirm changes (prints new image ID)
host-mach $ docker tag "newImageID" figlet # Tag new image (use the ID printed above)
host-mach $ docker run -it figlet          # Boot new image instance
Future Improvements
@[https://lwn.net/Articles/788282/]
"Rethinking container image delivery"
Container images today are mostly delivered via container registries, 
like Docker Hub for public access, or an internal registry deployment 
within an organization. Crosby explained that Docker images are 
identified with a name, which is basically a pointer to content in a 
given container registry. Every container image comes down to a 
digest, which is a content address hash for the JSON files and layers 
contained in the image. Rather than relying on a centralized registry 
to distribute images, what Crosby and Docker are now thinking about 
is an approach whereby container images can also be accessed and 
shared via some form of peer-to-peer (P2P) transfer approach across 
nodes.

Crosby explained that a registry would still be needed to handle the 
naming of images, but the content address blobs could be transferred 
from one machine to another without the need to directly interact 
with the registry. In the P2P model for image delivery, a registry 
could send a container image to one node, and then users could share 
and distribute images using something like BitTorrent sync. Crosby 
said that, while container development has matured a whole lot since 
2013, there is still work to be done. "From where we've been over the 
past few years to where we are now, I think we'll see a lot of the 
same type of things and we'll still focus on stability and 
performance," he said.
Advanced Image creation
ONBUILD
(base Dockerfile
 for devel)

Add "ONBUILD" to instructions in the base image that must be executed not when
building the base image itself, but when building an image that extends (FROM) it:
| Dockerfile.base                | Dockerfile
| FROM node:7.10-alpine          | FROM node-base
|                                |
| RUN mkdir /src                 | EXPOSE 8000
| WORKDIR /src
|
| ONBUILD ARG NODE_ENV
| ONBUILD ENV NODE_ENV $NODE_ENV
|
| COPY package.json /src
|
| RUN npm install
|
| COPY . /src
|
| CMD [ "npm", "start" ]

  $ docker build -t node-base -f Dockerfile.base . # STEP 1: Compile base image
  $ docker build -t node -f Dockerfile .           # STEP 2: Compile image
  $ docker run -p 8000:8000 -d node


Multi-Stage
- Multi-stage builds allow for final "clean" images that
  contain just the application binaries, without the intermediate
  build/compilation tools needed during the build.
  This allows for much lighter final images.
              ┌───────────────────────────────┬─────────────────────────────────────────┐
              │ "STANDARD" BUILD              │ multi─stage BUILD                       │
┌─────────────┼───────────────────────────────┼─────────────────────────────────────────┤
│Dockerfile   │ Dockerfile                    │ Dockerfile.ms                           │
│             │ FROMºgolang:alpineº           │ FROM ºgolang:alpineº AS Oºbuild─envº    │
│             │ WORKDIR /app                  │ ADD . /src                              │
│             │ ADD . /app                    │ RUN cd /src ; go build ─o app           │
│             │ RUN cd /app ; go build ─o app │                                         │
│             │ ENTRYPOINT ./app              │ FROMºalpineº                            │
│             │                               │ WORKDIR /app                            │
│             │                               │ COPY --from=Oºbuild─envº /src/app /app/ │
│             │                               │ ENTRYPOINT ./app                        │
├─────────────┼───────────────────────────────┼─────────────────────────────────────────┤
│ Compile     │ $ docker build . ─t hello─go  │ $ docker build . ─f Dockerfile.ms       │
│ image       │                               │   ─t hello─goms                         │
├─────────────┼───────────────────────────────┼─────────────────────────────────────────┤
│ Exec        │ $ docker run hello─go         │ $ docker run hello─goms                 │
├─────────────┼───────────────────────────────┼─────────────────────────────────────────┤
│ Check image │ $ docker images               │ $ docker images                         │
│ size        │                               │                                         │
└─────────────┴───────────────────────────────┴─────────────────────────────────────────┘

 Ex 2: Multi-stage NodeJS Build

    FROM node:12-alpine
    
    ADD . / app01_src/
    RUN cd app01_src/ ⅋⅋\            ←ºSTAGE 1: Compileº
        npm set unsafe-perm true ⅋⅋\ ← By default npm changes uid to the one specified 
                                       in user config or 'nobody' by default.
                                       Set to true to exec as root. Needed for
                                       installs.
        npm cache clean --force ⅋⅋\
        npm install ⅋⅋                 npm link in a package folder will create
        npm run build ⅋⅋ \             a symlink in the global folder 
        npm link                     ← {prefix}/lib/node_modules/$package
                                       linking to the package where the npm
                                       link command was executed.
                                        It will also link any bins in the package
                                       to {prefix}/bin/{name}.
    
    FROM node:12-alpine              ←ºSTAGE 2º
    RUN mkdir /opt/app01_src
    WORKDIR /opt/app01_src
    COPYº--from=0º/app01_src/dist  /opt/app01_src/dist
    COPYº--from=0º/app01_src/node_modules  /opt/app01_src/node_modules
    
    ENTRYPOINT ["node", "/opt/app01_src/dist/cli.js"]
  Distroless
- "Distroless" images contain only your application and its runtime dependencies.
(not package managers, shells,...)
Notice: In Kubernetes we can also use init containers with non-light images
        containing a full set of tools (sed, grep, ...) for pre-setup, avoiding
        any need to include them in the final image.

Stable:                      experimental (2019-06)
gcr.io/distroless/static     gcr.io/distroless/python2.7
gcr.io/distroless/base       gcr.io/distroless/python3
gcr.io/distroless/java       gcr.io/distroless/nodejs
gcr.io/distroless/cc         gcr.io/distroless/java/jetty
                             gcr.io/distroless/dotnet

Ex java Multi-stage Dockerfile:
@[https://github.com/GoogleContainerTools/distroless/blob/master/examples/java/Dockerfile]
 ºFROMºopenjdk:11-jdk-slim  ASOºbuild-envº
  ADD . /app/examples
  WORKDIR /app
  RUN javac examples/*.java
  RUN jar cfe main.jar examples.HelloJava examples/*.class

  FROM gcr.io/distroless/java:11
  COPY --from=Oºbuild-envº /app /app
  WORKDIR /app
  CMD ["main.jar"]
rootless Buildah
@[https://opensource.com/article/19/3/tips-tricks-rootless-buildah]
- Building containers in unprivileged environments.
  - Buildah is a tool and library for building Open Container Initiative (OCI)
    container images, complementary to Podman (a tool that enables users to
    manage pods, containers, and container images). Both projects are
    maintained by the containers organization.
  - The article above covers rootless Buildah, including the differences
    between it and Podman.


Build speed
@[https://www.redhat.com/sysadmin/speeding-container-buildah]
- The article addresses a second problem: build speed when using dnf/yum
  commands inside containers.
- Note: the article uses the name dnf (the upstream name) instead of what some
  downstreams use (yum); the comments apply to both dnf and yum.
Appsody
@[https://appsody.dev/docs]
pre-configured application stacks for rapid development
of quality microservice-based applications.

Stacks include language runtimes, frameworks, and any additional
libraries and tools needed for local development, providing 
consistency and best practices.

It consists of:

-ºbase-container-imageº:
  - local development
  - It defines the environment and specifies the stack behavior
    during the development lifecycle of the application.

-ºProject templatesº
  - starting point ('Hello World')
  - They can be customized/shared.

- Stack layout example, my-stack: 
  my-stack
  ├── README.md               # describes stack and how to use it
  ├── stack.yaml              # different attributes and which template to use by default
  ├── image/
  |   ├── config/
  |   |   └── app-deploy.yaml # deploy config using Appsody Operator
  |   ├── project/
  |   |   ├── php/java/...stack artifacts
  |   |   └── Dockerfile      # Final   (run) image ("appsody build")
  │   ├── Dockerfile-stack    # Initial (dev) image and ENV.VARs
  |   └── LICENSE             # for local dev.cycle. It is independent
  └── templates/              # of Dockerfile
      ├── my-template-1/
      |       └── "hello world"
      └── my-template-2/
              └── "complex application"

BºGenerated filesº
  -º".appsody-config.yaml"º. Generated by $º$ appsody initº
    It specifies the stack image used and can be overridden
    for testing purposes to point to a locally built stack.

Bºstability levels:º
  -ºExperimentalº ("proof of concept")
    - Support  appsody init|run|build

  -ºIncubatorº: not production-ready.
    - active contributions and reviews by maintainers
    - Support  appsody init|run|build|test|deploy
    - Limitations described in README.md

  -ºStableº: production-ready.
    - Support all Appsody CLI commands
    - Pass appsody stack 'validate' and 'integration' tests
      on all three operating systems that are supported by Appsody
      without errors. 
      Example:
      - stack must not bind mount individual files as it is
        not supported on Windows.
      - Specify the minimum Appsody, Docker, and Buildah versions
        required in the stack.yaml
      - Support appsody build command with Buildah
      - Prevent creation of local files that cannot be removed 
        (i.e. files owned by root or other users)
      - Specify explicit versions for all required Docker images
      - Do not introduce any version changes to the content
        provided by the parent container images
        (No yum upgrade, apt-get dist-upgrade, npm audit fix).
         - If package contained in the parent image is out of date,
           contact its maintainers or update it individually.
      - Tag stack with major version (at least 1.0.0)
      - Follow Docker best practices, including:
        - Minimise the size of production images 
        - Use the official base images
        - Images must not have any major security vulnerabilities
        - Containers must be run by non-root users
      - Include a detailed README.md, documenting:
        - short description
        - prerequisites/setup required
        - How to access any endpoints provided
        - How users with existing projects can migrate to
          using the stack
        - How users can include additional dependencies 
          needed by their application

BºOfficial Appsody Repositories:º
https://github.com/appsody/stacks/releases/latest/download/stable-index.yaml
https://github.com/appsody/stacks/releases/latest/download/incubator-index.yaml
https://github.com/appsody/stacks/releases/latest/download/experimental-index.yaml

- By default, Appsody comes with the incubator and experimental repositories
  (RºWARNº: not stable by default). Repositories can be managed by running:
  $º$ appsody repo ...º
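
Typical workflow (a minimal sketch; the stack name is just one example
from the incubator repository):
  $º$ appsody repo list                     º ← show configured stack repositories
  $º$ appsody init incubator/nodejs-express º ← scaffold a project from a stack template
  $º$ appsody run                           º ← dev loop inside the stack's dev container
  $º$ appsody build                         º ← produce the final (run) image
  $º$ appsody deploy                        º ← deploy (e.g. via the Appsody Operator)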
alpine how-to
The next image (ethereum/solc, Alpine-based) is just º6 Mbytesº in size:
@[https://hub.docker.com/r/ethereum/solc/dockerfile]
Dockerfile:
    01	FROM alpine
    02	MAINTAINER chriseth 
    03	
    04	RUN \
    05	  apk --no-cache --update add build-base cmake boost-dev git ⅋⅋ \
    06	  sed -i -E -e 's/include ˂sys\/poll.h˃/include ˂poll.h˃/' /usr/include/boost/asio/detail/socket_types.hpp  ⅋⅋ \
    07	  git clone --depth 1 --recursive -b release https://github.com/ethereum/solidity                           ⅋⅋ \
    08	  cd /solidity ⅋⅋ cmake -DCMAKE_BUILD_TYPE=Release -DTESTS=0 -DSTATIC_LINKING=1                             ⅋⅋ \
    09	  cd /solidity ⅋⅋ make solc ⅋⅋ install -s  solc/solc /usr/bin                                               ⅋⅋\
    10	  cd / ⅋⅋ rm -rf solidity                                                                                   ⅋⅋ \
    11	  apk del sed build-base git make cmake gcc g++ musl-dev curl-dev boost-dev                                 ⅋⅋ \
    12	  rm -rf /var/cache/apk/*

Notes:
  - line 07: º--depth 1º: faster cloning (just last commit)
  - line 07: the cloned repo contains the following º.dockerignoreº:
    01 # out-of-tree builds usually go here. This helps improving performance of uploading
    02 # the build context to the docker image build server
    03*/build*
    04
    05 # in-tree builds
    06*/deps*
TODO Classify
Troubleshooting
Bº/var/lib/docker/devicemapper/devicemapper/data consumes too much spaceº
$º$ sudo du -sch /var/lib/docker/devicemapper/devicemapper/dataº
$º14G     /var/lib/docker/devicemapper/devicemapper/data       º
[REF@StackOverflow]

BºDNS works on host, fails on containers:º
  Try launching with the --network host flag. Ex.:
  ...
  DOCKER_OPTS="${DOCKER_OPTS} º--network hostº"
  SCRIPT="wget https://repo.maven.apache.org/maven2" # ← DNS can fail with bridge
  echo "${SCRIPT}" | docker run ${DOCKER_OPTS} ...

BºInspecting Linux namespaces of running containerº
  Use nsenter (Bºutil-linuxº package) to "enter" into the
  container (network, filesystem, IPC, ...) namespace.

  $ cat enterNetworkNamespace.sh
  #!/bin/bash
  
  # REF: man nsenter
  # Run shell with network namespace of container.
  # Allows using ping, ss/netstat, wget, traceroute, ...
  # in the context of the container.
  # Useful to check that the network setup is the appropriate one.
  CONT_PID=$( sudo docker inspect -f '{{.State.Pid}}' $1 )
  shift 1
  sudoºnsenterº-t ${CONT_PID}º-nº "$@"
                              ^^
                         Use network namespace of container
                         (any remaining args are run inside it; default: a shell)

  Ex. Usage:
  $ ./enterNetworkNamespace.sh myWebContainer01
  $ netstat -ntlp  
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State      
  tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN     
   
  * netstat only needs to be installed on the host (not inside the container).
Live Restore
@[https://docs.docker.com/config/containers/live-restore/]
Keep containers alive during daemon downtime
weave
https://github.com/weaveworks/weave
Weaveworks is the company that delivers the most productive way for 
developers to connect, observe and control Docker containers.

This repository contains Weave Net, the first product developed by 
Weaveworks, with over 8 million downloads to date. Weave Net enables 
you to get started with Docker clusters and portable apps in a 
fraction of the time required by other solutions.

- Weave Net
  - Quickly, easily, and securely network and cluster containers 
    across any environment. Whether on premises, in the cloud, or hybrid, 
    there’s no code or configuration.
  - Build an ‘invisible infrastructure’
  - powerful cloud native networking toolkit. It creates a virtual network
    that connects Docker containers across multiple hosts and enables their 
    automatic discovery. Set up subsystems and sub-projects that provide
    DNS, IPAM, a distributed virtual firewall and more.

- Weave Scope:
  - Understand your application quickly by seeing it in a real time 
    interactive display. Pick open source or cloud hosted options.
  - Zero configuration or integration required — just launch and go.
  - automatically detects processes, containers, hosts.
    No kernel modules, agents, special libraries or coding.
  - Seamless integration with Docker, Kubernetes, DCOS and AWS ECS.

- Cortex: horizontally scalable, highly available, multi-tenant, 
  long term storage for Prometheus.

- Flux:
  - Flux is the operator that Bºmakes GitOps happen in your clusterº.
    It ensures that the cluster config matches the one in git and
    automates your deployments.
  - continuous delivery of container images, using version control
    for each step to ensure deployment is reproducible, 
    auditable and revertible. Deploy code as fast as your team creates 
    it, confident that you can easily revert if required.
  
    Learn more about GitOps. 
  @[https://www.weave.works/technologies/gitops/]
Clair
@[https://coreos.com/clair/docs/latest/]
open source project for the static analysis of vulnerabilities in 
appc and docker containers.

Vulnerability data is continuously imported from a known set of sources and
correlated with the indexed contents of container images in order to produce
lists of vulnerabilities that threaten a container. When vulnerability data
changes upstream, the previous state and new state of the vulnerability along
with the images they affect can be sent via webhook to a configured endpoint.
All major components can be customized programmatically at compile-time
without forking the project.
Skopeo
@[https://www.redhat.com/en/blog/skopeo-10-released]

Skopeo is a tool for moving container images between different types 
of container storages.  It allows you to copy container images 
between container registries like docker.io, quay.io, and your 
internal container registry or different types of storage on your 
local system. You can copy to a local container/storage repository, 
even directly into a Docker daemon.  

@[https://github.com/containers/skopeo]
skopeo is a command line utility that performs various operations on container images and image repositories.
skopeo does not require the user to be running as root to do most of its operations.
skopeo does not require a daemon to be running to perform its operations.
skopeo can work with OCI images as well as the original Docker v2 images.
Security Tuning
@[https://opensource.com/business/15/3/docker-security-tuning]
LazyDocker
@[https://github.com/jesseduffield/lazydocker]
A simple terminal UI for both docker and docker-compose, written in 
Go with the gocui library.
Convoy (Volume Driver for backups)
@[https://rancher.com/introducing-convoy-a-docker-volume-driver-for-backup-and-recovery-of-persistent-data/]
Introducing Convoy a Docker Storage Driver for Backup and Recovery of Volumes
Podman (IBM/RedHat)
Podman
- No system daemon required
- rootless containers 
- Podman is set to be the default container engine for the single-node
  use case in Red Hat Enterprise Linux 8.
  (CRI-O for OpenShift clusters)

- easy to use and intuitive.
  - Most users can simply alias Docker to Podman (alias docker=podman) 

-$º$ podman generate kubeº creates a Pod that can then be exported as Kubernetes-compatible YAML. 

- enables users to run different containers in different user namespaces


- Runs at native Linux speeds.
  (no daemon getting in the way of handling client/server requests)


- Uses an OCI-compliant container runtime (runc, crun, runv, etc.)
  to interface with the OS.

- Podman  libpod library manages container ecosystem:
  - pods.
  - containers.
  - container images (pulling, tagging, ...)
  - container volumes.


Introduction

$º$ podman search busybox                             º
→ INDEX       NAME                          DESCRIPTION             STARS  OFFICIAL AUTOMATED
→ docker.io   docker.io/library/busybox     Busybox base image.     1882   [OK]
→ docker.io   docker.io/radial/busyboxplus  Full-chain, Internet... 30     [OK]
→ ...
$º$ podman run -it docker.io/library/busybox         º
$º/ #                                                º

$º$ URL="https://raw.githubusercontent.com/nginxinc/docker-nginx"º 
$º$ URL="${URL}/594ce7a8bc26c85af88495ac94d5cd0096b306f7/       "º 
$º$ URL="${URL}/mainline/buster/Dockerfile                      "º
$º$ podman build -t nginx ${URL}                                 º ← build Nginx web server using 
                    └─┬─┘                                            official Nginx Dockerfile
                      └────────┐
                             ┌─┴─┐
$º$ podman run -d -p 8080:80 nginx                               º ← run new image from local cache
                     └─┬─┘└┘
                       │   ^Port Declared @ Dockerfile
                 Effective
                 (Real)port 
                   

- To make it public, push it to any registry compatible with the
BºOpen Containers Initiative (OCI) formatº. The options are:
  - Private registries
  - Public registries:
    - quay.io
    - docker.io

$º$ podman login quay.io                            º ← Login into quay.io
$º$ podman tag localhost/nginx quay.io/${USER}/nginxº ← re-tag the image
$º$ podman push quay.io/${USER}/nginx               º ← push the image
→ Getting image source signatures
→ Copying blob 38c40d6c2c85 done
→ ..
→ Writing manifest to image destination
→ Copying config 7f3589c0b8 done
→ Writing manifest to image destination
→ Storing signatures

$º$ podman inspect quay.io/${USER}/nginx            º ← Inspect image
→ [
→     {
→         "Id": "7f3589c0b8849a9e1ff52ceb0fcea2390e2731db9d1a7358c2f5fad216a48263",
→         "Digest": "sha256:7822b5ba4c2eaabdd0ff3812277cfafa8a25527d1e234be028ed381a43ad5498",
→         "RepoTags": [
→             "quay.io/USERNAME/nginx:latest",
→ ...
Podman commands
@[https://podman.readthedocs.io/en/latest/Commands.html]
BºImage Management:º
  build        Build an image using instructions from Containerfiles
  commit       Create new image based on the changed container
  history      Show history of a specified image
  image        
  └ build   Build an image using instructions from Containerfiles
    exists  Check if an image exists in local storage
    history Show history of a specified image
    prune   Remove unused images
    rm      Removes one or more images from local storage
    sign    Sign an image
    tag     Add an additional name to a local image
    tree    Prints layer hierarchy of an image in a tree format
    trust   Manage container image trust policy

  images       List images in local storage  ( == image list)
  inspect      Display the configuration of a container or image ( == image inspect)
  pull         Pull an image from a registry  (== image pull)
  push         Push an image to a specified destination (== image push)
  rmi          Removes one or more images from local storage
  search       Search registry for image
  tag          Add an additional name to a local image

BºImage Archive/Backups:º
  import       Import a tarball to create a filesystem image (== image import)
  load         Load an image from container archive ( == image load)
  save         Save image to an archive ( == image save)

BºPod Control:º
  attach       Attach to a running container ( == container attach)
  container    Manage containers
  └ cleanup    Cleanup network and mountpoints of one or more containers
    commit     Create new image based on the changed container
    exists     Check if a container exists in local storage
    inspect    Display the configuration of a container or image
    list       List containers
    prune      Remove all stopped containers
    runlabel   Execute the command described by an image label

BºPod Checkpoint/Live Migration:º
  container checkpoint Checkpoints one or more containers
  container restore    Restores one or more containers from a checkpoint

  $º$ podman container checkpoint $container_id\ º← Checkpoint and prepareºmigration archiveº
  $º    -e /tmp/checkpoint.tar.gz                º
  $º$ podman container restore \                 º← Restore from archive at new server
  $º  -i /tmp/checkpoint.tar.gz                  º

  create       Create but do not start a container ( == container create)
  events       Show podman events
  exec         Run a process in a running container ( == container exec)
  healthcheck  Manage Healthcheck
  info         Display podman system information
  init         Initialize one or more containers ( == container init)
  kill         Kill one or more running containers with a specific signal ( == container kill)
  login        Login to a container registry
  logout       Logout of a container registry
  logs         Fetch the logs of a container ( == container logs)
  network      Manage Networks
  pause        Pause all the processes in one or more containers ( == container pause)
  play         Play a pod
  pod          Manage pods
  port         List port mappings or a specific mapping for the container ( == container port)
  ps           List containers
  restart      Restart one or more containers ( == container restart)
  rm           Remove one or more containers ( == container rm)
  run          Run a command in a new container ( == container run)
  start        Start one or more containers ( == container start)
  stats        Display a live stream of container resource usage statistics (== container stats)
  stop         Stop one or more containers ( == container stop)
  system       Manage podman
  top          Display the running processes of a container ( == container top)
  unpause      Unpause the processes in one or more containers ( == container unpause)
  unshare      Run a command in a modified user namespace
  version      Display the Podman Version Information
  volume       Manage volumes
  wait         Block on one or more containers ( == container wait)

BºPod Control: File systemº
  cp           Copy files/folders container ←→ filesystem (== container cp)
  diff         Inspect changes on container’s file systems ( == container diff)
  export       Export container’s filesystem contents as a tar archive ( ==  container export )
  mount        Mount a working container’s root filesystem  ( == container mount)
  umount       Unmounts working container’s root filesystem ( == container mount)


BºPod Integrationº
  generate     Generated structured data 
    kube       Generate Kubernetes pod YAML from a container or pod
    systemd    Generate a BºSystemD unit fileº for a Podman container
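
  Example round-trip to Kubernetes (a minimal sketch; container name is a placeholder):
  $º$ podman run -d --name web -p 8080:80 nginx º ← something to export
  $º$ podman generate kube web > web-pod.yaml   º ← Kubernetes-compatible Pod YAML
  $º$ podman play kube web-pod.yaml             º ← recreate it from the YAML (or kubectl apply)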
SystemD Integration
https://www.redhat.com/sysadmin/improved-systemd-podman
- auto-updates help to make managing containers even more straightforward.

- SystemD is used in Linux to manage services (background long-running jobs listening for client requests) and their dependencies.

BºPodman running SystemD inside a containerº
  └ /run               ← tmpfs
    /run/lock          ← tmpfs
    /tmp               ← tmpfs 
    /var/log/journald  ← tmpfs
    /sys/fs/cgroup      (configuration)(depends also on system running cgroup V1/V2 mode).
    └───────┬───────┘
     Podman automatically mounts the following file systems in the container when:
     - entry point of the container is either º/usr/sbin/init or /usr/sbin/systemdº
     -º--systemd=alwaysºflag is used 

BºPodman running inside SystemD servicesº
  - SystemD needs to know which processes are part of a service so it 
    can manage them, track their health, and properly handle dependencies.
  - This is problematic in Docker (according to Red Hat, a Docker rival) due to
    Docker's client-server architecture:
    - It's practically impossible to track container processes, and 
      pull-requests to improve the situation have been rejected.
    - Podman implements a more traditional architecture by forking processes:
      - Each container is a descendant process of Podman.
      - Features like sd-notify and socket activation make this integration
        even more important.
        - sd-notify service manager allows a service to notify SystemD that
          the process is ready to receive connections
        - socket activation permits SystemD to launch the containerized process
          only when a packet arrives from a monitored socket.
          
    - Compatible with audit subsystem (track records user actions).
      - the forking architecture allows systemd to track processes in a
        container and hence opens the door for seamless integration of
        Podman and systemd.

  $º$ podman generate systemd --new $containerº  ← Auto-generate containerized systemd units:
                              └─┬─┘
                              Without --new the generated unit references an existing
                              container and is tied to the host that created it.

     - Pods are also supported since Podman 2.0:
       container units that are part of a pod can now be restarted,
       which is especially helpful for auto-updates.

BºPodman auto-update  (1.9+)º
  - To use auto-updates:
    - containers must be created with :
      --label "io.containers.autoupdate=image"

    - run in a SystemD unit generated by
      $ podman generate systemd --new.

  $º$ podman auto-update º  ← Podman first looks up running containers with the
                              "io.containers.autoupdate" label set to "image" and then
                              queries the container registry for newer images.
                            $ºIf a newer image is found, Podman restarts the       º
                            $ºcorresponding SystemD unit to stop the old container º
                            $ºand create a new one with the updated image.         º

   (still marked as experimental while  collecting user feedback)
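
  Minimal auto-update sketch (rootless user service; image and name are placeholders):
  $º$ podman create --name web \                          º
  $º    --label "io.containers.autoupdate=image" \        º
  $º    -p 8080:80 docker.io/library/nginx:latest         º ← fully-qualified image, so the
                                                              registry can be queried later
  $º$ podman generate systemd --new --name web \          º
  $º    > ~/.config/systemd/user/container-web.service    º
  $º$ systemctl --user daemon-reload                      º
  $º$ systemctl --user enable --now container-web.service º
  $º$ podman auto-update                                  º ← pulls newer image (if any) and
                                                              restarts the unit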

Setup Insec. HTTP registry
@[https://www.projectatomic.io/blog/2018/05/podman-tls/]

   /etc/containers/registries.conf. 
   
   # This is a system-wide configuration file used to
   # keep track of registries for various container backends.
   # It adheres to TOML format and does not support recursive
   # lists of registries.
   
   [registries.search]
   registries = ['docker.io', 'registry.fedoraproject.org', 'registry.access.redhat.com']
   
   # If you need to access insecure registries, add the registry's fully-qualified name.
   # An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
   [registries.insecure]
 Bºregistries = ['localhost:5000']º
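
   Hedged example: run a throw-away local registry and push to it over plain HTTP
   (with the [registries.insecure] entry above, --tls-verify=false is usually redundant):
   $º$ podman run -d -p 5000:5000 --name registry docker.io/library/registry:2 º
   $º$ podman pull docker.io/library/alpine                                    º
   $º$ podman tag  docker.io/library/alpine localhost:5000/alpine:test         º
   $º$ podman push --tls-verify=false localhost:5000/alpine:test               º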
Security
Protecting against Doki malware
https://containerjournal.com/topics/container-security/protecting-containers-against-doki-malware/
2+Millions images with Critical Sec.Holes
https://www.infoq.com/news/2020/12/dockerhub-image-vulnerabilities/
OpenSCAP: Scanning Vulnerabilities
- Scanning Containers for Vulnerabilities on RHEL 8.2 With OpenSCAP and Podman:
@[https://www.youtube.com/watch?v=nQmIcK1vvYc]
Un-ordered
Container Networking
@[https://jvns.ca/blog/2016/12/22/container-networking/]
By Julia Evans

""" There are a lot of different ways you can network containers 
  together, and the documentation on the internet about how it works is 
  often pretty bad. I got really confused about all of this, so I'm 
  going to try to explain what it all is in laymen's terms. """

Bºwhat even is container networking?º

  When running a program in a container, you have two main options:
  - run app in host network namespace. (normal networking)
    "host_ip":"app_port"
  - run the program in its ownºnetwork namespaceº:
  RºIt turns out that this problem of how to connect º
  Rºtwo programs in containers together has a ton of º
  Rºdifferent solutions. º

- "every container gets an IP".  (k8s requirement)
    "172.16.0.1:8080" // Tomcat container, app 1
    "172.16.0.2:5432" // PostgreSQL container, app 1
    "172.17.0.1:8080" // Tomcat container, app 2
    ...
    └───────┬───────┘
    any other program in the cluster will target those IP:port
    Instead of single-IP:"many ports" we have "many IPs":"some ports"

   Q: How to get many IPs on a single host?
    - Host IP: 172.9.9.9
    - Container private IP: 10.4.4.4
    - Other hosts must learn that the 10.4.4.0/24 range lives behind 172.9.9.9:
    - Alt1: Configure Linux routing tables (on every other host)
    $º$ sudo ip route add 10.4.4.0/24 via 172.9.9.9 dev eth0º
    - Alt2: Use AWS VPC Route tables
    - Alt3: Use Azure ...

BºEncapsulating to other networks:º

  LOCAL NETWORK     REMOTE NETWORK
                    (encapsulation)
  IP: 10.4.4.4      IP: 172.9.9.9
  TCP stuff         (extra wrapper stuff)
  HTTP stuff        IP: 10.4.4.4
                    TCP stuff
                    HTTP stuff
  
  - 2 different ways of doing encapsulation: 
    - "ip-in-ip": add extra IP-header on top "current" IP header.
      MAC:  11:11:11:11:11:11
      IP: 172.9.9.9
      IP: 10.4.4.4
      TCP stuff
      HTTP stuff
      Ex:
      $º$ sudo ip tunnel add mytun mode ipip \        º ← create tunnel "mytun"
      $º     remote 172.9.9.9 local 10.4.4.4 ttl 255  º
      $º$ sudo ifconfig mytun 10.42.1.1               º ← assign an IP to the tunnel interface
      $º$ sudo route add -net 10.42.2.0/24 dev mytun  º ← route the remote range through the tunnel
      $º$ ip route list                               º ← verify the routing table
  
  
    - "vxlan": take whole packet
       (including the MAC address) and wrap
       it inside a UDP packet. Ex:
       MAC address: 11:11:11:11:11:11
       IP: 172.9.9.9
       UDP port 8472 (the "vxlan port")
       MAC address: ab:cd:ef:12:34:56
       IP: 10.4.4.4
       TCP port 80
       HTTP stuff

  -BºEvery container networking "thing" runs some kind of daemon programº
   Bºon every box which is in charge of adding routes to the route tableº
   Bº(automatic route configuration).º
     - Alt1: routes are stored in an etcd cluster, and the daemon talks to
             the etcd cluster to figure out which routes to set.
     - Alt2: the daemons use the BGP protocol to gossip to each other about
             routes, and a daemon (BIRD) listens for BGP messages on every box.

BºQ: How does that packet actually end up getting to your container program?º
  A: bridge networking
  - Docker/... creates fake (virtual) network interfaces for every 
    single one of your containers with a given IP address.
  - The fake interfaces are bridged to a real network interface on the host
    (see the commands below to inspect this).
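
  Quick way to see this on a host (interface and network names vary by runtime and version):
  $º$ ip link show type bridge       º ← e.g. docker0 / cni-podman0
  $º$ ip link show type veth         º ← one veth endpoint per running container
  $º$ podman network inspect podman  º ← details of Podman's default bridge network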

BºFlannel:º
  - Supports vxlan (encapsulate all packets) and
    host-gw (just set route table entries, no encapsulation)
  - The daemon that sets the routes gets them ºfrom an etcd clusterº.

BºCalico:º
  - Supports ip-in-ip encapsulation and
    "regular" mode, (just set route table entries, no encaps.)
  - The daemon that sets the routes gets them ºusing BGP messagesº
    from other hosts. (etcd is  not used for distributing routes).
Packaging Apps
@[https://www.infoq.com/articles/metaparticle-pulumi-ballerina/]
Packaging Applications for Docker and Kubernetes:
Metaparticle vs Pulumi vs Ballerina
https://v1-0.ballerina.io/learn/by-example/
CRI-O
CRI-O: container runtime for K8s / OpenShift.
OCI compliant Container Runtime Engines:
- Docker
- CRI-O
- containerd
Kaniko
☞ NOTE: To build ºJAVA imagesº see also @[/JAVA/java_map.html?query=jib]

@[https://github.com/GoogleContainerTools/kaniko]
- tool to build container images inside an unprivileged container or
  Kubernetes cluster.
- Although kaniko builds the image from a supplied Dockerfile, it does
  not depend on a Docker daemon, and instead executes each command completely
  in userspace and snapshots the resulting filesystem changes.
- The majority of Dockerfile commands can be executed with kaniko, with
  the current exception of SHELL, HEALTHCHECK, STOPSIGNAL, and ARG.
  Multi-Stage Dockerfiles are also unsupported currently. The kaniko team
  have stated that work is underway on both of these current limitations.
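
  Minimal sketch of an unprivileged build run from a container engine (the destination
  repository is hypothetical; drop --no-push and add --destination=˂repo:tag˃ to push):
  $º$ podman run --rm -v "$PWD":/workspace \          º
  $º    gcr.io/kaniko-project/executor:latest \       º
  $º    --dockerfile=/workspace/Dockerfile \          º
  $º    --context=dir:///workspace \                  º
  $º    --no-push                                     º ← build only, don't push anywhere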
Testcontainers
@[https://www.testcontainers.org/#who-is-using-testcontainers]
- Testcontainers is a Java library that supports JUnit tests, 
  providing lightweight, throwaway instances of common databases, 
  Selenium web browsers, or anything else that can run in a Docker 
  container.

- Testcontainers make the following kinds of tests easier:

  - Data access layer integration tests: use a containerized instance 
    of a MySQL, PostgreSQL or Oracle database to test your data access 
    layer code for complete compatibility, but without requiring complex 
    setup on developers' machines and safe in the knowledge that your 
    tests will always start with a known DB state. Any other database 
    type that can be containerized can also be used.
  - Application integration tests: for running your application in a 
    short-lived test mode with dependencies, such as databases, message 
    queues or web servers.
  - UI/Acceptance tests: use containerized web browsers, compatible 
    with Selenium, for conducting automated UI tests. Each test can get a 
    fresh instance of the browser, with no browser state, plugin 
    variations or automated browser upgrades to worry about. And you get 
    a video recording of each test session, or just each session where 
    tests failed.
  - Much more! 
    Testing Modules
    - Databases
      JDBC, R2DBC, Cassandra, CockroachDB, Couchbase, Clickhouse, DB2, Dynalite, InfluxDB, MariaDB, MongoDB, 
      MS SQL Server, MySQL, Neo4j, Oracle-XE, OrientDB, Postgres, Presto

    - Docker Compose Module
    - Elasticsearch container
    - Kafka Containers
    - Localstack Module
    - Mockserver Module
    - Nginx Module
    - Apache Pulsar Module
    - RabbitMQ Module
    - Solr Container
    - Toxiproxy Module
    - Hashicorp Vault Module
    - Webdriver Containers


Who is using Testcontainers?
-   ZeroTurnaround - Testing of the Java Agents, micro-services, Selenium browser automation
-   Zipkin - MySQL and Cassandra testing
-   Apache Gora - CouchDB testing
-   Apache James - LDAP and Cassandra integration testing
-   StreamSets - LDAP, MySQL Vault, MongoDB, Redis integration testing
-   Playtika - Kafka, Couchbase, MariaDB, Redis, Neo4j, Aerospike, MemSQL
-   JetBrains - Testing of the TeamCity plugin for HashiCorp Vault
-   Plumbr - Integration testing of data processing pipeline micro-services
-   Streamlio - Integration and Chaos Testing of our fast data platform based on Apache Pulsar, Apache BookKeeper and Apache Heron.
-   Spring Session - Redis, PostgreSQL, MySQL and MariaDB integration testing
-   Apache Camel - Testing Camel against native services such as Consul, Etcd and so on
-   Infinispan - Testing the Infinispan Server as well as integration tests with databases, LDAP and KeyCloak
-   Instana - Testing agents and stream processing backends
-   eBay Marketing - Testing for MySQL, Cassandra, Redis, Couchbase, Kafka, etc.
-   Skyscanner - Integration testing against HTTP service mocks and various data stores
-   Neo4j-OGM - Testing new, reactive client implementations
-   Lightbend - Testing Alpakka Kafka and support in Alpakka Kafka Testkit
-   Zalando SE - Testing core business services
-   Europace AG - Integration testing for databases and micro services
-   Micronaut Data - Testing of Micronaut Data JDBC, a database access toolkit
-   Vert.x SQL Client - Testing with PostgreSQL, MySQL, MariaDB, SQL Server, etc.
-   JHipster - Couchbase and Cassandra integration testing
-   wescale - Integration testing against HTTP service mocks and various data stores
-   Marquez - PostgreSQL integration testing
-   Transferwise - Integration testing for different RDBMS, kafka and micro services
-   XWiki - Testing XWiki under all supported configurations
-   Apache SkyWalking - End-to-end testing of the Apache SkyWalking, 
    and plugin tests of its subproject, Apache SkyWalking Python, and of 
    its eco-system built by the community, like SkyAPM NodeJS Agent
-   jOOQ - Integration testing all of jOOQ with a variety of RDBMS


docker-compose: dev vs pro
https://stackoverflow.com/questions/60604539/how-to-use-docker-in-the-development-phase-of-a-devops-life-cycle/60780840#60780840

Modify your Compose file for production: keep a base Compose file plus per-environment
override files (see the sketch below).
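
  The usual pattern (file names are conventional; docker-compose.override.yml is merged
  automatically during development, the production file is passed explicitly):
  $º$ docker-compose up -d                            º ← docker-compose.yml + docker-compose.override.yml
  $º$ docker-compose -f docker-compose.yml \          º
  $º                 -f docker-compose.prod.yml up -d º ← base file + production overrides only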
CRIU.org: Container Live Migration
@[https://criu.org/Main_Page]

CRIU: project to implement checkpoint/restore functionality for Linux.

Checkpoint/Restore In Userspace, or CRIU (pronounced kree-oo, IPA:
/krɪʊ/, Russian: криу), is Linux software. It can freeze a
running container (or an individual application) and checkpoint its
state to disk. The data saved can be used to restore the application
and run it exactly as it was at the time of the freeze. Using
this functionality, application or container live migration,
snapshots, remote debugging, and many other things become possible.

Used for example to bootstrap JVMs in milliseconds (vs seconds):
@[/JAVA/java_map.html#?jvm_app_checkpoint]
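
Podman exposes CRIU through its CLI. A hedged sketch (requires criu installed and, in most
setups, root; the container name and file path are illustrative):
  $º$ podman container checkpoint web01 --export=/tmp/web01.tar.gz º ← freeze + dump state to a file
  $º$ podman container restore --import=/tmp/web01.tar.gz          º ← recreate it, possibly on another host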
Avoid huge log dumps
https://devops.stackexchange.com/questions/12944/any-way-to-limit-docker-logs-output-by-default/12970#12970

- Problem Context:
  - A container outputs huge logs (maybe gigabytes).
  - $ docker logs 'container' can knock down the host when that output is processed.

- To limit docker logs, specify limits in docker daemon's config file like:
  /etc/docker/daemon.json
  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "10m",
      "max-file": "3" 
    }
  }
  (then restart docker daemon after edit)
NOTE: maybe ulimit can fix it at global (Linux OS) scope.
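
- The same limits can also be set per container at run time (values are examples):
  $º$ docker run -d --log-driver json-file \               º
  $º    --log-opt max-size=10m --log-opt max-file=3 nginx  º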
ContainerCoreInterceptor 
https://github.com/AmadeusITGroup/ContainerCoreInterceptor 
- Core_interceptor can be used to handle core dumps in a dockerized
  environment. It listens on the local Docker daemon socket for events.
  When it receives a "die" event, it checks whether the dead container
  produced any core dump or Java heap dump.
KVM Kata containers
@[https://katacontainers.io/]
- Security: Runs in a dedicated kernel, providing isolation of
  network, I/O and memory, and can utilize hardware-enforced isolation
  with virtualization (VT) extensions.
- Compatibility: Supports industry standards including OCI container 
  format, Kubernetes CRI interface, as well as legacy virtualization 
  technologies.
- Performance: Delivers consistent performance as standard Linux 
  containers; increased isolation without the performance tax of 
  standard virtual machines.
- Simplicity: Eliminates the requirement for nesting containers 
  inside full blown virtual machines; standard interfaces make it easy 
  to plug in and get started. 
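
  Once the Kata runtime is installed and registered with the engine, selecting it is a
  one-flag change (the runtime name may be "kata" or "kata-runtime" depending on the setup):
  $º$ docker run --rm --runtime=kata-runtime -it alpine uname -r º ← kernel version differs from the host's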
avoid "sudo" docker
$º$ sudo usermod -a -G docker "myUser"º
$º$ newgrp docker                     º ← pick up the new group without re-login
test images in 0.5 secs
@[https://medium.com/@aelsabbahy/tutorial-how-to-test-your-docker-image-in-half-a-second-bbd13e06a4a9]

...When you’re done with this tutorial you’ll have a small YAML 
file that describes your docker image’s desired state. This will 
allow you to replace this:

  $ docker run -p 8080:80 nginx

  with this:

  $ dgoss run -p 8080:80 nginx

- Goss is a YAML based serverspec alternative tool for validating a 
  server’s configuration. It eases the process of writing tests by 
  allowing the user to generate tests from the current system state. 
  Once the test suite is written they can be executed, waited-on, or 
  served as a health endpoint.
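
  Typical flow, sketched from the goss/dgoss docs (exact resource syntax is per the goss
  documentation; port 8080:80 and nginx are just the tutorial's example):
  $º$ dgoss edit -p 8080:80 nginx º ← shell inside the container with goss available;
                                      add checks there, e.g. "goss add port tcp:80";
                                      goss.yaml is copied back out when the shell exits
  $º$ dgoss run  -p 8080:80 nginx º ← start the container, run the generated tests, report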